Compare commits

..

1 Commit

Author SHA1 Message Date
stevenhorsman
aa11441c1a workflows: Create workflow to stale issues based on date
The standard stale action is intended to be run regularly with
a date offset, but we want one we can run against a specific
date, in order to run the stale bot against issues created since a particular
release milestone, so we calculate the offset in one step and use it in the next.

At the moment we want to run this to mark issues stale from before 9th October 2022, when Kata 3.0 was released, so default to that date.

Note that the stale action only processes a few issues at a time to avoid rate limiting, which is why we want a cron job: so it can get through
the backlog, and also stale/unstale issues that are commented on.
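The offset computation the message describes can be sketched as a stand-alone shell function (the function name is illustrative; 2022-10-09 is the cut-off date mentioned above):

```shell
# Whole-day age of a fixed cut-off date, as fed to the stale action.
date_to_age() {
    local now cutoff
    now=$(date +%s)
    cutoff=$(date -d "$1" +%s)
    echo $(( (now - cutoff) / 86400 ))
}

date_to_age "2022-10-09"
```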
2026-01-22 11:32:01 +00:00
68 changed files with 565 additions and 1848 deletions

View File

@@ -32,7 +32,6 @@ jobs:
matrix:
vmm:
- qemu
- qemu-runtime-rs
k8s:
- kubeadm
runs-on: arm64-k8s

.github/workflows/stale_issues.yaml (new file, 41 lines)
View File

@@ -0,0 +1,41 @@
name: 'Stale issues with activity before a fixed date'
on:
schedule:
- cron: '0 0 * * *'
workflow_dispatch:
inputs:
date:
description: "Date of stale cut-off. All issues not updated since this date will be marked as stale. Format: YYYY-MM-DD e.g. 2022-10-09"
default: "2022-10-09"
required: false
type: string
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
stale:
name: stale
runs-on: ubuntu-24.04
permissions:
actions: write # Needed to manage caches for state persistence across runs
issues: write # Needed to add/remove labels, post comments, or close issues
steps:
- name: Calculate the age to stale
run: |
echo AGE=$(( ( $(date +%s) - $(date -d "${DATE:-2022-10-09}" +%s) ) / 86400 )) >> "$GITHUB_ENV"
env:
DATE: ${{ inputs.date }}
- name: Run the stale action
uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 # v9.1.0
with:
stale-issue-message: 'This issue has had no activity for at least ${{ env.AGE }} days. Please comment on the issue, or it will be closed in 30 days'
days-before-pr-stale: -1
days-before-pr-close: -1
days-before-issue-stale: ${{ env.AGE }}
days-before-issue-close: 30

Cargo.lock (generated)
View File

@@ -4005,7 +4005,6 @@ version = "0.1.0"
dependencies = [
"anyhow",
"common",
"containerd-shim-protos",
"go-flag",
"logging",
"nix 0.26.4",

View File

@@ -1 +1 @@
3.26.0
3.25.0

View File

@@ -46,12 +46,16 @@ fi
[[ ${SELINUX_PERMISSIVE} == "yes" ]] && oc delete -f "${deployments_dir}/machineconfig_selinux.yaml.in"
# Delete kata-containers
helm uninstall kata-deploy --wait --namespace kube-system
pushd "${katacontainers_repo_dir}/tools/packaging/kata-deploy" || { echo "Failed to pushd to ${katacontainers_repo_dir}/tools/packaging/kata-deploy"; exit 125; }
oc delete -f kata-deploy/base/kata-deploy.yaml
oc -n kube-system wait --timeout=10m --for=delete -l name=kata-deploy pod
oc apply -f kata-cleanup/base/kata-cleanup.yaml
echo "Wait for all related pods to be gone"
( repeats=1; for _ in $(seq 1 600); do
oc get pods -l name="kubelet-kata-cleanup" --no-headers=true -n kube-system 2>&1 | grep "No resources found" -q && ((repeats++)) || repeats=1
[[ "${repeats}" -gt 5 ]] && echo kata-cleanup finished && break
sleep 1
done) || { echo "There are still some kata-cleanup related pods after 600 iterations"; oc get all -n kube-system; exit 1; }
oc delete -f kata-cleanup/base/kata-cleanup.yaml
oc delete -f kata-rbac/base/kata-rbac.yaml
oc delete -f runtimeclasses/kata-runtimeClasses.yaml

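The wait loop above avoids flapping by requiring several consecutive "No resources found" results before declaring the cleanup finished. The debounce pattern in isolation (`probe` is a stand-in for the `oc get pods ... | grep -q "No resources found"` check):

```shell
probe() { true; }  # stand-in for the real cluster check

repeats=1
for _ in $(seq 1 600); do
    if probe; then repeats=$((repeats + 1)); else repeats=1; fi
    # Only after 5 consecutive successful probes do we consider it done.
    [ "${repeats}" -gt 5 ] && { echo "finished"; break; }
done
```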
View File

@@ -51,13 +51,13 @@ apply_kata_deploy() {
oc label --overwrite ns kube-system pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/warn=baseline pod-security.kubernetes.io/audit=baseline
local version chart
version='0.0.0-dev'
version=$(curl -sSL https://api.github.com/repos/kata-containers/kata-containers/releases/latest | jq .tag_name | tr -d '"')
chart="oci://ghcr.io/kata-containers/kata-deploy-charts/kata-deploy"
# Ensure any potential leftovers are cleaned up; this secret is usually left behind after previous failures
oc delete secret sh.helm.release.v1.kata-deploy.v1 -n kube-system || true
echo "Installing kata using helm ${chart} ${version} (sha printed in helm output)"
echo "Installing kata using helm ${chart} ${version}"
helm install kata-deploy --wait --namespace kube-system --set "image.reference=${KATA_DEPLOY_IMAGE%%:*},image.tag=${KATA_DEPLOY_IMAGE##*:}" "${chart}" --version "${version}"
}

View File

@@ -157,16 +157,6 @@ if [[ -z "${CAA_IMAGE}" ]]; then
fi
# Get latest PP image
#
# You can list the CI images by:
# az sig image-version list-community --location "eastus" --public-gallery-name "cocopodvm-d0e4f35f-5530-4b9c-8596-112487cdea85" --gallery-image-definition "podvm_image0" --output table
# or the release images by:
# az sig image-version list-community --location "eastus" --public-gallery-name "cococommunity-42d8482d-92cd-415b-b332-7648bd978eff" --gallery-image-definition "peerpod-podvm-fedora" --output table
# or the release debug images by:
# az sig image-version list-community --location "eastus" --public-gallery-name "cococommunity-42d8482d-92cd-415b-b332-7648bd978eff" --gallery-image-definition "peerpod-podvm-fedora-debug" --output table
#
# Note there are other flavours of the released images, you can list them by:
# az sig image-definition list-community --location "eastus" --public-gallery-name "cococommunity-42d8482d-92cd-415b-b332-7648bd978eff" --output table
if [[ -z "${PP_IMAGE_ID}" ]]; then
SUCCESS_TIME=$(curl -s \
-H "Accept: application/vnd.github+json" \

View File

@@ -125,7 +125,7 @@ If you want to enable SELinux in Permissive mode, add `enforcing=0` to the kerne
Enable full debug as follows:
```bash
$ sudo sed -i -E 's/^(\s*enable_debug\s*=\s*)false/\1true/' /etc/kata-containers/configuration.toml
$ sudo sed -i -e 's/^# *\(enable_debug\).*=.*$/\1 = true/g' /etc/kata-containers/configuration.toml
$ sudo sed -i -e 's/^kernel_params = "\(.*\)"/kernel_params = "\1 agent.log=debug initcall_debug"/g' /etc/kata-containers/configuration.toml
```

View File

@@ -51,7 +51,6 @@ containers started after the VM has been launched.
Users can check to see if the container uses the `devicemapper` block
device as its rootfs by calling `mount(8)` within the container. If
the `devicemapper` block device is used, the root filesystem (`/`)
will be mounted from `/dev/vda`. Users can enable direct mounting of
the underlying block device by setting the runtime
[configuration](README.md#configuration) flag `disable_block_device_use` to
`false`.
will be mounted from `/dev/vda`. Users can disable direct mounting of
the underlying block device through the runtime
[configuration](README.md#configuration).

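Put differently, with a devicemapper-backed rootfs the behaviour is governed by a single hypervisor setting; a hedged configuration fragment (the section name follows the QEMU configuration files elsewhere in this change):

```toml
[hypervisor.qemu]
# false: pass the devicemapper-backed block device straight to the guest
# (the container sees / mounted from /dev/vda);
# true: share the rootfs via the shared filesystem instead.
disable_block_device_use = false
```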
View File

@@ -50,7 +50,7 @@ There are several kinds of Kata configurations and they are listed below.
| `io.katacontainers.config.hypervisor.default_max_vcpus` | uint32| the maximum number of vCPUs allocated for the VM by the hypervisor |
| `io.katacontainers.config.hypervisor.default_memory` | uint32| the memory assigned for a VM by the hypervisor in `MiB` |
| `io.katacontainers.config.hypervisor.default_vcpus` | float32| the default vCPUs assigned for a VM by the hypervisor |
| `io.katacontainers.config.hypervisor.disable_block_device_use` | `boolean` | disable hotplugging host block devices to guest VMs for container rootfs |
| `io.katacontainers.config.hypervisor.disable_block_device_use` | `boolean` | disallow a block device from being used |
| `io.katacontainers.config.hypervisor.disable_image_nvdimm` | `boolean` | specify if a `nvdimm` device should be used as rootfs for the guest (QEMU) |
| `io.katacontainers.config.hypervisor.disable_vhost_net` | `boolean` | specify if `vhost-net` is not available on the host |
| `io.katacontainers.config.hypervisor.enable_hugepages` | `boolean` | if the memory should be `pre-allocated` from huge pages |

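These `io.katacontainers.config.*` keys are applied as per-pod annotations; a hedged example pod spec (the image and runtime class name are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kata-annotations-demo
  annotations:
    io.katacontainers.config.hypervisor.default_memory: "2048"
    io.katacontainers.config.hypervisor.disable_block_device_use: "true"
spec:
  runtimeClassName: kata
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
```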
View File

@@ -752,6 +752,15 @@ fn parse_mount(m: &Mount) -> (MsFlags, MsFlags, String) {
(flags, pgflags, data.join(","))
}
// This function constructs a canonicalized path by combining the `rootfs` and `unsafe_path` elements.
// The resulting path is guaranteed to be ("below" / "in a directory under") the `rootfs` directory.
//
// Parameters:
//
// - `rootfs` is the absolute path to the root of the container's root filesystem directory.
// - `unsafe_path` is a path inside the container. It is unsafe since it may try to "escape" from the container's
//   rootfs by using one or more "../" path elements, or may be a symlink to such a path.
fn mount_from(
cfd_log: RawFd,
m: &Mount,

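Under the assumptions in the comment above, the "stay below rootfs" guarantee can be illustrated with a purely lexical sketch (the real implementation also has to resolve symlinks, which this toy version does not):

```rust
use std::path::{Component, Path, PathBuf};

// Join `unsafe_path` onto `rootfs`, resolving ".." lexically so the
// result can never climb above `rootfs`. Illustrative only.
fn scoped_join(rootfs: &Path, unsafe_path: &Path) -> PathBuf {
    let mut out = rootfs.to_path_buf();
    let mut depth = 0usize;
    for comp in unsafe_path.components() {
        match comp {
            Component::Normal(p) => {
                out.push(p);
                depth += 1;
            }
            // A ".." that stays inside the rootfs pops one element ...
            Component::ParentDir if depth > 0 => {
                out.pop();
                depth -= 1;
            }
            // ... while RootDir, CurDir, and any excess ".." are dropped.
            _ => {}
        }
    }
    out
}

fn main() {
    let r = Path::new("/run/kata/rootfs");
    assert_eq!(
        scoped_join(r, Path::new("../../etc/passwd")),
        PathBuf::from("/run/kata/rootfs/etc/passwd")
    );
    println!("ok");
}
```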
View File

@@ -4,7 +4,7 @@
// SPDX-License-Identifier: Apache-2.0
//
use std::collections::{BTreeMap, HashMap};
use std::collections::HashMap;
use std::fs;
use std::io::{self, Result};
use std::path::{Path, PathBuf};
@@ -206,8 +206,8 @@ impl TomlConfig {
}
/// Get agent-specific kernel parameters for further Hypervisor config revision
pub fn get_agent_kernel_params(&self) -> Result<BTreeMap<String, String>> {
let mut kv = BTreeMap::new();
pub fn get_agent_kernel_params(&self) -> Result<HashMap<String, String>> {
let mut kv = HashMap::new();
if let Some(cfg) = self.agent.get(&self.runtime.agent_name) {
if cfg.debug {
kv.insert(LOG_LEVEL_OPTION.to_string(), LOG_LEVEL_DEBUG.to_string());

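One plausible reason for choosing `BTreeMap` over `HashMap` in `get_agent_kernel_params` (the two variants visible in the hunk above) is deterministic ordering of the resulting kernel command line; a small sketch with illustrative agent option keys:

```rust
use std::collections::BTreeMap;

fn main() {
    let mut kv = BTreeMap::new();
    kv.insert("agent.trace".to_string(), "true".to_string());
    kv.insert("agent.log".to_string(), "debug".to_string());
    // BTreeMap iterates in sorted key order, so rendering the map into
    // kernel parameters yields the same string on every run, unlike HashMap.
    let rendered = kv
        .iter()
        .map(|(k, v)| format!("{k}={v}"))
        .collect::<Vec<_>>()
        .join(" ");
    assert_eq!(rendered, "agent.log=debug agent.trace=true");
    println!("ok");
}
```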
View File

@@ -22,7 +22,6 @@ cloud-hypervisor = ["runtimes/cloud-hypervisor"]
[dependencies]
anyhow = { workspace = true }
containerd-shim-protos = { workspace = true }
go-flag = { workspace = true }
nix = { workspace = true }
tokio = { workspace = true, features = ["rt", "rt-multi-thread"] }

View File

@@ -130,23 +130,8 @@ FCJAILERPATH = $(FCBINDIR)/$(FCJAILERCMD)
FCVALIDJAILERPATHS = [\"$(FCJAILERPATH)\"]
PKGLIBEXECDIR := $(LIBEXECDIR)/$(PROJECT_DIR)
# EDK2 firmware names per architecture
ifeq ($(ARCH), aarch64)
EDK2_NAME := aavmf
endif
# Set firmware paths from QEMUFW/QEMUFWVOL if defined
FIRMWAREPATH :=
FIRMWAREVOLUMEPATH :=
ifneq (,$(QEMUCMD))
ifneq (,$(QEMUFW))
FIRMWAREPATH := $(PREFIXDEPS)/share/$(EDK2_NAME)/$(QEMUFW)
endif
ifneq (,$(QEMUFWVOL))
FIRMWAREVOLUMEPATH := $(PREFIXDEPS)/share/$(EDK2_NAME)/$(QEMUFWVOL)
endif
endif
ROOTMEASURECONFIG ?= ""
KERNELTDXPARAMS += $(ROOTMEASURECONFIG)
@@ -389,11 +374,6 @@ ifneq (,$(QEMUCMD))
ifeq ($(ARCH), s390x)
VMROOTFSDRIVER_QEMU := virtio-blk-ccw
DEFBLOCKSTORAGEDRIVER_QEMU := virtio-blk-ccw
else ifeq ($(ARCH), aarch64)
# NVDIMM/virtio-pmem has issues on arm64 (cache coherency problems with DAX),
# so we use virtio-blk-pci instead.
VMROOTFSDRIVER_QEMU := virtio-blk-pci
DEFBLOCKSTORAGEDRIVER_QEMU := virtio-scsi
else
VMROOTFSDRIVER_QEMU := virtio-pmem
DEFBLOCKSTORAGEDRIVER_QEMU := virtio-scsi

View File

@@ -4,16 +4,12 @@
# SPDX-License-Identifier: Apache-2.0
#
# ARM 64 settings
MACHINETYPE := virt
MACHINETYPE :=
KERNELPARAMS := cgroup_no_v1=all systemd.unified_cgroup_hierarchy=1
MACHINEACCELERATORS := usb=off,gic-version=host
MACHINEACCELERATORS :=
CPUFEATURES := pmu=off
QEMUCMD := qemu-system-aarch64
QEMUFW := AAVMF_CODE.fd
QEMUFWVOL := AAVMF_VARS.fd
# dragonball binary name
DBCMD := dragonball

View File

@@ -2296,14 +2296,6 @@ impl<'a> QemuCmdLine<'a> {
}
fn add_iommu(&mut self) {
// vIOMMU (Intel IOMMU) is not supported on the "virt" machine type (arm64)
if self.machine.r#type == "virt" {
self.kernel
.params
.append(&mut KernelParams::from_string("iommu.passthrough=0"));
return;
}
let dev_iommu = DeviceIntelIommu::new();
self.devices.push(Box::new(dev_iommu));

View File

@@ -28,13 +28,8 @@ use std::str::FromStr;
use std::time::Duration;
use qapi_spec::Dictionary;
use std::thread;
use std::time::Instant;
/// default qmp connection read timeout
const DEFAULT_QMP_READ_TIMEOUT: u64 = 250;
const DEFAULT_QMP_CONNECT_DEADLINE_MS: u64 = 5000;
const DEFAULT_QMP_RETRY_SLEEP_MS: u64 = 50;
pub struct Qmp {
qmp: qapi::Qmp<qapi::Stream<BufReader<UnixStream>, UnixStream>>,
@@ -63,43 +58,29 @@ impl Debug for Qmp {
impl Qmp {
pub fn new(qmp_sock_path: &str) -> Result<Self> {
let try_new_once_fn = || -> Result<Qmp> {
let stream = UnixStream::connect(qmp_sock_path)?;
let stream = UnixStream::connect(qmp_sock_path)?;
stream
.set_read_timeout(Some(Duration::from_millis(DEFAULT_QMP_READ_TIMEOUT)))
.context("set qmp read timeout")?;
// Set the read timeout to protect runtime-rs from blocking forever
// trying to set up the QMP connection if qemu fails to launch. The exact
// value is a matter of judgement. Setting it too long would risk
// being ineffective, since the container runtime would time out first anyway
// (containerd's task creation timeout is 2 s by default). OTOH,
// setting it too short would risk interfering with a normal launch that
// is merely delayed by a heavily loaded host.
stream.set_read_timeout(Some(Duration::from_millis(DEFAULT_QMP_READ_TIMEOUT)))?;
let mut qmp = Qmp {
qmp: qapi::Qmp::new(qapi::Stream::new(
BufReader::new(stream.try_clone()?),
stream,
)),
guest_memory_block_size: 0,
};
let info = qmp.qmp.handshake().context("qmp handshake failed")?;
info!(sl!(), "QMP initialized: {:#?}", info);
Ok(qmp)
let mut qmp = Qmp {
qmp: qapi::Qmp::new(qapi::Stream::new(
BufReader::new(stream.try_clone()?),
stream,
)),
guest_memory_block_size: 0,
};
let deadline = Instant::now() + Duration::from_millis(DEFAULT_QMP_CONNECT_DEADLINE_MS);
let mut last_err: Option<anyhow::Error> = None;
let info = qmp.qmp.handshake()?;
info!(sl!(), "QMP initialized: {:#?}", info);
while Instant::now() < deadline {
match try_new_once_fn() {
Ok(qmp) => return Ok(qmp),
Err(e) => {
debug!(sl!(), "QMP not ready yet: {}", e);
last_err = Some(e);
thread::sleep(Duration::from_millis(DEFAULT_QMP_RETRY_SLEEP_MS));
}
}
}
Err(last_err.unwrap_or_else(|| anyhow!("QMP init timed out")))
.with_context(|| format!("timed out waiting for QMP ready: {}", qmp_sock_path))
Ok(qmp)
}
pub fn set_ignore_shared_memory_capability(&mut self) -> Result<()> {

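The retry logic in the hunk above (attempt, sleep `DEFAULT_QMP_RETRY_SLEEP_MS`, give up at a deadline, surface the last error) is a generic pattern; a hedged stand-alone sketch with illustrative names:

```rust
use std::time::{Duration, Instant};

// Keep calling `op` until it succeeds or `deadline_ms` elapses,
// sleeping `sleep_ms` between attempts; return the last error on timeout.
fn retry_with_deadline<T, E>(
    deadline_ms: u64,
    sleep_ms: u64,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let deadline = Instant::now() + Duration::from_millis(deadline_ms);
    let mut last = op();
    while last.is_err() && Instant::now() < deadline {
        std::thread::sleep(Duration::from_millis(sleep_ms));
        last = op();
    }
    last
}

fn main() {
    let mut attempts = 0;
    let result = retry_with_deadline(1_000, 10, || {
        attempts += 1;
        if attempts < 3 { Err("QMP not ready yet") } else { Ok(attempts) }
    });
    assert_eq!(result, Ok(3));
    println!("ok");
}
```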
View File

@@ -6,54 +6,39 @@
use std::{
io,
os::unix::{
fs::{FileTypeExt, OpenOptionsExt},
io::RawFd,
prelude::AsRawFd,
os::{
fd::IntoRawFd,
unix::{
fs::OpenOptionsExt,
io::{FromRawFd, RawFd},
net::UnixStream as StdUnixStream,
prelude::AsRawFd,
},
},
pin::Pin,
task::{Context as TaskContext, Poll},
};
use anyhow::{Context, Result};
use anyhow::{anyhow, Context, Result};
use tokio::{
fs::{File, OpenOptions},
fs::OpenOptions,
io::{AsyncRead, AsyncWrite},
net::UnixStream as AsyncUnixStream,
};
use url::Url;
/// Clear O_NONBLOCK for an fd (turn it into blocking mode).
fn set_flag_with_blocking(fd: RawFd) {
let flag = unsafe { libc::fcntl(fd, libc::F_GETFL) };
if flag < 0 {
error!(sl!(), "failed to fcntl(F_GETFL) fd {} ret {}", fd, flag);
return;
}
let ret = unsafe { libc::fcntl(fd, libc::F_SETFL, flag & !libc::O_NONBLOCK) };
if ret < 0 {
error!(sl!(), "failed to fcntl(F_SETFL) fd {} ret {}", fd, ret);
}
}
fn open_fifo_write(path: &str) -> Result<File> {
fn open_fifo_write(path: &str) -> Result<AsyncUnixStream> {
let std_file = std::fs::OpenOptions::new()
.write(true)
// It's not for non-blocking FIFO opening but for the non-blocking stream
// which will be added into the tokio runtime.
.custom_flags(libc::O_NONBLOCK)
.open(path)
.with_context(|| format!("open fifo for write: {path}"))?;
.with_context(|| format!("open {path} with write"))?;
let fd = std_file.into_raw_fd();
let std_stream = unsafe { StdUnixStream::from_raw_fd(fd) };
// Debug
let meta = std_file.metadata()?;
if !meta.file_type().is_fifo() {
debug!(sl!(), "[DEBUG]{} is not a fifo (type mismatch)", path);
}
set_flag_with_blocking(std_file.as_raw_fd());
Ok(File::from_std(std_file))
AsyncUnixStream::from_std(std_stream).map_err(|e| anyhow!(e))
}
pub struct ShimIo {
@@ -73,6 +58,14 @@ impl ShimIo {
"new shim io stdin {:?} stdout {:?} stderr {:?}", stdin, stdout, stderr
);
let set_flag_with_blocking = |fd: RawFd| {
let flag = unsafe { libc::fcntl(fd, libc::F_GETFL) };
let ret = unsafe { libc::fcntl(fd, libc::F_SETFL, flag & !libc::O_NONBLOCK) };
if ret < 0 {
error!(sl!(), "failed to set fcntl for fd {} error {}", fd, ret);
}
};
let stdin_fd: Option<Box<dyn AsyncRead + Send + Unpin>> = if let Some(stdin) = stdin {
info!(sl!(), "open stdin {:?}", &stdin);
@@ -105,7 +98,9 @@ impl ShimIo {
None => None,
Some(out) => match Url::parse(out.as_str()) {
Err(url::ParseError::RelativeUrlWithoutBase) => {
Url::parse(&format!("fifo://{}", out)).ok()
let out = "fifo://".to_owned() + out.as_str();
let u = Url::parse(out.as_str()).unwrap();
Some(u)
}
Err(err) => {
warn!(sl!(), "unable to parse stdout uri: {}", err);
@@ -116,25 +111,26 @@ impl ShimIo {
}
};
let stdout_url = get_url(stdout);
let get_fd = |url: &Option<Url>| -> Option<Box<dyn AsyncWrite + Send + Unpin>> {
info!(sl!(), "get fd for {:?}", &url);
if let Some(url) = url {
if url.scheme() == "fifo" {
let path = url.path();
match open_fifo_write(path) {
Ok(f) => return Some(Box::new(ShimIoWrite::File(f))),
Err(err) => error!(sl!(), "failed to open fifo {} error {:?}", path, err),
Ok(s) => {
return Some(Box::new(ShimIoWrite::Stream(s)));
}
Err(err) => {
error!(sl!(), "failed to open file {} error {:?}", url.path(), err);
}
}
} else {
warn!(sl!(), "unsupported io scheme {}", url.scheme());
}
}
None
};
let stdout_url = get_url(stdout);
let stderr_url = get_url(stderr);
Ok(Self {
stdin: stdin_fd,
stdout: get_fd(&stdout_url),
@@ -145,7 +141,7 @@ impl ShimIo {
#[derive(Debug)]
enum ShimIoWrite {
File(File),
Stream(AsyncUnixStream),
// TODO: support other type
}
@@ -155,20 +151,20 @@ impl AsyncWrite for ShimIoWrite {
cx: &mut TaskContext<'_>,
buf: &[u8],
) -> Poll<io::Result<usize>> {
match &mut *self {
ShimIoWrite::File(f) => Pin::new(f).poll_write(cx, buf),
match *self {
ShimIoWrite::Stream(ref mut s) => Pin::new(s).poll_write(cx, buf),
}
}
fn poll_flush(mut self: Pin<&mut Self>, cx: &mut TaskContext<'_>) -> Poll<io::Result<()>> {
match &mut *self {
ShimIoWrite::File(f) => Pin::new(f).poll_flush(cx),
match *self {
ShimIoWrite::Stream(ref mut s) => Pin::new(s).poll_flush(cx),
}
}
fn poll_shutdown(mut self: Pin<&mut Self>, cx: &mut TaskContext<'_>) -> Poll<io::Result<()>> {
match &mut *self {
ShimIoWrite::File(f) => Pin::new(f).poll_shutdown(cx),
match *self {
ShimIoWrite::Stream(ref mut s) => Pin::new(s).poll_shutdown(cx),
}
}
}

View File

@@ -6,15 +6,10 @@
use std::{
ffi::{OsStr, OsString},
io::Write,
path::PathBuf,
};
use anyhow::{anyhow, Context, Result};
use containerd_shim_protos::{
protobuf::Message,
types::introspection::{RuntimeInfo, RuntimeVersion},
};
use nix::{
mount::{mount, MsFlags},
sched::{self, CloneFlags},
@@ -34,13 +29,11 @@ enum Action {
Delete(Args),
Help,
Version,
Info,
}
fn parse_args(args: &[OsString]) -> Result<Action> {
let mut help = false;
let mut version = false;
let mut info = false;
let mut shim_args = Args::default();
// Crate `go_flag` is used to keep compatibility with the Go `flag` package.
@@ -53,7 +46,6 @@ fn parse_args(args: &[OsString]) -> Result<Action> {
flags.add_flag("publish-binary", &mut shim_args.publish_binary);
flags.add_flag("help", &mut help);
flags.add_flag("version", &mut version);
flags.add_flag("info", &mut info);
})
.context(Error::ParseArgument(format!("{args:?}")))?;
@@ -61,8 +53,6 @@ fn parse_args(args: &[OsString]) -> Result<Action> {
Ok(Action::Help)
} else if version {
Ok(Action::Version)
} else if info {
Ok(Action::Info)
} else if rest_args.is_empty() {
Ok(Action::Run(shim_args))
} else if rest_args[0] == "start" {
@@ -93,8 +83,6 @@ fn show_help(cmd: &OsStr) {
enable debug output in logs
-id string
id of the task
-info
output the runtime info as protobuf (for containerd v2.0+)
-namespace string
namespace that owns the shim
-publish-binary string
@@ -126,25 +114,6 @@ fn show_version(err: Option<anyhow::Error>) {
}
}
fn show_info() -> Result<()> {
let mut version = RuntimeVersion::new();
version.version = config::RUNTIME_VERSION.to_string();
version.revision = config::RUNTIME_GIT_COMMIT.to_string();
let mut info = RuntimeInfo::new();
info.name = config::CONTAINERD_RUNTIME_NAME.to_string();
info.version = Some(version).into();
let data = info
.write_to_bytes()
.context("failed to marshal RuntimeInfo")?;
std::io::stdout()
.write_all(&data)
.context("failed to write RuntimeInfo to stdout")?;
Ok(())
}
fn get_tokio_runtime() -> Result<tokio::runtime::Runtime> {
let worker_threads = std::env::var(ENV_TOKIO_RUNTIME_WORKER_THREADS)
.unwrap_or_default()
@@ -186,7 +155,6 @@ fn real_main() -> Result<()> {
}
Action::Help => show_help(&args[0]),
Action::Version => show_version(None),
Action::Info => show_info().context("show info")?,
}
Ok(())
}

View File

@@ -250,7 +250,7 @@ DEFSECCOMPSANDBOXPARAM :=
DEFENTROPYSOURCE := /dev/urandom
DEFVALIDENTROPYSOURCES := [\"/dev/urandom\",\"/dev/random\",\"\"]
DEFDISABLEBLOCK := true
DEFDISABLEBLOCK := false
DEFSHAREDFS_CLH_VIRTIOFS := virtio-fs
DEFSHAREDFS_QEMU_VIRTIOFS := virtio-fs
# Please keep DEFSHAREDFS_QEMU_COCO_DEV_VIRTIOFS in sync with TDX/SNP

View File

@@ -9,9 +9,7 @@ import (
"fmt"
"os"
containerdtypes "github.com/containerd/containerd/api/types"
shimapi "github.com/containerd/containerd/runtime/v2/shim"
"google.golang.org/protobuf/proto"
shim "github.com/kata-containers/kata-containers/src/runtime/pkg/containerd-shim-v2"
"github.com/kata-containers/kata-containers/src/runtime/pkg/katautils"
@@ -23,25 +21,6 @@ func shimConfig(config *shimapi.Config) {
config.NoSubreaper = true
}
func handleInfoFlag() {
info := &containerdtypes.RuntimeInfo{
Name: types.DefaultKataRuntimeName,
Version: &containerdtypes.RuntimeVersion{
Version: katautils.VERSION,
Revision: katautils.COMMIT,
},
}
data, err := proto.Marshal(info)
if err != nil {
fmt.Fprintf(os.Stderr, "failed to marshal RuntimeInfo: %v\n", err)
os.Exit(1)
}
os.Stdout.Write(data)
os.Exit(0)
}
func main() {
if len(os.Args) == 2 && os.Args[1] == "--version" {
@@ -49,9 +28,5 @@ func main() {
os.Exit(0)
}
if len(os.Args) == 2 && os.Args[1] == "-info" {
handleInfoFlag()
}
shimapi.Run(types.DefaultKataRuntimeName, shim.New, shimConfig)
}

View File

@@ -109,20 +109,6 @@ memory_slots = @DEFMEMSLOTS@
# > amount of physical RAM --> will be set to the actual amount of physical RAM
default_maxmemory = @DEFMAXMEMSZ@
# Disable hotplugging host block devices to guest VMs for container rootfs.
# In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor,
# virtio-fs is used instead to pass the rootfs.
# WARNING:
# Don't set this flag to false if you don't understand well the behavior of
# your container runtime and image snapshotter. Some snapshotters might use
# container image storage devices that are not meant to be hotplugged into a
# guest VM - e.g., because they contain files used by the host or by other
# guests.
disable_block_device_use = @DEFDISABLEBLOCK@
# Shared file system type:
# - virtio-fs (default)
# - virtio-fs-nydus

View File

@@ -159,18 +159,12 @@ memory_offset = 0
# Default false
enable_virtio_mem = false
# Disable hotplugging host block devices to guest VMs for container rootfs.
# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor,
# virtio-fs is used instead to pass the rootfs.
# WARNING:
# Don't set this flag to false if you don't understand well the behavior of
# your container runtime and image snapshotter. Some snapshotters might use
# container image storage devices that are not meant to be hotplugged into a
# guest VM - e.g., because they contain files used by the host or by other
# guests.
disable_block_device_use = @DEFDISABLEBLOCK@
# Shared file system type:

View File

@@ -145,18 +145,12 @@ memory_offset = 0
# Default false
enable_virtio_mem = false
# Disable hotplugging host block devices to guest VMs for container rootfs.
# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor,
# virtio-fs is used instead to pass the rootfs.
# WARNING:
# Don't set this flag to false if you don't understand well the behavior of
# your container runtime and image snapshotter. Some snapshotters might use
# container image storage devices that are not meant to be hotplugged into a
# guest VM - e.g., because they contain files used by the host or by other
# guests.
disable_block_device_use = @DEFDISABLEBLOCK@
# Shared file system type:

View File

@@ -185,18 +185,12 @@ memory_offset = 0
# Default false
enable_virtio_mem = false
# Disable hotplugging host block devices to guest VMs for container rootfs.
# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor,
# virtio-fs is used instead to pass the rootfs.
# WARNING:
# Don't set this flag to false if you don't understand well the behavior of
# your container runtime and image snapshotter. Some snapshotters might use
# container image storage devices that are not meant to be hotplugged into a
# guest VM - e.g., because they contain files used by the host or by other
# guests.
disable_block_device_use = @DEFDISABLEBLOCK@
# Shared file system type:

View File

@@ -162,18 +162,12 @@ memory_offset = 0
# Default false
enable_virtio_mem = false
# Disable hotplugging host block devices to guest VMs for container rootfs.
# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor,
# virtio-fs is used instead to pass the rootfs.
# WARNING:
# Don't set this flag to false if you don't understand well the behavior of
# your container runtime and image snapshotter. Some snapshotters might use
# container image storage devices that are not meant to be hotplugged into a
# guest VM - e.g., because they contain files used by the host or by other
# guests.
disable_block_device_use = @DEFDISABLEBLOCK@
# Shared file system type:

View File

@@ -144,18 +144,12 @@ memory_offset = 0
# Default false
enable_virtio_mem = false
# Disable hotplugging host block devices to guest VMs for container rootfs.
# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor,
# virtio-fs is used instead to pass the rootfs.
# WARNING:
# Don't set this flag to false if you don't understand well the behavior of
# your container runtime and image snapshotter. Some snapshotters might use
# container image storage devices that are not meant to be hotplugged into a
# guest VM - e.g., because they contain files used by the host or by other
# guests.
disable_block_device_use = @DEFDISABLEBLOCK@
# Shared file system type:

View File

@@ -153,18 +153,12 @@ memory_offset = 0
# Default false
enable_virtio_mem = false
# Disable hotplugging host block devices to guest VMs for container rootfs.
# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor,
# virtio-fs is used instead to pass the rootfs.
# WARNING:
# Don't set this flag to false if you don't understand well the behavior of
# your container runtime and image snapshotter. Some snapshotters might use
# container image storage devices that are not meant to be hotplugged into a
# guest VM - e.g., because they contain files used by the host or by other
# guests.
disable_block_device_use = @DEFDISABLEBLOCK@
# Shared file system type:

View File

@@ -184,18 +184,12 @@ memory_offset = 0
# Default false
enable_virtio_mem = false
# Disable hotplugging host block devices to guest VMs for container rootfs.
# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor,
# virtio-fs is used instead to pass the rootfs.
# WARNING:
# Don't set this flag to false if you don't understand well the behavior of
# your container runtime and image snapshotter. Some snapshotters might use
# container image storage devices that are not meant to be hotplugged into a
# guest VM - e.g., because they contain files used by the host or by other
# guests.
disable_block_device_use = @DEFDISABLEBLOCK@
# Shared file system type:

View File

@@ -161,18 +161,12 @@ memory_offset = 0
# Default false
enable_virtio_mem = false
# Disable hotplugging host block devices to guest VMs for container rootfs.
# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor,
# virtio-fs is used instead to pass the rootfs.
# WARNING:
# Don't set this flag to false if you don't understand well the behavior of
# your container runtime and image snapshotter. Some snapshotters might use
# container image storage devices that are not meant to be hotplugged into a
# guest VM - e.g., because they contain files used by the host or by other
# guests.
disable_block_device_use = @DEFDISABLEBLOCK@
# Shared file system type:

View File

@@ -144,18 +144,12 @@ memory_offset = 0
# Default false
enable_virtio_mem = false
# Disable hotplugging host block devices to guest VMs for container rootfs.
# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor,
# virtio-fs is used instead to pass the rootfs.
# WARNING:
# Don't set this flag to false if you don't understand well the behavior of
# your container runtime and image snapshotter. Some snapshotters might use
# container image storage devices that are not meant to be hotplugged into a
# guest VM - e.g., because they contain files used by the host or by other
# guests.
disable_block_device_use = @DEFDISABLEBLOCK@
# Shared file system type:

View File

@@ -103,18 +103,12 @@ default_maxmemory = @DEFMAXMEMSZ@
# Default 0
memory_offset = 0
# Disable hotplugging host block devices to guest VMs for container rootfs.
# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor,
# virtio-fs is used instead to pass the rootfs.
# WARNING:
# Don't set this flag to false if you don't understand well the behavior of
# your container runtime and image snapshotter. Some snapshotters might use
# container image storage devices that are not meant to be hotplugged into a
# guest VM - e.g., because they contain files used by the host or by other
# guests.
disable_block_device_use = @DEFDISABLEBLOCK@
# Shared file system type:


@@ -1,7 +1,7 @@
module github.com/kata-containers/kata-containers/src/runtime
// Keep in sync with version in versions.yaml
go 1.24.12
go 1.24.11
// WARNING: Do NOT use `replace` directives as those break dependabot:
// https://github.com/kata-containers/kata-containers/issues/11020


@@ -861,10 +861,6 @@ func (q *qemu) createPCIeTopology(qemuConfig *govmmQemu.Config, hypervisorConfig
return fmt.Errorf("Cannot get VFIO device from IOMMUFD with device: %v err: %v", dev, err)
}
} else {
if q.config.ConfidentialGuest {
return fmt.Errorf("ConfidentialGuest needs IOMMUFD - cannot use %s", dev.HostPath)
}
vfioDevices, err = drivers.GetAllVFIODevicesFromIOMMUGroup(dev)
if err != nil {
return fmt.Errorf("Cannot get all VFIO devices from IOMMU group with device: %v err: %v", dev, err)


@@ -1,7 +1,7 @@
module kata-containers/csi-kata-directvolume
// Keep in sync with version in versions.yaml
go 1.24.12
go 1.24.11
// WARNING: Do NOT use `replace` directives as those break dependabot:
// https://github.com/kata-containers/kata-containers/issues/11020


@@ -1,7 +1,7 @@
module github.com/kata-containers/kata-containers/src/tools/log-parser
// Keep in sync with version in versions.yaml
go 1.24.12
go 1.24.11
require (
github.com/BurntSushi/toml v1.1.0


@@ -1,375 +0,0 @@
#!/usr/bin/env bats
# Copyright (c) 2025 NVIDIA Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
# End-to-end tests for kata-deploy custom runtimes feature
# These tests deploy kata-deploy with custom runtimes and verify pods can run
#
# Required environment variables:
# DOCKER_REGISTRY - Container registry for kata-deploy image
# DOCKER_REPO - Repository name for kata-deploy image
# DOCKER_TAG - Image tag to test
# KATA_HYPERVISOR - Hypervisor to test (qemu, clh, etc.)
# KUBERNETES - K8s distribution (microk8s, k3s, rke2, etc.)
load "${BATS_TEST_DIRNAME}/../../common.bash"
repo_root_dir="${BATS_TEST_DIRNAME}/../../../"
load "${repo_root_dir}/tests/gha-run-k8s-common.sh"
# Load shared helm deployment helpers
source "${BATS_TEST_DIRNAME}/lib/helm-deploy.bash"
# Test configuration
CUSTOM_RUNTIME_NAME="special-workload"
CUSTOM_RUNTIME_HANDLER="kata-my-custom-handler"
TEST_POD_NAME="kata-deploy-custom-verify"
CHART_PATH="$(get_chart_path)"
# =============================================================================
# Template Rendering Tests (no cluster required)
# =============================================================================
@test "Helm template: ConfigMap is created with custom runtime" {
helm template kata-deploy "${CHART_PATH}" \
-f "${CUSTOM_VALUES_FILE}" \
--set image.reference=quay.io/kata-containers/kata-deploy \
--set image.tag=latest \
> /tmp/rendered.yaml
# Check that ConfigMap exists
grep -q "kind: ConfigMap" /tmp/rendered.yaml
grep -q "kata-deploy-custom-configs" /tmp/rendered.yaml
grep -q "${CUSTOM_RUNTIME_HANDLER}" /tmp/rendered.yaml
}
@test "Helm template: RuntimeClass is created with correct handler" {
helm template kata-deploy "${CHART_PATH}" \
-f "${CUSTOM_VALUES_FILE}" \
--set image.reference=quay.io/kata-containers/kata-deploy \
--set image.tag=latest \
> /tmp/rendered.yaml
grep -q "kind: RuntimeClass" /tmp/rendered.yaml
grep -q "handler: ${CUSTOM_RUNTIME_HANDLER}" /tmp/rendered.yaml
}
@test "Helm template: Drop-in file is included in ConfigMap" {
helm template kata-deploy "${CHART_PATH}" \
-f "${CUSTOM_VALUES_FILE}" \
--set image.reference=quay.io/kata-containers/kata-deploy \
--set image.tag=latest \
> /tmp/rendered.yaml
grep -q "dropin-${CUSTOM_RUNTIME_HANDLER}.toml" /tmp/rendered.yaml
grep -q "dial_timeout = 999" /tmp/rendered.yaml
}
@test "Helm template: CUSTOM_RUNTIMES_ENABLED env var is set" {
helm template kata-deploy "${CHART_PATH}" \
-f "${CUSTOM_VALUES_FILE}" \
--set image.reference=quay.io/kata-containers/kata-deploy \
--set image.tag=latest \
> /tmp/rendered.yaml
grep -q "CUSTOM_RUNTIMES_ENABLED" /tmp/rendered.yaml
grep -A1 "CUSTOM_RUNTIMES_ENABLED" /tmp/rendered.yaml | grep -q '"true"'
}
@test "Helm template: custom-configs volume is mounted" {
helm template kata-deploy "${CHART_PATH}" \
-f "${CUSTOM_VALUES_FILE}" \
--set image.reference=quay.io/kata-containers/kata-deploy \
--set image.tag=latest \
> /tmp/rendered.yaml
grep -q "mountPath: /custom-configs/" /tmp/rendered.yaml
grep -q "name: custom-configs" /tmp/rendered.yaml
}
@test "Helm template: No custom runtime resources when disabled" {
helm template kata-deploy "${CHART_PATH}" \
--set image.reference=quay.io/kata-containers/kata-deploy \
--set image.tag=latest \
--set customRuntimes.enabled=false \
> /tmp/rendered.yaml
! grep -q "kata-deploy-custom-configs" /tmp/rendered.yaml
! grep -q "CUSTOM_RUNTIMES_ENABLED" /tmp/rendered.yaml
}
@test "Helm template: Custom runtimes only mode (no standard shims)" {
# Test that Helm chart renders correctly when all standard shims are disabled
# using shims.disableAll and only custom runtimes are enabled
local values_file
values_file=$(mktemp)
cat > "${values_file}" <<EOF
image:
reference: quay.io/kata-containers/kata-deploy
tag: latest
# Disable all standard shims at once
shims:
disableAll: true
# Enable only custom runtimes
customRuntimes:
enabled: true
runtimes:
my-only-runtime:
baseConfig: "qemu"
dropIn: |
[hypervisor.qemu]
enable_debug = true
runtimeClass: |
kind: RuntimeClass
apiVersion: node.k8s.io/v1
metadata:
name: kata-my-only-runtime
handler: kata-my-only-runtime
scheduling:
nodeSelector:
katacontainers.io/kata-runtime: "true"
containerd:
snapshotter: ""
crio:
pullType: ""
EOF
helm template kata-deploy "${CHART_PATH}" -f "${values_file}" > /tmp/rendered.yaml
rm -f "${values_file}"
# Verify custom runtime resources are created
grep -q "kata-deploy-custom-configs" /tmp/rendered.yaml
grep -q "CUSTOM_RUNTIMES_ENABLED" /tmp/rendered.yaml
grep -q "kata-my-only-runtime" /tmp/rendered.yaml
# Verify SHIMS env var is empty (no standard shims)
local shims_value
shims_value=$(grep -A1 'name: SHIMS$' /tmp/rendered.yaml | grep 'value:' | head -1 || echo "")
echo "# SHIMS env value: ${shims_value}" >&3
}
# =============================================================================
# End-to-End Tests (require cluster with kata-deploy)
# =============================================================================
@test "E2E: Custom RuntimeClass exists with correct properties" {
# Check RuntimeClass exists
run kubectl get runtimeclass "${CUSTOM_RUNTIME_HANDLER}" -o name
if [[ "${status}" -ne 0 ]]; then
echo "# RuntimeClass not found. kata-deploy logs:" >&3
kubectl -n kube-system logs -l name=kata-deploy --tail=50 2>/dev/null || true
fail "Custom RuntimeClass ${CUSTOM_RUNTIME_HANDLER} not found"
fi
echo "# RuntimeClass ${CUSTOM_RUNTIME_HANDLER} exists" >&3
# Verify handler is correct
local handler
handler=$(kubectl get runtimeclass "${CUSTOM_RUNTIME_HANDLER}" -o jsonpath='{.handler}')
echo "# Handler: ${handler}" >&3
[[ "${handler}" == "${CUSTOM_RUNTIME_HANDLER}" ]]
# Verify overhead is set
local overhead_memory
overhead_memory=$(kubectl get runtimeclass "${CUSTOM_RUNTIME_HANDLER}" -o jsonpath='{.overhead.podFixed.memory}')
echo "# Overhead memory: ${overhead_memory}" >&3
[[ "${overhead_memory}" == "640Mi" ]]
local overhead_cpu
overhead_cpu=$(kubectl get runtimeclass "${CUSTOM_RUNTIME_HANDLER}" -o jsonpath='{.overhead.podFixed.cpu}')
echo "# Overhead CPU: ${overhead_cpu}" >&3
[[ "${overhead_cpu}" == "500m" ]]
# Verify nodeSelector is set
local node_selector
node_selector=$(kubectl get runtimeclass "${CUSTOM_RUNTIME_HANDLER}" -o jsonpath='{.scheduling.nodeSelector.katacontainers\.io/kata-runtime}')
echo "# Node selector: ${node_selector}" >&3
[[ "${node_selector}" == "true" ]]
# Verify label is set (Helm sets this to "Helm" when it manages the resource)
local label
label=$(kubectl get runtimeclass "${CUSTOM_RUNTIME_HANDLER}" -o jsonpath='{.metadata.labels.app\.kubernetes\.io/managed-by}')
echo "# Label app.kubernetes.io/managed-by: ${label}" >&3
[[ "${label}" == "Helm" ]]
BATS_TEST_COMPLETED=1
}
@test "E2E: Custom runtime can run a pod" {
# Check if the custom RuntimeClass exists
if ! kubectl get runtimeclass "${CUSTOM_RUNTIME_HANDLER}" &>/dev/null; then
skip "Custom RuntimeClass ${CUSTOM_RUNTIME_HANDLER} not found"
fi
# Create a test pod using the custom runtime
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: ${TEST_POD_NAME}
spec:
runtimeClassName: ${CUSTOM_RUNTIME_HANDLER}
restartPolicy: Never
nodeSelector:
katacontainers.io/kata-runtime: "true"
containers:
- name: test
image: quay.io/kata-containers/alpine-bash-curl:latest
command: ["echo", "OK"]
EOF
# Wait for pod to complete or become ready
echo "# Waiting for pod to be ready..." >&3
local timeout=120
local start_time
start_time=$(date +%s)
while true; do
local phase
phase=$(kubectl get pod "${TEST_POD_NAME}" -o jsonpath='{.status.phase}' 2>/dev/null || echo "Unknown")
case "${phase}" in
Succeeded|Running)
echo "# Pod reached phase: ${phase}" >&3
break
;;
Failed)
echo "# Pod failed" >&3
kubectl describe pod "${TEST_POD_NAME}" >&3
fail "Pod failed to run with custom runtime"
;;
*)
local current_time
current_time=$(date +%s)
if (( current_time - start_time > timeout )); then
echo "# Timeout waiting for pod" >&3
kubectl describe pod "${TEST_POD_NAME}" >&3
fail "Timeout waiting for pod to be ready"
fi
sleep 5
;;
esac
done
# Verify pod ran successfully
local exit_code
exit_code=$(kubectl get pod "${TEST_POD_NAME}" -o jsonpath='{.status.containerStatuses[0].state.terminated.exitCode}' 2>/dev/null || echo "")
if [[ "${exit_code}" == "0" ]] || [[ "$(kubectl get pod "${TEST_POD_NAME}" -o jsonpath='{.status.phase}')" == "Running" ]]; then
echo "# Pod ran successfully with custom runtime" >&3
BATS_TEST_COMPLETED=1
else
fail "Pod did not complete successfully (exit code: ${exit_code})"
fi
}
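The pod-phase polling loop in the test above follows a generic bounded-wait pattern. A minimal standalone sketch (the `wait_for` helper and the file-based example are illustrative, not part of the test suite):

```shell
#!/usr/bin/env bash
# Generic bounded-wait helper mirroring the pod-phase polling loop above.
wait_for() {
    local timeout="$1" interval="$2"; shift 2
    local start_time
    start_time=$(date +%s)
    # Retry the given command until it succeeds or the deadline passes
    until "$@"; do
        if (( $(date +%s) - start_time > timeout )); then
            echo "timeout waiting for: $*" >&2
            return 1
        fi
        sleep "${interval}"
    done
}

# Usage: wait for a file to appear (stands in for a pod reaching Running)
marker=$(mktemp -u)
( sleep 1; touch "${marker}" ) &
wait_for 10 0.2 test -e "${marker}" && echo "condition met"
rm -f "${marker}"
```

The same shape works for any `kubectl get ... -o jsonpath` probe: pass the probe as the command and keep the deadline arithmetic in one place.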
# =============================================================================
# Setup and Teardown
# =============================================================================
setup_file() {
ensure_helm
echo "# Using base config: ${KATA_HYPERVISOR}" >&3
echo "# Custom runtime handler: ${CUSTOM_RUNTIME_HANDLER}" >&3
echo "# Image: ${DOCKER_REGISTRY}/${DOCKER_REPO}:${DOCKER_TAG}" >&3
echo "# K8s distribution: ${KUBERNETES}" >&3
# Create values file for custom runtimes
export DEPLOY_VALUES_FILE=$(mktemp)
cat > "${DEPLOY_VALUES_FILE}" <<EOF
customRuntimes:
enabled: true
runtimes:
${CUSTOM_RUNTIME_NAME}:
baseConfig: "${KATA_HYPERVISOR}"
dropIn: |
[agent.kata]
dial_timeout = 999
runtimeClass: |
kind: RuntimeClass
apiVersion: node.k8s.io/v1
metadata:
name: ${CUSTOM_RUNTIME_HANDLER}
labels:
app.kubernetes.io/managed-by: kata-deploy
handler: ${CUSTOM_RUNTIME_HANDLER}
overhead:
podFixed:
memory: "640Mi"
cpu: "500m"
scheduling:
nodeSelector:
katacontainers.io/kata-runtime: "true"
containerd:
snapshotter: ""
crio:
pullType: ""
EOF
echo "# Deploying kata-deploy with custom runtimes..." >&3
deploy_kata "${DEPLOY_VALUES_FILE}"
echo "# kata-deploy deployed successfully" >&3
}
setup() {
# Create temporary values file for template tests
CUSTOM_VALUES_FILE=$(mktemp)
cat > "${CUSTOM_VALUES_FILE}" <<EOF
customRuntimes:
enabled: true
runtimes:
${CUSTOM_RUNTIME_NAME}:
baseConfig: "${KATA_HYPERVISOR:-qemu}"
dropIn: |
[agent.kata]
dial_timeout = 999
runtimeClass: |
kind: RuntimeClass
apiVersion: node.k8s.io/v1
metadata:
name: ${CUSTOM_RUNTIME_HANDLER}
labels:
app.kubernetes.io/managed-by: kata-deploy
handler: ${CUSTOM_RUNTIME_HANDLER}
overhead:
podFixed:
memory: "640Mi"
cpu: "500m"
scheduling:
nodeSelector:
katacontainers.io/kata-runtime: "true"
containerd:
snapshotter: ""
crio:
pullType: ""
EOF
}
teardown() {
# Show pod details for debugging if test failed
if [[ "${BATS_TEST_COMPLETED:-}" != "1" ]]; then
echo "# Test failed, gathering diagnostics..." >&3
kubectl describe pod "${TEST_POD_NAME}" 2>/dev/null || true
echo "# kata-deploy logs:" >&3
kubectl -n kube-system logs -l name=kata-deploy --tail=100 2>/dev/null || true
fi
# Clean up test pod
kubectl delete pod "${TEST_POD_NAME}" --ignore-not-found=true --wait=false 2>/dev/null || true
# Clean up temp file
[[ -f "${CUSTOM_VALUES_FILE:-}" ]] && rm -f "${CUSTOM_VALUES_FILE}"
}
teardown_file() {
echo "# Cleaning up..." >&3
kubectl delete pod "${TEST_POD_NAME}" --ignore-not-found=true --wait=true --timeout=60s 2>/dev/null || true
uninstall_kata
[[ -f "${DEPLOY_VALUES_FILE:-}" ]] && rm -f "${DEPLOY_VALUES_FILE}"
}


@@ -31,9 +31,6 @@ load "${BATS_TEST_DIRNAME}/../../common.bash"
repo_root_dir="${BATS_TEST_DIRNAME}/../../../"
load "${repo_root_dir}/tests/gha-run-k8s-common.sh"
# Load shared helm deployment helpers
source "${BATS_TEST_DIRNAME}/lib/helm-deploy.bash"
setup() {
ensure_helm
@@ -82,6 +79,7 @@ EOF
# Install kata-deploy via Helm
echo "Installing kata-deploy with Helm..."
local helm_chart_dir="tools/packaging/kata-deploy/helm-chart/kata-deploy"
# Timeouts can be customized via environment variables:
# - KATA_DEPLOY_TIMEOUT: Overall helm timeout (includes all hooks)
@@ -99,11 +97,43 @@ EOF
echo " DaemonSet rollout: ${daemonset_timeout}s (includes image pull)"
echo " Verification pod: ${verification_timeout}s (pod execution)"
# Deploy kata-deploy using shared helper with verification options
HELM_TIMEOUT="${helm_timeout}" deploy_kata "" \
helm dependency build "${helm_chart_dir}"
# Disable all shims except the one being tested
helm upgrade --install kata-deploy "${helm_chart_dir}" \
--set image.reference="${DOCKER_REGISTRY}/${DOCKER_REPO}" \
--set image.tag="${DOCKER_TAG}" \
--set debug=true \
--set k8sDistribution="${KUBERNETES}" \
--set shims.clh.enabled=false \
--set shims.cloud-hypervisor.enabled=false \
--set shims.dragonball.enabled=false \
--set shims.fc.enabled=false \
--set shims.qemu.enabled=false \
--set shims.qemu-runtime-rs.enabled=false \
--set shims.qemu-cca.enabled=false \
--set shims.qemu-se.enabled=false \
--set shims.qemu-se-runtime-rs.enabled=false \
--set shims.qemu-nvidia-gpu.enabled=false \
--set shims.qemu-nvidia-gpu-snp.enabled=false \
--set shims.qemu-nvidia-gpu-tdx.enabled=false \
--set shims.qemu-sev.enabled=false \
--set shims.qemu-snp.enabled=false \
--set shims.qemu-snp-runtime-rs.enabled=false \
--set shims.qemu-tdx.enabled=false \
--set shims.qemu-tdx-runtime-rs.enabled=false \
--set shims.qemu-coco-dev.enabled=false \
--set shims.qemu-coco-dev-runtime-rs.enabled=false \
--set "shims.${KATA_HYPERVISOR}.enabled=true" \
--set "defaultShim.amd64=${KATA_HYPERVISOR}" \
--set "defaultShim.arm64=${KATA_HYPERVISOR}" \
--set runtimeClasses.enabled=true \
--set runtimeClasses.createDefault=true \
--set-file verification.pod="${verification_yaml}" \
--set verification.timeout="${verification_timeout}" \
--set verification.daemonsetTimeout="${daemonset_timeout}"
--set verification.daemonsetTimeout="${daemonset_timeout}" \
--namespace kube-system \
--wait --timeout "${helm_timeout}"
rm -f "${verification_yaml}"
@@ -149,5 +179,7 @@ EOF
}
teardown() {
uninstall_kata
pushd "${repo_root_dir}"
helm uninstall kata-deploy --ignore-not-found --wait --cascade foreground --timeout 10m --namespace kube-system --debug
popd
}


@@ -1,127 +0,0 @@
#!/bin/bash
# Copyright (c) 2025 NVIDIA Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
# Shared helm deployment helpers for kata-deploy tests
#
# Required environment variables:
# DOCKER_REGISTRY - Container registry for kata-deploy image
# DOCKER_REPO - Repository name for kata-deploy image
# DOCKER_TAG - Image tag to test
# KATA_HYPERVISOR - Hypervisor to test (qemu, clh, etc.)
# KUBERNETES - K8s distribution (microk8s, k3s, rke2, etc.)
HELM_RELEASE_NAME="${HELM_RELEASE_NAME:-kata-deploy}"
HELM_NAMESPACE="${HELM_NAMESPACE:-kube-system}"
# Get the path to the helm chart
get_chart_path() {
local script_dir
script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
echo "${script_dir}/../../../../tools/packaging/kata-deploy/helm-chart/kata-deploy"
}
# Generate base values YAML that disables all shims except the specified one
# Arguments:
# $1 - Output file path
# $2 - (Optional) Additional values file to merge
generate_base_values() {
local output_file="$1"
local extra_values_file="${2:-}"
cat > "${output_file}" <<EOF
image:
reference: ${DOCKER_REGISTRY}/${DOCKER_REPO}
tag: ${DOCKER_TAG}
k8sDistribution: "${KUBERNETES}"
debug: true
# Disable all shims at once, then enable only the one we need
shims:
disableAll: true
${KATA_HYPERVISOR}:
enabled: true
defaultShim:
amd64: ${KATA_HYPERVISOR}
arm64: ${KATA_HYPERVISOR}
runtimeClasses:
enabled: true
createDefault: true
EOF
}
# Deploy kata-deploy using helm
# Arguments:
# $1 - (Optional) Additional values file to merge with base values
# $@ - (Optional) Additional helm arguments (after the first positional arg)
deploy_kata() {
local extra_values_file="${1:-}"
shift || true
local extra_helm_args=("$@")
local chart_path
local values_yaml
chart_path="$(get_chart_path)"
values_yaml=$(mktemp)
# Generate base values
generate_base_values "${values_yaml}"
# Add required helm repos for dependencies
helm repo add node-feature-discovery https://kubernetes-sigs.github.io/node-feature-discovery/charts 2>/dev/null || true
helm repo update
# Build helm dependencies
helm dependency build "${chart_path}"
# Build helm command
local helm_cmd=(
helm upgrade --install "${HELM_RELEASE_NAME}" "${chart_path}"
-f "${values_yaml}"
)
# Add extra values file if provided
if [[ -n "${extra_values_file}" && -f "${extra_values_file}" ]]; then
helm_cmd+=(-f "${extra_values_file}")
fi
# Add any extra helm arguments
if [[ ${#extra_helm_args[@]} -gt 0 ]]; then
helm_cmd+=("${extra_helm_args[@]}")
fi
helm_cmd+=(
--namespace "${HELM_NAMESPACE}"
--wait --timeout "${HELM_TIMEOUT:-10m}"
)
# Run helm install
"${helm_cmd[@]}"
local ret=$?
rm -f "${values_yaml}"
if [[ ${ret} -ne 0 ]]; then
echo "Helm install failed with exit code ${ret}" >&2
return ${ret}
fi
# Wait for daemonset to be ready
kubectl -n "${HELM_NAMESPACE}" rollout status daemonset/kata-deploy --timeout=300s
# Give it a moment to configure runtimes
sleep 10
return 0
}
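The argv-array construction used by `deploy_kata` above avoids word-splitting and empty-argument bugs when flags are optional. A minimal sketch of the same pattern (`build_cmd` is hypothetical; `echo` stands in for the real `helm` binary):

```shell
#!/usr/bin/env bash
# Sketch of the conditional argv-array pattern used by deploy_kata above.
build_cmd() {
    local extra_values_file="${1:-}"
    local cmd=(echo upgrade --install demo ./chart)
    # An optional values file is appended only when it actually exists,
    # so empty arguments never reach the command line.
    if [[ -n "${extra_values_file}" && -f "${extra_values_file}" ]]; then
        cmd+=(-f "${extra_values_file}")
    fi
    cmd+=(--wait --timeout "${HELM_TIMEOUT:-10m}")
    "${cmd[@]}"
}

build_cmd   # prints: upgrade --install demo ./chart --wait --timeout 10m
```

Quoting the expansion as `"${cmd[@]}"` preserves each element as one argument, which a flat string built by concatenation cannot guarantee.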
# Uninstall kata-deploy
uninstall_kata() {
helm uninstall "${HELM_RELEASE_NAME}" -n "${HELM_NAMESPACE}" \
--ignore-not-found --wait --cascade foreground --timeout 10m || true
}


@@ -19,7 +19,6 @@ if [[ -n "${KATA_DEPLOY_TEST_UNION:-}" ]]; then
else
KATA_DEPLOY_TEST_UNION=( \
"kata-deploy.bats" \
"kata-deploy-custom-runtimes.bats" \
)
fi


@@ -566,8 +566,11 @@ function helm_helper() {
[[ -n "${HELM_K8S_DISTRIBUTION}" ]] && yq -i ".k8sDistribution = \"${HELM_K8S_DISTRIBUTION}\"" "${values_yaml}"
if [[ "${HELM_DEFAULT_INSTALLATION}" = "false" ]]; then
# Disable all shims at once, then enable only the ones specified in HELM_SHIMS
yq -i ".shims.disableAll = true" "${values_yaml}"
# Disable all shims first (in case we started from an example file with shims enabled)
# Then we'll enable only the ones specified in HELM_SHIMS
for shim_key in $(yq '.shims | keys | .[]' "${values_yaml}" 2>/dev/null); do
yq -i ".shims.${shim_key}.enabled = false" "${values_yaml}"
done
# Use new structured format
if [[ -n "${HELM_DEBUG}" ]]; then
@@ -583,7 +586,7 @@ function helm_helper() {
# HELM_SHIMS is a space-separated list of shim names
# Enable each shim and set supported architectures
# TEE shims that need defaults unset (will be set based on env vars)
tee_shims="qemu-se qemu-se-runtime-rs qemu-cca qemu-snp qemu-snp-runtime-rs qemu-tdx qemu-tdx-runtime-rs qemu-coco-dev qemu-coco-dev-runtime-rs qemu-nvidia-gpu-snp qemu-nvidia-gpu-tdx"
tee_shims="qemu-se qemu-se-runtime-rs qemu-cca qemu-snp qemu-tdx qemu-coco-dev qemu-coco-dev-runtime-rs qemu-nvidia-gpu-snp qemu-nvidia-gpu-tdx"
for shim in ${HELM_SHIMS}; do
# Determine supported architectures based on shim name
@@ -601,11 +604,7 @@ function helm_helper() {
yq -i ".shims.${shim}.enabled = true" "${values_yaml}"
yq -i ".shims.${shim}.supportedArches = [\"amd64\"]" "${values_yaml}"
;;
qemu-runtime-rs)
yq -i ".shims.${shim}.enabled = true" "${values_yaml}"
yq -i ".shims.${shim}.supportedArches = [\"amd64\", \"arm64\", \"s390x\"]" "${values_yaml}"
;;
qemu-coco-dev|qemu-coco-dev-runtime-rs)
qemu-runtime-rs|qemu-coco-dev|qemu-coco-dev-runtime-rs)
yq -i ".shims.${shim}.enabled = true" "${values_yaml}"
yq -i ".shims.${shim}.supportedArches = [\"amd64\", \"s390x\"]" "${values_yaml}"
;;
@@ -679,7 +678,7 @@ function helm_helper() {
# HELM_ALLOWED_HYPERVISOR_ANNOTATIONS: if not in per-shim format (no colon), convert to per-shim format
# Output format: "qemu:foo,bar clh:foo" (space-separated entries, each with shim:annotations where annotations are comma-separated)
# Example: "foo bar" with shim "qemu-tdx" -> "qemu-tdx:foo,bar"
if [[ -n "${HELM_ALLOWED_HYPERVISOR_ANNOTATIONS}" && "${HELM_ALLOWED_HYPERVISOR_ANNOTATIONS}" != *:* ]]; then
if [[ "${HELM_ALLOWED_HYPERVISOR_ANNOTATIONS}" != *:* ]]; then
# Simple format: convert to per-shim format for all enabled shims
# "default_vcpus" -> "qemu-tdx:default_vcpus" (single shim)
# "image kernel default_vcpus" -> "qemu-tdx:image,kernel,default_vcpus" (single shim)
@@ -697,7 +696,7 @@ function helm_helper() {
fi
# HELM_AGENT_HTTPS_PROXY: if not in per-shim format (no equals), convert to per-shim format
if [[ -n "${HELM_AGENT_HTTPS_PROXY}" && "${HELM_AGENT_HTTPS_PROXY}" != *=* ]]; then
if [[ "${HELM_AGENT_HTTPS_PROXY}" != *=* ]]; then
# Simple format: convert to per-shim format for all enabled shims
# "http://proxy:8080" -> "qemu-tdx=http://proxy:8080;qemu-snp=http://proxy:8080"
local converted_proxy=""
@@ -711,7 +710,7 @@ function helm_helper() {
fi
# HELM_AGENT_NO_PROXY: if not in per-shim format (no equals), convert to per-shim format
if [[ -n "${HELM_AGENT_NO_PROXY}" && "${HELM_AGENT_NO_PROXY}" != *=* ]]; then
if [[ "${HELM_AGENT_NO_PROXY}" != *=* ]]; then
# Simple format: convert to per-shim format for all enabled shims
# "localhost,127.0.0.1" -> "qemu-tdx=localhost,127.0.0.1;qemu-snp=localhost,127.0.0.1"
local converted_noproxy=""


@@ -1,7 +1,7 @@
module github.com/kata-containers/tests
// Keep in sync with version in versions.yaml
go 1.24.12
go 1.24.11
// WARNING: Do NOT use `replace` directives as those break dependabot:
// https://github.com/kata-containers/kata-containers/issues/11020


@@ -218,6 +218,15 @@ kbs_set_resource_from_file() {
kbs-client --url "$(kbs_k8s_svc_http_addr)" config \
--auth-private-key "${KBS_PRIVATE_KEY}" set-resource \
--path "${path}" --resource-file "${file}"
kbs_pod=$(kubectl -n "${KBS_NS}" get pods -o NAME)
kbs_repo_path="/opt/confidential-containers/kbs/repository"
# Wait for the resource to be created on the KBS pod
if ! kubectl -n "${KBS_NS}" exec -it "${kbs_pod}" -- bash -c "for i in {1..30}; do [ -e '${kbs_repo_path}/${path}' ] && exit 0; sleep 0.5; done; exit 1"; then
echo "ERROR: resource '${path}' not created in 15s"
kubectl -n "${KBS_NS}" exec -it "${kbs_pod}" -- bash -c "find ${kbs_repo_path}"
return 1
fi
}
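The bounded-retry one-liner above (30 iterations at 0.5s, giving the 15s budget) can be exercised locally; a sketch where a temp file stands in for `${kbs_repo_path}/${path}`:

```shell
#!/usr/bin/env bash
# Local sketch of the bounded-retry one-liner that runs inside the KBS pod above.
target=$(mktemp -u)
( sleep 0.5; touch "${target}" ) &
# 30 x 0.5s matches the 15s budget quoted in the error message
if bash -c "for i in {1..30}; do [ -e '${target}' ] && exit 0; sleep 0.5; done; exit 1"; then
    echo "resource present"
else
    echo "resource not created in 15s"
fi
rm -f "${target}"
```

Note the inner script ends with `exit 1`: a plain `exit -1` is non-standard and wraps to 255, though any non-zero status would trip the outer `if !`.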
# Build and install the kbs-client binary, unless it is already present.


@@ -1,59 +0,0 @@
#!/usr/bin/env bats
#
# Copyright (c) 2025 NVIDIA Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
load "${BATS_TEST_DIRNAME}/../../common.bash"
load "${BATS_TEST_DIRNAME}/lib.sh"
load "${BATS_TEST_DIRNAME}/tests_common.sh"
setup() {
setup_common || die "setup_common failed"
pod_name="no-layer-image"
get_pod_config_dir
yaml_file="${pod_config_dir}/${pod_name}.yaml"
# genpolicy fails for this unusual container image, so use the allow_all policy.
add_allow_all_policy_to_yaml "${yaml_file}"
}
@test "Test image with no layers cannot run" {
# Error from run-k8s-tests (ubuntu, qemu, small):
#
# failed to create containerd task: failed to create shim task: the file sleep was not found
#
# Error from run-k8s-tests-on-tee (sev-snp, qemu-snp):
#
# failed to create containerd task: failed to create shim task: rpc status:
# Status { code: INTERNAL, message: "[CDH] [ERROR]: Image Pull error: Failed to pull image
# ghcr.io/kata-containers/no-layer-image:latest from all mirror/mapping locations or original location: image:
# ghcr.io/kata-containers/no-layer-image:latest, error: Internal error", details: [], special_fields:
# SpecialFields { unknown_fields: UnknownFields { fields: None }, cached_size: CachedSize { size: 0 } } }
#
# Error from run-k8s-tests-coco-nontee-with-erofs-snapshotter (qemu-coco-dev, erofs, default):
#
# failed to create containerd task: failed to create shim task: failed to mount
# /run/kata-containers/shared/containers/fadd1af7ea2a7bfc6caf26471f70e9a913a2989fd4a1be9d001b59e48c0781aa/rootfs
# to /run/kata-containers/fadd1af7ea2a7bfc6caf26471f70e9a913a2989fd4a1be9d001b59e48c0781aa/rootfs, with error:
# ENOENT: No such file or directory
kubectl create -f "${yaml_file}"
local -r command="kubectl describe "pod/${pod_name}" | grep -E \
'the file sleep was not found|\[CDH\] \[ERROR\]: Image Pull error|ENOENT: No such file or directory'"
info "Waiting ${wait_time} seconds for: ${command}"
waitForProcess "${wait_time}" "${sleep_time}" "${command}" >/dev/null 2>/dev/null
}
teardown() {
# Debugging information
kubectl describe "pod/${pod_name}"
kubectl get "pod/${pod_name}" -o yaml
kubectl delete pod "${pod_name}"
teardown_common "${node}" "${node_start_time:-}"
}


@@ -42,7 +42,6 @@ else
)
K8S_TEST_SMALL_HOST_UNION=( \
"k8s-empty-image.bats" \
"k8s-guest-pull-image.bats" \
"k8s-confidential.bats" \
"k8s-sealed-secret.bats" \


@@ -1,13 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: no-layer-image
spec:
runtimeClassName: kata
containers:
- name: no-layer-image
image: ghcr.io/kata-containers/no-layer-image:latest
resources: {}
command:
- sleep
- infinity


@@ -1,7 +1,7 @@
module example.com/m
// Keep in sync with version in versions.yaml
go 1.24.12
go 1.24.11
require (
github.com/BurntSushi/toml v1.3.2


@@ -17,13 +17,11 @@ die() {
echo "chroot: ${msg}" >&2
exit 1
}
arch_target="${1:?arch_target not specified}"
nvidia_gpu_stack="${2:?nvidia_gpu_stack not specified}"
cuda_repo_osv="${3:?cuda_repo_osv not specified}"
cuda_repo_url="${4:?cuda_repo_url not specified}"
cuda_repo_pkg="${5:?cuda_repo_pkg not specified}"
tools_repo_url="${6:?tools_repo_url not specified}"
tools_repo_pkg="${7:?tools_repo_pkg not specified}"
arch_target=$1
nvidia_gpu_stack="$2"
base_os="$3"
APT_INSTALL="apt -o Dpkg::Options::='--force-confdef' -o Dpkg::Options::='--force-confold' -yqq --no-install-recommends install"
export DEBIAN_FRONTEND=noninteractive
@@ -93,43 +91,36 @@ setup_apt_repositories() {
key="/usr/share/keyrings/ubuntu-archive-keyring.gpg"
comp="main restricted universe multiverse"
cat <<-CHROOT_EOF > /etc/apt/sources.list.d/"${cuda_repo_osv}".list
deb [arch=${deb_arch} signed-by=${key}] http://${mirror} ${cuda_repo_osv} ${comp}
deb [arch=${deb_arch} signed-by=${key}] http://${mirror} ${cuda_repo_osv}-updates ${comp}
deb [arch=${deb_arch} signed-by=${key}] http://${mirror} ${cuda_repo_osv}-security ${comp}
deb [arch=${deb_arch} signed-by=${key}] http://${mirror} ${cuda_repo_osv}-backports ${comp}
cat <<-CHROOT_EOF > /etc/apt/sources.list.d/"${base_os}".list
deb [arch=${deb_arch} signed-by=${key}] http://${mirror} ${base_os} ${comp}
deb [arch=${deb_arch} signed-by=${key}] http://${mirror} ${base_os}-updates ${comp}
deb [arch=${deb_arch} signed-by=${key}] http://${mirror} ${base_os}-security ${comp}
deb [arch=${deb_arch} signed-by=${key}] http://${mirror} ${base_os}-backports ${comp}
CHROOT_EOF
# Tools repository is always needed for toolkit, DCGM and other helpers
curl -fsSL -O "${tools_repo_url}/${tools_repo_pkg}"
dpkg -i "${tools_repo_pkg}" && rm -f "${tools_repo_pkg}"
local arch="${arch_target}"
[[ ${arch_target} == "aarch64" ]] && arch="sbsa"
# shellcheck disable=SC2015
[[ ${base_os} == "noble" ]] && osver="ubuntu2404" || die "Unknown base_os ${base_os} used"
# Remote or local CUDA repository
curl -fsSL -O "${cuda_repo_url}/${cuda_repo_pkg}"
dpkg -i "${cuda_repo_pkg}" && rm -f "${cuda_repo_pkg}"
# Copy keyring if local repo was installed
keyring="/var/cuda-repo-*-local/cuda-*-keyring.gpg"
# shellcheck disable=SC2128 # Intentional: expect exactly one match
[[ -e "${keyring}" ]] && cp "${keyring}" /usr/share/keyrings/
keyring="cuda-keyring_1.1-1_all.deb"
# Use consistent curl flags: -fsSL for download, -O for output
curl -fsSL -O "https://developer.download.nvidia.com/compute/cuda/repos/${osver}/${arch}/${keyring}"
dpkg -i "${keyring}" && rm -f "${keyring}"
# Set priorities: CUDA repos highest, Ubuntu non-driver next, Ubuntu blocked for driver packages
cat <<-CHROOT_EOF > /etc/apt/preferences.d/nvidia-priority
Package: *
Pin: origin $(dirname "${mirror}")
Pin: $(dirname "${mirror}")
Pin-Priority: 400
Package: nvidia-* libnvidia-*
Pin: origin $(dirname "${mirror}")
Pin: $(dirname "${mirror}")
Pin-Priority: -1
Package: *
Pin: origin developer.download.nvidia.com
Pin-Priority: 800
Package: *
Pin: origin ""
Pin-Priority: 900
CHROOT_EOF
apt update
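For reference, the pin values chosen above map onto the priority bands defined in apt_preferences(5); this summary of the manpage semantics is an aide for reading the fragment, not project documentation:

```
# apt_preferences(5) priority bands:
#   P >= 1000        install even if it constitutes a downgrade
#   990 <= P < 1000  install even if the version is not from the target release
#   500 <= P < 990   install unless a target-release version or a newer
#                    installed version is available
#   100 <= P < 500   install unless a version from another distribution or a
#                    newer installed version is available
#   0 <  P < 100     install only if no version of the package is installed
#   P < 0            never install the version
#
# As configured above: the local repo (origin "", 900) beats
# developer.download.nvidia.com (800), which beats the Ubuntu mirror (400),
# while nvidia-*/libnvidia-* from the Ubuntu mirror (-1) are blocked entirely.
```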


@@ -15,6 +15,7 @@ die() {
exit 1
}
readonly BUILD_DIR="/kata-containers/tools/packaging/kata-deploy/local-build/build/"
# catch errors and then assign
script_dir="$(dirname "$(readlink -f "$0")")"
@@ -97,16 +98,7 @@ setup_nvidia_gpu_rootfs_stage_one() {
mount --make-rslave ./dev
mount -t proc /proc ./proc
local cuda_repo_url cuda_repo_pkg gpu_base_os_version
cuda_repo_url=$(get_package_version_from_kata_yaml "externals.nvidia.cuda.repo.${machine_arch}.url")
cuda_repo_pkg=$(get_package_version_from_kata_yaml "externals.nvidia.cuda.repo.${machine_arch}.pkg")
gpu_base_os_version=$(get_package_version_from_kata_yaml "assets.image.architecture.x86_64.nvidia-gpu.version")
tools_repo_url=$(get_package_version_from_kata_yaml "externals.nvidia.tools.repo.${machine_arch}.url")
tools_repo_pkg=$(get_package_version_from_kata_yaml "externals.nvidia.tools.repo.${machine_arch}.pkg")
chroot . /bin/bash -c "/nvidia_chroot.sh ${machine_arch} ${NVIDIA_GPU_STACK} \
${gpu_base_os_version} ${cuda_repo_url} ${cuda_repo_pkg} ${tools_repo_url} ${tools_repo_pkg}"
chroot . /bin/bash -c "/nvidia_chroot.sh ${machine_arch} ${NVIDIA_GPU_STACK} noble"
umount -R ./dev
umount ./proc
@@ -279,7 +271,7 @@ compress_rootfs() {
find . -type f -executable | while IFS= read -r file; do
# Skip files with setuid/setgid bits (UPX refuses to pack them)
if [[ -u "${file}" ]] || [[ -g "${file}" ]]; then
if [ -u "${file}" ] || [ -g "${file}" ]; then
echo "nvidia: skip compressing executable (special permissions): ${file} ($(file -b "${file}"))"
continue
fi


@@ -122,7 +122,7 @@ RUN \
#### Prepare runtime dependencies (nsenter and required libraries)
# This stage assembles all runtime dependencies based on architecture
# using ldd to find exact library dependencies
FROM debian:trixie-slim AS runtime-assembler
FROM debian:bookworm-slim AS runtime-assembler
ARG DESTINATION=/opt/kata-artifacts
@@ -206,7 +206,7 @@ RUN \
esac
#### kata-deploy main image
FROM gcr.io/distroless/static-debian13@sha256:972618ca78034aaddc55864342014a96b85108c607372f7cbd0dbd1361f1d841
FROM gcr.io/distroless/static-debian12@sha256:87bce11be0af225e4ca761c40babb06d6d559f5767fbf7dc3c47f0f1a466b92c
ARG DESTINATION=/opt/kata-artifacts


@@ -102,11 +102,6 @@ pub async fn install_artifacts(config: &Config) -> Result<()> {
configure_shim_config(config, shim).await?;
}
// Install custom runtime configuration files if enabled
if config.custom_runtimes_enabled && !config.custom_runtimes.is_empty() {
install_custom_runtime_configs(config)?;
}
if std::env::var("HOST_OS").unwrap_or_default() == "cbl-mariner" {
configure_mariner(config).await?;
}
@@ -123,11 +118,6 @@ pub async fn install_artifacts(config: &Config) -> Result<()> {
pub async fn remove_artifacts(config: &Config) -> Result<()> {
info!("deleting kata artifacts");
// Remove custom runtime configs first (before removing main install dir)
if config.custom_runtimes_enabled && !config.custom_runtimes.is_empty() {
remove_custom_runtime_configs(config)?;
}
if Path::new(&config.host_install_dir).exists() {
fs::remove_dir_all(&config.host_install_dir)?;
}
@@ -137,102 +127,6 @@ pub async fn remove_artifacts(config: &Config) -> Result<()> {
Ok(())
}
/// Each custom runtime gets an isolated directory under custom-runtimes/{handler}/
fn install_custom_runtime_configs(config: &Config) -> Result<()> {
info!("Installing custom runtime configuration files");
for runtime in &config.custom_runtimes {
// Create isolated directory for this handler
let handler_dir = format!(
"/host/{}/share/defaults/kata-containers/custom-runtimes/{}",
config.dest_dir, runtime.handler
);
let config_d_dir = format!("{}/config.d", handler_dir);
fs::create_dir_all(&config_d_dir)
.with_context(|| format!("Failed to create config.d directory: {}", config_d_dir))?;
// Copy base config (already modified by kata-deploy with debug, proxy, annotations)
let base_src = format!(
"/host/{}/share/defaults/kata-containers/configuration-{}.toml",
config.dest_dir, runtime.base_config
);
let base_dest = format!("{}/configuration-{}.toml", handler_dir, runtime.base_config);
info!(
"Copying base config for {}: {} -> {}",
runtime.handler, base_src, base_dest
);
fs::copy(&base_src, &base_dest).with_context(|| {
format!(
"Failed to copy base config from {} to {}",
base_src, base_dest
)
})?;
// Copy drop-in file if provided
if let Some(ref drop_in_src) = runtime.drop_in_file {
let drop_in_dest = format!("{}/50-overrides.toml", config_d_dir);
info!(
"Copying drop-in for {}: {} -> {}",
runtime.handler, drop_in_src, drop_in_dest
);
fs::copy(drop_in_src, &drop_in_dest).with_context(|| {
format!(
"Failed to copy drop-in from {} to {}",
drop_in_src, drop_in_dest
)
})?;
}
}
info!(
"Successfully installed {} custom runtime config(s)",
config.custom_runtimes.len()
);
Ok(())
}
fn remove_custom_runtime_configs(config: &Config) -> Result<()> {
info!("Removing custom runtime configuration files");
let custom_runtimes_dir = format!(
"/host/{}/share/defaults/kata-containers/custom-runtimes",
config.dest_dir
);
for runtime in &config.custom_runtimes {
// Remove the entire handler directory (includes config.d/)
let handler_dir = format!("{}/{}", custom_runtimes_dir, runtime.handler);
if Path::new(&handler_dir).exists() {
info!("Removing custom runtime directory: {}", handler_dir);
if let Err(e) = fs::remove_dir_all(&handler_dir) {
log::warn!(
"Failed to remove custom runtime directory {}: {}",
handler_dir,
e
);
}
}
}
// Remove the custom-runtimes directory if empty
if Path::new(&custom_runtimes_dir).exists() {
if let Ok(entries) = fs::read_dir(&custom_runtimes_dir) {
if entries.count() == 0 {
let _ = fs::remove_dir(&custom_runtimes_dir);
}
}
}
info!("Successfully removed custom runtime config files");
Ok(())
}
/// Note: The src parameter is kept to allow for unit testing with temporary directories,
/// even though in production it always uses /opt/kata-artifacts/opt/kata
fn copy_artifacts(src: &str, dst: &str) -> Result<()> {
@@ -890,8 +784,6 @@ mod tests {
containerd_conf_file_backup: "/etc/containerd/config.toml.bak".to_string(),
containerd_drop_in_conf_file: "/opt/kata/containerd/config.d/kata-deploy.toml"
.to_string(),
custom_runtimes_enabled: false,
custom_runtimes: vec![],
}
}


@@ -23,21 +23,6 @@ pub struct ContainerdPaths {
pub use_drop_in: bool,
}
/// Custom runtime configuration parsed from ConfigMap
#[derive(Debug, Clone)]
pub struct CustomRuntime {
/// Handler name (e.g., "kata-my-custom-runtime")
pub handler: String,
/// Base configuration to copy (e.g., "qemu", "qemu-nvidia-gpu")
pub base_config: String,
/// Path to the drop-in file (if provided)
pub drop_in_file: Option<String>,
/// Containerd snapshotter to use (e.g., "nydus", "erofs")
pub containerd_snapshotter: Option<String>,
/// CRI-O pull type (e.g., "guest-pull")
pub crio_pull_type: Option<String>,
}
#[derive(Debug, Clone)]
pub struct Config {
pub node_name: String,
@@ -62,8 +47,6 @@ pub struct Config {
pub containerd_conf_file: String,
pub containerd_conf_file_backup: String,
pub containerd_drop_in_conf_file: String,
pub custom_runtimes_enabled: bool,
pub custom_runtimes: Vec<CustomRuntime>,
}
impl Config {
@@ -169,15 +152,6 @@ impl Config {
.map(|s| s.trim().to_string())
.collect();
// Parse custom runtimes from ConfigMap
let custom_runtimes_enabled =
env::var("CUSTOM_RUNTIMES_ENABLED").unwrap_or_else(|_| "false".to_string()) == "true";
let custom_runtimes = if custom_runtimes_enabled {
parse_custom_runtimes()?
} else {
Vec::new()
};
let config = Config {
node_name,
debug,
@@ -201,8 +175,6 @@ impl Config {
containerd_conf_file,
containerd_conf_file_backup,
containerd_drop_in_conf_file,
custom_runtimes_enabled,
custom_runtimes,
};
// Validate the configuration
@@ -216,18 +188,15 @@ impl Config {
/// All validations are performed on the `_for_arch` values, which are the final
/// values after architecture-specific processing.
fn validate(&self) -> Result<()> {
// Must have either standard shims OR custom runtimes enabled
let has_standard_shims = !self.shims_for_arch.is_empty();
let has_custom_runtimes = self.custom_runtimes_enabled && !self.custom_runtimes.is_empty();
if !has_standard_shims && !has_custom_runtimes {
// Validate SHIMS_FOR_ARCH is not empty and not just whitespace
if self.shims_for_arch.is_empty() {
return Err(anyhow::anyhow!(
"No runtimes configured. Please provide at least one shim via SHIMS \
or enable custom runtimes with CUSTOM_RUNTIMES_ENABLED=true"
"SHIMS for the current architecture must not be empty. \
Please provide at least one shim via SHIMS or SHIMS_<ARCH>"
));
}
// Check for empty shim names (only if we have standard shims)
// Check for empty shim names
for shim in &self.shims_for_arch {
if shim.trim().is_empty() {
return Err(anyhow::anyhow!(
@@ -236,21 +205,19 @@ impl Config {
}
}
// Validate DEFAULT_SHIM only if we have standard shims
if has_standard_shims {
if self.default_shim_for_arch.trim().is_empty() {
return Err(anyhow::anyhow!(
"DEFAULT_SHIM for the current architecture must not be empty"
));
}
// Validate DEFAULT_SHIM_FOR_ARCH exists in SHIMS_FOR_ARCH
if self.default_shim_for_arch.trim().is_empty() {
return Err(anyhow::anyhow!(
"DEFAULT_SHIM for the current architecture must not be empty"
));
}
if !self.shims_for_arch.contains(&self.default_shim_for_arch) {
return Err(anyhow::anyhow!(
"DEFAULT_SHIM '{}' must be one of the configured SHIMS for this architecture: [{}]",
self.default_shim_for_arch,
self.shims_for_arch.join(", ")
));
}
if !self.shims_for_arch.contains(&self.default_shim_for_arch) {
return Err(anyhow::anyhow!(
"DEFAULT_SHIM '{}' must be one of the configured SHIMS for this architecture: [{}]",
self.default_shim_for_arch,
self.shims_for_arch.join(", ")
));
}
// Validate ALLOWED_HYPERVISOR_ANNOTATIONS_FOR_ARCH shim-specific entries
@@ -407,23 +374,6 @@ impl Config {
"* EXPERIMENTAL_FORCE_GUEST_PULL: {}",
self.experimental_force_guest_pull_for_arch.join(",")
);
info!(
"* CUSTOM_RUNTIMES_ENABLED: {}",
self.custom_runtimes_enabled
);
if !self.custom_runtimes.is_empty() {
info!("* CUSTOM_RUNTIMES:");
for runtime in &self.custom_runtimes {
info!(
" - {}: base_config={}, drop_in={}, containerd_snapshotter={:?}, crio_pull_type={:?}",
runtime.handler,
runtime.base_config,
runtime.drop_in_file.is_some(),
runtime.containerd_snapshotter,
runtime.crio_pull_type
);
}
}
}
/// Get containerd configuration file paths based on runtime type and containerd version
@@ -485,100 +435,12 @@ fn get_arch() -> Result<String> {
.to_string())
}
/// Parse custom runtimes from the mounted ConfigMap at /custom-configs/
/// Reads the custom-runtimes.list file which contains entries in the format:
/// handler:baseConfig:containerd_snapshotter:crio_pulltype
/// Optionally reads drop-in files named dropin-{handler}.toml
fn parse_custom_runtimes() -> Result<Vec<CustomRuntime>> {
let custom_configs_dir = "/custom-configs";
let list_file = format!("{}/custom-runtimes.list", custom_configs_dir);
let list_content = match std::fs::read_to_string(&list_file) {
Ok(content) => content,
Err(e) => {
log::warn!(
"Could not read custom runtimes list at {}: {}",
list_file,
e
);
return Ok(Vec::new());
}
};
let mut custom_runtimes = Vec::new();
for line in list_content.lines() {
let line = line.trim();
if line.is_empty() {
continue;
}
// Parse format: handler:baseConfig:containerd_snapshotter:crio_pulltype
let parts: Vec<&str> = line.split(':').collect();
let handler = parts.first().map(|s| s.trim()).unwrap_or("");
if handler.is_empty() {
continue;
}
let base_config = parts.get(1).map(|s| s.trim()).unwrap_or("");
if base_config.is_empty() {
anyhow::bail!(
"Custom runtime '{}' missing required baseConfig field",
handler
);
}
let containerd_snapshotter = parts
.get(2)
.map(|s| s.trim())
.filter(|s| !s.is_empty())
.map(|s| s.to_string());
let crio_pull_type = parts
.get(3)
.map(|s| s.trim())
.filter(|s| !s.is_empty())
.map(|s| s.to_string());
// Check for optional drop-in file
let drop_in_file_path = format!("{}/dropin-{}.toml", custom_configs_dir, handler);
let drop_in_file = if std::path::Path::new(&drop_in_file_path).exists() {
Some(drop_in_file_path)
} else {
None
};
log::info!(
"Found custom runtime: handler={}, base_config={}, drop_in={:?}, containerd_snapshotter={:?}, crio_pull_type={:?}",
handler,
base_config,
drop_in_file.is_some(),
containerd_snapshotter,
crio_pull_type
);
custom_runtimes.push(CustomRuntime {
handler: handler.to_string(),
base_config: base_config.to_string(),
drop_in_file,
containerd_snapshotter,
crio_pull_type,
});
}
log::info!(
"Parsed {} custom runtime(s) from {}",
custom_runtimes.len(),
list_file
);
Ok(custom_runtimes)
}
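For illustration, a `custom-runtimes.list` in the colon-separated format parsed above might look like the following (handler and base-config names are hypothetical; trailing fields may be left empty or omitted, in which case no snapshotter or pull type is configured):

```
# handler:baseConfig:containerd_snapshotter:crio_pulltype
kata-my-custom-runtime:qemu:nydus:guest-pull
kata-plain:qemu::
```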
/// Get default shims list for a specific architecture
/// Returns only shims that are supported for that architecture
fn get_default_shims_for_arch(arch: &str) -> &'static str {
match arch {
"x86_64" => "clh cloud-hypervisor dragonball fc qemu qemu-coco-dev qemu-coco-dev-runtime-rs qemu-runtime-rs qemu-nvidia-gpu qemu-nvidia-gpu-snp qemu-nvidia-gpu-tdx qemu-snp qemu-snp-runtime-rs qemu-tdx qemu-tdx-runtime-rs",
"aarch64" => "clh cloud-hypervisor dragonball fc qemu qemu-runtime-rs qemu-nvidia-gpu qemu-cca",
"aarch64" => "clh cloud-hypervisor dragonball fc qemu qemu-nvidia-gpu qemu-cca",
"s390x" => "qemu qemu-runtime-rs qemu-se qemu-se-runtime-rs qemu-coco-dev qemu-coco-dev-runtime-rs",
"ppc64le" => "qemu",
_ => "qemu", // Fallback to qemu for unknown architectures
@@ -812,12 +674,12 @@ mod tests {
}
#[test]
fn test_validate_empty_shims_no_custom_runtimes() {
fn test_validate_empty_shims() {
setup_minimal_env();
// Empty strings are filtered out, so we need to unset the variable
// and ensure no default is provided. Since we always have a default,
// this test verifies that if somehow we get empty shims AND no custom runtimes,
// validation fails.
// this test verifies that if somehow we get empty shims, validation fails.
// To test this, we need to set a variable that will result in empty shims after processing.
let arch = get_arch().unwrap();
let arch_suffix = match arch.as_str() {
"x86_64" => "_X86_64",
@@ -829,10 +691,8 @@ mod tests {
std::env::remove_var(format!("SHIMS{}", arch_suffix));
// Set a variable that will result in empty shims after split
std::env::set_var(format!("SHIMS{}", arch_suffix), " ");
// Ensure custom runtimes are disabled
std::env::set_var("CUSTOM_RUNTIMES_ENABLED", "false");
assert_config_error_contains("No runtimes configured");
assert_config_error_contains("SHIMS for the current architecture must not be empty");
cleanup_env_vars();
}


@@ -3,102 +3,14 @@
//
// SPDX-License-Identifier: Apache-2.0
use crate::config::{Config, ContainerdPaths, CustomRuntime};
use crate::config::Config;
use crate::k8s;
use crate::utils;
use crate::utils::toml as toml_utils;
use anyhow::{Context, Result};
use log::info;
use std::fs;
use std::path::{Path, PathBuf};
struct ContainerdRuntimeParams {
/// Runtime name (e.g., "kata-qemu")
runtime_name: String,
/// Path to the shim binary
runtime_path: String,
/// Path to the kata configuration file
config_path: String,
/// Pod annotations to allow
pod_annotations: &'static str,
/// Optional snapshotter to configure
snapshotter: Option<String>,
}
fn get_containerd_pluginid(config_file: &str) -> Result<&'static str> {
let content = fs::read_to_string(config_file)
.with_context(|| format!("Failed to read containerd config file: {}", config_file))?;
if content.contains("version = 3") {
Ok("\"io.containerd.cri.v1.runtime\"")
} else if content.contains("version = 2") {
Ok("\"io.containerd.grpc.v1.cri\"")
} else {
Ok("cri")
}
}
fn get_containerd_output_path(paths: &ContainerdPaths) -> PathBuf {
if paths.use_drop_in {
if paths.drop_in_file.starts_with("/etc/containerd/") {
Path::new(&paths.drop_in_file).to_path_buf()
} else {
let drop_in_path = paths.drop_in_file.trim_start_matches('/');
Path::new("/host").join(drop_in_path)
}
} else {
Path::new(&paths.config_file).to_path_buf()
}
}
fn write_containerd_runtime_config(
config_file: &Path,
pluginid: &str,
params: &ContainerdRuntimeParams,
) -> Result<()> {
let runtime_table = format!(
".plugins.{}.containerd.runtimes.{}",
pluginid, params.runtime_name
);
let runtime_options_table = format!("{runtime_table}.options");
let runtime_type = format!("\"io.containerd.{}.v2\"", params.runtime_name);
toml_utils::set_toml_value(
config_file,
&format!("{runtime_table}.runtime_type"),
&runtime_type,
)?;
toml_utils::set_toml_value(
config_file,
&format!("{runtime_table}.runtime_path"),
&params.runtime_path,
)?;
toml_utils::set_toml_value(
config_file,
&format!("{runtime_table}.privileged_without_host_devices"),
"true",
)?;
toml_utils::set_toml_value(
config_file,
&format!("{runtime_table}.pod_annotations"),
params.pod_annotations,
)?;
toml_utils::set_toml_value(
config_file,
&format!("{runtime_options_table}.ConfigPath"),
&params.config_path,
)?;
if let Some(ref snapshotter) = params.snapshotter {
toml_utils::set_toml_value(
config_file,
&format!("{runtime_table}.snapshotter"),
snapshotter,
)?;
}
Ok(())
}
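As a rough illustration, for a hypothetical `kata-qemu` handler against a containerd "version = 2" config, the keys set by `write_containerd_runtime_config` above correspond to a drop-in like the following (the binary and config paths are assumptions based on a `/opt/kata` install prefix, not values taken from this diff):

```toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-qemu]
  runtime_type = "io.containerd.kata-qemu.v2"
  runtime_path = "/opt/kata/bin/containerd-shim-kata-v2"
  privileged_without_host_devices = true
  pod_annotations = ["io.katacontainers.*"]

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-qemu.options]
  ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration-qemu.toml"
```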
use std::path::Path;
pub async fn configure_containerd_runtime(
config: &Config,
@@ -106,7 +18,6 @@ pub async fn configure_containerd_runtime(
shim: &str,
) -> Result<()> {
log::info!("configure_containerd_runtime: Starting for shim={}", shim);
let adjusted_shim = match config.multi_install_suffix.as_ref() {
Some(suffix) if !suffix.is_empty() => format!("{shim}-{suffix}"),
_ => shim.to_string(),
@@ -114,118 +25,134 @@ pub async fn configure_containerd_runtime(
let runtime_name = format!("kata-{adjusted_shim}");
let configuration = format!("configuration-{shim}");
log::info!("configure_containerd_runtime: Getting containerd paths");
let paths = config.get_containerd_paths(runtime).await?;
let configuration_file = get_containerd_output_path(&paths);
let pluginid = get_containerd_pluginid(&paths.config_file)?;
log::info!("configure_containerd_runtime: use_drop_in={}", paths.use_drop_in);
log::info!(
"configure_containerd_runtime: Writing to {:?}, pluginid={}",
configuration_file,
pluginid
);
let configuration_file: std::path::PathBuf = if paths.use_drop_in {
// Only add /host prefix if path is not in /etc/containerd (which is mounted from host)
let base_path = if paths.drop_in_file.starts_with("/etc/containerd/") {
Path::new(&paths.drop_in_file).to_path_buf()
} else {
// Need to add /host prefix for paths outside /etc/containerd
let drop_in_path = paths.drop_in_file.trim_start_matches('/');
Path::new("/host").join(drop_in_path)
};
let pod_annotations = "[\"io.katacontainers.*\"]";
// Determine snapshotter if configured
let snapshotter = config
.snapshotter_handler_mapping_for_arch
.as_ref()
.and_then(|mapping| {
mapping.split(',').find_map(|m| {
let parts: Vec<&str> = m.split(':').collect();
if parts.len() == 2 && parts[0] == shim {
let value = parts[1];
let snapshotter_value = if value == "nydus" {
match config.multi_install_suffix.as_ref() {
Some(suffix) if !suffix.is_empty() => format!("\"{value}-{suffix}\""),
_ => format!("\"{value}\""),
}
} else {
format!("\"{value}\"")
};
Some(snapshotter_value)
} else {
None
}
})
});
let params = ContainerdRuntimeParams {
runtime_name,
runtime_path: format!(
"\"{}\"",
utils::get_kata_containers_runtime_path(shim, &config.dest_dir)
),
config_path: format!(
"\"{}/{}.toml\"",
utils::get_kata_containers_config_path(shim, &config.dest_dir),
configuration
),
pod_annotations,
snapshotter,
log::debug!("Using drop-in config file: {:?}", base_path);
base_path
} else {
log::debug!("Using main config file: {}", paths.config_file);
Path::new(&paths.config_file).to_path_buf()
};
write_containerd_runtime_config(&configuration_file, pluginid, &params)?;
// Use config_file to read containerd version from
let containerd_root_conf_file = &paths.config_file;
let pluginid = if fs::read_to_string(containerd_root_conf_file)
.unwrap_or_default()
.contains("version = 3")
{
"\"io.containerd.cri.v1.runtime\""
} else if fs::read_to_string(containerd_root_conf_file)
.unwrap_or_default()
.contains("version = 2")
{
"\"io.containerd.grpc.v1.cri\""
} else {
"cri"
};
let runtime_table = format!(".plugins.{pluginid}.containerd.runtimes.{runtime_name}");
let runtime_options_table = format!("{runtime_table}.options");
let runtime_type = format!("\"io.containerd.{runtime_name}.v2\"");
let runtime_config_path = format!(
"\"{}/{}.toml\"",
utils::get_kata_containers_config_path(shim, &config.dest_dir),
configuration
);
let runtime_path = format!(
"\"{}\"",
utils::get_kata_containers_runtime_path(shim, &config.dest_dir)
);
log::info!(
"configure_containerd_runtime: Writing to config file: {:?}",
configuration_file
);
log::info!("configure_containerd_runtime: Setting runtime_type");
toml_utils::set_toml_value(
&configuration_file,
&format!("{runtime_table}.runtime_type"),
&runtime_type,
)?;
toml_utils::set_toml_value(
&configuration_file,
&format!("{runtime_table}.runtime_path"),
&runtime_path,
)?;
toml_utils::set_toml_value(
&configuration_file,
&format!("{runtime_table}.privileged_without_host_devices"),
"true",
)?;
let pod_annotations = if shim.contains("nvidia-gpu-") {
"[\"io.katacontainers.*\",\"cdi.k8s.io/*\"]"
} else {
"[\"io.katacontainers.*\"]"
};
toml_utils::set_toml_value(
&configuration_file,
&format!("{runtime_table}.pod_annotations"),
pod_annotations,
)?;
toml_utils::set_toml_value(
&configuration_file,
&format!("{runtime_options_table}.ConfigPath"),
&runtime_config_path,
)?;
if config.debug {
toml_utils::set_toml_value(&configuration_file, ".debug.level", "\"debug\"")?;
}
Ok(())
}
match config.snapshotter_handler_mapping_for_arch.as_ref() {
Some(mapping) => {
let snapshotters: Vec<&str> = mapping.split(',').collect();
for m in snapshotters {
// Format is already validated in snapshotter_handler_mapping_validation_check
// and should be validated in Helm templates
let parts: Vec<&str> = m.split(':').collect();
let key = parts[0];
let value = parts[1];
/// Custom runtimes use an isolated config directory under custom-runtimes/{handler}/
pub async fn configure_custom_containerd_runtime(
config: &Config,
runtime: &str,
custom_runtime: &CustomRuntime,
) -> Result<()> {
log::info!(
"configure_custom_containerd_runtime: Starting for handler={}",
custom_runtime.handler
);
if key != shim {
continue;
}
let paths = config.get_containerd_paths(runtime).await?;
let configuration_file = get_containerd_output_path(&paths);
let pluginid = get_containerd_pluginid(&paths.config_file)?;
let snapshotter_value = if value == "nydus" {
match config.multi_install_suffix.as_ref() {
Some(suffix) if !suffix.is_empty() => format!("\"{value}-{suffix}\""),
_ => format!("\"{value}\""),
}
} else {
format!("\"{value}\"")
};
log::info!(
"configure_custom_containerd_runtime: Writing to {:?}, pluginid={}",
configuration_file,
pluginid
);
let pod_annotations = "[\"io.katacontainers.*\"]";
// Determine snapshotter if specified
let snapshotter = custom_runtime.containerd_snapshotter.as_ref().map(|s| {
if s == "nydus" {
match config.multi_install_suffix.as_ref() {
Some(suffix) if !suffix.is_empty() => format!("\"{s}-{suffix}\""),
_ => format!("\"{s}\""),
toml_utils::set_toml_value(
&configuration_file,
&format!("{runtime_table}.snapshotter"),
&snapshotter_value,
)?;
break;
}
} else {
format!("\"{s}\"")
}
});
_ => {}
}
let params = ContainerdRuntimeParams {
runtime_name: custom_runtime.handler.clone(),
runtime_path: format!(
"\"{}\"",
utils::get_kata_containers_runtime_path(&custom_runtime.base_config, &config.dest_dir)
),
config_path: format!(
"\"{}/share/defaults/kata-containers/custom-runtimes/{}/configuration-{}.toml\"",
config.dest_dir,
custom_runtime.handler,
custom_runtime.base_config
),
pod_annotations,
snapshotter,
};
write_containerd_runtime_config(&configuration_file, pluginid, &params)
Ok(())
}
pub async fn configure_containerd(config: &Config, runtime: &str) -> Result<()> {
@@ -292,30 +219,6 @@ pub async fn configure_containerd(config: &Config, runtime: &str) -> Result<()>
log::info!("Successfully configured runtime for shim: {}", shim);
}
if config.custom_runtimes_enabled {
if config.custom_runtimes.is_empty() {
anyhow::bail!(
"Custom runtimes enabled but no custom runtimes found in configuration. \
Check that custom-runtimes.list exists and is readable."
);
}
log::info!(
"Configuring {} custom runtime(s)",
config.custom_runtimes.len()
);
for custom_runtime in &config.custom_runtimes {
log::info!(
"Configuring custom runtime: {}",
custom_runtime.handler
);
configure_custom_containerd_runtime(config, runtime, custom_runtime).await?;
log::info!(
"Successfully configured custom runtime: {}",
custom_runtime.handler
);
}
}
log::info!("Successfully configured all containerd runtimes");
Ok(())
}


@@ -3,7 +3,7 @@
//
// SPDX-License-Identifier: Apache-2.0
use crate::config::{Config, CustomRuntime};
use crate::config::Config;
use crate::utils;
use anyhow::Result;
use log::info;
@@ -11,19 +11,24 @@ use std::fs;
use std::io::Write;
use std::path::Path;
struct CrioRuntimeParams<'a> {
/// Runtime name (e.g., "kata-qemu")
runtime_name: &'a str,
/// Path to the shim binary
runtime_path: String,
/// Path to the kata configuration file
config_path: String,
/// Whether to enable guest-pull (runtime_pull_image = true)
guest_pull: bool,
}
pub async fn configure_crio_runtime(config: &Config, shim: &str) -> Result<()> {
let adjusted_shim = match config.multi_install_suffix.as_ref() {
Some(suffix) if !suffix.is_empty() => format!("{shim}-{suffix}"),
_ => shim.to_string(),
};
let runtime = format!("kata-{adjusted_shim}");
let configuration = format!("configuration-{shim}");
fn write_crio_runtime_config(file: &mut fs::File, params: &CrioRuntimeParams) -> Result<()> {
let kata_conf = format!("crio.runtime.runtimes.{}", params.runtime_name);
let config_path = utils::get_kata_containers_config_path(shim, &config.dest_dir);
let kata_path = utils::get_kata_containers_runtime_path(shim, &config.dest_dir);
let kata_conf = format!("crio.runtime.runtimes.{runtime}");
let kata_config_path = format!("{config_path}/{configuration}.toml");
let conf_file = Path::new(&config.crio_drop_in_conf_file);
let mut file = fs::OpenOptions::new()
.create(true)
.append(true)
.open(conf_file)?;
writeln!(file)?;
writeln!(file, "[{kata_conf}]")?;
@@ -34,92 +39,42 @@ fn write_crio_runtime_config(file: &mut fs::File, params: &CrioRuntimeParams) ->
runtime_root = "/run/vc"
runtime_config_path = "{}"
privileged_without_host_devices = true"#,
params.runtime_path, params.config_path
kata_path,
kata_config_path
)?;
if params.guest_pull {
writeln!(file, r#" runtime_pull_image = true"#)?;
match config.pull_type_mapping_for_arch.as_ref() {
Some(mapping) => {
let pull_types: Vec<&str> = mapping.split(',').collect();
for m in pull_types {
let parts: Vec<&str> = m.split(':').collect();
if parts.len() != 2 {
continue;
}
let key = parts[0];
let value = parts[1];
if key != shim || value == "default" {
continue;
}
match value {
"guest-pull" => writeln!(file, r#" runtime_pull_image = true"#)?,
_ => {
return Err(anyhow::anyhow!(
"Unsupported pull type '{value}' for {shim}"
))
}
}
break;
}
}
_ => {}
}
Ok(())
}
pub async fn configure_crio_runtime(config: &Config, shim: &str) -> Result<()> {
let adjusted_shim = match config.multi_install_suffix.as_ref() {
Some(suffix) if !suffix.is_empty() => format!("{shim}-{suffix}"),
_ => shim.to_string(),
};
let runtime_name = format!("kata-{adjusted_shim}");
let configuration = format!("configuration-{shim}");
// Determine if guest-pull is configured for this shim
let guest_pull = config
.pull_type_mapping_for_arch
.as_ref()
.map(|mapping| {
mapping.split(',').any(|m| {
let parts: Vec<&str> = m.split(':').collect();
parts.len() == 2 && parts[0] == shim && parts[1] == "guest-pull"
})
})
.unwrap_or(false);
let params = CrioRuntimeParams {
runtime_name: &runtime_name,
runtime_path: utils::get_kata_containers_runtime_path(shim, &config.dest_dir),
config_path: format!(
"{}/{}.toml",
utils::get_kata_containers_config_path(shim, &config.dest_dir),
configuration
),
guest_pull,
};
let conf_file = Path::new(&config.crio_drop_in_conf_file);
let mut file = fs::OpenOptions::new()
.create(true)
.append(true)
.open(conf_file)?;
write_crio_runtime_config(&mut file, &params)
}
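For illustration, the fields written above would yield a CRI-O drop-in entry roughly like this for a hypothetical `kata-qemu` handler with guest-pull enabled (paths are assumptions based on a `/opt/kata` install prefix):

```toml
[crio.runtime.runtimes.kata-qemu]
  runtime_path = "/opt/kata/bin/containerd-shim-kata-v2"
  runtime_root = "/run/vc"
  runtime_config_path = "/opt/kata/share/defaults/kata-containers/configuration-qemu.toml"
  privileged_without_host_devices = true
  runtime_pull_image = true
```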
pub async fn configure_custom_crio_runtime(
config: &Config,
custom_runtime: &CustomRuntime,
) -> Result<()> {
info!(
"Configuring custom CRI-O runtime: {}",
custom_runtime.handler
);
let guest_pull = custom_runtime
.crio_pull_type
.as_ref()
.map(|p| p == "guest-pull")
.unwrap_or(false);
let params = CrioRuntimeParams {
runtime_name: &custom_runtime.handler,
runtime_path: utils::get_kata_containers_runtime_path(
&custom_runtime.base_config,
&config.dest_dir,
),
config_path: format!(
"{}/share/defaults/kata-containers/custom-runtimes/{}/configuration-{}.toml",
config.dest_dir, custom_runtime.handler, custom_runtime.base_config
),
guest_pull,
};
let mut file = fs::OpenOptions::new()
.create(true)
.append(true)
.open(&config.crio_drop_in_conf_file)?;
write_crio_runtime_config(&mut file, &params)
}
pub async fn configure_crio(config: &Config) -> Result<()> {
info!("Add Kata Containers as a supported runtime for CRIO:");
@@ -151,22 +106,6 @@ pub async fn configure_crio(config: &Config) -> Result<()> {
configure_crio_runtime(config, shim).await?;
}
if config.custom_runtimes_enabled {
if config.custom_runtimes.is_empty() {
anyhow::bail!(
"Custom runtimes enabled but no custom runtimes found in configuration. \
Check that custom-runtimes.list exists and is readable."
);
}
info!(
"Configuring {} custom runtime(s) for CRI-O",
config.custom_runtimes.len()
);
for custom_runtime in &config.custom_runtimes {
configure_custom_crio_runtime(config, custom_runtime).await?;
}
}
if config.debug {
let mut debug_file = fs::OpenOptions::new()
.create(true)


@@ -198,9 +198,6 @@ snapshotter:
# Configure shims
shims:
# Disable all shims at once (useful when enabling only specific shims or custom runtimes)
disableAll: false
qemu:
enabled: true
supportedArches:
@@ -239,7 +236,6 @@ defaultShim:
2. **Architecture-aware**: Shims declare which architectures they support
3. **Type safety**: Structured format reduces configuration errors
4. **Easy to use**: All shims are enabled by default in `values.yaml`, so you can use the chart directly without modification
5. **Disable all at once**: Use `shims.disableAll: true` to disable all standard shims, useful when enabling only specific shims or using custom runtimes only
### Example: Enable `qemu` shim with new format
@@ -355,13 +351,26 @@ The kata-deploy script will no longer create `runtimeClasses`
## Example: only `qemu` shim and debug enabled
Use `shims.disableAll=true` to disable all shims at once, then enable only the ones you need:
Since all shims are enabled by default, you need to disable the ones you don't want:
```sh
# Using --set flags (disable all, then enable qemu)
# Using --set flags (disable all except qemu)
$ helm install kata-deploy \
--set shims.disableAll=true \
--set shims.qemu.enabled=true \
--set shims.clh.enabled=false \
--set shims.cloud-hypervisor.enabled=false \
--set shims.dragonball.enabled=false \
--set shims.fc.enabled=false \
--set shims.qemu-runtime-rs.enabled=false \
--set shims.qemu-nvidia-gpu.enabled=false \
--set shims.qemu-nvidia-gpu-snp.enabled=false \
--set shims.qemu-nvidia-gpu-tdx.enabled=false \
--set shims.qemu-snp.enabled=false \
--set shims.qemu-tdx.enabled=false \
--set shims.qemu-se.enabled=false \
--set shims.qemu-se-runtime-rs.enabled=false \
--set shims.qemu-cca.enabled=false \
--set shims.qemu-coco-dev.enabled=false \
--set shims.qemu-coco-dev-runtime-rs.enabled=false \
--set debug=true \
"${CHART}" --version "${VERSION}"
```
@@ -372,9 +381,44 @@ Or use a custom values file:
# custom-values.yaml
debug: true
shims:
disableAll: true
qemu:
enabled: true
clh:
enabled: false
cloud-hypervisor:
enabled: false
dragonball:
enabled: false
fc:
enabled: false
qemu-runtime-rs:
enabled: false
qemu-nvidia-gpu:
enabled: false
qemu-nvidia-gpu-snp:
enabled: false
qemu-nvidia-gpu-tdx:
enabled: false
qemu-snp:
enabled: false
qemu-snp-runtime-rs:
enabled: false
qemu-tdx:
enabled: false
qemu-tdx-runtime-rs:
enabled: false
qemu-se:
enabled: false
qemu-se-runtime-rs:
enabled: false
qemu-cca:
enabled: false
qemu-coco-dev:
enabled: false
qemu-coco-dev-runtime-rs:
enabled: false
remote:
enabled: false
```
```sh


@@ -15,13 +15,13 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: "3.26.0"
version: "3.25.0"
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "3.26.0"
appVersion: "3.25.0"
dependencies:
- name: node-feature-discovery


@@ -97,34 +97,13 @@ Returns the namespace where node-feature-discovery is found, or empty string if
{{- end -}}
{{/*
Get enabled shims for a specific architecture from structured config.
Uses null-based defaults for disableAll support:
- enabled: ~ (null) + disableAll: false → enabled
- enabled: ~ (null) + disableAll: true → disabled
- enabled: true → always enabled (explicit override)
- enabled: false → always disabled (explicit override)
Get enabled shims for a specific architecture from structured config
*/}}
{{- define "kata-deploy.getEnabledShimsForArch" -}}
{{- $arch := .arch -}}
{{- $disableAll := .root.Values.shims.disableAll | default false -}}
{{- $enabledShims := list -}}
{{- range $shimName, $shimConfig := .root.Values.shims -}}
{{- if ne $shimName "disableAll" -}}
{{- /* Determine if shim is enabled based on enabled field and disableAll */ -}}
{{- $shimEnabled := false -}}
{{- if eq $shimConfig.enabled true -}}
{{- /* Explicit true: always enabled */ -}}
{{- $shimEnabled = true -}}
{{- else if eq $shimConfig.enabled false -}}
{{- /* Explicit false: always disabled */ -}}
{{- $shimEnabled = false -}}
{{- else -}}
{{- /* Null/unset: use inverse of disableAll (enabled by default, disabled when disableAll=true) */ -}}
{{- if not $disableAll -}}
{{- $shimEnabled = true -}}
{{- end -}}
{{- end -}}
{{- if $shimEnabled -}}
{{- if $shimConfig.enabled -}}
{{- $archSupported := false -}}
{{- range $shimConfig.supportedArches -}}
{{- if eq . $arch -}}
@@ -136,7 +115,6 @@ Uses null-based defaults for disableAll support:
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- join " " $enabledShims -}}
{{- end -}}
@@ -154,19 +132,9 @@ Format: shim1:snapshotter1,shim2:snapshotter2
*/}}
{{- define "kata-deploy.getSnapshotterHandlerMappingForArch" -}}
{{- $arch := .arch -}}
{{- $disableAll := .root.Values.shims.disableAll | default false -}}
{{- $mappings := list -}}
{{- range $shimName, $shimConfig := .root.Values.shims -}}
{{- if ne $shimName "disableAll" -}}
{{- $shimEnabled := false -}}
{{- if eq $shimConfig.enabled true -}}
{{- $shimEnabled = true -}}
{{- else if eq $shimConfig.enabled false -}}
{{- $shimEnabled = false -}}
{{- else if not $disableAll -}}
{{- $shimEnabled = true -}}
{{- end -}}
{{- if $shimEnabled -}}
{{- if $shimConfig.enabled -}}
{{- $archSupported := false -}}
{{- range $shimConfig.supportedArches -}}
{{- if eq . $arch -}}
@@ -183,7 +151,6 @@ Format: shim1:snapshotter1,shim2:snapshotter2
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- join "," $mappings -}}
{{- end -}}
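The `shim1:snapshotter1,shim2:snapshotter2` format that `getSnapshotterHandlerMappingForArch` emits can be sketched as follows. This is a simplified Python model, not the template itself; it assumes shims without a configured snapshotter are skipped, and ignores the disableAll handling the commit removes:

```python
def snapshotter_mapping(shims: dict, arch: str) -> str:
    """Build the comma-separated 'shim:snapshotter' list for one arch."""
    entries = []
    for name, cfg in sorted(shims.items()):
        snapshotter = cfg.get("containerd", {}).get("snapshotter", "")
        if cfg.get("enabled") and arch in cfg.get("supportedArches", []) and snapshotter:
            entries.append(f"{name}:{snapshotter}")
    return ",".join(entries)

shims = {
    "fc": {"enabled": True, "supportedArches": ["amd64"],
           "containerd": {"snapshotter": "devmapper"}},
    "qemu": {"enabled": True, "supportedArches": ["amd64", "arm64"],
             "containerd": {"snapshotter": ""}},
}
assert snapshotter_mapping(shims, "amd64") == "fc:devmapper"
```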
@@ -193,19 +160,9 @@ Format: shim1:pullType1,shim2:pullType2
*/}}
{{- define "kata-deploy.getPullTypeMappingForArch" -}}
{{- $arch := .arch -}}
{{- $disableAll := .root.Values.shims.disableAll | default false -}}
{{- $mappings := list -}}
{{- range $shimName, $shimConfig := .root.Values.shims -}}
{{- if ne $shimName "disableAll" -}}
{{- $shimEnabled := false -}}
{{- if eq $shimConfig.enabled true -}}
{{- $shimEnabled = true -}}
{{- else if eq $shimConfig.enabled false -}}
{{- $shimEnabled = false -}}
{{- else if not $disableAll -}}
{{- $shimEnabled = true -}}
{{- end -}}
{{- if $shimEnabled -}}
{{- if $shimConfig.enabled -}}
{{- $archSupported := false -}}
{{- range $shimConfig.supportedArches -}}
{{- if eq . $arch -}}
@@ -226,7 +183,6 @@ Format: shim1:pullType1,shim2:pullType2
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- join "," $mappings -}}
{{- end -}}
@@ -236,19 +192,9 @@ Output format: "shim:annotation1,annotation2" (space-separated entries, each wit
*/}}
{{- define "kata-deploy.getAllowedHypervisorAnnotationsForArch" -}}
{{- $arch := .arch -}}
{{- $disableAll := .root.Values.shims.disableAll | default false -}}
{{- $perShimAnnotations := list -}}
{{- range $shimName, $shimConfig := .root.Values.shims -}}
{{- if ne $shimName "disableAll" -}}
{{- $shimEnabled := false -}}
{{- if eq $shimConfig.enabled true -}}
{{- $shimEnabled = true -}}
{{- else if eq $shimConfig.enabled false -}}
{{- $shimEnabled = false -}}
{{- else if not $disableAll -}}
{{- $shimEnabled = true -}}
{{- end -}}
{{- if $shimEnabled -}}
{{- if $shimConfig.enabled -}}
{{- $archSupported := false -}}
{{- range $shimConfig.supportedArches -}}
{{- if eq . $arch -}}
@@ -268,7 +214,6 @@ Output format: "shim:annotation1,annotation2" (space-separated entries, each wit
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- join " " $perShimAnnotations -}}
{{- end -}}
@@ -277,23 +222,12 @@ Get agent HTTPS proxy from structured config
Builds per-shim semicolon-separated list: "shim1=value1;shim2=value2"
*/}}
{{- define "kata-deploy.getAgentHttpsProxy" -}}
{{- $disableAll := .Values.shims.disableAll | default false -}}
{{- $proxies := list -}}
{{- range $shimName, $shimConfig := .Values.shims -}}
{{- if ne $shimName "disableAll" -}}
{{- $shimEnabled := false -}}
{{- if eq $shimConfig.enabled true -}}
{{- $shimEnabled = true -}}
{{- else if eq $shimConfig.enabled false -}}
{{- $shimEnabled = false -}}
{{- else if not $disableAll -}}
{{- $shimEnabled = true -}}
{{- end -}}
{{- if and $shimEnabled $shimConfig.agent $shimConfig.agent.httpsProxy -}}
{{- $entry := printf "%s=%s" $shimName $shimConfig.agent.httpsProxy -}}
{{- $proxies = append $proxies $entry -}}
{{- end -}}
{{- end -}}
{{- if and $shimConfig.enabled $shimConfig.agent $shimConfig.agent.httpsProxy -}}
{{- $entry := printf "%s=%s" $shimName $shimConfig.agent.httpsProxy -}}
{{- $proxies = append $proxies $entry -}}
{{- end -}}
{{- end -}}
{{- join ";" $proxies -}}
{{- end -}}
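The per-shim `shim1=value1;shim2=value2` list built by `getAgentHttpsProxy` (and `getAgentNoProxy` below) can be modeled in Python. This mirrors the template's `and $shimConfig.enabled $shimConfig.agent $shimConfig.agent.httpsProxy` guard, where an empty string is falsy and therefore skipped:

```python
def agent_proxy_list(shims: dict, key: str) -> str:
    """Join 'name=value' entries with ';' for enabled shims whose
    agent config sets a non-empty value for the given key."""
    entries = []
    for name, cfg in sorted(shims.items()):
        value = (cfg.get("agent") or {}).get(key)
        if cfg.get("enabled") and value:
            entries.append(f"{name}={value}")
    return ";".join(entries)

shims = {
    "qemu-snp": {"enabled": True, "agent": {"httpsProxy": "http://proxy:3128"}},
    "qemu-tdx": {"enabled": True, "agent": {"httpsProxy": ""}},  # empty -> skipped
}
assert agent_proxy_list(shims, "httpsProxy") == "qemu-snp=http://proxy:3128"
```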
@@ -303,23 +237,12 @@ Get agent NO_PROXY from structured config
Builds per-shim semicolon-separated list: "shim1=value1;shim2=value2"
*/}}
{{- define "kata-deploy.getAgentNoProxy" -}}
{{- $disableAll := .Values.shims.disableAll | default false -}}
{{- $proxies := list -}}
{{- range $shimName, $shimConfig := .Values.shims -}}
{{- if ne $shimName "disableAll" -}}
{{- $shimEnabled := false -}}
{{- if eq $shimConfig.enabled true -}}
{{- $shimEnabled = true -}}
{{- else if eq $shimConfig.enabled false -}}
{{- $shimEnabled = false -}}
{{- else if not $disableAll -}}
{{- $shimEnabled = true -}}
{{- end -}}
{{- if and $shimEnabled $shimConfig.agent $shimConfig.agent.noProxy -}}
{{- $entry := printf "%s=%s" $shimName $shimConfig.agent.noProxy -}}
{{- $proxies = append $proxies $entry -}}
{{- end -}}
{{- end -}}
{{- if and $shimConfig.enabled $shimConfig.agent $shimConfig.agent.noProxy -}}
{{- $entry := printf "%s=%s" $shimName $shimConfig.agent.noProxy -}}
{{- $proxies = append $proxies $entry -}}
{{- end -}}
{{- end -}}
{{- join ";" $proxies -}}
{{- end -}}
@@ -349,19 +272,9 @@ Note: EXPERIMENTAL_FORCE_GUEST_PULL only checks containerd.forceGuestPull, not c
*/}}
{{- define "kata-deploy.getForceGuestPullForArch" -}}
{{- $arch := .arch -}}
{{- $disableAll := .root.Values.shims.disableAll | default false -}}
{{- $shimNames := list -}}
{{- range $shimName, $shimConfig := .root.Values.shims -}}
{{- if ne $shimName "disableAll" -}}
{{- $shimEnabled := false -}}
{{- if eq $shimConfig.enabled true -}}
{{- $shimEnabled = true -}}
{{- else if eq $shimConfig.enabled false -}}
{{- $shimEnabled = false -}}
{{- else if not $disableAll -}}
{{- $shimEnabled = true -}}
{{- end -}}
{{- if $shimEnabled -}}
{{- if $shimConfig.enabled -}}
{{- $archSupported := false -}}
{{- range $shimConfig.supportedArches -}}
{{- if eq . $arch -}}
@@ -375,6 +288,5 @@ Note: EXPERIMENTAL_FORCE_GUEST_PULL only checks containerd.forceGuestPull, not c
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- join "," $shimNames -}}
{{- end -}}


@@ -1,58 +0,0 @@
{{- if and .Values.customRuntimes.enabled .Values.customRuntimes.runtimes }}
---
# ConfigMap containing custom runtime configurations and drop-in files
# This is mounted into the kata-deploy pod at /custom-configs/
apiVersion: v1
kind: ConfigMap
metadata:
{{- if .Values.env.multiInstallSuffix }}
name: {{ .Chart.Name }}-custom-configs-{{ .Values.env.multiInstallSuffix }}
{{- else }}
name: {{ .Chart.Name }}-custom-configs
{{- end }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "kata-deploy.labels" . | nindent 4 }}
data:
# Format: handler:baseConfig:containerd_snapshotter:crio_pulltype
custom-runtimes.list: |
{{- range $name, $runtime := .Values.customRuntimes.runtimes }}
{{- $handler := "" }}
{{- /* Extract handler from runtimeClass YAML */ -}}
{{- if $runtime.runtimeClass }}
{{- range (splitList "\n" $runtime.runtimeClass) }}
{{- $line := trim . }}
{{- if hasPrefix "handler:" $line }}
{{- $handler = trim (trimPrefix "handler:" $line) }}
{{- end }}
{{- end }}
{{- end }}
{{- if $handler }}
{{ $handler }}:{{ $runtime.baseConfig }}:{{ $runtime.containerd.snapshotter | default "" }}:{{ $runtime.crio.pullType | default "" }}
{{- end }}
{{- end }}
{{- /* Generate drop-in files for each runtime */ -}}
{{- range $name, $runtime := .Values.customRuntimes.runtimes }}
{{- $handler := "" }}
{{- if $runtime.runtimeClass }}
{{- range (splitList "\n" $runtime.runtimeClass) }}
{{- $line := trim . }}
{{- if hasPrefix "handler:" $line }}
{{- $handler = trim (trimPrefix "handler:" $line) }}
{{- end }}
{{- end }}
{{- end }}
{{- if and $handler $runtime.dropIn }}
dropin-{{ $handler }}.toml: |
{{ $runtime.dropIn | indent 4 }}
{{- end }}
{{- end }}
---
# RuntimeClasses for custom runtimes
{{- range $name, $runtime := .Values.customRuntimes.runtimes }}
{{- if $runtime.runtimeClass }}
{{ $runtime.runtimeClass }}
---
{{- end }}
{{- end }}
{{- end }}
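The handler-extraction loop in the removed ConfigMap template (split the RuntimeClass YAML into lines, trim, take the value after the first `handler:` prefix) can be sketched in Python for illustration; the template uses `splitList`, `trim`, and `hasPrefix` to the same effect:

```python
def extract_handler(runtime_class_yaml: str) -> str:
    """Return the value of the first 'handler:' line, or '' if absent."""
    for raw in runtime_class_yaml.splitlines():
        line = raw.strip()
        if line.startswith("handler:"):
            return line[len("handler:"):].strip()
    return ""

rc = """kind: RuntimeClass
apiVersion: node.k8s.io/v1
metadata:
  name: kata-my-gpu-runtime
handler: kata-my-gpu-runtime
"""
assert extract_handler(rc) == "kata-my-gpu-runtime"
```

Note this is a line-prefix scan, not a YAML parse, so it matches the first `handler:` key at any indentation level, just as the template loop does.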


@@ -283,10 +283,6 @@ spec:
{{- with .Values.env.hostOS }}
- name: HOST_OS
value: {{ . | quote }}
{{- end }}
{{- if and .Values.customRuntimes.enabled .Values.customRuntimes.runtimes }}
- name: CUSTOM_RUNTIMES_ENABLED
value: "true"
{{- end }}
securityContext:
privileged: true
@@ -297,11 +293,6 @@ spec:
mountPath: /etc/containerd/
- name: host
mountPath: /host/
{{- if and .Values.customRuntimes.enabled .Values.customRuntimes.runtimes }}
- name: custom-configs
mountPath: /custom-configs/
readOnly: true
{{- end }}
volumes:
- name: crio-conf
hostPath:
@@ -312,15 +303,6 @@ spec:
- name: host
hostPath:
path: /
{{- if and .Values.customRuntimes.enabled .Values.customRuntimes.runtimes }}
- name: custom-configs
configMap:
{{- if .Values.env.multiInstallSuffix }}
name: {{ .Chart.Name }}-custom-configs-{{ .Values.env.multiInstallSuffix }}
{{- else }}
name: {{ .Chart.Name }}-custom-configs
{{- end }}
{{- end }}
updateStrategy:
rollingUpdate:
maxUnavailable: 1


@@ -3,24 +3,13 @@
{{- $createDefaultRC := .Values.runtimeClasses.createDefault }}
{{- $defaultRCName := .Values.runtimeClasses.defaultName }}
{{- /* Get enabled shims from structured config using null-aware logic */ -}}
{{- $disableAll := .Values.shims.disableAll | default false -}}
{{- /* Get enabled shims from structured config */ -}}
{{- $enabledShims := list -}}
{{- range $shimName, $shimConfig := .Values.shims -}}
{{- if ne $shimName "disableAll" -}}
{{- $shimEnabled := false -}}
{{- if eq $shimConfig.enabled true -}}
{{- $shimEnabled = true -}}
{{- else if eq $shimConfig.enabled false -}}
{{- $shimEnabled = false -}}
{{- else if not $disableAll -}}
{{- $shimEnabled = true -}}
{{- end -}}
{{- if $shimEnabled -}}
{{- if $shimConfig.enabled -}}
{{- $enabledShims = append $enabledShims $shimName -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- /* Define runtime class configurations with their overhead settings */ -}}
{{- $runtimeClassConfigs := dict


@@ -12,11 +12,36 @@ snapshotter:
setup: []
# Enable NVIDIA GPU shims
# Disable all shims at once, then enable only the ones we need
# First disable all shims (since values.yaml enables all by default)
shims:
disableAll: true
clh:
enabled: false
cloud-hypervisor:
enabled: false
dragonball:
enabled: false
fc:
enabled: false
qemu:
enabled: false
qemu-runtime-rs:
enabled: false
qemu-snp:
enabled: false
qemu-tdx:
enabled: false
qemu-se:
enabled: false
qemu-se-runtime-rs:
enabled: false
qemu-cca:
enabled: false
qemu-coco-dev:
enabled: false
qemu-coco-dev-runtime-rs:
enabled: false
# Enable NVIDIA GPU shims
# Now enable NVIDIA GPU shims
qemu-nvidia-gpu:
enabled: true
supportedArches:


@@ -12,11 +12,28 @@ snapshotter:
setup: ["nydus"] # TEE shims typically use nydus snapshotter
# Enable TEE (Trusted Execution Environment) shims
# Disable all shims at once, then enable only the ones we need
# First disable all shims (since values.yaml enables all by default)
shims:
disableAll: true
clh:
enabled: false
cloud-hypervisor:
enabled: false
dragonball:
enabled: false
fc:
enabled: false
qemu:
enabled: false
qemu-runtime-rs:
enabled: false
qemu-nvidia-gpu:
enabled: false
qemu-nvidia-gpu-snp:
enabled: false
qemu-nvidia-gpu-tdx:
enabled: false
# Enable TEE shims (qemu-snp, qemu-snp-runtime-rs, qemu-tdx, qemu-tdx-runtime-rs, qemu-se, qemu-se-runtime-rs, qemu-cca, qemu-coco-dev, qemu-coco-dev-runtime-rs)
# Now enable TEE shims (qemu-snp, qemu-snp-runtime-rs, qemu-tdx, qemu-tdx-runtime-rs, qemu-se, qemu-se-runtime-rs, qemu-cca, qemu-coco-dev, qemu-coco-dev-runtime-rs)
qemu-snp:
enabled: true
supportedArches:


@@ -25,18 +25,10 @@ debug: false
snapshotter:
setup: [] # ["nydus", "erofs"] or []
# Shim configuration
# By default (disableAll: false), all shims with enabled: ~ (null) are enabled.
# Set disableAll: true to disable all shims, then explicitly enable specific ones:
# shims:
# disableAll: true
# qemu:
# enabled: true # Only qemu is enabled
# Enable all available shims
shims:
disableAll: false
clh:
enabled: ~ # null = use disableAll setting (enabled when false, disabled when true)
enabled: true
supportedArches:
- amd64
- arm64
@@ -45,7 +37,7 @@ shims:
snapshotter: ""
cloud-hypervisor:
enabled: ~
enabled: true
supportedArches:
- amd64
- arm64
@@ -54,7 +46,7 @@ shims:
snapshotter: ""
dragonball:
enabled: ~
enabled: true
supportedArches:
- amd64
- arm64
@@ -63,7 +55,7 @@ shims:
snapshotter: ""
fc:
enabled: ~
enabled: true
supportedArches:
- amd64
- arm64
@@ -72,7 +64,7 @@ shims:
snapshotter: "devmapper" # requires pre-configuration on the user side
qemu:
enabled: ~
enabled: true
supportedArches:
- amd64
- arm64
@@ -83,17 +75,16 @@ shims:
snapshotter: ""
qemu-runtime-rs:
enabled: ~
enabled: true
supportedArches:
- amd64
- arm64
- s390x
allowedHypervisorAnnotations: []
containerd:
snapshotter: ""
qemu-nvidia-gpu:
enabled: ~
enabled: true
supportedArches:
- amd64
- arm64
@@ -102,7 +93,7 @@ shims:
snapshotter: ""
qemu-nvidia-gpu-snp:
enabled: ~
enabled: true
supportedArches:
- amd64
allowedHypervisorAnnotations: []
@@ -116,7 +107,7 @@ shims:
noProxy: ""
qemu-nvidia-gpu-tdx:
enabled: ~
enabled: true
supportedArches:
- amd64
allowedHypervisorAnnotations: []
@@ -130,7 +121,7 @@ shims:
noProxy: ""
qemu-snp:
enabled: ~
enabled: true
supportedArches:
- amd64
allowedHypervisorAnnotations: []
@@ -144,7 +135,7 @@ shims:
noProxy: ""
qemu-snp-runtime-rs:
enabled: ~
enabled: true
supportedArches:
- amd64
allowedHypervisorAnnotations: []
@@ -158,7 +149,7 @@ shims:
noProxy: ""
qemu-tdx:
enabled: ~
enabled: true
supportedArches:
- amd64
allowedHypervisorAnnotations: []
@@ -172,7 +163,7 @@ shims:
noProxy: ""
qemu-tdx-runtime-rs:
enabled: ~
enabled: true
supportedArches:
- amd64
allowedHypervisorAnnotations: []
@@ -186,7 +177,7 @@ shims:
noProxy: ""
qemu-se:
enabled: ~
enabled: true
supportedArches:
- s390x
allowedHypervisorAnnotations: []
@@ -200,7 +191,7 @@ shims:
noProxy: ""
qemu-se-runtime-rs:
enabled: ~
enabled: true
supportedArches:
- s390x
allowedHypervisorAnnotations: []
@@ -214,7 +205,7 @@ shims:
noProxy: ""
qemu-cca:
enabled: ~
enabled: true
supportedArches:
- arm64
allowedHypervisorAnnotations: []
@@ -228,7 +219,7 @@ shims:
noProxy: ""
qemu-coco-dev:
enabled: ~
enabled: true
supportedArches:
- amd64
- s390x
@@ -243,7 +234,7 @@ shims:
noProxy: ""
qemu-coco-dev-runtime-rs:
enabled: ~
enabled: true
supportedArches:
- amd64
- s390x
@@ -348,64 +339,3 @@ verification:
# --set-file verification.pod=/path/to/your-verification-pod.yaml
#
pod: ""
# Custom Runtimes - bring your own RuntimeClass with base config + drop-in overrides
# Each custom runtime uses an existing Kata config as a base and applies user overrides
# via Kata's config.d drop-in mechanism.
#
# IMPORTANT: The base config is copied AFTER kata-deploy has applied its modifications
# (debug, proxy, annotations). Custom runtimes inherit these settings from their base.
#
# Usage with values file (recommended):
# Create a custom-runtimes.values.yaml file:
#
# customRuntimes:
# enabled: true
# runtimes:
# my-gpu-runtime:
# baseConfig: "qemu-nvidia-gpu" # Required: existing config to use as base
# dropIn: | # Optional: overrides via config.d mechanism
# [hypervisor.qemu]
# default_memory = 1024
# default_vcpus = 4
# runtimeClass: |
# kind: RuntimeClass
# apiVersion: node.k8s.io/v1
# metadata:
# name: kata-my-gpu-runtime
# labels:
# app.kubernetes.io/managed-by: kata-deploy
# handler: kata-my-gpu-runtime
# overhead:
# podFixed:
# memory: "640Mi"
# cpu: "500m"
# scheduling:
# nodeSelector:
# katacontainers.io/kata-runtime: "true"
# # Optional: CRI-specific configuration
# containerd:
# snapshotter: "nydus" # Configure containerd snapshotter (nydus, erofs, etc.)
# crio:
# pullType: "guest-pull" # Configure CRI-O runtime_pull_image = true
#
# Then deploy with:
# helm install kata-deploy ./kata-deploy -f custom-runtimes.values.yaml
#
# Available base configs: qemu, qemu-nvidia-gpu, qemu-snp, qemu-tdx, cloud-hypervisor, fc, etc.
# The correct shim binary is automatically selected based on the baseConfig.
#
customRuntimes:
enabled: false
runtimes: {}
# Example structure:
# runtimes:
# my-runtime:
# baseConfig: "qemu-nvidia-gpu" # Required: base config name
# dropIn: "" # Optional: TOML overrides for config.d
# runtimeClass: |
# <full RuntimeClass YAML>
# containerd:
# snapshotter: "" # Optional: nydus, erofs, or empty for default
# crio:
# pullType: "" # Optional: guest-pull or empty for default


@@ -1306,9 +1306,9 @@ handle_build() {
install_firecracker
install_image
install_image_confidential
install_image_mariner
install_initrd
install_initrd_confidential
install_initrd_mariner
install_kata_ctl
install_kata_manager
install_kernel


@@ -1,4 +1,12 @@
# PTP clock support
#
CONFIG_PTP_1588_CLOCK=y
CONFIG_PTP_1588_CLOCK_KVM=y
# The ptp_kvm implementation on arm is an experimental feature;
# you need to apply private patches to enable it on your host machine.
# See https://github.com/kata-containers/packaging/pull/998 for detailed info.
#
# As ptp_kvm is still under review upstream and the kernel in the virtio-fs
# branch changes over time, we would have to update the patch repeatedly.
# For now, disable ptp_kvm until virtio-fs reaches a stable kernel version or
# ptp_kvm is accepted upstream.
#CONFIG_PTP_1588_CLOCK=y
#CONFIG_PTP_1588_CLOCK_KVM=y
#CONFIG_PTP_1588_CLOCK=y
#CONFIG_PTP_1588_CLOCK_KVM=y


@@ -1 +1 @@
177
176


@@ -1,7 +1,7 @@
module module-path
// Keep in sync with version in versions.yaml
go 1.24.12
go 1.24.11
require (
github.com/sirupsen/logrus v1.9.3


@@ -246,34 +246,11 @@ externals:
url: "https://github.com/NVIDIA/nvrc/releases/download/"
nvidia:
desc: "NVIDIA compute stack"
desc: "NVIDIA driver version"
driver:
desc: "NVIDIA kernel driver"
version: "590.48.01"
# yamllint disable-line rule:line-length
url: "https://github.com/NVIDIA/open-gpu-kernel-modules/archive/refs/tags/"
cuda:
desc: "NVIDIA CUDA repository"
repo:
x86_64:
# yamllint disable-line rule:line-length
url: "https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/"
pkg: "cuda-keyring_1.1-1_all.deb"
aarch64:
# yamllint disable-line rule:line-length
url: "https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/sbsa/"
pkg: "cuda-keyring_1.1-1_all.deb"
tools:
desc: "NVIDIA container toolkit and tools repository"
repo:
x86_64:
# yamllint disable-line rule:line-length
url: "https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/"
pkg: "cuda-keyring_1.1-1_all.deb"
aarch64:
# yamllint disable-line rule:line-length
url: "https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/sbsa/"
pkg: "cuda-keyring_1.1-1_all.deb"
busybox:
desc: "The Swiss Army Knife of Embedded Linux"
@@ -293,8 +270,8 @@ externals:
coco-guest-components:
description: "Provides attested key unwrapping for image decryption"
url: "https://github.com/confidential-containers/guest-components/"
version: "9aae2eae6a03ab97d6561bbe74f8b99843836bba"
toolchain: "1.90.0"
version: "026694d44d4ec483465d2fa5f80a0376166b174d"
toolchain: "1.85.1"
coco-trustee:
description: "Provides attestation and secret delivery components"
@@ -476,12 +453,12 @@ languages:
description: "Google's 'go' language"
notes: "'version' is the default minimum version used by this project."
# When updating this, also update in go.mod files.
version: "1.24.12"
version: "1.24.11"
meta:
description: |
'newest-version' is the latest version known to work when
building Kata
newest-version: "1.24.12"
newest-version: "1.24.11"
rust:
description: "Rust language"