Add install_image_coco_addon() to kata-deploy-binaries.sh which:
- Unpacks the CoCo guest components and pause image tarballs into a
  temporary rootfs directory (under the repo root so Docker-in-Docker
  volume mounts resolve correctly)
- Calls image_builder.sh with USE_DOCKER=1, FS_TYPE=erofs,
  MEASURED_ROOTFS=yes, SKIP_DAX_HEADER=yes, and SKIP_ROOTFS_CHECK=yes
  to produce kata-containers-coco-addon.img + root_hash_coco-addon.txt
Add the rootfs-image-coco-addon-tarball Makefile target with
dependencies on pause-image-tarball and coco-guest-components-tarball.
Remove pause-image-tarball and coco-guest-components-tarball from the
standard confidential image dependencies -- those components now live
exclusively in the CoCo addon image. NVIDIA confidential images
retain them until the NVIDIA addon split lands.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Assisted-by: Cursor <cursoragent@cursor.com>
Update all six CoCo configuration templates (coco-dev, snp, tdx for
both Go and Rust runtimes) to use the standard base image instead of
the monolithic confidential image, and add an [[extra_images]] section
for the CoCo addon:
image = "@IMAGEPATH@" (was @IMAGECONFIDENTIALPATH@)
[[hypervisor.qemu.extra_images]]
name = "coco"
path = "@COCOIMAGEPATH@"
verity_params = "@COCOVERITYPARAMS@"
Add COCOIMAGENAME (kata-containers-coco-addon.img), COCOIMAGEPATH, and
COCOVERITYPARAMS to both runtime Makefiles so the placeholders are
substituted at install time.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Assisted-by: Cursor <cursoragent@cursor.com>
Addon images (e.g. the CoCo guest components addon) are not full root
filesystems -- they contain only the binaries and configuration that
get bind-mounted into the real rootfs at boot. The existing
check_rootfs() validation requires /sbin/init and systemd, which are
not present in addon images.
Add a SKIP_ROOTFS_CHECK environment variable that, when set to "yes",
bypasses the check_rootfs() call. Forward the variable into the
container environment when using the Docker-based build path so it
works in both direct and containerised invocations.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Assisted-by: Cursor <cursoragent@cursor.com>
Create a symlink to enable kata-addon-mount@coco.service in
kata-containers.target.wants during rootfs construction for
systemd-based (non-AGENT_INIT) guests.
The unit's ConditionPathExists guard ensures it only activates when
the virtio-addon-coco block device is actually present in the VM,
so enabling it unconditionally in the base image is safe.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Assisted-by: Cursor <cursoragent@cursor.com>
The DAX header (2 MiB of NVDIMM metadata + a duplicate MBR) is
unconditionally prepended to every image by set_dax_header(). NVIDIA
images use virtio-blk-pci with disable_image_nvdimm=true, so the
kernel reads MBR #1 directly and never touches the DAX metadata --
it is dead weight.
Add a SKIP_DAX_HEADER environment variable (default "no") that, when
set to "yes", skips the DAX header entirely:
- Removes the 2 MiB DAX overhead from image size calculations in
  both the erofs and ext4 paths
- Skips the set_dax_header() call, avoiding compilation and
  execution of the nsdax tool
- Passes the variable through to containerised builds
Enable SKIP_DAX_HEADER=yes for both install_image_nvidia_gpu() and
install_image_nvidia_gpu_confidential() in the build pipeline. All
other image builds are unaffected (default remains "no").
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Fedora 42 reaches end-of-life in May 2026. Move the image-builder
container to Fedora 44, which is the current stable release.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Switch the NVIDIA GPU rootfs images (both standard and confidential)
from ext4 to erofs (Enhanced Read-Only File System).
Unlike ext4, which is a read-write filesystem mounted read-only by
convention, erofs is structurally read-only -- no journal, no write
metadata, no superblock write path. This eliminates accidental
mutation and reduces the attack surface inside the guest VM, which
is particularly important for confidential workloads using dm-verity.
Introduce a DEFROOTFSTYPE_NV Makefile variable (set to erofs) for
both Go and Rust runtimes, keeping the global DEFROOTFSTYPE as ext4
so non-NVIDIA configurations are unaffected.
Update all six NVIDIA GPU configuration templates (base, SNP, TDX
for both runtimes) to use @DEFROOTFSTYPE_NV@ instead of the global
@DEFROOTFSTYPE@.
Export FS_TYPE=erofs in install_image_nvidia_gpu() and
install_image_nvidia_gpu_confidential() so the build pipeline
produces erofs images via the image builder.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Add full dm-verity and measured rootfs support to
create_erofs_rootfs_image(), bringing it to parity with the ext4 path.
Unlike ext4, which is a read-write filesystem mounted read-only by
convention, erofs is structurally read-only -- no journal, no write
metadata, no superblock write path.
This is a natural fit for dm-verity: erofs never attempts writes, so
verity never has to reject anything. With ext4, the kernel must skip
journal replay on verity-protected devices, which is a fragile
assumption.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Extract build_kernel_verity_params() and setup_verity() from the
inline block inside create_rootfs_image() into top-level functions.
This is a pure refactoring with no behavior change. The verity logic
is moved verbatim, with the only difference being that
build_kernel_verity_params() now takes the image path as an explicit
parameter instead of capturing it from the enclosing scope.
The extracted functions will be reused by create_erofs_rootfs_image()
in a subsequent commit to add dm-verity support for erofs images.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Containerd 2.3 (config schema v4) uses the top-level [debug] table
for log level configuration, not plugins."io.containerd.server.v1.debug"
as was the case in the RC builds.
Update containerd_debug_level_toml_path() to use .debug.level for all
schema versions, matching the released containerd behavior.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Temporarily unrequire the NVIDIA GPU test. We are experiencing
situations in which two NIM service instances get deployed almost
at the same time into the kata-containers-k8s-tests namespace
(expected current context) and into the default namespace. This
causes the NIM operator to create two deployments in the two
namespaces and to then schedule two pods at the same time. This
usually causes the NIM pod in the default namespace to fail and to
linger.
We cannot yet explain why this happens at all, nor why it does not
happen in the TEE CI path.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
In cleanup_kata_deploy, bail out early when no kata-deploy Helm release
exists so baremetal-* pre-deploy cleanup on fresh clusters does not
block on helm uninstall --wait (up to 10m).
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Assisted-by: Cursor <cursoragent@cursor.com>
Plumb a resources block into the kata-deploy DaemonSet container in
the Helm chart so the cluster can size its memory footprint
predictably.
Defaults are sized from real /proc/<pid>/status numbers on an
unpatched 3.30.0 build running on a ~220-vCPU GPU node:
  VmRSS:     9944 kB (~9.7 MiB)   <- actual physical memory
  RssAnon:   2628 kB (~2.6 MiB)   <- heap + dirty stack pages
  VmData:  464668 kB (~454 MiB)   <- tokio multi-thread workers'
                                     reserved-but-untouched stacks
  Threads:    225                 <- num_cpus()-driven worker pool
That VmData number is the source of the original "kata-deploy is
using 400 MB" reports: any monitoring layer that surfaces virtual
data size, committed memory, or memory.usage_in_bytes on a kernel
that includes mapped-but-untouched memory will happily reproduce
~400 MB even though only ~10 MiB is ever made resident. The earlier
commits in this series (current_thread tokio, mimalloc, shared kube
client, JSONPath removal, post-install re-exec) collapse VmData into
the tens of MiB and drop the post-install resident set further.
The defaults below are picked accordingly:
  requests:
    cpu: 25m      # install is mostly I/O wait; the post-install
                  # waiter is genuinely idle
    memory: 16Mi  # ~2x headroom over the unpatched VmRSS we measured,
                  # far more over the patched waiter
Operators who hit OOMKilled on unusually large or churny clusters can
override `resources` directly in their Helm values (or set it to {}
to remove all requests and inherit cluster defaults).
Fixes: https://github.com/kata-containers/kata-containers/discussions/12976
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Assisted-by: Cursor <cursoragent@cursor.com>
After install completes the kata-deploy DaemonSet pod has nothing else
to do for the rest of its lifetime — it just blocks on SIGTERM and then
runs cleanup. By that point, the install path has built up substantial
peak heap (kube clients, deserialised Node/RuntimeClass objects, hyper
+ rustls TLS pools, parsed JSON / YAML), and on musl essentially none
of that is ever returned to the kernel. Idling in the same process
therefore pins the pod's RSS at the install peak indefinitely.
Re-exec the binary into a hidden `internal-post-install-wait` action
the moment install succeeds. execve(2) discards the entire address
space, so the waiter starts up holding only the working set it actually
needs (a config struct, the SIGTERM handler, and the health server).
To avoid a probe-availability gap during the handover the install
process clears FD_CLOEXEC on the health listener and passes the raw
FD to the child via KATA_DEPLOY_HEALTH_FD. The child reattaches the
FD as a tokio TcpListener and resumes serving /healthz and /readyz
without ever closing the socket — the kubelet sees no failure.
The detected container runtime is similarly threaded through
KATA_DEPLOY_DETECTED_RUNTIME so the waiter doesn't have to re-query
the apiserver. The new action is tagged `#[clap(hide = true)]` so
`--help` doesn't expose it; users should never invoke it directly.
Add the FD-inheritance helpers in health.rs:
- prepare_listener_for_exec(): clears FD_CLOEXEC on a listener and
  returns its raw fd number.
- listener_from_inherited_fd(): wraps an inherited fd back into a
  tokio::net::TcpListener (and re-sets FD_CLOEXEC so future host
  shellouts don't leak the socket).
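A minimal sketch of what these helpers can look like, assuming the
libc crate for the fcntl() calls; the function names match the ones
above, but the exact signatures and error handling are illustrative:

    use std::os::fd::{AsRawFd, FromRawFd, RawFd};
    use tokio::net::TcpListener;

    /// Clear FD_CLOEXEC so the listener survives execve(2) and return
    /// the raw fd, so the install process can advertise it to the
    /// child (e.g. via KATA_DEPLOY_HEALTH_FD).
    fn prepare_listener_for_exec(listener: &TcpListener) -> std::io::Result<RawFd> {
        let fd = listener.as_raw_fd();
        // SAFETY: fd is a valid, open descriptor owned by `listener`.
        unsafe {
            let flags = libc::fcntl(fd, libc::F_GETFD);
            if flags < 0 || libc::fcntl(fd, libc::F_SETFD, flags & !libc::FD_CLOEXEC) < 0 {
                return Err(std::io::Error::last_os_error());
            }
        }
        Ok(fd)
    }

    /// Wrap an inherited raw fd back into a tokio TcpListener,
    /// re-setting FD_CLOEXEC so future host shellouts don't leak it.
    fn listener_from_inherited_fd(fd: RawFd) -> std::io::Result<TcpListener> {
        // SAFETY: the caller guarantees fd is a listening TCP socket
        // inherited across exec.
        let std_listener = unsafe { std::net::TcpListener::from_raw_fd(fd) };
        unsafe {
            let flags = libc::fcntl(fd, libc::F_GETFD);
            libc::fcntl(fd, libc::F_SETFD, flags | libc::FD_CLOEXEC);
        }
        std_listener.set_nonblocking(true)?;
        TcpListener::from_std(std_listener)
    }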
Fixes: https://github.com/kata-containers/kata-containers/discussions/12976
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Assisted-by: Cursor <cursoragent@cursor.com>
The two pieces of node metadata kata-deploy actually reads are
.status.nodeInfo.containerRuntimeVersion and a single label, both of
which were being fetched through a homegrown JSONPath walker:
- get_node_field() serialised the entire Node object back into a
serde_json::Value tree on every call,
- split_jsonpath() / get_jsonpath_value() then walked that tree by
string key.
Both the deep clone and the helpers themselves are unnecessary — kube's
Node type is already strongly typed. Replace get_node_field() with two
purpose-built accessors that read straight off the Node struct:
- get_container_runtime_version(): pulls
  status.node_info.container_runtime_version with a clear error if
  the field isn't populated.
- get_node_label(key): returns Option<String> directly from
  metadata.labels.
Drop split_jsonpath, get_jsonpath_value, and their unit tests (which
existed only to cover the JSONPath walker we no longer have). Update
the three callers (config.rs, runtime/manager.rs, runtime/containerd.rs)
to use the typed accessors.
This removes the entire serde_json::Value clone-and-walk path from the
hot read path and meaningfully cuts allocator churn during install.
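As a sketch, assuming kube's typed Node from k8s-openapi and a
simplified String error type; the real accessors in the tree may
differ in signature:

    use k8s_openapi::api::core::v1::Node;

    /// Read .status.nodeInfo.containerRuntimeVersion straight off the
    /// typed Node object, with a clear error when it isn't populated.
    fn get_container_runtime_version(node: &Node) -> Result<String, String> {
        node.status
            .as_ref()
            .and_then(|s| s.node_info.as_ref())
            .map(|ni| ni.container_runtime_version.clone())
            .ok_or_else(|| "status.nodeInfo.containerRuntimeVersion not populated".into())
    }

    /// Read a single label directly from metadata.labels.
    fn get_node_label(node: &Node, key: &str) -> Option<String> {
        node.metadata.labels.as_ref().and_then(|l| l.get(key).cloned())
    }

No serde_json::Value round-trip, no string-keyed tree walk: both reads
are plain field accesses on the already-deserialised struct.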
Fixes: https://github.com/kata-containers/kata-containers/discussions/12976
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Assisted-by: Cursor <cursoragent@cursor.com>
The default #[tokio::main] expands with flavor = "multi_thread" and
worker_threads = num_cpus::get(). On a typical NVIDIA GPU node
(200+ vCPUs) that allocates 200+ worker threads with ~2 MiB stacks
each, which is the single largest contributor to the DaemonSet pod's
VmData reservation — hundreds of MiB of address space mapped but never
touched, easily reproducing the "kata-deploy is using ~400 MB" reports
on any monitoring layer that surfaces VSZ / committed virtual memory.
Switch to a fixed two-worker multi-thread runtime instead:
#[tokio::main(flavor = "multi_thread", worker_threads = 2)]
Two workers is exactly the right number for kata-deploy:
- the install path is overwhelmingly I/O-bound and runs serially;
  one worker is enough to drive the install future itself,
- install does shell out to `nsenter --target 1 systemctl restart
  containerd` (and friends) via the synchronous
  std::process::Command::output(), which wedges the worker thread it
  runs on for tens of seconds; the second worker keeps the spawned
  health-server task able to answer kubelet probes inside
  timeoutSeconds while the first is blocked.
flavor = "current_thread" would be tighter still on stacks (~4 MiB
saved) but is fundamentally unsafe here: with a single runtime thread,
any blocking host_systemctl call freezes the health server too, the
kubelet fails the readiness probe, and the pod is restarted long
before install completes. The CI lifecycle test reliably reproduces
this as a 15-minute timeout waiting for the kata-deploy DaemonSet pod
to become Ready.
Net result vs. upstream's num_cpus()-driven pool on a 200-vCPU node:
~200 fewer worker threads, ~400 MiB less VmData reservation, while
keeping kubelet probes responsive across the entire install path.
Add the "sync" tokio feature here too so subsequent commits in the
series can use tokio::sync primitives (OnceCell) without another
features bump.
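The resulting shape of main, as a sketch (the health server and the
install future are stand-ins for the real ones):

    use std::process::Command;

    #[tokio::main(flavor = "multi_thread", worker_threads = 2)]
    async fn main() {
        // Worker 2 keeps this task answering kubelet probes even
        // while worker 1 is wedged in a synchronous shellout below.
        tokio::spawn(async {
            // serve /healthz and /readyz here
        });

        // Worker 1 drives the install future, which blocks its thread
        // for tens of seconds on calls like this one:
        let _ = Command::new("nsenter")
            .args(["--target", "1", "systemctl", "restart", "containerd"])
            .output();
    }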
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Assisted-by: Cursor <cursoragent@cursor.com>
The binary doesn't use kube::runtime (controllers, watchers, reflectors)
or kube::derive (the CustomResource macro). Pulling them in only added
transitive deps (kube-runtime, kube-derive, backon, educe, ahash,
async-broadcast, ...) and inflated the binary's static data segment for
no functional gain.
Set default-features = false and select only what the binary actually
calls into: the kube-client surface plus the rustls-tls backend that
hyper-rustls already pulled in transitively. Behaviour is unchanged.
Fixes: https://github.com/kata-containers/kata-containers/discussions/12976
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Assisted-by: Cursor <cursoragent@cursor.com>
Register the new qemu-nvidia-gpu-tdx-runtime-rs shim across the kata-deploy
stack so it is built, installed, and exposed as a RuntimeClass.
This adds the shim to the Rust binary's RUST_SHIMS list (so it uses the
runtime-rs binary), SHIMS list, the qemu-tdx-experimental share name
mapping, and the x86_64 default shim set. The Helm chart gets the new
shim entry in values.yaml, try-kata-nvidia-gpu.values.yaml, and the
RuntimeClass overhead definition in runtimeclasses.yaml.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Register the new qemu-nvidia-gpu-snp-runtime-rs shim across the kata-deploy
stack so it is built, installed, and exposed as a RuntimeClass.
This adds the shim to the Rust binary's RUST_SHIMS list (so it uses the
runtime-rs binary), SHIMS list, the qemu-snp-experimental share name
mapping, and the x86_64 default shim set. The Helm chart gets the new
shim entry in values.yaml, try-kata-nvidia-gpu.values.yaml, and the
RuntimeClass overhead definition in runtimeclasses.yaml.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Register the Rust NVIDIA GPU runtime as a kata-deploy shim so it gets
installed and configured alongside the existing Go-based
qemu-nvidia-gpu shim.
Add qemu-nvidia-gpu-runtime-rs to the RUST_SHIMS list and the default
enabled shims, create its RuntimeClass entry in the Helm chart, and
include it in the try-kata-nvidia-gpu values overlay. The kata-deploy
installer will now copy the runtime-rs configuration and create the
containerd runtime entry for it.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
The `generate_vendor.sh` script already knows how to create a tarball
with all the rust and go vendored code within the repo. It is used by
the release workflow to provide vendored code to downstream consumers
that might need it.
There isn't any vendored code in the repo anymore.
It thus doesn't seem quite useful to run `make vendor` in CI.
Stop doing it.
Signed-off-by: Greg Kurz <groug@kaod.org>
Now shipped in the vendored code tarball.
Drop the git tree status check since it isn't needed anymore.
Also stop building with `-mod=vendor`. This requires exposing
GOMODCACHE, as suggested by Fabiano Fidêncio.
Signed-off-by: Greg Kurz <groug@kaod.org>
Add go vendored code for all packages to the vendor tarball.
This should be enough for people who need vendored code, e.g.
for hermetic builds.
The repo only tracks 4 go vendored code directories, but the
script considers all go.mod files across the repo for the
sake of simplicity. The impact on the size of the tarball
is less than 20 MB.
It is now possible to stop tracking vendored code in git and
to get rid of `make vendor`.
Signed-off-by: Greg Kurz <groug@kaod.org>
This is to silence:

  warning: `.../.cargo/config` is deprecated in favor of `config.toml`
  |
  = help: if you need to support cargo 1.38 or earlier, you can symlink `config` to `config.toml`
We don't care for cargo 1.38 or earlier.
Signed-off-by: Greg Kurz <groug@kaod.org>
rootfs.sh stops passing a host GOPATH bind-mount into the inner
osbuilder docker run. Pass INSTALL_IN_GOPATH=false so
ci/install_yq.sh installs yq under /usr/local/bin in the container.
scripts/lib.sh resolves yq after sourcing install_yq.sh and fails
clearly if yq is still missing.
This avoids build issues on (managed) build hosts where HOME, for
example, resolves to /localhome/... while the image user record
still points at /home/... On those hosts the old flow could make
the daemon bind-mount a GOPATH path that does not exist or is not
writable on the host (e.g. mkdir or mount under /home/... denied).
Co-authored-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
The kata-deploy DaemonSet pod had no Kubernetes health probes, so the
kubelet could not distinguish between "still installing" and "crashed",
and rolling updates would proceed to the next node before install
actually finished.
Add a lightweight HTTP health server (built on raw tokio TcpListener,
no new crate dependencies) that starts immediately in the install path:
  /healthz -- liveness:  returns 200 as soon as the server binds
  /readyz  -- readiness: returns 503 while installing, 200 after
              install completes (artifacts extracted, CRI restarted,
              node labeled)
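A minimal sketch of the server loop, assuming an Arc<AtomicBool> that
the install path flips once it finishes (names and buffer handling
are illustrative):

    use std::sync::atomic::{AtomicBool, Ordering};
    use std::sync::Arc;
    use tokio::io::{AsyncReadExt, AsyncWriteExt};
    use tokio::net::TcpListener;

    /// Answer kubelet probes with just enough HTTP/1.1: /healthz is
    /// 200 as soon as we can accept, /readyz is 503 until `ready`
    /// flips to true.
    async fn serve_health(listener: TcpListener, ready: Arc<AtomicBool>) {
        loop {
            let Ok((mut stream, _)) = listener.accept().await else { continue };
            let ready = ready.clone();
            tokio::spawn(async move {
                let mut buf = [0u8; 1024];
                let n = stream.read(&mut buf).await.unwrap_or(0);
                let req = String::from_utf8_lossy(&buf[..n]);
                let status = if req.starts_with("GET /readyz") && !ready.load(Ordering::SeqCst) {
                    "503 Service Unavailable"
                } else {
                    "200 OK"
                };
                let resp = format!("HTTP/1.1 {status}\r\nContent-Length: 0\r\nConnection: close\r\n\r\n");
                let _ = stream.write_all(resp.as_bytes()).await;
            });
        }
    }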
Wire the Helm chart with startup, liveness, and readiness probes
(all individually toggleable). The startup probe allows up to 10
minutes for install to complete before the liveness probe takes over.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The codegen check ensures that generated files are up-to-date and
correspond to the tool versions used in CI. Requiring this check
prevents us from accidentally merging, e.g., proto changes without the
corresponding Rust/Go updates.
Signed-off-by: Markus Rudy <mr@edgeless.systems>
Apply the same test configs we use in the runtime-go config to the
runtime-rs config.
These are:
- runtime.static_sandbox_resource_mgmt = true
- hypervisor.clh.valid_hypervisor_paths includes cloud-hypervisor-glibc
- hypervisor.clh.path = cloud-hypervisor-glibc
Signed-off-by: Saul Paredes <saulparedes@microsoft.com>
Copy Fail" (CVE-2026-31431) is a high-severity local privilege escalation (LPE)
vulnerability found in the Linux kernel in April 2026, which affects all major
Linux distributions—including those using Long Term Support (LTS) kernels—released since 2017.
The bug allows an unprivileged user to gain root access, escape containers,
and modify the in-memory page cache reliably using a tiny 732-byte script
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Containerd 2.3.0 introduces config schema version 4 (see upstream
RELEASES.md and the version-4 server-plugin documentation). The default file
still uses the same split-CRI layout as version 3 (plugins under
io.containerd.cri.v1.runtime and io.containerd.cri.v1.images). Schema v4
mainly moves gRPC, TTRPC, debug, and metrics listener settings under
io.containerd.server.v1.*; kata-deploy does not edit those server tables except
for containerd log verbosity when DEBUG=true.
Fixes: #12936
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Now that all of the tools except agent-ctl (still WIP) are in the
root workspace, switch the default to the workspace and add an
exception for agent-ctl as it's the odd one out.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Mutating the Makefile in-place to strip prereqs was fragile and
limited to one target per invocation. DEPS= skips deps declaratively
and propagates through recursive make, so multi-target builds can
opt out in one shot.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Skipping prereq rebuilds is useful when artifacts are already staged
from a prior run (CI splitting work across jobs, local iteration).
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
During the zizmor refactoring I changed the names of two jobs
to make all the architectures match. I forgot to update required_tests
and, since it was a workflow-only change, the PR didn't exercise this
check, so update them now.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>