The default #[tokio::main] expands with flavor = "multi_thread" and
worker_threads = num_cpus::get(). On a typical NVIDIA GPU node
(200+ vCPUs) that allocates 200+ worker threads with ~2 MiB stacks
each, which is the single largest contributor to the DaemonSet pod's
VmData reservation — hundreds of MiB of address space mapped but never
touched, easily reproducing the "kata-deploy is using ~400 MB" reports
on any monitoring layer that surfaces VSZ / committed virtual memory.
Switch to a fixed two-worker multi-thread runtime instead:
#[tokio::main(flavor = "multi_thread", worker_threads = 2)]
Two workers is exactly the right number for kata-deploy:
- the install path is overwhelmingly I/O-bound and runs serially;
one worker is enough to drive the install future itself,
- install does shell out to `nsenter --target 1 systemctl restart
  containerd` (and friends) via the synchronous
  std::process::Command::output(), which wedges the worker thread
  it runs on for
tens of seconds; the second worker keeps the spawned health-server
task able to answer kubelet probes inside timeoutSeconds while
the first is blocked.
flavor = "current_thread" would be tighter still on stacks (~4 MiB
saved) but is fundamentally unsafe here: with a single runtime thread,
any blocking host_systemctl call freezes the health server too, the
kubelet fails the readiness probe, and the pod is restarted long
before install completes. The CI lifecycle test reliably reproduces
this as a 15-minute timeout waiting for the kata-deploy DaemonSet pod
to become Ready.
Net result vs. upstream's num_cpus()-driven pool on a 200-vCPU node:
~200 fewer worker threads, ~400 MiB less VmData reservation, while
keeping kubelet probes responsive across the entire install path.
Add the "sync" tokio feature here too so subsequent commits in the
series can use tokio::sync primitives (OnceCell) without another
features bump.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Assisted-by: Cursor <cursoragent@cursor.com>
Kata Containers packaging
Introduction
Kata Containers currently supports packages for many distributions. Tooling to aid in creating these packages is contained within this repository.
Build in a container
Kata build artifacts are available within a container image, created by a
Dockerfile. Reference DaemonSets are provided in
kata-deploy, which make installation of Kata Containers in a
running Kubernetes Cluster very straightforward.
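A typical deployment applies the RBAC manifest and then the DaemonSet itself. The paths below are assumptions based on the kata-deploy layout; verify them against the repository before use:

```shell
# Grant the kata-deploy ServiceAccount the permissions it needs,
# then roll out the DaemonSet that installs Kata on each node.
kubectl apply -f kata-deploy/kata-rbac/base/kata-rbac.yaml
kubectl apply -f kata-deploy/kata-deploy/base/kata-deploy.yaml
```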
Build static binaries
See the static build documentation.
Build Kata Containers Kernel
Build QEMU
Create a Kata Containers release
See the release documentation.
Packaging scripts
See the scripts documentation.
Credits
Kata Containers packaging uses packagecloud for package hosting.