Compare commits

...

607 Commits

Author SHA1 Message Date
Fabiano Fidêncio
17472f3f10 release: scripts: Accept KATA_TOOLS_STATIC_TARBALL env var
a2534e7bc8 introduced the logic to also
release a kata-tools tarball, but it missed allowing the
KATA_TOOLS_STATIC_TARBALL env var to be passed to the release script,
leading to the following error during the release process:
```
ERROR: Invalid environment variable "KATA_TOOLS_STATIC_TARBALL"
```

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-01-19 13:03:23 +01:00
Fabiano Fidêncio
882862d711 release: Bump version to 3.25.0
Bump VERSION and helm-charts versions.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-01-19 11:33:45 +01:00
XanderC
93beb58c5d runtime: fix network initialization for non-hotplug VMMs
In startVM(), for VMMs without hotplug support (e.g., Firecracker or
QEMU microvm), the runtime runs prestart hooks but misses rescanning
the network namespace. This causes VMs to boot with uninitialized
network configs, as updates from CNI plugins are not captured.

This patch adds a network rescan via AddEndpoints after prestart hooks
for the non-hotplug path, ensuring correct network info is passed to
the VMM configuration before the VM starts.

Fixes #11500

Signed-off-by: XanderC <xanderc@qq.com>
2026-01-17 23:56:59 +01:00
Zvonko Kaiser
428cc5d586 gpu: Chroot Cleanup
With the newest NVRC we do not need the supported-GPUs handling in
the chroot anymore.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2026-01-17 19:27:24 +01:00
Fabiano Fidêncio
1c154b4c15 kernel: Add DAX fix for arm64
The patch has been provided upstream by Seunguk Shin and is already
approved.

We'll drop it once it becomes available in the LTS tree.

Reference:
https://lore.kernel.org/all/18af3213-6c46-4611-ba75-da5be5a1c9b0@arm.com

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-01-17 19:15:53 +01:00
Fabiano Fidêncio
33b1f0786e Revert "arm64: Do not use DAX with the rootfs image"
This reverts commit 2acb94ef2d, as we have
a kernel patch approved fixing the issue.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-01-17 19:15:53 +01:00
Alex Lyn
fe15f2fa47 runtime-rs: Remove deprecated virtio-9p
virtio-9p has not been supported for a long time, especially within
runtime-rs, and we have no plan to support it. Removing the
related items is therefore reasonable.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-01-17 18:52:57 +01:00
Alex Lyn
b7cfc6fd72 runtime-rs: Remove mem-agent section from TDX/SNP configurations
As the Memory Agent feature is not used within CoCo (TDX/SNP) scenarios,
it's better to just remove the related sections.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-01-17 18:52:57 +01:00
Alex Lyn
634ec2b56d runtime-rs: Add configurable SNP items in Makefile when make build
It introduces the related items within the Makefile to enable
AMD SEV-SNP settings in the configuration when doing make build, and
makes it possible to generate the rendered qemu-snp-runtime-rs
configuration based on the *.in template.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-01-17 18:52:57 +01:00
Alex Lyn
0abdb8e016 runtime-rs: Introduce a qemu-runtime-rs/SEV-SNP dedicated configuration
To make qemu-runtime-rs with CoCo work well on SEV-SNP platforms,
a dedicated SEV-SNP configuration should be introduced to help
prepare the related CVM resources.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-01-17 18:52:57 +01:00
Alex Lyn
b0a82f7bb8 runtime-rs: Enable measured rootfs within configuration when make build
Enable measured rootfs in the configuration when doing make build, and
add some other important items to make the configuration work well.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-01-17 18:52:57 +01:00
Alex Lyn
3799855040 runtime-rs: Add configurable TDX items in Makefile when make build
It introduces the related items within the Makefile to enable
Intel TDX settings in the configuration when doing make build, and
makes it possible to generate the rendered qemu-tdx-runtime-rs
configuration based on the *.in template.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-01-17 18:52:57 +01:00
Alex Lyn
4d55e2c8c8 runtime-rs: Introduce a dedicated configuration for qemu-runtime-rs/TDX
To make qemu-runtime-rs with CoCo work well on TDX platforms,
a dedicated TDX configuration should be introduced to help
prepare the related CVM resources.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-01-17 18:52:57 +01:00
Manuel Huber
956f43c6c6 runtime: skip MoveTo for systemd cgroups
Systemd-managed cgroups use the slice:prefix:name format, which is
not a filesystem path. Calling MoveTo() on such paths fails with
"invalid group path" and can abort cleanup before Delete() runs.
In some cases, this causes pod teardown delays.
Skip MoveTo for systemd-formatted sandbox/overhead cgroup paths when
sandbox_cgroup_only is true; systemd moves tasks on unit deletion.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-01-16 16:41:38 +01:00
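
For illustration, a minimal Rust sketch of the check described above (the helper name and exact detection logic are assumptions for this sketch; the actual runtime code is Go):
```
/// A systemd-managed cgroup is identified by the "slice:prefix:name"
/// format, which is not a filesystem path, so MoveTo-style operations
/// must be skipped for it. Hypothetical helper, for illustration only.
fn is_systemd_cgroup(path: &str) -> bool {
    // A systemd cgroup spec has exactly three colon-separated fields
    // and is not an absolute path, e.g. "system.slice:kata:abcd1234".
    path.split(':').count() == 3 && !path.starts_with('/')
}

fn main() {
    assert!(is_systemd_cgroup("system.slice:kata:sandbox1"));
    assert!(!is_systemd_cgroup("/kata/sandbox1"));
}
```
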
Manuel Huber
6b70923e55 docs: Update NVIDIA GPU passthrough QEMU scenario
With cold-plug becoming, by design, the only supported mode after the
update of NVRC to v0.1.1, resolve the remaining references to hot-plug.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-01-16 13:50:10 +01:00
Steve Horsman
610a8bdfd5 Merge pull request #12346 from Amulyam24/ppc64le-payload
ci: move the job to publish the kata payload after push to an alternate runner for ppc64le
2026-01-16 11:41:53 +00:00
Fabiano Fidêncio
ea18f543b4 tests: kata-deploy: Enable verification during helm install
Enable post-install verification in kata-deploy CI tests. When
HELM_VERIFY_DEPLOYMENT is set, a simple verification pod is created
that runs with the Kata runtime to confirm deployment succeeded.

The verification pod prints kernel info and exits - success indicates
the Kata runtime is properly configured and functional.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-01-16 10:52:43 +01:00
Fabiano Fidêncio
a188f04d75 kata-deploy: helm: Add optional post-install verification
Add optional verification that runs after kata-deploy installation.
When a pod spec is provided via --set-file verification.pod=<file>,
a verification job runs after install/upgrade to validate deployment.

The user is fully responsible for the verification pod content:
- Pod name, runtimeClassName, annotations, and verification logic
- Pod must exit 0 on success, non-zero on failure

The verification job simply:
1. Waits for kata-deploy DaemonSet to be ready
2. Applies the user-provided pod spec
3. Waits for the pod to complete
4. Shows logs and cleans up

Usage:
  helm install kata-deploy ... \
    --set-file verification.pod=/path/to/your-pod.yaml

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-01-16 10:52:43 +01:00
Amulyam24
859313d904 ci: move the job payload after push to an alternate runner for ppc64le
To unlock the release, move the job that publishes the kata payload after push to an alternate runner (IBM-owned) for ppc64le.

Signed-off-by: Amulyam24 <amulmek1@in.ibm.com>
2026-01-16 11:14:42 +05:30
Alex Lyn
c0cca81993 runtime-rs: Set default_bridges to 0 for dragonball vmm
As the Dragonball VMM does not support PCI hotplug options, it should
be set to 0.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-01-15 20:32:15 +01:00
Alex Lyn
1a76d44e16 kata-types: Change the default bridges to 1
It aims to align it with the Makefile's and the configuration's
setting.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-01-15 20:32:15 +01:00
Alex Lyn
6375b3881d runtime-rs: Set the default bridges to 1
As runtime-go uses a default of 1 bridge, it should be
kept as 1 to avoid alignment issues.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-01-15 20:32:15 +01:00
Alex Lyn
8728b262fb Merge pull request #12338 from zvonkok/nvrc-update
gpu: Bump NVRC Version
2026-01-15 19:36:07 +08:00
Zvonko Kaiser
adce41c432 gpu: Bump NVRC Version
The new NVRC version works for CC and non-CC use cases, so
--feature confidential is no longer needed.

Bump versions.yaml and adjust deployment instructions.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2026-01-15 01:51:10 +00:00
Manuel Huber
6753c3ac08 runtime: nvidia: Disable NVDIMM
Disable NVDIMM. When using GPU passthrough, NVDIMM would create
an r/o file-backed memory region. When using a GPU, QEMU tries to DMA-
map guest memory for the device, resulting in a mapping error:
memory listener initialization failed: Region mem0:
vfio_container_dma_map ... -22 (Invalid argument).
For the CC configs, NVDIMM is disabled by default in qemu_amd64.go
with a warning, but we also explicitly disable the setting in the
shim configuration file.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-01-14 22:51:07 +01:00
Fabiano Fidêncio
a9dda0e52b versions: nvidia: Bump kernel to the latest LTS
Now that we have decoupled the rootfs and kernel builds, doing the
bump becomes trivial.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-01-14 20:45:54 +01:00
Fabiano Fidêncio
4e99860fd2 workflows: nvidia: Adjust to kernel / rootfs build decouple
We don't need to store the kernel headers anymore. We do need to store
the kernel modules, instead.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-01-14 20:45:54 +01:00
Zvonko Kaiser
02d2b6bdf2 kernel: bump kata_config_version
We have kernel build changes, so bump the config version.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2026-01-14 20:45:54 +01:00
Zvonko Kaiser
a075c3740a gpu: build_image.sh use versions.yaml
We previously did some fragile file-based driver determination;
now, with versions.yaml, there is a single source of truth.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2026-01-14 20:45:54 +01:00
Zvonko Kaiser
ffc8725164 gpu: rootfs update decoupling
Remove all the driver build instructions,
since those are now done in the kernel target.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2026-01-14 20:45:54 +01:00
Zvonko Kaiser
cca973772d gpu: deploy modules for kernel build
We need to package the built modules so that the rootfs build
is able to consume them. We package the whole
/lib/modules/$(uname -r) directory with strip=2.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2026-01-14 20:45:54 +01:00
Zvonko Kaiser
13ed3cdff9 gpu: Add NVIDIA modules to build-kernel.sh
Check out and build the kernel modules along
with the kernel to avoid the kernel-to-rootfs dependency.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2026-01-14 20:45:54 +01:00
Zvonko Kaiser
2a11910acb gpu: Remove building of Headers
Since we build the modules along with the kernel, we do not need to
carry the headers over to the rootfs build.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2026-01-14 20:45:54 +01:00
Zvonko Kaiser
b1870fef07 gpu: versions.yaml nvidia driver pinning
We want to have deterministic behaviour, with only
one valid driver version accepted, via versions.yaml.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2026-01-14 20:45:54 +01:00
Zvonko Kaiser
229481b348 kernel: bugfix install yq
We actually never installed yq for the kernel build; there are
some paths that use yq, but they were never hit. For the GPU use-case
we need to read values from versions.yaml.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2026-01-14 20:45:54 +01:00
Steve Horsman
6db3a4cf8d Merge pull request #12333 from fitzthum/bump-v0180
Update Trustee and guest-components for upcoming releases
2026-01-14 19:44:55 +00:00
Tobin Feldman-Fitzthum
ca29e68acb agent-ctl: bump image-rs version
In preparation for coco v0.18.0, bump the version of image-rs we use in
agent-ctl to match what we have in versions.yaml.

Drop the snapshotter-overlayfs feature. This was dropped from image-rs
when we removed enclave-cc support.

Signed-off-by: Tobin Feldman-Fitzthum <tfeldmanfitz@nvidia.com>
2026-01-14 06:54:29 -08:00
Tobin Feldman-Fitzthum
25a08ef739 versions: bump Trustee and guest-components
Before cutting the Kata release that will be used with CoCo v0.18.0,
let's bump the versions of Trustee and guest-components to latest.

Signed-off-by: Tobin Feldman-Fitzthum <tfeldmanfitz@nvidia.com>
2026-01-14 06:43:30 -08:00
Steve Horsman
0f5f914a04 Merge pull request #12330 from LandonTClipp/docs_improvement
docs: Navigation improvements and bug fixes to Pages
2026-01-14 14:13:29 +00:00
stevenhorsman
70e3e2b0c9 genpolicy: Bump openssl-src
This is a vulnerability (CVE-2025-9230) in openssl, so move
to 3.5.4, which has a fix for it.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-01-14 14:05:48 +01:00
stevenhorsman
aace7a7336 versions: Bump openssl-src
This is a vulnerability (CVE-2025-9230) in openssl, so move
to 3.5.4, which has a fix for it.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-01-14 14:05:48 +01:00
Fabiano Fidêncio
2acb94ef2d arm64: Do not use DAX with the rootfs image
Kernel 6.18.x has an issue with DAX, which is not yet fixed upstream:
```
[    0.737679] EXT4-fs (pmem0p1): mounted filesystem 79676804-7c8b-491a-b2a6-9bae3c72af70 ro with ordered data mode. Quota mode: disabled.
[    0.737891] VFS: Mounted root (ext4 filesystem) readonly on device 259:1.
[    0.739119] devtmpfs: mounted
[    0.739476] Freeing unused kernel memory: 1920K
[    0.740156] Run /sbin/init as init process
[    0.740229]   with arguments:
[    0.740286]     /sbin/init
[    0.740321]   with environment:
[    0.740369]     HOME=/
[    0.740400]     TERM=linux
[    0.743162] Unable to handle kernel paging request at virtual address fffffdffbf000008
[    0.743285] Mem abort info:
[    0.743316]   ESR = 0x0000000096000006
[    0.743371]   EC = 0x25: DABT (current EL), IL = 32 bits
[    0.743444]   SET = 0, FnV = 0
[    0.743489]   EA = 0, S1PTW = 0
[    0.743545]   FSC = 0x06: level 2 translation fault
[    0.743610] Data abort info:
[    0.743656]   ISV = 0, ISS = 0x00000006, ISS2 = 0x00000000
[    0.743720]   CM = 0, WnR = 0, TnD = 0, TagAccess = 0
[    0.743785]   GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0
[    0.743848] swapper pgtable: 4k pages, 48-bit VAs, pgdp=00000000b9d17000
[    0.743931] [fffffdffbf000008] pgd=10000000bfa3d403, p4d=10000000bfa3d403, pud=1000000040bfe403, pmd=0000000000000000
[    0.744070] Internal error: Oops: 0000000096000006 [#1]  SMP
[    0.748888] CPU: 0 UID: 0 PID: 1 Comm: init Not tainted 6.18.4 #1 NONE
[    0.749421] pstate: 004000c5 (nzcv daIF +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[    0.749969] pc : dax_disassociate_entry.constprop.0+0x20/0x50
[    0.750444] lr : dax_insert_entry+0xcc/0x408
[    0.750802] sp : ffff80008000b9e0
[    0.751083] x29: ffff80008000b9e0 x28: 0000000000000000 x27: 0000000000000000
[    0.751682] x26: 0000000001963d01 x25: ffff0000004f7d90 x24: 0000000000000000
[    0.752264] x23: 0000000000000000 x22: ffff80008000bcc8 x21: 0000000000000011
[    0.752836] x20: ffff80008000ba90 x19: 0000000001963d01 x18: 0000000000000000
[    0.753407] x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000
[    0.753970] x14: ffffbf3154b9ae70 x13: 0000000000000000 x12: ffffbf3154b9ae70
[    0.754548] x11: ffffffffffffffff x10: 0000000000000000 x9 : 0000000000000000
[    0.755122] x8 : 000000000000000d x7 : 000000000000001f x6 : 0000000000000000
[    0.755707] x5 : 0000000000000000 x4 : 0000000000000000 x3 : fffffdffc0000000
[    0.756287] x2 : 0000000000000008 x1 : 0000000040000000 x0 : fffffdffbf000000
[    0.756871] Call trace:
[    0.757107]  dax_disassociate_entry.constprop.0+0x20/0x50 (P)
[    0.757592]  dax_iomap_pte_fault+0x4fc/0x808
[    0.757951]  dax_iomap_fault+0x28/0x30
[    0.758258]  ext4_dax_huge_fault+0x80/0x2dc
[    0.758594]  ext4_dax_fault+0x10/0x3c
[    0.758892]  __do_fault+0x38/0x12c
[    0.759175]  __handle_mm_fault+0x530/0xcf0
[    0.759518]  handle_mm_fault+0xe4/0x230
[    0.759833]  do_page_fault+0x17c/0x4dc
[    0.760144]  do_translation_fault+0x30/0x38
[    0.760483]  do_mem_abort+0x40/0x8c
[    0.760771]  el0_ia+0x4c/0x170
[    0.761032]  el0t_64_sync_handler+0xd8/0xdc
[    0.761371]  el0t_64_sync+0x168/0x16c
[    0.761677] Code: f9453021 f2dfbfe3 cb813080 8b001860 (f9400401)
[    0.762168] ---[ end trace 0000000000000000 ]---
[    0.762550] note: init[1] exited with irqs disabled
[    0.762631] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
```

For now, we limit the rootfs that we ship for ARM64 to not use DAX; in
the future we'll re-enable it as soon as the patch lands in the mainline
kernel.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-01-14 11:46:40 +01:00
Fabiano Fidêncio
3ef99f4ee3 versions: Add specific nvidia kernel version
This is needed as the 580 driver doesn't build against 6.18.x, and the
590 driver is not yet fully working for our case, thus we stick to the
previous version that worked before.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-01-14 11:46:40 +01:00
Fabiano Fidêncio
cce5d4abf6 kernel: bump to v6.18.x (LTS)
Bump both the kernel and kernel-confidential versions from v6.12.x and
v6.16.x to v6.18.4, aligning with the new LTS release.

Kernel 6.18 introduced several configuration changes that required
updates to our kernel config fragments:

* CRYPTO_FIPS dependencies changed:
  - In 6.12: depended on !CRYPTO_MANAGER_DISABLE_TESTS
  - In 6.18: now depends on CRYPTO_SELFTESTS (which requires EXPERT)
  Added CONFIG_EXPERT=y and CONFIG_CRYPTO_SELFTESTS=y to crypto.conf
  to satisfy the new dependency chain.
  * CONFIG_EXPERT is a naughty one, as it disables / enables a bunch
    of things behind one's back, probably just to prove a point that
    it is for experts ;-) ... regardless, a reasonable number of
    options had to be re-added in order to make sure nothing ends
    up broken.

* Legacy iptables support:
  Kernel 6.18 requires explicit legacy xtables/iptables configs for
  IP_NF_* options. Added CONFIG_NETFILTER_XTABLES_LEGACY,
  CONFIG_IP_NF_IPTABLES_LEGACY, and CONFIG_IP6_NF_IPTABLES_LEGACY
  to netfilter.conf.

* Module signing dependencies:
  Added CONFIG_MODULES=y and other required dependencies to
  module_signing.conf to ensure MODULE_SIG can be properly enabled.

* Whitelist updates:
  - Added CONFIG_NF_CT_PROTO_DCCP (removed in 6.18+)
  - Added CONFIG_CRYPTO_SELFTESTS, CONFIG_NETFILTER_XTABLES_LEGACY,
    CONFIG_IP_NF_IPTABLES_LEGACY, CONFIG_IP6_NF_IPTABLES_LEGACY
    (added in 6.18+, not present in older kernels like 6.12)

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-01-14 11:46:40 +01:00
LandonTClipp
197231456f docs: Navigation improvements and bug fixes to Pages
A few minor changes to the Zensical config that make navigation easier. Also
fixed a couple of bugs with local serving and added some quality-of-life
features to Zensical.

Signed-off-by: LandonTClipp <11232769+LandonTClipp@users.noreply.github.com>
2026-01-13 11:17:58 -06:00
LandonTClipp
94fde1356c docs: Add Zensical Doc Site Generation
This commit adds a Github workflow for building a Github Pages site for the markdown
files in the docs/ directory. Zensical is a new markdown-based static site generation
framework built by the creators of Material for Mkdocs. https://zensical.org/

This commit does not clean the doc structure, so site navigation is initially going to
be messy.

Signed-off-by: LandonTClipp <11232769+LandonTClipp@users.noreply.github.com>
2026-01-13 12:42:02 +01:00
dependabot[bot]
3377d729ea build(deps): bump rsa from 0.9.6 to 0.9.9 in /src/tools/agent-ctl
Bumps [rsa](https://github.com/RustCrypto/RSA) from 0.9.6 to 0.9.9.
- [Changelog](https://github.com/RustCrypto/RSA/blob/v0.9.9/CHANGELOG.md)
- [Commits](https://github.com/RustCrypto/RSA/compare/v0.9.6...v0.9.9)

---
updated-dependencies:
- dependency-name: rsa
  dependency-version: 0.9.9
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-13 04:08:40 +01:00
Fupan Li
1f1a000608 Merge pull request #12291 from Apokleos/bump-qapi
runtime-rs: Bump qapi-rs from 0.14 to 0.15
2026-01-13 10:39:41 +08:00
Manuel Huber
9e30283952 runtime: nvidia: change kernel parameters
Remove the agent hotplug timeout parameter from the kernel
command line. Having shifted to VFIO cold-plug, this parameter is
no longer needed.
Remove the no longer required parameter for TDX and thus align the
SNP and TDX configurations.
Add a parameter to prevent the kernel from mounting the /dev tmpfs.
NVRC, and later on kata-agent, attempt this themselves. While
kata-agent does not panic when mounting /dev fails, NVRC makes
mounting /dev a hard requirement.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-01-12 16:11:28 -08:00
dependabot[bot]
bcadb9b231 build(deps): bump sequoia-openpgp in /src/tools/agent-ctl
Bumps [sequoia-openpgp](https://gitlab.com/sequoia-pgp/sequoia) from 2.0.0 to 2.1.0.
- [Commits](https://gitlab.com/sequoia-pgp/sequoia/compare/openpgp/v2.0.0...openpgp/v2.1.0)

---
updated-dependencies:
- dependency-name: sequoia-openpgp
  dependency-version: 2.1.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-12 22:16:51 +01:00
Alex Lyn
fba92880c9 tests: make set_container_command idempotent and add debug output
set_container_command() previously appended command arguments
one-by-one with '.command += [...]'. This makes the helper
non-idempotent and can lead to unexpected command arrays when
invoked multiple times.

Update the helper to set the full command array in a single yq v4
expression and print the target YAML path plus the command being
applied to simplify debugging when tests fail.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-01-12 17:56:28 +01:00
Alex Lyn
38296a41b2 tests: Generate pod config with stable .yaml suffix
The pod config file created by new_pod_config() was generated via
mktemp using the template "pod-config.yaml.in.XXX", which produces
filenames that do not end with ".yaml" (e.g. pod-config.yaml.in.ABC).

If the random combination of special suffix with ".Csv" or ".Xml", etc.
the following operations with yq will fail.

Some helpers and tooling assume the config path ends with ".yaml".
Switch the mktemp template to place the random suffix before the
extension so the returned path always ends with ".yaml".

Fixes: #12268, #12319

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-01-12 17:56:28 +01:00
Fabiano Fidêncio
9fec31f400 tools: kubectl: Add kubectl version as a tag
This is a suggestion from Choi, so we can easily test with a specific
kubectl version and also easily understand which kubectl version is
being used in case of failure.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-01-12 15:48:44 +01:00
Fabiano Fidêncio
26dfcb627b tools: Build kubectl image
This image will be used by our helm charts to verify that a
kata-containers deployment is correct.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-01-12 15:48:44 +01:00
Alex Lyn
d03eccf567 runtime-rs: Improve wait_for_migration to avoid fixed sleep
Enhance the wait_for_migration implementation to reliably wait for
QEMU migration completion and avoid the previous `sleep(280ms)`
delay.
(1) Add an initial fast-path query to return immediately if
migration is already completed/failed/cancelled.
(2) Use a hard deadline to enforce timeouts deterministically.
(3) Implement adaptive polling with backoff and a maximum interval
to reduce QMP load while keeping responsiveness.
(4) Unify migration status handling and return clear errors on
failed/cancelled states.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-01-12 20:06:55 +08:00
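
For illustration, a minimal sketch of the polling pattern described above, assuming a hypothetical `query_status` callback standing in for the QMP `query-migrate` call (the real runtime-rs code is async; this sketch uses blocking sleeps):
```
use std::time::{Duration, Instant};

enum MigrationStatus {
    Active,
    Completed,
    Failed,
    Cancelled,
}

fn wait_for_migration(
    mut query_status: impl FnMut() -> MigrationStatus,
    timeout: Duration,
) -> Result<(), String> {
    // (1) Fast path: return immediately if migration already finished.
    match query_status() {
        MigrationStatus::Completed => return Ok(()),
        MigrationStatus::Failed => return Err("migration failed".into()),
        MigrationStatus::Cancelled => return Err("migration cancelled".into()),
        MigrationStatus::Active => {}
    }

    // (2) Hard deadline to enforce the timeout deterministically.
    let deadline = Instant::now() + timeout;
    // (3) Adaptive polling: back off up to a maximum interval.
    let mut interval = Duration::from_millis(10);
    let max_interval = Duration::from_millis(250);

    while Instant::now() < deadline {
        std::thread::sleep(interval);
        match query_status() {
            MigrationStatus::Completed => return Ok(()),
            // (4) Clear errors on failed/cancelled states.
            MigrationStatus::Failed => return Err("migration failed".into()),
            MigrationStatus::Cancelled => return Err("migration cancelled".into()),
            MigrationStatus::Active => interval = (interval * 2).min(max_interval),
        }
    }
    Err("timed out waiting for migration".into())
}

fn main() {
    let mut polls = 0;
    let result = wait_for_migration(
        || {
            polls += 1;
            if polls < 3 {
                MigrationStatus::Active
            } else {
                MigrationStatus::Completed
            }
        },
        Duration::from_secs(1),
    );
    assert!(result.is_ok());
}
```
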
Alex Lyn
5026b33455 runtime-rs: Introduce a method to detect current migrate info
Return information about the current migration process. The QMP input
and output are as below:
{ 'command': 'query-migrate', 'returns': 'MigrationInfo' }

Note that this QEMU API is only available with qapi-rs (v0.15+).

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-01-12 20:06:55 +08:00
Alex Lyn
c472b5db54 runtime-rs: Bump qapi-rs from 0.14 to 0.15
The detailed information about the updated versions as below:
```
qapi = { version = "0.15", features = ["qmp", "async-tokio-all"] }
qapi-spec = "0.3.2"
qapi-qmp = "0.15.0"
```
and it will correct some corresponding structures.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-01-12 20:06:55 +08:00
Manuel Huber
183507beeb agent: change secure_storage_integrity default
Change the secure_storage_integrity option's default value to true.
With this, integrity protection for encrypted block device contents
will be requested from the confidential data hub by default, see the
agent's cdh_handler_trusted_storage function in rpc.rs.
This behavior can be disabled by explicitly setting the
agent.secure_storage_integrity parameter to 0 or false via kernel
command line parameters.

This will affect the trusted storage implementation for the guest-pull
mechanism, and it will affect future implementations using this code
path, such as implementations for ephemeral secure storage.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-01-10 16:54:03 +01:00
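
For illustration, a minimal sketch of how a boolean agent option defaulting to true can be overridden from the kernel command line; the parameter name comes from the commit above, while the parsing logic is a simplified assumption:
```
/// Simplified, hypothetical parser for the kernel command line.
fn secure_storage_integrity(cmdline: &str) -> bool {
    let mut value = true; // the new default
    for param in cmdline.split_whitespace() {
        if let Some(v) = param.strip_prefix("agent.secure_storage_integrity=") {
            // "0" or "false" disables the integrity protection.
            value = !matches!(v, "0" | "false");
        }
    }
    value
}

fn main() {
    assert!(secure_storage_integrity("console=hvc0"));
    assert!(!secure_storage_integrity("agent.secure_storage_integrity=false"));
}
```
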
stevenhorsman
a0d96256f5 packaging: Fix tools permissions issue
In some builds we are seeing:
```
error: could not create temp file /opt/rustup/tmp/r2xu46kwuyc7k2kr_file: Permission denied (os error 13)
```
in the agent-ctl build, so try and port the fix from #12313 to the tools
build to resolve this.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-01-09 21:45:26 +01:00
Federico A. Corazza
787768fe9b kata-deploy: Fix extraction of the containerd major version
Fixes deploying kata-containers using k3s. The deploy script fails with /opt/kata-artifacts/scripts/kata-deploy.sh: line 397: [: too many arguments

Signed-off-by: Federico A. Corazza <git@facorazza.com>
2026-01-09 19:52:18 +01:00
stevenhorsman
5067ed7d9a versions.yaml: Fix formatting errors
yamllint complains that there is only one space before the comment,
so add a second to prevent this annoying message from showing up.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-01-09 19:36:31 +01:00
stevenhorsman
a850f66fc4 versions: Bump rust to 1.89
Following the agreed toolchain policy, bump rust to the current
release (1.91) minus 2, i.e. 1.89.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-01-09 19:36:31 +01:00
Manuel Huber
df2896c298 docs: Create NVIDIA GPU passthrough QEMU scenario
Create a new page for a reference implementation for Kubernetes
using QEMU, the go shim and an NVIDIA rootfs. The new page
contains information on:
- components involved in the NVIDIA (TEE) GPU scenario
- orchestration flow for GPU passthrough scenarios
- deployment guidance

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-01-09 19:02:56 +01:00
Manuel Huber
43627805f4 docs: Improve structure and flow of NVIDIA guide
- Apply a few structural/grouping changes and improve flow
- Group build sections together
- Move usage examples to last section

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-01-09 19:02:56 +01:00
Steve Horsman
489deaad17 Merge pull request #12297 from manuelh-dev/mahuber/fix-doc
docs: Fix trusted-image-storage reference
2026-01-09 15:22:25 +00:00
Hyounggyu Choi
2962e14c10 virtiofsd: fix RUSTUP_HOME and CARGO_HOME permissions for non-root builds
The following error was observed during virtiofsd static build:

```
error: could not create temp file /opt/rustup/tmp/p44enysfaxwdbvw4_file:
Permission denied (os error 13)
```

This occurs because RUSTUP_HOME and CARGO_HOME were initialized by the
root user during `docker build`, but `cargo build` is executed as a
non-root user via 'docker run --user'.

Ensure these directories are writable by adjusting the permission after
the toolchain installation is complete.

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2026-01-09 14:01:20 +01:00
Manuel Huber
65aa99f291 docs: Fix trusted-image-storage reference
The sample uses a volume device name which does not exist,
hence the fix.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-01-09 11:41:18 +00:00
Saul Paredes
02979a13e3 Merge pull request #12208 from romoh/patch-1
ci: Update AKS setup post Pod Sandboxing GA
2026-01-08 11:02:05 -08:00
Fabiano Fidêncio
f8318c0542 kata-deploy: Remove unused dependency
We're depending solely on toml_edit, thus we can safely remove the toml
dependency.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-01-08 18:58:11 +01:00
Fupan Li
b3546f3a68 Merge pull request #12282 from kata-containers/set-required-ci
Set several tests as required ci
2026-01-08 20:34:39 +08:00
Mikko Ylinen
cc6277b735 Revert "tdx: Update GPU config for the latest TDX stack"
Prefer the "full feature TDVF" instead of the generic OVMF build. See
Option-B in
https://github.com/tianocore/edk2/tree/master/OvmfPkg/IntelTdx#configurations-and-features
for the extra hardening supported.

FIRMWAREPATH_NV also seems to be TDX-specific, despite what the
Makefile suggests. Therefore, it can be dropped completely.

This reverts commit 66ccc25724.
2026-01-08 10:21:47 +01:00
Mikko Ylinen
e02e226431 packaging: build OVMF for Intel TDX again
OVMF build for Intel TDX (aka "TDVF") was disabled in favor of Ubuntu/
CentOS pre-upstream releases of Intel TDX.

See 4292c4c3b1.

It's time to re-enable the build and move runtime configurations to
use it (the latter will be done in a later commit).

This is a partial revert of 4292c4c3b with the following changes:
- Stop calling OVMF for Intel TDX "TDVF" and follow the naming distros
use for TDX enabled build: OVMF.inteltdx.fd.
- Single binary OVMF.inteltdx.fd is supported using -bios QEMU param.
- Secure Boot infrastructure is disabled since Kata does not support it.

Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
2026-01-08 10:21:47 +01:00
Alex Lyn
f3d92a8b4a dragonball: Fix UT failed in test_fs_manipulate_backend_fs
Improve the checking logic for whether the source path exists.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-01-08 12:42:00 +08:00
Alex Lyn
7de968b416 dragonball: Fix warning of unused method
This method is actually called; just add the `#[allow(dead_code)]`
attribute to let the UTs pass. The warning looks like:
warning: method `send_message_with_payload` is never used
    |
224 | impl<R: Req> Endpoint<R> {
    | ------------------------ method in this implementation
...
522 |     pub fn send_message_with_payload<T: Sized, P: Sized>(
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^
    |
    = note: `#[warn(dead_code)]` on by default

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-01-08 11:01:34 +08:00
Alex Lyn
36d3d7c3bf dragonball: Fix warnings of result to be handled
warning: unused `std::result::Result` that must be used
   -->
src/dragonball/dbs_virtio_devices/src/vhost/vhost_user/net.rs:679:9
    |
679 | /         VirtioDevice::<Arc<GuestMemoryMmap<()>>, QueueSync,
GuestRegionMmap>::write_config(
680 | |             &mut dev, 0, &config,
681 | |         );
    | |_________^
    |
    = note: this `Result` may be an `Err` variant, which should be
handled
    = note: `#[warn(unused_must_use)]` on by default
help: use `let _ = ...` to ignore the resulting value
    |
679 |         let _ = VirtioDevice::<Arc<GuestMemoryMmap<()>>,
QueueSync, GuestRegionMmap>::write_config(
    |         +++++++

warning: unused `std::result::Result` that must be used
   -->
src/dragonball/dbs_virtio_devices/src/vhost/vhost_user/net.rs:683:9
    |
683 | /         VirtioDevice::<Arc<GuestMemoryMmap<()>>, QueueSync,
GuestRegionMmap>::read_config(
684 | |             &mut dev, 0, &mut data,
685 | |         );
    | |_________^
    |
    = note: this `Result` may be an `Err` variant, which should be
handled
help: use `let _ = ...` to ignore the resulting value
    |
683 |         let _ = VirtioDevice::<Arc<GuestMemoryMmap<()>>,
QueueSync, GuestRegionMmap>::read_config(
    |         +++++++

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-01-08 10:52:19 +08:00
Alex Lyn
6a1b25a4b0 dragonball: Fix warning of variable does not need to be mutable
The warning looks like:
...
warning: variable does not need to be mutable
   --> src/dragonball/dbs_virtio_devices/src/vsock/csm/txbuf.rs:217:13
    |
217 |         let mut tmp: Vec<u8> = vec![0; TxBuf::SIZE - 2];
    |             ----^^^
    |             |
    |             help: remove this `mut`
    |
    = note: `#[warn(unused_mut)]` on by default
...

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-01-08 10:44:25 +08:00
Alex Lyn
064271b9cb dragonball: Fix unexpected cfg condition of test-resources
Fix the warnings about the unexpected cfg of test-resources; the
detailed warning message looks like this:

...
warning: unexpected `cfg` condition value: `test-resources`
   --> src/dragonball/dbs_virtio_devices/src/fs/device.rs:973:11
    |
973 |     #[cfg(feature = "test-resources")]
    |           ^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
    = note: expected values for `feature` are: `fuse-backend-rs`,
`vhost`, `vhost-net`, `vhost-rs`, `vhost-user`, `vhost-user-blk`,
`vhost-user-fs`, `vhost-user-net`, `virtio-balloon`, `virtio-blk`,
`virtio-fs`, `virtio-fs-pro`, `virtio-mem`, `virtio-mmio`, `virtio-net`,
and `virtio-vsock`
    = help: consider adding `test-resources` as a feature in
`Cargo.toml`
...

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-01-08 10:39:33 +08:00
Alex Lyn
ef36c47ca4 runtime-rs: Fix deprecated method in UT
Remove into_path() and replace it with keep().

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-01-08 10:32:31 +08:00
Alex Lyn
e4451baa84 tests: Set run-nerdctl-tests with qemu-runtime-rs required
run-nerdctl-tests (qemu-runtime-rs)

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-01-08 09:56:50 +08:00
Alex Lyn
56a21c33a3 tests: Set stability tests with qemu-runtime-rs required
run-containerd-stability (active, qemu-runtime-rs)
run-containerd-stability (lts, qemu-runtime-rs)

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-01-08 09:56:50 +08:00
Alex Lyn
679e31d884 tests: Set run-nydus CIs as required
run-basic-amd64-tests / run-nydus

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2026-01-08 09:56:50 +08:00
Fabiano Fidêncio
6b3953dd51 tests: k8s: liveness-probes: Adjust events grep
Up to k8s 1.34 we could grep for "Started containerd". From k8s 1.35
onwards the event message changed and we should, instead, grep for
"Container started".

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-01-07 23:01:59 +01:00
Fabiano Fidêncio
c4194538e2 versions: Bump QEMU to v10.2.0
QEMU v10.2.0 was released on December 24th, 2025.

The experimental GPU SNP / TDX are also pointing to v10.2.0 release with
their gpu-{snp,tdx}-20260107 branch.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-01-07 12:30:55 +01:00
Steve Horsman
93ad6fde75 Merge pull request #12294 from stevenhorsman/remediate-RUSTSEC-2021-0064
versions: Bump sha2 crate version
2026-01-07 09:53:26 +00:00
stevenhorsman
c456b84537 versions: Bump sha2 crate version
sha2 0.9.3 includes the use of cpuid-bool, which was renamed to cpufeatures
around 5 years ago. Try moving to a workspace dependency of sha2
and bumping to the latest version to remediate RUSTSEC-2021-0064

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2026-01-06 15:41:34 +00:00
Roaa Sakr
44c79cf14a ci: Update AKS setup post Pod Sandboxing GA
Update workload-runtime value to align with current AKS Pod Sandboxing documentation post GA.

Signed-off-by: Roaa Sakr <romoh@microsoft.com>
2026-01-05 13:47:33 -08:00
Steve Horsman
9463dd970e Merge pull request #12287 from mythi/drop-qat
use-cases: drop Intel QuickAssist instructions
2026-01-05 13:28:16 +00:00
Mikko Ylinen
99bc0f49cc use-cases: drop Intel QuickAssist instructions
While the use-case of Intel QuickAssist (QAT) accelerated crypto
and/or compression with k8s and Kata Containers is still valid,
the setup instructions are outdated:

Starting with Intel Xeon Gen4 (Sapphire Rapids), the QAT driver
stack moved to in-tree drivers, without a separate SR-IOV VF
driver.

Drop all the setup instructions but keep the use-cases doc
for reference. Users wanting to enable the use-case should consult
the Intel QAT device plugins or Intel QAT DRA driver authors.

Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
2026-01-02 12:14:04 +02:00
Fupan Li
b27a80b800 Merge pull request #12156 from Apokleos/required-coco-dev-rs
tests: Make the tests coco-dev job with coco-dev-runtime-rs required
2025-12-25 17:30:40 +08:00
Steve Horsman
bdc5f7d4be Merge pull request #12271 from stevenhorsman/bump-rust-to-1.88
Bump rust to 1.88
2025-12-23 21:38:42 +00:00
Alex Lyn
0b1a5c6e93 tests: Make the tests coco-dev job with coco-dev-runtime-rs required
The nontee job (run-k8s-tests-coco-nontee) for qemu-coco-dev-runtime-rs
is running well and it's time to make it required when the CI runs.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-12-23 09:54:52 +08:00
stevenhorsman
b6108a7c4a dragonball: Fix manual implementation of .is_multiple_of
Use this new method to avoid the clippy warning and increase
readability

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-22 19:50:19 +00:00
stevenhorsman
55be31ef0f runtime-rs: Fix manual implementation of .is_multiple_of
Use this new method to avoid the clippy warning and increase
readability

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-22 19:50:19 +00:00
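
For illustration, the pattern the two commits above replace (`is_multiple_of` has been stable on the integer types since Rust 1.87):
```
fn main() {
    let (len, align) = (4096u64, 512u64);
    // Before: manual modulo check, flagged by clippy.
    assert!(len % align == 0);
    // After: the intent is explicit.
    assert!(len.is_multiple_of(align));
}
```
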
stevenhorsman
1d139a7c92 versions: Bump rust to 1.88
In prep for the bump to rust 1.90, try bumping
to 1.88 first to see if the CI is successful here

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-22 19:50:19 +00:00
stevenhorsman
c6053e976f dragonball: Improve vector initialisation
Directly initialise a zero-filled vector, rather than resizing later

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-22 19:50:19 +00:00
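
For illustration, the transformation this commit describes:
```
fn main() {
    const SIZE: usize = 1024;
    // Before: allocate, then resize to zero-fill.
    let mut buf: Vec<u8> = Vec::with_capacity(SIZE);
    buf.resize(SIZE, 0);
    // After: directly initialise a zero-filled vector.
    let buf2: Vec<u8> = vec![0; SIZE];
    assert_eq!(buf, buf2);
}
```
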
stevenhorsman
18a51dad98 dragonball: Fix manual slice size calculation
Using the built-in size_of_val is easier to read and less error-prone
than doing this calculation manually

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-22 19:50:19 +00:00
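
For illustration (values made up):
```
use std::mem::{size_of, size_of_val};

fn main() {
    let queue = [0u64; 16];
    // Before: manual element-size multiplication.
    let bytes_manual = queue.len() * size_of::<u64>();
    // After: built-in, and harder to get wrong.
    assert_eq!(bytes_manual, size_of_val(&queue[..]));
}
```
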
stevenhorsman
188c9e6eb7 dragonball: Prefer from over into
Implementing From gives Into for free, so prefer this method

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-22 19:50:19 +00:00
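
For illustration, the idiom this commit applies (type name hypothetical):
```
struct Mmio(u64);

// Implementing From is preferred: the matching Into comes for free
// via the blanket implementation in the standard library.
impl From<u64> for Mmio {
    fn from(addr: u64) -> Self {
        Mmio(addr)
    }
}

fn main() {
    let a = Mmio::from(0x1000);
    let b: Mmio = 0x1000u64.into(); // provided automatically
    assert_eq!(a.0, b.0);
}
```
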
stevenhorsman
c7daa12fe6 dragonball: Remove unnecessary cast
Don't cast usize to usize

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-22 19:50:19 +00:00
stevenhorsman
6c19bd01c8 dragonball: Fix redundant pattern matching
Convert `matches!(desc, None)` to `desc.is_none()`, which is simpler

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-22 19:50:19 +00:00
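
For illustration:
```
fn main() {
    let desc: Option<u32> = None;
    // Before: redundant pattern matching.
    assert!(matches!(desc, None));
    // After: simpler and clearer.
    assert!(desc.is_none());
}
```
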
stevenhorsman
15c6ef5988 dragonball: Fix deprecated cargo-clippy cfg
#[cfg(feature = "cargo-clippy")] has been deprecated for years,
so should be replaced with `#[cfg(clippy)]`

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-22 19:50:19 +00:00
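
For illustration, the cfg rename this commit applies:
```
// Before (deprecated): gated on a feature that clippy no longer sets.
#[cfg(feature = "cargo-clippy")]
fn lint_only_helper_old() {}

// After: the built-in cfg that clippy sets while linting.
#[cfg(clippy)]
fn lint_only_helper() {}

fn main() {}
```
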
stevenhorsman
e0d09dd787 dragonball: Fix useless use of vec!
`vec![...]` is the same as `[...]`, so remove it to clean up code

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-22 19:50:19 +00:00
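
For illustration:
```
fn main() {
    // Before: a heap-allocated Vec where a plain array suffices.
    for x in vec![1, 2, 3] {
        println!("{x}");
    }
    // After: iterate the array directly, with no allocation.
    for x in [1, 2, 3] {
        println!("{x}");
    }
}
```
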
stevenhorsman
4fb90d61aa dragonball: Temporarily skip kvm bindgen tests
There are many, many null pointer dereferences in the bindgen code
when moving between rust 1.85.1 and 1.86, and no docs of the source
that it was generated from, so try and skip
these tests from running until an SME can look at them (@lifupan).

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-22 19:50:19 +00:00
stevenhorsman
04306c162b genpolicy: Fix uninlined_format_args
Clippy is recommending that format args are inlined for
better clarity, so update our code to remove these warnings

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-22 19:50:11 +00:00
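
For illustration, the pattern this and the similar commits below apply:
```
fn main() {
    let name = "genpolicy";
    // Before: positional argument, flagged by clippy::uninlined_format_args.
    println!("tool: {}", name);
    // After: the variable is inlined into the format string.
    println!("tool: {name}");
}
```
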
stevenhorsman
b9ce0bbdf8 trace-forwarder: Fix uninlined_format_args in examples
Clippy is recommending that format args are inlined for
better clarity, so update our code to remove these warnings

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-22 19:50:11 +00:00
stevenhorsman
c5f0acef23 kata-ctl: Fix uninlined_format_args
Clippy is recommending that format args are inlined for
better clarity, so update our code to remove these warnings

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-22 19:50:02 +00:00
stevenhorsman
aff3524420 kata-ctl: Refresh runtime-rs crates
runtime-rs crates are pulled into kata-ctl and some of these have
been bumped recently, so update them in kata-ctl as well

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-22 19:50:01 +00:00
stevenhorsman
2caa62f753 agent-ctl: Fix uninlined_format_args
Clippy is recommending that format args are inlined for
better clarity, so update our code to remove these warnings

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-22 19:49:52 +00:00
stevenhorsman
6006b8350d libs: Fix uninlined_format_args
Clippy is recommending that format args are inlined for
better clarity, so update our code to remove these warnings

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-22 19:49:45 +00:00
stevenhorsman
2fde31547a runtime-rs: Fix uninlined_format_args
Clippy is recommending that format args are inlined for
better clarity, so update our code to remove these warnings

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-22 19:49:36 +00:00
stevenhorsman
a299338b6c dragonball: Fix uninlined_format_args
Clippy is recommending that format args are inlined for
better clarity, so update our code to remove these warnings

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-22 19:49:27 +00:00
stevenhorsman
e44c4d901f doc: Fix uninlined_format_args in examples
Clippy is recommending that format args are inlined for
better clarity, so ensure our docs include this

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-22 19:49:27 +00:00
stevenhorsman
b07899f8dc agent: Fix uninlined_format_args
Clippy is recommending that format args are inlined for
better clarity, so update our code to remove these warnings

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-22 19:49:17 +00:00
stevenhorsman
2af88dbb48 agent: bump cdi-rs
In #12151 the version was bumped in Cargo.toml, but the update was not
done, so run `cargo update -p container-device-interface` to apply it

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-20 10:08:45 +00:00
Steve Horsman
97603608ac Merge pull request #12259 from RuoqingHe/filter-tests-requires-kvm
dragonball: Skip tests requiring kvm while kvm is absent
2025-12-19 16:05:33 +00:00
Steve Horsman
81d74346f3 Merge pull request #12255 from stevenhorsman/bump-to-rust-1.90-prep
Preparations for the rust 1.90 bump
2025-12-19 14:41:32 +00:00
Steve Horsman
b75cc16bad Merge pull request #12272 from shwetha-s-poojary/revert_cleanup
workflows: payload: do not remove AGENT_TOOLSDIRECTORY
2025-12-19 14:22:36 +00:00
shwetha-s-poojary
1929ca8879 workflows: payload: do not remove AGENT_TOOLSDIRECTORY
Remove line that deletes $AGENT_TOOLSDIRECTORY

Signed-off-by: shwetha-s-poojary <shwetha.s-poojary@ibm.com>
2025-12-19 05:24:36 -08:00
Alex Lyn
b85084f046 Merge pull request #12266 from BbolroC/fix-selective-skip-for-empty-dir-test
tests: remove re-declared local variable in k8s-empty-dirs.bats
2025-12-19 17:30:07 +08:00
Hyounggyu Choi
3fa1d93f85 tests: remove re-declared local variable in k8s-empty-dirs.bats
Since #12204 was merged, the following error has been observed:

```
bats warning: Executed 1 instead of expected 2 tests
[run_kubernetes_tests.sh:162] ERROR: Tests FAILED from suites: k8s-empty-dirs.bats
```

The cause is that `pod_logs_file` is re-declared as a local variable
in the second test before skipping, which makes it inaccessible
in `teardown()` and leads to an error.

This commit removes the re-declaration of the variable.

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2025-12-18 18:57:16 +01:00
Fabiano Fidêncio
51e9b7e9d1 nydus-snapshotter: Bump to v0.15.10
As it brings a fix that most likely can work around the containerd /
nydus-snapshotter database desynchronization.

Reference: https://github.com/containerd/nydus-snapshotter/pull/700

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-18 18:41:09 +01:00
Fabiano Fidêncio
03297edd3a kata-deploy: rust: Add list verb for runtimeclasses RBAC
The Rust kata-deploy binary calls list_runtimeclasses() during NFD
setup, but the ClusterRole only granted get and patch permissions.

Add the list verb to the runtimeclasses resource permissions to fix
the RBAC error:
  runtimeclasses.node.k8s.io is forbidden: User
  \"system:serviceaccount:kube-system:kata-deploy-sa\" cannot list
  resource \"runtimeclasses\" in API group \"node.k8s.io\" at the
  cluster scope

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-18 18:31:52 +01:00
Ruoqing He
5fa663b1e3 dragonball: Skip tests requiring KVM when KVM is absent
KVM is not available on our ARM runners, so let's skip those tests
accordingly, while making sure the rest of the test cases remain tested
on machines with KVM present and access to the KVM device.

Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
2025-12-18 14:17:46 +00:00
Ruoqing He
7cfb97d41b libs: Introduce skip_if_kvm_unaccessable macro
There are test cases that require interaction with the KVM device;
introduce the skip_if_kvm_unaccessable macro to skip them.

Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
2025-12-18 12:43:20 +00:00
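
For illustration, a sketch of what such a macro can look like; the macro name comes from the commit, while the body here is an assumption:
```
#[macro_export]
macro_rules! skip_if_kvm_unaccessable {
    () => {
        // Treat /dev/kvm as accessible only if it can be opened read/write.
        if std::fs::OpenOptions::new()
            .read(true)
            .write(true)
            .open("/dev/kvm")
            .is_err()
        {
            println!("skipping test: /dev/kvm is not accessible");
            return;
        }
    };
}

#[cfg(test)]
mod tests {
    #[test]
    fn needs_kvm() {
        skip_if_kvm_unaccessable!();
        // ... test body interacting with KVM ...
    }
}
```
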
Manuel Huber
78c41b61f4 tests: nvidia: Update images, probes and timeouts
Changes in NIM/RAG samples:
- update image references
- update memory requirements, timeouts, model name
- sanitize some of the probes and print-out

Further refinements can be made in the future.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-18 10:57:14 +01:00
Manuel Huber
0373428de4 tests: nvidia: Use secret for NGC API key
This is a slight change in the manifest to at least use a secret
for the environment variable.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-18 10:57:14 +01:00
Hyounggyu Choi
56ec8d7788 Merge pull request #12204 from kata-containers/runtime-rs-stability-debug
CI: Upgrade log details for improved error analysis
2025-12-18 10:54:54 +01:00
Alex Lyn
c7dfdf71f5 Merge pull request #11935 from burgerdev/fsgroup
genpolicy: support fsGroup setting in pod security context
2025-12-18 16:47:48 +08:00
stevenhorsman
e5568e65a1 lib: Fix missing copyright and license
Add the copyright date from when the file was first submitted to github

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-18 07:45:27 +00:00
stevenhorsman
175c2c70b1 dragonball: Fix pointer equality check
Use `ptr::eq` to compare references by address rather than the
values that they point to

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-18 07:45:27 +00:00
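
For illustration:
```
use std::ptr;

fn main() {
    let a = String::from("dbs");
    let b = String::from("dbs");
    // Value comparison: true, the contents match.
    assert!(a == b);
    // Address comparison: false, they are distinct objects.
    assert!(!ptr::eq(&a, &b));
    assert!(ptr::eq(&a, &a));
}
```
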
stevenhorsman
a221eaa81d dragonball: Fix length comparison to zero
Replace .len() == 0 with .is_empty() for more clarity

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-18 07:45:27 +00:00
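
For illustration:
```
fn main() {
    let queue: Vec<u8> = Vec::new();
    // Before: comparing the length to zero.
    assert!(queue.len() == 0);
    // After: a clearer statement of intent.
    assert!(queue.is_empty());
}
```
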
stevenhorsman
e73a7c3717 dragonball: Replace manual div_ceil
Use the clearer built-in method

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-18 07:45:27 +00:00
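
For illustration (values made up):
```
fn main() {
    let (bytes, page) = (4097u64, 4096u64);
    // Before: the classic manual rounding-up idiom.
    let pages_manual = (bytes + page - 1) / page;
    // After: the built-in method says what it means.
    assert_eq!(pages_manual, bytes.div_ceil(page));
}
```
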
stevenhorsman
048000654c runtime-rs: Prevent doc test issue
cargo test was trying to evaluate the documentation comment and failing,
so make the comment explicitly text to avoid this

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-18 07:45:27 +00:00
stevenhorsman
4384b6ad9f dragonball: Avoid manual implementation of ok
Refactor to use `.ok()` rather than implementing it ourselves

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-18 07:45:27 +00:00
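
For illustration:
```
// Before: hand-rolled conversion from Result to Option.
fn parse_manual(s: &str) -> Option<u32> {
    match s.parse::<u32>() {
        Ok(v) => Some(v),
        Err(_) => None,
    }
}

fn main() {
    // After: `.ok()` performs the same conversion.
    assert_eq!(parse_manual("42"), "42".parse::<u32>().ok());
}
```
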
stevenhorsman
f4dd69a835 dragonball: Remove unnecessary unwrap
Given that we call `is_some` earlier, we don't then need to unwrap,
so refactor to avoid this

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-18 07:45:27 +00:00
stevenhorsman
20192f819f agent-ctl: Remove unnecessary unwrap
Given that we call `is_some` earlier, we don't then need to unwrap,
so refactor to avoid this

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-18 07:45:27 +00:00
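
For illustration, the pattern removed in the two commits above:
```
fn main() {
    let dev: Option<&str> = Some("vda");
    // Before: an is_some() check followed by unwrap().
    if dev.is_some() {
        println!("device: {}", dev.unwrap());
    }
    // After: one pattern match, no unwrap needed.
    if let Some(d) = dev {
        println!("device: {d}");
    }
}
```
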
stevenhorsman
9bf5f113f9 genpolicy: Allow dead_code
A few structs in genpolicy are never constructed, so add
`#[allow(dead_code)]` to prevent this clippy warning

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-18 07:45:27 +00:00
stevenhorsman
ca1c0c853f libs: Remove doc overindentation
The doc comment had one space too many in its list, so the format was wrong

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-18 07:45:27 +00:00
stevenhorsman
501b41cf8f dragonball: Remove doc overindentation
The doc comment had one space too many in its list, so the format was wrong

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-18 07:45:27 +00:00
stevenhorsman
6a45ee0874 runtime-rs: Improve map iteration
The key was never used, just the value, so iterate over `.values()` instead

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-18 07:45:27 +00:00
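
For illustration (map contents made up):
```
use std::collections::HashMap;

fn main() {
    let mtus: HashMap<String, u32> = HashMap::from([("eth0".to_string(), 1500)]);
    // Before: iterating pairs but ignoring the key.
    for (_, mtu) in &mtus {
        println!("{mtu}");
    }
    // After: iterate only over what is actually used.
    for mtu in mtus.values() {
        println!("{mtu}");
    }
}
```
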
stevenhorsman
2f49dffcd7 runtime-rs: Remove dead code
`VmmPingResponse` and `NetInterworkingModel` are
never constructed, so remove them

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-18 07:45:27 +00:00
stevenhorsman
35557745b1 runtime-rs: Fix char_indices_as_byte_indices
In Unicode you can have multi-byte characters, so it's better to
use char_indices than to enumerate the bytes

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-18 07:45:27 +00:00
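
For illustration of why this matters:
```
fn main() {
    let s = "héllo";
    // enumerate() over chars yields char positions, not byte offsets,
    // which breaks slicing once multi-byte characters appear.
    // char_indices() yields the correct byte index for each char.
    for (byte_idx, ch) in s.char_indices() {
        let _ = &s[byte_idx..byte_idx + ch.len_utf8()];
    }
}
```
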
stevenhorsman
69ca6c0de0 runtime-rs: Fix manual_contains
Use contains to be more concise and efficient rather than manually
implementing this check

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-18 07:45:27 +00:00
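
For illustration:
```
fn main() {
    let cpus = [0u32, 2, 4];
    let target = 2u32;
    // Before: manually scanning with iter().any(...).
    let found_manual = cpus.iter().any(|&c| c == target);
    // After: contains() is more concise and just as efficient.
    assert_eq!(found_manual, cpus.contains(&target));
}
```
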
stevenhorsman
0027f6cae0 agent: Fix dead_code warning
VirtioBlkCcwDeviceHandler and VirtioBlkCcwHandler
are only constructed on s390x, so add #[cfg(target_arch = "s390x")]
to all the code

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-18 07:45:27 +00:00
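
For illustration, the shape of the fix (type name from the commit, body hypothetical):
```
// The type is only constructed on s390x, so compiling it on every
// architecture triggers dead_code warnings elsewhere; gate it instead.
#[cfg(target_arch = "s390x")]
pub struct VirtioBlkCcwDeviceHandler;

#[cfg(target_arch = "s390x")]
impl VirtioBlkCcwDeviceHandler {
    pub fn new() -> Self {
        VirtioBlkCcwDeviceHandler
    }
}

fn main() {}
```
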
stevenhorsman
3b2c83f9d2 trace-forwarder: Fix clippy::io_other_error issue
We can use the new Error::other option rather than
Error::new(ErrorKind::Other, ...)

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-18 07:45:26 +00:00
stevenhorsman
b1cfa98524 runtime-rs: Fix clippy::io_other_error issue
We can use the new Error::other option rather than
Error::new(ErrorKind::Other, ...)

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-18 07:45:26 +00:00
stevenhorsman
dc8f628dd1 libs: Fix clippy::io_other_error issue
We can use the new Error::other option rather than
Error::new(ErrorKind::Other, ...) and drop our own macro that did this mapping

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-18 07:45:26 +00:00
stevenhorsman
5f1d3481af dragonball: Fix clippy::io_other_error issue
We can use the new Error::other option rather than
Error::new(ErrorKind::Other, ...)

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-18 07:45:26 +00:00
stevenhorsman
9ec7109712 agent: Fix clippy::io_other_error issue
We can use the new Error::other option rather than
Error::new(ErrorKind::Other, ...)

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-18 07:45:26 +00:00
stevenhorsman
34d299ae44 vsock-exporter: Fix clippy::io_other_error issue
We can use the new Error::other option rather than
Error::new(ErrorKind::Other, ...)

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-18 07:45:26 +00:00
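
For illustration, the change applied across the six commits above (`Error::other` has been available since Rust 1.74):
```
use std::io::{Error, ErrorKind};

fn main() {
    // Before: the verbose ErrorKind::Other constructor.
    let old = Error::new(ErrorKind::Other, "vsock handshake failed");
    // After: the shorthand constructor.
    let new = Error::other("vsock handshake failed");
    assert_eq!(old.kind(), new.kind());
}
```
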
stevenhorsman
b2f9f23504 dragonball: Fix mismatched_lifetime_syntaxes issue
Fix the `warning: hiding a lifetime that's elided elsewhere is confusing`.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-18 07:45:26 +00:00
stevenhorsman
8bbbc3a58b lib: Fix mismatched_lifetime_syntaxes issue
Fix the warning throw up:
```
warning: hiding a lifetime that's elided elsewhere is confusing
  --> /root/go/src/github.com/kata-containers/kata-containers/src/libs/kata-types/src/utils/u32_set.rs:50:17
   |
50 |     pub fn iter(&self) -> Iter<u32> {
   |                 ^^^^^     --------- the same lifetime is hidden here
   |                 |
   |                 the lifetime is elided here
   |
   = help: the same lifetime is referred to in inconsistent ways, making the signature confusing
   = note: `#[warn(mismatched_lifetime_syntaxes)]` on by default
help: use `'_` for type paths
   |
50 |     pub fn iter(&self) -> Iter<'_, u32> {
   |                                +++
   ```

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-18 07:45:26 +00:00
Xuewei Niu
a65c2b06b8 Merge pull request #12169 from zhangls-0524/new-fix-issue-11996
runtime-rs: Block Device Rootfs Mount Options Lost During Storage Object Creation
2025-12-18 10:09:38 +08:00
Fabiano Fidêncio
0e534fa7fe versions: Update virtiofsd to v1.13.3
Update virtiofsd to its latest release.

Here we also need to update the alpine version used by the builder, as we
need a version of musl-dev new enough to have wrappers for preadv2 and
pwritev2. While bumping, bump to the latest.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-18 00:51:08 +01:00
Fabiano Fidêncio
1d2e19b07c versions: Update pause image to 3.10.1
Update pause image to its latest release.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-18 00:51:08 +01:00
Fabiano Fidêncio
6211c10904 versions: Update libseccomp to 2.6.0
Update libseccomp to its latest release.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-18 00:51:08 +01:00
Fabiano Fidêncio
0e0a92533c versions: update lvm2 to v2_03_38
Update lvm2 to its latest release.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-18 00:51:08 +01:00
Fabiano Fidêncio
142c7d6522 versions: Update gperf to 3.3
Update gperf to its latest release.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-18 00:51:08 +01:00
Fabiano Fidêncio
e757485853 versions: Update cryptsetup to v2.8.1
Update cryptsetup to its latest release

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-18 00:51:08 +01:00
Fabiano Fidêncio
35cd5fb1d4 versions: Update helm to v4.0.4
Update helm to its latest release

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-18 00:51:08 +01:00
Tobin Feldman-Fitzthum
decc09e975 tests: cc: add test with SNP reference values
Add two attestation tests. The first one sets a resource policy that
requires CPU0 to have an affirming trust level. This is a negative test
which can run on any platform. Setting this policy without setting any
reference values should result in an attestation failure.

Next, a second test will set the same policy, but this time it will use
the journal log to find the QEMU command line from the previous test and
calculate the expected reference values. Currently this is only
supported on SNP using the sev-snp-measure tool, but the same flow
should work on other platforms.

Signed-off-by: Tobin Feldman-Fitzthum <tfeldmanfitz@nvidia.com>
2025-12-18 00:12:11 +01:00
Ruoqing He
8b0d650081 dragonball: Use unique name for vhost path
The five tests are set to the same vhost socket path, which could lead
to racing with one another. Use unique names to avoid this.

Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
2025-12-17 22:25:55 +01:00
Fabiano Fidêncio
320f1ce2a3 versions: Bump experimental {tdx,snp} QEMU
Let's bump the experimental {tdx,snp} QEMU to the tags created today in the
Confidential Containers repo, which match QEMU 10.2.0-rc3.

This bump is mostly for early testing what will become 10.2.0, which
will be bumped everywhere then.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-17 17:42:04 +01:00
Alex Lyn
3696d9143a tests: Correct the teardown_common in cpu-ns.bats
It will address the issue:
"# bats warning: Executed 0 instead of expected 1 tests"

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-12-17 16:14:10 +00:00
Alex Lyn
a28f24ef8c tests: move the get_pod_config_dir into setup_common
As each case needs the get_pod_config_dir preparation,
a better method is to move it directly into the setup_common method.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-12-17 16:14:10 +00:00
Alex Lyn
5778b0a001 tests: Introduce measure_node_time to get test case end time
To measure the journal duration, we need to clearly print the journal
start time and end time for each case, which helps ensure the journal
log covers the specified period for the case.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-12-17 16:14:10 +00:00
Alex Lyn
648f0913ca tests: Load lib.rs in bats to ensure related function available
The lib.rs should be loaded first, before executing the related function calls.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-12-17 16:14:10 +00:00
Alex Lyn
0929c84480 runtime-rs: Reduce output log and increase log level
For failure cases within CI, we need to dump the kata log to help
address issues, but currently large log messages mean we can only
see a partial log.

We remove the initdata log output and increase the log level to reduce
the log output.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-12-17 16:14:10 +00:00
Alex Lyn
bbec15d695 tests: delete policy_settings_dir only for first test case
Currently policy_settings_dir is created only when
BATS_TEST_NUMBER == "1",
but delete_tmp_policy_settings_dir "${policy_settings_dir}" is
called in teardown() for every test. This means that for tests
after the first one teardown() may attempt to delete a directory
that was already removed by a previous test, or rely on a value
that does not belong to the current test execution.

Adjust teardown logic so that policy_settings_dir is only deleted
for the first test case (BATS_TEST_NUMBER == "1") and ignored for
subsequent tests. This keeps the original optimization of running
genpolicy only once, while avoiding unnecessary or confusing cleanup
attempts in later test cases.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-12-17 16:14:10 +00:00
Alex Lyn
24e68b246f tests: Add missing bin env at the head of bats
Add the missing `#!/usr/bin/env bash` shebang at the head of the bats files.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-12-17 16:14:10 +00:00
Alex Lyn
93ba6a8e76 tests: Make pod_name a global variable
Previously, pod_name was declared local, so it could not be seen from
the teardown() function, causing failures.
This commit removes the `local` qualifier to make pod_name a global
variable.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-12-17 16:14:10 +00:00
Alex Lyn
89dce4eff6 tests: Enhance debug log output
Introduce setup_common() in setup() and teardown_common() in teardown()
to gather enough logs to help with debugging.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-12-17 16:14:10 +00:00
Fabiano Fidêncio
88cdfab604 runtime: nvidia: Align static_sandbox_resource_mgmt
Let's ensure we have those aligned for both the CC and non-CC use-cases.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-17 17:04:51 +01:00
Fabiano Fidêncio
995770dbeb runtime: nvidia: Use cold-plug by default
Now that we have the way to do cold-plug, let's ensure we also use it
for the non-CC use case.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-17 17:04:51 +01:00
Hyounggyu Choi
7f72acc266 Merge pull request #12180 from BbolroC/enable-vfio-ap-passthrough-runtime-rs
runtime-rs: Enable VFIO-AP passthrough (hotplug only) on s390x
2025-12-17 15:50:10 +01:00
Hyounggyu Choi
f1b4327dba Merge pull request #12247 from fidencio/topic/ci-store-the-tarballs-we-rely-on-on-gchr-follow-up
build: Fix GPG key for gperf & Pass PUSH_TO_REGISTRY and GH_TOKEN to Docker builds
2025-12-17 13:53:58 +01:00
Fabiano Fidêncio
5415cf4e0f workflows: payload: Remove unneeded stuff from the runner
Otherwise we may hit a `no space left on device` when building the rust
kata-deploy binary.

This happens mostly because of the multi-stage build used to generate a
distroless final container.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-17 09:57:02 +01:00
Fabiano Fidêncio
98c5276546 helm: runtimeclasses: Match the kata-deploy rust deployment
There we ensure labels are added to better deal with ownership of the
runtimeclasses. It's not strictly needed here, as helm does take care of
the ownership, but it doesn't hurt to follow what seems to be a common
practice.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-17 09:57:02 +01:00
Fabiano Fidêncio
6130d7330f ci: Run a nightly job using the kata-deploy rust
Let's shamelessly duplicate the nightly job to have at least nightly
runs using the rust implementation of kata-deploy.

The reason for doing that is to be pragmatic, as pragmatic as possible,
and avoid switching away from the scripts before the 3.24.0 release,
while still testing both ways until the switch happens.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-17 09:57:02 +01:00
Fabiano Fidêncio
fbc29f3f5e kata-deploy: helm: Adapt to the rust binary
Unlike the scripts, which are called as `bash -c ...`, the kata-deploy
rust binary must be invoked directly, as we do not even have a shell in
its container.

For now, the image used for the rust version has the "-rust" suffix,
which will help us keep both ways being used / tested for a little
while.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-17 09:57:02 +01:00
Fabiano Fidêncio
9d88c6b1d7 kata-deploy: Oxidize the script
The kata-deploy shell script is not THAT bad and, to be honest, it's
quite handy for quick hacks and quick changes. However, it has become
increasingly harder to maintain as its scope has grown from a testing
tool to the project's proper front door, lacking unit tests, and with
an abundance of complex regular expressions and bashisms needed to
properly parse the environment variables it consumes.

Moreover, the fact that it is a Frankenstein's monster glued together
from python packages, golang binaries, and a distro-dependent container
makes it VERY HARD to use from a distroless container (thus avoiding
security issues), preventing further integration with components that
require a higher standard of security than we've been requiring.

With everything said, with the help of Cursor (mostly for generating
the test cases), here comes the oxidized version of the script, which
runs from a distroless container image.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-17 09:57:02 +01:00
Fabiano Fidêncio
c9cd79655d build: Pass PUSH_TO_REGISTRY and GH_TOKEN to Docker builds
The ORAS cache helper needs PUSH_TO_REGISTRY to be set to 'yes' to
push new artifacts to the cache. However, this environment variable
was not being passed to the Docker container during agent, tools, and
busybox builds.

Moreover, for ghcr.io authentication, add support for using GH_TOKEN and
GITHUB_ACTOR as fallbacks when explicit credentials
(ARTEFACT_REGISTRY_USERNAME/PASSWORD) are not provided.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-16 21:58:16 +01:00
Fabiano Fidêncio
b11cea3113 build: Fix GPG key for gperf
The GPG key used for gperf was incorrectly set to the busybox
maintainer's key (Denis Vlasenko) instead of the gperf maintainer's
key (Marcel Schaible).

Wrong key (busybox): C9E9416F76E610DBD09D040F47B70C55ACC9965B
                     Denis Vlasenko <vda.linux@googlemail.com>

Correct key (gperf): EDEB87A500CC0A211677FBFD93C08C88471097CD
                     Marcel Schaible <marcel.schaible@studium.fernuni-hagen.de>

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-16 21:58:16 +01:00
Fabiano Fidêncio
6e01ee6d47 helm: Provide kata-remote runtime class
kata-remote is a runtime class that cloud-api-adaptor relies on to work.

kata-remote by itself does nothing, and that's the reason it's disabled
by default. We're only adding it here so cloud-api-adaptor charts can
simply do something like `--set shims.remote.enabled=true`.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-16 21:57:49 +01:00
Fabiano Fidêncio
0a0fcbae4a gatekeeper: Adjust to kata-tools
A few jobs have been renamed as part of the kata-tools split.
Let's add them all here.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-16 18:22:40 +01:00
Fabiano Fidêncio
fb326b53df agent: Ensure MS_REMOUNT is respected
When updating ephemeral storages, MS_REMOUNT is explicitly passed
because, for instance, `/dev/shm` should be remounted after memory is
hotplugged.

Until now Kata Containers has been explicitly ignoring such updates,
leaving the containers' `/dev/shm` at half of the memory allocated at
startup time, which goes against the expected behaviour.
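
A minimal sketch of the remount, assuming the nix crate (the size
handling is illustrative):
```rust
use nix::mount::{mount, MsFlags};

// Remount /dev/shm with a new size after memory hotplug, instead of
// silently ignoring the MS_REMOUNT request.
fn remount_dev_shm(new_size_kb: u64) -> nix::Result<()> {
    let data = format!("size={}k", new_size_kb);
    mount(
        None::<&str>,        // source is ignored on remount
        "/dev/shm",
        None::<&str>,        // keep the existing filesystem type
        MsFlags::MS_REMOUNT,
        Some(data.as_str()),
    )
}
```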

Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
2025-12-16 15:11:34 +01:00
Fabiano Fidêncio
830d15d4c8 tests: Adapt to using kata-tools
Instead of relying on the full, bloated kata tarball.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-16 12:55:07 +01:00
Fabiano Fidêncio
a2534e7bc8 kata-tools: Release as its own tarball
We're only releasing those for amd64 as that's the only architecture
we've been building the packages for.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-16 12:55:07 +01:00
Fabiano Fidêncio
6d2f393be4 build: Split tools build from the other artefacts build
Let's ensure we can create a specific "tools" tarball, which will help
those who only need to pull the tools, either for testing or for
production usage.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-16 12:55:07 +01:00
Ruoqing He
6d2c66c7eb runtime-rs: Refactor feature propagation
After the runtime-rs workspace was merged into the root workspace, the
features passed when building runtime-rs need to be refactored to be
correctly propagated. Taking dragonball as an example, runtime-rs
requires the runtimes crate to depend on the virt_containers feature,
and virt_containers needs to handle hypervisor features specifically.

Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
2025-12-16 11:26:07 +01:00
Ruoqing He
1872af7c5a ci: Install cmake before building runtime-rs
cmake is required for libz-sys to compile (which is required by nydus).

Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
2025-12-16 11:26:07 +01:00
Ruoqing He
9551f97e87 runtime-rs: Change TARGET_PATH to root workspace
After the workspace integration of runtime-rs, the output of runtime-rs
is now under the repo root instead of src/runtime-rs. Change
TARGET_PATH accordingly to tell the Makefile where to look up the
output.

Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
2025-12-16 11:26:07 +01:00
Ruoqing He
c7c02ac513 dragonball: Skip tests needing KVM under non-root
Some cases in the dragonball crates require interaction with the KVM
module to complete, which requires root privileges. Skip those tests
when running as a non-root user, as sketched below.
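
A sketch of the gate, assuming the nix crate is available:
```rust
// Returns true when the effective user is root; KVM-dependent tests
// bail out early otherwise instead of failing on /dev/kvm permissions.
fn running_as_root() -> bool {
    nix::unistd::Uid::effective().is_root()
}

#[test]
fn test_needing_kvm() {
    if !running_as_root() {
        println!("skipping: requires root to interact with KVM");
        return;
    }
    // ... interact with the KVM module here ...
}
```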

Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
2025-12-16 11:26:07 +01:00
Ruoqing He
889c3b6012 dragonball: Fix false use statement on aarch64
gic::create_gic is actually gated behind the dbs_arch crate, not
arch::aarch64.

Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
2025-12-16 11:26:07 +01:00
Ruoqing He
1c1f3a2416 dragonball: Allow missing_docs for dummy MMIODeviceInfo
MMIODeviceInfo inside the test module of dbs_boot on aarch64 is used for
testing purposes, but the `pub` attribute requires it to have
documentation. Since it is used only for testing, let's allow
missing_docs for it.

Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
2025-12-16 11:26:07 +01:00
Ruoqing He
6d0cb18c07 dragonball: Add missing test module attribute
The test set of dbs_utils's tap module is missing the test attribute,
which makes dev-dependencies unusable. Mark the tap tests as a test
module, as sketched below.
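
The missing attribute, sketched:
```rust
// Without #[cfg(test)], this module is compiled unconditionally and
// dev-dependencies are not in scope for it.
#[cfg(test)]
mod tests {
    #[test]
    fn tap_smoke_test() {
        // dev-dependencies are usable here
        assert_eq!(1 + 1, 2);
    }
}
```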

Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
2025-12-16 11:26:07 +01:00
Ruoqing He
15fe7ecda1 runtime-rs: Remove lockfile
Remove Cargo.lock since runtime-rs now shares the workspace-wide lockfile.

Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
2025-12-16 11:26:07 +01:00
Ruoqing He
beb0cac0d1 build: Move runtime-rs to root workspace
This is a follow-up of 3fbe693.

Remove runtime-rs from the exclude list and make it a member of the
root workspace.

Specify shim and shim-ctl as the binaries of the runtime-rs package,
bringing runtime-rs and all its members into the root workspace.

Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
2025-12-16 11:26:07 +01:00
Ruoqing He
ae4b3e9ac0 runtime-rs: Make runtime-rs a package
Make runtime-rs a package that produces shim and shim-ctl as its binary
products, which enables the Makefile to work after it is incorporated
into the root workspace.

Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
2025-12-16 11:26:07 +01:00
shezhang.lau
9744e9f26d runtime-rs: Block Rootfs Mount Options During Storage Object Creation
Initialize the storage options with the original rootfs options. In
addition, for XFS, append nouuid to the mount options if it is not
already present, as sketched below.
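
A minimal sketch of the XFS special case (the helper name is
illustrative):
```rust
// Append "nouuid" to the mount options for XFS if it is not already there.
fn ensure_nouuid(fstype: &str, options: &mut Vec<String>) {
    if fstype == "xfs" && !options.iter().any(|o| o == "nouuid") {
        options.push("nouuid".to_string());
    }
}
```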

Signed-off-by: shezhang.lau <shezhang.lau@antgroup.com>
2025-12-16 13:57:02 +08:00
Xuewei Niu
c8b5f8efad Merge pull request #12167 from M-Phansa/main
runtime-rs: handle container missing during kill_process gracefully
2025-12-16 10:31:50 +08:00
Fabiano Fidêncio
1388a3acda packaging: Add ORAS cache for gperf and busybox tarballs
To protect against upstream download failures for gperf and busybox,
implement ORAS-based caching to GHCR.

This adds:
- download-with-oras-cache.sh: Core helper for downloading with cache
- populate-oras-tarball-cache.sh: Script to manually populate cache
- warn() function to lib.sh for consistency

Modified build scripts to:
- Try ORAS cache first (from ghcr.io/kata-containers/kata-containers)
- Fall back to upstream download on cache miss
- Automatically push to cache when PUSH_TO_REGISTRY=yes

The cache is automatically populated during CI builds, and parallel
architecture builds check for existing versions before pushing to avoid
race conditions.

Forks benefit from upstream cache but can override with their own:
ARTEFACT_REPOSITORY=myorg/kata make agent-tarball

Generated-By: Cursor IDE with Claude
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-15 22:04:21 +01:00
Markus Rudy
661e851445 genpolicy: support fsGroup setting in pod security context
The runtime handles the fsGroup field of the pod security context by
adding a mount option to the generated storage object [1]. This commit
changes genpolicy to expect this option.

Instead of passing another side input to
yaml::get_container_mounts_and_storages, we pass the entire PodSpec.
This reduces the necessary changes in the pod-generating resources and
allows for possible future use of other PodSpec fields.

[1]: https://github.com/kata-containers/kata-containers/blob/0c6fcde1/src/runtime/virtcontainers/kata_agent.go#L1620-L1625

Fixes: #11934
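
A sketch of the expectation change, assuming the runtime renders the
option as "fsgid=<gid>" (the helper name is hypothetical):
```rust
// Map a pod-level fsGroup to the mount option genpolicy now expects on
// the generated storage object.
fn fs_group_mount_option(fs_group: Option<i64>) -> Option<String> {
    fs_group.map(|gid| format!("fsgid={}", gid))
}
```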

Signed-off-by: Markus Rudy <mr@edgeless.systems>
2025-12-15 15:22:33 +01:00
Fabiano Fidêncio
a25a53c860 kata-deploy: sa: Fix permissions for patching nodefeaturerules
I've seen this happening with the GPU SNP CI every now and then, but I
don't really understand how this was not caught by the TDX / SNP CI
themselves before.

In any case, the error seen is:
```
  Error from server (Forbidden): error when applying patch:
  {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"nfd.k8s-sigs.io/v1alpha1\",\"kind\":\"NodeFeatureRule\",\"metadata\":{\"annotations\":{},\"name\":\"amd64-tee-keys\"},\"spec\":{\"rules\":[{\"extendedResources\":{\"sev-snp.amd.com/esids\":\"@cpu.security.sev.encrypted_state_ids\"},\"labels\":{\"amd.feature.node.kubernetes.io/snp\":\"true\"},\"matchFeatures\":[{\"feature\":\"cpu.security\",\"matchExpressions\":{\"sev.snp.enabled\":{\"op\":\"Exists\"}}}],\"name\":\"amd.sev-snp\"},{\"extendedResources\":{\"tdx.intel.com/keys\":\"@cpu.security.tdx.total_keys\"},\"labels\":{\"intel.feature.node.kubernetes.io/tdx\":\"true\"},\"matchFeatures\":[{\"feature\":\"cpu.security\",\"matchExpressions\":{\"tdx.enabled\":{\"op\":\"Exists\"}}}],\"name\":\"intel.tdx\"}]}}\n"}}}
  to:
  Resource: "nfd.k8s-sigs.io/v1alpha1, Resource=nodefeaturerules", GroupVersionKind: "nfd.k8s-sigs.io/v1alpha1, Kind=NodeFeatureRule"
  Name: "amd64-tee-keys", Namespace: ""
  for: "/opt/kata-artifacts/node-feature-rules/x86_64-tee-keys.yaml": error when patching "/opt/kata-artifacts/node-feature-rules/x86_64-tee-keys.yaml": nodefeaturerules.nfd.k8s-sigs.io "amd64-tee-keys" is forbidden: User "system:serviceaccount:kube-system:kata-deploy-sa" cannot patch resource "nodefeaturerules" in API group "nfd.k8s-sigs.io" at the cluster scope
```

And the fix is as simple as allowing patching and updating a
nodefeaturerule in our service account RBAC.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-15 12:01:20 +01:00
Alex Lyn
f4f61d5666 Merge pull request #12229 from fidencio/topic/kata-deploy-do-deprecations
kata-deploy: Remove deprecated features from 3.23.0
2025-12-15 19:00:07 +08:00
Hyounggyu Choi
b69da5f3ba gatekeeper: Make s390x e2e tests required again
Since the CI issue for s390x was resolved on Dec 5th,
the nightly test result has gone green for 10 consecutive days.
This commit puts the e2e tests for s390x again into the required job list.

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2025-12-15 11:12:25 +01:00
Fabiano Fidêncio
ded6d1636f kata-deploy: Remove deprecated features from 3.23.0
Let's remove the deprecated features that were marked for removal
after Kata Containers 3.23.0:

kata-deploy.sh:
- Remove non-arch-specific variable fallbacks (SHIMS, DEFAULT_SHIM,
  SNAPSHOTTER_HANDLER_MAPPING, ALLOWED_HYPERVISOR_ANNOTATIONS,
  PULL_TYPE_MAPPING, EXPERIMENTAL_FORCE_GUEST_PULL). Each arch now
  has its own default value.
- Remove CREATE_RUNTIMECLASSES and CREATE_DEFAULT_RUNTIMECLASS
  variables and associated functions (create_runtimeclasses,
  delete_runtimeclasses, adjust_shim_for_nfd). RuntimeClasses are
  now managed by Helm chart, not the daemonset script.
- Unsupported architectures now fail with an error instead of
  falling back to non-arch-specific defaults.

Helm chart:
- Remove all deprecated env values (createRuntimeClasses,
  createDefaultRuntimeClass, debug, shims, shims_*, defaultShim,
  defaultShim_*, allowedHypervisorAnnotations, snapshotterHandlerMapping,
  snapshotterHandlerMapping_*, agentHttpsProxy, agentNoProxy,
  pullTypeMapping, pullTypeMapping_*, _experimentalSetupSnapshotter,
  _experimentalForceGuestPull, _experimentalForceGuestPull_*).
- Remove backward compatibility code from _helpers.tpl that checked
  for legacy env values.
- Remove legacy env.shims check from runtimeclasses.yaml.
- Remove CREATE_RUNTIMECLASSES and CREATE_DEFAULT_RUNTIMECLASS env
  vars from kata-deploy.yaml and post-delete-job.yaml.
- Update RBAC to only include runtimeclasses get/patch permissions
  (needed for NFD patching), removing create/delete/list/update/watch.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-13 16:32:00 +01:00
Adeet Phanse
db09912808 agent: add SandboxError enum for typed error handling
- Replace generic errors in sandbox operations with typed SandboxError variants (InvalidContainerId, InitProcessNotFound, InvalidExecId); a sketch is below.
- This enables the kata shim to handle specific failure cases differently.

Fixes #12120
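
A minimal sketch of the typed-error idea, assuming the thiserror crate
(the variant payloads are assumptions):
```rust
use thiserror::Error;

// Typed sandbox errors the shim can match on, instead of opaque strings.
#[derive(Error, Debug)]
pub enum SandboxError {
    #[error("invalid container id: {0}")]
    InvalidContainerId(String),
    #[error("init process not found for container: {0}")]
    InitProcessNotFound(String),
    #[error("invalid exec id: {0}")]
    InvalidExecId(String),
}
```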

Signed-off-by: Adeet Phanse <adeet.phanse@mongodb.com>
2025-12-12 12:33:18 -05:00
Adeet Phanse
5b7e1cdaad runtime-rs: handle container missing during kill_process gracefully
Add better error handling to runtime-rs for when the sandbox itself is killed and recreated (see the sketch after this list).
- Update the kill_process function to skip sending a signal when the process is stopped.
- Always set ProcessStatus::Stopped even when wait_process fails.
- In state_process, return a synthetic state for the sandbox container when using the Sandbox API.

Fixes #12120
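
A sketch of the kill-path rule (types and names are assumed):
```rust
#[derive(PartialEq)]
enum ProcessStatus {
    Running,
    Stopped,
}

// Skip the signal when the process is already stopped, instead of
// surfacing a "container not found" error to the caller.
fn kill_process(
    status: &ProcessStatus,
    send_signal: impl FnOnce() -> Result<(), String>,
) -> Result<(), String> {
    if *status == ProcessStatus::Stopped {
        return Ok(());
    }
    send_signal()
}
```
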
Signed-off-by: Adeet Phanse <adeet.phanse@mongodb.com>
2025-12-12 12:33:17 -05:00
Fabiano Fidêncio
c7d0c270ee release: Bump version to 3.24.0
Bump VERSION and helm-chart versions

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-12 18:15:41 +01:00
Fabiano Fidêncio
50b853eb93 tests: nvidia: Always rely on the "kata" default runtime class
This is a pattern already followed by all the other tests.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-12 16:31:42 +01:00
Manuel Huber
ff2396aeec tests: nvidia: Declare KATA_HYPERVISOR variable
Align with other test logic - declare the KATA_HYPERVISOR in the
run bash script, then declare the RUNTIME_CLASS_NAME variable in
the bats files.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-12 16:31:42 +01:00
Manuel Huber
6e31cf2156 tests: nvidia: cc: Use is_confidential_gpu_hw
This function has recently been introduced, so we align patterns.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-12 16:31:42 +01:00
Manuel Huber
cd1f55b41c tests: nvidia: cc: Set GPU0 policy for NIM tests
Now that we have a more restrictive resource policy for KBS, let
us start adopting it across all NVIDIA test cases. This policy was
previously introduced by the NVIDIA attestation test.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-12 16:31:42 +01:00
Manuel Huber
edbac264cb tests: nvidia: cc: Remove KBS variable
The variable is now set in the CI YAML file, thus removing the
assignment.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-12 16:31:42 +01:00
Manuel Huber
9665b74653 tests: nvidia: cc: address shellcheck warnings
Address shellcheck warnings for run_kubernetes_nv_tests.sh

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-12 16:31:42 +01:00
Manuel Huber
5f9e7a03a8 tests: nvidia: do not use teardown_common
Clean up in each NVIDIA bats file according to our needs.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-12 16:31:42 +01:00
Alex Lyn
c3fd4c1621 version: Bump rtnetlink and netlink-packet-route
It aims to upgrade rtnetlink to mitigate netlink log noise.
This commit upgrades the `rtnetlink` dependency (and corresponding
libraries like `netlink-packet-route`) to address excessive and
unnecessary netlink-related logging during sandbox startup.

Problem:
The previously used `rtnetlink v0.16` (depending on `netlink-proto
v0.11.3`) generates a high volume of DEBUG/INFO level netlink messages
during sandbox initialization. This noise:
1.  Overloads the logging system, often leading to warnings like
"slog-async: logger dropped messages due to channel overflow."
2.  Interferes with effective troubleshooting by distracting developers
from legitimate Kata errors.

Solution:
We upgrade to `rtnetlink v0.19` (and `netlink-proto v0.12`), as testing
confirms that the latest versions have correctly elevated the verbosity
of these netlink internal events to the TRACE level.

This change significantly enhances the log analysis experience by
suppressing unnecessary network-related logs during startup.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-12-12 14:27:33 +01:00
Manuel Huber
1781fb8b06 tests: nvidia: cc: Use CUDA image from NVCR
Pull from nvcr.io to avoid hitting unauthenticated pull rate
limits.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-12 12:52:33 +01:00
Manuel Huber
f63f95f315 tests: nvidia: cc: generate pod security policies
With these changes, we create pod security policies when running
against NVIDIA TEE GPU handlers where AUTO_GENERATE_POLICY is set.
For the non-TEE GPU tests, the added functions bail out by design.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-12 12:52:33 +01:00
Manuel Huber
bf26ad9532 nvidia: tests: remove outer CDI annotations
With the new device plugin being used by CI runners, these
annotations are no longer necessary.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-12 12:52:33 +01:00
Manuel Huber
37b4f6ae8b tests: Adapt NVIDIA common policy settings
Following existing patterns, we adapt the common policy settings
for NVIDIA GPU CI platforms. For instance, for our CI runners, we
use containerd 2.x.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-12 12:52:33 +01:00
Manuel Huber
f4c0c8546e tests: Enable AUTO_GENERATE_POLICY for NVIDIA TEEs
Enable auto-generate policy for qemu-nvidia-gpu-* if the user
didn't specify an AUTO_GENERATE_POLICY value.

Setting this in run_kubernetes_nv_tests.sh is too late as
gha-run.sh calls into run_tests, setup.sh, and then into
create_common_genpolicy_settings() where the rules.rego and
genpolicy-settings file are being copied to the right locations.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-12 12:52:33 +01:00
Manuel Huber
b9774e44b6 genpolicy: tests: Add VFIO passthrough test cases
Add one valid test case with 2 GPUs with proper VFIO device
entries and CDI annotations.
Add seven test cases with invalid combinations of VFIO device
entries and CDI annotations.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-12 12:52:33 +01:00
Manuel Huber
d3e6936820 genpolicy: validation of vfio passthrough GPUs
Add rules for vfio passthrough GPUs. When creating the security
policy document, parse GPU resource limits and derive CDI
annotation patterns and VFIO device entries.
With various values for CDI annotations and device paths being
runtime-dependent, use regular expressions.
For now, this enables passthrough of NVIDIA GPUs, but the changes
are designed to allow for other VFIO device types.
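
Illustrative only: the shape of a regex-based rule for a
runtime-dependent CDI annotation value, assuming the regex crate (the
real patterns generated by genpolicy are more involved):
```rust
use regex::Regex;

// Match values such as "nvidia.com/pgpu=0", "nvidia.com/pgpu=7", ...
fn nvidia_pgpu_cdi_value_pattern() -> Regex {
    Regex::new(r"^nvidia\.com/pgpu=\d+$").unwrap()
}
```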

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-12 12:52:33 +01:00
Alex Lyn
82e8e9fbe0 doc: add block device's settings to the doc page
Add the block-device-specific annotations, dedicated within runtime-rs,
for num_queues and queue_size to the document, to help users set these
two parameters.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-12-11 21:10:22 +01:00
Alex Lyn
a8a458664d kata-types: Allow dynamic queue config via Pod annotations
This commit introduces the capability to dynamically configure
`queue_size` and `num_queues` parameters via Pod annotations.

Currently, `kata-runtime` allows for static configuration of
`queue_size` and `num_queues` for block devices through its config
file. However, a critical issue arises when a Pod is allocated fewer
CPU cores than the statically configured `num_queues` value. In such
scenarios, the Pod fails to start, leading to operational instability
and limiting flexibility in resource allocation.

To address this, this feature enables users to override the default
queue_size and num_queues parameters by specifying them in Pod
annotations. This allows for fine-grained control and dynamic adjustment
of these parameters based on the specific resource allocation of a Pod.
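
A sketch with hypothetical annotation keys and defaults (the real key
names live in kata-types and may differ):
```rust
use std::collections::HashMap;

// Read the queue overrides from Pod annotations, falling back to
// defaults when an annotation is absent or unparsable.
fn block_queue_overrides(annotations: &HashMap<String, String>) -> (u32, u32) {
    let get = |key: &str, default: u32| -> u32 {
        annotations
            .get(key)
            .and_then(|v| v.parse().ok())
            .unwrap_or(default)
    };
    let num_queues = get("io.katacontainers.config.hypervisor.block_device_num_queues", 1);
    let queue_size = get("io.katacontainers.config.hypervisor.block_device_queue_size", 128);
    (num_queues, queue_size)
}
```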

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-12-11 21:10:22 +01:00
Steve Horsman
51459b9b15 Merge pull request #12220 from fidencio/topic/ci-arm64-temporarily-disable-arm64-non-k8s-tests
ci: arm64-non-k8s: temporarily skip the tests
2025-12-11 11:35:39 +00:00
Fabiano Fidêncio
46c7d6c9f8 ci: arm64-non-k8s: temporarily skip the tests
The runner is down for a few weeks. I may end up bringing in my personal
runner, but I'm not confident I can easily do this before the holidays,
thus I'm skipping the tests for now.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-11 12:14:32 +01:00
Manuel Huber
560f6f6c74 tests: nvidia: cc: Affirming attestation policy
Set the attestation policy for GPU0 to affirming. This requires
the GPU, for instance, to have production properties, such as
properly signed VBIOS firmware.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-11 10:16:58 +01:00
Alex Lyn
751b6875f9 tests: Temporarily skip the cpu-ns test for the s390x platform
As this CI job has been continuously failing, we'd like to temporarily
skip it for the s390x platform. It will be re-enabled once the related
issues are addressed.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-12-10 22:11:56 +01:00
Alex Lyn
d495b77135 runtime-rs: Align the default annotations with runtime-go
As the default enable_annotations in runtime-rs differs from the one in
runtime-go, we should align it with the runtime-go configuration.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-12-10 22:11:56 +01:00
Alex Lyn
c8dd5fbacf runtime-rs: Migrate vCPU tracking to fractional float
This commit refactors the vCPU resource management within runtime's
`CpuResource` structure and related calculation logic to use
floating-point numbers (`f32`) instead of integers (`u32`).

This migration is necessary to fully support the fractional vCPU
allocation introduced in the `kata-types` library, ensuring better
precision in:
1. Allocation Tracking: `current_vcpu` now tracks the precise
fractional value (e.g., 1.5 vCPUs).
2. Resource Calculation: `calc_cpu_resources` now returns a precise
`f32` sum of container vCPU requests, including normalization logic
based on the maximum period, removing the previous integer rounding
steps in the calculation.
3. Hypervisor Interaction: the integer vCPU requirement for the
hypervisor remains, so `ceil()` is now explicitly applied only when
interacting with the hypervisor or agent APIs
(`do_update_cpu_resources`, `current_vcpu`, `online_cpu_mem`).

And the key changes are as below:
1. `CpuResource::current_vcpu` updated from `u32` to `f32`.
2. `calc_cpu_resources` return type changed from `u32` to `f32`.
3. CPU hotplug logic now uses `f32` for the target vCPU count and
applies `ceil()` before calling `hypervisor.resize_vcpu()`, as sketched
below.
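
An illustrative boundary rule (names assumed):
```rust
// Track vCPUs fractionally, round up only at the hypervisor boundary.
fn hypervisor_vcpus(current_vcpu: f32) -> u32 {
    current_vcpu.ceil() as u32
}

fn main() {
    assert_eq!(hypervisor_vcpus(1.5), 2); // 1.5 vCPUs still need 2 real vCPUs
    assert_eq!(hypervisor_vcpus(2.0), 2);
}
```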

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-12-10 22:11:56 +01:00
Alex Lyn
84fd33c3bc kata-types: Use fractional float for vCPU resource tracking
Refactors `LinuxContainerCpuResources` and `LinuxSandboxCpuResources`
to track calculated vCPU allocation using `f64` (fractional float)
instead of `u64` (milliseconds).

This ensures more precise resource calculation (`quota / period`) and
aggregation by avoiding rounding errors inherent in millisecond-based
integer tracking.
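
A sketch of the precision argument (names illustrative):
```rust
// quota/period as f64 avoids the rounding that millisecond-based
// integer tracking introduced, e.g. 150_000 / 100_000 = 1.5 vCPUs exactly.
fn vcpus_from_cfs(quota: i64, period: u64) -> Option<f64> {
    if quota <= 0 || period == 0 {
        return None; // unlimited or unset quota
    }
    Some(quota as f64 / period as f64)
}
```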

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-12-10 22:11:56 +01:00
Alex Lyn
0f04363ea8 tests: Disable CPU elasticity tests for nontee scenarios
This commit updates the non-TEE tests to disable two specific test
cases: `k8s-number-cpus.bats` and `k8s-sandbox-vcpus-allocation.bats`.

These tests are designed to cover CPU elasticity/dynamic scaling
capabilities. In the non-TEE scenario, we are enforcing the disabling of
this capability by setting the default configuration to
`static_sandbox_resource_mgmt=true`.

Although the tests currently pass, allowing them to run is logically
inconsistent with the intended non-TEE configuration. Therefore, we are
disabling them for all non-TEE runtimes, specifically targeting:
- `qemu-coco-dev`
- `qemu-coco-dev-runtime-rs`

This change ensures that our non-TEE CI accurately reflects the static
resource management policy and prevents misleading test results.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-12-10 22:11:56 +01:00
Alex Lyn
beaf44dd2e tests: disable block volume test for s390 arch
As runtime-rs doesn't support block device hotplug on the s390 arch,
we just disable or skip the test when running on s390.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-12-10 22:11:56 +01:00
Alex Lyn
535ba589f4 runtime-rs: Enable elastic resource feature
To support this feature, the corresponding item in the Makefile should
be enabled; it can be set when running make build, like this:
`DEFSTATICRESOURCEMGMT_QEMU := false`
When users don't want this feature, they can set it back to true via
configuration.toml.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-12-10 22:11:56 +01:00
Alex Lyn
28371dbec5 tests: Enable cloud-hypervisor and qemu-runtime-rs within the CI
Enable the cpu hotplug tests within the k8s-number-cpus.bats for both
cloud-hypervisor and qemu-runtime-rs.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-12-10 22:11:56 +01:00
Alex Lyn
82a72b4564 tests: Enable cpu hotplug for dragonball and clh in vcpus allocation
We support cpu hotplug features within dragonball and clh; this
commit enables the test within the CI.

Fixes: #8660

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-12-10 22:11:56 +01:00
Alex Lyn
6196d3d646 tests: Enable cpu hotplug tests in k8s-cpu-ns.bats
Due to a previous failure in this case we chose to skip it, but now
cpu hotplug has been corrected and it's time to re-enable it.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-12-10 22:11:56 +01:00
Alex Lyn
96bd13e85d tests: Add support for qemu-runtime-rs
We have supported the virtio-scsi driver, and now the CI should be
enabled.

Fixes: #10373

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-12-10 22:11:56 +01:00
dependabot[bot]
2137b1fa3a build(deps): bump github.com/containernetworking/plugins in /src/runtime
Bumps [github.com/containernetworking/plugins](https://github.com/containernetworking/plugins) from 1.7.1 to 1.9.0.
- [Release notes](https://github.com/containernetworking/plugins/releases)
- [Commits](https://github.com/containernetworking/plugins/compare/v1.7.1...v1.9.0)

---
updated-dependencies:
- dependency-name: github.com/containernetworking/plugins
  dependency-version: 1.9.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-10 16:10:24 +01:00
LandonTClipp
b50a73912d runtime: Config test extension for IOMMUFDID
Add additional cases for the IOMMUFDID method to check what happens
when non-IOMMUFD paths are passed. The method should do the right
thing.

Signed-off-by: LandonTClipp <11232769+LandonTClipp@users.noreply.github.com>
2025-12-10 15:46:28 +01:00
LandonTClipp
d5e4cf6b4d runtime: Add test for ExecuteVFIODeviceAdd
Copilot made a good point that we should have a test for this.
Thus, this commit.

Signed-off-by: LandonTClipp <11232769+LandonTClipp@users.noreply.github.com>
2025-12-10 15:46:28 +01:00
LandonTClipp
137866f793 runtime: Allow QMP commands to be logged in debug level
Logging the QMP commands gives us a lot of flexibility to
troubleshoot issues with what is being sent to QEMU.

Signed-off-by: LandonTClipp <11232769+LandonTClipp@users.noreply.github.com>
2025-12-10 15:46:28 +01:00
LandonTClipp
a3b5764f67 runtime: Fix import cycle and add unit test for IOMMUFDID()
An import cycle was introduced because of a mutual need
for the constant that describes the prefix of IOMMUFD files.
We need to extract this out into a higher-level package.

Signed-off-by: LandonTClipp <11232769+LandonTClipp@users.noreply.github.com>
2025-12-10 15:46:28 +01:00
LandonTClipp
09438fd54f runtime: Add IOMMUFD Object Creation for QEMU QMP Commands
The QMP commands sent to QEMU did not properly set up
IOMMUFD objects in the codepath that handles VFIO device
hot-plugging. This is mainly relevant in the Kubernetes
use-case where the VFIO devices are not available when
QEMU is first launched.

Signed-off-by: LandonTClipp <11232769+LandonTClipp@users.noreply.github.com>
2025-12-10 15:46:28 +01:00
Manuel Huber
cb8fd2e3b1 runtime: gpu: Skip CDI annos for pause container
The pause container does not need CDI annotations, these are only
intended for workload containers.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-10 13:26:04 +01:00
Fabiano Fidêncio
69a0ac979c tests: Adjust install_bats()
The function assumes that the runner is a Ubuntu machine, which so far
has been true as part of our CI.

However, the new ARM runner is running on Debian, and those mirror
additions would simply break.

With this in mind, for any distro that's not ubuntu, let's just make
sure to inform the owner of the system to have bats already installed as
part of the environment provided.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-10 12:05:04 +01:00
Fabiano Fidêncio
406f6b1d15 Revert "tests: Add workaround to override CDI files"
This reverts commit 5a81b010f2, as we now
have all the infrastructure properly set up as part of our CI node.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-09 23:18:11 +01:00
Fabiano Fidêncio
3db7b88eff tests: remove containerd guest pull stability tests
Remove the existing containerd guest pull stability tests workflow
as we're going to rebuild all the VMs used for testing and introduce
new, more focused stability tests for nydus-snapshotter.

The new tests will be added soon, as part of another PR.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-08 16:29:11 +01:00
Fabiano Fidêncio
5b6a2d25bc podOverhead: Reduce memory overhead for GPU runtime classes
Now that we've bumped to QEMU 10.2.0-rc1, we can take advantage of a fix
that's present there, which fixes the double memory allocation for the
cases where GPUs are being cold-plugged.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-06 00:16:43 +01:00
Fabiano Fidêncio
71f78cc87e tests: cc: gpu: Lower the amount of memory required by the pods
We've made the pods require a ridiculous amount of memory, just for the
sake of getting them running.

Now that those are running, tests are passing, and the CI is required,
let's work on lowering the amount of memory needed, as everything else
is working as expected.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-06 00:16:43 +01:00
Dan Mihai
965ad10cf2 tests: k8s: tests_common.sh local modification
Clean-up shellcheck warnings:

SC2030 (info): Modification of cmd_out is local (to subshell caused by (..) group).
SC2031 (info): cmd_out was modified in a subshell. That change might be lost.

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-12-06 00:16:23 +01:00
Dan Mihai
8199171cc4 tests: k8s: tests_common.sh braces around variables
Clean-up shellcheck warnings:

SC2250 (style): Prefer putting braces around variable references even
when not strictly required.

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-12-06 00:16:23 +01:00
Fabiano Fidêncio
5a81b010f2 tests: Add workaround to override CDI files
Let's add a simple backup and restore logic for the CDI configuration
file nvidia.com-pgpu.yaml in the k8s-nvidia-*.bats and
k8s-confidential-attestation.bats test files.

Although not optimal, this is a temporary workaround needed until
NVIDIA releases what's needed for the GPU Operator to properly deal with
cold-plugged devices for the Confidential Containers cases, which is
work in progress right now.

After that's released, we can revert/drop this patch.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-05 18:58:35 +01:00
Fabiano Fidêncio
aaa67df4dd versions: Bump experimental {tdx,snp} QEMU
Let's bump experimental {tdx,snp} QEMU to the tags created today in the
Confidential Containers repo, which match with QEMU 10.2.0-rc1.

This bump is especially beneficial for us, as we can get rid of QEMU's
double memory allocation when **cold plugging** a GPU.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-05 18:58:35 +01:00
Zvonko Kaiser
f8ad17499d gpu: VFIO handling container vs sandbox
If the sandbox has cold-plugged an IOMMUFD device but the device plugin
sends us a /dev/vfio/<NUM> device, we need to check whether the IOMMUFD
device and the VFIO device are the same. We have the sibling.BDF; we now
need to extract the BDF of the devPath, which is either /dev/vfio/<NUM>
or /dev/vfio/devices/vfio<NUM>.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-05 16:53:31 +01:00
Zvonko Kaiser
147e9f188e Merge pull request #12080 from manuelh-dev/mahuber/cc-gpu-ci-attestation
tests: nvidia: cc: Add attestation test
2025-12-05 09:31:57 -05:00
Steve Horsman
2f1b98c232 Merge pull request #12197 from stevenhorsman/logrus-1.9.3-bump
version: Bump sirupsen/logrus
2025-12-05 14:18:50 +00:00
Manuel Huber
e5861cde20 tests: use Authorization when GH_TOKEN is set
Same as for other uses of GH_TOKEN, use it when set in order to
avoid rate limiting issues.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-05 14:08:43 +01:00
stevenhorsman
9eba559bd6 version: Bump sirupsen/logrus
Bump the github.com/sirupsen/logrus version to 1.9.3
across our components where it is back-level to bring us
up-to-date and resolve high severity CVE-2025-65637

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-05 11:12:04 +00:00
Manuel Huber
34efa83afc tests: nvidia: cc: Add attestation test
Add the attestation bats test case to the NVIDIA CI and provide a
second pod manifest for the attestation test with a GPU. This will
enable composite attestation in a subsequent step.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-05 11:48:55 +01:00
Manuel Huber
e31d592a0c versions: Bump coco-trustee
Bump to pull in a fix for composite attestation with GPUs. The new
commit ID corresponds to the fix (change for default GPU policy),
currently being the top commit of the main branch.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-05 11:48:55 +01:00
Manuel Huber
73dfa9b9d5 versions: Bump coco-guest-components
Bump to pull in a fix for NVIDIA CC GPU attestation.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-05 11:48:55 +01:00
Manuel Huber
116a72ad0d tests: cc: Fix command evaluation
This brings two fixes:
- use the test_key variable to check against the aatest value.
- properly check the run command invocation (run w/o bash does not
  seem to like the pipe which leads to ALWAYS evaluating the
  status result to 1. With this, the deny-all test would ALWAYS
  succeed regardless of whether aatest was actually returned or not.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-05 11:48:55 +01:00
Manuel Huber
23675c784b tests: cc: Reset default policy
When running these tests repeatedly locally, the default policy is not
being reset after the test completes, then subsequent runs fail.
Similar to k8s-sealed-secrets.bats, we set the default policy in an if
condition.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-05 11:48:55 +01:00
Manuel Huber
f70c3adaf1 tests: cc: Add kbs_set_gpu0_resource_policy
This allows setting a GPU0 resource policy, enabling GPU
attestation tests to not use the default resource policy.
For now, the policy requires attestation's ear status to
not be contraindicated. In a future change we will require
this to be affirming once our CI runners' vBIOS version is
properly configured.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-05 11:48:55 +01:00
Manuel Huber
c2d1e2dcc9 tests: cc: Add is_confidential_gpu_hardware
This enables attestation tests to figure out whether composite
attestation with a GPU can be executed.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-05 11:48:55 +01:00
Manuel Huber
53e94df203 tests: nvidia: cc: add SUPPORTED_TEE_HYPERVISORS
Add the NVIDIA TEE hypervisors. With this, attestation tests can be run
against the NVIDIA handlers, for instance.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-05 11:48:55 +01:00
Fabiano Fidêncio
923f97bc66 rootfs: Temporarily revert "gpu: Handle root_hash.txt correctly"
This reverts commit e4a13b9a4a, as it
caused some issues with the GPU workflows.

Reverting it is better, as it unblocks other PRs.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-05 11:47:37 +01:00
Steve Horsman
d27af53902 Merge pull request #12185 from stevenhorsman/runtime-rs-required-checks
ci: Add qemu-runtime-rs AKS tests to required
2025-12-05 10:43:25 +00:00
stevenhorsman
403de2161f version: Update golang to 1.24.11
Needed to fix:
```
Vulnerability #1: GO-2025-4155
    Excessive resource consumption when printing error string for host
    certificate validation in crypto/x509
  More info: https://pkg.go.dev/vuln/GO-2025-4155
  Standard library
    Found in: crypto/x509@go1.24.9
    Fixed in: crypto/x509@go1.24.11
    Vulnerable symbols found:
      #1: x509.HostnameError.Error
```

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-04 22:50:07 +01:00
Steve Horsman
425f4ffc8d Merge pull request #12124 from zvonkok/nvidia-measured-rootfs
gpu: Measured rootfs
2025-12-04 14:54:11 +00:00
Hyounggyu Choi
1dd3426adc tests: Extend vfio-ap test for runtime-rs
vfio-ap passthrough has been introduced for runtime-rs,
requiring that the existing test verify this new functionality.
This commit adds:

- containerd config specific to runtime-rs
- extensions to the existing test functions to cover vfio-ap

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2025-12-04 15:05:23 +01:00
Hyounggyu Choi
aa326fb9b8 tests: Remove usage of crictl for vfio-ap
`crictl` is not used any more after #10767.
Let's clean up all places where the tool is used.

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2025-12-04 15:05:23 +01:00
Hyounggyu Choi
41d61f4b16 runtime-rs: Enable VFIO-AP passthrough
The following have been made for the enablement:

1. Make `MediatedPci` and `MediatedAp` in `VfioDeviceType`
2. Make HostDevice without BDF for `MediatedAp`
3. Add `CCW` to VFioBusMode and set it to VfioConfig as `bus_type`
4. Return `vfio-ap` driver type for `CCW` bus type
5. Set `bus_mode` for `VfioDevice` based on `bus_type`
6. Set `vfio-ap` to the agent device's `field_type`
7. Prepare a different argument for `vfio-ap` for QMP command
8. Set None to all PCI relevant fields

Please keep in mind that `vfio-ap` does not belong to any type of port
topology like PCI (e.g., root or switch), because devices on s390x are
controlled by CCW. A condensed sketch of the bus-to-driver mapping
follows.
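
The enum and names here are assumptions, not the actual runtime-rs
types; this just condenses items 3-4 above:
```rust
enum VfioBusMode {
    Pci,
    Ccw,
}

// The CCW bus mode maps to the vfio-ap driver type; everything else
// keeps the PCI-based vfio-pci driver.
fn driver_type(bus: &VfioBusMode) -> &'static str {
    match bus {
        VfioBusMode::Ccw => "vfio-ap",
        VfioBusMode::Pci => "vfio-pci",
    }
}
```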

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2025-12-04 15:05:23 +01:00
Hyounggyu Choi
cb5b1384ca runtime-rs: Introduce uses_native_ccw_bus()
Until now, we relied on `VMROOTFSDRIVER` to determine
whether a system uses a native CCW bus.
However, this method is not canonical and can be error-prone
depending on the configuration.

This commit introduces a new function that checks
for the presence of CCW bus infrastructure in sysfs
and verifies that native mainframe drivers are available.
It replaces all previous uses of the old detection method.

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2025-12-04 15:05:23 +01:00
Steve Horsman
f673f33e72 Merge pull request #12172 from fidencio/topic/gatekeeper-mark-nvidia-jobs-as-required
gatekeeper: Mark NVIDIA CC GPU test as required
2025-12-04 12:48:57 +00:00
stevenhorsman
112810c796 ci: Add qemu-runtime-rs AKS tests to required
Add the small and normal variants of the qemu-runtime-rs
tests to the required-tests list now that they are stable.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-04 11:15:43 +00:00
Fabiano Fidêncio
c505afb67c gatekeeper: Mark NVIDIA CC GPU test as required
It's been stable for the past 10 nightlies, no retries.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-12-04 11:14:25 +00:00
Steve Horsman
635f7892d5 Merge pull request #12190 from BbolroC/mark-s390x-jobs-as-nonrequired
gatekeeper: Drop all s390x e2e tests temporarily
2025-12-04 11:10:46 +00:00
Steve Horsman
2a6ebc556f Merge pull request #12175 from kata-containers/mahuber/gpu-ci-genpolicy
ci: nvidia: Install kata-artifacts
2025-12-04 09:23:32 +00:00
Hyounggyu Choi
b6ef7eb9c3 gatekeeper: Drop all s390x e2e tests temporarily
This commit marks three s390x CI jobs as non-required.
Please check out the details at #12189.

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2025-12-04 08:05:14 +01:00
Steve Horsman
10b0717cae Merge pull request #12179 from stevenhorsman/nginx-test-image-by-digest
tests: Switch nginx test image ref to digest
2025-12-03 13:39:07 +00:00
Hyounggyu Choi
22778547b2 runtime-rs: Fix panic when OCI spec annotations are missing
An oci-spec can be passed to the runtime without annotations
(e.g., `ctr run`). In this case, runtime panics with:

```
src/runtime-rs/crates/runtimes/src/manager.rs:391: called `Option::unwrap()` on a `None` value
```

This commit checks if the annotation is None, and instantiates
the hashmap as an empty map if it is missing. It also adds a None
check for `netns`.
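
A sketch of the fix (the signature is illustrative):
```rust
use std::collections::HashMap;

// Treat missing annotations as an empty map rather than calling
// unwrap() on a None value.
fn annotations_or_empty(
    annotations: Option<HashMap<String, String>>,
) -> HashMap<String, String> {
    annotations.unwrap_or_default()
}
```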

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2025-12-03 13:07:39 +01:00
Hyounggyu Choi
ba78fb46fb runtime-rs: Configure protection devices when confidential_guest is set
Currently, the protection device configuration is constructed
automatically even if `confidential_guest` is not set.
This commit adds a condition to check the flag and allows the
construction only when it is set.

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2025-12-03 13:07:39 +01:00
Zvonko Kaiser
e4a13b9a4a gpu: Handle root_hash.txt correctly
Update the shim-v2 build and the binaries.sh script, making sure that
both variants, "confidential" AND "nvidia-gpu-confidential", are
handled.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-02 19:56:19 +01:00
Steve Horsman
d8405cb7fb Merge pull request #11983 from stevenhorsman/toolchain-guidance
doc: Document our Toolchain policy
2025-12-02 15:47:54 +00:00
stevenhorsman
b9cb667687 doc: Document our Toolchain policy
Create an initial version of our toolchain policy as agreed in
Architecture Committee meetings and the PTG

Fixes: #9841
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-02 14:28:29 +00:00
stevenhorsman
79a75b63bf tests: Switch nginx test image ref to digest
As tags are mutable and digests are not, let's pin our image
by digest to give our CI a better chance of stability.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-02 13:02:50 +00:00
stevenhorsman
5c618dc8e2 tests: Switch nginx images to use version.yaml details
- Swap out the hard-coded nginx registry and versions for reading
the test image details from version.yaml,
which can also ensure that the quay.io mirror is used
rather than the docker hub versions, which can hit pull limits
- Try setting imagePullPolicy Always to fix issues with the arm CI

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-12-02 10:04:09 +01:00
Manuel Huber
3427b5c00e ci: nvidia: Install kata-artifacts
In preparation for Kata agent security policy testing, install the
Kata tools to provide genpolicy.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-01 17:59:19 +00:00
Manuel Huber
4355af7972 kata-deploy: Fix binary find install_tools_helper
When using the make tarball targets for tools locally, binaries may
exist for both debug and release builds. In this case, cryptic errors
are shown as we try to install multiple binaries.
This change requires exactly one binary to be found and errors out
in all other cases.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-12-01 09:29:24 -08:00
Manuel Huber
5a5c43429e ci: nvidia: remove kubectl_retry calls
When tests regress, the CI wait time can increase significantly
with the current kubectl_retry attempt logic. Thus, align with
other tests and remove the kubectl_retry invocations. Instead, rely on
proper timeouts.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-11-28 19:00:57 +01:00
Fabiano Fidêncio
e3646adedf gatekeeper: Drop SEV-SNP from required
SEV-SNP machine is failing due to nydus not being deployed in the
machine.

We cannot easily contact the maintainers due to the US holidays, and I
think this should become a criterion for a machine not to be added as
required again (coverage across different regions).

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-28 12:46:07 +01:00
Steve Horsman
8534afb9e8 Merge pull request #12150 from stevenhorsman/add-gatekeeper-triggers
ci: Add two extra gatekeeper triggers
2025-11-28 09:34:41 +00:00
Zvonko Kaiser
9dfa6df2cb agent: Bump CDI-rs to latest
Latest version of container-device-interface is v0.1.1

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-11-27 22:57:50 +01:00
Fabiano Fidêncio
776e08dbba build: Add nvidia image rootfs builds
So far we've only been building the initrd for the nvidia rootfs.
However, we're also interested in having the image being used for a few
use-cases.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-27 22:46:07 +01:00
stevenhorsman
531311090c ci: Add two extra gatekeeper triggers
We hit a case where gatekeeper was failing because it thought the WIP
check had failed, but by the time it ran the PR had been edited to
remove that from the title. We should listen to edits and unlabels of
the PR to ensure that gatekeeper doesn't get outdated in situations
like this.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-27 16:45:04 +00:00
Zvonko Kaiser
bfc9e446e1 kernel: Add NUMA config
Add per-arch NUMA enablement kernel settings

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-11-27 12:45:27 +01:00
Steve Horsman
c5ae8c4ba0 Merge pull request #12144 from BbolroC/use-runs-on-to-choose-runners
GHA: Use `runs-on` only for choosing proper runners
2025-11-27 09:54:39 +00:00
Fabiano Fidêncio
2e1ca580a6 runtime-rs: Only QEMU supports templating
We can remove the checks and the default value assignments from all
other shims.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-27 10:31:28 +01:00
Alex Lyn
df8315c865 Merge pull request #12130 from Apokleos/stability-rs
tests: Enable stability tests for runtime-rs
2025-11-27 14:27:58 +08:00
Fupan Li
50dce0cc89 Merge pull request #12141 from Apokleos/fix-nydus-sn
tests: Properly handle containerd config based on version
2025-11-27 11:59:59 +08:00
Fabiano Fidêncio
fa42641692 kata-deploy: Cover all flavours of QEMU shims with multiInstallSuffix
We were missing all the runtime-rs variants.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-26 17:44:16 +01:00
Fabiano Fidêncio
96d1e0fe97 kata-deploy: Fix multiInstallSuffix for NV shims
When using the multiInstallSuffix we must be cautious about using the
shim name, as qemu-nvidia-gpu* doesn't actually have a matching QEMU
itself, but should rather be mapped to:
qemu-nvidia-gpu -> qemu
qemu-nvidia-gpu-snp -> qemu-snp-experimental
qemu-nvidia-gpu-tdx -> qemu-tdx-experimental

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-26 17:44:16 +01:00
Markus Rudy
d8f347d397 Merge pull request #12112 from shwetha-s-poojary/fix_list_routes
agent: fix the list_routes failure
2025-11-26 17:32:10 +01:00
Steve Horsman
3573408f6b Merge pull request #11586 from zvonkok/numa-qemu
qemu: Enable NUMA
2025-11-26 16:28:16 +00:00
Steve Horsman
aae483bf1d Merge pull request #12096 from Amulyam24/enable-ibm-runners
ci: re-enable IBM runners for ppc64le and s390x
2025-11-26 13:51:21 +00:00
Steve Horsman
5c09849fe6 Merge pull request #12143 from kata-containers/topic/add-report-tests-to-workflows
workflows: Add Report tests to all workflows
2025-11-26 13:18:21 +00:00
Steve Horsman
ed7108e61a Merge pull request #12138 from arvindskumar99/SNPrequired
CI: readding SNP as required
2025-11-26 11:33:07 +00:00
Amulyam24
43a004444a ci: re-enable IBM runners for ppc64le and s390x
This PR re-enables the IBM runners for ppc64le/s390x build jobs and s390x static checks.

Signed-off-by: Amulyam24 <amulmek1@in.ibm.com>
2025-11-26 16:20:01 +05:30
Hyounggyu Choi
6f761149a7 GHA: Use runs-on only for choosing proper runners
Fixes: #12123

`include` in #12069, introduced to choose a different runner
based on component, leads to another set of redundant jobs
where `matrix.command` is empty.
This commit gets back to the `runs-on` solution, but makes
the condition human-readable.

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2025-11-26 11:35:30 +01:00
Alex Lyn
4e450691f4 tests: Unify nydus configuration to containerd v3 schema
Containerd configuration syntax (`config.toml`) varies across versions,
requiring per-version logic for fields like `runtime`.

However, testing confirms that containerd LTS (1.7.x) and newer
versions fully support the v3 schema for the nydus remote snapshotter.

This commit changes the previous containerd v1 settings in `config.toml`.
Instead, it introduces a unified v3-style configuration for nydus, which
is valid for both LTS and active containerd releases.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-26 17:58:16 +08:00
stevenhorsman
4c59cf1a5d workflows: Add Report tests to all workflows
In the CoCo tests jobs, @wainersm created a report tests step
that summarises the jobs, so they are easier to understand and
get results for. This is very useful, so let's roll it out to all the
bats tests.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-26 09:28:36 +00:00
shwetha-s-poojary
4510e6b49e agent: fix the list_routes failure
Relax the list_routes tests so that not every route requires a device.

Signed-off-by: shwetha-s-poojary <shwetha.s-poojary@ibm.com>
2025-11-25 20:25:46 -08:00
Xuewei Niu
04e1cf06ed Merge pull request #12137 from Apokleos/fix-netdev-mq
runtime-rs: fix QMP 'mq' parameter type in netdev_add to boolean
2025-11-26 11:49:33 +08:00
Alex Lyn
ebe084e093 Merge pull request #12122 from fidencio/topic/configs-do-no-have-commented-out-options
runtimes: config: Do NOT have commented fields
2025-11-26 10:33:32 +08:00
Alex Lyn
e9f50f6e71 Merge pull request #12116 from manuelh-dev/mahuber/ci-openvpn-policy-v2
policy: ci: enable security policy for openvpn test case
2025-11-26 09:35:43 +08:00
Fabiano Fidêncio
e859537c74 runtimes: config: Do NOT have commented fields
In order to have a better way to set things up using a toml editor, we
should take the containerd approach and actually have everything
uncommented. This will help us unify how we deal with such values in
the future from the kata-deploy POV.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-25 19:26:56 +01:00
Arvind Kumar
c085011a0a CI: readding SNP as required
Reenabling the SNP CI node as a required test.

Signed-off-by: Arvind Kumar <arvinkum@amd.com>
2025-11-25 17:05:01 +00:00
Fabiano Fidêncio
5ca4f2b9ff runtimes: annotations: Fix kernel param handling
We need to ensure that we do not blindly append nor blindly override the
kernel parameters set by default, but rather modify the values in case
they exist, and append in case they do not.

Now we're actually making golang and rust runtime behave the same, as so
far they were behaving differently, each version wrong in its own way.
:-p.
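
A sketch of the merge rule (the parameter representation is
illustrative):
```rust
// Override the value when the key is already present among the default
// kernel parameters, append the pair otherwise.
fn set_kernel_param(params: &mut Vec<(String, String)>, key: &str, value: &str) {
    match params.iter_mut().find(|(k, _)| k.as_str() == key) {
        Some((_, v)) => *v = value.to_string(),
        None => params.push((key.to_string(), value.to_string())),
    }
}
```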

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-25 16:04:52 +01:00
Zvonko Kaiser
45cce49b72 shellcheck: Fix [] [[]] SC2166
This file is a beast, so we're doing one shellcheck fix after the other.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-11-25 15:46:16 +01:00
Zvonko Kaiser
b2c9439314 qemu: Update tools/packaging/static-build/qemu/build-qemu.sh
This nit was introduced by 227e717 during the v3.1.0 era. The + sign from the bash substitution ${CI:+...} was copied by mistake.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-25 15:46:09 +01:00
Zvonko Kaiser
2f3d42c0e4 shellcheck: build-qemu.sh is clean
Make shellcheck happy

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-11-25 15:46:07 +01:00
Zvonko Kaiser
f55de74ac5 shellcheck: build-base-qemu.sh is clean
Make shellcheck happy

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-11-25 15:45:49 +01:00
Zvonko Kaiser
040f920de1 qemu: Enable NUMA support
Enable NUMA support with QEMU.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-11-25 15:45:00 +01:00
Alex Lyn
de9308419b Merge pull request #12135 from microsoft/danmihai1/init-data
agent: allow disabling detect_initdata_device
2025-11-25 21:07:57 +08:00
Alex Lyn
34d3bd18bc Merge pull request #12132 from fidencio/topic/runtime-classes-fix-nvidia-gpu-podOverhead
runtimeclasses: Fix nvidia-gpu podOverhead
2025-11-25 20:23:07 +08:00
Alex Lyn
7f4d856e38 tests: Enable nydus tests for qemu-runtime-rs
We need to enable nydus tests for qemu-runtime-rs, and this commit
aims to do it.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-25 17:45:57 +08:00
Alex Lyn
98df3e760c runtime-rs: fix QMP 'mq' parameter type in netdev_add to boolean
QEMU netdev_add QMP command requires the 'mq' (multi-queue) argument
to be of boolean type (`true` / `false`). In runtime-rs the virtio-net
device hotplug logic currently passes a string value (e.g. "on"/"off"),
which causes QEMU to reject the command:
```
    Invalid parameter type for 'mq', expected: boolean
```
This patch modifies `hotplug_network_device` to insert 'mq' as a proper
boolean value of `true`. This fixes sandbox startup failures when
multi-queue is enabled.
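
A minimal sketch of the difference, assuming serde_json and illustrative
netdev fields:

```
use serde_json::json;

fn main() {
    // Rejected by QEMU: 'mq' passed as a string.
    let rejected = json!({ "type": "tap", "id": "net0", "mq": "on" });
    // Accepted: 'mq' inserted as a proper boolean value.
    let accepted = json!({ "type": "tap", "id": "net0", "mq": true });
    println!("{rejected}\n{accepted}");
}
```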

Fixes #12136

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-25 17:34:36 +08:00
Alex Lyn
23393d47f6 tests: Enable stability tests for qemu-runtime-rs on nontee
Enable the stability tests for qemu-runtime-rs CoCo on non-TEE
environments

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-25 16:18:37 +08:00
Alex Lyn
f1d971040d tests: Enable run-nerdctl-tests for qemu-runtime-rs
Enable nerdctl tests for qemu-runtime-rs

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-25 16:14:50 +08:00
Alex Lyn
c7842aed16 tests: Enable stability tests for runtime-rs
These tests were previously enabled without qemu-runtime-rs; we enable it in this commit.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-25 16:12:12 +08:00
Alex Lyn
aadf1d6f71 Merge pull request #11932 from Apokleos/enhance-blk-params
runtime-rs: Allow configuration of virtio block queue parameters
2025-11-25 15:24:12 +08:00
Dan Mihai
22d60a36c0 agent: allow disabling detect_initdata_device
Allow users to build the Kata Agent using INIT_DATA=no to disable the
detect_initdata_device() code loop and associated debug log output.

Future additional improvements related to Init Data are tracked by #11532.

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-11-25 02:44:28 +00:00
Fabiano Fidêncio
bb56a2e4d9 runtimeclasses: Fix nvidia-gpu podOverhead
On 69c4fc4e76, I've mistakenly changed the
nvidia-gpu podOverhead while I should only have changed the TEE
nvidia-gpu ones.

Let's move it back to its original value.

Reported-by: Joji Mekkattuparamban <jojim@nvidia.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-24 21:43:29 +01:00
Zvonko Kaiser
55489818d6 gpu: TDX kernel param cleanup
This setting is not needed anymore with Ubuntu 25.10
and the newest QEMU releases for TDX by Ubuntu.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-11-24 15:49:16 +01:00
Steve Horsman
e1e370091c Merge pull request #12128 from fidencio/topic/kata-deploy-nfd-adjust-runtime-classe
kata-deploy: nfd: Patch TEE runtimeclasses when needed
2025-11-24 14:05:43 +00:00
Steve Horsman
d437f875aa Merge pull request #12126 from zvonkok/cold-plug-cleanup
gpu: Cleanup Makefile
2025-11-24 14:01:49 +00:00
Zvonko Kaiser
77089fe5b3 Merge pull request #12115 from nheinemans-asml/main
Kata-deploy: Add tolerations to daemonset and cleanup job
2025-11-24 09:00:42 -05:00
Manuel Huber
331515e1b8 ci: enable security policy for openvpn test
With issue 11777 being resolved, this commit enables openvpn
policy testing. The remaining work on the security policy
required to successfully run this test case was to enable UDP
ports for Service kinds and to use the mount path's last component
instead of the volume name to construct the expected storage
source path.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-11-23 17:23:43 +00:00
Manuel Huber
4f32816ea3 policy: Use mount path instead of volume name
Use the mount path's last component instead of the volume name to
construct the expected storage source path. Example: Name of a
volumeMount is 'openvpn-config' and its mountPath is
'/etc/openvpn/'. Without this change, we use 'openvpn-config' to
calculate the expected storage source path. However, we need to
use 'openvpn', because the shim uses the basename of the
destination path as the source suffix and not the volume name.
For reference, see 'fs_share_linux.go's 'ShareFile' function,
where the filename variable uses 'filepath.Base(m.Destination)'.
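
A minimal Rust sketch of the rule (the shim itself does this in Go via
filepath.Base):

```
use std::path::Path;

fn main() {
    let volume_name = "openvpn-config"; // NOT used for the suffix
    let mount_path = "/etc/openvpn/";
    // Like filepath.Base, file_name() ignores the trailing slash.
    let suffix = Path::new(mount_path).file_name().unwrap().to_str().unwrap();
    assert_eq!(suffix, "openvpn");
    println!("volume={volume_name} suffix={suffix}");
}
```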

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-11-23 17:23:43 +00:00
Manuel Huber
e4123a9848 policy: support UDP based Service types
Support Service kinds using the UDP protocol for a port. An example is
the openvpn-server-service.yaml file, part of the openvpn CI test.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-11-23 17:23:43 +00:00
Fabiano Fidêncio
d0f3eb935e kata-deploy: nfd: Patch TEE runtimeclasses when needed
We've added logic to properly do the bookkeeping of the TEE keys when
using NFD **AND** creating the runtime classes. However, we need to also
take into consideration the case where the runtimeclasses are being
created by the helm template, and in that case we just update what helm
has deployed.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-23 10:27:52 +01:00
Zvonko Kaiser
dce207397c gpu: Cleanup Makefile
Some VARS were introduced but not cleaned up with
the recent cold-plug PR; doing this now.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-11-21 22:03:34 +00:00
Zvonko Kaiser
8afcdae31f Merge pull request #12092 from manuelh-dev/mahuber/cc-gpu-ci-smi-srs
tests: nvidia: cc: Remove nvrc.smi.srs=1 parameter
2025-11-21 08:26:13 -05:00
Steve Horsman
37dd055283 Merge pull request #12090 from stevenhorsman/required-tests-update-14-nov-2025
Required tests update 14 nov 2025
2025-11-21 12:05:05 +00:00
nheinemans-asml
ef9d4e8b0d kata-deploy: Add tolerations value to kata-deploy
This allows the daemonset and cleanup job to run on tainted nodes.

fixes #12114

Signed-off-by: nheinemans-asml <nick.heinemans@asml.com>
Signed-off-by: nheinemans-asml <97238218+nheinemans-asml@users.noreply.github.com>
2025-11-21 09:49:47 +01:00
Manuel Huber
dfc229f51e tests: nvidia: cc: Remove nvrc.smi.srs=1 parameter
Remove the nvrc.smi.srs=1 parameter from the kernel command line.
In CC use cases, the attestation agent is expected to set the GPU
ready state. For the CUDA vectorAdd case where attestation agent
is not being used, we set the ready state by adding the kernel
command line parameter through an annotation.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-11-21 09:35:05 +01:00
Manuel Huber
6c6fc50aa5 tests: nvidia: cc: allow-all policy and init-data
Add an allow-all policy for the CC GPU tests and ensure the init-data
device is being created (hypervisor annotations).

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-11-21 09:24:15 +01:00
Manuel Huber
7e20118c8e tests: nvidia: move secret definitions to bottom
The add_allow_all_policy_to_yaml in tests_common.sh needs some
improvements so that this function can support pod manifests with
different resource kinds. For now, moving the Secret definition
to the bottom so that we can create a default policy for the Pod.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-11-21 09:24:15 +01:00
Manuel Huber
ffd5443637 tests: nvidia: adapt is_aks_cluster
The qemu-nvidia-gpu handlers should not cause is_aks_cluster to
return 1. Otherwise, CI logic will assume these hypervisors run on
AKS hosts, see the following message in CI w/o this change:
INFO: Adapting common policy settings for AKS Hosts

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-11-21 09:24:15 +01:00
Manuel Huber
f2bdd12e5e tests: nvidia: Check KATA_HYPERVISOR var
Fail explicitly when a wrong KATA_HYPERVISOR variable is provided.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-11-21 09:24:15 +01:00
Xuewei Niu
bf967b81cc runtime-rs: Bump cgroups-rs to v0.5.0
The new version fixes some issues with systemd version, path
verification.

Signed-off-by: Xuewei Niu <niuxuewei.nxw@antgroup.com>
2025-11-21 09:06:26 +01:00
Fabiano Fidêncio
6b40b59861 tests: Reduce KBS deployment check flakiness
We currently start a pod that does a `wget` to the KBS address, and
fails after 5 seconds.

By the time it fails and reports back, we can see that KBS is actually
running, but the workflow failed as the checker failed. :-/

Let's give the KBS more time to show up, and the flakiness should
go away.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-20 19:29:26 +01:00
Fabiano Fidêncio
35672ec5ee tests: cc: Test authenticated images with force guest pull
As this should simply work.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-20 19:02:15 +01:00
Fupan Li
b86e7ff42b Merge pull request #12087 from jojimt/device_cold_plug
shim: Support device cold plug with Kubernetes
2025-11-20 19:17:13 +08:00
Joji Mekkattuparamban
7dc292094c shim: go vendor changes for cold plug support
Vendor in the kubelet pod resources API.

Signed-off-by: Joji Mekkattuparamban <jojim@nvidia.com>
2025-11-20 10:58:55 +01:00
Joji Mekkattuparamban
5aa184925a shim: Support device cold plug with Kubernetes
Utilize Kubelet's Pod Resource API to determine device allocations
for the Pod during sandbox creation. Use CDI files to translate the device
IDs to corresponding device paths and perform device injection.

Fixes #12009

Signed-off-by: Joji Mekkattuparamban <jojim@nvidia.com>
2025-11-20 10:58:55 +01:00
Manuel Huber
477ca3980b tests: nvidia: cc: Re-enable multi GPU test case
Use the pod name variable so that kubectl wait finds the pod. Currently,
kubectl waits for nvidia-nim-llama-3-2-nv-embedqa-1b-v2, not for
nvidia-nim-llama-3-2-nv-embedqa-1b-v2-tee

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-11-20 10:05:46 +01:00
Zvonko Kaiser
89cd561340 Merge pull request #12059 from manuelh-dev/mahuber/bb-debug-v2
gpu: introduce a new devkit build flag to produce a rootfs for developers
2025-11-19 13:03:46 -05:00
Steve Horsman
8c6c31555a Merge pull request #12111 from fidencio/topic/ci-fix-erofs-ci
tests: k8s: Fix typo in authenticated tests
2025-11-19 16:08:48 +00:00
Manuel Huber
3966864376 gpu: introduce devkit build flag
Introduce a new devkit parameter which will produce a rootfs
without chiselling. This results in a larger rootfs with various
packages and binaries included, for instance enabling the
use of the debug console.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-11-19 15:50:03 +01:00
Manuel Huber
2c9e0f9f4f gpu: add signed-by to package sources
Pin to a specific key. CUDA package sources in
/etc/apt/sources.list.d already use a specific key.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-11-19 15:50:03 +01:00
Ruoqing He
54bfbf5687 build: Exclude tools from root workspace
There are rust packages being cloned and built inside the
tools/packaging/kata-deploy/local-build/build folder, which may mislead
those packages into thinking they are part of the kata root workspace.
Exclude the directory to avoid that.

Reported-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
2025-11-19 15:49:25 +01:00
Fabiano Fidêncio
ae463642ed tests: k8s: Fix typo in authenticated tests
The person who introduced the check, someone named Fabiano Fidêncio,
forgot a `$` in a variable assignment.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-19 11:59:59 +01:00
Steve Horsman
87b180383e Merge pull request #11802 from kata-containers/dependabot/github_actions/oras-project/setup-oras-1.2.4
build(deps): bump oras-project/setup-oras from 1.2.2 to 1.2.4
2025-11-19 09:58:37 +00:00
dependabot[bot]
ede5ac9c2d build(deps): bump the bit-vec group across 2 directories with 1 update
Bumps the bit-vec group with 1 update in the /src/agent directory: [bit-vec](https://github.com/contain-rs/bit-vec).
Bumps the bit-vec group with 1 update in the /src/tools/agent-ctl directory: [bit-vec](https://github.com/contain-rs/bit-vec).


Updates `bit-vec` from 0.6.3 to 0.8.0
- [Changelog](https://github.com/contain-rs/bit-vec/blob/master/RELEASES.md)
- [Commits](https://github.com/contain-rs/bit-vec/commits)

Updates `bit-vec` from 0.6.3 to 0.8.0
- [Changelog](https://github.com/contain-rs/bit-vec/blob/master/RELEASES.md)
- [Commits](https://github.com/contain-rs/bit-vec/commits)

---
updated-dependencies:
- dependency-name: bit-vec
  dependency-version: 0.8.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: bit-vec
- dependency-name: bit-vec
  dependency-version: 0.8.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: bit-vec
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-19 10:43:25 +01:00
stevenhorsman
b75d90b483 ci: Comment out snp ci from required-tests
The snp CI has not been required for a while and has recently been
broken, so comment it out from the list of required jobs.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-19 09:39:36 +00:00
stevenhorsman
ae71921be2 ci: Update build-checks name in required-tests
The build-checks job names have changed, so update the required-tests list to match.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-19 09:39:36 +00:00
stevenhorsman
112ed9bb46 ci: Comment out run-nydus from required-tests
The run-nydus tests are not stable and are blocking PRs, so make them
non-required temporarily until they can be looked at.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-19 09:38:38 +00:00
Fupan Li
478a5ff693 Merge pull request #12109 from Apokleos/enable-cocodev-rs
tests: Enable AUTO_GENERATE_POLICY for qemu-coco-dev-runtime-rs
2025-11-19 12:05:22 +08:00
Alex Lyn
1da225efc5 tests: Enable AUTO_GENERATE_POLICY for qemu-coco-dev-runtime-rs
Enable auto-generate policy on cbl-mariner Hosts for
qemu-coco-dev-runtime-rs if the user didn't specify an
AUTO_GENERATE_POLICY value.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-19 10:44:03 +08:00
Alex Lyn
8d85548711 Merge pull request #12102 from Apokleos/rs-copyfile-devcgrp
runtime-rs: Clear Linux.Resources.Devices completely and correct the guest path for container mount binding
2025-11-19 09:05:59 +08:00
Fabiano Fidêncio
8c02b5b913 tests: nvidia: cc: Temporarily skip multi GPU for nim tests
We will re-enable this one later on once the changes to properly cold
plug multi GPUs are merged.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-18 22:29:42 +01:00
Fabiano Fidêncio
69c4fc4e76 kata-deploy: Adjust podOverhead for GPU TEEs
Let's just move the podOverhead to a gigantic value, as we do need pod
sandboxes as big as that, and we've noticed QEMU being OOM killed with
smaller overheads.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-18 22:29:42 +01:00
Fabiano Fidêncio
94ed4051b0 tests: nvidia: cc: Increase RAM for NIM pods
Those need to pull the models inside the guest, and the guest has 50% of
its memory "allowed" to be used as tmpfs, so we gotta use the RAM that
we have.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-18 22:29:42 +01:00
Fabiano Fidêncio
e5062a056e tests: nvidia: cc: Adjust timeouts on NIM pods
Timeout increases for confidential computing slowness:
* livenessProbe:
  * initialDelaySeconds: 15 → 120 seconds
  * timeoutSeconds: 1 → 10 seconds
  * failureThreshold: 3 → 10

* readinessProbe:
  * initialDelaySeconds: 15 → 120 seconds
  * timeoutSeconds: 1 → 10 seconds
  * failureThreshold: 3 → 10

* startupProbe:
  * initialDelaySeconds: 40 → 180 seconds
  * timeoutSeconds: 1 → 10 seconds
  * failureThreshold: 180 → 300

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-18 22:29:42 +01:00
Fabiano Fidêncio
dee6f2666b runtime: nvidia: Increase the guest pull timeout to 20 minutes
Yes, we're dealing with a combination of large images and image-rs's
concurrent image-layer pulling being suboptimal.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-18 22:29:42 +01:00
Fabiano Fidêncio
6be43b2308 tests: nvidia: Retry kubectl commands
As with CoCo some of the commands may take longer, way longer than
expected.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-18 22:29:42 +01:00
Fabiano Fidêncio
bb5bf6b864 tests: nvidia: nims: Use the current auths format for KBS
We cannot use the same format used for docker, as it includes username
and password, while what's expected when using Trustee does not.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-18 22:29:42 +01:00
Fabiano Fidêncio
92da54c088 tests: nvidia: cc: Enable NIM tests
Now that we've bumped Trustee to a version that supports the NVIDIA
remote verifier, let's re-enable the tests.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-18 22:29:42 +01:00
Steve Horsman
74254cba8f Merge pull request #12106 from stevenhorsman/gatekeeper-paging-reduction
ci: Adjust gatekeeper's job fetch
2025-11-18 14:08:26 +00:00
Fabiano Fidêncio
8eca0814bd tests: Run authenticated tests with experimental_force_guest_pull
As it should be supported.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-18 14:46:48 +01:00
Fabiano Fidêncio
5beb1af202 tests: Pass EXPERIMENTAL_FORCE_GUEST_PULL to the test
Right now we have only been passing the env var to the deployment
script, but we really need to pass it to the tests script as well.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-18 14:46:48 +01:00
Markus Rudy
638cad18ef Merge pull request #11978 from burgerdev/genpolicy-test-refactor
genpolicy: prepare integration tests for programmatic modification
2025-11-18 09:54:40 +01:00
stevenhorsman
9f0fea1e34 ci: Adjust gatekeeper's job fetch
Try to reduce the page limit of each job request to lower the chances of
us tripping over GitHub's 10s API limit.
All credit to @burgerdev for the investigation and suggestion!

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-18 08:22:36 +00:00
Alex Lyn
6ceacee0b9 runtime-rs: Add queue_size and num_queues for block volumes
Add the related block queue_size and num_queues for volumes based on
block devices. This is very important for IO performance.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-18 14:53:43 +08:00
Alex Lyn
30a9a8b4ec runtime-rs: Add queue_size and num_queues for block device
Add the queue_size and num_queues in block device config when the
block device is handled.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-18 14:53:43 +08:00
Alex Lyn
9b0204a2de runtime-rs: Set Clh's disk queue_size and num_queues
Clh's previous settings for disk queue_size and num_queues were
hardcoded; they should be configurable with user-defined values.
This commit addresses that by passing these settings through.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-18 14:53:43 +08:00
Alex Lyn
f19c48505c runtime-rs: Introduce queue_size and num_queues in BlockConfig
Usually we pass the related block config via BlockConfig; to let users
set queue_size and num_queues in a user-friendly way, these two fields
are introduced in BlockConfig.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-18 14:53:43 +08:00
Alex Lyn
e958993348 kata-types: Introduce queue_size and num_queues within BlockDeviceInfo
Add two fields, queue_size and num_queues, to BlockDeviceInfo to allow
users to set the related items via configuration.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-18 14:53:43 +08:00
Alex Lyn
780c45de23 runtime-rs: Add support for queue_size and num_queues within configurations
Add related items for block device queue size and num queues in
configurations, so users can set them via configuration.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-18 14:53:43 +08:00
Steve Horsman
ac021e2ab9 Merge pull request #11563 from RuoqingHe/single-workspace
build: Introduce root workspace for rust components
2025-11-18 06:36:18 +00:00
Alex Lyn
d071384bba runtime-rs: Clear Linux.Resources.Devices completely
The current implementation causes issues with the Agent Policy
nontee CI tests, as Kata-Agent does not allow any configuration
for `count(Linux.Resources.Devices) == 0`.

This commit ensures that Linux.Resources.Devices, including all its
values, is completely cleared from the OCI Runtime Specification before
being passed to the Kata-Agent.

This addresses the CI failure by enforcing the required empty state for
the Devices cgroup configuration.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-18 13:40:09 +08:00
Xuewei Niu
ca8b3300d3 Merge pull request #11620 from zhangckid/indep_iothreads_upstream
Runtime/QEMU: Introduce virtio-blk with iothreads and enable Indep iothreads framework
2025-11-18 11:08:51 +08:00
Alex Lyn
5982e66503 runtime-rs: Ensure unique guest path for container mount binding
Previously, the CopyFile implementation attempted to reuse existing
guest paths for subsequent containers within the same Pod. This
prevented correct bind mounting of shared configurations (e.g.,
ConfigMaps, Service Accounts) into the later containers within a
multi-container pod, as they lacked their own allocated guest path.

This commit modifies the logic to create a unique guest path for every
container that requires file propagation.
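
A minimal sketch of the idea, with a purely illustrative path layout
rather than the exact runtime-rs one:

```
fn guest_path(sandbox_id: &str, container_id: &str, file: &str) -> String {
    // Every container gets its own guest path for the propagated file.
    format!("/run/kata-containers/shared/containers/{sandbox_id}/{container_id}-{file}")
}

fn main() {
    let c1 = guest_path("sbx1", "c1", "kube-api-access");
    let c2 = guest_path("sbx1", "c2", "kube-api-access");
    assert_ne!(c1, c2); // no reuse across containers in the same pod
    println!("{c1}\n{c2}");
}
```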

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-18 11:03:26 +08:00
Fupan Li
f791be1abb Merge pull request #12064 from Apokleos/policy-optional-path
genpolicy: Make cpath compatible with both runtime-rs and runtime-go
2025-11-18 10:19:26 +08:00
Ruoqing He
e6b24cd789 build: Exclude crates with no workspace setup
Crates with no workspace setup would consider themselves part of the
root workspace, which is not ready for them yet. Exclude
them for now.

Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
2025-11-18 01:39:48 +00:00
Ruoqing He
6068242bf1 build: Move dragonball to root workspace
Move dragonball and all the members of its workspace into the root
workspace.

Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
2025-11-18 01:39:48 +00:00
Ruoqing He
3fbe693658 build: Introduce root workspace for rust components
Add Cargo.toml at repo root and use this root workspace for as many
Rust components of Kata Containers as possible. This enables us to
share a common Cargo.lock file and reduce the noise from dependabot.

Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
2025-11-18 01:39:48 +00:00
Steve Horsman
650ada7bcc Merge pull request #12101 from stevenhorsman/release/3.23.0
release: Bump version to 3.23.0
2025-11-17 21:09:45 +00:00
stevenhorsman
70f1f4a3ac release: Bump version to 3.23.0
Bump VERSION and helm-chart versions

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-17 19:27:25 +00:00
stevenhorsman
c47e8d0ab8 kata-ctl: update backtrace and local references
Similar to #12075, bump backtrace to 0.3.76 to remove the dependency
on adler, which is unmaintained - contributing to mitigating RUSTSEC-2025-0056
As a side effect this brought in loads of other crate changes, which I think are due
to it bumping the local dependencies that this package builds on.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-17 20:13:04 +01:00
stevenhorsman
d16620bae1 runk: update backtrace to 0.3.76
Similar to #12075, bump backtrace to remove the dependency
on adler, which is unmaintained - contributing to mitigating RUSTSEC-2025-0056

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-17 20:13:04 +01:00
stevenhorsman
0b259e4fcf agent-ctl: update backtrace to 0.3.76
Similar to #12075, bump backtrace to remove the dependency
on adler, which is unmaintained - contributing to mitigating RUSTSEC-2025-0056

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-17 20:13:04 +01:00
stevenhorsman
4abf79f16f genpolicy: update backtrace to 0.3.76
Similar to #12075, bump backtrace to remove the dependency
on adler, which is unmaintained - contributing to mitigating RUSTSEC-2025-0056

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-17 20:13:04 +01:00
stevenhorsman
4158d9a94a runtime-rs: update flate2 & backtrace
Similar to #12075, bump flate2 and backtrace to remove the dependency
on adler, which is unmaintained - contributing to mitigating RUSTSEC-2025-0056

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-17 20:13:04 +01:00
stevenhorsman
fe10db233c runtime-rs: Remove libbacktrace feature from backtrace
This feature was removed in https://github.com/rust-lang/backtrace-rs/pull/615
which shows that the implementation was removed over two years ago, so
get rid of this feature, so we can move to newer versions

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-17 20:13:04 +01:00
stevenhorsman
398e7987cd dragonball: update flate2 & backtrace
Similar to #12075, bump flate2 and backtrace to remove the dependency
on adler, which is unmaintained - contributing to mitigating RUSTSEC-2025-0056

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-17 20:13:04 +01:00
Steve Horsman
04c7d11689 Merge pull request #12044 from lifupan/fix_update_interface
runtime: fix the issue of update interface error
2025-11-17 14:45:36 +00:00
Fupan Li
763a0d8675 runtime: fix the issue of update interface error
Since network device hotplug is an asynchronous operation,
it's possible that the hotplug operation has returned but
the network device isn't ready in the guest yet, so it's better
to retry the operation until the device is ready in the guest.
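
A minimal sketch of such a bounded retry, with assumed attempt counts
and delays:

```
use std::{thread, time::Duration};

// Placeholder for the real guest-side update, assumed to fail while the
// hotplugged device is not yet ready.
fn update_interface(attempt: u32) -> Result<(), &'static str> {
    if attempt < 3 { Err("device not ready") } else { Ok(()) }
}

fn main() {
    for attempt in 1..=5 {
        match update_interface(attempt) {
            Ok(()) => {
                println!("interface updated on attempt {attempt}");
                return;
            }
            Err(e) => {
                println!("attempt {attempt}: {e}, retrying");
                thread::sleep(Duration::from_millis(100));
            }
        }
    }
    eprintln!("giving up: device never became ready");
}
```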

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2025-11-17 13:58:36 +01:00
Steve Horsman
b3eb794662 Merge pull request #12098 from stevenhorsman/csi-kata-direct-volume-xz-0.5.15-bump
csi-kata-directvolume: Bump xz module
2025-11-17 12:47:28 +00:00
Fabiano Fidêncio
75996945aa kata-deploy: try-kata-values.yaml -> values.yaml
This makes the user experience better, as the admin can deploy Kata
Containers without having to download / set up any additional file.

Of course, if the admin wants something more specific, examples are
provided.

Tests and documentation are updated to reflect this change.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-17 12:16:17 +01:00
Alex Lyn
71a9ecf9f8 Merge pull request #12095 from lifupan/fix_vcpu_number
runtime-rs: fix the issue of wrong vcpu number
2025-11-17 19:11:48 +08:00
stevenhorsman
502a3ce3b6 csi-kata-directvolume: Bump xz module
Bump github.com/ulikunitz/xz to v0.5.15, to remediate vulnerability
GO-2025-3922

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-17 10:20:50 +00:00
Markus Rudy
b771bb6ed3 genpolicy: log requests as jsonlines
The current format of genpolicy request logs looks a bit like JSON, but
it does not parse out of the box and needs post-processing with sed, for
example.

This commit changes the log format to jsonlines[1], which is basically
newline-delimited compact JSON values. Compared to standard JSON, this
allows streaming output. The resulting file can be converted and
processed programmatically, for example with `jq -s`.

The fields are also adjusted to match the field names of TestRequest, so
that the logged requests can be used immediately in tests.

[1]: https://jsonlines.org/
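
A minimal sketch of the output format, assuming serde_json and
illustrative field names:

```
use serde_json::json;

fn main() {
    // One compact JSON value per line: streamable, and `jq -s` can
    // slurp the lines back into a single JSON array.
    for ep in ["CreateContainerRequest", "ExecProcessRequest"] {
        println!("{}", json!({ "ep": ep, "allowed": true }));
    }
}
```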

Signed-off-by: Markus Rudy <mr@edgeless.systems>
2025-11-17 09:01:00 +01:00
Markus Rudy
eb6cf025b3 genpolicy: format testcases.json and sort by key
This should allow keeping future diffs minimal.

The files were formatted with `jq -S`, which should be used after future
updates to the test case files.

Signed-off-by: Markus Rudy <mr@edgeless.systems>
2025-11-17 09:01:00 +01:00
Markus Rudy
851f8258af genpolicy: move testcase request type out of struct
Storing the request type outside the request object has two benefits:

* The request JSON passed to the Rego engine matches more closely what
  would be passed by the agent (no `type` field).
* If we want to update the requests, it's easier to insert them into a
  dedicated field, rather than inserting them and amending the type
  field.

This is a first step towards programmatic updates of testcase files.

This commit also adds the 'Request' suffix to the test case enum, such
that we can use the 'ep' input for allow_request directly.

Signed-off-by: Markus Rudy <mr@edgeless.systems>
2025-11-17 09:01:00 +01:00
zhangchen.kidd
914063bcdd runtime: documentation: Add virtio-blk support iothread comments in docs
Add comments making the "EnableIOThreads" flag a switch
for the virtio-blk (IndepIOThreads-based) driver.

Signed-off-by: zhangchen.kidd <zhangchen.kidd@jd.com>
2025-11-17 15:55:03 +08:00
zhangchen.kidd
9128112e3d runtime: qemu: Add Independent IOThread support for virtio-blk
Attach hotplugged virtio-blk devices to independent IOThread 0 by default
when EnableIOThreads and IndepIOThreads are enabled.

Signed-off-by: zhangchen.kidd <zhangchen.kidd@jd.com>
2025-11-17 15:55:03 +08:00
zhangchen.kidd
fea954df7a runtime: qemu: qmp: Add iothread args for QMP ExecutePCIDeviceAdd
QEMU already supports device_add with iothread args.
Give Kata the ability to hotplug PCI devices with IOThreads.
Currently just QEMU is supported as the hypervisor; it's unclear
whether this works for stratovirt.

Signed-off-by: zhangchen.kidd <zhangchen.kidd@jd.com>
2025-11-17 15:55:03 +08:00
zhangchen.kidd
af203b7dee runtime: qemu: introduce setup iothread function
Move the original virtio-scsi iothread and the new independent
iothread into a dedicated method for handling the related logic.

Signed-off-by: zhangchen.kidd <zhangchen.kidd@jd.com>
2025-11-17 15:55:03 +08:00
zhangchen.kidd
d20712aa9e runtime: qemu: Add comments for virtio-scsi iothread args
In the current implementation, only virtio-scsi uses this
iothread path.

Signed-off-by: zhangchen.kidd <zhangchen.kidd@jd.com>
2025-11-17 15:55:03 +08:00
zhangchen.kidd
f9d4829e77 runtime: qemu: Add indep_iothreads for QEMU hypervisor toml
Add indep_iothreads args for QEMU related configuration toml.
The default value is 0.

Signed-off-by: zhangchen.kidd <zhangchen.kidd@jd.com>
2025-11-17 15:55:03 +08:00
zhangchen.kidd
c3d3684f81 runtime: Introduce independent IOThreads framework
Introduce independent IOThread framework for Kata container.

What is indep_iothreads:
This new feature introduces a way to pre-allocate IOThreads for the
QEMU hypervisor (other hypervisors may be able to support it too).
Independent IOThreads enable IO to be processed in separate threads,
generally improving the performance of each module by avoiding running
them in the QEMU main loop.

Why indep_iothreads is needed:
In the Kata container implementation, many devices are based on the
hotplug mechanism. The real workload container may not share the same
lifecycle as the VM; it may be required to hotplug/unplug new disks
or other devices without destroying the VM. So we can keep the
IOThreads with the VM as an IOThread pool (some devices need multiple
iothreads for performance, like virtio-blk vq-mapping), and hotplugged
devices can attach to/detach from an IOThread according to business
needs. At the same time, QEMU also supports "x-blockdev-set-iothread"
to change iothreads (but it needs to stop the VM for data safety).
Current QEMU has many devices that support iothreads: virtio-blk,
virtio-scsi, virtio-balloon, monitor, colo-compare... etc.

How it works:
Add a new item in the hypervisor struct named "indep_iothreads" in the
toml. The default value is 0, and it reuses the original
"enable_iothreads" as the switch. If "indep_iothreads" != 0 and
"enable_iothreads" = true, it will add a qmp object -iothread
indepIOThreadsPrefix_No at VM startup. The first user is virtio-blk,
which will attach indep_iothread_0 by default when iothreads are
enabled for virtio-blk.
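
A minimal Rust sketch of the gating rule (the runtime itself is Go, and
the id prefix here only loosely follows the message above):

```
fn iothread_ids(enable_iothreads: bool, indep_iothreads: u32) -> Vec<String> {
    // Independent IOThread objects are only created when the original
    // enable_iothreads switch is on AND indep_iothreads is non-zero.
    if !enable_iothreads || indep_iothreads == 0 {
        return Vec::new();
    }
    (0..indep_iothreads).map(|n| format!("indep_iothread_{n}")).collect()
}

fn main() {
    assert!(iothread_ids(false, 2).is_empty());
    assert!(iothread_ids(true, 0).is_empty());
    println!("{:?}", iothread_ids(true, 2)); // ["indep_iothread_0", "indep_iothread_1"]
}
```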

Thanks
Chen

Signed-off-by: zhangchen.kidd <zhangchen.kidd@jd.com>
2025-11-17 15:55:01 +08:00
Fupan Li
c74a2650e9 runtime-rs: fix the issue of wrong vcpu number
In commit 1f95d9401b
runtime-rs: change representation of default_vcpus from i32 to f32,

When the vCPU number is less than 1.0, directly converting the
floating-point number to an integer truncates it to 0. Therefore,
it needs to be rounded up before being converted back to an integer.
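
A minimal sketch of the conversion issue:

```
fn main() {
    let default_vcpus: f32 = 0.5;
    let truncated = default_vcpus as u32;         // 0 -- the bug
    let rounded_up = default_vcpus.ceil() as u32; // 1 -- the fix
    println!("truncated={truncated} rounded_up={rounded_up}");
}
```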

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2025-11-17 10:09:51 +08:00
Alex Lyn
daca7b268b genpolicy: Make cpath compatible with both runtime-rs and runtime-go
Update the `cpath` variable in the policy template to support the
optional `/passthrough` subpath used by runtime-rs. This ensures
that mount source path validation works correctly for both runtime
implementations.

By changing `cpath` to include the `(?:/passthrough)?` regular
expression fragment, we make the `/passthrough` segment optional.
The updated `cpath`:
`/run/kata-containers/shared/containers(?:/passthrough)?`

This single regex pattern now correctly matches both:
1.`/run/kata-containers/shared/containers/<sandbox-id>/...`
(runtime-go)
2.`/run/kata-containers/shared/containers/passthrough/<sandbox-id>/...`
(runtime-rs)

This elegantly resolves the compatibility issue without needing to add
separate or conditional logic to the policy rules, making the policy
more robust and maintainable.
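
A minimal sketch, assuming the regex crate, showing that the single
pattern matches both layouts:

```
use regex::Regex;

fn main() {
    let cpath = "/run/kata-containers/shared/containers(?:/passthrough)?";
    let re = Regex::new(&format!("^{cpath}/")).unwrap();
    // runtime-go layout
    assert!(re.is_match("/run/kata-containers/shared/containers/sbx1/rootfs"));
    // runtime-rs layout
    assert!(re.is_match("/run/kata-containers/shared/containers/passthrough/sbx1/rootfs"));
    println!("both layouts match");
}
```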

Fixes: #12063

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-17 09:36:19 +08:00
Fabiano Fidêncio
2e000129a9 kata-deploy: tests: Add example values files for easy Kata deployment
Add three example values files to make it easier for users to try out
different Kata Containers configurations:

- try-kata.values.yaml: Enables all available shims
- try-kata-tee.values.yaml: Enables only TEE/confidential computing shims
- try-kata-nvidia-gpu.values.yaml: Enables only NVIDIA GPU shims

These files use the new structured configuration format and serve as
ready-to-use examples for common deployment scenarios.

Also update the README.md to document these example files and how to use them.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-15 09:36:14 +01:00
Fabiano Fidêncio
8717312599 tests: Migrate helm_helper to use new structured configuration
Update the helm_helper function in gha-run-k8s-common.sh to use the
new structured configuration format instead of the legacy env.* format.

All possible settings have been migrated to the structured format:
- HELM_DEBUG now sets root-level 'debug' boolean
- HELM_SHIMS now enables shims in structured format with automatic
  architecture detection based on shim name
- HELM_DEFAULT_SHIM now sets per-architecture defaultShim mapping
- HELM_EXPERIMENTAL_SETUP_SNAPSHOTTER now sets snapshotter.setup array
- HELM_ALLOWED_HYPERVISOR_ANNOTATIONS now sets per-shim allowedHypervisorAnnotations
- HELM_SNAPSHOTTER_HANDLER_MAPPING now sets per-shim containerd.snapshotter
- HELM_AGENT_HTTPS_PROXY and HELM_AGENT_NO_PROXY now set per-shim agent proxy settings
- HELM_PULL_TYPE_MAPPING now sets per-shim forceGuestPull/guestPull settings
- HELM_EXPERIMENTAL_FORCE_GUEST_PULL now sets per-shim forceGuestPull/guestPull

The test helper automatically determines supported architectures for
each shim (e.g., qemu-se supports s390x, qemu-cca supports arm64,
qemu-snp/qemu-tdx support amd64, etc.) and applies per-shim settings
to the appropriate shims based on HELM_SHIMS.

Only HELM_HOST_OS remains in legacy env.* format as it doesn't have
a structured equivalent yet.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-15 09:36:14 +01:00
Fabiano Fidêncio
aa89fda7fc kata-deploy: Document new structured configuration and deprecation
Add comprehensive documentation for the new structured configuration
format, including:

- Migration guide from legacy env.* format
- List of deprecated fields with removal timeline (2 releases)
- Examples of the new structured format
- Explanation of key benefits
- Backward compatibility notes

The documentation makes it clear that the legacy format is deprecated
but will continue to work during the transition period.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-15 09:36:14 +01:00
Fabiano Fidêncio
119893b8e8 kata-deploy: Add backward compatibility for legacy env.* configuration
This commit adds backward compatibility support to ensure existing
configurations using the legacy env.* format continue to work.

The helper functions now check for legacy env.* values first, and
only fall back to the new structured format if legacy values are
not set. This allows for gradual migration without breaking
existing deployments.

Backward compatibility is maintained for:
- env.shims, env.shims_* (per architecture)
- env.defaultShim, env.defaultShim_* (per architecture)
- env.allowedHypervisorAnnotations
- env.snapshotterHandlerMapping_* (per architecture)
- env.pullTypeMapping_* (per architecture)
- env.agentHttpsProxy, env.agentNoProxy
- env._experimentalSetupSnapshotter
- env._experimentalForceGuestPull_* (per architecture)
- env.debug

Legacy env vars (SHIMS, DEFAULT_SHIM, etc.) are still set in the
DaemonSet when using the old format to maintain full compatibility.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-15 09:36:14 +01:00
Fabiano Fidêncio
ae3fb45814 kata-deploy: Introduce structured configuration format for shims
This commit introduces a new structured configuration format for
configuring Kata Containers shims in the Helm chart. The new format
provides:

- Per-shim configuration with enabled/supportedArches
- Per-shim snapshotter, guest pull, and agent proxy settings
- Architecture-aware default shim configuration
- Root-level debug and snapshotter setup configuration

All shims are disabled by default and must be explicitly enabled.
This provides better type safety and clearer organization compared
to the legacy env.* string-based format.

The templates are updated to use the new structure exclusively.
Backward compatibility will be added in a follow-up commit.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-15 09:36:14 +01:00
Fabiano Fidêncio
e85d584e1c kata-deploy: script: Fix FOR_ARCH handling
As some of the global vars can be empty, we should actually check
their _FOR_ARCH version instead.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-15 09:36:14 +01:00
Fabiano Fidêncio
397289c67c kata-deploy: script: Handle {https,no}_proxy per shim
As we're making the values.yaml more user friendly, we actually have to
handle the https_proxy and no_proxy entries per shim, instead of having
this globally available, as this will only affect images being pulled
inside the guest (as in, when using TEE variations of the shims).

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-15 09:36:14 +01:00
Fabiano Fidêncio
f62d9435a2 runtimeclasses: firecracker is not a valid one
At least not for now, and it was mistakenly added to the list.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-15 09:36:14 +01:00
nheinemans-asml
3380458269 kata-deploy: Add daemonsets to the RBAC
Add missing rules which are necessary for dealing with
daemonsets as kata-deploy know checks for the NFD
daemonset as part of its script.

fixes #12083

Signed-off-by: nheinemans-asml <97238218+nheinemans-asml@users.noreply.github.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-14 17:16:58 +01:00
Simon Kaegi
716c55abdd kernel: adds nft bridging and filtering support for IPv4 and IPv6
Adds a practical set of kernel config used by docker-in-docker and kind
for network bridging and filtering. It also includes the matching IPv6
support to allow tools like kind that require IPv6 network policies to
work out of the box.

This support includes:
- nftables reject and filtering support for inet/ipv4/ipv6
- Bridge filtering for container-to-container traffic
- IPv6 NAT, filtering, and packet matching rules for network policies
- VXLAN and IPsec crypto support for network tunneling
- TMPFS POSIX ACL support for filesystem permissions

The configs are organized across fragment files:
- common/fs.conf: TMPFS ACL support
- common/crypto.conf: IPsec/VXLAN crypto algorithms
- common/network.conf: VXLAN, IPsec ESP, nftables bridge/ARP/netdev
- common/netfilter.conf: IPv6 netfilter stack and nftables advanced features

Fixes: #11886

Signed-off-by: Simon Kaegi <simon.kaegi@gmail.com>
2025-11-14 15:57:47 +01:00
Dan Mihai
5cc1024936 ci: k8s: AUTO_GENERATE_POLICY for coco-dev
Re-enable AUTO_GENERATE_POLICY for coco-dev Hosts, unless PULL_TYPE is
"experimental-force-guest-pull", or the caller specified a different
value for AUTO_GENERATE_POLICY.

Auto-generated Policy has been disabled accidentally and recently for
these Hosts, by a GHA workflow change.

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-11-14 15:53:34 +01:00
Dan Mihai
73ad83e1cc genpolicy: update workaround for guest pull
Don't skip anymore parsing the pause container image when using the
recently updated AKS pause container handling - i.e. when
pause_container_id_policy == "v2".

This was the easiest CI fix for guest pull + new AKS given the *current*
tests. When adding *new* UID/GID/AdditionalGids tests in the future,
these workarounds might need additional updates.

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-11-14 15:53:34 +01:00
Steve Horsman
7bcb971398 Merge pull request #12075 from burgerdev/genpolicy-archived-deps
retire `adler` dependency
2025-11-14 14:51:47 +00:00
Steve Horsman
1d0d066869 Merge pull request #12069 from Amulyam24/static-checks-ppc
github: run agent checks for Power on ppc64le instead of ubuntu-24.04-ppc64le
2025-11-14 10:18:37 +00:00
Markus Rudy
dd59131924 runtime-rs: update flate2 to 1.1.5
The update removes the deprecated adler crate from our dependencies. In
addition, we're switching to the default backend (miniz_oxide), which is
a pure Rust implementation and thus much more portable. The performance
impact is negligible, because flate2 is only used for initdata
decompression, which is limited to a couple of MiB anyway.

Signed-off-by: Markus Rudy <mr@edgeless.systems>
2025-11-14 11:11:44 +01:00
Markus Rudy
3949492f19 genpolicy: update flate2 to 1.1.5
The update removes the deprecated adler crate from our dependencies. In
addition, we're switching to the default backend (miniz_oxide), which is
a pure Rust implementation and thus much more portable. The performance
impact is acceptable for a developer tool.

Signed-off-by: Markus Rudy <mr@edgeless.systems>
2025-11-14 11:10:29 +01:00
Steve Horsman
0ab71771ab Merge pull request #11447 from kata-containers/runtime-rs-qemu-coco-dev-config
Runtime rs qemu coco dev config
2025-11-13 19:12:57 +00:00
stevenhorsman
1ef3e3b929 ci: Switch gatekeeper auth header
The GitHub API documentation suggests that `Authorization: Bearer <YOUR-TOKEN>`
is the way to set the auth token, but it also mentions that `token`
should work, so it's unclear if this will help much, but it shouldn't harm.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-13 19:01:21 +01:00
stevenhorsman
b7abcc4c37 tests: Fix wildcard skip in k8s-cpu-ns
The formatting wasn't quite right, so the `qemu-coco-dev-runtime-rs`
hypervisor wasn't skipping this test

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-13 14:21:05 +00:00
Alex Lyn
bda6bbcad3 runtime-rs: Set static_sandbox_resource_mgmt to true within nontee
Introduce a flag `DEFSTATICRESOURCEMGMT_COCO` for setting static sandbox
resource management, defaulting to true, and use it to set the
`static_sandbox_resource_mgmt` item in the configuration.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-13 14:18:43 +00:00
stevenhorsman
b51af53bc7 tests/k8s: call teardown_common in some policy tests
The teardown_common will print the description of the running pods, kill
them all and print the system's syslogs afterwards.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-13 14:18:43 +00:00
Alex Lyn
efc6aee4f6 runtime-rs: Support agent policy
Support agent policy within runtime-rs.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-13 14:18:43 +00:00
stevenhorsman
79082171ca workflows: Add Delete AKS cluster timeout
When testing this branch, on several occasions the Delete
AKS cluster step has hung for multiple hours, so add a timeout
to prevent this.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-13 14:18:43 +00:00
stevenhorsman
0335012824 tests/k8s: Enable tests for qemu-coco-dev-runtime-rs
Add the runtime class to the non-tee tests and
enable it to run in the test code

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-13 14:18:43 +00:00
stevenhorsman
a1ddd2c3dd kata-deploy: Add kata-qemu-coco-dev-runtime-rs runtime class
Add the runtime class and shim references for the new
non-tee runtime-rs class.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-13 14:18:43 +00:00
Alex Lyn
64da581f6e kata-types: Support create_container_timeout set within configuration
Since it aligns with the create_container_timeout definition in
runtime-go, we need to set the value in configuration.toml in seconds,
not milliseconds. We must also convert it to milliseconds when the
configuration is loaded for request_timeout_ms.
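
A minimal sketch of the unit conversion (the 60s value is an assumed
example):

```
fn main() {
    // Value from configuration.toml, in seconds (as in runtime-go).
    let create_container_timeout: u64 = 60;
    // Converted to milliseconds when the configuration is loaded.
    let request_timeout_ms = create_container_timeout * 1000;
    assert_eq!(request_timeout_ms, 60_000);
    println!("request_timeout_ms = {request_timeout_ms}");
}
```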

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-13 14:18:43 +00:00
stevenhorsman
af2c2d9d00 runtime-rs: Add qemu-coco-dev-runtime-rs
Create non-tee runtime class for runtime-rs qemu CoCo development
without requiring TEE hardware. Based on the qemu-runtime-rs
config, but with updated guest image, kernel and shared_fs

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-13 14:18:43 +00:00
Amulyam24
b32b54c4af github: do not run agent checks for Power on ubuntu-24.04-ppc64le
The new environment of Power runners for agent checks is causing two test case failures
w.r.t. selinux and inode, which need further understanding; this is mostly an issue
due to the environment change and not to do with the agent.

Fall back to running agent checks on the original ppc64le self-hosted runners.

Signed-off-by: Amulyam24 <amulmek1@in.ibm.com>
2025-11-13 15:56:43 +05:30
Gao Xiang
657c4406cd runtime: Add preliminary support for EROFS native rwlayers
So that the writable data will be written to a separate storage
instead of tmpfs in the guest.

Note that a cleaner way would be to use the new containerd custom
mount type, but I don't have time for this for now.

More details, see:
https://github.com/containerd/containerd/blob/v2.2.0/docs/snapshotters/erofs.md#quota-support

Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
2025-11-13 09:55:06 +01:00
Steve Horsman
92758a17fe Merge pull request #12078 from kata-containers/switch-to-ubuntu-24.04-arm-runner
workflows: Switch to ubuntu-24.04-arm runner
2025-11-12 16:35:52 +00:00
stevenhorsman
ba56a2c372 workflows: Switch to ubuntu-24.04-arm runner
As the arm 22.04 runner isn't working at the moment, let's test the
24.04 version to see if that is better.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-12 15:37:09 +00:00
Fabiano Fidêncio
a04cdbc40f tests: Enforce qemu-coco-dev for experimental_force_guest_pull
The fact that we were not explicitly setting the VMM was leading to us
testing with the default runtime class (qemu). :-/

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-12 16:07:05 +01:00
Wainer Moschetta
e31313ce9e Merge pull request #11030 from ldoktor/webhook2
tools.kata-webhook: Add support for only-filter
2025-11-12 11:21:23 -03:00
Hyounggyu Choi
2dec247a54 Merge pull request #12038 from lifupan/fix_smaller-memeory
runtime-rs: fix the issue of hot-unplug memory smaller
2025-11-12 11:22:04 +01:00
dependabot[bot]
c715d8648c build(deps): bump oras-project/setup-oras from 1.2.2 to 1.2.4
Bumps [oras-project/setup-oras](https://github.com/oras-project/setup-oras) from 1.2.2 to 1.2.4.
- [Release notes](https://github.com/oras-project/setup-oras/releases)
- [Commits](5c0b487ce3...22ce207df3)

---
updated-dependencies:
- dependency-name: oras-project/setup-oras
  dependency-version: 1.2.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-12 09:45:27 +00:00
Markus Rudy
2c8d0688f2 Merge pull request #12068 from katexochen/p/full-controllers
genpolicy: support full DeploymentSpec, JobSpec; cleanup CronJobSpec
2025-11-12 10:35:38 +01:00
Fabiano Fidêncio
6d3c20bc45 riscv: Introduce its own nightly tests
By doing this, those interested in RISC-V support can still have
good visibility of its state, without the extra noise in our CI.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-12 09:46:17 +01:00
Zvonko Kaiser
d783e59b42 Merge pull request #12055 from fidencio/topic/coco-bump-trustee
versions: Bump Trustee
2025-11-12 02:48:16 -05:00
dependabot[bot]
edacdcb0bc build(deps): bump github.com/opencontainers/selinux in /src/runtime
Bumps [github.com/opencontainers/selinux](https://github.com/opencontainers/selinux) from 1.12.0 to 1.13.0.
- [Release notes](https://github.com/opencontainers/selinux/releases)
- [Commits](https://github.com/opencontainers/selinux/compare/v1.12.0...v1.13.0)

---
updated-dependencies:
- dependency-name: github.com/opencontainers/selinux
  dependency-version: 1.13.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-11 23:15:40 +01:00
Steve Horsman
1954dfe349 Merge pull request #12071 from stevenhorsman/update-required-test-docker-and-stratovirt
ci: Remove stratovirt & docker tests from required
2025-11-11 21:19:25 +00:00
Zvonko Kaiser
76e4e6bc24 Merge pull request #12061 from Apokleos/correct-unexpected-cap
tests: Correct unexpected capability for policy failure test
2025-11-11 12:20:33 -05:00
Fabiano Fidêncio
d82eb8d0f1 ci: Drop docker tests
We have had those tests broken for months. It's time to get rid of
those.

NOTE that we could easily revert this commit and re-add those tests as
soon as we find someone to maintain and be responsible for such
integration.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-11 17:02:02 +01:00
stevenhorsman
8b5df4d360 ci: Remove stratovirt & docker tests from required
As stratovirt CI was removed in #12006 we should remove the
jobs from required.
Also the docker tests have been commented out for months, and
we are considering removing them, so clean this file up.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-11 15:38:51 +00:00
Steve Horsman
4b33000c56 Merge pull request #12067 from Apokleos/fix-guest-emptydir
runtime-rs: Fix several incorrect settings with guest empty dir.
2025-11-11 15:21:31 +00:00
Lukáš Doktor
ca91073d83 tools.kata-webhook: Add support for only-filter
Sometimes it's hard to enumerate all blacklisted namespaces; let's add a
regular-expression-based "only" filter to allow specifying namespaces that
should be mutated.
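
A minimal Rust sketch of such an "only" filter (the webhook itself is
not written in Rust; the pattern is hypothetical), assuming the regex
crate:

```
use regex::Regex;

fn main() {
    // Hypothetical user-supplied "only" pattern.
    let only = Regex::new(r"^(team-a|team-b)-").unwrap();
    for ns in ["team-a-dev", "kube-system", "team-b-prod"] {
        // Mutate a namespace only when it matches the pattern.
        println!("{ns}: mutate={}", only.is_match(ns));
    }
}
```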

Signed-off-by: Lukáš Doktor <ldoktor@redhat.com>
2025-11-11 15:21:15 +01:00
dependabot[bot]
281f69a540 build(deps): bump github.com/containerd/containerd in /src/runtime
Bumps [github.com/containerd/containerd](https://github.com/containerd/containerd) from 1.7.27 to 1.7.29.
- [Release notes](https://github.com/containerd/containerd/releases)
- [Changelog](https://github.com/containerd/containerd/blob/main/RELEASES.md)
- [Commits](https://github.com/containerd/containerd/compare/v1.7.27...v1.7.29)

---
updated-dependencies:
- dependency-name: github.com/containerd/containerd
  dependency-version: 1.7.29
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-11 14:23:47 +01:00
Paul Meyer
ec6896e96b genpolicy: remove non-existing field from CronJobSpec
There is no backoffLimit on CronJobSpec, also no additional fields.

Signed-off-by: Paul Meyer <katexochen0@gmail.com>
2025-11-11 11:12:48 +01:00
Paul Meyer
258aed3cd3 genpolicy: support full JobSpec
Based on https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.33/#job-v1-batch

The JOB_COMPLETION_INDEX env will be set if completionMode is "indexed".

Signed-off-by: Paul Meyer <katexochen0@gmail.com>
2025-11-11 11:12:48 +01:00
Paul Meyer
f0ffaa9a6b genpolicy: support full DeploymentSpec
The added fields are relevant only to the controller, so they should
not impact security and following aren't of interest for policies.

Adding according to https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.33/#deployment-v1-apps

Signed-off-by: Paul Meyer <katexochen0@gmail.com>
2025-11-11 11:07:18 +01:00
Alex Lyn
79d1a6ed8f runtime-rs: Correct the mount type for emptydir with local storage
The previous setting of Mount.type to `bind` is wrong; for local
storage, the type of the Mount should be `local`.

This commit corrects the type to "local".

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-11 17:09:33 +08:00
Alex Lyn
935ecf2765 runtime-rs: Fix disable_guest_empty_dir parameters order
The disable_guest_empty_dir parameter order is wrong, which causes
the bool value to be incorrect, producing a wrong result.

This commit aims to correct the parameters order.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-11 16:59:00 +08:00
Fabiano Fidêncio
9d6f6bac37 agent-ctl: Bump image-rs version
Bump to the same version of CoCo Guest Components.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-11 08:08:24 +01:00
Fabiano Fidêncio
a5629a5a6f versions: Bump coco-guest-components
Usual bump before a release that will be consumed by Confidential
Containers.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-11 08:08:24 +01:00
Fabiano Fidêncio
2d2b0de160 tests: kbs: Try to get the pod logs on deployment failure
As this helps immensely to figure out what went wrong with the
deployment.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-11 08:08:24 +01:00
Fabiano Fidêncio
58df06d90e versions: Bump Trustee
This is a bump pre-release, which brings several fixes and some
improvements related to initData, and NVIDIA's remote verifier.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-11 08:08:05 +01:00
Alex Lyn
c225cba0e6 tests: Correct unexpected capability for policy failure test
The test case designed to verify policy failures due to an "unexpected
capability" was misconfigured. It was using "CAP_SYS_CHROOT" as the
unexpected capability to be added.

This configuration was flawed for two main reasons:
1. Incorrect syntax: Kubernetes Pod specs expect capability names without
the "CAP_" prefix (e.g., "SYS_CHROOT", not "CAP_SYS_CHROOT").
This made the test case's premise incorrect from a K8s API perspective.
2. Part of default set: "SYS_CHROOT" is already included in the
`default_caps` list for a standard container. Therefore, adding it would
 not trigger a policy violation, defeating the purpose of the
 "unexpected capability" test.

Furthermore, a related issue was observed where a malformed capability
like "CAP_CAP_SYS_CHROOT" was being generated, causing parsing failures
in the `oci-spec-rs` library. This was a symptom of incorrect string
manipulation when handling capabilities.

This commit corrects the test by selecting "SYS_NICE" as the unexpected
capability. "SYS_NICE" is a more suitable choice because:
- It is a valid Linux capability.
- It is relatively harmless.
- It is **not** part of the default capability set defined in
  `genpolicy-settings.json`.

By using "SYS_NICE", the test now accurately simulates a scenario where
a Pod requests a legitimate but non-default capability, which the policy
(generated from a baseline Pod without this capability) should correctly
reject. This change fixes the test's logic and also resolves the
downstream `oci-spec-rs` parsing error by ensuring only valid capability
names are processed.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-11 14:06:30 +08:00
Alex Lyn
9aaf41a71b Merge pull request #11985 from Apokleos/policy-caps-rs
genpolicy: Correct caps matcher for runtime-rs
2025-11-11 11:08:11 +08:00
Alex Lyn
29fe46bc06 genpolicy: Correct caps matcher for runtime-rs
Detected a format mismatch in OCI Spec Capabilities fields between
`runtime-rs` (no `CAP_` prefix) and `runtime-go` (with `CAP_` prefix).

This introduces a normalization of caps in match_caps(p_caps, i_caps).
This ensures robust and consistent processing of Capabilities regardless
of whether the OCI Spec originates from `runtime-rs` or `runtime-go`.
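
A minimal sketch of one possible normalization (match_caps itself is
only named, not shown, here):

```
fn normalize_cap(cap: &str) -> String {
    // Strip an existing CAP_ prefix (if any), then re-add it once.
    format!("CAP_{}", cap.strip_prefix("CAP_").unwrap_or(cap))
}

fn main() {
    // runtime-rs style vs runtime-go style compare equal after normalization.
    assert_eq!(normalize_cap("SYS_NICE"), normalize_cap("CAP_SYS_NICE"));
    println!("{}", normalize_cap("SYS_NICE"));
}
```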

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-11 10:03:54 +08:00
Dan Mihai
f78584e868 Merge pull request #12048 from manuelh-dev/mahuber/bb-build
deploy: Improve busybox build
2025-11-10 11:32:07 -08:00
Alex Lyn
7423eb7a30 agent: Support both virtio-blk and virtio-scsi devices for initdata
Currently, the initdata module only detects virtio-blk devices
(/dev/vd*) when searching for the initdata block device. However,
when using virtio-scsi, the devices appear as /dev/sd* in the
guest, causing the initdata detection to fail.

This commit extends the device detection logic to support both
device types:
- virtio-blk devices: /dev/vda, /dev/vdb, etc.
- virtio-scsi devices: /dev/sda, /dev/sdb, etc.

This commit aims to address the issue of the initdata device not being
found when using virtio-scsi.
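A minimal sketch of the extended detection, with illustrative names (the
agent's real probing logic also opens and inspects the device):

```
/// Accept both virtio-blk (/dev/vda, ...) and virtio-scsi
/// (/dev/sda, ...) device names when probing for initdata.
fn is_initdata_candidate(dev_name: &str) -> bool {
    dev_name.starts_with("vd") || dev_name.starts_with("sd")
}

fn main() {
    assert!(is_initdata_candidate("vda"));
    assert!(is_initdata_candidate("sdb"));
    assert!(!is_initdata_candidate("nvme0n1"));
}
```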

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-10 18:03:23 +01:00
dependabot[bot]
f699f097f3 build(deps): bump github.com/opencontainers/runc in /src/runtime
Bumps [github.com/opencontainers/runc](https://github.com/opencontainers/runc) from 1.2.6 to 1.2.8.
- [Release notes](https://github.com/opencontainers/runc/releases)
- [Changelog](https://github.com/opencontainers/runc/blob/v1.2.8/CHANGELOG.md)
- [Commits](https://github.com/opencontainers/runc/compare/v1.2.6...v1.2.8)

---
updated-dependencies:
- dependency-name: github.com/opencontainers/runc
  dependency-version: 1.2.8
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-10 15:43:48 +01:00
Fabiano Fidêncio
92226d0a19 tests: nvidia: Be prepared for TDX
Thankfully there's only one piece that's still SNP specific (for the
supported TEEs).  Let's adjust it so we can have an easy and smooth
execution when adding a TDX CI machine.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-10 13:01:30 +01:00
Fabiano Fidêncio
4d314e8676 tests: nvidia: nims: Adjust to CC
There are several changes needed in order to get this test working with
CC, and yet we are still skipping it.

Basically, we need to:
* Pull an authenticated image inside the guest, which requires:
 * Using Trustee to release the credential
   * We still depend on a PR to be merged on Trustee side
     * https://github.com/confidential-containers/trustee/pull/1035
   * We still depend on a Trustee bump (including the PR above) on our
     side

Apart from those changes, I ended up "duplicating" the tests by adding a
"-tee" version of those, which already have:
* The proper kbs annotations set up
* Dropped host mounts
* Increased the memory needed

Last but not least, as "bats" probably means "being a terrible script",
I had to re-arrange a few things otherwise the tests would not even run
due to bats-isms that I am sincerely not able to pin-point.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-10 13:01:30 +01:00
Fabiano Fidêncio
8cedd96d54 tests: nvidia: k8s: Enforce experimental_force_guest_pull
We added the tests using virtio-9p as we knew it'd require incremental
changes to be able to use any kind of guest-pull method.

Now, as in the coming commits we'll be actually ensuring that guest-pull
works and is in use, we can enforce the experimental_force_guest_pull
usage for the nvidia cases.

Note: We're using experimental_force_guest_pull instead of
nydus-snapshotter due to stability concerns with the snapshotter.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-10 13:01:30 +01:00
Fabiano Fidêncio
464764c7e0 tests: nvidia: kbs: Ensure KBS_INGRESS=nodeport
I've missed doing this during the KBS deployment setup.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-10 13:01:30 +01:00
Manuel Huber
a5cd7235cb runtime: Align nvidia TEEs enable_annotations with TEEs
It was just missed when adding those configurations.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-10 13:01:30 +01:00
Fabiano Fidêncio
e85cf83573 k8s: tests: Fix default for EXPERIMENTAL_FORCE_GUEST_PULL
It takes either a shim name or "", but we were treating this (thankfully
only in this specific file) as a boolean.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-10 13:01:30 +01:00
Manuel Huber
8b39468b36 tests: nvidia: Logging for NIM
Adjust output to the setup_file and teardown_file behavior.
With this, we will be able to observe relevant logging rather than
adding to the output variable.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-11-10 13:01:30 +01:00
Fabiano Fidêncio
812191c1f3 tests: nvidia: Do not deploy NFD on nvidia-gpu cases
As it'll come from the GPU Operator for now.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-10 13:01:30 +01:00
Pavel Mores
74f9fdb11f runtime-rs: remove hardcoding of SEV physical address reduction
The previous commit enabled getting the physical address reduction from
the processor but just stored it for later use.  This commit adds handling
of the value to ProtectionDevice and enables the QEMU driver to use it.

Signed-off-by: Pavel Mores <pmores@redhat.com>
2025-11-10 13:01:03 +01:00
Pavel Mores
6f9178d290 runtime-rs: get SEV params using CPUID and store them in SevSnpDetails
This supplies the previously missing implementation of cbitpos
acquisition.  We also get the physical address reduction value from the
same source (the CPUID Fn8000_001F function).  This value has been
hardcoded to 1 so far, following the Go runtime example, but it's better
to get it from the processor.

Signed-off-by: Pavel Mores <pmores@redhat.com>
2025-11-10 13:01:03 +01:00
Greg Kurz
5810279edf Merge pull request #12008 from microsoft/saulparedes/allow_priv
webhook: allow privileged containers
2025-11-10 11:13:41 +01:00
Zvonko Kaiser
df58972d41 Merge pull request #12051 from microsoft/danmihai1/agent-version
agent: update version.rs when VERSION file changed
2025-11-09 20:34:58 -05:00
Fabiano Fidêncio
37d4eb0b77 ci: nvidia: Ensure K8S_TEST_HOST_TYPE=baremetal
So the proper cleanups are performed in case something goes awry in a
previous run.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-09 10:51:33 +01:00
Dan Mihai
7b10f4c72a agent: update version.rs when VERSION file changed
- version.rs gets generated from version.rs.in
- version.rs.in contains values read from VERSION
- so version.rs (and maybe other Agent files too) must be
  re-generated when the VERSION file changes
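As a cargo-native sketch of the same dependency (illustrative only; the
agent's generation step is wired through its build files, and the paths
here are assumptions):

```
// build.rs sketch: ask cargo to re-run generation whenever the inputs
// change. Paths are illustrative, not the agent's actual layout.
fn main() {
    println!("cargo:rerun-if-changed=../../VERSION");
    println!("cargo:rerun-if-changed=src/version.rs.in");
}
```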

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-11-08 17:53:09 +00:00
Alex Lyn
83b0a59215 Merge pull request #12046 from Apokleos/disable-guest-emptydir
Disable guest emptydir
2025-11-08 11:54:15 +08:00
Dan Mihai
df7ee2dd38 ci: k8s: AUTO_GENERATE_POLICY for cbl-mariner
Auto-generate policy on cbl-mariner Hosts if the user didn't
specify an AUTO_GENERATE_POLICY value.

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-11-08 00:00:09 +01:00
Dan Mihai
53acb74f26 genpolicy: adapt to new AKS pause container behavior
The new image reference has changed to mcr.microsoft.com/oss/v2/kubernetes/pause:3.6
from mcr.microsoft.com/oss/kubernetes/pause:3.6.

The new image uses UID=0, GID=0 by default, while the older image had
UID=65535, GID=65535.

There is a new pause_container_id_policy field in genpolicy-settings.json, informing
genpolicy about the way AdditionalGids gets updated - "v1" for the older behavior
and "v2" for the newer AKS version:
- When using v1, the default value of AdditionalGids is {65535}.
- When using v2, the default value of AdditionalGids is {}.

UID=65535 and GID=65535 are still hard-coded by default in genpolicy-settings.json.
We might be able to remove/ignore these fields in the future, if we stop relying
on policy::KataSpec::get_process_fields to use these fields.

A new CI function adapt_common_policy_settings_for_aks() changes the pause container
UID, GID, pause_container_id_policy, and image ref settings values when testing on
AKS Hosts - i.e., when testing coco-dev or mariner Hosts.

The genpolicy workarounds for the unexpected behavior with guest pull enabled have
been improved to use the current container's GID instead of hard-coding GID=0 as the
guest pull default. Also, AdditionalGids now gets updated when the current
container's GID changes, instead of always at the very end of
policy::AgentPolicy::get_container_process(), where the relevant evolution
of the GID value was no longer available.
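A sketch of the v1/v2 switch, with illustrative names rather than the
actual genpolicy types:

```
/// Default AdditionalGids for the pause container, keyed off the new
/// pause_container_id_policy setting described above.
fn default_additional_gids(pause_container_id_policy: &str) -> Vec<u32> {
    match pause_container_id_policy {
        "v1" => vec![65535], // older AKS pause image behavior
        _ => Vec::new(),     // "v2": AdditionalGids defaults to {}
    }
}
```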

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-11-08 00:00:09 +01:00
Dan Mihai
1f784bb770 genpolicy: improve policy generation comments
Make it easier to understand the source of the UID/GID/AdditionalGids
values from the container in the auto-generated policy.

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-11-08 00:00:09 +01:00
Dan Mihai
969b8e0fb8 genpolicy: more detailed UID/GID debug logs
Add more details to code paths handling UID/GID values, for easier
debugging.

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-11-08 00:00:09 +01:00
Dan Mihai
cacd37ee6e tests: genpolicy: restore test settings for non-Coco configMap
These settings got broken recently because the non-CoCo tests were
disabled for unrelated reasons.

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-11-08 00:00:09 +01:00
Manuel Huber
caff6df827 deploy: Improve busybox build
Parallelize busybox builds to build a bit faster, and create the
build directory prior to Docker execution, which, in my
environment, helps with permission issues when building busybox
without the kata-containers/build directory existing beforehand.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-11-07 10:09:57 -08:00
Alex Lyn
23024876b2 runtime-rs: Use the configurable disable_guest_empty_dir
Correct the hardcoded value of disable_guest_empty_dir; instead, use
the real value, which comes from the configuration.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-07 19:52:11 +08:00
Alex Lyn
382924bdf3 kata-sys-util: Introduce a sandbox annotation for disable guest emptydir
A sandbox annotation that determines whether Kubernetes emptyDir
mounts should be created on the guest filesystem.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-07 19:48:42 +08:00
Alex Lyn
720a229579 kata-types: Introduce disable guest emptydir flag
It determines whether Kubernetes emptyDir mounts should be created on
the guest filesystem. If enabled, the runtime will not create Kubernetes
emptyDir mounts on the guest filesystem. Instead, emptyDir mounts will
be created on the host and shared via virtio-fs.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-07 19:45:55 +08:00
Fabiano Fidêncio
03e06fdf4d tests: nvidia: Deploy Trustee
Let's ensure Trustee is deployed, as some of the tests rely on images
that live behind authentication. /o\

The approach taken here to deploy Trustee is exactly the same as the one
taken for the other CoCo tests, apart from an env var passed to ensure
we're using the NVIDIA remote verifier (which will come in handy very,
very soon).

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-07 12:32:11 +01:00
Pavel Mores
841fee28da runtime-rs: add a helper to run external command and capture its output
This isn't really related to remote hypervisor though it was useful for
its debugging.  It's a small helper I've been using regularly during
development for quite some time that I think might be useful more broadly.
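A minimal sketch of such a helper, assuming std::process::Command (not
the exact runtime-rs code):

```
use std::process::Command;

/// Run `cmd` with `args`, capture stdout, and surface failures with
/// some context instead of a naked error.
fn run_and_capture(cmd: &str, args: &[&str]) -> Result<String, String> {
    let output = Command::new(cmd)
        .args(args)
        .output()
        .map_err(|e| format!("failed to spawn {cmd}: {e}"))?;
    if !output.status.success() {
        return Err(format!("{cmd} exited with {}", output.status));
    }
    String::from_utf8(output.stdout).map_err(|e| e.to_string())
}
```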

Signed-off-by: Pavel Mores <pmores@redhat.com>
2025-11-07 10:49:14 +01:00
Pavel Mores
72c704b287 runtime-rs: make error reporting for CreateVM a bit more explicit
A naked ttrpc error with no context turns out to be rather hard to
understand, or even notice, in the log.

Signed-off-by: Pavel Mores <pmores@redhat.com>
2025-11-07 10:49:14 +01:00
Pavel Mores
45d8141edc runtime-rs: remote hv needs neither image nor initrd specified in config
The remote hypervisor launches no VM, it just instructs the Cloud API
Adaptor to do so, therefore it has no need for an image or initrd to boot
from and should be exempt from the mandate for one or the other to be
specified.

Signed-off-by: Pavel Mores <pmores@redhat.com>
2025-11-07 10:49:14 +01:00
Pavel Mores
80ef102a00 runtime-rs: fix scoping of the remote hv Hypervisor service
The go runtime's .proto file - which is also used by the Cloud API
Adaptor - puts the Hypervisor service into the "hypervisor" package.
runtime-rs has to do the same to avoid an "unimplemented" error.

Signed-off-by: Pavel Mores <pmores@redhat.com>
2025-11-07 10:49:14 +01:00
Alex Lyn
d5e2071869 Merge pull request #11921 from Apokleos/enhance-copyfile2
runtime-rs: Add support LocalStorage for emptyDir within nontee cases
2025-11-07 16:58:39 +08:00
Fabiano Fidêncio
a591cda466 gatekeeper: Adjust the nvidia gpu test name
With the change made to the matrix when the CC GPU runner was added,
there was a change in the job name (@sprt saw that coming, but I
didn't).

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-06 16:28:33 +01:00
Manuel Huber
c6dc176a03 tests: nvidia: cc: Enable NIMs tests
Same deal as the previous commit, just enabling the tests here, with the
same list of improvements that we will need to go through in order to
get this working in a perfect way.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-06 16:28:33 +01:00
Manuel Huber
8ca77f2655 tests: nvidia: cc: Run CUDA vectorAdd tests on CC mode
While the primary goal of this change is to detect regressions to the
NVIDIA SNP GPU scenario, various improvements to reflect a more
realistic CC setting are planned in subsequent changes, such as:

* moving away from the overlayfs snapshotter
* disabling filesystem sharing
* applying a pod security policy
* activating the GPUs only after attestation
* using a refined approach for GPU cold-plugging without requiring
  annotations
* revisiting pod timeout and overhead parameters (the podOverhead value
  was increased due to CUDA vectorAdd requiring about 6Gi of
  podOverhead, and the inference and embedqa tests requiring at least
  12Gi and 14Gi of podOverhead, respectively, to run without invoking
  the host's oom-killer. We will revisit this aspect after addressing
  the first two points above.)

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-06 16:28:33 +01:00
Manuel Huber
25ce0afd52 kata-deploy: Allow the CDI annotation for CC GPU cases
For the nvidia-gpu-snp and nvidia-gpu-tdx cases we must configure
containerd to allow the CDI annotation to be passed down.

This solution may become obsolete soon enough, but the cleanest way to
have it properly working is by adding it here (even if we remove it
before the next release).

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-06 16:28:33 +01:00
Manuel Huber
c91edf884b runtimeclasses: nvidia: Bump TEE podOverhead
We've noticed that, as more RAM is needed to run the CC tests, we
also need to update the podOverhead of the NVIDIA CC runtime classes to
avoid getting OOM-killed.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-06 16:28:33 +01:00
Fupan Li
bfe8da6c8a tests: disable the qemu-runtime-rs cpu hotplug test
Since there's something wrong with the CPU hotplug
on qemu-runtime-rs, disable this test temporarily.

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2025-11-06 21:37:01 +08:00
Fupan Li
3b1bfea609 runtime-rs: fix the issue of hot-unplug memory smaller
It should do nothing instead of returning an error when hot-unplugging
memory to a size smaller than the statically plugged memory size.
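A sketch of the intended behavior, with illustrative names:

```
/// Shrinking below the statically plugged size is a no-op rather than
/// an error: there is no hotplugged memory left to remove.
fn hot_unplug_memory(requested_mb: u64, static_plugged_mb: u64) -> Result<(), String> {
    if requested_mb < static_plugged_mb {
        return Ok(());
    }
    // ... perform the actual unplug down to requested_mb ...
    Ok(())
}
```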

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2025-11-06 18:19:55 +08:00
Fupan Li
aac2a37ff5 runtime-rs: enable pselect6 syscall for dragonball seccomp
Since nerdctl's network hook calls the pselect6 syscall via
xtables-nft-multi, we'd better add it to the seccomp allowlist.

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2025-11-06 11:17:57 +01:00
Hyounggyu Choi
ff429072b6 Merge pull request #11924 from BbolroC/fix-static-checks-actionspz
ci: Fix failing static checks to enable IBM actionspz - Z specific
2025-11-06 09:04:04 +01:00
Zvonko Kaiser
fce6a75899 Merge pull request #12027 from fidencio/topic/kata-deploy-make-ALLOWED_HYPERVISOR_ANNOTATIONS-per-arch
kata-deploy: Add per arch ALLOWED_HYPERVISOR_ANNOTATIONS
2025-11-05 18:20:14 -05:00
Manuel Huber
d8953f67c5 ci: Onboard another NVIDIA machine
Let's add a new NVIDIA machine, which later on will be used for CC
related tests.

For now, the current tests are skipped on the CC-capable machine.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-05 23:23:08 +01:00
Fabiano Fidêncio
b2ee64a2d6 kata-deploy: scripts: Ensure we don't add duplicated values
Let's now make sure that we don't add duplicated values to any of our
entries, making the script as sane as possible for sequential runs.

Vibed with Cursor's help!

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-05 19:48:24 +01:00
Fabiano Fidêncio
78ae79d153 kata-deploy: scripts: Add helper functions to avoid duplicated items
Let's add some helper functions, not yet used, to avoid adding
duplicated items.

This idea is an expansion of Choi's idea to avoid setting duplicated
items, and it'll help on making the whole script idempotent on
sequential runs.

Vibed with Cursor's help!

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-05 19:48:24 +01:00
Fabiano Fidêncio
f773368d93 kata-deploy: Add per arch ALLOWED_HYPERVISOR_ANNOTATIONS
I know, this is not simplifying things much for now, but it has a good
intent behind it and will serve as the base for making the
kata-deploy helm chart more user friendly.

With that said, let's add ALLOWED_HYPERVISOR_ANNOTATIONS per arch, while
adding support to set something like "qemu:foo,bar clh:bar foobar
barfoo". Why? Because in the future we'll have a better way to set this
per shim (and the shim is per arch ...).

More details of what we'll do in the future are being discussed here:
https://github.com/kata-containers/kata-containers/issues/12024

Anyways, the variables are **DELIBERATELY** not exposed to the chart for
now, as those will be later on when addressing the issue mentioned
above.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-05 19:45:34 +01:00
Fabiano Fidêncio
66e133e096 kata-deploy: Add missing runtimeClasses
When the runtimeClasses were added, as part of 7cfa826804, the
firecracker runtimeClass ended up missing from the dictionary.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-05 19:07:28 +01:00
Anton Ippolitov
23c46b8a00 docs: Update devmapper containerd plugin name
The Firecracker installation docs had an outdated containerd configuration for the devmapper plugin.
This commit updates the instructions so that they are compatible with more recent versions of containerd.

Signed-off-by: Anton Ippolitov <anton.ippolitov@datadoghq.com>
2025-11-05 18:42:29 +01:00
Fabiano Fidêncio
ace9cf942d tests: guest-pull: Fix names
When added, I've mistakenly used the wrong test-type name, which is now
fixed and should be enough to trigger the tests correctly.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-05 18:21:48 +01:00
Hyounggyu Choi
4ee2037974 GHA: Run runtime tests on self-hosted runners for P/Z
On IBM actionspz P/Z runners, the following error was observed during
runtime tests:

```
host system doesn't support vsock: stat /dev/vhost-vsock: no such file or directory
```

Since loading the vsock module on the fly is not permitted, this commit
moves the runtime tests back to self-hosted runners for P/Z.

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2025-11-05 16:35:04 +00:00
Hyounggyu Choi
32da38273a agent/tests: Skip if kernel module is not found
On IBM actionspz Z runners, the following error occurs when running
`modprobe`:

```
modprobe: FATAL: Module bridge not found in directory /lib/modules/6.8.0-85-generic
```

Additionally, there are no files under `/lib/modules`, for example:

```
total 0
drwxr-xr-x 1 root root    0 Aug  5 13:09 .
drwxr-xr-x 1 root root 2.0K Oct  1 22:59 ..
```

This commit skips the `test_load_kernel_module` test if the module is
not found or if running `modprobe` is not permitted.

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2025-11-05 16:35:04 +00:00
Hyounggyu Choi
075de4dc62 agent/tests: Skip test if error is EACCES (permission denied)
On IBM actionspz Z runners, write operations on network interfaces
are not allowed, even for the root user.
This commit skips the `add_update_addresses` test if the operation
fails with EACCES (-13, permission denied).

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2025-11-05 16:35:04 +00:00
Hyounggyu Choi
3f84b623a3 agent/tests: Skip RNG reseeding test on restricted environments
On IBM actionspz Z runners, the ioctl system call is not allowed even
for the root user. There is likely an additional security mechanism
(such as AppArmor or seccomp) in place on Ubuntu runners.
This commit introduces a new helper, `is_permission_error()`,
which skips the test if ioctl operations in `reseed_rng()` are not
permitted.
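A minimal sketch of the idea behind such a helper; the real one checks
the ioctl result, while this version leans on std's error kinds (EACCES
and EPERM both map to PermissionDenied):

```
use std::io;

/// Treat permission failures as "not permitted in this environment",
/// so the test can be skipped rather than failed.
fn is_permission_error(err: &io::Error) -> bool {
    err.kind() == io::ErrorKind::PermissionDenied
}
```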

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2025-11-05 16:35:04 +00:00
Hyounggyu Choi
c2abc4da34 agent/tests: Use detected filesystem for baremounted points
The IBM actionspz Z runners mount /dev as tmpfs, while other systems
use devtmpfs. This difference causes an assertion failure for
test_already_baremounted.
This commit sets the detected filesystem for bare-mounted points
as the expected value.

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2025-11-05 16:35:04 +00:00
Hyounggyu Choi
faa048893d agent/tests: Handle error messages differently based on root filesystem
The root filesystem for IBM actionspz Z runners is `btrfs` instead of `ext4`.
The error message differs when an unprivileged user tries to perform a bind mount.
This commit adjusts the handling of error messages based on the detected root
filesystem type.

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2025-11-05 16:35:04 +00:00
Fupan Li
0df6c795d8 runtime-rs: disable the default static resource management
Since qemu & cloud-hypervisor now support CPU & memory hotplug,
disable the static resource management for qemu and cloud-hypervisor
by default.

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2025-11-05 16:59:13 +01:00
Fupan Li
02ecab40e4 tests: disable the cpu hotplug test for coco dev runtime
Since qemu-coco-dev-runtime-rs and qemu-coco-dev have disabled CPU &
memory hotplug by enabling static_sandbox_resource_mgmt, we should
disable the cpu hotplug test for those two runtimes.

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2025-11-05 16:59:13 +01:00
Fupan Li
1fc05491a2 tests: enable the cpu hotplug test for dragonball etc
Since the qemu, cloud-hypervisor and dragonball had supported the
cpu hotplug on runtime-rs, thus enable the cpu hotplug test in CI.

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2025-11-05 16:59:13 +01:00
Fabiano Fidêncio
0a0de4e6e3 Revert "tests: Do not enable NFD on s390x"
This reverts commit c75a46d17f, as NFD now
publishes an s390x image (and also a ppc64le one).

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-05 16:06:33 +01:00
Alex Lyn
8f0dd4c44b runtime-rs: Introduce disable_guest_empty_dir flag
This commit introduces the configuration flag `disable_guest_empty_dir`
to control the placement of Kubernetes emptyDir volumes.

By default, the value is set to `false`, maintaining the current
behavior of creating emptyDirs within the guest VM.

When set to `true`, emptyDirs will be created on the host filesystem.
This is essential for scenarios where users need to share data between
the host and the guest VM via an emptyDir.
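A sketch of the flag's shape; the struct name is an assumption, not the
actual kata-types code:

```
#[derive(Debug, Default)]
struct SharedFsConfig {
    /// false (default): create emptyDirs inside the guest VM.
    /// true: create them on the host and share them via virtio-fs.
    disable_guest_empty_dir: bool,
}
```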

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-05 15:05:45 +08:00
Alex Lyn
205c3dac44 runtime-rs: Add rprivate and rw options for memory emptyDir mounts
When handling a memory-based emptyDir, the runtime creates a tmpfs
mount inside the guest VM. The previous implementation supported only
the "rbind" mount option, which does not explicitly guarantee
the desired mount propagation behavior.

This commit hardens the mounting process by explicitly adding the
`rprivate` and `rw` mount flags.
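A minimal sketch of the resulting option set:

```
/// Options for the tmpfs bind mount backing a memory emptyDir.
/// "rbind" alone leaves propagation implicit; "rprivate" pins it and
/// "rw" makes the writability intent explicit.
fn memory_empty_dir_options() -> Vec<String> {
    ["rbind", "rprivate", "rw"].iter().map(|s| s.to_string()).collect()
}
```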

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-05 15:05:45 +08:00
Alex Lyn
fac9c795c6 runtime-rs: Add 'local' volume to support k8s emptyDir
This commit introduces the 'local' volume, which is specifically
designed to create and manage Kubernetes emptyDir volumes directly
within the VM's sandbox directory.

The core functionality ensures that local volume can be handled
correctly in handle volume procedure.

This capability is essential for allowing containers to leverage the
storage backend for shared volumes.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-05 15:05:45 +08:00
Alex Lyn
1696968eb1 runtime-rs: Implement 'local' storage type for k8s emptyDir volumes
This commit implements the new 'local' storage type, enabling Kubernetes
emptyDir volumes to be created and managed directly inside the Kata VM
(in the sandbox directory).

The 'local' type instructs the kata-agent to provision the empty
directory within the VM.

This approach allows containers to share storage inside VM, Specially
useful within CoCo emptyDir scenarios.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-05 15:05:22 +08:00
Alex Lyn
b58a53bfa4 kata-sys-util: Improve handling of Kubernetes emptyDir volumes
Separated the checks for tmpfs and disk-based emptyDirs from an
`if-else if` block into two distinct `if` statements. This clarifies
the logic by treating each volume type detection as an independent task.

Additionally, updated the type for disk-based emptyDirs to the more
semantically accurate `KATA_K8S_LOCAL_STORAGE_TYPE`. This allows for
more specific handling downstream, distinguishing them from generic
host path mounts.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-05 14:59:21 +08:00
Alex Lyn
c39c6f1ae4 kata-sys-utils: Correct the judgement of logic of host emptyDir
In fact, with the previous logic, the emptyDir is usually not found in
the proc mounts, and so the previous implementation failed.

The fix is based on the related implementation within runtime-go.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-05 14:59:21 +08:00
Alex Lyn
f278616bf7 kata-types: Introduce a new storage type of "local"
This introduces a new storage type: local. The local storage type will
tell kata-agent to create an empty directory with the LocalStorage handler
in the sandbox directory within the VM.

And it also makes it align with runtime-go `KataLocalDevType = "local"`.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-05 14:59:21 +08:00
Manuel Huber
1561d7fbba runtime: Clear outer CDI annotations
Pod annotations from the outer runtime are being used for cold-plugging
CDI devices. We need to ensure that these annotations don't leak into
the inner runtime for which specific container (sibling) annotations
are being created. Without this change, the inner runtime receives both
annotations, leading to failing CDI injection as an outer runtime
annotation observed in the guest translates to an unresolvable CDI
device, for example, cdi.k8s.io/gpu: "nvidia.com/pgpu=0".

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-11-04 23:18:00 +01:00
Fabiano Fidêncio
1dfbb14093 tests: Stop testing on stratovirt
Stratovirt has been failing for a considerable amount of time, with no
sign of someone watching it and being actively working on a fix.

With this we also stop building and shipping stratovirt as part of our
release as we cannot test it.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-04 10:22:46 +01:00
Fabiano Fidêncio
02f47d3f18 helm: uninstall: Take nodeSelector into consideration
As we're already doing for the install part, but this bit was missed
during review.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-04 09:29:35 +01:00
Fabiano Fidêncio
5b01eaf929 tests: Align kata-deploy helm's uninstall
Let's use the same method both on the kata-deploy and k8s tests.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-04 09:29:35 +01:00
Fabiano Fidêncio
4293cdf846 tests: Add stability tests for experimental-force-guest-pull
A few weeks ago we tested nydus-snapshotter with this approach, and
we DID find issues with it.

Now, let's also test this with `experimental_force_guest_pull`.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-04 09:02:19 +01:00
Dan Mihai
6a4c336ca0 Merge pull request #12016 from microsoft/danmihai1/early-wait-abort
tests: k8s: reduce test time for unexpected CreateContainerRequest errors
2025-11-03 12:04:56 -08:00
Fabiano Fidêncio
3107533953 tests: Adjust to runtimeClass creation by the chart
It's just a follow-up on the previous commit where we move away from the
runtimeClass creation inside the script, and instead we do it using the
chart itself.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-03 17:32:18 +01:00
Fabiano Fidêncio
12f3b206eb Revert "kata-deploy: Allow setting the default runtime class name"
This reverts commit be05e1370c, which is
not a problem as we never released such an option.

 Conflicts:
	tools/packaging/kata-deploy/helm-chart/README.md

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-03 17:32:18 +01:00
Fabiano Fidêncio
7cfa826804 kata-deploy: Let helm deal with runtimeClass creation
We had this logic inside the script when we didn't use the helm chart.
However, this only makes the shim script more convoluted for no reason.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-03 17:32:18 +01:00
Fabiano Fidêncio
14039c9089 golang: Update to 1.24.9
In order to fix:
```

=== Running govulncheck on containerd-shim-kata-v2 ===
 Vulnerabilities found in containerd-shim-kata-v2:
=== Symbol Results ===

Vulnerability #1: GO-2025-4015
    Excessive CPU consumption in Reader.ReadResponse in net/textproto
  More info: https://pkg.go.dev/vuln/GO-2025-4015
  Standard library
    Found in: net/textproto@go1.24.6
    Fixed in: net/textproto@go1.24.8
    Vulnerable symbols found:
      #1: textproto.Reader.ReadResponse

Vulnerability #2: GO-2025-4014
    Unbounded allocation when parsing GNU sparse map in archive/tar
  More info: https://pkg.go.dev/vuln/GO-2025-4014
  Standard library
    Found in: archive/tar@go1.24.6
    Fixed in: archive/tar@go1.24.8
    Vulnerable symbols found:
      #1: tar.Reader.Next

Vulnerability #3: GO-2025-4013
    Panic when validating certificates with DSA public keys in crypto/x509
  More info: https://pkg.go.dev/vuln/GO-2025-4013
  Standard library
    Found in: crypto/x509@go1.24.6
    Fixed in: crypto/x509@go1.24.8
    Vulnerable symbols found:
      #1: x509.Certificate.Verify
      #2: x509.Certificate.Verify

Vulnerability #4: GO-2025-4012
    Lack of limit when parsing cookies can cause memory exhaustion in net/http
  More info: https://pkg.go.dev/vuln/GO-2025-4012
  Standard library
    Found in: net/http@go1.24.6
    Fixed in: net/http@go1.24.8
    Vulnerable symbols found:
      #1: http.Client.Do
      #2: http.Client.Get
      #3: http.Client.Head
      #4: http.Client.Post
      #5: http.Client.PostForm
      Use '-show traces' to see the other 9 found symbols

Vulnerability #5: GO-2025-4011
    Parsing DER payload can cause memory exhaustion in encoding/asn1
  More info: https://pkg.go.dev/vuln/GO-2025-4011
  Standard library
    Found in: encoding/asn1@go1.24.6
    Fixed in: encoding/asn1@go1.24.8
    Vulnerable symbols found:
      #1: asn1.Unmarshal
      #2: asn1.UnmarshalWithParams

Vulnerability #6: GO-2025-4010
    Insufficient validation of bracketed IPv6 hostnames in net/url
  More info: https://pkg.go.dev/vuln/GO-2025-4010
  Standard library
    Found in: net/url@go1.24.6
    Fixed in: net/url@go1.24.8
    Vulnerable symbols found:
      #1: url.JoinPath
      #2: url.Parse
      #3: url.ParseRequestURI
      #4: url.URL.Parse
      #5: url.URL.UnmarshalBinary

Vulnerability #7: GO-2025-4009
    Quadratic complexity when parsing some invalid inputs in encoding/pem
  More info: https://pkg.go.dev/vuln/GO-2025-4009
  Standard library
    Found in: encoding/pem@go1.24.6
    Fixed in: encoding/pem@go1.24.8
    Vulnerable symbols found:
      #1: pem.Decode

Vulnerability #8: GO-2025-4008
    ALPN negotiation error contains attacker controlled information in
    crypto/tls
  More info: https://pkg.go.dev/vuln/GO-2025-4008
  Standard library
    Found in: crypto/tls@go1.24.6
    Fixed in: crypto/tls@go1.24.8
    Vulnerable symbols found:
      #1: tls.Conn.Handshake
      #2: tls.Conn.HandshakeContext
      #3: tls.Conn.Read
      #4: tls.Conn.Write
      #5: tls.Dial
      Use '-show traces' to see the other 4 found symbols

Vulnerability #9: GO-2025-4007
    Quadratic complexity when checking name constraints in crypto/x509
  More info: https://pkg.go.dev/vuln/GO-2025-4007
  Standard library
    Found in: crypto/x509@go1.24.6
    Fixed in: crypto/x509@go1.24.9
    Vulnerable symbols found:
      #1: x509.CertPool.AppendCertsFromPEM
      #2: x509.Certificate.CheckCRLSignature
      #3: x509.Certificate.CheckSignature
      #4: x509.Certificate.CheckSignatureFrom
      #5: x509.Certificate.CreateCRL
      Use '-show traces' to see the other 27 found symbols

Vulnerability #10: GO-2025-4006
    Excessive CPU consumption in ParseAddress in net/mail
  More info: https://pkg.go.dev/vuln/GO-2025-4006
  Standard library
    Found in: net/mail@go1.24.6
    Fixed in: net/mail@go1.24.8
    Vulnerable symbols found:
      #1: mail.AddressParser.Parse
      #2: mail.AddressParser.ParseList
      #3: mail.Header.AddressList
      #4: mail.ParseAddress
      #5: mail.ParseAddressList
```

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-03 16:57:22 +01:00
Dan Mihai
c563ee99fa tests: policy-rc: detect create container errors early
During the ${wait_time} for an expected condition, if
CreateContainerRequest was NOT expected to fail: detect possible
CreateContainerRequest failures early and abort the wait.

For example, before this change:

not ok 1 Successful replication controller with auto-generated policy in 123335ms
ok 2 Policy failure: unexpected container command in 14601ms
ok 3 Policy failure: unexpected volume mountPath in 14443ms
ok 4 Policy failure: unexpected host device mapping in 14515ms
ok 5 Policy failure: unexpected securityContext.allowPrivilegeEscalation in 14485ms
ok 6 Policy failure: unexpected capability in 14382ms
ok 7 Policy failure: unexpected UID = 1000 in 14578ms

After this change:

not ok 1 Successful replication controller with auto-generated policy in 17108ms
ok 2 Policy failure: unexpected container command in 14427ms
ok 3 Policy failure: unexpected volume mountPath in 14636ms
ok 4 Policy failure: unexpected host device mapping in 14493ms
ok 5 Policy failure: unexpected securityContext.allowPrivilegeEscalation in 14554ms
ok 6 Policy failure: unexpected capability in 15087ms
ok 7 Policy failure: unexpected UID = 1000 in 14371ms

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-11-03 15:55:55 +00:00
Dan Mihai
319400dc0d tests: policy-pvc: detect create container errors early
During the ${wait_time} for an expected condition, if
CreateContainerRequest was NOT expected to fail: detect possible
CreateContainerRequest failures early and abort the wait.

For example, before this change:

not ok 1 Successful pod with auto-generated policy in 94852ms
ok 2 Policy failure: unexpected device mount in 17807ms

After this change:

not ok 1 Successful pod with auto-generated policy in 35194ms
ok 2 Policy failure: unexpected device mount in 21355ms

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-11-03 15:55:55 +00:00
Dan Mihai
1914fcb812 tests: policy-log: detect create container errors early
During the ${wait_time} for an expected condition, if
CreateContainerRequest was NOT expected to fail: detect possible
CreateContainerRequest failures early and abort the wait.

For example, before this change:

not ok 1 Logs empty when ReadStreamRequest is blocked in 102257ms

After this change:

not ok 1 Logs empty when ReadStreamRequest is blocked in 17339ms

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-11-03 15:55:55 +00:00
Dan Mihai
a0bd9e02ca tests: policy-job: detect create container errors early
During the ${wait_time} for an expected condition, if
CreateContainerRequest was NOT expected to fail: detect possible
CreateContainerRequest failures early and abort the wait.

For example, before this change:

not ok 1 Successful job with auto-generated policy in 107111ms
ok 2 Policy failure: unexpected environment variable in 7920ms
ok 3 Policy failure: unexpected command line argument in 7874ms
ok 4 Policy failure: unexpected emptyDir volume in 7823ms
ok 5 Policy failure: unexpected projected volume in 7812ms
ok 6 Policy failure: unexpected readOnlyRootFilesystem in 7903ms
ok 7 Policy failure: unexpected UID = 222 in 7720ms

After this change:

not ok 1 Successful job with auto-generated policy in 10271ms
ok 2 Policy failure: unexpected environment variable in 8018ms
ok 3 Policy failure: unexpected command line argument in 7886ms
ok 4 Policy failure: unexpected emptyDir volume in 7621ms
ok 5 Policy failure: unexpected projected volume in 7843ms
ok 6 Policy failure: unexpected readOnlyRootFilesystem in 7632ms
ok 7 Policy failure: unexpected UID = 222 in 7619ms

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-11-03 15:55:55 +00:00
Dan Mihai
992c91371c tests: policy-deployment-sc: detect create container errors early
During the ${wait_time} for an expected condition, if
CreateContainerRequest was NOT expected to fail: detect possible
CreateContainerRequest failures early and abort the wait.

For example, before this change:

ok 1 Successful sc deployment with auto-generated policy and container image volumes in 14769ms
ok 2 Successful sc with fsGroup/supplementalGroup deployment with auto-generated policy and container image volumes in 8384ms
not ok 3 Successful sc deployment with security context choosing another valid user in 136149ms
ok 4 Successful layered sc deployment with auto-generated policy and container image volumes in 8862ms
ok 5 Policy failure: unexpected GID = 0 for layered securityContext deployment in 7941ms
ok 6 Policy failure: malicious root group added via supplementalGroups deployment in 11612ms

After:

ok 1 Successful sc deployment with auto-generated policy and container image volumes in 15230ms
ok 2 Successful sc with fsGroup/supplementalGroup deployment with auto-generated policy and container image volumes in 9364ms
not ok 3 Successful sc deployment with security context choosing another valid user in 11060ms
ok 4 Successful layered sc deployment with auto-generated policy and container image volumes in 9124ms
ok 5 Policy failure: unexpected GID = 0 for layered securityContext deployment in 7919ms
ok 6 Policy failure: malicious root group added via supplementalGroups deployment in 11666ms

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-11-03 15:55:55 +00:00
Dan Mihai
704ee76f1e tests: policy-deployment-sc: reduced redundancy
Call common function instead of copy/paste of three commands.

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-11-03 15:55:55 +00:00
Dan Mihai
2cafb10a6a tests: policy-pod: detect create container errors early
During the ${wait_time} for an expected condition, if
CreateContainerRequest was NOT expected to fail: detect possible
CreateContainerRequest failures early and abort the wait.

For example, before this change:

not ok 1 Successful pod with auto-generated policy in 110801ms
not ok 2 Able to read env variables sourced from configmap using envFrom in 94104ms
not ok 3 Successful pod with auto-generated policy and runtimeClassName filter in 95838ms
not ok 4 Successful pod with auto-generated policy and custom layers cache path in 110712ms
ok 5 Policy failure: unexpected container image in 8113ms
ok 6 Policy failure: unexpected privileged security context in 7943ms
ok 7 Policy failure: unexpected terminationMessagePath in 11530ms
ok 8 Policy failure: unexpected hostPath volume mount in 7970ms
ok 9 Policy failure: unexpected config map in 7933ms
not ok 10 Policy failure: unexpected lifecycle.postStart.exec.command in 112677ms
ok 11 RuntimeClassName filter: no policy in 2302ms
not ok 12 ExecProcessRequest tests in 93946ms
not ok 13 Successful pod: runAsUser having the same value as the UID from the container image in 94003ms
ok 14 Policy failure: unexpected UID = 0 in 8016ms
ok 15 Policy failure: unexpected UID = 1234 in 7850ms

After:

not ok 1 Successful pod with auto-generated policy in 12182ms
not ok 2 Able to read env variables sourced from configmap using envFrom in 10121ms
not ok 3 Successful pod with auto-generated policy and runtimeClassName filter in 11738ms
not ok 4 Successful pod with auto-generated policy and custom layers cache path in 26592ms
ok 5 Policy failure: unexpected container image in 7742ms
ok 6 Policy failure: unexpected privileged security context in 7949ms
ok 7 Policy failure: unexpected terminationMessagePath in 7789ms
ok 8 Policy failure: unexpected hostPath volume mount in 7887ms
ok 9 Policy failure: unexpected config map in 7818ms
not ok 10 Policy failure: unexpected lifecycle.postStart.exec.command in 9120ms
ok 11 RuntimeClassName filter: no policy in 2081ms
not ok 12 ExecProcessRequest tests in 9883ms
not ok 13 Successful pod: runAsUser having the same value as the UID from the container image in 9870ms
ok 14 Policy failure: unexpected UID = 0 in 11161ms
ok 15 Policy failure: unexpected UID = 1234 in 7814ms

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-11-03 15:55:55 +00:00
Alex Lyn
897ecfb503 Merge pull request #12014 from fidencio/topic/release-ensure-helm-dependencies-update
scripts: release: Run helm dependencies update
2025-11-03 16:34:17 +08:00
Fabiano Fidêncio
c539a9e90e tests: k8s: parallel: Increase timeout
We've seen a few cases where we fail the test due to timeout and when we
print the pods we just see that they've been created.

With that in mind, let's just increase the timeout a little bit.

Example:
```
not ok 1 Parallel jobs in 6250ms
 (in test file k8s-parallel.bats, line 41)
   `kubectl wait --for=condition=Ready --timeout=$timeout pod -l jobgroup=${job_name}' failed
 No resources found in kata-containers-k8s-tests namespace.
 [bats-exec-test:71] INFO: k8s configured to use runtimeclass
 job.batch/process-item-test1 created
 job.batch/process-item-test2 created
 job.batch/process-item-test3 created
 NAME                 STATUS    COMPLETIONS   DURATION   AGE
 process-item-test1   Running   0/1                      0s
 process-item-test2   Running   0/1                      0s
 process-item-test3   Running   0/1                      0s
 error: no matching resources found
 No resources found in kata-containers-k8s-tests namespace.
 No resources found in kata-containers-k8s-tests namespace.
 DEBUG: system logs of node 'aks-nodepool1-25989463-vmss000000' since test start time (2025-11-01 16:39:03)
 -- No entries --
 job.batch "process-item-test1" deleted
 job.batch "process-item-test2" deleted
 job.batch "process-item-test3" deleted
```

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-01 18:09:37 +01:00
Fabiano Fidêncio
8a5ebd5d16 tests: k8s: run QoS tests on a bigger instance
It's been failing to start quite regularly on the smaller instance.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-01 17:54:58 +01:00
Fabiano Fidêncio
157b2c32ce scripts: release: Run helm dependencies update
Otherwise we'll face issues like:
```
Error: found in Chart.yaml, but missing in charts/ directory: node-feature-discovery
```

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-01 17:54:58 +01:00
Fabiano Fidêncio
c75a46d17f tests: Do not enable NFD on s390x
As we're failing on the uninstall, which seems related to a bug in NFD
itself, but I don't have access to an s390x machine to debug, let's skip
the enablement for now and enable it back once we've experimented with
it more on s390x.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-10-31 16:30:13 +01:00
Fabiano Fidêncio
67e38e0f92 tests: Do not enable NFD on cbl-mariner
As we're failing to install NFD on CBL Mariner, let's skip the
enablement there, and enable it once we've experimented with it more there.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-10-31 16:30:13 +01:00
Fabiano Fidêncio
1bc873397b tests: Use NFD as part of the tests
As we have the ability to deploy NFD as a sub-chart of our chart, let's
make sure we test it during our CI.

We had to increase the timeout values, where we had timeouts set, to
deploy / undeploy kata, as now NFD is also deployed / undeployed.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-10-31 16:30:13 +01:00
Fabiano Fidêncio
ebe15d154e kata-deploy: Add NFD as a dependency
Let's ensure that we add NFD as a weak dependency of the kata-deploy
helm chart.

What we're doing for now is leaving it up to the user / admin to enable
it, and if enabled then we do an explicit check for virtualization
support (x86_64 only for now).

In case NFD is already deployed, we fail the installation (in case it's
enabled on the kata-deploy helm chart) with a clear error message to the
user.

While I know that kata-remote **DOES NOT** require virtualization, I've
left this out (with a comment for when we add a peer-pods dependency on
kata-deploy) in order to simplify things for now, as kata-remote is not
a deployed shim by default.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-10-31 16:30:13 +01:00
Fabiano Fidêncio
be05e1370c kata-deploy: Allow setting the default runtime class name
As Kata Containers can be consumed by other helm-charts, hard coding the
default runtime class name to `kata` is not optimal.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-10-31 16:14:53 +01:00
Fabiano Fidêncio
820e6d6351 kata-deploy: Add more per-arch options
All the options that take a specific shim as an argument MUST have
specific per arch settings, as not all the shims are available for all
the arches, leading to issues when setting up multi-arch deployments.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-10-31 16:14:53 +01:00
Zvonko Kaiser
94abe4fc00 osbuilder: nvrc: Consume NVRC release instead of building it
Let's ensure that we consume NVRC releases straight from GitHub instead
of building the binaries ourselves.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-10-31 12:10:20 +01:00
Zvonko Kaiser
69c76971f3 gpu: Handle VFIO and IOMMUFD
We have here either /dev/vfio/<num> or /dev/vfio/devices/vfio<num>,
for IOMMUFD format /dev/vfio/devices/vfio<num>, strip "vfio" prefix

/dev/vfio/123 - basename "123" - vfioNum = "123" - cdi.k8s.io/vfio123
/dev/vfio/devices/vfio123 - basename "vfio123" - strip - vfioNum = "123" - cdi.k8s.io/vfio123

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-10-31 09:46:07 +01:00
Saul Paredes
26396881cf webhook: allow privileged containers
This allows us to test privileged containers when using the webhook.
We can do this because kata-deploy sets privileged_without_host_devices = true for kata runtime by default.

Signed-off-by: Saul Paredes <saulparedes@microsoft.com>
2025-10-30 14:59:26 -07:00
Fabiano Fidêncio
e30e2b5f45 tests: k8s: Remove tests running on GitHub provided runner
We have 2 tests running on GitHub provided runners:
* devmapper
* CRI-O

- devmapper situation

For devmapper, we're already testing devmapper on s390x as part of
one of its jobs.

More than that, this test has been failing here due to a lack of space
on the machine for quite some time, and no action was taken to bring it
back either via GARM or some other way.

With that said, let's rely on the s390x CI to test devmapper and avoid
one extra failure on our CI by removing this one.

- cri-o situation

CRI-O is being tested with a fixed version of kubernetes that's already
reached its EOL, and a CRI-O version that matches that k8s version.

There have been attempts to raise issues, and also to provide a PR that
does at least part of the work ... leaving the debugging part for the
maintainers of the CI. However, there was no action on those from the
maintainers.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-10-30 11:46:59 +01:00
Alex Lyn
fa521220a9 Merge pull request #11816 from jiuyi123/rs-vm-template-kata-ctl-merge
kata-ctl: add factory subcommands for VM template management
2025-10-30 18:21:12 +08:00
ssc
551caad4b1 docs: add guide on VM templating usage in runtime-rs
- Explained the concept and benefits of VM templating
- Provided step-by-step instructions for enabling VM templating
- Detailed the setup for using snapshotter in place of VirtioFS for template-based VM creation
- Added performance test results comparing template-based and direct VM creation

Signed-off-by: ssc <741026400@qq.com>
2025-10-30 15:18:31 +08:00
ssc
5a586e13a1 kata-ctl: add factory subcommands for VM template management
- init: initialize the VM template factory
- status: check the current factory status
- destroy: clean up and remove factory resources
These commands provide basic lifecycle management for VM templates.

Signed-off-by: ssc <741026400@qq.com>
2025-10-30 10:27:17 +08:00
RuoqingHe
8878c46e8f Merge pull request #11867 from spectator333/update-rust-vmm-deps
dragonball: Bump kvm-ioctls to fix security issue
2025-10-30 00:17:29 +08:00
Siyu Tao
dd444d23b3 dragonball: Bump kvm-ioctls to fix security issue
Use `ioctl_with_mut_ref` instead of `ioctl_with_ref` in the
`create_device` method, as it needs to write to the `kvm_create_device`
struct passed to it. The fix was released in kvm-ioctls v0.12.1.
Signed-off-by: Siyu Tao <taosiyu2024@163.com>
2025-10-29 14:03:29 +00:00
Steve Horsman
0e19a2bf91 Merge pull request #11993 from zvonkok/vectorAdd
gpu: Add libs for CC
2025-10-29 13:42:34 +00:00
stevenhorsman
555926ea1a libs: Fix formatting issue
Fix the cargo fmt issues and then we can make the libs tests required
again to avoid this regression happening again.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-10-29 13:13:50 +01:00
Steve Horsman
dbdd1009af Merge pull request #11933 from kata-containers/topic/kata-deploy-nfd-dependency-part-I
kata-deploy: Automatically deploy NodeFeatureRules for TEEs
2025-10-29 09:50:38 +00:00
Fabiano Fidêncio
103f80c7f5 readme: install: Drop outdated documentation
kata-deploy helm chart is *THE* way to deploy kata-containers on
kubernetes environments, and kubernetes environments is basically the
only reliably tested deployment we have.

For now, let's just drop documentation that is outdated / incorrect, and
in the future let's ensure we update the linked docs, as we work on
update / upgrade for the helm chart.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-10-29 09:41:57 +01:00
Zvonko Kaiser
5ff218823c gpu: Remove unneeded libraries
The libs in question were added when moving to developer.nvidia.com,
but after switching back to Ubuntu-only based builds they are no longer
needed. Remove them to keep the rootfs as minimal as possible.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-10-29 08:03:36 +01:00
Zvonko Kaiser
6d9b4059f5 gpu: Add libs for CC
In the case of CC we need additional libraries in the rootfs.
Add them conditionally if type == confidential.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-10-29 08:03:36 +01:00
Xuewei Niu
55d181beb1 Merge pull request #11828 from jiuyi123/rs-vm-template-runtime-rs
runtime-rs: introduce VM template lifecycle and integration
2025-10-29 14:03:46 +08:00
Xuewei Niu
8aca32dfa9 Merge pull request #11862 from StevenFryto/rootless_clh
runtime-rs: supporting the CLH VMM process running in non-root mode
2025-10-29 13:31:53 +08:00
ssc
16e8cf1a09 runtime-rs: boot vm from template
Add build_vm_from_template() that flips the boot_from_template flag,
wires factory.template_path/{memory,state} into the hypervisor config,
and returns ready-to-use hypervisor & agent instances.
When factory.template is enabled, VirtContainer bypasses normal creation
and directly boots the VM by restoring the template through incoming migration,
completing the "create → save → clone" loop.

Fixes: #11413

Signed-off-by: ssc <741026400@qq.com>
2025-10-29 12:38:28 +08:00
ssc
550615285c runtime-rs: add factory, template and vm modules for VM template lifecycle
Introduced factory::FactoryConfig with init/destroy/status commands to manage template pools.
Added template::Template to fetch, create and persist base VMs.
Introduced vm::{VM, VMConfig} exposing create, pause, save, resume, stop,
disconnect and migration helpers for sandbox integration.
Extended QemuInner to executes QMP incoming migration, pause/resume and status tracking.

Fixes: #11413

Signed-off-by: ssc <741026400@qq.com>
2025-10-29 12:38:28 +08:00
ssc
135c84b6cb kata-types: add VM template and factory configuration
Added new fields in Hypervisor struct to support VM template creation,
template boot, memory and device state paths, shared path, and store
paths. Introduced a Factory struct in config to manage template path,
cache endpoint, cache number, and template enable flag. Integrated
Factory into TomlConfig for runtime configuration parsing.

Fixes: #11413
Signed-off-by: ssc <741026400@qq.com>
2025-10-29 11:49:08 +08:00
stevenfryto
2ceadc5fa3 runtime-rs: supporting the CLH VMM process running in non-root mode
This change enables to run the Cloud Hypervisor VMM using a non-root user
when rootless flag is set true in configuration.

Fixes: #11414

Signed-off-by: stevenfryto <sunzitai_1832@bupt.edu.cn>
2025-10-29 01:55:10 +00:00
stevenfryto
2ddbae3aa6 runtime-rs: pass the tuntap fds down to Cloud Hypervisor
Pass the file descriptors of the tuntap device to the Cloud Hypervisor VMM process
so that the process could open the device without cap_net_admin

Signed-off-by: stevenfryto <sunzitai_1832@bupt.edu.cn>
2025-10-29 01:55:10 +00:00
Fabiano Fidêncio
59883a2d99 actions: Remove unused USING_NFD
There's no reason to keep the env var / input as it's never been used
and now kata-deploy detects automatically whether NFD is deployed or
not.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-10-28 21:24:27 +01:00
Fabiano Fidêncio
f9825b4e6e kata-deploy: Automatically deploy NodeFeatureRules for TEEs
When the NodeFeatureRule CRD is detected kata-deploy will:
* Create the specific NodeFeatureRules for the x86_64 TEEs
* Adapt the TEE runtime classes to take into account the number of keys
  available in the system when spawning the pod sandbox.

Note, we still do not have NFD as a sub-dependency of the helm chart, and
I'm not even sure whether we will. However, it's important to integrate
better with the scenarios where NFD is already present.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-10-28 21:24:27 +01:00
Manuel Huber
8dc78057d6 ci: Refactor NVIDIA NIM test
Change NIM bats file logic to allow skipping test cases which
require multiple GPUs. This can be helpful for test clusters where
there is only one node with a single GPU, or for local test
environments with a single-node cluster with a single GPU.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-10-28 19:12:16 +01:00
Manuel Huber
be32b77baf ci: Add NVIDIA CUDA vectoradd test
This change adds a CUDA vectoradd test case and makes enabling NVRC
tracing optional and idempotent.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-10-28 19:12:16 +01:00
964 changed files with 50077 additions and 18779 deletions

View File

@@ -8,11 +8,8 @@ self-hosted-runner:
# Labels of self-hosted runner that linter should ignore
labels:
- amd64-nvidia-a100
- amd64-nvidia-h100-snp
- arm64-k8s
- containerd-v1.7
- containerd-v2.0
- containerd-v2.1
- containerd-v2.2
- garm-ubuntu-2004
- garm-ubuntu-2004-smaller
- garm-ubuntu-2204
@@ -23,10 +20,11 @@ self-hosted-runner:
- ppc64le-k8s
- ppc64le-small
- ubuntu-24.04-ppc64le
- ubuntu-24.04-s390x
- metrics
- riscv-builder
- sev-snp
- s390x
- s390x-large
- tdx
- ubuntu-22.04-arm
- ubuntu-24.04-arm

View File

@@ -71,7 +71,7 @@ jobs:
fail-fast: false
matrix:
containerd_version: ['lts', 'active']
vmm: ['clh', 'cloud-hypervisor', 'dragonball', 'qemu', 'stratovirt']
vmm: ['clh', 'cloud-hypervisor', 'dragonball', 'qemu', 'qemu-runtime-rs']
runs-on: ubuntu-22.04
env:
CONTAINERD_VERSION: ${{ matrix.containerd_version }}
@@ -117,7 +117,7 @@ jobs:
fail-fast: false
matrix:
containerd_version: ['lts', 'active']
vmm: ['clh', 'qemu', 'dragonball', 'stratovirt']
vmm: ['clh', 'qemu', 'dragonball', 'qemu-runtime-rs']
runs-on: ubuntu-22.04
env:
CONTAINERD_VERSION: ${{ matrix.containerd_version }}
@@ -147,9 +147,18 @@ jobs:
name: kata-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-artifacts
- name: get-kata-tools-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-tools-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-tools-artifacts
- name: Install kata
run: bash tests/integration/nydus/gha-run.sh install-kata kata-artifacts
- name: Install kata-tools
run: bash tests/integration/nydus/gha-run.sh install-kata-tools kata-tools-artifacts
- name: Run nydus tests
timeout-minutes: 10
run: bash tests/integration/nydus/gha-run.sh run
@@ -279,50 +288,6 @@ jobs:
timeout-minutes: 15
run: bash tests/functional/vfio/gha-run.sh run
run-docker-tests:
name: run-docker-tests
strategy:
# We can set this to true whenever we're 100% sure that
# all the tests are not flaky, otherwise we'll fail them
# all due to a single flaky instance.
fail-fast: false
matrix:
vmm:
- qemu
runs-on: ubuntu-22.04
env:
KATA_HYPERVISOR: ${{ matrix.vmm }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Install dependencies
run: bash tests/integration/docker/gha-run.sh install-dependencies
env:
GH_TOKEN: ${{ github.token }}
- name: get-kata-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-artifacts
- name: Install kata
run: bash tests/integration/docker/gha-run.sh install-kata kata-artifacts
- name: Run docker smoke test
timeout-minutes: 5
run: bash tests/integration/docker/gha-run.sh run
run-nerdctl-tests:
name: run-nerdctl-tests
strategy:
@@ -336,6 +301,7 @@ jobs:
- dragonball
- qemu
- cloud-hypervisor
- qemu-runtime-rs
runs-on: ubuntu-22.04
env:
KATA_HYPERVISOR: ${{ matrix.vmm }}
@@ -410,8 +376,16 @@ jobs:
name: kata-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-artifacts
- name: Install kata
run: bash tests/functional/kata-agent-apis/gha-run.sh install-kata kata-artifacts
- name: get-kata-tools-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-tools-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-tools-artifacts
- name: Install kata & kata-tools
run: |
bash tests/functional/kata-agent-apis/gha-run.sh install-kata kata-artifacts
bash tests/functional/kata-agent-apis/gha-run.sh install-kata-tools kata-tools-artifacts
- name: Run kata agent api tests with agent-ctl
run: bash tests/functional/kata-agent-apis/gha-run.sh run

View File

@@ -106,44 +106,3 @@ jobs:
- name: Run containerd-stability tests
timeout-minutes: 15
run: bash tests/stability/gha-run.sh run
run-docker-tests:
name: run-docker-tests
strategy:
# We can set this to true whenever we're 100% sure that
# all the tests are not flaky, otherwise we'll fail them
# all due to a single flaky instance.
fail-fast: false
matrix:
vmm: ['qemu']
runs-on: s390x-large
env:
KATA_HYPERVISOR: ${{ matrix.vmm }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Install dependencies
run: bash tests/integration/docker/gha-run.sh install-dependencies
- name: get-kata-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-s390x${{ inputs.tarball-suffix }}
path: kata-artifacts
- name: Install kata
run: bash tests/integration/docker/gha-run.sh install-kata kata-artifacts
- name: Run docker smoke test
timeout-minutes: 5
run: bash tests/integration/docker/gha-run.sh run

View File

@@ -12,7 +12,12 @@ name: Build checks
jobs:
check:
name: check
runs-on: ${{ inputs.instance }}
runs-on: >-
${{
( contains(inputs.instance, 's390x') && matrix.component.name == 'runtime' ) && 's390x' ||
( contains(inputs.instance, 'ppc64le') && (matrix.component.name == 'runtime' || matrix.component.name == 'agent') ) && 'ppc64le' ||
inputs.instance
}}
strategy:
fail-fast: false
matrix:
@@ -68,6 +73,8 @@ jobs:
needs:
- rust
- protobuf-compiler
instance:
- ${{ inputs.instance }}
steps:
- name: Adjust a permission for repo

View File

@@ -41,16 +41,11 @@ jobs:
matrix:
asset:
- agent
- agent-ctl
- busybox
- cloud-hypervisor
- cloud-hypervisor-glibc
- coco-guest-components
- csi-kata-directvolume
- firecracker
- genpolicy
- kata-ctl
- kata-manager
- kernel
- kernel-confidential
- kernel-dragonball-experimental
@@ -59,12 +54,11 @@ jobs:
- nydus
- ovmf
- ovmf-sev
- ovmf-tdx
- pause-image
- qemu
- qemu-snp-experimental
- qemu-tdx-experimental
- stratovirt
- trace-forwarder
- virtiofsd
stage:
- ${{ inputs.stage }}
@@ -122,7 +116,7 @@ jobs:
echo "oci-name=${oci_image%@*}" >> "$GITHUB_OUTPUT"
echo "oci-digest=${oci_image#*@}" >> "$GITHUB_OUTPUT"
- uses: oras-project/setup-oras@5c0b487ce3fe0ce3ab0d034e63669e426e294e4d # v1.2.2
- uses: oras-project/setup-oras@22ce207df3b08e061f537244349aac6ae1d214f6 # v1.2.4
if: ${{ env.PERFORM_ATTESTATION == 'yes' }}
with:
version: "1.2.0"
@@ -154,8 +148,8 @@ jobs:
if: ${{ startsWith(matrix.asset, 'kernel-nvidia-gpu') }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-amd64-${{ matrix.asset }}-headers${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}-headers.tar.zst
name: kata-artifacts-amd64-${{ matrix.asset }}-modules${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}-modules.tar.zst
retention-days: 15
if-no-files-found: error
@@ -172,6 +166,8 @@ jobs:
- rootfs-image
- rootfs-image-confidential
- rootfs-image-mariner
- rootfs-image-nvidia-gpu
- rootfs-image-nvidia-gpu-confidential
- rootfs-initrd
- rootfs-initrd-confidential
- rootfs-initrd-nvidia-gpu
@@ -241,8 +237,8 @@ jobs:
asset:
- busybox
- coco-guest-components
- kernel-nvidia-gpu-headers
- kernel-nvidia-gpu-confidential-headers
- kernel-nvidia-gpu-modules
- kernel-nvidia-gpu-confidential-modules
- pause-image
steps:
- uses: geekyeggo/delete-artifact@f275313e70c08f6120db482d7a6b98377786765b # v5.1.0
@@ -363,3 +359,104 @@ jobs:
path: kata-static.tar.zst
retention-days: 15
if-no-files-found: error
build-tools-asset:
name: build-tools-asset
runs-on: ubuntu-22.04
permissions:
contents: read
packages: write
strategy:
matrix:
asset:
- agent-ctl
- csi-kata-directvolume
- genpolicy
- kata-ctl
- kata-manager
- trace-forwarder
stage:
- ${{ inputs.stage }}
steps:
- name: Login to Kata Containers quay.io
if: ${{ inputs.push-to-registry == 'yes' }}
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: quay.io
username: ${{ vars.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0 # This is needed in order to keep the commit ids history
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Build ${{ matrix.asset }}
id: build
run: |
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
mkdir -p kata-tools-build && cp "${build_dir}"/kata-static-"${KATA_ASSET}"*.tar.* kata-tools-build/.
env:
KATA_ASSET: ${{ matrix.asset }}
TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
PUSH_TO_REGISTRY: ${{ inputs.push-to-registry }}
ARTEFACT_REGISTRY: ghcr.io
ARTEFACT_REGISTRY_USERNAME: ${{ github.actor }}
ARTEFACT_REGISTRY_PASSWORD: ${{ secrets.GITHUB_TOKEN }}
TARGET_BRANCH: ${{ inputs.target-branch }}
RELEASE: ${{ inputs.stage == 'release' && 'yes' || 'no' }}
- name: store-artifact ${{ matrix.asset }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-tools-artifacts-amd64-${{ matrix.asset }}${{ inputs.tarball-suffix }}
path: kata-tools-build/kata-static-${{ matrix.asset }}.tar.zst
retention-days: 15
if-no-files-found: error
create-kata-tools-tarball:
name: create-kata-tools-tarball
runs-on: ubuntu-22.04
needs: [build-tools-asset]
permissions:
contents: read
packages: write
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
fetch-tags: true
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
pattern: kata-tools-artifacts-amd64-*${{ inputs.tarball-suffix }}
path: kata-tools-artifacts
merge-multiple: true
- name: merge-artifacts
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-tools-artifacts versions.yaml kata-tools-static.tar.zst
env:
RELEASE: ${{ inputs.stage == 'release' && 'yes' || 'no' }}
- name: store-artifacts
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-tools-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-tools-static.tar.zst
retention-days: 15
if-no-files-found: error

View File

@@ -31,7 +31,7 @@ permissions: {}
jobs:
build-asset:
name: build-asset
runs-on: ubuntu-22.04-arm
runs-on: ubuntu-24.04-arm
permissions:
contents: read
packages: write
@@ -51,7 +51,6 @@ jobs:
- nydus
- ovmf
- qemu
- stratovirt
- virtiofsd
env:
PERFORM_ATTESTATION: ${{ matrix.asset == 'agent' && inputs.push-to-registry == 'yes' && 'yes' || 'no' }}
@@ -103,7 +102,7 @@ jobs:
echo "oci-name=${oci_image%@*}" >> "$GITHUB_OUTPUT"
echo "oci-digest=${oci_image#*@}" >> "$GITHUB_OUTPUT"
- uses: oras-project/setup-oras@5c0b487ce3fe0ce3ab0d034e63669e426e294e4d # v1.2.2
- uses: oras-project/setup-oras@22ce207df3b08e061f537244349aac6ae1d214f6 # v1.2.4
if: ${{ env.PERFORM_ATTESTATION == 'yes' }}
with:
version: "1.2.0"
@@ -135,14 +134,14 @@ jobs:
if: ${{ startsWith(matrix.asset, 'kernel-nvidia-gpu') }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-arm64-${{ matrix.asset }}-headers${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}-headers.tar.zst
name: kata-artifacts-arm64-${{ matrix.asset }}-modules${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}-modules.tar.zst
retention-days: 15
if-no-files-found: error
build-asset-rootfs:
name: build-asset-rootfs
runs-on: ubuntu-22.04-arm
runs-on: ubuntu-24.04-arm
needs: build-asset
permissions:
contents: read
@@ -151,6 +150,7 @@ jobs:
matrix:
asset:
- rootfs-image
- rootfs-image-nvidia-gpu
- rootfs-initrd
- rootfs-initrd-nvidia-gpu
steps:
@@ -210,13 +210,13 @@ jobs:
# We don't need the binaries installed in the rootfs as part of the release tarball, so can delete them now we've built the rootfs
remove-rootfs-binary-artifacts:
name: remove-rootfs-binary-artifacts
runs-on: ubuntu-22.04-arm
runs-on: ubuntu-24.04-arm
needs: build-asset-rootfs
strategy:
matrix:
asset:
- busybox
- kernel-nvidia-gpu-headers
- kernel-nvidia-gpu-modules
steps:
- uses: geekyeggo/delete-artifact@f275313e70c08f6120db482d7a6b98377786765b # v5.1.0
with:
@@ -225,7 +225,7 @@ jobs:
# We don't need the binaries installed in the rootfs as part of the release tarball, so can delete them now we've built the rootfs
remove-rootfs-binary-artifacts-for-release:
name: remove-rootfs-binary-artifacts-for-release
runs-on: ubuntu-22.04-arm
runs-on: ubuntu-24.04-arm
needs: build-asset-rootfs
strategy:
matrix:
@@ -239,7 +239,7 @@ jobs:
build-asset-shim-v2:
name: build-asset-shim-v2
runs-on: ubuntu-22.04-arm
runs-on: ubuntu-24.04-arm
needs: [build-asset, build-asset-rootfs, remove-rootfs-binary-artifacts, remove-rootfs-binary-artifacts-for-release]
permissions:
contents: read
@@ -299,7 +299,7 @@ jobs:
create-kata-tarball:
name: create-kata-tarball
runs-on: ubuntu-22.04-arm
runs-on: ubuntu-24.04-arm
needs: [build-asset, build-asset-rootfs, build-asset-shim-v2]
permissions:
contents: read

View File

@@ -32,7 +32,7 @@ jobs:
permissions:
contents: read
packages: write
runs-on: ppc64le-small
runs-on: ubuntu-24.04-ppc64le
strategy:
matrix:
asset:
@@ -89,7 +89,7 @@ jobs:
build-asset-rootfs:
name: build-asset-rootfs
runs-on: ppc64le-small
runs-on: ubuntu-24.04-ppc64le
needs: build-asset
permissions:
contents: read
@@ -170,7 +170,7 @@ jobs:
build-asset-shim-v2:
name: build-asset-shim-v2
runs-on: ppc64le-small
runs-on: ubuntu-24.04-ppc64le
needs: [build-asset, build-asset-rootfs, remove-rootfs-binary-artifacts]
permissions:
contents: read
@@ -230,7 +230,7 @@ jobs:
create-kata-tarball:
name: create-kata-tarball
runs-on: ppc64le-small
runs-on: ubuntu-24.04-ppc64le
needs: [build-asset, build-asset-rootfs, build-asset-shim-v2]
permissions:
contents: read

View File

@@ -20,9 +20,6 @@ on:
required: false
type: string
default: ""
secrets:
QUAY_DEPLOYER_PASSWORD:
required: true
permissions: {}
@@ -41,14 +38,6 @@ jobs:
- kernel
- virtiofsd
steps:
- name: Login to Kata Containers quay.io
if: ${{ inputs.push-to-registry == 'yes' }}
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: quay.io
username: ${{ vars.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
@@ -82,5 +71,5 @@ jobs:
with:
name: kata-artifacts-riscv64-${{ matrix.asset }}${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}.tar.zst
retention-days: 15
retention-days: 3
if-no-files-found: error

View File

@@ -32,7 +32,7 @@ permissions: {}
jobs:
build-asset:
name: build-asset
runs-on: s390x
runs-on: ubuntu-24.04-s390x
permissions:
contents: read
packages: write
@@ -257,7 +257,7 @@ jobs:
build-asset-shim-v2:
name: build-asset-shim-v2
runs-on: s390x
runs-on: ubuntu-24.04-s390x
needs: [build-asset, build-asset-rootfs, remove-rootfs-binary-artifacts]
permissions:
contents: read
@@ -319,7 +319,7 @@ jobs:
create-kata-tarball:
name: create-kata-tarball
runs-on: s390x
runs-on: ubuntu-24.04-s390x
needs:
- build-asset
- build-asset-rootfs

View File

@@ -0,0 +1,75 @@
name: Build kubectl multi-arch image
on:
schedule:
# Run every Sunday at 00:00 UTC
- cron: '0 0 * * 0'
workflow_dispatch:
# Allow manual triggering
push:
branches:
- main
paths:
- 'tools/packaging/kubectl/Dockerfile'
- '.github/workflows/build-kubectl-image.yaml'
permissions: {}
env:
REGISTRY: quay.io
IMAGE_NAME: kata-containers/kubectl
jobs:
build-and-push:
name: Build and push multi-arch image
runs-on: ubuntu-24.04
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Set up QEMU
uses: docker/setup-qemu-action@29109295f81e9208d7d86ff1c6c12d2833863392 # v3.6.0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
- name: Login to Quay.io
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ${{ env.REGISTRY }}
username: ${{ vars.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- name: Get kubectl version
id: kubectl-version
run: |
KUBECTL_VERSION=$(curl -L -s https://dl.k8s.io/release/stable.txt)
echo "version=${KUBECTL_VERSION}" >> "$GITHUB_OUTPUT"
- name: Generate image metadata
id: meta
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=raw,value=latest
type=raw,value={{date 'YYYYMMDD'}}
type=raw,value=${{ steps.kubectl-version.outputs.version }}
type=sha,prefix=
- name: Build and push multi-arch image
uses: docker/build-push-action@ca052bb54ab0790a636c9b5f226502c73d547a25 # v5.4.0
with:
context: tools/packaging/kubectl/
file: tools/packaging/kubectl/Dockerfile
platforms: linux/amd64,linux/arm64,linux/s390x,linux/ppc64le
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max

34
.github/workflows/ci-nightly-riscv.yaml vendored Normal file
View File

@@ -0,0 +1,34 @@
on:
schedule:
- cron: '0 5 * * *'
name: Nightly CI for RISC-V
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions: {}
jobs:
build-kata-static-tarball-riscv:
permissions:
contents: read
packages: write
id-token: write
attestations: write
uses: ./.github/workflows/build-kata-static-tarball-riscv64.yaml
with:
tarball-suffix: -${{ github.sha }}
commit-hash: ${{ github.sha }}
target-branch: ${{ github.ref_name }}
build-checks-preview:
strategy:
fail-fast: false
matrix:
instance:
- "riscv-builder"
uses: ./.github/workflows/build-checks-preview-riscv64.yaml
with:
instance: ${{ matrix.instance }}

36
.github/workflows/ci-nightly-rust.yaml vendored Normal file
View File

@@ -0,0 +1,36 @@
name: Kata Containers Nightly CI (Rust)
on:
schedule:
- cron: '0 1 * * *' # Run at 1 AM UTC (1 hour after script-based nightly)
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
permissions: {}
jobs:
kata-containers-ci-on-push-rust:
permissions:
contents: read
packages: write
id-token: write
attestations: write
uses: ./.github/workflows/ci.yaml
with:
commit-hash: ${{ github.sha }}
pr-number: "nightly-rust"
tag: ${{ github.sha }}-nightly-rust
target-branch: ${{ github.ref_name }}
build-type: "rust" # Use Rust-based build
secrets:
AUTHENTICATED_IMAGE_PASSWORD: ${{ secrets.AUTHENTICATED_IMAGE_PASSWORD }}
AZ_APPID: ${{ secrets.AZ_APPID }}
AZ_TENANT_ID: ${{ secrets.AZ_TENANT_ID }}
AZ_SUBSCRIPTION_ID: ${{ secrets.AZ_SUBSCRIPTION_ID }}
CI_HKD_PATH: ${{ secrets.CI_HKD_PATH }}
ITA_KEY: ${{ secrets.ITA_KEY }}
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
NGC_API_KEY: ${{ secrets.NGC_API_KEY }}
KBUILD_SIGN_PIN: ${{ secrets.KBUILD_SIGN_PIN }}

View File

@@ -19,6 +19,11 @@ on:
required: false
type: string
default: no
build-type:
description: The build type for kata-deploy. Use 'rust' for Rust-based build, empty or omit for script-based (default).
required: false
type: string
default: ""
secrets:
AUTHENTICATED_IMAGE_PASSWORD:
required: true
@@ -72,6 +77,7 @@ jobs:
target-branch: ${{ inputs.target-branch }}
runner: ubuntu-22.04
arch: amd64
build-type: ${{ inputs.build-type }}
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
@@ -102,8 +108,9 @@ jobs:
tag: ${{ inputs.tag }}-arm64
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
runner: ubuntu-22.04-arm
runner: ubuntu-24.04-arm
arch: arm64
build-type: ${{ inputs.build-type }}
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
@@ -134,20 +141,6 @@ jobs:
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
build-kata-static-tarball-riscv64:
permissions:
contents: read
packages: write
id-token: write
attestations: write
uses: ./.github/workflows/build-kata-static-tarball-riscv64.yaml
with:
tarball-suffix: -${{ inputs.tag }}
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
publish-kata-deploy-payload-s390x:
needs: build-kata-static-tarball-s390x
permissions:
@@ -161,8 +154,9 @@ jobs:
tag: ${{ inputs.tag }}-s390x
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
runner: s390x
runner: ubuntu-24.04-s390x
arch: s390x
build-type: ${{ inputs.build-type }}
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
@@ -179,8 +173,9 @@ jobs:
tag: ${{ inputs.tag }}-ppc64le
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
runner: ppc64le-small
runner: ubuntu-24.04-ppc64le
arch: ppc64le
build-type: ${{ inputs.build-type }}
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
@@ -247,14 +242,14 @@ jobs:
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-kata-tarball
- name: get-kata-tools-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-amd64-${{ inputs.tag }}
path: kata-artifacts
name: kata-tools-static-tarball-amd64-${{ inputs.tag }}
path: kata-tools-artifacts
- name: Install tools
run: bash tests/integration/kubernetes/gha-run.sh install-kata-tools kata-artifacts
- name: Install kata-tools
run: bash tests/integration/kubernetes/gha-run.sh install-kata-tools kata-tools-artifacts
- name: Copy binary into Docker context
run: |
@@ -302,7 +297,7 @@ jobs:
tarball-suffix: -${{ inputs.tag }}
registry: ghcr.io
repo: ${{ github.repository_owner }}/kata-deploy-ci
tag: ${{ inputs.tag }}-amd64
tag: ${{ inputs.tag }}-amd64${{ inputs.build-type == 'rust' && '-rust' || '' }}
commit-hash: ${{ inputs.commit-hash }}
pr-number: ${{ inputs.pr-number }}
target-branch: ${{ inputs.target-branch }}
@@ -311,18 +306,6 @@ jobs:
AZ_TENANT_ID: ${{ secrets.AZ_TENANT_ID }}
AZ_SUBSCRIPTION_ID: ${{ secrets.AZ_SUBSCRIPTION_ID }}
run-k8s-tests-on-amd64:
if: ${{ inputs.skip-test != 'yes' }}
needs: publish-kata-deploy-payload-amd64
uses: ./.github/workflows/run-k8s-tests-on-amd64.yaml
with:
registry: ghcr.io
repo: ${{ github.repository_owner }}/kata-deploy-ci
tag: ${{ inputs.tag }}-amd64
commit-hash: ${{ inputs.commit-hash }}
pr-number: ${{ inputs.pr-number }}
target-branch: ${{ inputs.target-branch }}
run-k8s-tests-on-arm64:
if: ${{ inputs.skip-test != 'yes' }}
needs: publish-kata-deploy-payload-arm64
@@ -330,7 +313,7 @@ jobs:
with:
registry: ghcr.io
repo: ${{ github.repository_owner }}/kata-deploy-ci
tag: ${{ inputs.tag }}-arm64
tag: ${{ inputs.tag }}-arm64${{ inputs.build-type == 'rust' && '-rust' || '' }}
commit-hash: ${{ inputs.commit-hash }}
pr-number: ${{ inputs.pr-number }}
target-branch: ${{ inputs.target-branch }}
@@ -340,9 +323,10 @@ jobs:
needs: publish-kata-deploy-payload-amd64
uses: ./.github/workflows/run-k8s-tests-on-nvidia-gpu.yaml
with:
tarball-suffix: -${{ inputs.tag }}
registry: ghcr.io
repo: ${{ github.repository_owner }}/kata-deploy-ci
tag: ${{ inputs.tag }}-amd64
tag: ${{ inputs.tag }}-amd64${{ inputs.build-type == 'rust' && '-rust' || '' }}
commit-hash: ${{ inputs.commit-hash }}
pr-number: ${{ inputs.pr-number }}
target-branch: ${{ inputs.target-branch }}
@@ -364,7 +348,7 @@ jobs:
tarball-suffix: -${{ inputs.tag }}
registry: ghcr.io
repo: ${{ github.repository_owner }}/kata-deploy-ci
tag: ${{ inputs.tag }}-amd64
tag: ${{ inputs.tag }}-amd64${{ inputs.build-type == 'rust' && '-rust' || '' }}
commit-hash: ${{ inputs.commit-hash }}
pr-number: ${{ inputs.pr-number }}
target-branch: ${{ inputs.target-branch }}
@@ -382,7 +366,7 @@ jobs:
with:
registry: ghcr.io
repo: ${{ github.repository_owner }}/kata-deploy-ci
tag: ${{ inputs.tag }}-s390x
tag: ${{ inputs.tag }}-s390x${{ inputs.build-type == 'rust' && '-rust' || '' }}
commit-hash: ${{ inputs.commit-hash }}
pr-number: ${{ inputs.pr-number }}
target-branch: ${{ inputs.target-branch }}
@@ -396,7 +380,7 @@ jobs:
with:
registry: ghcr.io
repo: ${{ github.repository_owner }}/kata-deploy-ci
tag: ${{ inputs.tag }}-ppc64le
tag: ${{ inputs.tag }}-ppc64le${{ inputs.build-type == 'rust' && '-rust' || '' }}
commit-hash: ${{ inputs.commit-hash }}
pr-number: ${{ inputs.pr-number }}
target-branch: ${{ inputs.target-branch }}
@@ -408,7 +392,7 @@ jobs:
with:
registry: ghcr.io
repo: ${{ github.repository_owner }}/kata-deploy-ci
tag: ${{ inputs.tag }}-amd64
tag: ${{ inputs.tag }}-amd64${{ inputs.build-type == 'rust' && '-rust' || '' }}
commit-hash: ${{ inputs.commit-hash }}
pr-number: ${{ inputs.pr-number }}
target-branch: ${{ inputs.target-branch }}
@@ -441,13 +425,11 @@ jobs:
{ containerd_version: lts, vmm: clh },
{ containerd_version: lts, vmm: dragonball },
{ containerd_version: lts, vmm: qemu },
{ containerd_version: lts, vmm: stratovirt },
{ containerd_version: lts, vmm: cloud-hypervisor },
{ containerd_version: lts, vmm: qemu-runtime-rs },
{ containerd_version: active, vmm: clh },
{ containerd_version: active, vmm: dragonball },
{ containerd_version: active, vmm: qemu },
{ containerd_version: active, vmm: stratovirt },
{ containerd_version: active, vmm: cloud-hypervisor },
{ containerd_version: active, vmm: qemu-runtime-rs },
]
@@ -501,7 +483,7 @@ jobs:
vmm: ${{ matrix.params.vmm }}
run-cri-containerd-tests-arm64:
if: ${{ inputs.skip-test != 'yes' }}
if: false
needs: build-kata-static-tarball-arm64
strategy:
fail-fast: false

32
.github/workflows/docs.yaml vendored Normal file
View File

@@ -0,0 +1,32 @@
name: Documentation
on:
push:
branches:
- main
permissions: {}
jobs:
deploy-docs:
name: deploy-docs
permissions:
contents: read
pages: write
id-token: write
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
steps:
- uses: actions/configure-pages@v5
- uses: actions/checkout@v5
with:
persist-credentials: false
- uses: actions/setup-python@v5
with:
python-version: 3.x
- run: pip install zensical
- run: zensical build --clean
- uses: actions/upload-pages-artifact@v4
with:
path: site
- uses: actions/deploy-pages@v4
id: deployment

View File

@@ -10,7 +10,9 @@ on:
- opened
- synchronize
- reopened
- edited
- labeled
- unlabeled
permissions: {}

View File

@@ -1,43 +0,0 @@
name: kata-runtime-classes-sync
on:
pull_request:
types:
- opened
- edited
- reopened
- synchronize
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
kata-deploy-runtime-classes-check:
name: kata-deploy-runtime-classes-check
runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Ensure the split out runtime classes match the all-in-one file
run: |
pushd tools/packaging/kata-deploy/runtimeclasses/
echo "::group::Combine runtime classes"
for runtimeClass in $(find . -type f \( -name "*.yaml" -and -not -name "kata-runtimeClasses.yaml" \) | sort); do
echo "Adding ${runtimeClass} to the resultingRuntimeClasses.yaml"
cat "${runtimeClass}" >> resultingRuntimeClasses.yaml;
done
echo "::endgroup::"
echo "::group::Displaying the content of resultingRuntimeClasses.yaml"
cat resultingRuntimeClasses.yaml
echo "::endgroup::"
echo ""
echo "::group::Displaying the content of kata-runtimeClasses.yaml"
cat kata-runtimeClasses.yaml
echo "::endgroup::"
echo ""
diff resultingRuntimeClasses.yaml kata-runtimeClasses.yaml

View File

@@ -82,6 +82,7 @@ jobs:
target-branch: ${{ github.ref_name }}
runner: ubuntu-22.04
arch: amd64
build-type: "" # Use script-based build (default)
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
@@ -97,8 +98,9 @@ jobs:
repo: kata-containers/kata-deploy-ci
tag: kata-containers-latest-arm64
target-branch: ${{ github.ref_name }}
runner: ubuntu-22.04-arm
runner: ubuntu-24.04-arm
arch: arm64
build-type: "" # Use script-based build (default)
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
@@ -116,6 +118,7 @@ jobs:
target-branch: ${{ github.ref_name }}
runner: s390x
arch: s390x
build-type: "" # Use script-based build (default)
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
@@ -131,8 +134,9 @@ jobs:
repo: kata-containers/kata-deploy-ci
tag: kata-containers-latest-ppc64le
target-branch: ${{ github.ref_name }}
runner: ppc64le-small
runner: ubuntu-24.04-ppc64le
arch: ppc64le
build-type: "" # Use script-based build (default)
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
@@ -195,6 +199,7 @@ jobs:
yq eval '.image.reference = "quay.io/kata-containers/kata-deploy-ci" | .image.tag = "kata-containers-latest"' -i tools/packaging/kata-deploy/helm-chart/kata-deploy/values.yaml
echo "Generating the chart package"
helm dependencies update tools/packaging/kata-deploy/helm-chart/kata-deploy
helm package tools/packaging/kata-deploy/helm-chart/kata-deploy
echo "Pushing the chart to the OCI registries"

View File

@@ -30,6 +30,11 @@ on:
description: The arch of the tarball.
required: true
type: string
build-type:
description: The build type for kata-deploy. Use 'rust' for Rust-based build, empty or omit for script-based (default).
required: false
type: string
default: ""
secrets:
QUAY_DEPLOYER_PASSWORD:
required: true
@@ -50,6 +55,24 @@ jobs:
fetch-depth: 0
persist-credentials: false
- name: Remove unnecessary directories to free up space
run: |
sudo rm -rf /usr/local/.ghcup
sudo rm -rf /opt/hostedtoolcache/CodeQL
sudo rm -rf /usr/local/lib/android
sudo rm -rf /usr/share/dotnet
sudo rm -rf /opt/ghc
sudo rm -rf /usr/local/share/boost
sudo rm -rf /usr/lib/jvm
sudo rm -rf /usr/share/swift
sudo rm -rf /usr/local/share/powershell
sudo rm -rf /usr/local/julia*
sudo rm -rf /opt/az
sudo rm -rf /usr/local/share/chromium
sudo rm -rf /opt/microsoft
sudo rm -rf /opt/google
sudo rm -rf /usr/lib/firefox
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
@@ -83,8 +106,10 @@ jobs:
REGISTRY: ${{ inputs.registry }}
REPO: ${{ inputs.repo }}
TAG: ${{ inputs.tag }}
BUILD_TYPE: ${{ inputs.build-type }}
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
"$(pwd)/kata-static.tar.zst" \
"${REGISTRY}/${REPO}" \
"${TAG}"
"${TAG}" \
"${BUILD_TYPE}"

View File

@@ -34,7 +34,7 @@ jobs:
permissions:
contents: read
packages: write
runs-on: ubuntu-22.04-arm
runs-on: ubuntu-24.04-arm
steps:
- name: Login to Kata Containers ghcr.io
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0

View File

@@ -31,7 +31,7 @@ jobs:
permissions:
contents: read
packages: write
runs-on: ppc64le-small
runs-on: ubuntu-24.04-ppc64le
steps:
- name: Login to Kata Containers ghcr.io
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0

View File

@@ -35,7 +35,7 @@ jobs:
permissions:
contents: read
packages: write
runs-on: s390x
runs-on: ubuntu-24.04-s390x
steps:
- name: Login to Kata Containers ghcr.io
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0

View File

@@ -181,6 +181,23 @@ jobs:
GH_TOKEN: ${{ github.token }}
ARCHITECTURE: ppc64le
- name: Set KATA_TOOLS_STATIC_TARBALL env var
run: |
tarball=$(pwd)/kata-tools-static.tar.zst
echo "KATA_TOOLS_STATIC_TARBALL=${tarball}" >> "$GITHUB_ENV"
- name: Download amd64 tools artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-tools-static-tarball-amd64
- name: Upload amd64 static tarball tools to GitHub
run: |
./tools/packaging/release/release.sh upload-kata-tools-static-tarball
env:
GH_TOKEN: ${{ github.token }}
ARCHITECTURE: amd64
upload-versions-yaml:
name: upload-versions-yaml
needs: release

View File

@@ -1,164 +0,0 @@
name: CI | Run containerd multi-snapshotter stability test
on:
schedule:
- cron: "0 */1 * * *" #run every hour
permissions: {}
# This job relies on k8s pre-installed using kubeadm
jobs:
run-containerd-multi-snapshotter-stability-tests:
name: run-containerd-multi-snapshotter-stability-tests
strategy:
fail-fast: false
matrix:
containerd:
- v1.7
- v2.0
- v2.1
- v2.2
env:
# I don't want those to be inside double quotes, so I'm deliberately ignoring the double quotes here.
IMAGES_LIST: quay.io/mongodb/mongodb-community-server@sha256:8b73733842da21b6bbb6df4d7b2449229bb3135d2ec8c6880314d88205772a11 ghcr.io/edgelesssys/redis@sha256:ecb0a964c259a166a1eb62f0eb19621d42bd1cce0bc9bb0c71c828911d4ba93d
runs-on: containerd-${{ matrix.containerd }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Rotate the journal
run: sudo journalctl --rotate --vacuum-time 1s
- name: Pull the kata-deploy image to be used
run: sudo ctr -n k8s.io image pull quay.io/kata-containers/kata-deploy-ci:kata-containers-latest
- name: Deploy Kata Containers
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata
env:
KATA_HYPERVISOR: qemu-coco-dev
KUBERNETES: vanilla
SNAPSHOTTER: nydus
USE_EXPERIMENTAL_SETUP_SNAPSHOTTER: true
# This is needed as we may hit the createContainerTimeout
- name: Adjust Kata Containers' create_container_timeout
run: |
sudo sed -i -e 's/^\(create_container_timeout\).*=.*$/\1 = 600/g' /opt/kata/share/defaults/kata-containers/configuration-qemu-coco-dev.toml
grep "create_container_timeout.*=" /opt/kata/share/defaults/kata-containers/configuration-qemu-coco-dev.toml
# This is needed in order to have enough tmpfs space inside the guest to pull the image
- name: Adjust Kata Containers' default_memory
run: |
sudo sed -i -e 's/^\(default_memory\).*=.*$/\1 = 4096/g' /opt/kata/share/defaults/kata-containers/configuration-qemu-coco-dev.toml
grep "default_memory.*=" /opt/kata/share/defaults/kata-containers/configuration-qemu-coco-dev.toml
- name: Run a few containers using overlayfs
run: |
# I don't want those to be inside double quotes, so I'm deliberately ignoring the double quotes here
# shellcheck disable=SC2086
for img in ${IMAGES_LIST}; do
echo "overlayfs | Using on image: ${img}"
pod="$(echo ${img} | tr ':.@/' '-' | awk '{print substr($0,1,56)}')"
kubectl run "${pod}" \
-it --rm \
--restart=Never \
--image="${img}" \
--image-pull-policy=Always \
--pod-running-timeout=10m \
-- uname -r
done
- name: Run a the same few containers using a different snapshotter
run: |
# I don't want those to be inside double quotes, so I'm deliberately ignoring the double quotes here
# shellcheck disable=SC2086
for img in ${IMAGES_LIST}; do
echo "nydus | Using on image: ${img}"
pod="kata-$(echo ${img} | tr ':.@/' '-' | awk '{print substr($0,1,56)}')"
kubectl run "${pod}" \
-it --rm \
--restart=Never \
--image="${img}" \
--image-pull-policy=Always \
--pod-running-timeout=10m \
--overrides='{
"spec": {
"runtimeClassName": "kata-qemu-coco-dev"
}
}' \
-- uname -r
done
- name: Uninstall Kata Containers
run: bash tests/integration/kubernetes/gha-run.sh cleanup
env:
KATA_HYPERVISOR: qemu-coco-dev
KUBERNETES: vanilla
SNAPSHOTTER: nydus
USE_EXPERIMENTAL_SETUP_SNAPSHOTTER: true
- name: Run a few containers using overlayfs
run: |
# I don't want those to be inside double quotes, so I'm deliberately ignoring the double quotes here
# shellcheck disable=SC2086
for img in ${IMAGES_LIST}; do
echo "overlayfs | Using on image: ${img}"
pod="$(echo ${img} | tr ':.@/' '-' | awk '{print substr($0,1,56)}')"
kubectl run "${pod}" \
-it --rm \
--restart=Never \
--image=${img} \
--image-pull-policy=Always \
--pod-running-timeout=10m \
-- uname -r
done
- name: Deploy Kata Containers
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata
env:
KATA_HYPERVISOR: qemu-coco-dev
KUBERNETES: vanilla
SNAPSHOTTER: nydus
USE_EXPERIMENTAL_SETUP_SNAPSHOTTER: true
# This is needed as we may hit the createContainerTimeout
- name: Adjust Kata Containers' create_container_timeout
run: |
sudo sed -i -e 's/^\(create_container_timeout\).*=.*$/\1 = 600/g' /opt/kata/share/defaults/kata-containers/configuration-qemu-coco-dev.toml
grep "create_container_timeout.*=" /opt/kata/share/defaults/kata-containers/configuration-qemu-coco-dev.toml
# This is needed in order to have enough tmpfs space inside the guest to pull the image
- name: Adjust Kata Containers' default_memory
run: |
sudo sed -i -e 's/^\(default_memory\).*=.*$/\1 = 4096/g' /opt/kata/share/defaults/kata-containers/configuration-qemu-coco-dev.toml
grep "default_memory.*=" /opt/kata/share/defaults/kata-containers/configuration-qemu-coco-dev.toml
- name: Run the same few containers using a different snapshotter
run: |
# I don't want those to be inside double quotes, so I'm deliberately ignoring the double quotes here
# shellcheck disable=SC2086
for img in ${IMAGES_LIST}; do
echo "nydus | Using on image: ${img}"
pod="kata-$(echo ${img} | tr ':.@/' '-' | awk '{print substr($0,1,56)}')"
kubectl run "${pod}" \
-it --rm \
--restart=Never \
--image="${img}" \
--image-pull-policy=Always \
--pod-running-timeout=10m \
--overrides='{
"spec": {
"runtimeClassName": "kata-qemu-coco-dev"
}
}' \
-- uname -r
done
- name: Uninstall Kata Containers
run: bash tests/integration/kubernetes/gha-run.sh cleanup || true
if: always()
env:
KATA_HYPERVISOR: qemu-coco-dev
KUBERNETES: vanilla
SNAPSHOTTER: nydus
USE_EXPERIMENTAL_SETUP_SNAPSHOTTER: true

View File

@@ -49,7 +49,6 @@ jobs:
- dragonball
- qemu
- qemu-runtime-rs
- stratovirt
- cloud-hypervisor
instance-type:
- small
@@ -79,7 +78,6 @@ jobs:
KATA_HOST_OS: ${{ matrix.host_os }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
KUBERNETES: "vanilla"
USING_NFD: "false"
K8S_TEST_HOST_TYPE: ${{ matrix.instance-type }}
GENPOLICY_PULL_METHOD: ${{ matrix.genpolicy-pull-method }}
steps:
@@ -95,14 +93,14 @@ jobs:
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-kata-tarball
- name: get-kata-tools-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-artifacts
name: kata-tools-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-tools-artifacts
- name: Install kata
run: bash tests/integration/kubernetes/gha-run.sh install-kata-tools kata-artifacts
- name: Install kata-tools
run: bash tests/integration/kubernetes/gha-run.sh install-kata-tools kata-tools-artifacts
- name: Download Azure CLI
uses: azure/setup-kubectl@776406bce94f63e41d621b960d78ee25c8b76ede # v4.0.1
@@ -137,13 +135,17 @@ jobs:
run: bash tests/integration/kubernetes/gha-run.sh get-cluster-credentials
- name: Deploy Kata
timeout-minutes: 10
timeout-minutes: 20
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata-aks
- name: Run tests
timeout-minutes: 60
run: bash tests/integration/kubernetes/gha-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests
- name: Refresh OIDC token in case access token expired
if: always()
uses: azure/login@a457da9ea143d694b1b9c7c869ebb04ebe844ef5 # v2.3.0

View File

@@ -1,130 +0,0 @@
name: CI | Run kubernetes tests on amd64
on:
workflow_call:
inputs:
registry:
required: true
type: string
repo:
required: true
type: string
tag:
required: true
type: string
pr-number:
required: true
type: string
commit-hash:
required: false
type: string
target-branch:
required: false
type: string
default: ""
permissions: {}
jobs:
run-k8s-tests-amd64:
name: run-k8s-tests-amd64
strategy:
fail-fast: false
matrix:
vmm:
- qemu
container_runtime:
- containerd
snapshotter:
- devmapper
k8s:
- k3s
include:
- vmm: qemu
container_runtime: crio
snapshotter: ""
k8s: k0s
runs-on: ubuntu-22.04
env:
DOCKER_REGISTRY: ${{ inputs.registry }}
DOCKER_REPO: ${{ inputs.repo }}
DOCKER_TAG: ${{ inputs.tag }}
GH_PR_NUMBER: ${{ inputs.pr-number }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
KUBERNETES: ${{ matrix.k8s }}
KUBERNETES_EXTRA_PARAMS: ${{ matrix.container_runtime != 'crio' && '' || '--cri-socket remote:unix:///var/run/crio/crio.sock --kubelet-extra-args --cgroup-driver="systemd"' }}
SNAPSHOTTER: ${{ matrix.snapshotter }}
USING_NFD: "false"
K8S_TEST_HOST_TYPE: all
CONTAINER_RUNTIME: ${{ matrix.container_runtime }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Remove unnecessary directories to free up space
run: |
sudo rm -rf /usr/local/.ghcup
sudo rm -rf /opt/hostedtoolcache/CodeQL
sudo rm -rf /usr/local/lib/android
sudo rm -rf /usr/share/dotnet
sudo rm -rf /opt/ghc
sudo rm -rf /usr/local/share/boost
sudo rm -rf "$AGENT_TOOLSDIRECTORY"
sudo rm -rf /usr/lib/jvm
sudo rm -rf /usr/share/swift
sudo rm -rf /usr/local/share/powershell
sudo rm -rf /usr/local/julia*
sudo rm -rf /opt/az
sudo rm -rf /usr/local/share/chromium
sudo rm -rf /opt/microsoft
sudo rm -rf /opt/google
sudo rm -rf /usr/lib/firefox
- name: Configure CRI-O
if: matrix.container_runtime == 'crio'
run: bash tests/integration/kubernetes/gha-run.sh setup-crio
- name: Deploy ${{ matrix.k8s }}
run: bash tests/integration/kubernetes/gha-run.sh deploy-k8s
env:
CONTAINER_RUNTIME: ${{ matrix.container_runtime }}
- name: Configure the ${{ matrix.snapshotter }} snapshotter
if: matrix.snapshotter != ''
run: bash tests/integration/kubernetes/gha-run.sh configure-snapshotter
- name: Deploy Kata
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata
- name: Install `bats`
run: bash tests/integration/kubernetes/gha-run.sh install-bats
- name: Run tests
timeout-minutes: 30
run: bash tests/integration/kubernetes/gha-run.sh run-tests
- name: Collect artifacts ${{ matrix.vmm }}
if: always()
run: bash tests/integration/kubernetes/gha-run.sh collect-artifacts
continue-on-error: true
- name: Archive artifacts ${{ matrix.vmm }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: k8s-tests-${{ matrix.vmm }}-${{ matrix.snapshotter }}-${{ matrix.k8s }}-${{ inputs.tag }}
path: /tmp/artifacts
retention-days: 1
- name: Delete kata-deploy
if: always()
timeout-minutes: 5
run: bash tests/integration/kubernetes/gha-run.sh cleanup

View File

@@ -42,7 +42,6 @@ jobs:
GH_PR_NUMBER: ${{ inputs.pr-number }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
KUBERNETES: ${{ matrix.k8s }}
USING_NFD: "false"
K8S_TEST_HOST_TYPE: all
TARGET_ARCH: "aarch64"
steps:
@@ -59,7 +58,7 @@ jobs:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Deploy Kata
timeout-minutes: 10
timeout-minutes: 20
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata
- name: Install `bats`
@@ -69,6 +68,10 @@ jobs:
timeout-minutes: 30
run: bash tests/integration/kubernetes/gha-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests
- name: Collect artifacts ${{ matrix.vmm }}
if: always()
run: bash tests/integration/kubernetes/gha-run.sh collect-artifacts
@@ -83,5 +86,5 @@ jobs:
- name: Delete kata-deploy
if: always()
timeout-minutes: 5
timeout-minutes: 15
run: bash tests/integration/kubernetes/gha-run.sh cleanup

View File

@@ -1,7 +1,10 @@
name: CI | Run NVIDIA GPU kubernetes tests on arm64
name: CI | Run NVIDIA GPU kubernetes tests on amd64
on:
workflow_call:
inputs:
tarball-suffix:
required: true
type: string
registry:
required: true
type: string
@@ -29,24 +32,24 @@ permissions: {}
jobs:
run-nvidia-gpu-tests-on-amd64:
name: run-nvidia-gpu-tests-on-amd64
name: run-${{ matrix.environment.name }}-tests-on-amd64
strategy:
fail-fast: false
matrix:
vmm:
- qemu-nvidia-gpu
k8s:
- kubeadm
runs-on: amd64-nvidia-a100
environment: [
{ name: nvidia-gpu, vmm: qemu-nvidia-gpu, runner: amd64-nvidia-a100 },
{ name: nvidia-gpu-snp, vmm: qemu-nvidia-gpu-snp, runner: amd64-nvidia-h100-snp },
]
runs-on: ${{ matrix.environment.runner }}
env:
DOCKER_REGISTRY: ${{ inputs.registry }}
DOCKER_REPO: ${{ inputs.repo }}
DOCKER_TAG: ${{ inputs.tag }}
GH_PR_NUMBER: ${{ inputs.pr-number }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
KUBERNETES: ${{ matrix.k8s }}
USING_NFD: "false"
K8S_TEST_HOST_TYPE: all
KATA_HYPERVISOR: ${{ matrix.environment.vmm }}
KUBERNETES: kubeadm
KBS: ${{ matrix.environment.name == 'nvidia-gpu-snp' && 'true' || 'false' }}
K8S_TEST_HOST_TYPE: baremetal
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
@@ -60,31 +63,68 @@ jobs:
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Deploy Kata
- name: get-kata-tools-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-tools-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-tools-artifacts
- name: Install kata-tools
run: bash tests/integration/kubernetes/gha-run.sh install-kata-tools kata-tools-artifacts
- name: Uninstall previous `kbs-client`
if: matrix.environment.name != 'nvidia-gpu'
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh uninstall-kbs-client
- name: Deploy CoCo KBS
if: matrix.environment.name != 'nvidia-gpu'
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh deploy-coco-kbs
env:
NVIDIA_VERIFIER_MODE: remote
KBS_INGRESS: nodeport
- name: Install `kbs-client`
if: matrix.environment.name != 'nvidia-gpu'
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh install-kbs-client
- name: Deploy Kata
timeout-minutes: 20
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata
- name: Install `bats`
run: bash tests/integration/kubernetes/gha-run.sh install-bats
- name: Run tests
- name: Run tests ${{ matrix.environment.vmm }}
timeout-minutes: 30
run: bash tests/integration/kubernetes/gha-run.sh run-nv-tests
env:
NGC_API_KEY: ${{ secrets.NGC_API_KEY }}
- name: Collect artifacts ${{ matrix.vmm }}
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests
- name: Collect artifacts ${{ matrix.environment.vmm }}
if: always()
run: bash tests/integration/kubernetes/gha-run.sh collect-artifacts
continue-on-error: true
- name: Archive artifacts ${{ matrix.vmm }}
- name: Archive artifacts ${{ matrix.environment.vmm }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: k8s-tests-${{ matrix.vmm }}-${{ matrix.k8s }}-${{ inputs.tag }}
name: k8s-tests-${{ matrix.environment.vmm }}-kubeadm-${{ inputs.tag }}
path: /tmp/artifacts
retention-days: 1
- name: Delete kata-deploy
if: always()
timeout-minutes: 5
timeout-minutes: 15
run: bash tests/integration/kubernetes/gha-run.sh cleanup
- name: Delete CoCo KBS
if: always() && matrix.environment.name != 'nvidia-gpu'
run: |
bash tests/integration/kubernetes/gha-run.sh delete-coco-kbs

View File

@@ -43,7 +43,6 @@ jobs:
GOPATH: ${{ github.workspace }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
KUBERNETES: ${{ matrix.k8s }}
USING_NFD: "false"
TARGET_ARCH: "ppc64le"
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
@@ -70,9 +69,13 @@ jobs:
run: bash "${HOME}/scripts/k8s_cluster_check.sh"
- name: Deploy Kata
timeout-minutes: 10
timeout-minutes: 20
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata-kubeadm
- name: Run tests
timeout-minutes: 30
run: bash tests/integration/kubernetes/gha-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests

View File

@@ -46,11 +46,9 @@ jobs:
include:
- snapshotter: devmapper
pull-type: default
using-nfd: true
deploy-cmd: configure-snapshotter
- snapshotter: nydus
pull-type: guest-pull
using-nfd: false
deploy-cmd: deploy-snapshotter
exclude:
- snapshotter: overlayfs
@@ -76,7 +74,6 @@ jobs:
KUBERNETES: ${{ matrix.k8s }}
PULL_TYPE: ${{ matrix.pull-type }}
SNAPSHOTTER: ${{ matrix.snapshotter }}
USING_NFD: ${{ matrix.using-nfd }}
TARGET_ARCH: "s390x"
AUTHENTICATED_IMAGE_USER: ${{ vars.AUTHENTICATED_IMAGE_USER }}
AUTHENTICATED_IMAGE_PASSWORD: ${{ secrets.AUTHENTICATED_IMAGE_PASSWORD }}
@@ -112,7 +109,7 @@ jobs:
if: ${{ matrix.snapshotter != 'overlayfs' }}
- name: Deploy Kata
timeout-minutes: 10
timeout-minutes: 20
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata-zvsi
- name: Uninstall previous `kbs-client`
@@ -134,6 +131,10 @@ jobs:
timeout-minutes: 60
run: bash tests/integration/kubernetes/gha-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests
- name: Delete kata-deploy
if: always()
run: bash tests/integration/kubernetes/gha-run.sh cleanup-zvsi

View File

@@ -46,6 +46,7 @@ jobs:
matrix:
vmm:
- qemu-coco-dev
- qemu-coco-dev-runtime-rs
snapshotter:
- nydus
pull-type:
@@ -70,7 +71,6 @@ jobs:
AUTHENTICATED_IMAGE_USER: ${{ vars.AUTHENTICATED_IMAGE_USER }}
AUTHENTICATED_IMAGE_PASSWORD: ${{ secrets.AUTHENTICATED_IMAGE_PASSWORD }}
SNAPSHOTTER: ${{ matrix.snapshotter }}
USING_NFD: "false"
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
@@ -84,14 +84,14 @@ jobs:
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-kata-tarball
- name: get-kata-tools-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-artifacts
name: kata-tools-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-tools-artifacts
- name: Install kata
run: bash tests/integration/kubernetes/gha-run.sh install-kata-tools kata-artifacts
- name: Install kata-tools
run: bash tests/integration/kubernetes/gha-run.sh install-kata-tools kata-tools-artifacts
- name: Log into the Azure account
uses: azure/login@a457da9ea143d694b1b9c7c869ebb04ebe844ef5 # v2.3.0
@@ -140,6 +140,10 @@ jobs:
timeout-minutes: 300
run: bash tests/stability/gha-stability-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests
- name: Refresh OIDC token in case access token expired
if: always()
uses: azure/login@a457da9ea143d694b1b9c7c869ebb04ebe844ef5 # v2.3.0

View File

@@ -57,7 +57,6 @@ jobs:
GH_PR_NUMBER: ${{ inputs.pr-number }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
KUBERNETES: "vanilla"
USING_NFD: "false"
KBS: "true"
K8S_TEST_HOST_TYPE: "baremetal"
KBS_INGRESS: "nodeport"
@@ -80,8 +79,17 @@ jobs:
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-kata-tools-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-tools-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-tools-artifacts
- name: Install kata-tools
run: bash tests/integration/kubernetes/gha-run.sh install-kata-tools kata-tools-artifacts
- name: Deploy Kata
timeout-minutes: 10
timeout-minutes: 20
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata
- name: Uninstall previous `kbs-client`
@@ -132,12 +140,14 @@ jobs:
matrix:
vmm:
- qemu-coco-dev
- qemu-coco-dev-runtime-rs
snapshotter:
- nydus
pull-type:
- guest-pull
include:
- pull-type: experimental-force-guest-pull
vmm: qemu-coco-dev
snapshotter: ""
runs-on: ubuntu-22.04
permissions:
@@ -158,12 +168,12 @@ jobs:
AUTHENTICATED_IMAGE_USER: ${{ vars.AUTHENTICATED_IMAGE_USER }}
AUTHENTICATED_IMAGE_PASSWORD: ${{ secrets.AUTHENTICATED_IMAGE_PASSWORD }}
SNAPSHOTTER: ${{ matrix.snapshotter }}
EXPERIMENTAL_FORCE_GUEST_PULL: ${{ matrix.pull-type == 'experimental-force-guest-pull' && matrix.vmm || '' }}
# Caution: the current ingress controller used to expose the KBS service
# requires many vCPUs, leaving only a few for the tests. Depending on the
# host type chosen, this may result in the creation of a cluster with
# insufficient resources.
K8S_TEST_HOST_TYPE: "all"
USING_NFD: "false"
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
@@ -177,14 +187,14 @@ jobs:
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-kata-tarball
- name: get-kata-tools-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-artifacts
name: kata-tools-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-tools-artifacts
- name: Install kata
run: bash tests/integration/kubernetes/gha-run.sh install-kata-tools kata-artifacts
- name: Install kata-tools
run: bash tests/integration/kubernetes/gha-run.sh install-kata-tools kata-tools-artifacts
- name: Log into the Azure account
uses: azure/login@a457da9ea143d694b1b9c7c869ebb04ebe844ef5 # v2.3.0
@@ -214,10 +224,9 @@ jobs:
run: bash tests/integration/kubernetes/gha-run.sh get-cluster-credentials
- name: Deploy Kata
timeout-minutes: 10
timeout-minutes: 20
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata-aks
env:
EXPERIMENTAL_FORCE_GUEST_PULL: ${{ env.PULL_TYPE == 'experimental-force-guest-pull' && env.KATA_HYPERVISOR || '' }}
USE_EXPERIMENTAL_SETUP_SNAPSHOTTER: ${{ env.SNAPSHOTTER == 'nydus' }}
AUTO_GENERATE_POLICY: ${{ env.PULL_TYPE == 'experimental-force-guest-pull' && 'no' || 'yes' }}
@@ -251,6 +260,7 @@ jobs:
- name: Delete AKS cluster
if: always()
timeout-minutes: 15
run: bash tests/integration/kubernetes/gha-run.sh delete-cluster
# Generate jobs for testing CoCo on non-TEE environments with erofs-snapshotter
@@ -284,7 +294,6 @@ jobs:
SNAPSHOTTER: ${{ matrix.snapshotter }}
USE_EXPERIMENTAL_SETUP_SNAPSHOTTER: "true"
K8S_TEST_HOST_TYPE: "all"
USING_NFD: "false"
# We are skipping the auto generated policy tests for now,
# but those should be enabled as soon as we work on that.
AUTO_GENERATE_POLICY: "no"
@@ -301,6 +310,15 @@ jobs:
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-kata-tools-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-tools-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-tools-artifacts
- name: Install kata-tools
run: bash tests/integration/kubernetes/gha-run.sh install-kata-tools kata-tools-artifacts
- name: Remove unnecessary directories to free up space
run: |
sudo rm -rf /usr/local/.ghcup
@@ -309,7 +327,6 @@ jobs:
sudo rm -rf /usr/share/dotnet
sudo rm -rf /opt/ghc
sudo rm -rf /usr/local/share/boost
sudo rm -rf "$AGENT_TOOLSDIRECTORY"
sudo rm -rf /usr/lib/jvm
sudo rm -rf /usr/share/swift
sudo rm -rf /usr/local/share/powershell
@@ -330,7 +347,7 @@ jobs:
run: bash tests/integration/kubernetes/gha-run.sh install-bats
- name: Deploy Kata
timeout-minutes: 10
timeout-minutes: 20
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata
- name: Deploy CSI driver

View File

@@ -59,7 +59,6 @@ jobs:
KATA_HOST_OS: ${{ matrix.host_os }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
KUBERNETES: "vanilla"
USING_NFD: "false"
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
@@ -103,6 +102,10 @@ jobs:
- name: Run tests
run: bash tests/functional/kata-deploy/gha-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests
- name: Refresh OIDC token in case access token expired
if: always()
uses: azure/login@a457da9ea143d694b1b9c7c869ebb04ebe844ef5 # v2.3.0

View File

@@ -45,7 +45,6 @@ jobs:
GH_PR_NUMBER: ${{ inputs.pr-number }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
KUBERNETES: ${{ matrix.k8s }}
USING_NFD: "false"
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
@@ -67,7 +66,6 @@ jobs:
sudo rm -rf /usr/share/dotnet
sudo rm -rf /opt/ghc
sudo rm -rf /usr/local/share/boost
sudo rm -rf "$AGENT_TOOLSDIRECTORY"
sudo rm -rf /usr/lib/jvm
sudo rm -rf /usr/share/swift
sudo rm -rf /usr/local/share/powershell
@@ -86,3 +84,7 @@ jobs:
- name: Run tests
run: bash tests/functional/kata-deploy/gha-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests

View File

@@ -44,7 +44,6 @@ jobs:
DOCKER_TAG: ${{ inputs.tag }}
GH_PR_NUMBER: ${{ inputs.pr-number }}
K8S_TEST_HOST_TYPE: "baremetal"
USING_NFD: "false"
KUBERNETES: kubeadm
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2

View File

@@ -28,21 +28,9 @@ jobs:
fail-fast: false
matrix:
instance:
- "ubuntu-22.04-arm"
- "s390x"
- "ubuntu-24.04-arm"
- "ubuntu-24.04-s390x"
- "ubuntu-24.04-ppc64le"
uses: ./.github/workflows/build-checks.yaml
with:
instance: ${{ matrix.instance }}
build-checks-preview:
needs: skipper
if: ${{ needs.skipper.outputs.skip_static != 'yes' }}
strategy:
fail-fast: false
matrix:
instance:
- "riscv-builder"
uses: ./.github/workflows/build-checks-preview-riscv64.yaml
with:
instance: ${{ matrix.instance }}

2
.gitignore vendored
View File

@@ -18,3 +18,5 @@ src/tools/log-parser/kata-log-parser
tools/packaging/static-build/agent/install_libseccomp.sh
.envrc
.direnv
**/.DS_Store
site/

File diff suppressed because it is too large

140
Cargo.toml Normal file
View File

@@ -0,0 +1,140 @@
[workspace.package]
authors = ["The Kata Containers community <kata-dev@lists.katacontainers.io>"]
edition = "2018"
license = "Apache-2.0"
rust-version = "1.88"
[workspace]
members = [
# Dragonball
"src/dragonball",
"src/dragonball/dbs_acpi",
"src/dragonball/dbs_address_space",
"src/dragonball/dbs_allocator",
"src/dragonball/dbs_arch",
"src/dragonball/dbs_boot",
"src/dragonball/dbs_device",
"src/dragonball/dbs_interrupt",
"src/dragonball/dbs_legacy_devices",
"src/dragonball/dbs_pci",
"src/dragonball/dbs_tdx",
"src/dragonball/dbs_upcall",
"src/dragonball/dbs_utils",
"src/dragonball/dbs_virtio_devices",
# runtime-rs
"src/runtime-rs",
"src/runtime-rs/crates/agent",
"src/runtime-rs/crates/hypervisor",
"src/runtime-rs/crates/persist",
"src/runtime-rs/crates/resource",
"src/runtime-rs/crates/runtimes",
"src/runtime-rs/crates/service",
"src/runtime-rs/crates/shim",
"src/runtime-rs/crates/shim-ctl",
"src/runtime-rs/tests/utils",
]
resolver = "2"
# TODO: Add all excluded crates to root workspace
exclude = [
"src/agent",
"src/tools",
"src/libs",
# kata-deploy binary is standalone and has its own Cargo.toml for now
"tools/packaging/kata-deploy/binary",
# We are cloning and building rust packages under
# "tools/packaging/kata-deploy/local-build/build" folder, which may mislead
# those packages to think they are part of the kata root workspace
"tools/packaging/kata-deploy/local-build/build",
]
[workspace.dependencies]
# Rust-VMM crates
event-manager = "0.2.1"
kvm-bindings = "0.6.0"
kvm-ioctls = "=0.12.1"
linux-loader = "0.8.0"
seccompiler = "0.5.0"
vfio-bindings = "0.3.0"
vfio-ioctls = "0.1.0"
virtio-bindings = "0.1.0"
virtio-queue = "0.7.0"
vm-fdt = "0.2.0"
vm-memory = "0.10.0"
vm-superio = "0.5.0"
vmm-sys-util = "0.11.0"
# Local dependencies from Dragonball Sandbox crates
dragonball = { path = "src/dragonball" }
dbs-acpi = { path = "src/dragonball/dbs_acpi" }
dbs-address-space = { path = "src/dragonball/dbs_address_space" }
dbs-allocator = { path = "src/dragonball/dbs_allocator" }
dbs-arch = { path = "src/dragonball/dbs_arch" }
dbs-boot = { path = "src/dragonball/dbs_boot" }
dbs-device = { path = "src/dragonball/dbs_device" }
dbs-interrupt = { path = "src/dragonball/dbs_interrupt" }
dbs-legacy-devices = { path = "src/dragonball/dbs_legacy_devices" }
dbs-pci = { path = "src/dragonball/dbs_pci" }
dbs-tdx = { path = "src/dragonball/dbs_tdx" }
dbs-upcall = { path = "src/dragonball/dbs_upcall" }
dbs-utils = { path = "src/dragonball/dbs_utils" }
dbs-virtio-devices = { path = "src/dragonball/dbs_virtio_devices" }
# Local dependencies from runtime-rs
agent = { path = "src/runtime-rs/crates/agent" }
hypervisor = { path = "src/runtime-rs/crates/hypervisor" }
persist = { path = "src/runtime-rs/crates/persist" }
resource = { path = "src/runtime-rs/crates/resource" }
runtimes = { path = "src/runtime-rs/crates/runtimes" }
service = { path = "src/runtime-rs/crates/service" }
tests_utils = { path = "src/runtime-rs/tests/utils" }
ch-config = { path = "src/runtime-rs/crates/hypervisor/ch-config" }
common = { path = "src/runtime-rs/crates/runtimes/common" }
linux_container = { path = "src/runtime-rs/crates/runtimes/linux_container" }
virt_container = { path = "src/runtime-rs/crates/runtimes/virt_container" }
wasm_container = { path = "src/runtime-rs/crates/runtimes/wasm_container" }
# Local dependencies from `src/lib`
kata-sys-util = { path = "src/libs/kata-sys-util" }
kata-types = { path = "src/libs/kata-types", features = ["safe-path"] }
logging = { path = "src/libs/logging" }
protocols = { path = "src/libs/protocols", features = ["async"] }
runtime-spec = { path = "src/libs/runtime-spec" }
safe-path = { path = "src/libs/safe-path" }
shim-interface = { path = "src/libs/shim-interface" }
test-utils = { path = "src/libs/test-utils" }
# Outside dependencies
actix-rt = "2.7.0"
anyhow = "1.0"
async-trait = "0.1.48"
containerd-shim = { version = "0.10.0", features = ["async"] }
containerd-shim-protos = { version = "0.10.0", features = ["async"] }
go-flag = "0.1.0"
hyper = "0.14.20"
hyperlocal = "0.8.0"
lazy_static = "1.4"
libc = "0.2"
log = "0.4.14"
netns-rs = "0.1.0"
# Note: nix needs to stay sync'd with libs versions
nix = "0.26.4"
oci-spec = { version = "0.8.1", features = ["runtime"] }
protobuf = "3.7.2"
rand = "0.8.4"
serde = { version = "1.0.145", features = ["derive"] }
serde_json = "1.0.91"
sha2 = "0.10.9"
slog = "2.5.2"
slog-scope = "4.4.0"
strum = { version = "0.24.0", features = ["derive"] }
tempfile = "3.19.1"
thiserror = "1.0"
tokio = "1.46.1"
tracing = "0.1.41"
tracing-opentelemetry = "0.18.0"
ttrpc = "0.8.4"
url = "2.5.4"


@@ -50,10 +50,14 @@ docs-url-alive-check:
build-and-publish-kata-debug:
bash tools/packaging/kata-debug/kata-debug-build-and-upload-payload.sh ${KATA_DEBUG_REGISTRY} ${KATA_DEBUG_TAG}
docs-serve:
docker run --rm -p 8000:8000 -v ./docs:/docs:ro -v ${PWD}/zensical.toml:/zensical.toml:ro zensical/zensical serve --config-file /zensical.toml -a 0.0.0.0:8000
.PHONY: \
all \
kata-tarball \
install-tarball \
default \
static-checks \
docs-url-alive-check
docs-url-alive-check \
docs-serve


@@ -1 +1 @@
3.22.0
3.25.0


@@ -11,6 +11,10 @@ script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${script_dir}/../tests/common.bash"
# Path to the ORAS cache helper for downloading tarballs (sourced when needed)
# Use ORAS_CACHE_HELPER env var (set by build.sh in Docker) or fallback to repo path
oras_cache_helper="${ORAS_CACHE_HELPER:-${script_dir}/../tools/packaging/scripts/download-with-oras-cache.sh}"
# The following variables, if set in the environment, will change the behavior
# of the gperf and libseccomp configure scripts, which may lead this script to
# fail. So let's ensure they are unset here.
@@ -44,6 +48,9 @@ fi
gperf_tarball="gperf-${gperf_version}.tar.gz"
gperf_tarball_url="${gperf_url}/${gperf_tarball}"
# Use ORAS cache for gperf downloads (gperf upstream can be unreliable)
USE_ORAS_CACHE="${USE_ORAS_CACHE:-yes}"
# We need to build the libseccomp library from sources to create a static
# library for the musl libc.
# However, ppc64le, riscv64 and s390x have no musl targets in Rust. Hence, we do
@@ -68,7 +75,23 @@ trap finish EXIT
build_and_install_gperf() {
echo "Build and install gperf version ${gperf_version}"
mkdir -p "${gperf_install_dir}"
curl -sLO "${gperf_tarball_url}"
# Use ORAS cache if available and enabled
if [[ "${USE_ORAS_CACHE}" == "yes" ]] && [[ -f "${oras_cache_helper}" ]]; then
echo "Using ORAS cache for gperf download"
source "${oras_cache_helper}"
local cached_tarball
cached_tarball=$(download_component gperf "$(pwd)")
if [[ -f "${cached_tarball}" ]]; then
gperf_tarball="${cached_tarball}"
else
echo "ORAS cache download failed, falling back to direct download"
curl -sLO "${gperf_tarball_url}"
fi
else
curl -sLO "${gperf_tarball_url}"
fi
tar -xf "${gperf_tarball}"
pushd "gperf-${gperf_version}"
# Unset $CC for configure, we will always use native for gperf


@@ -83,3 +83,7 @@ Documents that help to understand and contribute to Kata Containers.
If you have a suggestion for how we can improve the
[website](https://katacontainers.io), please raise an issue (or a PR) on
[the repository that holds the source for the website](https://github.com/OpenStackweb/kata-netlify-refresh).
### Toolchain Guidance
* [Toolchain Guidance](./Toochain-Guidance.md)

docs/Toochain-Guidance.md

@@ -0,0 +1,39 @@
# Toolchains
As a community we want to strike a balance between having up-to-date toolchains, to receive the
latest security fixes and to be able to benefit from new features and packages, whilst not being
too bleeding edge and disrupting downstream and other consumers. As a result we have the following
guidelines (note: guidelines, not hard rules) for our Go and Rust toolchains, which we are trying out:
## Go toolchain
Go is released [every six months](https://go.dev/wiki/Go-Release-Cycle) with support for the
[last two major release versions](https://go.dev/doc/devel/release#policy). We always want to
ensure that we are on a supported version so we receive security fixes. To try and make
things easier for some of our users, we aim to be using the older of the two supported major
versions, unless there is a compelling reason to adopt the newer version.
In practice this means that we bump our major version of the go toolchain every six months to
version (1.x-1) in response to a new version (1.x) coming out, which makes our current version
(1.x-2) no longer supported. We will bump the minor version whenever required to satisfy
dependency updates, or security fixes.
Our Go toolchain version is recorded in [`versions.yaml`](../versions.yaml) under
`.languages.golang.version` and should match the version in our `go.mod` files.
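For example, a quick consistency check might look like the following sketch (it assumes the `yq` tool is installed, and uses `src/runtime/go.mod` as one example path):
```bash
# Print the recorded toolchain version and the version a go.mod declares
yq '.languages.golang.version' versions.yaml
grep -E '^go ' src/runtime/go.mod
```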
## Rust toolchain
Rust has a [six-week](https://doc.rust-lang.org/book/appendix-05-editions.html#:~:text=The%20Rust%20language%20and%20compiler,these%20tiny%20changes%20add%20up.)
release cycle, and only the latest stable release is supported, so to remain on a
supported release we would have to build with the latest stable and bump every six weeks.
However, feedback from our community has indicated that this is a challenge: downstream
consumers often want to get Rust from their distro or a downstream fork, and these struggle
to keep up with the six-week release schedule. As a result the community has agreed to try out a policy of
"stable-2", where we aim to build with a rust version that is two versions behind the latest stable
version.
In practice this should mean that we bump our Rust toolchain every six weeks, to version
1.x-2 when 1.x is released as stable, picking up the latest point release
of that version, if there is one.
The rust-toolchain that we are using is recorded in [`rust-toolchain.toml`](../rust-toolchain.toml).
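A quick way to confirm the pin is in effect (assuming `rustup` manages your toolchain):
```bash
# Compare the pinned channel with the toolchain rustup resolves here
grep channel rust-toolchain.toml
rustup show active-toolchain
```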


@@ -198,7 +198,7 @@ fn join_params_with_dash(str: &str, num: i32) -> Result<String> {
return Err("number must be positive");
}
let result = format!("{}-{}", str, num);
let result = format!("{str}-{num}");
Ok(result)
}
@@ -253,13 +253,13 @@ mod tests {
// Run the tests
for (i, d) in tests.iter().enumerate() {
// Create a string containing details of the test
let msg = format!("test[{}]: {:?}", i, d);
let msg = format!("test[{i}]: {d:?}");
// Call the function under test
let result = join_params_with_dash(d.str, d.num);
// Update the test details string with the results of the call
let msg = format!("{}, result: {:?}", msg, result);
let msg = format!("{msg}, result: {result:?}");
// Perform the checks
if d.result.is_ok() {
@@ -267,8 +267,8 @@ mod tests {
continue;
}
let expected_error = format!("{}", d.result.as_ref().unwrap_err());
let actual_error = format!("{}", result.unwrap_err());
let expected_error = d.result.as_ref().unwrap_err().to_string();
let actual_error = result.unwrap_err().to_string();
assert!(actual_error == expected_error, msg);
}
}

docs/assets/favicon.svg

@@ -0,0 +1,9 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 32 32">
<!-- Dark background matching the site -->
<rect width="32" height="32" rx="4" fill="#1a1a2e"/>
<!-- Kata logo scaled and centered -->
<g transform="translate(-27, -2) scale(0.75)">
<path d="M70.925 25.22L58.572 37.523 46.27 25.22l2.192-2.192 10.11 10.11 10.11-10.11zm-6.575-.2l-3.188-3.188 3.188-3.188 3.188 3.188zm-4.93-2.54l3.736 3.736-3.736 3.736zm-1.694 7.422l-8.07-8.07 8.07-8.07zm1.694-16.14l3.686 3.686-3.686 3.686zm-13.15 4.682L58.572 6.143l12.353 12.303-2.192 2.192-10.16-10.11-10.11 10.11zm26.997 0L58.572 3.752 43.878 18.446l3.387 3.387-3.387 3.387 14.694 14.694L73.266 25.22l-3.337-3.387z" fill="#f15b3e"/>
</g>
</svg>



@@ -31,6 +31,7 @@
- [Setting Sysctls with Kata](how-to-use-sysctls-with-kata.md)
- [What Is VMCache and How To Enable It](what-is-vm-cache-and-how-do-I-use-it.md)
- [What Is VM Templating and How To Enable It](what-is-vm-templating-and-how-do-I-use-it.md)
- [How to Use Template in runtime-rs](how-to-use-template-in-runtime-rs.md)
- [Privileged Kata Containers](privileged.md)
- [How to load kernel modules in Kata Containers](how-to-load-kernel-modules-with-kata.md)
- [How to use Kata Containers with `virtio-mem`](how-to-use-virtio-mem-with-kata.md)


@@ -256,7 +256,7 @@ spec:
values:
- NODE_NAME
volumes:
- name: trusted-storage
- name: trusted-image-storage
persistentVolumeClaim:
claimName: trusted-pvc
containers:


@@ -97,6 +97,8 @@ There are several kinds of Kata configurations and they are listed below.
| `io.katacontainers.config.hypervisor.use_legacy_serial` | `boolean` | uses legacy serial device for guest's console (QEMU) |
| `io.katacontainers.config.hypervisor.default_gpus` | uint32 | the minimum number of GPUs required for the VM. Only used by remote hypervisor to help with instance selection |
| `io.katacontainers.config.hypervisor.default_gpu_model` | string | the GPU model required for the VM. Only used by remote hypervisor to help with instance selection |
| `io.katacontainers.config.hypervisor.block_device_num_queues` | `usize` | The number of queues to use for block devices (runtime-rs only) |
| `io.katacontainers.config.hypervisor.block_device_queue_size` | uint32 | The size of the queue to use for block devices (runtime-rs only) |
## Container Options
| Key | Value Type | Comments |


@@ -104,12 +104,20 @@ LOW_WATER_MARK=32768
sudo dmsetup create "${POOL_NAME}" \
--table "0 ${LENGTH_IN_SECTORS} thin-pool ${META_DEV} ${DATA_DEV} ${DATA_BLOCK_SIZE} ${LOW_WATER_MARK}"
# Determine plugin name based on containerd config version
CONFIG_VERSION=$(containerd config dump | awk '/^version/ {print $3}')
if [ "$CONFIG_VERSION" -ge 2 ]; then
PLUGIN="io.containerd.snapshotter.v1.devmapper"
else
PLUGIN="devmapper"
fi
cat << EOF
#
# Add this to your config.toml configuration file and restart containerd daemon
#
[plugins]
[plugins.devmapper]
[plugins."${PLUGIN}"]
pool_name = "${POOL_NAME}"
root_path = "${DATA_DIR}"
base_image_size = "10GB"


@@ -0,0 +1,119 @@
# How to Use Template in runtime-rs
## What is VM Templating
VM templating is a Kata Containers feature that enables new VM creation using a cloning technique. When enabled, new VMs are created by cloning from a pre-created template VM, and they will share the same initramfs, kernel and agent memory in readonly mode. It is very much like a process fork done by the kernel but here we *fork* VMs.
For more details on VM templating, refer to the [What is VM templating and how do I use it](./what-is-vm-templating-and-how-do-I-use-it.md) article.
## How to Enable VM Templating
VM templating can be enabled by changing your Kata Containers config file (`/opt/kata/share/defaults/kata-containers/runtime-rs/configuration.toml`, overridden by `/etc/kata-containers/configuration.toml` if provided) such that (a consolidated sketch follows this list):
- `qemu` version `v4.1.0` or above is specified via the `path` option in the `[hypervisor.qemu]` section
- `enable_template = true`
- `template_path = "/run/vc/vm/template"` (default value, can be customized as needed)
- `initrd =` is set
- `image =` option is commented out or removed
- `shared_fs =` option is commented out or removed
- `default_memory =` should be set to more than 256MB
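A minimal sketch of such a configuration follows. It is not a complete configuration; the `[factory]`/`[hypervisor.qemu]` section names and paths follow the Go runtime's layout and are assumptions here, so verify them against your installed template:
```bash
# Sketch only: merge these settings into any existing override rather than
# duplicating sections; paths, values, and section names are assumptions
sudo mkdir -p /etc/kata-containers
sudo tee -a /etc/kata-containers/configuration.toml > /dev/null <<'EOF'
[hypervisor.qemu]
initrd = "/usr/share/kata-containers/kata-containers-initrd.img"
default_memory = 2048

[factory]
enable_template = true
template_path = "/run/vc/vm/template"
EOF
```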
Then you can create a VM template for later usage by calling:
### Initialize and create the VM template
The `factory init` command creates a VM template by launching a new VM, initializing the Kata Agent, then pausing and saving its state (memory and device snapshots) to the template directory. This saved template is used to rapidly clone new VMs using QEMU's memory sharing capabilities.
```bash
sudo kata-ctl factory init
```
### Check the status of the VM template
The `factory status` command checks whether a VM template currently exists by verifying the presence of template files (memory snapshot and device state). It will output "VM factory is on" if the template exists, or "VM factory is off" otherwise.
```bash
sudo kata-ctl factory status
```
### Destroy and clean up the VM template
The `factory destroy` command removes the VM template by removing the `tmpfs` filesystem and deleting the template directory along with all its contents.
```bash
sudo kata-ctl factory destroy
```
## How to Create a New VM from VM Template
In the Go version of Kata Containers, the VM templating mechanism is implemented using virtio-9p (9pfs). However, 9pfs is not supported in runtime-rs due to its poor performance, limited cache coherence, and security risks. Instead, runtime-rs adopts `VirtioFS` as the default mechanism to provide rootfs for containers and VMs.
Yet, when enabling the VM template mechanism, `VirtioFS` introduces conflicts in memory sharing because its DAX-based shared memory mapping overlaps with the template's page-sharing design. To resolve these conflicts and ensure strict isolation between cloned VMs, runtime-rs replaces `VirtioFS` with the snapshotter approach — specifically, the `blockfile` snapshotter.
The `blockfile` snapshotter is used in runtime-rs because it provides each VM with an independent block-based root filesystem, ensuring strong isolation and full compatibility with the VM templating mechanism.
### Configure Snapshotter
#### Check if `Blockfile` Snapshotter is Available
```bash
ctr plugins ls | grep blockfile
```
If not available, continue with the following steps:
#### Create Scratch File
```bash
sudo dd if=/dev/zero of=/opt/containerd/blockfile bs=1M count=500
sudo mkfs.ext4 /opt/containerd/blockfile
```
#### Configure containerd
Edit the containerd configuration file:
```bash
sudo vim /etc/containerd/config.toml
```
Add or modify the following configuration for the `blockfile` snapshotter:
```toml
[plugins."io.containerd.snapshotter.v1.blockfile"]
scratch_file = "/opt/containerd/blockfile"
root_path = ""
fs_type = "ext4"
mount_options = []
recreate_scratch = true
```
#### Restart containerd
After modifying the configuration, restart containerd to apply changes:
```bash
sudo systemctl restart containerd
```
### Run Container with `blockfile` Snapshotter
After the VM template is created, you can pull an image and run a container using the `blockfile` snapshotter:
```bash
ctr image pull docker.io/library/busybox:latest
ctr run --rm -t --snapshotter blockfile docker.io/library/busybox:latest template sh
```
We can verify whether a VM was launched from a template or started normally by checking the launch parameters — if the parameters contain `incoming`, it indicates that the VM was started from a template rather than created directly.
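A minimal host-side sketch of this check (assuming the QEMU hypervisor):
```bash
# A template-booted VM carries an -incoming flag in its QEMU command line
pgrep -af qemu | grep -o '\-incoming [^ ]*' || echo "no template-restored VM found"
```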
## Performance Test
The comparative experiment between **template-based VM** creation and **direct VM** creation showed that the template-based approach reduced average startup latency by ≈ **26.8%** (average launch time of **0.6s** vs. **0.82s**) and average memory usage by ≈ **20.2%** (**178.2 MiB** vs. **223.2 MiB**), demonstrating clear improvements in VM startup efficiency and resource utilization.
The test script is as follows:
```bash
# Clear the page cache, dentries, and inodes to free up memory
echo 3 | sudo tee /proc/sys/vm/drop_caches
# Display the current memory usage
free -h
# Create 100 VMs and track the time; run once with templating disabled and
# once with it enabled, using a distinct container ID per run
time for I in $(seq 100); do
echo -n " ${I}th" # Display the iteration number
ctr run -d --runtime io.containerd.kata.v2 --snapshotter blockfile docker.io/library/busybox:latest "template${I}"
done
# Display the memory usage again after running the test
free -h
```


@@ -8,50 +8,11 @@ Kata Containers requires nested virtualization or bare metal. Check
[hardware requirements](./../../README.md#hardware-requirements) to see if your system is capable of running Kata
Containers.
## Packaged installation methods
The packaged installation method uses your distribution's native package format (such as RPM or DEB).
> **Note:**
>
> We encourage you to select an installation method that provides
> automatic updates, to ensure you get the latest security updates and
> bug fixes.
| Installation method | Description | Automatic updates | Use case |
|------------------------------------------------------|----------------------------------------------------------------------------------------------|-------------------|-----------------------------------------------------------------------------------------------|
| [Using official distro packages](#official-packages) | Kata packages provided by Linux distributions' official repositories | yes | Recommended for most users. |
| [Automatic](#automatic-installation) | Run a single command to install a full system | **No!** | For those wanting the latest release quickly. |
| [Using kata-deploy Helm chart](#kata-deploy-helm-chart) | The preferred way to deploy the Kata Containers distributed binaries on a Kubernetes cluster | **No!** | The best way to try out Kata Containers on an already running Kubernetes cluster. |
### Kata Deploy Helm Chart
The Kata Deploy Helm chart is a convenient way to install all of the binaries and
The Kata Deploy Helm chart is the preferred way to install all of the binaries and
artifacts required to run Kata Containers on Kubernetes.
[Use Kata Deploy Helm Chart](/tools/packaging/kata-deploy/helm-chart/README.md) to install Kata Containers on a Kubernetes Cluster.
### Official packages
Kata packages are provided by official distribution repositories for:
| Distribution (link to installation guide) | Minimum versions |
|----------------------------------------------------------|--------------------------------------------------------------------------------|
| [CentOS](centos-installation-guide.md) | 8 |
| [Fedora](fedora-installation-guide.md) | 34 |
### Automatic Installation
[Use `kata-manager`](/utils/README.md) to automatically install a working Kata Containers system.
## Installing on a Cloud Service Platform
* [Amazon Web Services (AWS)](aws-installation-guide.md)
* [Google Compute Engine (GCE)](gce-installation-guide.md)
* [Microsoft Azure](azure-installation-guide.md)
* [Minikube](minikube-installation-guide.md)
* [VEXXHOST OpenStack Cloud](vexxhost-installation-guide.md)
## Further information
* [upgrading document](../Upgrading.md)


@@ -1,135 +0,0 @@
# Install Kata Containers on Amazon Web Services
Kata Containers on Amazon Web Services (AWS) makes use of [i3.metal](https://aws.amazon.com/ec2/instance-types/i3/) instances. Most of the installation procedure is identical to that for Kata on your preferred distribution, except that you have to run it on bare metal instances since AWS doesn't support nested virtualization yet. This guide walks you through creating an i3.metal instance.
## Install and Configure AWS CLI
### Requirements
* Python:
* Python 2 version 2.6.5+
* Python 3 version 3.3+
### Install
Install with this command:
```bash
$ pip install awscli --upgrade --user
```
### Configure
First, verify it:
```bash
$ aws --version
```
Then configure it:
```bash
$ aws configure
```
Specify the required parameters:
```
AWS Access Key ID []: <your-key-id-from-iam>
AWS Secret Access Key []: <your-secret-access-key-from-iam>
Default region name []: <your-aws-region-for-your-i3-metal-instance>
Default output format [None]: <yaml-or-json-or-empty>
```
Alternatively, you can create the files: `~/.aws/credentials` and `~/.aws/config`:
```bash
$ cat <<EOF > ~/.aws/credentials
[default]
aws_access_key_id = <your-key-id-from-iam>
aws_secret_access_key = <your-secret-access-key-from-iam>
EOF
$ cat <<EOF > ~/.aws/config
[default]
region = <your-aws-region-for-your-i3-metal-instance>
EOF
```
For more information on how to get AWS credentials please refer to [this guide](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html). Alternatively, you can ask the administrator of your AWS account to issue one with the AWS CLI:
```sh
$ aws_username="myusername"
$ aws iam create-access-key --user-name="$aws_username"
```
More general AWS CLI guidelines can be found [here](https://docs.aws.amazon.com/cli/latest/userguide/installing.html).
## Create or Import an EC2 SSH key pair
You will need this to access your instance.
To create:
```bash
$ aws ec2 create-key-pair --key-name MyKeyPair | grep KeyMaterial | cut -d: -f2- | tr -d ' \n\"\,' > MyKeyPair.pem
$ chmod 400 MyKeyPair.pem
```
Alternatively to import using your public SSH key:
```bash
$ aws ec2 import-key-pair --key-name "MyKeyPair" --public-key-material file://MyKeyPair.pub
```
## Launch i3.metal instance
Get the latest Bionic Ubuntu AMI (Amazon Machine Image) or the latest AMI for the Linux distribution you would like to use. For example:
```bash
$ aws ec2 describe-images --owners 099720109477 --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server*" --query 'sort_by(Images, &CreationDate)[].ImageId '
```
This command will produce output similar to the following:
```
[
...
"ami-063aa838bd7631e0b",
"ami-03d5270fcb641f79b"
]
```
Launch the EC2 instance and pick up the `INSTANCEID`:
```bash
$ aws ec2 run-instances --image-id ami-03d5270fcb641f79b --count 1 --instance-type i3.metal --key-name MyKeyPair --associate-public-ip-address > /tmp/aws.json
$ export INSTANCEID=$(grep InstanceId /tmp/aws.json | cut -d: -f2- | tr -d ' \n\"\,')
```
Wait for the instance to come up; the output of the following command should be `running`:
```bash
$ aws ec2 describe-instances --instance-id=${INSTANCEID} | grep running | cut -d: -f2- | tr -d ' \"\,'
```
Get the public IP address for the instances:
```bash
$ export IP=$(aws ec2 describe-instances --instance-id=${INSTANCEID} | grep PublicIpAddress | cut -d: -f2- | tr -d ' \n\"\,')
```
Refer to [this guide](https://docs.aws.amazon.com/cli/latest/userguide/cli-ec2-launch.html) for more details on how to launch instances with the AWS CLI.
SSH into the machine
```bash
$ ssh -i MyKeyPair.pem ubuntu@${IP}
```
Go onto the next step.
## Install Kata
The process for installing Kata itself on bare metal is identical to that of a virtualization-enabled VM.
For detailed information to install Kata on your distribution of choice, see the [Kata Containers installation user guides](../install/README.md).


@@ -1,18 +0,0 @@
# Install Kata Containers on Microsoft Azure
Kata Containers on Azure uses nested virtualization to provide an identical installation
experience to Kata on your preferred Linux distribution.
This guide assumes you have an Azure account set up and tools to remotely login to your virtual
machine (SSH). Instructions will use the Azure Portal to avoid
local dependencies and setup.
## Create a new virtual machine with nesting support
Create a new virtual machine with:
* Nesting support (v3 series)
* your distro of choice
## Set up with distribution specific quick start
Follow distribution specific [install guides](../install/README.md#packaged-installation-methods).


@@ -1,21 +0,0 @@
# Install Kata Containers on CentOS
1. Install the Kata Containers components with the following commands:
```bash
$ sudo -E dnf install -y centos-release-advanced-virtualization
$ sudo -E dnf module disable -y virt:rhel
$ source /etc/os-release
$ cat <<EOF | sudo -E tee /etc/yum.repos.d/kata-containers.repo
[kata-containers]
name=Kata Containers
baseurl=http://mirror.centos.org/\$contentdir/\$releasever/virt/\$basearch/kata-containers
enabled=1
gpgcheck=1
skip_if_unavailable=1
EOF
$ sudo -E dnf install -y kata-containers
```
2. Decide which container manager to use and select the corresponding link that follows:
- [Kubernetes](../Developer-Guide.md#run-kata-containers-with-kubernetes)


@@ -1,10 +0,0 @@
# Install Kata Containers on Fedora
1. Install the Kata Containers components with the following commands:
```bash
$ sudo -E dnf -y install kata-containers
```
2. Decide which container manager to use and select the corresponding link that follows:
- [Kubernetes](../Developer-Guide.md#run-kata-containers-with-kubernetes)


@@ -1,127 +0,0 @@
# Install Kata Containers on Google Compute Engine
Kata Containers on Google Compute Engine (GCE) makes use of [nested virtualization](https://cloud.google.com/compute/docs/instances/enable-nested-virtualization-vm-instances). Most of the installation procedure is identical to that for Kata on your preferred distribution, but enabling nested virtualization currently requires extra steps on GCE. This guide walks you through creating an image and instance with nested virtualization enabled. Note that `kata-runtime check` checks for nested virtualization, but does not fail if support is not found.
As a prerequisite this guide assumes an installed and configured instance of the [Google Cloud SDK](https://cloud.google.com/sdk/downloads). For a zero-configuration option, all of the commands below have been tested under [Google Cloud Shell](https://cloud.google.com/shell/) (as of Jun 2018). Verify your `gcloud` installation and configuration:
```bash
$ gcloud info || { echo "ERROR: no Google Cloud SDK"; exit 1; }
```
## Create an Image with Nested Virtualization Enabled
VM images on GCE are grouped into families under projects. Officially supported images are automatically discoverable with `gcloud compute images list`. That command produces a list similar to the following (likely with different image names):
```bash
$ gcloud compute images list
NAME PROJECT FAMILY DEPRECATED STATUS
centos-7-v20180523 centos-cloud centos-7 READY
coreos-stable-1745-5-0-v20180531 coreos-cloud coreos-stable READY
cos-beta-67-10575-45-0 cos-cloud cos-beta READY
cos-stable-66-10452-89-0 cos-cloud cos-stable READY
debian-9-stretch-v20180510 debian-cloud debian-9 READY
rhel-7-v20180522 rhel-cloud rhel-7 READY
sles-11-sp4-v20180523 suse-cloud sles-11 READY
ubuntu-1604-xenial-v20180522 ubuntu-os-cloud ubuntu-1604-lts READY
ubuntu-1804-bionic-v20180522 ubuntu-os-cloud ubuntu-1804-lts READY
```
Each distribution has its own project, and each project can host images for multiple versions of the distribution, typically grouped into families. We recommend you select images by project and family, rather than by name. This ensures any scripts or other automation always works with a non-deprecated image, including security updates, updates to GCE-specific scripts, etc.
### Create the Image
The following example (substitute your preferred distribution project and image family) produces an image with nested virtualization enabled in your currently active GCE project:
```bash
$ SOURCE_IMAGE_PROJECT=ubuntu-os-cloud
$ SOURCE_IMAGE_FAMILY=ubuntu-1804-lts
$ IMAGE_NAME=${SOURCE_IMAGE_FAMILY}-nested
$ gcloud compute images create \
--source-image-project $SOURCE_IMAGE_PROJECT \
--source-image-family $SOURCE_IMAGE_FAMILY \
--licenses=https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx \
$IMAGE_NAME
```
If successful, `gcloud` reports that the image was created. Verify that the image has the nested virtualization license with `gcloud compute images describe $IMAGE_NAME`. This produces output like the following (some fields have been removed for clarity and to redact personal info):
```yaml
diskSizeGb: '10'
kind: compute#image
licenseCodes:
- '1002001'
- '5926592092274602096'
licenses:
- https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx
- https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/licenses/ubuntu-1804-lts
name: ubuntu-1804-lts-nested
sourceImage: https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20180522
sourceImageId: '3280575157699667619'
sourceType: RAW
status: READY
```
The primary criterion of interest here is the presence of the `enable-vmx` license. Without that license Kata does not work. The presence of that license instructs the Google Compute Engine hypervisor to enable Intel's VT-x instructions in virtual machines created from the image. Note that nested virtualization is only available in VMs running on Intel Haswell or later CPU micro-architectures.
### Verify VMX is Available
Assuming you created a nested-enabled image using the previous instructions, verify that VMs created from this image are VMX-enabled with the following:
1. Create a VM from the image created previously:
```bash
$ gcloud compute instances create \
--image $IMAGE_NAME \
--machine-type n1-standard-2 \
--min-cpu-platform "Intel Broadwell" \
kata-testing
```
> **NOTE**: In most zones the `--min-cpu-platform` argument can be omitted. It is only necessary in GCE Zones that include hosts based on Intel's Ivybridge platform.
2. Verify that the VMX CPUID flag is set:
```bash
$ gcloud compute ssh kata-testing
# While ssh'd into the VM:
$ [ -z "$(lscpu|grep GenuineIntel)" ] && { echo "ERROR: Need an Intel CPU"; exit 1; }
$ [ -z "$(lscpu|grep vmx)" ] && { echo "ERROR: vmx unavailable"; exit 1; }
```
If this fails, ensure you created your instance from the correct image and that the previously listed `enable-vmx` license is included.
## Install Kata
The process for installing Kata itself on a virtualization-enabled VM is identical to that for bare metal.
For detailed information to install Kata on your distribution of choice, see the [Kata Containers installation user guides](../install/README.md).
## Create a Kata-enabled Image
Optionally, after installing Kata, create an image to preserve the fruits of your labor:
```bash
$ gcloud compute instances stop kata-testing
$ gcloud compute images create \
--source-disk kata-testing \
kata-base
```
The result is an image that includes any changes made to the `kata-testing` instance as well as the `enable-vmx` flag. Verify this with `gcloud compute images describe kata-base`. The result, which omits some fields for clarity, should be similar to the following:
```yaml
diskSizeGb: '10'
kind: compute#image
licenseCodes:
- '1002001'
- '5926592092274602096'
licenses:
- https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx
- https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/licenses/ubuntu-1804-lts
name: kata-base
selfLink: https://www.googleapis.com/compute/v1/projects/my-kata-project/global/images/kata-base
sourceDisk: https://www.googleapis.com/compute/v1/projects/my-kata-project/zones/us-west1-a/disks/kata-testing
sourceType: RAW
status: READY
```


@@ -1,16 +0,0 @@
# Install Kata Containers on VEXXHOST
Kata Containers on VEXXHOST uses nested virtualization to provide an identical
installation experience to Kata on your preferred Linux distribution.
This guide assumes you have an OpenStack public cloud account set up and tools
to remotely connect to your virtual machine (SSH).
## Create a new virtual machine with nesting support
All regions support nested virtualization using the V2 flavors (those prefixed
with v2). The recommended machine types for container workloads are in the `v2-highcpu` range.
## Set up with distribution specific quick start
Follow distribution specific [install guides](../install/README.md#packaged-installation-methods).


@@ -48,7 +48,7 @@ $ make test
- Run a test in the current package in verbose mode:
```bash
# Example
# Example
$ test="config::tests::test_get_log_level"
$ cargo test "$test" -vv -- --exact --nocapture
@@ -223,7 +223,7 @@ What's wrong with this function?
```rust
fn foo(config: &Config, path_prefix: String, container_id: String, pid: String) -> Result<()> {
let mut full_path = format!("{}/{}", path_prefix, container_id);
let mut full_path = format!("{path_prefix}/{container_id}");
let _ = remove_recursively(&mut full_path);


@@ -3,4 +3,4 @@
Kata Containers supports passing certain GPUs from the host into the container. Select the GPU vendor for detailed information:
- [Intel Discrete GPUs](Intel-Discrete-GPU-passthrough-and-Kata.md)/[Intel Integrated GPUs](Intel-GPU-passthrough-and-Kata.md)
- [NVIDIA](NVIDIA-GPU-passthrough-and-Kata.md)
- [NVIDIA GPUs](NVIDIA-GPU-passthrough-and-Kata.md) and [Enabling NVIDIA GPU workloads using GPU passthrough with Kata Containers](NVIDIA-GPU-passthrough-and-Kata-QEMU.md)


@@ -0,0 +1,569 @@
# Enabling NVIDIA GPU workloads using GPU passthrough with Kata Containers
This page provides:
1. A description of the components involved when running GPU workloads with
Kata Containers using the NVIDIA TEE and non-TEE GPU runtime classes.
1. An explanation of the orchestration flow on a Kubernetes node for this
scenario.
1. A deployment guide for using these runtime classes.
The goal is to educate readers familiar with Kubernetes and Kata Containers
on NVIDIA's reference implementation which is reflected in Kata CI's build
and test framework. With this, we aim to enable readers to leverage this
stack, or to use the principles behind this stack in order to run GPU
workloads on their variant of the Kata Containers stack.
We assume the reader is familiar with Kubernetes, Kata Containers, and
Confidential Containers.
> **Note:**
>
> The currently supported mode for enabling GPU workloads in the TEE scenario
> is single GPU passthrough (one GPU per pod) on AMD64 platforms (AMD SEV-SNP
> is the only supported TEE scenario so far, with support for Intel TDX on
> the way).
## Component Overview
Before providing deployment guidance, we describe the components involved in
supporting GPU workloads, proceeding top to bottom from the NVIDIA GPU
operator via the Kata runtime to the components within the NVIDIA GPU
Utility Virtual Machine (UVM) root filesystem.
### NVIDIA GPU Operator
A central component is the
[NVIDIA GPU operator](https://github.com/NVIDIA/gpu-operator) which can be
deployed onto your cluster as a helm chart. Installing the GPU operator
delivers various operands on your nodes in the form of Kubernetes DaemonSets.
These operands are vital to support the flow of orchestrating pod manifests
using NVIDIA GPU runtime classes with GPU passthrough on your nodes. Without
getting into the details, the most important operands and their
responsibilities are:
- **nvidia-vfio-manager:** Binding discovered NVIDIA GPUs to the `vfio-pci`
driver for VFIO passthrough.
- **nvidia-cc-manager:** Transitioning GPUs into confidential computing (CC)
and non-CC mode (see the
[NVIDIA/k8s-cc-manager](https://github.com/NVIDIA/k8s-cc-manager)
repository).
- **nvidia-kata-manager:** Creating host-side CDI specifications for GPU
passthrough, resulting in the file `/var/run/cdi/nvidia.yaml`, containing
`kind: nvidia.com/pgpu` (see the
[NVIDIA/k8s-kata-manager](https://github.com/NVIDIA/k8s-kata-manager)
repository).
- **nvidia-sandbox-device-plugin** (see the
[NVIDIA/sandbox-device-plugin](https://github.com/NVIDIA/sandbox-device-plugin)
repository):
- Allocating GPUs during pod deployment.
- Discovering NVIDIA GPUs, their capabilities, and advertising these to
the Kubernetes control plane (allocatable resources as type
`nvidia.com/pgpu` resources will appear for the node and GPU Device IDs
will be registered with Kubelet). These GPUs can thus be allocated as
container resources in your pod manifests. See the GPU operator deployment
instructions below for the use of the key `pgpu`, which is controlled via an
environment variable.
To summarize, the GPU operator manages the GPUs on each node, allowing for
simple orchestration of pod manifests using Kata Containers. Once the cluster
with GPU operator and Kata bits is up and running, the end user can schedule
Kata NVIDIA GPU workloads, using resource limits and the
`kata-qemu-nvidia-gpu` or `kata-qemu-nvidia-gpu-snp` runtime classes, for
example:
```yaml
apiVersion: v1
kind: Pod
...
spec:
...
runtimeClassName: kata-qemu-nvidia-gpu-snp
...
resources:
limits:
"nvidia.com/pgpu": 1
...
```
When this happens, the Kubelet calls into the sandbox device plugin to
allocate a GPU. The sandbox device plugin returns `DeviceSpec` entries to the
Kubelet for the allocated GPU. The Kubelet uses internal device IDs for
tracking of allocated GPUs and includes the device specifications in the CRI
request when scheduling the pod through containerd. Containerd processes the
device specifications and includes the device configuration in the OCI
runtime spec used to invoke the Kata runtime during the create container
request.
### Kata runtime
The Kata runtime for the NVIDIA GPU handlers is configured to cold-plug VFIO
devices (`cold_plug_vfio` is set to `root-port` while
`hot_plug_vfio` is set to `no-port`). Cold-plug is by design the only
supported mode for NVIDIA GPU passthrough of the NVIDIA reference stack.
With cold-plug, the Kata runtime attaches the GPU at VM launch time, when
creating the pod sandbox. This happens *before* the create container request,
i.e., before the Kata runtime receives the OCI spec including device
configurations from containerd. Thus, a mechanism to acquire the device
information is required. This is done by the runtime calling the
`coldPlugDevices()` function during sandbox creation. In this function,
the runtime queries Kubelet's Pod Resources API to discover allocated GPU
device IDs (e.g., `nvidia.com/pgpu = [vfio0]`). The runtime formats these as
CDI device identifiers and injects them into the OCI spec using
`config.InjectCDIDevices()`. The runtime then consults the host CDI
specifications and determines the device path the GPU is backed by
(e.g., `/dev/vfio/devices/vfio0`). Finally, the runtime resolves the device's
PCI BDF (e.g., `0000:21:00`) and cold-plugs the GPU by launching QEMU with
relevant parameters for device passthrough (e.g.,
`-device vfio-pci,host=0000:21:00.0,x-pci-vendor-id=0x10de,x-pci-device-id=0x2321,bus=rp0,iommufd=iommufdvfio-faf829f2ea7aec330`).
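On a node, one way to observe this (a sketch, assuming QEMU sandboxes are running) is to inspect the QEMU command line:
```bash
# Show the vfio-pci device arguments of running sandbox VMs
$ pgrep -af qemu | grep -o 'vfio-pci,[^ ]*'
```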
The runtime also creates *inner runtime* CDI annotations
which map host VFIO devices to guest GPU devices. These are annotations
intended for the kata-agent, here referred to as the inner runtime (inside the
UVM), to properly handle GPU passthrough into containers. These annotations
serve as metadata providing the kata-agent with the information needed to
attach the passthrough devices to the correct container.
The annotations are key-value pairs consisting of `cdi.k8s.io/vfio<num>` keys
(derived from the host VFIO device path, e.g., `/dev/vfio/devices/vfio1`) and
`nvidia.com/gpu=<index>` values (referencing the corresponding device in the
guest CDI spec). These annotations are injected by the runtime during container
creation via the `annotateContainerWithVFIOMetadata` function (see
`container.go`).
We continue describing the orchestration flow inside the UVM in the next
section.
### Kata NVIDIA GPU UVM
#### UVM composition
To better understand the orchestration flow inside the NVIDIA GPU UVM, we
first look at the components its root filesystem contains. Should you decide
to use your own root filesystem to enable NVIDIA GPU scenarios, this should
give you a good idea on what ingredients you need.
From a file system perspective, the UVM is composed of two files: a standard
Kata kernel image and the NVIDIA GPU rootfs in initrd or disk image format.
These two files are being utilized for the QEMU launch command when the UVM
is created.
The two most important pieces in Kata Container's build recipes for the
NVIDIA GPU root filesystem are the `nvidia_chroot.sh` and `nvidia_rootfs.sh`
files. The build follows a two-stage process. In the first stage, a
full-fledged Ubuntu-based root filesystem is composed within a chroot
environment. In this stage, NVIDIA kernel modules are built and signed
against the current Kata kernel and relevant NVIDIA packages are installed.
In the second stage, a chiseled build is performed: Only relevant contents
from the first stage are copied and compressed into a new distro-less root
filesystem folder. Kata's build infrastructure then turns this root
filesystem into the NVIDIA initrd and image files.
The resulting root filesystem contains the following software components:
- NVRC - the
[NVIDIA Runtime Container init system](https://github.com/NVIDIA/nvrc/tree/main)
- NVIDIA drivers (kernel modules)
- NVIDIA user space driver libraries
- NVIDIA user space tools
- kata-agent
- confidential computing guest components: the attestation agent,
confidential data hub and api-server-rest binaries
- CRI-O pause container (for the guest image-pull method)
- BusyBox utilities (provides a base set of libraries and binaries, and a
linker)
- some supporting files, such as a file containing a list of supported GPU
device IDs which NVRC reads
#### UVM orchestration flow
When the Kata runtime asks QEMU to launch the VM, the UVM's Linux kernel
boots and mounts the root filesystem. After this, NVRC starts as the initial
process.
NVRC scans for NVIDIA GPUs on the PCI bus, loads the
NVIDIA kernel modules, waits for driver initialization, creates the device nodes,
and initializes the GPU hardware (using the `nvidia-smi` binary). NVRC also
creates the guest-side CDI specification file (using the
`nvidia-ctk cdi generate` command). This file specifies devices of
`kind: nvidia.com/gpu`, i.e., GPUs appearing to be physical GPUs on regular
bare metal systems. The guest CDI specification also contains `containerEdits`
for each device, specifying device nodes (e.g., `/dev/nvidia0`,
`/dev/nvidiactl`), library mounts, and environment variables to be mounted
into the container which receives the passthrough GPU.
Then, NVRC forks the Kata agent while continuing to run as the
init system. This allows NVRC to handle ongoing GPU management tasks
while kata-agent focuses on container lifecycle management. See the
[NVRC sources](https://github.com/NVIDIA/nvrc/blob/main/src/main.rs) for an
overview on the steps carried out by NVRC.
When the Kata runtime sends the create container request, the Kata agent
parses the inner runtime CDI annotation. For example, for the inner runtime
annotation `"cdi.k8s.io/vfio1": "nvidia.com/gpu=0"`, the agent looks up device
`0` in the guest CDI specification with `kind: nvidia.com/gpu`.
The Kata agent also reads the guest CDI specification's `containerEdits`
section and injects relevant contents into the OCI spec of the respective
container. The kata agent then creates and starts a `rustjail` container
based on the final OCI spec. The container now has relevant device nodes,
binaries and low-level libraries available, and can start a user application
linked against the CUDA runtime API (e.g., `libcudart.so` and other
libraries). When used, the CUDA runtime API in turn calls the CUDA driver
API and kernel drivers, interacting with the pass-through GPU device.
An additional step is exercised in our CI samples: when using images from an
authenticated registry, the guest-pull mechanism triggers attestation using
trustee's Key Broker Service (KBS) for secure release of the NGC API
authentication key used to access the NVCR container registry. As part of
this, the attestation agent exercises composite attestation and transitions
the GPU into `Ready` state (without this, the GPU has to explicitly be
transitioned into `Ready` state by passing the `nvrc.smi.srs=1` kernel
parameter via the shim config, causing NVRC to transition the GPU into the
`Ready` state).
## Deployment Guidance
This guidance assumes you use bare-metal machines with proper support for
Kata's non-TEE and TEE GPU workload deployment scenarios for your Kubernetes
nodes. We provide guidance based on the upstream Kata CI procedures for the
NVIDIA GPU CI validation jobs. Note that this setup:
- uses the guest image pull method to pull container image layers
- uses the genpolicy tool to attach Kata agent security policies to the pod
manifest
- has dedicated (composite) attestation tests, a CUDA vectorAdd test, and a
NIM/RA test sample with secure API key release
A similar deployment guide and scenario description can be found in NVIDIA resources
under
[Early Access: NVIDIA GPU Operator with Confidential Containers based on Kata](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/confidential-containers.html).
### Requirements
The requirements for the TEE scenario are:
- Ubuntu 25.10 as host OS
- CPU with AMD SEV-SNP support with proper BIOS/UEFI version and settings
- CC-capable Hopper/Blackwell GPU with proper VBIOS version.
BIOS and VBIOS configuration is out of scope for this guide. Other resources,
such as the documentation found on the
[NVIDIA Trusted Computing Solutions](https://docs.nvidia.com/nvtrust/index.html)
page and the above linked NVIDIA documentation, provide guidance on
selecting proper hardware and on properly configuring its firmware and OS.
### Installation
#### Containerd and Kubernetes
First, set up your Kubernetes cluster. For instance, in Kata CI, our NVIDIA
jobs use a single-node vanilla Kubernetes cluster with a 2.x containerd
version and Kata's current supported Kubernetes version. We set this cluster
up using the `deploy_k8s` function from `tests/integration/kubernetes/gha-run.sh`
as follows:
```bash
$ export KUBERNETES="vanilla"
$ export CONTAINER_ENGINE="containerd"
$ export CONTAINER_ENGINE_VERSION="v2.1"
$ source tests/gha-run-k8s-common.sh
$ deploy_k8s
```
> **Note:**
>
> We recommend configuring your Kubelet with a higher
> `runtimeRequestTimeout` value than the default of two minutes.
> Using the guest-pull mechanism, pulling large images may take a significant
> amount of time and may delay container start, possibly leading your Kubelet
> to de-allocate your pod before it transitions from the *container created*
> to the *container running* state.
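> A minimal sketch, assuming the kubelet config lives at
> `/var/lib/kubelet/config.yaml` (path and value are assumptions):
> ```bash
> # Raise the timeout; add the key first if your config does not contain it
> $ sudo sed -i 's/^runtimeRequestTimeout:.*/runtimeRequestTimeout: 30m/' /var/lib/kubelet/config.yaml
> $ sudo systemctl restart kubelet
> ```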
> **Note:**
>
> The NVIDIA GPU runtime classes use VFIO cold-plug which, as
> described above, requires the Kata runtime to query Kubelet's Pod Resources
> API to discover allocated GPU devices during sandbox creation. For
> Kubernetes versions **older than 1.34**, you must explicitly enable the
> `KubeletPodResourcesGet` feature gate in your Kubelet configuration. For
> Kubernetes 1.34 and later, this feature is enabled by default.
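> A sketch for pre-1.34 clusters, assuming the same kubelet config path as
> above and no existing `featureGates` section:
> ```bash
> # Enable the Pod Resources Get API, then restart the kubelet
> $ cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
> featureGates:
>   KubeletPodResourcesGet: true
> EOF
> $ sudo systemctl restart kubelet
> ```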
#### GPU Operator
Assuming you have the helm tools installed, deploy the latest version of the
GPU Operator as a helm chart (minimum version: `v25.10.0`):
```bash
$ helm repo add nvidia https://helm.ngc.nvidia.com/nvidia && helm repo update
$ helm install --wait --generate-name \
-n gpu-operator --create-namespace \
nvidia/gpu-operator \
--set sandboxWorkloads.enabled=true \
--set sandboxWorkloads.defaultWorkload=vm-passthrough \
--set kataManager.enabled=true \
--set kataManager.config.runtimeClasses=null \
--set kataManager.repository=nvcr.io/nvidia/cloud-native \
--set kataManager.image=k8s-kata-manager \
--set kataManager.version=v0.2.4 \
--set ccManager.enabled=true \
--set ccManager.defaultMode=on \
--set ccManager.repository=nvcr.io/nvidia/cloud-native \
--set ccManager.image=k8s-cc-manager \
--set ccManager.version=v0.2.0 \
--set sandboxDevicePlugin.repository=nvcr.io/nvidia/cloud-native \
--set sandboxDevicePlugin.image=nvidia-sandbox-device-plugin \
--set sandboxDevicePlugin.version=v0.0.1 \
--set 'sandboxDevicePlugin.env[0].name=P_GPU_ALIAS' \
--set 'sandboxDevicePlugin.env[0].value=pgpu' \
--set nfd.enabled=true \
--set nfd.nodefeaturerules=true
```
> **Note:**
>
> For heterogeneous clusters with different GPU types, you can omit
> the `P_GPU_ALIAS` environment variable lines. This will cause the sandbox
> device plugin to create GPU model-specific resource types (e.g.,
> `nvidia.com/GH100_H100L_94GB`) instead of the generic `nvidia.com/pgpu`,
> which in turn can be used by pods through respective resource limits.
> For simplicity, this guide uses the generic alias.
> **Note:**
>
> Using `--set sandboxWorkloads.defaultWorkload=vm-passthrough` causes all
> your nodes to be labeled for GPU VM passthrough. Remove this parameter if
> you intend to only use selected nodes for this scenario, and label these
> nodes by hand, using:
> `kubectl label node <node-name> nvidia.com/gpu.workload.config=vm-passthrough`.
#### Kata Containers
Install the latest Kata Containers helm chart, similar to
[existing documentation](https://github.com/kata-containers/kata-containers/blob/main/tools/packaging/kata-deploy/helm-chart/README.md)
(minimum version: `3.24.0`).
```bash
$ export VERSION=$(curl -sSL https://api.github.com/repos/kata-containers/kata-containers/releases/latest | jq .tag_name | tr -d '"')
$ export CHART="oci://ghcr.io/kata-containers/kata-deploy-charts/kata-deploy"
$ helm install kata-deploy \
--namespace kata-system \
--create-namespace \
-f "https://raw.githubusercontent.com/kata-containers/kata-containers/refs/tags/${VERSION}/tools/packaging/kata-deploy/helm-chart/kata-deploy/try-kata-nvidia-gpu.values.yaml" \
--set nfd.enabled=false \
--set shims.qemu-nvidia-gpu-tdx.enabled=false \
--wait --timeout 10m --atomic \
"${CHART}" --version "${VERSION}"
```
#### Trustee's KBS for remote attestation
For our Kata CI runners we use Trustee's KBS for composite attestation for
secure key release, for instance, for test scenarios which use authenticated
container images. In such scenarios, the credentials to access the
authenticated container registry are only released to the confidential guest
after successful attestation. Please see the section below for more
information about this.
```bash
$ export NVIDIA_VERIFIER_MODE="remote"
$ export KBS_INGRESS="nodeport"
$ bash tests/integration/kubernetes/gha-run.sh deploy-coco-kbs
$ bash tests/integration/kubernetes/gha-run.sh install-kbs-client
```
Please note that Trustee can also be deployed via any other upstream
mechanism as documented by the
[confidential-containers repository](https://github.com/confidential-containers/trustee).
For our architecture it is important to set up KBS in the remote verifier
mode which requires entering a licensing agreement with NVIDIA, see the
[notes in confidential-containers repository](https://github.com/confidential-containers/trustee/blob/main/deps/verifier/src/nvidia/README.md).
### Cluster validation and preparation
If you did not use the `sandboxWorkloads.defaultWorkload=vm-passthrough`
parameter during GPU operator deployment, label your nodes for GPU VM
passthrough. For example, to use all nodes for GPU passthrough, run:
```bash
$ kubectl label nodes --all nvidia.com/gpu.workload.config=vm-passthrough --overwrite
```
If you intend to run GPU TEE scenarios, check that the `nvidia-cc-manager`
pod is running. If it is not, you need to manually label the node as
CC-capable, since current GPU Operator node feature rules do not yet
recognize all CC-capable GPU PCI IDs. Run the following command:
```bash
$ kubectl label nodes --all nvidia.com/cc.capable=true
```
After this, ensure the `nvidia-cc-manager` pod is running. With the suggested
parameters for GPU Operator deployment, the `nvidia-cc-manager` will
automatically transition the GPU into CC mode.
After deployment, you can transition your node(s) to the desired CC state,
using either the `on` or `off` value, depending on your scenario. For the
non-CC scenario, transition to the `off` state via:
`kubectl label nodes --all nvidia.com/cc.mode=off` and wait until all pods
are running again. When an actual change occurs, various GPU operator
operands will be restarted.
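For example:
```bash
# Switch all nodes to non-CC mode and watch the GPU operator operands restart
$ kubectl label nodes --all nvidia.com/cc.mode=off --overwrite
$ kubectl get pods -n gpu-operator --watch
```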
Ensure all pods are running:
```bash
$ kubectl get pods -A
```
On your node(s), verify correct driver binding. Your GPU device should be
bound to the VFIO driver, i.e., showing `Kernel driver in use: vfio-pci`
when running:
```bash
$ lspci -nnk -d 10de:
```
### Run the CUDA vectorAdd sample
Create the following file:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: cuda-vectoradd-kata
namespace: default
annotations:
io.katacontainers.config.hypervisor.kernel_params: "nvrc.smi.srs=1"
spec:
runtimeClassName: ${GPU_RUNTIME_CLASS_NAME}
restartPolicy: Never
containers:
- name: cuda-vectoradd
image: "nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda12.5.0-ubuntu22.04"
resources:
limits:
nvidia.com/pgpu: "1"
memory: 16Gi
```
Depending on your scenario and on the CC state, export your desired runtime
class name as an environment variable:
```bash
$ export GPU_RUNTIME_CLASS_NAME="kata-qemu-nvidia-gpu-snp"
```
Then, deploy the sample Kubernetes pod manifest and observe the pod logs:
```bash
$ envsubst < ./cuda-vectoradd-kata.yaml.in | kubectl apply -f -
$ kubectl wait --for=condition=Ready pod/cuda-vectoradd-kata --timeout=60s
$ kubectl logs -n default cuda-vectoradd-kata
```
Expect the following output:
```
[Vector addition of 50000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
Copy output data from the CUDA device to the host memory
Test PASSED
Done
```
To stop the pod, run: `kubectl delete pod cuda-vectoradd-kata`.
### Next steps
#### Transition between CC and non-CC mode
Use the previously described node labeling approach to transition between
the CC and non-CC mode. In case of the non-CC mode, you can use the
`kata-qemu-nvidia-gpu` value for the `GPU_RUNTIME_CLASS_NAME` runtime class
variable in the above CUDA vectorAdd sample. The `kata-qemu-nvidia-gpu-snp`
runtime class will **NOT** work in this mode - and vice versa.
#### Run Kata CI tests locally
Upstream Kata CI runs the CUDA vectorAdd test, a composite attestation test,
and a basic NIM/RAG deployment. Running CI tests for the TEE GPU scenario
requires KBS to be deployed (except for the CUDA vectorAdd test). The best
place to get started running these tests locally is to look into our
[NVIDIA CI workflow manifest](https://github.com/kata-containers/kata-containers/blob/main/.github/workflows/run-k8s-tests-on-nvidia-gpu.yaml)
and into the underlying
[run_kubernetes_nv_tests.sh](https://github.com/kata-containers/kata-containers/blob/main/tests/integration/kubernetes/run_kubernetes_nv_tests.sh)
script. For example, to run the CUDA vectorAdd scenario against the TEE GPU
runtime class use the following commands:
```bash
# create the kata runtime class the test framework uses
$ export KATA_HYPERVISOR=qemu-nvidia-gpu-snp
$ kubectl delete runtimeclass kata --ignore-not-found
$ kubectl get runtimeclass "kata-${KATA_HYPERVISOR}" -o json | \
jq '.metadata.name = "kata" | del(.metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp)' | \
kubectl apply -f -
$ cd tests/integration/kubernetes
$ K8S_TEST_NV="k8s-nvidia-cuda.bats" ./gha-run.sh run-nv-tests
```
> **Note:**
>
> The other scenarios require an NGC API key to run, i.e., you must export
> the `NGC_API_KEY` variable with a valid NGC API key.
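For example, assuming the NIM test is selected the same way as the CUDA test
(the key value below is a placeholder):
```bash
# Placeholder; substitute a valid NGC API key.
$ export NGC_API_KEY="<your-ngc-api-key>"
$ K8S_TEST_NV="k8s-nvidia-nim.bats" ./gha-run.sh run-nv-tests
```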
#### Deploy pods using attestation
Attestation is a fundamental piece of the confidential containers solution.
In our upstream CI we exercise attestation in two ways: through the
authenticated container image pull mechanism, where container images reside
in the authenticated NVCR registry (`k8s-nvidia-nim.bats`), and by
requesting secrets from KBS (`k8s-confidential-attestation.bats`). KBS will
release the image pull secret to a confidential guest. To get the
authentication credentials from inside the guest, KBS must already be
deployed and configured. In our CI samples, we configure KBS with the guest
image pull secret, a resource policy, and launch the pod with certain kernel
command line parameters:
`"agent.image_registry_auth=kbs:///default/credentials/nvcr agent.aa_kbc_params=cc_kbc::${CC_KBS_ADDR}"`.
The `agent.aa_kbc_params` option is a general configuration for attestation.
For your use case, you need to set the IP address and port under which KBS
is reachable through the `CC_KBS_ADDR` variable (see our CI sample). This
tells the guest how to reach KBS. Something like this must be set whenever
attestation is used, but on its own this parameter does not trigger
attestation. The `agent.image_registry_auth` option tells the guest to ask
for a resource from KBS and use it as the authentication configuration. When
this is set, the guest will request this resource at boot (and trigger
attestation) regardless of which image is being pulled.
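As a sketch, with a hypothetical KBS endpoint substituted via `CC_KBS_ADDR`:
```bash
# Hypothetical address; use the IP and port where your KBS is reachable.
$ export CC_KBS_ADDR="10.0.0.10:8080"
# Print the resulting value for the
# io.katacontainers.config.hypervisor.kernel_params pod annotation.
$ echo "agent.image_registry_auth=kbs:///default/credentials/nvcr agent.aa_kbc_params=cc_kbc::${CC_KBS_ADDR}"
```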
To deploy your own pods using authenticated container images, or secure key
release for attestation, follow steps similar to our mentioned CI samples.
#### Deploy pods with Kata agent security policies
With GPU passthrough being supported by the
[genpolicy tool](https://github.com/kata-containers/kata-containers/tree/main/src/tools/genpolicy),
you can use the tool to create a Kata agent security policy. Our CI deploys
all sample pod manifests with a Kata agent security policy.
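A minimal sketch, assuming a locally built `genpolicy` binary and the rendered
vectorAdd manifest from above (see the genpolicy documentation for rules and
settings flags):
```bash
# Annotate the manifest with a generated agent security policy, then deploy it.
$ genpolicy -y cuda-vectoradd-kata.yaml
$ kubectl apply -f cuda-vectoradd-kata.yaml
```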
#### Deploy pods using your own containers and manifests
You can author pod manifests leveraging your own containers, for instance,
containers built using the CUDA container toolkit. We recommend starting
with a CUDA base container.
The GPU is transitioned into the `Ready` state via attestation, for instance,
when pulling authenticated images. If your deployment scenario does not use
attestation, please refer back to the CUDA vectorAdd pod manifest. In this
manifest, we ensure that NVRC sets the GPU to `Ready` state by adding the
following annotation in the manifest:
`io.katacontainers.config.hypervisor.kernel_params: "nvrc.smi.srs=1"`
> **Notes:**
>
> - musl-based container images (e.g., using Alpine), or distro-less
> containers are not supported.
> - for the TEE scenario, only single-GPU passthrough per pod is supported,
> so your pod resource limit must be: `nvidia.com/pgpu: "1"` (on a system
> with multiple GPUs, you can thus pass through one GPU per pod).


@@ -1,10 +1,25 @@
# Using NVIDIA GPU device with Kata Containers
This page gives an overview on the different modes in which GPUs can be passed
to a Kata Containers container, provides host system requirements, explains how
Kata Containers guest components can be built to support the NVIDIA GPU
scenario, and gives practical usage examples using `ctr`.
Please see the guide
[Enabling NVIDIA GPU workloads using GPU passthrough with Kata Containers](NVIDIA-GPU-passthrough-and-Kata-QEMU.md)
for documentation of an end-to-end reference implementation of a Kata
Containers stack for GPU passthrough using QEMU, the go-based Kata Runtime,
and an NVIDIA-specific root filesystem. This reference implementation is built
and validated in Kata's CI, and it can be used to test GPU workloads with Kata
components and Kubernetes out of the box.
## Comparison between Passthrough and vGPU Modes
An NVIDIA GPU device can be passed to a Kata Containers container using GPU
passthrough (NVIDIA GPU pass-through mode) as well as GPU mediated passthrough
passthrough (NVIDIA GPU passthrough mode) as well as GPU mediated passthrough
(NVIDIA `vGPU` mode).
NVIDIA GPU pass-through mode, an entire physical GPU is directly assigned to one
In NVIDIA GPU passthrough mode, an entire physical GPU is directly assigned to one
VM, bypassing the NVIDIA Virtual GPU Manager. In this mode of operation, the GPU
is accessed exclusively by the NVIDIA driver running in the VM to which it is
assigned. The GPU is not shared among VMs.
@@ -20,18 +35,20 @@ with [MIG-slices](https://docs.nvidia.com/datacenter/tesla/mig-user-guide/).
| Technology | Description | Behavior | Detail |
| --- | --- | --- | --- |
| NVIDIA GPU pass-through mode | GPU passthrough | Physical GPU assigned to a single VM | Direct GPU assignment to VM without limitation |
| NVIDIA GPU passthrough mode | GPU passthrough | Physical GPU assigned to a single VM | Direct GPU assignment to VM without limitation |
| NVIDIA vGPU time-sliced | GPU time-sliced | Physical GPU time-sliced for multiple VMs | Mediated passthrough |
| NVIDIA vGPU MIG-backed | GPU with MIG-slices | Physical GPU MIG-sliced for multiple VMs | Mediated passthrough |
## Hardware Requirements
## Host Requirements
NVIDIA GPUs Recommended for Virtualization:
### Hardware
NVIDIA GPUs recommended for virtualization:
- NVIDIA Tesla (T4, M10, P6, V100 or newer)
- NVIDIA Quadro RTX 6000/8000
## Host BIOS Requirements
### Firmware
Some hardware requires a larger PCI BARs window, for example, NVIDIA Tesla P100,
K40m
@@ -55,9 +72,7 @@ Some hardware vendors use a different name in BIOS, such as:
If one is using a GPU based on the Ampere architecture or later, SR-IOV
additionally needs to be enabled for the `vGPU` use-case.
The following steps outline the workflow for using an NVIDIA GPU with Kata.
## Host Kernel Requirements
### Kernel
The following configurations need to be enabled on your host kernel:
@@ -70,7 +85,13 @@ The following configurations need to be enabled on your host kernel:
Your host kernel needs to be booted with `intel_iommu=on` on the kernel command
line.
## Install and configure Kata Containers
## Build the Kata Components
This section explains how to build an environment with Kata Containers bits
supporting the GPU scenario. We first deploy and configure the regular Kata
components, then describe how to build the guest kernel and root filesystem.
### Install and configure Kata Containers
To use non-large BARs devices (for example, NVIDIA Tesla T4), you need Kata
version 1.3.0 or above. Follow the [Kata Containers setup
@@ -101,7 +122,7 @@ hotplug_vfio_on_root_bus = true
pcie_root_port = 1
```
## Build Kata Containers kernel with GPU support
### Build guest kernel with GPU support
The default guest kernel installed with Kata Containers does not provide GPU
support. To use an NVIDIA GPU with Kata Containers, you need to build a kernel
@@ -160,11 +181,11 @@ code, using `Dragonball VMM` for NVIDIA GPU `hot-plug/hot-unplug` requires apply
addition to the above kernel configuration items. Follow these steps to build for NVIDIA GPU `hot-[un]plug`
for `Dragonball`:
```sh
# Prepare .config to support both upcall and nvidia gpu
$ ./build-kernel.sh -v 5.10.25 -e -t dragonball -g nvidia -f setup
# Build guest kernel to support both upcall and nvidia gpu
$ ./build-kernel.sh -v 5.10.25 -e -t dragonball -g nvidia build
# Install guest kernel to support both upcall and nvidia gpu
@@ -196,303 +217,7 @@ Before using the new guest kernel, please update the `kernel` parameters in
kernel = "/usr/share/kata-containers/vmlinuz-nvidia-gpu.container"
```
## NVIDIA GPU pass-through mode with Kata Containers
Use the following steps to pass an NVIDIA GPU device in pass-through mode with Kata:
1. Find the Bus-Device-Function (BDF) for the GPU device on the host:
```sh
$ sudo lspci -nn -D | grep -i nvidia
0000:d0:00.0 3D controller [0302]: NVIDIA Corporation Device [10de:20b9] (rev a1)
```
> PCI address `0000:d0:00.0` is assigned to the hardware GPU device.
> `10de:20b9` is the device ID of the hardware GPU device.
2. Find the IOMMU group for the GPU device:
```sh
$ BDF="0000:d0:00.0"
$ readlink -e /sys/bus/pci/devices/$BDF/iommu_group
```
The previous output shows that the GPU belongs to IOMMU group 192. The next
step is to bind the GPU to the VFIO-PCI driver.
```sh
$ BDF="0000:d0:00.0"
$ DEV="/sys/bus/pci/devices/$BDF"
$ echo "vfio-pci" > $DEV/driver_override
$ echo $BDF > $DEV/driver/unbind
$ echo $BDF > /sys/bus/pci/drivers_probe
# To return the device to the standard driver, we simply clear the
# driver_override and reprobe the device, ex:
$ echo > $DEV/driver_override
$ echo $BDF > $DEV/driver/unbind
$ echo $BDF > /sys/bus/pci/drivers_probe
```
3. Check the IOMMU group number under `/dev/vfio`:
```sh
$ ls -l /dev/vfio
total 0
crw------- 1 zvonkok zvonkok 243, 0 Mar 18 03:06 192
crw-rw-rw- 1 root root 10, 196 Mar 18 02:27 vfio
```
4. Start a Kata container with the GPU device:
```sh
# You may need to `modprobe vhost-vsock` if you get
# host system doesn't support vsock: stat /dev/vhost-vsock
$ sudo ctr --debug run --runtime "io.containerd.kata.v2" --device /dev/vfio/192 --rm -t "docker.io/library/archlinux:latest" arch uname -r
```
5. Run `lspci` within the container to verify the GPU device is seen in the list
of the PCI devices. Note the vendor-device id of the GPU (`10de:20b9`) in the `lspci` output.
```sh
$ sudo ctr --debug run --runtime "io.containerd.kata.v2" --device /dev/vfio/192 --rm -t "docker.io/library/archlinux:latest" arch sh -c "lspci -nn | grep '10de:20b9'"
```
6. Additionally, you can check the PCI BARs space of the NVIDIA GPU device in the container:
```sh
$ sudo ctr --debug run --runtime "io.containerd.kata.v2" --device /dev/vfio/192 --rm -t "docker.io/library/archlinux:latest" arch sh -c "lspci -s 02:00.0 -vv | grep Region"
```
> **Note**: If the output lists the `Region` entries of the GPU, the BAR space
> of the NVIDIA GPU has been successfully allocated.
## NVIDIA vGPU mode with Kata Containers
NVIDIA vGPU is a licensed product on all supported GPU boards. A software license
is required to enable all vGPU features within the guest VM. NVIDIA vGPU manager
needs to be installed on the host to configure GPUs in vGPU mode. See [NVIDIA Virtual GPU Software Documentation v14.0 through 14.1](https://docs.nvidia.com/grid/14.0/) for more details.
### NVIDIA vGPU time-sliced
In the time-sliced mode, the GPU is not partitioned and the workload uses the
whole GPU and shares access to the GPU engines. Processes are scheduled in
series. The best effort scheduler is the default one and can be exchanged for
other scheduling policies; see the documentation above for how to do that.
Beware: if you had `MIG` enabled before, disable `MIG` on the GPU if you want
to use `time-sliced` `vGPU`.
```sh
$ sudo nvidia-smi -mig 0
```
Enable the virtual functions for the physical GPU in the `sysfs` file system.
```sh
$ sudo /usr/lib/nvidia/sriov-manage -e 0000:41:00.0
```
Get the `BDF` of the available virtual function on the GPU, and choose one for the
following steps.
```sh
$ cd /sys/bus/pci/devices/0000:41:00.0/
$ ls -l | grep virtfn
```
#### List all available vGPU instances
The following shell snippet walks the `sysfs` and only prints instances
that are available, i.e., that can still be created.
```sh
# The 00.0 is often the PF of the device. The VFs will have the function in the
# BDF incremented by some values so e.g. the very first VF is 0000:41:00.4
cd /sys/bus/pci/devices/0000:41:00.0/
for vf in $(ls -d virtfn*)
do
BDF=$(basename $(readlink -f $vf))
for md in $(ls -d $vf/mdev_supported_types/*)
do
AVAIL=$(cat $md/available_instances)
NAME=$(cat $md/name)
DIR=$(basename $md)
if [ $AVAIL -gt 0 ]; then
echo "| BDF | INSTANCES | NAME | DIR |"
echo "+--------------+-----------+----------------+------------+"
printf "| %12s |%10d |%15s | %10s |\n\n" "$BDF" "$AVAIL" "$NAME" "$DIR"
fi
done
done
```
If there are available instances you get something like this (for the first VF).
Beware that the output is highly dependent on the GPU you have; if there is no
output, check again that `MIG` is really disabled.
```sh
| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:00.4 | 1 | GRID A100D-4C | nvidia-692 |
| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:00.4 | 1 | GRID A100D-8C | nvidia-693 |
| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:00.4 | 1 | GRID A100D-10C | nvidia-694 |
| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:00.4 | 1 | GRID A100D-16C | nvidia-695 |
| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:00.4 | 1 | GRID A100D-20C | nvidia-696 |
| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:00.4 | 1 | GRID A100D-40C | nvidia-697 |
| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:00.4 | 1 | GRID A100D-80C | nvidia-698 |
```
Change to the `mdev_supported_types` directory for the virtual function on which
you want to create the `vGPU`. Taking the first output as an example:
```sh
$ cd virtfn0/mdev_supported_types/nvidia-692
$ UUIDGEN=$(uuidgen)
$ sudo bash -c "echo $UUIDGEN > create"
```
Confirm that the `vGPU` was created. You should see the `UUID` pointing to a
subdirectory of the `sysfs` space.
```sh
$ ls -l /sys/bus/mdev/devices/
```
Get the `IOMMU` group number and verify there is a `VFIO` device created to use
with Kata.
```sh
$ ls -l /sys/bus/mdev/devices/*/
$ ls -l /dev/vfio
```
Use the `VFIO` device created in the same way as in the pass-through use-case.
Beware that the guest needs the NVIDIA guest drivers, so one would need to build
a new guest `OS` image.
### NVIDIA vGPU MIG-backed
We're not going into detail about what `MIG` is, but briefly, it is a technology to
partition the hardware into independent instances with guaranteed quality of
service. For more details see [NVIDIA Multi-Instance GPU User Guide](https://docs.nvidia.com/datacenter/tesla/mig-user-guide/).
First enable `MIG` mode for a GPU; depending on the platform you're running,
a reboot may be necessary. Some platforms support GPU reset.
```sh
$ sudo nvidia-smi -mig 1
```
If the platform supports a GPU reset, run the following command; otherwise you
will get a warning to reboot the server.
```sh
$ sudo nvidia-smi --gpu-reset
```
By default, the driver provides a number of profiles that users can opt into
when configuring the MIG feature.
```sh
$ sudo nvidia-smi mig -lgip
+-----------------------------------------------------------------------------+
| GPU instance profiles: |
| GPU Name ID Instances Memory P2P SM DEC ENC |
| Free/Total GiB CE JPEG OFA |
|=============================================================================|
| 0 MIG 1g.10gb 19 7/7 9.50 No 14 0 0 |
| 1 0 0 |
+-----------------------------------------------------------------------------+
| 0 MIG 1g.10gb+me 20 1/1 9.50 No 14 1 0 |
| 1 1 1 |
+-----------------------------------------------------------------------------+
| 0 MIG 2g.20gb 14 3/3 19.50 No 28 1 0 |
| 2 0 0 |
+-----------------------------------------------------------------------------+
...
```
Create the GPU instances that correspond to the `vGPU` types of the `MIG-backed`
`vGPUs` that you will create [NVIDIA A100 PCIe 80GB Virtual GPU Types](https://docs.nvidia.com/grid/13.0/grid-vgpu-user-guide/index.html#vgpu-types-nvidia-a100-pcie-80gb).
```sh
# MIG 1g.10gb --> vGPU A100D-1-10C
$ sudo nvidia-smi mig -cgi 19
```
List the GPU instances and get the GPU instance id to create the compute
instance.
```sh
$ sudo nvidia-smi mig -lgi # list the created GPU instances
$ sudo nvidia-smi mig -cci -gi 9 # each GPU instance can have several compute
# instances. Instance -> Workload
```
Verify that the compute instances were created within the GPU instance:
```sh
$ nvidia-smi
... snip ...
+-----------------------------------------------------------------------------+
| MIG devices: |
+------------------+----------------------+-----------+-----------------------+
| GPU GI CI MIG | Memory-Usage | Vol| Shared |
| ID ID Dev | BAR1-Usage | SM Unc| CE ENC DEC OFA JPG|
| | | ECC| |
|==================+======================+===========+=======================|
| 0 9 0 0 | 0MiB / 9728MiB | 14 0 | 1 0 0 0 0 |
| | 0MiB / 4095MiB | | |
+------------------+----------------------+-----------+-----------------------+
... snip ...
```
We can use the [snippet](#list-all-available-vgpu-instances) from before to list
the available `vGPU` instances, this time `MIG-backed`.
```sh
| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:00.4 | 1 |GRID A100D-1-10C | nvidia-699 |
| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:00.5 | 1 |GRID A100D-1-10C | nvidia-699 |
| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:01.6 | 1 |GRID A100D-1-10C | nvidia-699 |
... snip ...
```
Repeat the steps after the [snippet](#list-all-available-vgpu-instances) listing
to create the corresponding `mdev` device and use the guest `OS` created in the
previous section with `time-sliced` `vGPUs`.
## Install NVIDIA Driver + Toolkit in Kata Containers Guest OS
### Build Guest OS with NVIDIA Driver and Toolkit
Consult the [Developer-Guide](https://github.com/kata-containers/kata-containers/blob/main/docs/Developer-Guide.md#create-a-rootfs-image) on how to create a
rootfs base image for a distribution of your choice. This is going to be used as
@@ -583,9 +308,12 @@ Enable the `guest_hook_path` in Kata's `configuration.toml`
guest_hook_path = "/usr/share/oci/hooks"
```
As the last step one can remove the additional packages and files that were added
to the `$ROOTFS_DIR` to keep it as small as possible.
Having built an NVIDIA rootfs and kernel, we can now run any GPU container
without installing the drivers into the container. Check NVIDIA device status
with `nvidia-smi`
with `nvidia-smi`:
```sh
$ sudo ctr --debug run --runtime "io.containerd.kata.v2" --device /dev/vfio/192 --rm -t "docker.io/nvidia/cuda:11.6.0-base-ubuntu20.04" cuda nvidia-smi
@@ -611,8 +339,309 @@ Fri Mar 18 10:36:59 2022
+-----------------------------------------------------------------------------+
```
As the last step one can remove the additional packages and files that were added
to the `$ROOTFS_DIR` to keep it as small as possible.
## Usage Examples with Kata Containers
The following sections give usage examples based on the different modes.
### NVIDIA GPU passthrough mode
Use the following steps to pass an NVIDIA GPU device in passthrough mode with Kata:
1. Find the Bus-Device-Function (BDF) for the GPU device on the host:
```sh
$ sudo lspci -nn -D | grep -i nvidia
0000:d0:00.0 3D controller [0302]: NVIDIA Corporation Device [10de:20b9] (rev a1)
```
> PCI address `0000:d0:00.0` is assigned to the hardware GPU device.
> `10de:20b9` is the device ID of the hardware GPU device.
2. Find the IOMMU group for the GPU device:
```sh
$ BDF="0000:d0:00.0"
$ readlink -e /sys/bus/pci/devices/$BDF/iommu_group
```
The previous output shows that the GPU belongs to IOMMU group 192. The next
step is to bind the GPU to the VFIO-PCI driver.
```sh
$ BDF="0000:d0:00.0"
$ DEV="/sys/bus/pci/devices/$BDF"
$ echo "vfio-pci" > $DEV/driver_override
$ echo $BDF > $DEV/driver/unbind
$ echo $BDF > /sys/bus/pci/drivers_probe
# To return the device to the standard driver, we simply clear the
# driver_override and reprobe the device, ex:
$ echo > $DEV/driver_override
$ echo $BDF > $DEV/driver/unbind
$ echo $BDF > /sys/bus/pci/drivers_probe
```
3. Check the IOMMU group number under `/dev/vfio`:
```sh
$ ls -l /dev/vfio
total 0
crw------- 1 zvonkok zvonkok 243, 0 Mar 18 03:06 192
crw-rw-rw- 1 root root 10, 196 Mar 18 02:27 vfio
```
4. Start a Kata container with the GPU device:
```sh
# You may need to `modprobe vhost-vsock` if you get
# host system doesn't support vsock: stat /dev/vhost-vsock
$ sudo ctr --debug run --runtime "io.containerd.kata.v2" --device /dev/vfio/192 --rm -t "docker.io/library/archlinux:latest" arch uname -r
```
5. Run `lspci` within the container to verify the GPU device is seen in the list
of the PCI devices. Note the vendor-device id of the GPU (`10de:20b9`) in the `lspci` output.
```sh
$ sudo ctr --debug run --runtime "io.containerd.kata.v2" --device /dev/vfio/192 --rm -t "docker.io/library/archlinux:latest" arch sh -c "lspci -nn | grep '10de:20b9'"
```
6. Additionally, you can check the PCI BARs space of the NVIDIA GPU device in the container:
```sh
$ sudo ctr --debug run --runtime "io.containerd.kata.v2" --device /dev/vfio/192 --rm -t "docker.io/library/archlinux:latest" arch sh -c "lspci -s 02:00.0 -vv | grep Region"
```
> **Note**: If the output lists the `Region` entries of the GPU, the BAR space
> of the NVIDIA GPU has been successfully allocated.
### NVIDIA vGPU mode
NVIDIA vGPU is a licensed product on all supported GPU boards. A software license
is required to enable all vGPU features within the guest VM. NVIDIA vGPU manager
needs to be installed on the host to configure GPUs in vGPU mode. See
[NVIDIA Virtual GPU Software Documentation v14.0 through 14.1](https://docs.nvidia.com/grid/14.0/)
for more details.
#### NVIDIA vGPU time-sliced
In the time-sliced mode, the GPU is not partitioned and the workload uses the
whole GPU and shares access to the GPU engines. Processes are scheduled in
series. The best effort scheduler is the default one and can be exchanged for
other scheduling policies; see the documentation above for how to do that.
Beware: if you had `MIG` enabled before, disable `MIG` on the GPU if you want
to use `time-sliced` `vGPU`.
```sh
$ sudo nvidia-smi -mig 0
```
Enable the virtual functions for the physical GPU in the `sysfs` file system.
```sh
$ sudo /usr/lib/nvidia/sriov-manage -e 0000:41:00.0
```
Get the `BDF` of the available virtual function on the GPU, and choose one for the
following steps.
```sh
$ cd /sys/bus/pci/devices/0000:41:00.0/
$ ls -l | grep virtfn
```
##### List all available vGPU instances
The following shell snippet walks the `sysfs` and only prints instances
that are available, i.e., that can still be created.
```sh
# The 00.0 is often the PF of the device. The VFs will have the function in the
# BDF incremented by some values so e.g. the very first VF is 0000:41:00.4
cd /sys/bus/pci/devices/0000:41:00.0/
for vf in $(ls -d virtfn*)
do
BDF=$(basename $(readlink -f $vf))
for md in $(ls -d $vf/mdev_supported_types/*)
do
AVAIL=$(cat $md/available_instances)
NAME=$(cat $md/name)
DIR=$(basename $md)
if [ $AVAIL -gt 0 ]; then
echo "| BDF | INSTANCES | NAME | DIR |"
echo "+--------------+-----------+----------------+------------+"
printf "| %12s |%10d |%15s | %10s |\n\n" "$BDF" "$AVAIL" "$NAME" "$DIR"
fi
done
done
```
If there are available instances you get something like this (for the first VF).
Beware that the output is highly dependent on the GPU you have; if there is no
output, check again that `MIG` is really disabled.
```sh
| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:00.4 | 1 | GRID A100D-4C | nvidia-692 |
| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:00.4 | 1 | GRID A100D-8C | nvidia-693 |
| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:00.4 | 1 | GRID A100D-10C | nvidia-694 |
| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:00.4 | 1 | GRID A100D-16C | nvidia-695 |
| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:00.4 | 1 | GRID A100D-20C | nvidia-696 |
| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:00.4 | 1 | GRID A100D-40C | nvidia-697 |
| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:00.4 | 1 | GRID A100D-80C | nvidia-698 |
```
Change to the `mdev_supported_types` directory for the virtual function on which
you want to create the `vGPU`. Taking the first output as an example:
```sh
$ cd virtfn0/mdev_supported_types/nvidia-692
$ UUIDGEN=$(uuidgen)
$ sudo bash -c "echo $UUIDGEN > create"
```
Confirm that the `vGPU` was created. You should see the `UUID` pointing to a
subdirectory of the `sysfs` space.
```sh
$ ls -l /sys/bus/mdev/devices/
```
Get the `IOMMU` group number and verify there is a `VFIO` device created to use
with Kata.
```sh
$ ls -l /sys/bus/mdev/devices/*/
$ ls -l /dev/vfio
```
Use the `VFIO` device created in the same way as in the passthrough use-case.
Beware that the guest needs the NVIDIA guest drivers, so one would need to build
a new guest `OS` image.
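For illustration, a sketch reusing the `ctr` invocation from the passthrough
steps; replace `192` with the IOMMU group of your `mdev` device and use a guest
image built with the NVIDIA guest drivers:
```sh
# Pass the vGPU's VFIO group to a Kata container just like a physical GPU.
$ sudo ctr --debug run --runtime "io.containerd.kata.v2" --device /dev/vfio/192 --rm -t "docker.io/library/archlinux:latest" arch sh -c "lspci -nn"
```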
#### NVIDIA vGPU MIG-backed
We're not going into detail about what `MIG` is, but briefly, it is a technology to
partition the hardware into independent instances with guaranteed quality of
service. For more details see
[NVIDIA Multi-Instance GPU User Guide](https://docs.nvidia.com/datacenter/tesla/mig-user-guide/).
First enable `MIG` mode for a GPU; depending on the platform you're running,
a reboot may be necessary. Some platforms support GPU reset.
```sh
$ sudo nvidia-smi -mig 1
```
If the platform supports a GPU reset, run the following command; otherwise you
will get a warning to reboot the server.
```sh
$ sudo nvidia-smi --gpu-reset
```
By default, the driver provides a number of profiles that users can opt into
when configuring the MIG feature.
```sh
$ sudo nvidia-smi mig -lgip
+-----------------------------------------------------------------------------+
| GPU instance profiles: |
| GPU Name ID Instances Memory P2P SM DEC ENC |
| Free/Total GiB CE JPEG OFA |
|=============================================================================|
| 0 MIG 1g.10gb 19 7/7 9.50 No 14 0 0 |
| 1 0 0 |
+-----------------------------------------------------------------------------+
| 0 MIG 1g.10gb+me 20 1/1 9.50 No 14 1 0 |
| 1 1 1 |
+-----------------------------------------------------------------------------+
| 0 MIG 2g.20gb 14 3/3 19.50 No 28 1 0 |
| 2 0 0 |
+-----------------------------------------------------------------------------+
...
```
Create the GPU instances that correspond to the `vGPU` types of the `MIG-backed`
`vGPUs` that you will create
[NVIDIA A100 PCIe 80GB Virtual GPU Types](https://docs.nvidia.com/grid/13.0/grid-vgpu-user-guide/index.html#vgpu-types-nvidia-a100-pcie-80gb).
```sh
# MIG 1g.10gb --> vGPU A100D-1-10C
$ sudo nvidia-smi mig -cgi 19
```
List the GPU instances and get the GPU instance id to create the compute
instance.
```sh
$ sudo nvidia-smi mig -lgi # list the created GPU instances
$ sudo nvidia-smi mig -cci -gi 9 # each GPU instance can have several compute
# instances. Instance -> Workload
```
Verify that the compute instances were created within the GPU instance:
```sh
$ nvidia-smi
... snip ...
+-----------------------------------------------------------------------------+
| MIG devices: |
+------------------+----------------------+-----------+-----------------------+
| GPU GI CI MIG | Memory-Usage | Vol| Shared |
| ID ID Dev | BAR1-Usage | SM Unc| CE ENC DEC OFA JPG|
| | | ECC| |
|==================+======================+===========+=======================|
| 0 9 0 0 | 0MiB / 9728MiB | 14 0 | 1 0 0 0 0 |
| | 0MiB / 4095MiB | | |
+------------------+----------------------+-----------+-----------------------+
... snip ...
```
We can use the [snippet](#list-all-available-vgpu-instances) from before to list
the available `vGPU` instances, this time `MIG-backed`.
```sh
| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:00.4 | 1 |GRID A100D-1-10C | nvidia-699 |
| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:00.5 | 1 |GRID A100D-1-10C | nvidia-699 |
| BDF | INSTANCES | NAME | DIR |
+--------------+-----------+----------------+------------+
| 0000:41:01.6 | 1 |GRID A100D-1-10C | nvidia-699 |
... snip ...
```
Repeat the steps after the [snippet](#list-all-available-vgpu-instances) listing
to create the corresponding `mdev` device and use the guest `OS` created in the
previous section with `time-sliced` `vGPUs`.
## References


@@ -1,24 +1,20 @@
# Table of Contents
**Note:** This guide used to contain an end-to-end flow to build a
custom Kata Containers root filesystem with the QAT out-of-tree SR-IOV virtual
function driver and run QAT-enabled containers. This is no longer necessary,
so the instructions have been dropped. If the use case is still of interest, please file
an issue in either of the QAT Kubernetes-specific repos linked below.
# Introduction
Intel® QuickAssist Technology (QAT) provides hardware acceleration
for security (cryptography) and compression. These instructions cover the
steps for the latest [Ubuntu LTS release](https://ubuntu.com/download/desktop)
which already include the QAT host driver. These instructions can be adapted to
any Linux distribution. These instructions guide the user on how to download
the kernel sources, compile kernel driver modules against those sources, and
load them onto the host as well as preparing a specially built Kata Containers
kernel and custom Kata Containers rootfs.
for security (cryptography) and compression. Kata Containers can enable
these acceleration functions for containers using QAT SR-IOV with the
support from [Intel QAT Device Plugin for Kubernetes](https://github.com/intel/intel-device-plugins-for-kubernetes)
or [Intel QAT DRA Resource Driver for Kubernetes](https://github.com/intel/intel-resource-drivers-for-kubernetes).
* Download kernel sources
* Compile Kata kernel
* Compile kernel driver modules against those sources
* Download rootfs
* Add driver modules to rootfs
* Build rootfs image
## Helpful Links before starting
## More Information
[Intel® QuickAssist Technology at `01.org`](https://www.intel.com/content/www/us/en/developer/topic-technology/open/quick-assist-technology/overview.html)
@@ -26,554 +22,6 @@ kernel and custom Kata Containers rootfs.
[Intel Device Plugin for Kubernetes](https://github.com/intel/intel-device-plugins-for-kubernetes)
[Intel DRA Resource Driver for Kubernetes](https://github.com/intel/intel-resource-drivers-for-kubernetes)
[Intel® QuickAssist Technology for Crypto Poll Mode Driver](https://dpdk-docs.readthedocs.io/en/latest/cryptodevs/qat.html)
## Steps to enable Intel® QAT in Kata Containers
There are some steps to complete only once, some steps to complete with every
reboot, and some steps to complete when the host kernel changes.
## Script variables
The following list of variables must be set before running through the
scripts. These variables refer to locations to store modules and configuration
files on the host and links to the drivers to use. Modify these as
needed to point to updated drivers or different install locations.
### Set environment variables (Every Reboot)
Make sure to check [`01.org`](https://www.intel.com/content/www/us/en/developer/topic-technology/open/quick-assist-technology/overview.html) for
the latest driver.
```bash
$ export QAT_DRIVER_VER=qat1.7.l.4.14.0-00031.tar.gz
$ export QAT_DRIVER_URL=https://downloadmirror.intel.com/30178/eng/${QAT_DRIVER_VER}
$ export QAT_CONF_LOCATION=~/QAT_conf
$ export QAT_DOCKERFILE=https://raw.githubusercontent.com/intel/intel-device-plugins-for-kubernetes/main/demo/openssl-qat-engine/Dockerfile
$ export QAT_SRC=~/src/QAT
$ export GOPATH=~/src/go
$ export KATA_KERNEL_LOCATION=~/kata
$ export KATA_ROOTFS_LOCATION=~/kata
```
## Prepare the Ubuntu Host
The host could be a bare metal instance or a virtual machine. If using a
virtual machine, make sure that KVM nesting is enabled. The following
instructions reference an Intel® C62X chipset. Some of the instructions must be
modified if using a different Intel® QAT device. The Intel® QAT chipset can be
identified by executing the following.
### Identify which PCI Bus the Intel® QAT card is on
```bash
$ for i in 0434 0435 37c8 1f18 1f19; do lspci -d 8086:$i; done
```
### Install necessary packages for Ubuntu
These packages are necessary to compile the Kata kernel, Intel® QAT driver, and to
prepare the rootfs for Kata. [Docker](https://docs.docker.com/engine/install/ubuntu/)
also needs to be installed to be able to build the rootfs. To test that
everything works, a Kubernetes pod is started requesting Intel® QAT resources. For
passthrough of the virtual functions the kernel boot parameter needs to include
`intel_iommu=on`.
```bash
$ sudo apt update
$ sudo apt install -y golang-go build-essential python pkg-config zlib1g-dev libudev-dev bison libelf-dev flex libtool automake autotools-dev autoconf bc libpixman-1-dev coreutils libssl-dev
$ sudo sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT=""/GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"/' /etc/default/grub
$ sudo update-grub
$ sudo reboot
```
### Download Intel® QAT drivers
This will download the [Intel® QAT drivers](https://www.intel.com/content/www/us/en/developer/topic-technology/open/quick-assist-technology/overview.html).
Make sure to check the website for the latest version.
```bash
$ mkdir -p $QAT_SRC
$ cd $QAT_SRC
$ curl -L $QAT_DRIVER_URL | tar zx
```
### Copy Intel® QAT configuration files and enable virtual functions
Modify the instructions below as necessary if using a different Intel® QAT hardware
platform. You can learn more about customizing configuration files at the
[Intel® QAT Engine repository](https://github.com/intel/QAT_Engine/#copy-the-correct-intel-quickassist-technology-driver-config-files).
This section starts from a base config file and changes the `SSL` section to
`SHIM` to support the OpenSSL engine. There are more tweaks that you can make
depending on the use case and how many Intel® QAT engines should be run. You
can find more information about how to customize in the
[Intel® QuickAssist Technology Software for Linux* - Programmer's Guide.](https://www.intel.com/content/www/us/en/content-details/709196/intel-quickassist-technology-api-programmer-s-guide.html)
> **Note: This section assumes that an Intel® QAT `c6xx` platform is used.**
```bash
$ mkdir -p $QAT_CONF_LOCATION
$ cp $QAT_SRC/quickassist/utilities/adf_ctl/conf_files/c6xxvf_dev0.conf.vm $QAT_CONF_LOCATION/c6xxvf_dev0.conf
$ sed -i 's/\[SSL\]/\[SHIM\]/g' $QAT_CONF_LOCATION/c6xxvf_dev0.conf
```
### Expose and Bind Intel® QAT virtual functions to VFIO-PCI (Every reboot)
To enable virtual functions, the host OS should have IOMMU groups enabled. In
the UEFI Firmware Intel® Virtualization Technology for Directed I/O
(Intel® VT-d) must be enabled. Also, the kernel boot parameter should be
`intel_iommu=on` or `intel_iommu=igfx_off`. This should have been set by
the instructions above. Check the output of `/proc/cmdline` to confirm. The
following commands assume you installed an Intel® QAT card, IOMMU is on, and
VT-d is enabled. The vendor and device ID are added to the `VFIO-PCI` driver so
that each exposed virtual function can be bound to the `VFIO-PCI` driver. Once
complete, each virtual function can be passed into a Kata Containers container
using the PCIe device passthrough feature. For Kubernetes, the
[Intel device plugin](https://github.com/intel/intel-device-plugins-for-kubernetes)
for Kubernetes handles the binding of the driver, but the VFs still must be
enabled.
```bash
$ sudo modprobe vfio-pci
$ QAT_PCI_BUS_PF_NUMBERS=$((lspci -d :435 && lspci -d :37c8 && lspci -d :19e2 && lspci -d :6f54) | cut -d ' ' -f 1)
$ QAT_PCI_BUS_PF_1=$(echo $QAT_PCI_BUS_PF_NUMBERS | cut -d ' ' -f 1)
$ echo 16 | sudo tee /sys/bus/pci/devices/0000:$QAT_PCI_BUS_PF_1/sriov_numvfs
$ QAT_PCI_ID_VF=$(cat /sys/bus/pci/devices/0000:${QAT_PCI_BUS_PF_1}/virtfn0/uevent | grep PCI_ID)
$ QAT_VENDOR_AND_ID_VF=$(echo ${QAT_PCI_ID_VF/PCI_ID=} | sed 's/:/ /')
$ echo $QAT_VENDOR_AND_ID_VF | sudo tee --append /sys/bus/pci/drivers/vfio-pci/new_id
```
Loop through all the virtual functions and bind to the VFIO driver
```bash
$ for f in /sys/bus/pci/devices/0000:$QAT_PCI_BUS_PF_1/virtfn*
do QAT_PCI_BUS_VF=$(basename $(readlink $f))
echo $QAT_PCI_BUS_VF | sudo tee --append /sys/bus/pci/drivers/c6xxvf/unbind
echo $QAT_PCI_BUS_VF | sudo tee --append /sys/bus/pci/drivers/vfio-pci/bind
done
```
### Check Intel® QAT virtual functions are enabled
If the following command returns empty, then the virtual functions are not
properly enabled. This command checks the enumerated device IDs for just the
virtual functions. Using the Intel® QAT as an example, the physical device ID
is `37c8` and the virtual function device ID is `37c9`. The following command
checks if VFs are enabled for any of the currently known Intel® QAT device IDs.
The later `ls` command should show the 16 VFs bound to `VFIO-PCI`.
```bash
$ for i in 0442 0443 37c9 19e3; do lspci -d 8086:$i; done
```
Another way to check is to see which PCI devices `VFIO-PCI` is mapped to.
It should match the device IDs of the VFs.
```bash
$ ls -la /sys/bus/pci/drivers/vfio-pci
```
## Prepare Kata Containers
### Download Kata kernel Source
This example automatically uses the latest Kata kernel supported by Kata. It
follows the instructions from the
[packaging kernel repository](../../tools/packaging/kernel)
and uses the latest Kata kernel
[config](../../tools/packaging/kernel/configs).
There are some patches that must be installed as well, which the
`build-kernel.sh` script should automatically apply. If you are using a
different kernel version, then you might need to manually apply them. Since
the Kata Containers kernel has a minimal set of kernel flags set, you must
create an Intel® QAT kernel fragment with the necessary `CONFIG_CRYPTO_*` options set.
Update the config to set some of the `CRYPTO` flags to enabled. This might
change with different kernel versions. The following instructions were tested
with kernel `v5.4.0-64-generic`.
```bash
$ mkdir -p $GOPATH
$ cd $GOPATH
$ go get -v github.com/kata-containers/kata-containers
$ cat << EOF > $GOPATH/src/github.com/kata-containers/kata-containers/tools/packaging/kernel/configs/fragments/common/qat.conf
CONFIG_PCIEAER=y
CONFIG_UIO=y
CONFIG_CRYPTO_HW=y
CONFIG_CRYPTO_DEV_QAT_C62XVF=m
CONFIG_CRYPTO_CBC=y
CONFIG_MODULES=y
CONFIG_MODULE_SIG=y
CONFIG_CRYPTO_AUTHENC=y
CONFIG_CRYPTO_DH=y
EOF
$ $GOPATH/src/github.com/kata-containers/kata-containers/tools/packaging/kernel/build-kernel.sh setup
```
### Build Kata kernel
```bash
$ cd $GOPATH
$ export LINUX_VER=$(ls -d kata-linux-*)
$ sed -i 's/EXTRAVERSION =/EXTRAVERSION = .qat.container/' $LINUX_VER/Makefile
$ $GOPATH/src/github.com/kata-containers/kata-containers/tools/packaging/kernel/build-kernel.sh build
```
### Copy Kata kernel
```bash
$ export KATA_KERNEL_NAME=vmlinux-${LINUX_VER}_qat
$ mkdir -p $KATA_KERNEL_LOCATION
$ cp ${GOPATH}/${LINUX_VER}/vmlinux ${KATA_KERNEL_LOCATION}/${KATA_KERNEL_NAME}
```
### Prepare Kata root filesystem
These instructions build upon the OS builder instructions located in the
[Developer Guide](../Developer-Guide.md). At this point it is recommended that
[Docker](https://docs.docker.com/engine/install/ubuntu/) is installed first, and
then [Kata-deploy](../../tools/packaging/kata-deploy)
is used to install Kata. This will make sure that the correct `agent` version
is installed into the rootfs in the steps below.
The following instructions use Ubuntu as the root filesystem with systemd as
the init and will add in the `kmod` binary, which is not a standard binary in
a Kata rootfs image. The `kmod` binary is necessary to load the Intel® QAT
kernel modules when the virtual machine rootfs boots.
```bash
$ export OSBUILDER=$GOPATH/src/github.com/kata-containers/kata-containers/tools/osbuilder
$ export ROOTFS_DIR=${OSBUILDER}/rootfs-builder/rootfs
$ export EXTRA_PKGS='kmod'
```
Make sure that the `kata-agent` version matches the installed `kata-runtime`
version. Also make sure the `kata-runtime` install location is in your `PATH`
variable. The following `AGENT_VERSION` can be set manually to match
the `kata-runtime` version if the following commands don't work.
```bash
$ export PATH=$PATH:/opt/kata/bin
$ cd $GOPATH
$ export AGENT_VERSION=$(kata-runtime version | head -n 1 | grep -o "[0-9.]\+")
$ cd ${OSBUILDER}/rootfs-builder
$ sudo rm -rf ${ROOTFS_DIR}
$ script -fec 'sudo -E GOPATH=$GOPATH USE_DOCKER=true SECCOMP=no ./rootfs.sh ubuntu'
```
### Compile Intel® QAT drivers for Kata Containers kernel and add to Kata Containers rootfs
After the Kata Containers kernel builds with the proper configuration flags,
you must build the Intel® QAT drivers against that Kata Containers kernel
version in a similar way they were previously built for the host OS. You must
set the `KERNEL_SOURCE_ROOT` variable to the Kata Containers kernel source
directory and build the Intel® QAT drivers again. The `make` command will
install the Intel® QAT modules into the Kata rootfs.
```bash
$ cd $GOPATH
$ export LINUX_VER=$(ls -d kata*)
$ export KERNEL_MAJOR_VERSION=$(awk '/^VERSION =/{print $NF}' $GOPATH/$LINUX_VER/Makefile)
$ export KERNEL_PATHLEVEL=$(awk '/^PATCHLEVEL =/{print $NF}' $GOPATH/$LINUX_VER/Makefile)
$ export KERNEL_SUBLEVEL=$(awk '/^SUBLEVEL =/{print $NF}' $GOPATH/$LINUX_VER/Makefile)
$ export KERNEL_EXTRAVERSION=$(awk '/^EXTRAVERSION =/{print $NF}' $GOPATH/$LINUX_VER/Makefile)
$ export KERNEL_ROOTFS_DIR=${KERNEL_MAJOR_VERSION}.${KERNEL_PATHLEVEL}.${KERNEL_SUBLEVEL}${KERNEL_EXTRAVERSION}
$ cd $QAT_SRC
$ KERNEL_SOURCE_ROOT=$GOPATH/$LINUX_VER ./configure --enable-icp-sriov=guest
$ sudo -E make all -j $(nproc)
$ sudo -E make INSTALL_MOD_PATH=$ROOTFS_DIR qat-driver-install -j $(nproc)
```
The `usdm_drv` module also needs to be copied into the rootfs modules path and
`depmod` should be run.
```bash
$ sudo cp $QAT_SRC/build/usdm_drv.ko $ROOTFS_DIR/lib/modules/${KERNEL_ROOTFS_DIR}/updates/drivers
$ sudo depmod -a -b ${ROOTFS_DIR} ${KERNEL_ROOTFS_DIR}
$ cd ${OSBUILDER}/image-builder
$ script -fec 'sudo -E USE_DOCKER=true ./image_builder.sh ${ROOTFS_DIR}'
```
> **Note: Ignore any errors on modules.builtin and modules.order when running
> `depmod`.**
### Copy Kata rootfs
```bash
$ mkdir -p $KATA_ROOTFS_LOCATION
$ cp ${OSBUILDER}/image-builder/kata-containers.img $KATA_ROOTFS_LOCATION
```
## Verify Intel® QAT works in a container
The following instructions use an OpenSSL Dockerfile that builds the
Intel® QAT engine to allow OpenSSL to offload crypto functions. It is a
convenient way to test that VFIO device passthrough for the Intel® QAT VFs is
working properly with the Kata Containers VM.
### Build OpenSSL Intel® QAT engine container
Use the OpenSSL Intel® QAT [Dockerfile](https://github.com/intel/intel-device-plugins-for-kubernetes/tree/main/demo/openssl-qat-engine)
to build a container image with an optimized OpenSSL engine for
Intel® QAT. Using `docker build` with the Kata Containers runtime can sometimes
have issues. Therefore, make sure that `runc` is the default Docker container
runtime.
```bash
$ cd $QAT_SRC
$ curl -O $QAT_DOCKERFILE
$ sudo docker build -t openssl-qat-engine .
```
> **Note: The Intel® QAT driver version in this container might not match the
> Intel® QAT driver compiled and loaded on the host when compiling.**
### Test Intel® QAT with the ctr tool
The `ctr` tool can be used to interact with the containerd daemon. It may be
more convenient to use this tool to verify the kernel and image instead of
setting up a Kubernetes cluster. The correct Kata runtimes need to be added
to the containerd `config.toml`. Below is a sample snippet that can be added
to allow QEMU and Cloud Hypervisor (CLH) to work with `ctr`.
```
[plugins.cri.containerd.runtimes.kata-qemu]
runtime_type = "io.containerd.kata-qemu.v2"
privileged_without_host_devices = true
pod_annotations = ["io.katacontainers.*"]
[plugins.cri.containerd.runtimes.kata-qemu.options]
ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration-qemu.toml"
[plugins.cri.containerd.runtimes.kata-clh]
runtime_type = "io.containerd.kata-clh.v2"
privileged_without_host_devices = true
pod_annotations = ["io.katacontainers.*"]
[plugins.cri.containerd.runtimes.kata-clh.options]
ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration-clh.toml"
```
In addition, containerd expects the shim binary to be in `/usr/local/bin`, so
add these small wrapper scripts that redirect to the Kata shim, making it
possible to use either QEMU or Cloud Hypervisor with Kata.
```bash
$ echo '#!/usr/bin/env bash' | sudo tee /usr/local/bin/containerd-shim-kata-qemu-v2
$ echo 'KATA_CONF_FILE=/opt/kata/share/defaults/kata-containers/configuration-qemu.toml /opt/kata/bin/containerd-shim-kata-v2 $@' | sudo tee -a /usr/local/bin/containerd-shim-kata-qemu-v2
$ sudo chmod +x /usr/local/bin/containerd-shim-kata-qemu-v2
$ echo '#!/usr/bin/env bash' | sudo tee /usr/local/bin/containerd-shim-kata-clh-v2
$ echo 'KATA_CONF_FILE=/opt/kata/share/defaults/kata-containers/configuration-clh.toml /opt/kata/bin/containerd-shim-kata-v2 $@' | sudo tee -a /usr/local/bin/containerd-shim-kata-clh-v2
$ sudo chmod +x /usr/local/bin/containerd-shim-kata-clh-v2
```
After the OpenSSL image is built and imported into containerd, an Intel® QAT
virtual function exposed in the step above can be added to the `ctr` command.
Make sure to change the `/dev/vfio` number to one that actually exists on the
host system. When using the `ctr` tool, the `configuration.toml` for Kata needs
to point to the custom Kata kernel and rootfs built above, and the Intel® QAT
modules in the Kata rootfs need to load at boot. The following steps assume that
`kata-deploy` was used to install Kata and QEMU is being tested. If using a
different hypervisor, a different install method for Kata, or a different
Intel® QAT chipset, the commands will need to be modified.
> **Note: The following was tested with
[containerd v1.4.6](https://github.com/containerd/containerd/releases/tag/v1.4.6).**
```bash
$ config_file="/opt/kata/share/defaults/kata-containers/configuration-qemu.toml"
$ sudo sed -i "/kernel =/c kernel = "\"${KATA_ROOTFS_LOCATION}/${KATA_KERNEL_NAME}\""" $config_file
$ sudo sed -i "/image =/c image = "\"${KATA_KERNEL_LOCATION}/kata-containers.img\""" $config_file
$ sudo sed -i -e 's/^kernel_params = "\(.*\)"/kernel_params = "\1 modules-load=usdm_drv,qat_c62xvf"/g' $config_file
$ sudo docker save -o openssl-qat-engine.tar openssl-qat-engine:latest
$ sudo ctr images import openssl-qat-engine.tar
$ sudo ctr run --runtime io.containerd.run.kata-qemu.v2 --privileged -t --rm --device=/dev/vfio/180 --mount type=bind,src=/dev,dst=/dev,options=rbind:rw --mount type=bind,src=${QAT_CONF_LOCATION}/c6xxvf_dev0.conf,dst=/etc/c6xxvf_dev0.conf,options=rbind:rw docker.io/library/openssl-qat-engine:latest bash
```
Below are some commands to run in the container image to verify Intel® QAT is
working:
```sh
root@67561dc2757a/ # cat /proc/modules
qat_c62xvf 16384 - - Live 0xffffffffc00d9000 (OE)
usdm_drv 86016 - - Live 0xffffffffc00e8000 (OE)
intel_qat 249856 - - Live 0xffffffffc009b000 (OE)
root@67561dc2757a/ # adf_ctl restart
Restarting all devices.
Processing /etc/c6xxvf_dev0.conf
root@67561dc2757a/ # adf_ctl status
Checking status of all devices.
There is 1 QAT acceleration device(s) in the system:
qat_dev0 - type: c6xxvf, inst_id: 0, node_id: 0, bsf: 0000:01:01.0, #accel: 1 #engines: 1 state: up
root@67561dc2757a/ # openssl engine -c -t qat-hw
(qat-hw) Reference implementation of QAT crypto engine v0.6.1
[RSA, DSA, DH, AES-128-CBC-HMAC-SHA1, AES-128-CBC-HMAC-SHA256, AES-256-CBC-HMAC-SHA1, AES-256-CBC-HMAC-SHA256, TLS1-PRF, HKDF, X25519, X448]
[ available ]
```
### Test Intel® QAT in Kubernetes
Start a Kubernetes cluster with containerd as the CRI. The host should
already be set up with 16 virtual functions of the Intel® QAT card bound to
`VFIO-PCI`. Verify this by looking in `/dev/vfio` for a listing of devices.
You might need to disable Docker before initializing Kubernetes. Be aware
that the OpenSSL container image built above will need to be exported from
Docker and imported into containerd.
If Kata is installed through [`kata-deploy`](../../tools/packaging/kata-deploy/helm-chart/README.md)
there will be multiple `configuration.toml` files associated with different
hypervisors. Rather than add in the custom Kata kernel, Kata rootfs, and
kernel modules to each `configuration.toml` as the default, instead use
[annotations](../how-to/how-to-load-kernel-modules-with-kata.md)
in the Kubernetes YAML file to tell Kata which kernel and rootfs to use. The
easy way to do this is to use `kata-deploy` which will install the Kata binaries
to `/opt` and properly configure the `/etc/containerd/config.toml` with annotation
support. However, the `configuration.toml` needs to enable support for
annotations as well. The following configures both QEMU and Cloud Hypervisor
`configuration.toml` files that are currently available with Kata Containers
versions 2.0 and higher.
```bash
$ sudo sed -i 's/enable_annotations\s=\s\[\]/enable_annotations = [".*"]/' /opt/kata/share/defaults/kata-containers/configuration-qemu.toml
$ sudo sed -i 's/enable_annotations\s=\s\[\]/enable_annotations = [".*"]/' /opt/kata/share/defaults/kata-containers/configuration-clh.toml
```
Export the OpenSSL image from Docker and import into containerd.
```bash
$ sudo docker save -o openssl-qat-engine.tar openssl-qat-engine:latest
$ sudo ctr -n=k8s.io images import openssl-qat-engine.tar
```
The [Intel® QAT Plugin](https://github.com/intel/intel-device-plugins-for-kubernetes/blob/main/cmd/qat_plugin/README.md)
needs to be started so that the virtual functions can be discovered and
used by Kubernetes.
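One hedged way to do this (the kustomize path and `ref` below are assumptions;
consult the plugin README for the recommended deployment and release tag):
```bash
# Deploy the QAT device plugin DaemonSet.
$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/qat_plugin?ref=main'
# Confirm the node now advertises QAT resources.
$ kubectl describe node | grep qat.intel.com/generic
```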
The following YAML file can be used to start a Kata container with Intel® QAT
support. If Kata is installed with `kata-deploy`, then the containerd
`configuration.toml` should have all of the Kata runtime classes already
populated and annotations supported. To use an Intel® QAT virtual function, the
Intel® QAT plugin needs to be started after the VF's are bound to `VFIO-PCI` as
described [above](#expose-and-bind-intel-qat-virtual-functions-to-vfio-pci-every-reboot).
Edit the following to point to the correct Kata kernel and rootfs location
built with Intel® QAT support.
```bash
$ cat << EOF > kata-openssl-qat.yaml
apiVersion: v1
kind: Pod
metadata:
name: kata-openssl-qat
labels:
app: kata-openssl-qat
annotations:
io.katacontainers.config.hypervisor.kernel: "$KATA_KERNEL_LOCATION/$KATA_KERNEL_NAME"
io.katacontainers.config.hypervisor.image: "$KATA_ROOTFS_LOCATION/kata-containers.img"
io.katacontainers.config.hypervisor.kernel_params: "modules-load=usdm_drv,qat_c62xvf"
spec:
runtimeClassName: kata-qemu
containers:
- name: kata-openssl-qat
image: docker.io/library/openssl-qat-engine:latest
imagePullPolicy: IfNotPresent
resources:
limits:
qat.intel.com/generic: 1
cpu: 1
securityContext:
capabilities:
add: ["IPC_LOCK", "SYS_ADMIN"]
volumeMounts:
- mountPath: /etc/c6xxvf_dev0.conf
name: etc-mount
- mountPath: /dev
name: dev-mount
volumes:
- name: dev-mount
hostPath:
path: /dev
- name: etc-mount
hostPath:
path: $QAT_CONF_LOCATION/c6xxvf_dev0.conf
EOF
```
Use `kubectl` to start the pod. Verify that Intel® QAT card acceleration is
working with the Intel® QAT engine.
```bash
$ kubectl apply -f kata-openssl-qat.yaml
```
```sh
$ kubectl exec -it kata-openssl-qat -- adf_ctl restart
Restarting all devices.
Processing /etc/c6xxvf_dev0.conf
$ kubectl exec -it kata-openssl-qat -- adf_ctl status
Checking status of all devices.
There is 1 QAT acceleration device(s) in the system:
qat_dev0 - type: c6xxvf, inst_id: 0, node_id: 0, bsf: 0000:01:01.0, #accel: 1 #engines: 1 state: up
$ kubectl exec -it kata-openssl-qat -- openssl engine -c -t qat-hw
(qat-hw) Reference implementation of QAT crypto engine v0.6.1
[RSA, DSA, DH, AES-128-CBC-HMAC-SHA1, AES-128-CBC-HMAC-SHA256, AES-256-CBC-HMAC-SHA1, AES-256-CBC-HMAC-SHA256, TLS1-PRF, HKDF, X25519, X448]
[ available ]
```
### Troubleshooting
* Check that `/dev/vfio` has VFs enabled.
```sh
$ ls /dev/vfio
57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 vfio
```
* Check that the modules load when inside the Kata Container.
```sh
bash-5.0# grep -E "qat|usdm_drv" /proc/modules
qat_c62xvf 16384 - - Live 0x0000000000000000 (O)
usdm_drv 86016 - - Live 0x0000000000000000 (O)
intel_qat 184320 - - Live 0x0000000000000000 (O)
```
* Verify that at least the first `c6xxvf_dev0.conf` file mounts inside the
container image in `/etc`. You will need one configuration file for each VF
passed into the container.
```sh
bash-5.0# ls /etc
c6xxvf_dev0.conf c6xxvf_dev11.conf c6xxvf_dev14.conf c6xxvf_dev3.conf c6xxvf_dev6.conf c6xxvf_dev9.conf resolv.conf
c6xxvf_dev1.conf c6xxvf_dev12.conf c6xxvf_dev15.conf c6xxvf_dev4.conf c6xxvf_dev7.conf hostname
c6xxvf_dev10.conf c6xxvf_dev13.conf c6xxvf_dev2.conf c6xxvf_dev5.conf c6xxvf_dev8.conf hosts
```
* Check `dmesg` inside the container to see if there are any issues with the
Intel® QAT driver.
* If there are issues building the OpenSSL Intel® QAT container image, then
check to make sure that `runc` is the default runtime for building containers.
```sh
$ cat /etc/systemd/system/docker.service.d/50-runtime.conf
[Service]
Environment="DOCKER_DEFAULT_RUNTIME=--default-runtime runc"
```
## Optional Scripts
### Verify Intel® QAT card counters are incremented
To check the built-in firmware counters, the Intel® QAT driver has to be compiled
and installed on the host; you can't rely on the built-in host driver. The
counters increase when the accelerator is actively being used. To verify
Intel® QAT is actively accelerating the containerized application, use the
following instructions to check if any of the counters increment. Make
sure to change the PCI device ID to match what's in the system.
```bash
$ for i in 0434 0435 37c8 1f18 1f19; do lspci -d 8086:$i; done
$ sudo watch cat /sys/kernel/debug/qat_c6xx_0000\:b1\:00.0/fw_counters
$ sudo watch cat /sys/kernel/debug/qat_c6xx_0000\:b3\:00.0/fw_counters
$ sudo watch cat /sys/kernel/debug/qat_c6xx_0000\:b5\:00.0/fw_counters
```


@@ -1,3 +1,3 @@
[toolchain]
# Keep in sync with versions.yaml
channel = "1.85.1"
channel = "1.89"

src/agent/Cargo.lock generated

@@ -459,15 +459,9 @@ version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "08807e080ed7f9d5433fa9b275196cfc35414f66a0c79d864dc51a0d825231a3"
dependencies = [
"bit-vec 0.8.0",
"bit-vec",
]
[[package]]
name = "bit-vec"
version = "0.6.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "349f9b6a179ed607305526ca489b34ad0a41aed5f7980fa90eb03160b69598fb"
[[package]]
name = "bit-vec"
version = "0.8.0"
@@ -786,9 +780,9 @@ dependencies = [
[[package]]
name = "container-device-interface"
version = "0.1.0"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "653849f0c250f73d9afab4b2a9a6b07adaee1f34c44ffa6f2d2c3f9392002c1a"
checksum = "2605001b0e8214dae8af146a43ccaa965d960403e330f174c21327154530df8b"
dependencies = [
"anyhow",
"clap",
@@ -1213,9 +1207,9 @@ dependencies = [
[[package]]
name = "fancy-regex"
version = "0.14.0"
version = "0.16.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6e24cb5a94bcae1e5408b0effca5cd7172ea3c5755049c5f3af4cd283a165298"
checksum = "998b056554fbe42e03ae0e152895cd1a7e1002aec800fdc6635d20270260c46f"
dependencies = [
"bit-set",
"regex-automata 0.4.9",
@@ -1250,7 +1244,6 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7ced92e76e966ca2fd84c8f7aa01a4aea65b0eb6648d72f7c8f3e2764a67fece"
dependencies = [
"crc32fast",
"libz-sys",
"miniz_oxide",
]
@@ -2014,9 +2007,9 @@ dependencies = [
[[package]]
name = "jsonschema"
version = "0.30.0"
version = "0.33.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f1b46a0365a611fbf1d2143104dcf910aada96fafd295bab16c60b802bf6fa1d"
checksum = "d46662859bc5f60a145b75f4632fbadc84e829e45df6c5de74cfc8e05acb96b5"
dependencies = [
"ahash 0.8.12",
"base64 0.22.1",
@@ -2266,17 +2259,6 @@ dependencies = [
"uuid 0.8.2",
]
[[package]]
name = "libz-sys"
version = "1.1.22"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8b70e7a7df205e92a1a4cd9aaae7898dac0aa555503cc0a649494d0d60e7651d"
dependencies = [
"cc",
"pkg-config",
"vcpkg",
]
[[package]]
name = "linux-raw-sys"
version = "0.3.8"
@@ -3423,9 +3405,9 @@ dependencies = [
[[package]]
name = "referencing"
version = "0.30.0"
version = "0.33.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c8eff4fa778b5c2a57e85c5f2fe3a709c52f0e60d23146e2151cbef5893f420e"
checksum = "9e9c261f7ce75418b3beadfb3f0eb1299fe8eb9640deba45ffa2cb783098697d"
dependencies = [
"ahash 0.8.12",
"fluent-uri 0.3.2",
@@ -3719,7 +3701,7 @@ dependencies = [
"anyhow",
"async-trait",
"awaitgroup",
"bit-vec 0.6.3",
"bit-vec",
"capctl",
"caps",
"cfg-if",
@@ -4821,12 +4803,6 @@ version = "1.11.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "943ce29a8a743eb10d6082545d861b24f9d1b160b7d741e0f2cdf726bec909c5"
[[package]]
name = "vcpkg"
version = "0.2.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "accd4ea62f7bb7a82fe23066fb0957d48ef677f6eeb8215f372f52e48bb32426"
[[package]]
name = "version_check"
version = "0.9.5"

View File

@@ -186,7 +186,7 @@ base64 = "0.22"
sha2 = "0.10.8"
async-compression = { version = "0.4.22", features = ["tokio", "gzip"] }
container-device-interface = "0.1.0"
container-device-interface = "0.1.1"
[target.'cfg(target_arch = "s390x")'.dependencies]
pv_core = { git = "https://github.com/ibm-s390-linux/s390-tools", rev = "4942504a9a2977d49989a5e5b7c1c8e07dc0fa41", package = "s390_pv_core" }
@@ -206,6 +206,7 @@ lto = true
seccomp = ["rustjail/seccomp"]
standard-oci-runtime = ["rustjail/standard-oci-runtime"]
agent-policy = ["kata-agent-policy"]
init-data = []
[[bin]]
name = "kata-agent"

View File

@@ -41,6 +41,14 @@ ifeq ($(AGENT_POLICY),yes)
override EXTRA_RUSTFEATURES += agent-policy
endif
##VAR INIT_DATA=yes|no define if agent enables the init data feature
INIT_DATA ?= yes
# Enable the init data feature of the Rust build
ifeq ($(INIT_DATA),yes)
override EXTRA_RUSTFEATURES += init-data
endif
include ../../utils.mk
##VAR STANDARD_OCI_RUNTIME=yes|no define if agent enables standard oci runtime feature
@@ -122,7 +130,7 @@ $(TARGET): $(GENERATED_CODE) $(TARGET_PATH)
$(TARGET_PATH): show-summary
@RUSTFLAGS="$(EXTRA_RUSTFLAGS) --deny warnings" cargo build --target $(TRIPLE) $(if $(findstring release,$(BUILD_TYPE)),--release) $(EXTRA_RUSTFEATURES)
$(GENERATED_FILES): %: %.in
$(GENERATED_FILES): %: %.in $(VERSION_FILE)
@sed $(foreach r,$(GENERATED_REPLACEMENTS),-e 's|@$r@|$($r)|g') "$<" > "$@"
##TARGET optimize: optimized build
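
A hedged sketch of how the `INIT_DATA=yes` Makefile knob is expected to surface
in the Rust source via the `init-data` cargo feature; the function below is
illustrative only (the real gating is visible in the initdata hunks further down):
```rust
// Compiled only when the crate is built with `--features init-data`,
// which the Makefile adds to EXTRA_RUSTFEATURES when INIT_DATA=yes.
#[cfg(feature = "init-data")]
fn initdata_enabled() -> bool {
    true
}

// Stub used when the feature is disabled: initdata handling is a no-op.
#[cfg(not(feature = "init-data"))]
fn initdata_enabled() -> bool {
    false
}

fn main() {
    println!("init-data feature enabled: {}", initdata_enabled());
}
```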

View File

@@ -10,7 +10,7 @@ use anyhow::{bail, Result};
use slog::{debug, error, info, warn};
use tokio::io::AsyncWriteExt;
static POLICY_LOG_FILE: &str = "/tmp/policy.txt";
static POLICY_LOG_FILE: &str = "/tmp/policy.jsonl";
static POLICY_DEFAULT_FILE: &str = "/etc/kata-opa/default-policy.rego";
/// Convenience macro to obtain the scope logger
@@ -26,7 +26,7 @@ pub struct AgentPolicy {
/// When true policy errors are ignored, for debug purposes.
allow_failures: bool,
/// "/tmp/policy.txt" log file for policy activity.
/// "/tmp/policy.jsonl" log file for policy activity.
log_file: Option<tokio::fs::File>,
/// Regorus engine
@@ -213,7 +213,7 @@ impl AgentPolicy {
// The Policy text can be obtained directly from the pod YAML.
}
_ => {
let log_entry = format!("[\"ep\":\"{ep}\",{input}],\n\n");
let log_entry = format!("{{\"kind\":\"{ep}\",\"request\":{input}}}\n");
if let Err(e) = log_file.write_all(log_entry.as_bytes()).await {
warn!(sl!(), "policy: log_eval_input: write_all failed: {}", e);
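
With one JSON object per line, the policy log can now be consumed mechanically.
A minimal sketch of reading it back, assuming the `serde_json` crate; the helper
is hypothetical, not part of the agent:
```rust
use serde_json::Value;

// Parse /tmp/policy.jsonl-style content: one {"kind": ..., "request": ...}
// object per line, silently skipping lines that fail to parse.
fn parse_policy_log(log: &str) -> Vec<(String, Value)> {
    log.lines()
        .filter_map(|line| serde_json::from_str::<Value>(line).ok())
        .filter_map(|entry| {
            let kind = entry.get("kind")?.as_str()?.to_string();
            let request = entry.get("request")?.clone();
            Some((kind, request))
        })
        .collect()
}

fn main() {
    let log = r#"{"kind":"CreateContainerRequest","request":{"container_id":"c1"}}"#;
    for (kind, request) in parse_policy_log(log) {
        println!("{kind}: {request}");
    }
}
```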

View File

@@ -44,7 +44,7 @@ async-trait.workspace = true
inotify = "0.9.2"
libseccomp = { version = "0.3.0", optional = true }
zbus = "3.12.0"
bit-vec = "0.6.3"
bit-vec = "0.8.0"
xattr = "0.2.3"
# Local dependencies

View File

@@ -21,7 +21,7 @@ fn to_capshashset(cfd_log: RawFd, capabilities: &Option<HashSet<LinuxCapability>
let binding: HashSet<LinuxCapability> = HashSet::new();
let caps = capabilities.as_ref().unwrap_or(&binding);
for cap in caps.iter() {
match Capability::from_str(&format!("CAP_{}", cap)) {
match Capability::from_str(&format!("CAP_{cap}")) {
Err(_) => {
log_child!(cfd_log, "{} is not a cap", &cap.to_string());
continue;

View File

@@ -1097,7 +1097,7 @@ impl Manager {
devices_group_info
);
Self::setup_allowed_all_mode(pod_cg).with_context(|| {
format!("Setup allowed all devices mode for {}", pod_cpath)
format!("Setup allowed all devices mode for {pod_cpath}")
})?;
devices_group_info.allowed_all = true;
}
@@ -1109,11 +1109,11 @@ impl Manager {
if !is_allowded_all {
Self::setup_devcg_whitelist(pod_cg).with_context(|| {
format!("Setup device cgroup whitelist for {}", pod_cpath)
format!("Setup device cgroup whitelist for {pod_cpath}")
})?;
} else {
Self::setup_allowed_all_mode(pod_cg)
.with_context(|| format!("Setup allowed all mode for {}", pod_cpath))?;
.with_context(|| format!("Setup allowed all mode for {pod_cpath}"))?;
devices_group_info.allowed_all = true;
}
@@ -1132,7 +1132,7 @@ impl Manager {
if let Some(devices_group_info) = devices_group_info.as_ref() {
if !devices_group_info.allowed_all {
Self::setup_devcg_whitelist(&cg)
.with_context(|| format!("Setup device cgroup whitelist for {}", cpath))?;
.with_context(|| format!("Setup device cgroup whitelist for {cpath}"))?;
}
}

View File

@@ -57,8 +57,8 @@ fn parse_parent(slice: String) -> Result<String> {
if subslice.is_empty() {
return Err(anyhow!("invalid slice name: {}", slice));
}
slice_path = format!("{}/{}{}{}", slice_path, prefix, subslice, SLICE_SUFFIX);
prefix = format!("{}{}-", prefix, subslice);
slice_path = format!("{slice_path}/{prefix}{subslice}{SLICE_SUFFIX}");
prefix = format!("{prefix}{subslice}-");
}
slice_path.remove(0);
Ok(slice_path)
@@ -68,9 +68,9 @@ fn get_unit_name(prefix: String, name: String) -> String {
if name.ends_with(SLICE_SUFFIX) {
name
} else if prefix.is_empty() {
format!("{}{}", name, SCOPE_SUFFIX)
format!("{name}{SCOPE_SUFFIX}")
} else {
format!("{}-{}{}", prefix, name, SCOPE_SUFFIX)
format!("{prefix}-{name}{SCOPE_SUFFIX}")
}
}

View File

@@ -346,7 +346,7 @@ pub fn init_child() {
Ok(_) => log_child!(cfd_log, "temporary parent process exit successfully"),
Err(e) => {
log_child!(cfd_log, "temporary parent process exit:child exit: {:?}", e);
let _ = write_sync(cwfd, SYNC_FAILED, format!("{:?}", e).as_str());
let _ = write_sync(cwfd, SYNC_FAILED, format!("{e:?}").as_str());
}
}
}
@@ -544,13 +544,13 @@ fn do_init_child(cwfd: RawFd) -> Result<()> {
sched::setns(fd, s).or_else(|e| {
if s == CloneFlags::CLONE_NEWUSER {
if e != Errno::EINVAL {
let _ = write_sync(cwfd, SYNC_FAILED, format!("{:?}", e).as_str());
let _ = write_sync(cwfd, SYNC_FAILED, format!("{e:?}").as_str());
return Err(e);
}
Ok(())
} else {
let _ = write_sync(cwfd, SYNC_FAILED, format!("{:?}", e).as_str());
let _ = write_sync(cwfd, SYNC_FAILED, format!("{e:?}").as_str());
Err(e)
}
})?;
@@ -685,7 +685,7 @@ fn do_init_child(cwfd: RawFd) -> Result<()> {
let _ = write_sync(
cwfd,
SYNC_FAILED,
format!("setgroups failed: {:?}", e).as_str(),
format!("setgroups failed: {e:?}").as_str(),
);
})?;
}
@@ -808,7 +808,7 @@ fn do_init_child(cwfd: RawFd) -> Result<()> {
if init {
let fd = fcntl::open(
format!("/proc/self/fd/{}", fifofd).as_str(),
format!("/proc/self/fd/{fifofd}").as_str(),
OFlag::O_RDONLY | OFlag::O_CLOEXEC,
Mode::from_bits_truncate(0),
)?;
@@ -1171,14 +1171,14 @@ impl BaseContainer for LinuxContainer {
.stderr(child_stderr)
.env(INIT, format!("{}", p.init))
.env(NO_PIVOT, format!("{}", self.config.no_pivot_root))
.env(CRFD_FD, format!("{}", crfd))
.env(CWFD_FD, format!("{}", cwfd))
.env(CLOG_FD, format!("{}", cfd_log))
.env(CRFD_FD, format!("{crfd}"))
.env(CWFD_FD, format!("{cwfd}"))
.env(CLOG_FD, format!("{cfd_log}"))
.env(CONSOLE_SOCKET_FD, console_name)
.env(PIDNS_ENABLED, format!("{}", pidns.enabled));
if p.init {
child = child.env(FIFO_FD, format!("{}", fifofd));
child = child.env(FIFO_FD, format!("{fifofd}"));
}
if pidns.fd.is_some() {
@@ -1687,7 +1687,7 @@ impl LinuxContainer {
return anyhow!(e).context(format!("container {} already exists", id.as_str()));
}
anyhow!(e).context(format!("fail to create container directory {}", root))
anyhow!(e).context(format!("fail to create container directory {root}"))
})?;
unistd::chown(
@@ -1695,7 +1695,7 @@ impl LinuxContainer {
Some(unistd::getuid()),
Some(unistd::getgid()),
)
.context(format!("Cannot change owner of container {} root", id))?;
.context(format!("Cannot change owner of container {id} root"))?;
let spec = config.spec.as_ref().unwrap();
let linux_cgroups_path = spec

View File

@@ -528,7 +528,7 @@ pub fn pivot_rootfs<P: ?Sized + NixPath + std::fmt::Debug>(path: &P) -> Result<(
// Change to the new root so that the pivot_root actually acts on it.
unistd::fchdir(newroot)?;
pivot_root(".", ".").context(format!("failed to pivot_root on {:?}", path))?;
pivot_root(".", ".").context(format!("failed to pivot_root on {path:?}"))?;
// Currently our "." is oldroot (according to the current kernel code).
// However, purely for safety, we will fchdir(oldroot) since there isn't
@@ -929,7 +929,7 @@ fn create_devices(devices: &[LinuxDevice], bind: bool) -> Result<()> {
for dev in DEFAULT_DEVICES.iter() {
let dev_path = dev.path().display().to_string();
let path = Path::new(&dev_path[1..]);
op(dev, path).context(format!("Creating container device {:?}", dev))?;
op(dev, path).context(format!("Creating container device {dev:?}"))?;
}
for dev in devices {
let dev_path = &dev.path();
@@ -941,9 +941,9 @@ fn create_devices(devices: &[LinuxDevice], bind: bool) -> Result<()> {
anyhow!(msg)
})?;
if let Some(dir) = path.parent() {
fs::create_dir_all(dir).context(format!("Creating container device {:?}", dev))?;
fs::create_dir_all(dir).context(format!("Creating container device {dev:?}"))?;
}
op(dev, path).context(format!("Creating container device {:?}", dev))?;
op(dev, path).context(format!("Creating container device {dev:?}"))?;
}
stat::umask(old);
Ok(())

View File

@@ -18,10 +18,10 @@ pub fn is_enabled() -> Result<bool> {
pub fn add_mount_label(data: &mut String, label: &str) {
if data.is_empty() {
let context = format!("context=\"{}\"", label);
let context = format!("context=\"{label}\"");
data.push_str(&context);
} else {
let context = format!(",context=\"{}\"", label);
let context = format!(",context=\"{label}\"");
data.push_str(&context);
}
}

View File

@@ -68,7 +68,7 @@ mod tests {
#[test]
fn test_from_str() {
let device = Address::from_str("a.1").unwrap();
assert_eq!(format!("{}", device), "0a.0001");
assert_eq!(format!("{device}"), "0a.0001");
assert!(Address::from_str("").is_err());
assert!(Address::from_str(".").is_err());

View File

@@ -102,10 +102,10 @@ mod tests {
fn test_new_device() {
// Valid devices
let device = Device::new(0, 0).unwrap();
assert_eq!(format!("{}", device), "0.0.0000");
assert_eq!(format!("{device}"), "0.0.0000");
let device = Device::new(3, 0xffff).unwrap();
assert_eq!(format!("{}", device), "0.3.ffff");
assert_eq!(format!("{device}"), "0.3.ffff");
// Invalid device
let device = Device::new(4, 0);
@@ -116,13 +116,13 @@ mod tests {
fn test_device_from_str() {
// Valid devices
let device = Device::from_str("0.0.0").unwrap();
assert_eq!(format!("{}", device), "0.0.0000");
assert_eq!(format!("{device}"), "0.0.0000");
let device = Device::from_str("0.0.0000").unwrap();
assert_eq!(format!("{}", device), "0.0.0000");
assert_eq!(format!("{device}"), "0.0.0000");
let device = Device::from_str("0.3.ffff").unwrap();
assert_eq!(format!("{}", device), "0.3.ffff");
assert_eq!(format!("{device}"), "0.3.ffff");
// Invalid devices
let device = Device::from_str("0.0");

View File

@@ -110,7 +110,7 @@ impl CDHClient {
pub async fn get_resource(&self, resource_path: &str) -> Result<Vec<u8>> {
let req = GetResourceRequest {
ResourcePath: format!("kbs://{}", resource_path),
ResourcePath: format!("kbs://{resource_path}"),
..Default::default()
};
let res = self

View File

@@ -260,7 +260,7 @@ impl Default for AgentConfig {
debug_console_vport: 0,
log_vport: 0,
container_pipe_size: DEFAULT_CONTAINER_PIPE_SIZE,
server_addr: format!("{}:{}", VSOCK_ADDR, DEFAULT_AGENT_VSOCK_PORT),
server_addr: format!("{VSOCK_ADDR}:{DEFAULT_AGENT_VSOCK_PORT}"),
passfd_listener_port: 0,
cgroup_no_v1: String::from(""),
unified_cgroup_hierarchy: false,
@@ -269,7 +269,7 @@ impl Default for AgentConfig {
no_proxy: String::from(""),
guest_components_rest_api: GuestComponentsFeatures::default(),
guest_components_procs: GuestComponentsProcs::default(),
secure_storage_integrity: false,
secure_storage_integrity: true,
#[cfg(feature = "agent-policy")]
policy_file: String::from(""),
mem_agent: None,
@@ -417,7 +417,7 @@ impl AgentConfig {
// generate our config from it.
// The agent will fail to start if the configuration file is not present,
// or if it can't be parsed properly.
if param.starts_with(format!("{}=", CONFIG_FILE).as_str()) {
if param.starts_with(format!("{CONFIG_FILE}=").as_str()) {
let config_file = get_string_value(param)?;
return AgentConfig::from_config_file(&config_file)
.context("AgentConfig from kernel cmdline");
@@ -651,7 +651,7 @@ impl AgentConfig {
#[instrument]
pub fn from_config_file(file: &str) -> Result<AgentConfig> {
let config = fs::read_to_string(file)
.with_context(|| format!("Failed to read config file {}", file))?;
.with_context(|| format!("Failed to read config file {file}"))?;
AgentConfig::from_str(&config)
}
@@ -668,7 +668,7 @@ impl AgentConfig {
}
if let Ok(value) = env::var(TRACING_ENV_VAR) {
let name_value = format!("{}={}", TRACING_ENV_VAR, value);
let name_value = format!("{TRACING_ENV_VAR}={value}");
self.tracing = get_bool_value(&name_value).unwrap_or(false);
}
@@ -911,7 +911,7 @@ mod tests {
no_proxy: "",
guest_components_rest_api: GuestComponentsFeatures::default(),
guest_components_procs: GuestComponentsProcs::default(),
secure_storage_integrity: false,
secure_storage_integrity: true,
#[cfg(feature = "agent-policy")]
policy_file: "",
mem_agent: None,
@@ -1364,7 +1364,7 @@ mod tests {
},
TestData {
contents: "",
secure_storage_integrity: false,
secure_storage_integrity: true,
..Default::default()
},
TestData {
@@ -1442,7 +1442,7 @@ mod tests {
// Now, test various combinations of file contents and environment
// variables.
for (i, d) in tests.iter().enumerate() {
let msg = format!("test[{}]: {:?}", i, d);
let msg = format!("test[{i}]: {d:?}");
let file_path = dir.path().join("cmdline");
@@ -1470,40 +1470,36 @@ mod tests {
let config =
AgentConfig::from_cmdline(filename, vec![]).expect("Failed to parse command line");
assert_eq!(d.debug_console, config.debug_console, "{}", msg);
assert_eq!(d.dev_mode, config.dev_mode, "{}", msg);
assert_eq!(d.cgroup_no_v1, config.cgroup_no_v1, "{}", msg);
assert_eq!(d.debug_console, config.debug_console, "{msg}");
assert_eq!(d.dev_mode, config.dev_mode, "{msg}");
assert_eq!(d.cgroup_no_v1, config.cgroup_no_v1, "{msg}");
assert_eq!(
d.unified_cgroup_hierarchy, config.unified_cgroup_hierarchy,
"{}",
msg
"{msg}"
);
assert_eq!(d.log_level, config.log_level, "{}", msg);
assert_eq!(d.hotplug_timeout, config.hotplug_timeout, "{}", msg);
assert_eq!(d.container_pipe_size, config.container_pipe_size, "{}", msg);
assert_eq!(d.server_addr, config.server_addr, "{}", msg);
assert_eq!(d.tracing, config.tracing, "{}", msg);
assert_eq!(d.https_proxy, config.https_proxy, "{}", msg);
assert_eq!(d.no_proxy, config.no_proxy, "{}", msg);
assert_eq!(d.log_level, config.log_level, "{msg}");
assert_eq!(d.hotplug_timeout, config.hotplug_timeout, "{msg}");
assert_eq!(d.container_pipe_size, config.container_pipe_size, "{msg}");
assert_eq!(d.server_addr, config.server_addr, "{msg}");
assert_eq!(d.tracing, config.tracing, "{msg}");
assert_eq!(d.https_proxy, config.https_proxy, "{msg}");
assert_eq!(d.no_proxy, config.no_proxy, "{msg}");
assert_eq!(
d.guest_components_rest_api, config.guest_components_rest_api,
"{}",
msg
"{msg}"
);
assert_eq!(
d.guest_components_procs, config.guest_components_procs,
"{}",
msg
"{msg}"
);
assert_eq!(
d.secure_storage_integrity, config.secure_storage_integrity,
"{}",
msg
"{msg}"
);
#[cfg(feature = "agent-policy")]
assert_eq!(d.policy_file, config.policy_file, "{}", msg);
assert_eq!(d.policy_file, config.policy_file, "{msg}");
assert_eq!(d.mem_agent, config.mem_agent, "{}", msg);
assert_eq!(d.mem_agent, config.mem_agent, "{msg}");
for v in vars_to_unset {
env::remove_var(v);
@@ -1568,7 +1564,7 @@ mod tests {
#[case("panic", Ok(slog::Level::Critical))]
fn test_logrus_to_slog_level(#[case] input: &str, #[case] expected: Result<slog::Level>) {
let result = logrus_to_slog_level(input);
let msg = format!("expected: {:?}, result: {:?}", expected, result);
let msg = format!("expected: {expected:?}, result: {result:?}");
assert_result!(expected, result, msg);
}
@@ -1593,7 +1589,7 @@ mod tests {
#[case("agent.log=panic", Ok(slog::Level::Critical))]
fn test_get_log_level(#[case] input: &str, #[case] expected: Result<slog::Level>) {
let result = get_log_level(input);
let msg = format!("expected: {:?}, result: {:?}", expected, result);
let msg = format!("expected: {expected:?}, result: {result:?}");
assert_result!(expected, result, msg);
}
@@ -1636,7 +1632,7 @@ Caused by:
#[case("agent.cdi_timeout=320", Ok(time::Duration::from_secs(320)))]
fn test_timeout(#[case] param: &str, #[case] expected: Result<time::Duration>) {
let result = get_timeout(param);
let msg = format!("expected: {:?}, result: {:?}", expected, result);
let msg = format!("expected: {expected:?}, result: {result:?}");
assert_result!(expected, result, msg);
}
@@ -1676,7 +1672,7 @@ Caused by:
)))]
fn test_get_container_pipe_size(#[case] param: &str, #[case] expected: Result<i32>) {
let result = get_container_pipe_size(param);
let msg = format!("expected: {:?}, result: {:?}", expected, result);
let msg = format!("expected: {expected:?}, result: {result:?}");
assert_result!(expected, result, msg);
}
@@ -1697,7 +1693,7 @@ Caused by:
#[case("x= = ", Ok(" = ".into()))]
fn test_get_string_value(#[case] param: &str, #[case] expected: Result<String>) {
let result = get_string_value(param);
let msg = format!("expected: {:?}, result: {:?}", expected, result);
let msg = format!("expected: {expected:?}, result: {result:?}");
assert_result!(expected, result, msg);
}
@@ -1716,7 +1712,7 @@ Caused by:
#[case] expected: Result<GuestComponentsFeatures>,
) {
let result = get_guest_components_features_value(input);
let msg = format!("expected: {:?}, result: {:?}", expected, result);
let msg = format!("expected: {expected:?}, result: {result:?}");
assert_result!(expected, result, msg);
}
@@ -1739,7 +1735,7 @@ Caused by:
#[case] expected: Result<GuestComponentsProcs>,
) {
let result = get_guest_components_procs_value(param);
let msg = format!("expected: {:?}, result: {:?}", expected, result);
let msg = format!("expected: {expected:?}, result: {result:?}");
assert_result!(expected, result, msg);
}

View File

@@ -16,7 +16,10 @@ use crate::pci;
use crate::sandbox::Sandbox;
use crate::uevent::{wait_for_uevent, Uevent, UeventMatcher};
use anyhow::{anyhow, Context, Result};
use kata_types::device::{DRIVER_BLK_CCW_TYPE, DRIVER_BLK_MMIO_TYPE, DRIVER_BLK_PCI_TYPE};
#[cfg(target_arch = "s390x")]
use kata_types::device::DRIVER_BLK_CCW_TYPE;
use kata_types::device::{DRIVER_BLK_MMIO_TYPE, DRIVER_BLK_PCI_TYPE};
use protocols::agent::Device;
use regex::Regex;
use std::path::Path;
@@ -28,6 +31,7 @@ use tracing::instrument;
#[derive(Debug)]
pub struct VirtioBlkPciDeviceHandler {}
#[cfg(target_arch = "s390x")]
#[derive(Debug)]
pub struct VirtioBlkCcwDeviceHandler {}
@@ -52,6 +56,7 @@ impl DeviceHandler for VirtioBlkPciDeviceHandler {
}
}
#[cfg(target_arch = "s390x")]
#[async_trait::async_trait]
impl DeviceHandler for VirtioBlkCcwDeviceHandler {
#[instrument]
@@ -164,7 +169,7 @@ pub struct VirtioBlkPciMatcher {
impl VirtioBlkPciMatcher {
pub fn new(relpath: &str) -> VirtioBlkPciMatcher {
let root_bus = create_pci_root_bus_path();
let re = format!(r"^{}{}/virtio[0-9]+/block/", root_bus, relpath);
let re = format!(r"^{root_bus}{relpath}/virtio[0-9]+/block/");
VirtioBlkPciMatcher {
rex: Regex::new(&re).expect("BUG: failed to compile VirtioBlkPciMatcher regex"),
@@ -186,7 +191,7 @@ pub struct VirtioBlkMmioMatcher {
impl VirtioBlkMmioMatcher {
pub fn new(devname: &str) -> VirtioBlkMmioMatcher {
VirtioBlkMmioMatcher {
suffix: format!(r"/block/{}", devname),
suffix: format!(r"/block/{devname}"),
}
}
}
@@ -206,10 +211,8 @@ pub struct VirtioBlkCCWMatcher {
#[cfg(target_arch = "s390x")]
impl VirtioBlkCCWMatcher {
pub fn new(root_bus_path: &str, device: &ccw::Device) -> Self {
let re = format!(
r"^{}/0\.[0-3]\.[0-9a-f]{{1,4}}/{}/virtio[0-9]+/block/",
root_bus_path, device
);
let re =
format!(r"^{root_bus_path}/0\.[0-3]\.[0-9a-f]{{1,4}}/{device}/virtio[0-9]+/block/");
VirtioBlkCCWMatcher {
rex: Regex::new(&re).expect("BUG: failed to compile VirtioBlkCCWMatcher regex"),
}
@@ -238,12 +241,12 @@ mod tests {
uev_a.action = crate::linux_abi::U_EVENT_ACTION_ADD.to_string();
uev_a.subsystem = BLOCK.to_string();
uev_a.devname = devname.to_string();
uev_a.devpath = format!("{}{}/virtio4/block/{}", root_bus, relpath_a, devname);
uev_a.devpath = format!("{root_bus}{relpath_a}/virtio4/block/{devname}");
let matcher_a = VirtioBlkPciMatcher::new(relpath_a);
let mut uev_b = uev_a.clone();
let relpath_b = "/0000:00:0a.0/0000:00:0b.0";
uev_b.devpath = format!("{}{}/virtio0/block/{}", root_bus, relpath_b, devname);
uev_b.devpath = format!("{root_bus}{relpath_b}/virtio0/block/{devname}");
let matcher_b = VirtioBlkPciMatcher::new(relpath_b);
assert!(matcher_a.is_match(&uev_a));
@@ -264,10 +267,7 @@ mod tests {
uev.action = crate::linux_abi::U_EVENT_ACTION_ADD.to_string();
uev.subsystem = subsystem.to_string();
uev.devname = devname.to_string();
uev.devpath = format!(
"{}/0.0.0001/{}/virtio1/{}/{}",
root_bus, relpath, subsystem, devname
);
uev.devpath = format!("{root_bus}/0.0.0001/{relpath}/virtio1/{subsystem}/{devname}");
// Valid path
let device = ccw::Device::from_str(relpath).unwrap();
@@ -275,40 +275,25 @@ mod tests {
assert!(matcher.is_match(&uev));
// Invalid paths
uev.devpath = format!(
"{}/0.0.0001/0.0.0003/virtio1/{}/{}",
root_bus, subsystem, devname
);
uev.devpath = format!("{root_bus}/0.0.0001/0.0.0003/virtio1/{subsystem}/{devname}");
assert!(!matcher.is_match(&uev));
uev.devpath = format!("0.0.0001/{}/virtio1/{}/{}", relpath, subsystem, devname);
uev.devpath = format!("0.0.0001/{relpath}/virtio1/{subsystem}/{devname}");
assert!(!matcher.is_match(&uev));
uev.devpath = format!(
"{}/0.0.0001/{}/virtio/{}/{}",
root_bus, relpath, subsystem, devname
);
uev.devpath = format!("{root_bus}/0.0.0001/{relpath}/virtio/{subsystem}/{devname}");
assert!(!matcher.is_match(&uev));
uev.devpath = format!("{}/0.0.0001/{}/virtio1", root_bus, relpath);
uev.devpath = format!("{root_bus}/0.0.0001/{relpath}/virtio1");
assert!(!matcher.is_match(&uev));
uev.devpath = format!(
"{}/1.0.0001/{}/virtio1/{}/{}",
root_bus, relpath, subsystem, devname
);
uev.devpath = format!("{root_bus}/1.0.0001/{relpath}/virtio1/{subsystem}/{devname}");
assert!(!matcher.is_match(&uev));
uev.devpath = format!(
"{}/0.4.0001/{}/virtio1/{}/{}",
root_bus, relpath, subsystem, devname
);
uev.devpath = format!("{root_bus}/0.4.0001/{relpath}/virtio1/{subsystem}/{devname}");
assert!(!matcher.is_match(&uev));
uev.devpath = format!(
"{}/0.0.10000/{}/virtio1/{}/{}",
root_bus, relpath, subsystem, devname
);
uev.devpath = format!("{root_bus}/0.0.10000/{relpath}/virtio1/{subsystem}/{devname}");
assert!(!matcher.is_match(&uev));
}
@@ -321,17 +306,13 @@ mod tests {
uev_a.action = crate::linux_abi::U_EVENT_ACTION_ADD.to_string();
uev_a.subsystem = BLOCK.to_string();
uev_a.devname = devname_a.to_string();
uev_a.devpath = format!(
"/sys/devices/virtio-mmio-cmdline/virtio-mmio.0/virtio0/block/{}",
devname_a
);
uev_a.devpath =
format!("/sys/devices/virtio-mmio-cmdline/virtio-mmio.0/virtio0/block/{devname_a}");
let matcher_a = VirtioBlkMmioMatcher::new(devname_a);
let mut uev_b = uev_a.clone();
uev_b.devpath = format!(
"/sys/devices/virtio-mmio-cmdline/virtio-mmio.4/virtio4/block/{}",
devname_b
);
uev_b.devpath =
format!("/sys/devices/virtio-mmio-cmdline/virtio-mmio.4/virtio4/block/{devname_b}");
let matcher_b = VirtioBlkMmioMatcher::new(devname_b);
assert!(matcher_a.is_match(&uev_a));

View File

@@ -425,7 +425,7 @@ pub fn update_env_pci(
let mut guest_addrs = Vec::<String>::new();
for host_addr_str in val.split(',') {
let host_addr = pci::Address::from_str(host_addr_str)
.with_context(|| format!("Can't parse {} environment variable", name))?;
.with_context(|| format!("Can't parse {name} environment variable"))?;
let host_guest = pcimap
.get(cid)
.ok_or_else(|| anyhow!("No PCI mapping found for container {}", cid))?;
@@ -433,11 +433,11 @@ pub fn update_env_pci(
.get(&host_addr)
.ok_or_else(|| anyhow!("Unable to translate host PCI address {}", host_addr))?;
guest_addrs.push(format!("{}", guest_addr));
addr_map.insert(host_addr_str.to_string(), format!("{}", guest_addr));
guest_addrs.push(format!("{guest_addr}"));
addr_map.insert(host_addr_str.to_string(), format!("{guest_addr}"));
}
pci_dev_map.insert(format!("{}_INFO", name), addr_map);
pci_dev_map.insert(format!("{name}_INFO"), addr_map);
envvar.replace_range(eqpos + 1.., guest_addrs.join(",").as_str());
}
@@ -526,7 +526,7 @@ fn update_spec_devices(
"Missing devices in OCI spec: {:?}",
updates
.keys()
.map(|d| format!("{:?}", d))
.map(|d| format!("{d:?}"))
.collect::<Vec<_>>()
.join(" ")
));
@@ -572,7 +572,7 @@ pub fn pcipath_to_sysfs(root_bus_sysfs: &str, pcipath: &pci::Path) -> Result<Str
for i in 0..pcipath.len() {
let bdf = format!("{}:{}", bus, pcipath[i]);
relpath = format!("{}/{}", relpath, bdf);
relpath = format!("{relpath}/{bdf}");
if i == pcipath.len() - 1 {
// Final device need not be a bridge
@@ -580,7 +580,7 @@ pub fn pcipath_to_sysfs(root_bus_sysfs: &str, pcipath: &pci::Path) -> Result<Str
}
// Find out the bus exposed by bridge
let bridgebuspath = format!("{}{}/pci_bus", root_bus_sysfs, relpath);
let bridgebuspath = format!("{root_bus_sysfs}{relpath}/pci_bus");
let mut files: Vec<_> = fs::read_dir(&bridgebuspath)?.collect();
match files.pop() {
@@ -1120,7 +1120,7 @@ mod tests {
// Create mock sysfs files to indicate that 0000:00:02.0 is a bridge to bus 01
let bridge2bus = "0000:01";
let bus2path = format!("{}/pci_bus/{}", bridge2path, bridge2bus);
let bus2path = format!("{bridge2path}/pci_bus/{bridge2bus}");
fs::create_dir_all(bus2path).unwrap();
@@ -1134,9 +1134,9 @@ mod tests {
assert!(relpath.is_err());
// Create mock sysfs files for a bridge at 0000:01:03.0 to bus 02
let bridge3path = format!("{}/0000:01:03.0", bridge2path);
let bridge3path = format!("{bridge2path}/0000:01:03.0");
let bridge3bus = "0000:02";
let bus3path = format!("{}/pci_bus/{}", bridge3path, bridge3bus);
let bus3path = format!("{bridge3path}/pci_bus/{bridge3bus}");
fs::create_dir_all(bus3path).unwrap();
@@ -1169,7 +1169,7 @@ mod tests {
let devname = "vda";
let root_bus = create_pci_root_bus_path();
let relpath = "/0000:00:0a.0/0000:03:0b.0";
let devpath = format!("{}{}/virtio4/block/{}", root_bus, relpath, devname);
let devpath = format!("{root_bus}{relpath}/virtio4/block/{devname}");
let mut uev = crate::uevent::Uevent::default();
uev.action = crate::linux_abi::U_EVENT_ACTION_ADD.to_string();
@@ -1272,7 +1272,7 @@ mod tests {
cdi_timeout,
)
.await;
println!("modfied spec {:?}", spec);
println!("modfied spec {spec:?}");
assert!(res.is_ok(), "{}", res.err().unwrap());
let linux = spec.linux().as_ref().unwrap();

View File

@@ -69,7 +69,7 @@ impl NetPciMatcher {
let root_bus = create_pci_root_bus_path();
NetPciMatcher {
devpath: format!("{}{}", root_bus, relpath),
devpath: format!("{root_bus}{relpath}"),
}
}
}
@@ -106,10 +106,7 @@ struct NetCcwMatcher {
#[cfg(target_arch = "s390x")]
impl NetCcwMatcher {
pub fn new(root_bus_path: &str, device: &ccw::Device) -> Self {
let re = format!(
r"{}/0\.[0-3]\.[0-9a-f]{{1,4}}/{}/virtio[0-9]+/net/",
root_bus_path, device
);
let re = format!(r"{root_bus_path}/0\.[0-3]\.[0-9a-f]{{1,4}}/{device}/virtio[0-9]+/net/");
NetCcwMatcher {
re: Regex::new(&re).expect("BUG: failed to compile NetCCWMatcher regex"),
}
@@ -139,7 +136,7 @@ mod tests {
let mut uev_a = crate::uevent::Uevent::default();
uev_a.action = crate::linux_abi::U_EVENT_ACTION_ADD.to_string();
uev_a.devpath = format!("{}{}", root_bus, relpath_a);
uev_a.devpath = format!("{root_bus}{relpath_a}");
uev_a.subsystem = String::from("net");
uev_a.interface = String::from("eth0");
let matcher_a = NetPciMatcher::new(relpath_a);
@@ -147,7 +144,7 @@ mod tests {
let relpath_b = "/0000:00:02.0/0000:01:02.0";
let mut uev_b = uev_a.clone();
uev_b.devpath = format!("{}{}", root_bus, relpath_b);
uev_b.devpath = format!("{root_bus}{relpath_b}");
let matcher_b = NetPciMatcher::new(relpath_b);
assert!(matcher_a.is_match(&uev_a));
@@ -158,7 +155,7 @@ mod tests {
let relpath_c = "/0000:00:02.0/0000:01:03.0";
let net_substr = "/net/eth0";
let mut uev_c = uev_a.clone();
uev_c.devpath = format!("{}{}{}", root_bus, relpath_c, net_substr);
uev_c.devpath = format!("{root_bus}{relpath_c}{net_substr}");
let matcher_c = NetPciMatcher::new(relpath_c);
assert!(matcher_c.is_match(&uev_c));

View File

@@ -67,7 +67,7 @@ pub struct PmemBlockMatcher {
impl PmemBlockMatcher {
pub fn new(devname: &str) -> PmemBlockMatcher {
let suffix = format!(r"/block/{}", devname);
let suffix = format!(r"/block/{devname}");
PmemBlockMatcher { suffix }
}

View File

@@ -58,7 +58,7 @@ pub struct ScsiBlockMatcher {
impl ScsiBlockMatcher {
pub fn new(scsi_addr: &str) -> ScsiBlockMatcher {
let search = format!(r"/0:0:{}/block/", scsi_addr);
let search = format!(r"/0:0:{scsi_addr}/block/");
ScsiBlockMatcher { search }
}
@@ -118,18 +118,14 @@ mod tests {
uev_a.action = crate::linux_abi::U_EVENT_ACTION_ADD.to_string();
uev_a.subsystem = BLOCK.to_string();
uev_a.devname = devname.to_string();
uev_a.devpath = format!(
"{}/0000:00:00.0/virtio0/host0/target0:0:0/0:0:{}/block/sda",
root_bus, addr_a
);
uev_a.devpath =
format!("{root_bus}/0000:00:00.0/virtio0/host0/target0:0:0/0:0:{addr_a}/block/sda");
let matcher_a = ScsiBlockMatcher::new(addr_a);
let mut uev_b = uev_a.clone();
let addr_b = "2:0";
uev_b.devpath = format!(
"{}/0000:00:00.0/virtio0/host0/target0:0:2/0:0:{}/block/sdb",
root_bus, addr_b
);
uev_b.devpath =
format!("{root_bus}/0000:00:00.0/virtio0/host0/target0:0:2/0:0:{addr_b}/block/sdb");
let matcher_b = ScsiBlockMatcher::new(addr_b);
assert!(matcher_a.is_match(&uev_a));

View File

@@ -170,7 +170,7 @@ pub struct VfioMatcher {
impl VfioMatcher {
pub fn new(grp: IommuGroup) -> VfioMatcher {
VfioMatcher {
syspath: format!("/devices/virtual/vfio/{}", grp),
syspath: format!("/devices/virtual/vfio/{grp}"),
}
}
}
@@ -215,7 +215,7 @@ impl PciMatcher {
pub fn new(relpath: &str) -> Result<PciMatcher> {
let root_bus = create_pci_root_bus_path();
Ok(PciMatcher {
devpath: format!("{}{}", root_bus, relpath),
devpath: format!("{root_bus}{relpath}"),
})
}
}
@@ -425,12 +425,12 @@ mod tests {
let mut uev_a = crate::uevent::Uevent::default();
uev_a.action = crate::linux_abi::U_EVENT_ACTION_ADD.to_string();
uev_a.devname = format!("vfio/{}", grpa);
uev_a.devpath = format!("/devices/virtual/vfio/{}", grpa);
uev_a.devname = format!("vfio/{grpa}");
uev_a.devpath = format!("/devices/virtual/vfio/{grpa}");
let matcher_a = VfioMatcher::new(grpa);
let mut uev_b = uev_a.clone();
uev_b.devpath = format!("/devices/virtual/vfio/{}", grpb);
uev_b.devpath = format!("/devices/virtual/vfio/{grpb}");
let matcher_b = VfioMatcher::new(grpb);
assert!(matcher_a.is_match(&uev_a));
@@ -531,12 +531,12 @@ mod tests {
async fn test_vfio_ap_matcher() {
let subsystem = "ap";
let card = "0a";
let relpath = format!("{}.0001", card);
let relpath = format!("{card}.0001");
let mut uev = Uevent::default();
uev.action = U_EVENT_ACTION_ADD.to_string();
uev.subsystem = subsystem.to_string();
uev.devpath = format!("{}/card{}/{}", AP_ROOT_BUS_PATH, card, relpath);
uev.devpath = format!("{AP_ROOT_BUS_PATH}/card{card}/{relpath}");
let ap_address = ap::Address::from_str(&relpath).unwrap();
let matcher = ApMatcher::new(ap_address);
@@ -548,7 +548,7 @@ mod tests {
assert!(!matcher.is_match(&uev_remove));
let mut uev_other_device = uev.clone();
uev_other_device.devpath = format!("{}/card{}/{}.0002", AP_ROOT_BUS_PATH, card, card);
uev_other_device.devpath = format!("{AP_ROOT_BUS_PATH}/card{card}/{card}.0002");
assert!(!matcher.is_match(&uev_other_device));
}
}

View File

@@ -9,6 +9,7 @@
// SPDX-License-Identifier: Apache-2.0
//
#[cfg(feature = "init-data")]
use std::{os::unix::fs::FileTypeExt, path::Path};
use anyhow::{bail, Context, Result};
@@ -37,8 +38,24 @@ pub const AA_CONFIG_PATH: &str = concatcp!(INITDATA_PATH, "/aa.toml");
pub const CDH_CONFIG_PATH: &str = concatcp!(INITDATA_PATH, "/cdh.toml");
/// Magic number of initdata device
#[cfg(feature = "init-data")]
pub const INITDATA_MAGIC_NUMBER: &[u8] = b"initdata";
/// initdata device with disk type 'vd*'
#[cfg(feature = "init-data")]
const INITDATA_PREFIX_DISK_VDX: &str = "vd";
/// initdata device with disk type 'sd*'
#[cfg(feature = "init-data")]
const INITDATA_PREFIX_DISK_SDX: &str = "sd";
#[cfg(not(feature = "init-data"))]
async fn detect_initdata_device(logger: &Logger) -> Result<Option<String>> {
debug!(logger, "Initdata is disabled");
Ok(None)
}
#[cfg(feature = "init-data")]
async fn detect_initdata_device(logger: &Logger) -> Result<Option<String>> {
let dev_dir = Path::new("/dev");
let mut read_dir = tokio::fs::read_dir(dev_dir).await?;
@@ -46,9 +63,15 @@ async fn detect_initdata_device(logger: &Logger) -> Result<Option<String>> {
let filename = entry.file_name();
let filename = filename.to_string_lossy();
debug!(logger, "Initdata check device `{filename}`");
if !filename.starts_with("vd") {
// Currently two disk types are supported:
// virtio-blk (vd*) and virtio-scsi (sd*)
if !filename.starts_with(INITDATA_PREFIX_DISK_VDX)
&& !filename.starts_with(INITDATA_PREFIX_DISK_SDX)
{
continue;
}
let path = entry.path();
debug!(logger, "Initdata find potential device: `{path:?}`");

View File

@@ -301,12 +301,12 @@ async fn real_main(init_mode: bool) -> std::result::Result<(), Box<dyn std::erro
tracer::end_tracing();
}
eprintln!("{} shutdown complete", NAME);
eprintln!("{NAME} shutdown complete");
let mut wait_errors: Vec<tokio::task::JoinError> = vec![];
for result in results {
if let Err(e) = result {
eprintln!("wait task error: {:#?}", e);
eprintln!("wait task error: {e:#?}");
wait_errors.push(e);
}
}
@@ -746,7 +746,7 @@ mod tests {
skip_if_root!();
}
let msg = format!("test[{}]: {:?}", i, d);
let msg = format!("test[{i}]: {d:?}");
let (rfd, wfd) = unistd::pipe2(OFlag::O_CLOEXEC).unwrap();
defer!({
// XXX: Never try to close rfd, because it will be closed by PipeStream in
@@ -759,7 +759,7 @@ mod tests {
shutdown_tx.send(true).unwrap();
let result = create_logger_task(rfd, d.vsock_port, shutdown_rx).await;
let msg = format!("{}, result: {:?}", msg, result);
let msg = format!("{msg}, result: {result:?}");
assert_result!(d.result, result, msg);
}
}

View File

@@ -239,7 +239,7 @@ fn update_guest_metrics() {
Ok(kernel_stats) => {
set_gauge_vec_cpu_time(&GUEST_CPU_TIME, "total", &kernel_stats.total);
for (i, cpu_time) in kernel_stats.cpu_time.iter().enumerate() {
set_gauge_vec_cpu_time(&GUEST_CPU_TIME, format!("{}", i).as_str(), cpu_time);
set_gauge_vec_cpu_time(&GUEST_CPU_TIME, format!("{i}").as_str(), cpu_time);
}
}
}

View File

@@ -88,7 +88,7 @@ pub fn baremount(
let destination_str = destination.to_string_lossy();
if let Ok(m) = get_linux_mount_info(destination_str.deref()) {
if m.fs_type == fs_type {
if m.fs_type == fs_type && !flags.contains(MsFlags::MS_REMOUNT) {
slog_info!(logger, "{source:?} is already mounted at {destination:?}");
return Ok(());
}
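
The added `MS_REMOUNT` condition matters because a remount must reach mount(2)
even when the target is already mounted with the right filesystem type. A hedged
illustration using `nix::mount` (target path is an example, not from the diff):
```rust
use nix::mount::{mount, MsFlags};

// For MS_REMOUNT the kernel ignores source and fstype and only updates
// the mount flags in place (here: flipping the mount read-only), so the
// "already mounted" early return must not swallow such requests.
fn remount_readonly(target: &str) -> nix::Result<()> {
    mount(
        None::<&str>,
        target,
        None::<&str>,
        MsFlags::MS_REMOUNT | MsFlags::MS_RDONLY,
        None::<&str>,
    )
}

fn main() {
    // Requires root and an existing mount at the target.
    if let Err(e) = remount_readonly("/mnt/test") {
        eprintln!("remount failed: {e}");
    }
}
```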
@@ -177,7 +177,7 @@ pub fn get_mount_fs_type_from_file(mount_file: &str, mount_point: &str) -> Resul
let content = fs::read_to_string(mount_file)
.map_err(|e| anyhow!("read mount file {}: {}", mount_file, e))?;
let re = Regex::new(format!("device .+ mounted on {} with fstype (.+)", mount_point).as_str())?;
let re = Regex::new(format!("device .+ mounted on {mount_point} with fstype (.+)").as_str())?;
// Read the file line by line using the lines() iterator from std::io::BufRead.
for line in content.lines() {
@@ -336,11 +336,17 @@ mod tests {
let plain = slog_term::PlainSyncDecorator::new(std::io::stdout());
let logger = Logger::root(slog_term::FullFormat::new(plain).build().fuse(), o!());
// Detect actual filesystem types mounted in this environment
// Z runners mount /dev as tmpfs, while normal systems use devtmpfs
let dev_fs_type = get_mount_fs_type("/dev").unwrap_or_else(|_| String::from("devtmpfs"));
let proc_fs_type = get_mount_fs_type("/proc").unwrap_or_else(|_| String::from("proc"));
let sys_fs_type = get_mount_fs_type("/sys").unwrap_or_else(|_| String::from("sysfs"));
let test_cases = [
("dev", "/dev", "devtmpfs"),
("udev", "/dev", "devtmpfs"),
("proc", "/proc", "proc"),
("sysfs", "/sys", "sysfs"),
("dev", "/dev", dev_fs_type.as_str()),
("udev", "/dev", dev_fs_type.as_str()),
("proc", "/proc", proc_fs_type.as_str()),
("sysfs", "/sys", sys_fs_type.as_str()),
];
for &(source, destination, fs_type) in &test_cases {
@@ -349,8 +355,7 @@ mod tests {
let flags = MsFlags::MS_RDONLY;
let options = "mode=755";
println!(
"testing if already mounted baremount({:?} {:?} {:?})",
source, destination, fs_type
"testing if already mounted baremount({source:?} {destination:?} {fs_type:?})"
);
assert!(baremount(source, destination, fs_type, flags, options, &logger).is_ok());
}
@@ -381,6 +386,22 @@ mod tests {
let drain = slog::Discard;
let logger = slog::Logger::root(drain, o!());
// Detect filesystem type of root directory
let tmp_fs_type = get_mount_fs_type("/").unwrap_or_else(|_| String::from("unknown"));
// Error messages that vary based on filesystem type
const DEFAULT_ERROR_EPERM: &str = "Operation not permitted";
const BTRFS_ERROR_ENODEV: &str = "No such device";
// Helper to select error message based on filesystem type (e.g. btrfs for s390x runners)
let get_error_msg = |default: &'static str, btrfs_specific: &'static str| -> &'static str {
if tmp_fs_type == "btrfs" && !btrfs_specific.is_empty() {
btrfs_specific
} else {
default
}
};
let tests = &[
TestData {
test_user: TestUserType::Any,
@@ -416,7 +437,7 @@ mod tests {
fs_type: "bind",
flags: MsFlags::empty(),
options: "bind",
error_contains: "Operation not permitted",
error_contains: get_error_msg(DEFAULT_ERROR_EPERM, BTRFS_ERROR_ENODEV),
},
TestData {
test_user: TestUserType::NonRootOnly,
@@ -439,7 +460,7 @@ mod tests {
];
for (i, d) in tests.iter().enumerate() {
let msg = format!("test[{}]: {:?}", i, d);
let msg = format!("test[{i}]: {d:?}");
skip_loop_by_user!(msg, d.test_user);
@@ -483,7 +504,7 @@ mod tests {
let result = baremount(src, dest, d.fs_type, d.flags, d.options, &logger);
let msg = format!("{}: result: {:?}", msg, result);
let msg = format!("{msg}: result: {result:?}");
if d.error_contains.is_empty() {
assert!(result.is_ok(), "{}", msg);
@@ -495,8 +516,15 @@ mod tests {
}
let err = result.unwrap_err();
let error_msg = format!("{}", err);
assert!(error_msg.contains(d.error_contains), "{}", msg);
let error_msg = format!("{err}");
assert!(
error_msg.contains(d.error_contains),
"{}: expected error containing '{}', got '{}'",
msg,
d.error_contains,
error_msg
);
}
}
@@ -590,11 +618,11 @@ mod tests {
];
for (i, d) in tests.iter().enumerate() {
let msg = format!("test[{}]: {:?}", i, d);
let msg = format!("test[{i}]: {d:?}");
let result = remove_mounts(&d.mounts);
let msg = format!("{}: result: {:?}", msg, result);
let msg = format!("{msg}: result: {result:?}");
if d.error_contains.is_empty() {
assert!(result.is_ok(), "{}", msg);
@@ -670,15 +698,15 @@ mod tests {
.iter()
.enumerate()
{
let msg = format!("missing mount file test[{}] with mountpoint: {}", i, mp);
let msg = format!("missing mount file test[{i}] with mountpoint: {mp}");
let result = get_mount_fs_type_from_file("", mp);
let err = result.unwrap_err();
let msg = format!("{}: error: {}", msg, err);
let msg = format!("{msg}: error: {err}");
assert!(
format!("{}", err).contains("No such file or directory"),
format!("{err}").contains("No such file or directory"),
"{}",
msg
);
@@ -686,7 +714,7 @@ mod tests {
// Now, test various combinations of file contents
for (i, d) in tests.iter().enumerate() {
let msg = format!("test[{}]: {:?}", i, d);
let msg = format!("test[{i}]: {d:?}");
let file_path = dir.path().join("mount_stats");
@@ -703,7 +731,7 @@ mod tests {
let result = get_mount_fs_type_from_file(filename, d.mount_point);
// add more details if an assertion fails
let msg = format!("{}: result: {:?}", msg, result);
let msg = format!("{msg}: result: {result:?}");
if d.error_contains.is_empty() {
let fs_type = result.unwrap();
@@ -846,7 +874,7 @@ mod tests {
);
for (i, d) in tests.iter().enumerate() {
let msg = format!("test[{}]: {:?}", i, d);
let msg = format!("test[{i}]: {d:?}");
let file_path = dir.path().join("cgroups");
let filename = file_path
@@ -860,7 +888,7 @@ mod tests {
.unwrap_or_else(|_| panic!("{}: failed to write file contents", msg));
let result = get_cgroup_mounts(&logger, filename, false);
let msg = format!("{}: result: {:?}", msg, result);
let msg = format!("{msg}: result: {result:?}");
if !d.error_contains.is_empty() {
assert!(result.is_err(), "{}", msg);
@@ -945,7 +973,7 @@ mod tests {
];
for (i, d) in tests.iter().enumerate() {
let msg = format!("test[{}]: {:?}", i, d);
let msg = format!("test[{i}]: {d:?}");
skip_loop_by_user!(msg, d.test_user);
let drain = slog::Discard;
@@ -982,7 +1010,7 @@ mod tests {
nix::mount::umount(&dest).unwrap();
}
let msg = format!("{}: result: {:?}", msg, result);
let msg = format!("{msg}: result: {result:?}");
if d.error_contains.is_empty() {
assert!(result.is_ok(), "{}", msg);
} else {
@@ -1033,14 +1061,14 @@ mod tests {
];
for (i, d) in tests.iter().enumerate() {
let msg = format!("test[{}]: {:?}", i, d);
let msg = format!("test[{i}]: {d:?}");
let result = parse_mount_options(&d.options_vec).unwrap();
let msg = format!("{}: result: {:?}", msg, result);
let msg = format!("{msg}: result: {result:?}");
let expected_result = (d.result.0, d.result.1.to_owned());
assert_eq!(expected_result, result, "{}", msg);
assert_eq!(expected_result, result, "{msg}");
}
}
}

View File

@@ -41,9 +41,9 @@ pub enum LinkFilter<'a> {
impl fmt::Display for LinkFilter<'_> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
LinkFilter::Name(name) => write!(f, "Name: {}", name),
LinkFilter::Index(idx) => write!(f, "Index: {}", idx),
LinkFilter::Address(addr) => write!(f, "Address: {}", addr),
LinkFilter::Name(name) => write!(f, "Name: {name}"),
LinkFilter::Index(idx) => write!(f, "Index: {idx}"),
LinkFilter::Address(addr) => write!(f, "Address: {addr}"),
}
}
}
@@ -272,7 +272,7 @@ impl Handle {
use LinkAttribute as Nla;
let mac_addr = parse_mac_address(addr)
.with_context(|| format!("Failed to parse MAC address: {}", addr))?;
.with_context(|| format!("Failed to parse MAC address: {addr}"))?;
// Hardware filter might not be supported by netlink,
// we may have to dump link list and then find the target link.
@@ -401,11 +401,10 @@ impl Handle {
}
if let RouteAttribute::Oif(index) = attribute {
route.device = self
.find_link(LinkFilter::Index(*index))
.await
.context(format!("error looking up device {index}"))?
.name();
route.device = match self.find_link(LinkFilter::Index(*index)).await {
Ok(link) => link.name(),
Err(_) => String::new(),
};
}
}
@@ -922,6 +921,18 @@ mod tests {
const TEST_DUMMY_INTERFACE: &str = "dummy_for_arp";
const TEST_ARP_IP: &str = "192.0.2.127";
/// Helper function to check if the result is a netlink EACCES error
fn is_netlink_permission_error<T>(result: &Result<T>) -> bool {
if let Err(e) = result {
let error_string = format!("{e:?}");
if error_string.contains("code: Some(-13)") {
println!("INFO: skipping test - netlink operations are restricted in this environment (EACCES)");
return true;
}
}
false
}
#[tokio::test]
async fn find_link_by_name() {
let message = Handle::new()
@@ -989,14 +1000,10 @@ mod tests {
.unwrap()
.list_routes()
.await
.context(format!("available devices: {:?}", devices))
.context(format!("available devices: {devices:?}"))
.expect("Failed to list routes");
assert_ne!(all.len(), 0);
for r in &all {
assert_ne!(r.device.len(), 0);
}
}
#[tokio::test]
@@ -1045,10 +1052,14 @@ mod tests {
let lo = handle.find_link(LinkFilter::Name("lo")).await.unwrap();
for network in list {
handle
.add_addresses(lo.index(), iter::once(network))
.await
.expect("Failed to add IP");
let result = handle.add_addresses(lo.index(), iter::once(network)).await;
// Skip test if netlink operations are restricted (EACCES = -13)
if is_netlink_permission_error(&result) {
return;
}
result.expect("Failed to add IP");
// Make sure the address is there
let result = handle
@@ -1063,10 +1074,14 @@ mod tests {
assert!(result.is_some());
// Update it
handle
.add_addresses(lo.index(), iter::once(network))
.await
.expect("Failed to delete address");
let result = handle.add_addresses(lo.index(), iter::once(network)).await;
// Skip test if netlink operations are restricted (EACCES = -13)
if is_netlink_permission_error(&result) {
return;
}
result.expect("Failed to delete address");
}
}
@@ -1171,7 +1186,7 @@ mod tests {
);
assert_eq!(
stdout.trim(),
format!("{} lladdr {} PERMANENT", TEST_ARP_IP, mac)
format!("{TEST_ARP_IP} lladdr {mac} PERMANENT")
);
clean_env_for_test_add_one_arp_neighbor(TEST_DUMMY_INTERFACE, TEST_ARP_IP);

View File

@@ -191,13 +191,13 @@ mod tests {
fn test_slotfn() {
// Valid slots
let sf = SlotFn::new(0x00, 0x0).unwrap();
assert_eq!(format!("{}", sf), "00.0");
assert_eq!(format!("{sf}"), "00.0");
let sf = SlotFn::from_str("00.0").unwrap();
assert_eq!(format!("{}", sf), "00.0");
assert_eq!(format!("{sf}"), "00.0");
let sf = SlotFn::from_str("00").unwrap();
assert_eq!(format!("{}", sf), "00.0");
assert_eq!(format!("{sf}"), "00.0");
let sf = SlotFn::new(31, 7).unwrap();
let sf2 = SlotFn::from_str("1f.7").unwrap();
@@ -256,12 +256,12 @@ mod tests {
let sf1f_7 = SlotFn::new(0x1f, 7).unwrap();
let addr = Address::new(0, 0, sf0_0);
assert_eq!(format!("{}", addr), "0000:00:00.0");
assert_eq!(format!("{addr}"), "0000:00:00.0");
let addr2 = Address::from_str("0000:00:00.0").unwrap();
assert_eq!(addr, addr2);
let addr = Address::new(0xffff, 0xff, sf1f_7);
assert_eq!(format!("{}", addr), "ffff:ff:1f.7");
assert_eq!(format!("{addr}"), "ffff:ff:1f.7");
let addr2 = Address::from_str("ffff:ff:1f.7").unwrap();
assert_eq!(addr, addr2);
@@ -299,7 +299,7 @@ mod tests {
// Valid paths
let pcipath = Path::new(vec![sf3_0]).unwrap();
assert_eq!(format!("{}", pcipath), "03.0");
assert_eq!(format!("{pcipath}"), "03.0");
let pcipath2 = Path::from_str("03.0").unwrap();
assert_eq!(pcipath, pcipath2);
let pcipath2 = Path::from_str("03").unwrap();
@@ -308,7 +308,7 @@ mod tests {
assert_eq!(pcipath[0], sf3_0);
let pcipath = Path::new(vec![sf3_0, sf4_0]).unwrap();
assert_eq!(format!("{}", pcipath), "03.0/04.0");
assert_eq!(format!("{pcipath}"), "03.0/04.0");
let pcipath2 = Path::from_str("03.0/04.0").unwrap();
assert_eq!(pcipath, pcipath2);
let pcipath2 = Path::from_str("03/04").unwrap();
@@ -318,7 +318,7 @@ mod tests {
assert_eq!(pcipath[1], sf4_0);
let pcipath = Path::new(vec![sf3_0, sf4_0, sf5_0]).unwrap();
assert_eq!(format!("{}", pcipath), "03.0/04.0/05.0");
assert_eq!(format!("{pcipath}"), "03.0/04.0/05.0");
let pcipath2 = Path::from_str("03.0/04.0/05.0").unwrap();
assert_eq!(pcipath, pcipath2);
let pcipath2 = Path::from_str("03/04/05").unwrap();
@@ -329,7 +329,7 @@ mod tests {
assert_eq!(pcipath[2], sf5_0);
let pcipath = Path::new(vec![sfa_5, sfb_6, sfc_7]).unwrap();
assert_eq!(format!("{}", pcipath), "0a.5/0b.6/0c.7");
assert_eq!(format!("{pcipath}"), "0a.5/0b.6/0c.7");
let pcipath2 = Path::from_str("0a.5/0b.6/0c.7").unwrap();
assert_eq!(pcipath, pcipath2);
assert_eq!(pcipath.len(), 3);

View File

@@ -59,10 +59,26 @@ pub fn reseed_rng(data: &[u8]) -> Result<()> {
#[cfg(test)]
mod tests {
use super::*;
use nix::errno::Errno;
use std::fs::File;
use std::io::prelude::*;
use test_utils::skip_if_not_root;
/// Helper function to check if the result is an EPERM error
fn is_permission_error(result: &Result<()>) -> bool {
if let Err(e) = result {
if let Some(errno) = e.downcast_ref::<Errno>() {
if *errno == Errno::EPERM {
println!(
"EPERM: skipping test - reseeding RNG is not permitted in this environment"
);
return true;
}
}
}
false
}
#[test]
fn test_reseed_rng() {
skip_if_not_root!();
@@ -73,6 +89,9 @@ mod tests {
// Ensure the buffer was filled.
assert!(n == POOL_SIZE);
let ret = reseed_rng(&seed);
if is_permission_error(&ret) {
return;
}
assert!(ret.is_ok());
}
@@ -85,6 +104,9 @@ mod tests {
// Ensure the buffer was filled.
assert!(n == POOL_SIZE);
let ret = reseed_rng(&seed);
if is_permission_error(&ret) {
return;
}
if nix::unistd::Uid::effective().is_root() {
assert!(ret.is_ok());
} else {

View File

@@ -72,7 +72,7 @@ use crate::network::setup_guest_dns;
use crate::passfd_io;
use crate::pci;
use crate::random;
use crate::sandbox::Sandbox;
use crate::sandbox::{Sandbox, SandboxError};
use crate::storage::{add_storages, update_ephemeral_mounts, STORAGE_HANDLERS};
use crate::util;
use crate::version::{AGENT_VERSION, API_VERSION};
@@ -138,7 +138,17 @@ fn sl() -> slog::Logger {
// Convenience function to wrap an error and response to ttrpc client
pub fn ttrpc_error(code: ttrpc::Code, err: impl Debug) -> ttrpc::Error {
get_rpc_status(code, format!("{:?}", err))
get_rpc_status(code, format!("{err:?}"))
}
/// Convert SandboxError to ttrpc error with appropriate code.
/// Process not found errors map to NOT_FOUND, others to INVALID_ARGUMENT.
fn sandbox_err_to_ttrpc(err: SandboxError) -> ttrpc::Error {
let code = match &err {
SandboxError::InitProcessNotFound | SandboxError::InvalidExecId => ttrpc::Code::NOT_FOUND,
SandboxError::InvalidContainerId => ttrpc::Code::INVALID_ARGUMENT,
};
ttrpc_error(code, err)
}
#[cfg(not(feature = "agent-policy"))]
@@ -460,7 +470,9 @@ impl AgentService {
let mut sig: libc::c_int = req.signal as libc::c_int;
{
let mut sandbox = self.sandbox.lock().await;
let p = sandbox.find_container_process(cid.as_str(), eid.as_str())?;
let p = sandbox
.find_container_process(cid.as_str(), eid.as_str())
.map_err(sandbox_err_to_ttrpc)?;
// For container initProcess, if it hasn't installed handler for "SIGTERM" signal,
// it will ignore the "SIGTERM" signal sent to it, thus send it "SIGKILL" signal
// instead of "SIGTERM" to terminate it.
@@ -568,7 +580,9 @@ impl AgentService {
let (exit_send, mut exit_recv) = tokio::sync::mpsc::channel(100);
let exit_rx = {
let mut sandbox = self.sandbox.lock().await;
let p = sandbox.find_container_process(cid.as_str(), eid.as_str())?;
let p = sandbox
.find_container_process(cid.as_str(), eid.as_str())
.map_err(sandbox_err_to_ttrpc)?;
p.exit_watchers.push(exit_send);
pid = p.pid;
@@ -665,7 +679,9 @@ impl AgentService {
let term_exit_notifier;
let reader = {
let mut sandbox = self.sandbox.lock().await;
let p = sandbox.find_container_process(cid.as_str(), eid.as_str())?;
let p = sandbox
.find_container_process(cid.as_str(), eid.as_str())
.map_err(sandbox_err_to_ttrpc)?;
term_exit_notifier = p.term_exit_notifier.clone();
@@ -947,12 +963,7 @@ impl agent_ttrpc::AgentService for AgentService {
let p = sandbox
.find_container_process(cid.as_str(), eid.as_str())
.map_err(|e| {
ttrpc_error(
ttrpc::Code::INVALID_ARGUMENT,
format!("invalid argument: {:?}", e),
)
})?;
.map_err(sandbox_err_to_ttrpc)?;
p.close_stdin().await;
@@ -970,12 +981,7 @@ impl agent_ttrpc::AgentService for AgentService {
let mut sandbox = self.sandbox.lock().await;
let p = sandbox
.find_container_process(req.container_id(), req.exec_id())
.map_err(|e| {
ttrpc_error(
ttrpc::Code::UNAVAILABLE,
format!("invalid argument: {:?}", e),
)
})?;
.map_err(sandbox_err_to_ttrpc)?;
let fd = p
.term_master
@@ -990,7 +996,7 @@ impl agent_ttrpc::AgentService for AgentService {
let err = unsafe { libc::ioctl(fd, TIOCSWINSZ, &win) };
Errno::result(err)
.map(drop)
.map_ttrpc_err(|e| format!("ioctl error: {:?}", e))?;
.map_ttrpc_err(|e| format!("ioctl error: {e:?}"))?;
Ok(Empty::new())
}
@@ -1014,20 +1020,20 @@ impl agent_ttrpc::AgentService for AgentService {
#[cfg(not(target_arch = "s390x"))]
{
let pcipath = pci::Path::from_str(&interface.devicePath).map_ttrpc_err(|e| {
format!("Unexpected pci-path for network interface: {:?}", e)
format!("Unexpected pci-path for network interface: {e:?}")
})?;
wait_for_pci_net_interface(&self.sandbox, &pcipath)
.await
.map_ttrpc_err(|e| format!("interface not available: {:?}", e))?;
.map_ttrpc_err(|e| format!("interface not available: {e:?}"))?;
}
#[cfg(target_arch = "s390x")]
{
let ccw_dev = ccw::Device::from_str(&interface.devicePath).map_ttrpc_err(|e| {
format!("Unexpected CCW path for network interface: {:?}", e)
format!("Unexpected CCW path for network interface: {e:?}")
})?;
wait_for_ccw_net_interface(&self.sandbox, &ccw_dev)
.await
.map_ttrpc_err(|e| format!("interface not available: {:?}", e))?;
.map_ttrpc_err(|e| format!("interface not available: {e:?}"))?;
}
}
@@ -1037,7 +1043,7 @@ impl agent_ttrpc::AgentService for AgentService {
.rtnl
.update_interface(&interface)
.await
.map_ttrpc_err(|e| format!("update interface: {:?}", e))?;
.map_ttrpc_err(|e| format!("update interface: {e:?}"))?;
Ok(interface)
}
@@ -1062,13 +1068,13 @@ impl agent_ttrpc::AgentService for AgentService {
.rtnl
.update_routes(new_routes)
.await
.map_ttrpc_err(|e| format!("Failed to update routes: {:?}", e))?;
.map_ttrpc_err(|e| format!("Failed to update routes: {e:?}"))?;
let list = sandbox
.rtnl
.list_routes()
.await
.map_ttrpc_err(|e| format!("Failed to list routes after update: {:?}", e))?;
.map_ttrpc_err(|e| format!("Failed to list routes after update: {e:?}"))?;
Ok(protocols::agent::Routes {
Routes: list,
@@ -1086,7 +1092,7 @@ impl agent_ttrpc::AgentService for AgentService {
update_ephemeral_mounts(sl(), &req.storages, &self.sandbox)
.await
.map_ttrpc_err(|e| format!("Failed to update mounts: {:?}", e))?;
.map_ttrpc_err(|e| format!("Failed to update mounts: {e:?}"))?;
Ok(Empty::new())
}
@@ -1237,7 +1243,7 @@ impl agent_ttrpc::AgentService for AgentService {
.rtnl
.list_interfaces()
.await
.map_ttrpc_err(|e| format!("Failed to list interfaces: {:?}", e))?;
.map_ttrpc_err(|e| format!("Failed to list interfaces: {e:?}"))?;
Ok(protocols::agent::Interfaces {
Interfaces: list,
@@ -1260,7 +1266,7 @@ impl agent_ttrpc::AgentService for AgentService {
.rtnl
.list_routes()
.await
.map_ttrpc_err(|e| format!("list routes: {:?}", e))?;
.map_ttrpc_err(|e| format!("list routes: {e:?}"))?;
Ok(protocols::agent::Routes {
Routes: list,
@@ -1377,7 +1383,7 @@ impl agent_ttrpc::AgentService for AgentService {
.rtnl
.add_arp_neighbors(neighs)
.await
.map_ttrpc_err(|e| format!("Failed to add ARP neighbours: {:?}", e))?;
.map_ttrpc_err(|e| format!("Failed to add ARP neighbours: {e:?}"))?;
Ok(Empty::new())
}
@@ -1597,7 +1603,7 @@ impl agent_ttrpc::AgentService for AgentService {
ma.memcg_set_config_async(mem_agent_memcgconfig_to_memcg_optionconfig(&config))
.await
.map_err(|e| {
let estr = format!("ma.memcg_set_config_async fail: {}", e);
let estr = format!("ma.memcg_set_config_async fail: {e}");
error!(sl(), "{}", estr);
ttrpc::Error::RpcStatus(ttrpc::get_status(ttrpc::Code::INTERNAL, estr))
})?;
@@ -1621,7 +1627,7 @@ impl agent_ttrpc::AgentService for AgentService {
ma.compact_set_config_async(mem_agent_compactconfig_to_compact_optionconfig(&config))
.await
.map_err(|e| {
let estr = format!("ma.compact_set_config_async fail: {}", e);
let estr = format!("ma.compact_set_config_async fail: {e}");
error!(sl(), "{}", estr);
ttrpc::Error::RpcStatus(ttrpc::get_status(ttrpc::Code::INTERNAL, estr))
})?;
@@ -2233,10 +2239,8 @@ fn load_kernel_module(module: &protocols::agent::KernelModule) -> Result<()> {
Some(code) => {
let std_out = String::from_utf8_lossy(&output.stdout);
let std_err = String::from_utf8_lossy(&output.stderr);
let msg = format!(
"load_kernel_module return code: {} stdout:{} stderr:{}",
code, std_out, std_err
);
let msg =
format!("load_kernel_module return code: {code} stdout:{std_out} stderr:{std_err}");
Err(anyhow!(msg))
}
None => Err(anyhow!("Process terminated by signal")),
@@ -2481,6 +2485,26 @@ mod tests {
// normally this module should exist...
m.name = "bridge".to_string();
let result = load_kernel_module(&m);
// Skip test if loading kernel modules is not permitted
// or kernel module is not found
if let Err(e) = &result {
let error_string = format!("{e:?}");
// Let's print out the error message first
println!("DEBUG: error: {error_string}");
if error_string.contains("Operation not permitted")
|| error_string.contains("EPERM")
|| error_string.contains("Permission denied")
{
println!("INFO: skipping test - loading kernel modules is not permitted in this environment");
return;
}
if error_string.contains("not found") {
println!("INFO: skipping test - kernel module is not found in this environment");
return;
}
}
assert!(result.is_ok(), "load module should success");
}
@@ -2609,12 +2633,12 @@ mod tests {
},
TestData {
create_container: false,
result: Err(anyhow!(crate::sandbox::ERR_INVALID_CONTAINER_ID)),
result: Err(anyhow!(crate::sandbox::SandboxError::InvalidContainerId)),
..Default::default()
},
TestData {
container_id: "8181",
result: Err(anyhow!(crate::sandbox::ERR_INVALID_CONTAINER_ID)),
result: Err(anyhow!(crate::sandbox::SandboxError::InvalidContainerId)),
..Default::default()
},
TestData {
@@ -2628,7 +2652,7 @@ mod tests {
];
for (i, d) in tests.iter().enumerate() {
let msg = format!("test[{}]: {:?}", i, d);
let msg = format!("test[{i}]: {d:?}");
let logger = slog::Logger::root(slog::Discard, o!());
let mut sandbox = Sandbox::new(&logger).unwrap();
@@ -2699,7 +2723,7 @@ mod tests {
// the fd will be closed on Process's dropping.
// unistd::close(wfd).unwrap();
let msg = format!("{}, result: {:?}", msg, result);
let msg = format!("{msg}, result: {result:?}");
assert_result!(d.result, result, msg);
}
}
@@ -2803,7 +2827,7 @@ mod tests {
];
for (i, d) in tests.iter().enumerate() {
let msg = format!("test[{}]: {:?}", i, d);
let msg = format!("test[{i}]: {d:?}");
let logger = slog::Logger::root(slog::Discard, o!());
let mut sandbox = Sandbox::new(&logger).unwrap();
@@ -2823,15 +2847,14 @@ mod tests {
let result = update_container_namespaces(&sandbox, &mut oci, d.use_sandbox_pidns);
let msg = format!("{}, result: {:?}", msg, result);
let msg = format!("{msg}, result: {result:?}");
assert_result!(d.result, result, msg);
if let Some(linux) = oci.linux() {
assert_eq!(
d.expected_namespaces,
linux.namespaces().clone().unwrap(),
"{}",
msg
"{msg}"
);
}
}
@@ -2924,7 +2947,7 @@ mod tests {
];
for (i, d) in tests.iter().enumerate() {
let msg = format!("test[{}]: {:?}", i, d);
let msg = format!("test[{i}]: {d:?}");
let dir = tempdir().expect("failed to make tempdir");
let block_size_path = dir.path().join("block_size_bytes");
@@ -2944,7 +2967,7 @@ mod tests {
hotplug_probe_path.to_str().unwrap(),
);
let msg = format!("{}, result: {:?}", msg, result);
let msg = format!("{msg}, result: {result:?}");
assert_result!(d.result, result, msg);
}
@@ -3054,7 +3077,7 @@ OtherField:other
];
for (i, d) in tests.iter().enumerate() {
let msg = format!("test[{}]: {:?}", i, d);
let msg = format!("test[{i}]: {d:?}");
let dir = tempdir().expect("failed to make tempdir");
let proc_status_file_path = dir.path().join("status");
@@ -3065,9 +3088,9 @@ OtherField:other
let result = is_signal_handled(proc_status_file_path.to_str().unwrap(), d.signum);
let msg = format!("{}, result: {:?}", msg, result);
let msg = format!("{msg}, result: {result:?}");
assert_eq!(d.result, result, "{}", msg);
assert_eq!(d.result, result, "{msg}");
}
}
@@ -3366,9 +3389,9 @@ COMMIT
];
for (i, d) in tests.iter().enumerate() {
let msg = format!("test[{}]: {:?}", i, d);
let msg = format!("test[{i}]: {d:?}");
let result = is_sealed_secret_path(d.source_path);
assert_eq!(d.result, result, "{}", msg);
assert_eq!(d.result, result, "{msg}");
}
}
}


@@ -32,6 +32,7 @@ use rustjail::container::BaseContainer;
use rustjail::container::LinuxContainer;
use rustjail::process::Process;
use slog::Logger;
use thiserror::Error;
use tokio::sync::mpsc::{channel, Receiver, Sender};
use tokio::sync::oneshot;
use tokio::sync::Mutex;
@@ -47,7 +48,16 @@ use crate::storage::StorageDeviceGeneric;
use crate::uevent::{Uevent, UeventMatcher};
use crate::watcher::BindWatcher;
pub const ERR_INVALID_CONTAINER_ID: &str = "Invalid container id";
/// Errors that can occur when looking up processes in the sandbox.
#[derive(Debug, Error)]
pub enum SandboxError {
#[error("Invalid container id")]
InvalidContainerId,
#[error("Process not found: init process missing")]
InitProcessNotFound,
#[error("Process not found: invalid exec id")]
InvalidExecId,
}
type UeventWatcher = (Box<dyn UeventMatcher>, oneshot::Sender<Uevent>);
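The bare string constant gives way to a typed thiserror enum: each variant's #[error(...)] attribute becomes its Display output, and call sites can match on variants instead of comparing strings. A self-contained sketch of the pattern (assuming the thiserror crate, which the new import pulls in):

```
use thiserror::Error;

// Same shape as the new SandboxError: the #[error(...)] attribute
// supplies each variant's Display text.
#[derive(Debug, Error)]
pub enum SandboxError {
    #[error("Invalid container id")]
    InvalidContainerId,
    #[error("Process not found: invalid exec id")]
    InvalidExecId,
}

fn main() {
    let e = SandboxError::InvalidContainerId;
    assert_eq!(e.to_string(), "Invalid container id");
    // Callers can now match on the variant rather than on a string.
    match e {
        SandboxError::InvalidContainerId => eprintln!("no such container"),
        SandboxError::InvalidExecId => eprintln!("no such exec"),
    }
}
```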
@@ -248,7 +258,7 @@ impl Sandbox {
}
let mut pid_ns = Namespace::new(&self.logger).get_pid();
pid_ns.path = format!("/proc/{}/ns/pid", init_pid);
pid_ns.path = format!("/proc/{init_pid}/ns/pid");
self.sandbox_pidns = Some(pid_ns);
}
@@ -282,10 +292,14 @@ impl Sandbox {
None
}
pub fn find_container_process(&mut self, cid: &str, eid: &str) -> Result<&mut Process> {
pub fn find_container_process(
&mut self,
cid: &str,
eid: &str,
) -> Result<&mut Process, SandboxError> {
let ctr = self
.get_container(cid)
.ok_or_else(|| anyhow!(ERR_INVALID_CONTAINER_ID))?;
.ok_or(SandboxError::InvalidContainerId)?;
if eid.is_empty() {
let init_pid = ctr.init_process_pid;
@@ -293,10 +307,11 @@ impl Sandbox {
.processes
.values_mut()
.find(|p| p.pid == init_pid)
.ok_or_else(|| anyhow!("cannot find init process!"));
.ok_or(SandboxError::InitProcessNotFound);
}
ctr.get_process(eid).map_err(|_| anyhow!("Invalid exec id"))
ctr.get_process(eid)
.map_err(|_| SandboxError::InvalidExecId)
}
#[instrument]
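Because SandboxError implements std::error::Error via thiserror, the narrowed Result<&mut Process, SandboxError> still converts into anyhow::Error for free at callers that stay on anyhow. A minimal sketch with illustrative names:

```
use anyhow::Result;
use thiserror::Error;

#[derive(Debug, Error)]
enum SandboxError {
    #[error("Invalid container id")]
    InvalidContainerId,
}

// Stand-in for a lookup with the new, typed signature.
fn find_process(found: bool) -> Result<u32, SandboxError> {
    if found { Ok(42) } else { Err(SandboxError::InvalidContainerId) }
}

// An anyhow-based caller: `?` converts the typed error through anyhow's
// blanket From<E: std::error::Error + Send + Sync + 'static> impl.
fn handler() -> Result<u32> {
    Ok(find_process(false)?)
}

fn main() {
    eprintln!("{:?}", handler().unwrap_err());
}
```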
@@ -711,8 +726,7 @@ mod tests {
let ref_count = new_storage.ref_count().await;
assert_eq!(
ref_count, 1,
"Invalid refcount, got {} expected 1.",
ref_count
"Invalid refcount, got {ref_count} expected 1."
);
// Use the existing sandbox storage
@@ -723,8 +737,7 @@ mod tests {
let ref_count = new_storage.ref_count().await;
assert_eq!(
ref_count, 2,
"Invalid refcount, got {} expected 2.",
ref_count
"Invalid refcount, got {ref_count} expected 2."
);
}
@@ -811,11 +824,7 @@ mod tests {
// Reference counter should decrement to 1.
let storage = &s.storages[storage_path];
let refcount = storage.ref_count().await;
assert_eq!(
refcount, 1,
"Invalid refcount, got {} expected 1.",
refcount
);
assert_eq!(refcount, 1, "Invalid refcount, got {refcount} expected 1.");
assert!(
s.remove_sandbox_storage(storage_path).await.unwrap(),
@@ -959,7 +968,7 @@ mod tests {
assert!(s.sandbox_pidns.is_some());
let ns_path = format!("/proc/{}/ns/pid", test_pid);
let ns_path = format!("/proc/{test_pid}/ns/pid");
assert_eq!(s.sandbox_pidns.unwrap().path, ns_path);
}
@@ -1271,7 +1280,7 @@ mod tests {
let tmpdir_path = tmpdir.path().to_str().unwrap();
for (i, d) in tests.iter().enumerate() {
let current_test_dir_path = format!("{}/test_{}", tmpdir_path, i);
let current_test_dir_path = format!("{tmpdir_path}/test_{i}");
fs::create_dir(&current_test_dir_path).unwrap();
// create numbered directories and fill using root name
@@ -1280,7 +1289,7 @@ mod tests {
"{}/{}{}",
current_test_dir_path, d.directory_autogen_name, j
);
let subfile_path = format!("{}/{}", subdir_path, SYSFS_ONLINE_FILE);
let subfile_path = format!("{subdir_path}/{SYSFS_ONLINE_FILE}");
fs::create_dir(&subdir_path).unwrap();
let mut subfile = File::create(subfile_path).unwrap();
subfile.write_all(b"0").unwrap();
@@ -1307,18 +1316,15 @@ mod tests {
result.is_ok()
);
assert_eq!(result.is_ok(), d.result.is_ok(), "{}", msg);
assert_eq!(result.is_ok(), d.result.is_ok(), "{msg}");
if d.result.is_ok() {
let test_result_val = *d.result.as_ref().ok().unwrap();
let result_val = result.ok().unwrap();
msg = format!(
"test[{}]: {:?}, expected {}, actual {}",
i, d, test_result_val, result_val
);
msg = format!("test[{i}]: {d:?}, expected {test_result_val}, actual {result_val}");
assert_eq!(test_result_val, result_val, "{}", msg);
assert_eq!(test_result_val, result_val, "{msg}");
}
}
}


@@ -44,7 +44,7 @@ async fn handle_sigchild(logger: Logger, sandbox: Arc<Mutex<Sandbox>>) -> Result
if let Some(pid) = wait_status.pid() {
let raw_pid = pid.as_raw();
let child_pid = format!("{}", raw_pid);
let child_pid = format!("{raw_pid}");
let logger = logger.new(o!("child-pid" => child_pid));


@@ -11,9 +11,10 @@ use std::str::FromStr;
use std::sync::Arc;
use anyhow::{anyhow, Context, Result};
#[cfg(target_arch = "s390x")]
use kata_types::device::DRIVER_BLK_CCW_TYPE;
use kata_types::device::{
DRIVER_BLK_CCW_TYPE, DRIVER_BLK_MMIO_TYPE, DRIVER_BLK_PCI_TYPE, DRIVER_NVDIMM_TYPE,
DRIVER_SCSI_TYPE,
DRIVER_BLK_MMIO_TYPE, DRIVER_BLK_PCI_TYPE, DRIVER_NVDIMM_TYPE, DRIVER_SCSI_TYPE,
};
use kata_types::mount::StorageDevice;
use protocols::agent::Storage;
@@ -93,9 +94,11 @@ impl StorageHandler for VirtioBlkPciHandler {
}
}
#[cfg(target_arch = "s390x")]
#[derive(Debug)]
pub struct VirtioBlkCcwHandler {}
#[cfg(target_arch = "s390x")]
#[async_trait::async_trait]
impl StorageHandler for VirtioBlkCcwHandler {
#[instrument]
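This hunk narrows DRIVER_BLK_CCW_TYPE to s390x builds and gates the CCW handler the same way, so other targets neither compile the code nor warn about an unused import. A minimal sketch of the pattern, with illustrative stand-ins:

```
// Gate the target-specific constant and every item that uses it behind
// the same cfg; the names below are stand-ins, not the kata-types API.
#[cfg(target_arch = "s390x")]
const DRIVER_BLK_CCW_TYPE: &str = "blk-ccw";

#[cfg(target_arch = "s390x")]
#[derive(Debug)]
#[allow(dead_code)]
pub struct VirtioBlkCcwHandler;

fn main() {
    #[cfg(target_arch = "s390x")]
    println!("CCW handler enabled for {DRIVER_BLK_CCW_TYPE}");
    #[cfg(not(target_arch = "s390x"))]
    println!("CCW support compiled out on this target");
}
```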


@@ -467,7 +467,7 @@ mod tests {
];
for (i, d) in tests.iter().enumerate() {
let msg = format!("test[{}]: {:?}", i, d);
let msg = format!("test[{i}]: {d:?}");
skip_loop_by_user!(msg, d.test_user);
@@ -515,7 +515,7 @@ mod tests {
nix::mount::umount(&mount_point).unwrap();
}
let msg = format!("{}: result: {:?}", msg, result);
let msg = format!("{msg}: result: {result:?}");
if d.error_contains.is_empty() {
assert!(result.is_ok(), "{}", msg);
} else {
@@ -576,7 +576,7 @@ mod tests {
let tempdir = tempdir().expect("failed to create tmpdir");
for (i, d) in tests.iter().enumerate() {
let msg = format!("test[{}]: {:?}", i, d);
let msg = format!("test[{i}]: {d:?}");
let mount_dir = tempdir.path().join(d.mount_path);
fs::create_dir(&mount_dir)
@@ -663,7 +663,7 @@ mod tests {
let tempdir = tempdir().expect("failed to create tmpdir");
for (i, d) in tests.iter().enumerate() {
let msg = format!("test[{}]: {:?}", i, d);
let msg = format!("test[{i}]: {d:?}");
let mount_dir = tempdir.path().join(d.path);
fs::create_dir(&mount_dir)
@@ -674,12 +674,12 @@ mod tests {
// create testing directories and files
for n in 1..COUNT {
let nest_dir = mount_dir.join(format!("nested{}", n));
let nest_dir = mount_dir.join(format!("nested{n}"));
fs::create_dir(&nest_dir)
.unwrap_or_else(|_| panic!("{}: failed to create nest directory", msg));
for f in 1..COUNT {
let filename = nest_dir.join(format!("file{}", f));
let filename = nest_dir.join(format!("file{f}"));
File::create(&filename)
.unwrap_or_else(|_| panic!("{}: failed to create file", msg));
file_mode = filename.as_path().metadata().unwrap().permissions().mode();
@@ -707,9 +707,9 @@ mod tests {
);
for n in 1..COUNT {
let nest_dir = mount_dir.join(format!("nested{}", n));
let nest_dir = mount_dir.join(format!("nested{n}"));
for f in 1..COUNT {
let filename = nest_dir.join(format!("file{}", f));
let filename = nest_dir.join(format!("file{f}"));
let file = Path::new(&filename);
assert_eq!(file.metadata().unwrap().gid(), d.gid);


@@ -147,7 +147,7 @@ mod tests {
let v = vec_locked
.as_deref_mut()
.map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e.to_string()))?;
.map_err(|e| std::io::Error::other(e.to_string()))?;
std::io::Write::flush(v)
}
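This hunk, and the identical change to make_io_error further down, replace Error::new(ErrorKind::Other, ...) with std::io::Error::other, a shorthand constructor stabilized in Rust 1.74; behavior is unchanged. A tiny sketch:

```
use std::io::{Error, ErrorKind};

fn main() {
    let old = Error::new(ErrorKind::Other, "trace export failed");
    let new = Error::other("trace export failed");
    // Same kind and message; `other` just trims the boilerplate.
    assert_eq!(old.kind(), new.kind());
    assert_eq!(old.to_string(), new.to_string());
}
```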


@@ -556,9 +556,9 @@ mod tests {
use test_utils::skip_if_not_root;
async fn create_test_storage(dir: &Path, id: &str) -> Result<(protos::Storage, PathBuf)> {
let src_path = dir.join(format!("src{}", id));
let src_path = dir.join(format!("src{id}"));
let src_filename = src_path.to_str().expect("failed to create src filename");
let dest_path = dir.join(format!("dest{}", id));
let dest_path = dir.join(format!("dest{id}"));
let dest_filename = dest_path.to_str().expect("failed to create dest filename");
std::fs::create_dir_all(src_filename).expect("failed to create path");
@@ -682,7 +682,7 @@ mod tests {
// setup storage0: too many files
for i in 1..21 {
fs::write(src0_path.join(format!("{}.txt", i)), "original").unwrap();
fs::write(src0_path.join(format!("{i}.txt")), "original").unwrap();
}
// setup storage1: two small files
@@ -700,7 +700,7 @@ mod tests {
// setup storage3: many files, but still watchable
for i in 1..MAX_ENTRIES_PER_STORAGE {
fs::write(src3_path.join(format!("{}.txt", i)), "original").unwrap();
fs::write(src3_path.join(format!("{i}.txt")), "original").unwrap();
}
let logger = slog::Logger::root(slog::Discard, o!());
@@ -919,7 +919,7 @@ mod tests {
// Up to 15 files should be okay (can watch 15 files + 1 directory)
for i in 1..MAX_ENTRIES_PER_STORAGE {
fs::write(source_dir.path().join(format!("{}.txt", i)), "original").unwrap();
fs::write(source_dir.path().join(format!("{i}.txt")), "original").unwrap();
}
assert_eq!(
@@ -1387,7 +1387,7 @@ mod tests {
assert!(!dest_dir.path().exists());
for i in 1..21 {
fs::write(source_dir.path().join(format!("{}.txt", i)), "fluff").unwrap();
fs::write(source_dir.path().join(format!("{i}.txt")), "fluff").unwrap();
}
// verify non-watched storage is cleaned up correctly


@@ -70,7 +70,7 @@ impl ExportError for Error {
}
fn make_io_error(desc: String) -> std::io::Error {
std::io::Error::new(ErrorKind::Other, desc)
std::io::Error::other(desc)
}
// Send a trace span to the forwarder running on the host.

src/dragonball/Cargo.lock (generated, 2835 changed lines): diff suppressed because it is too large.

Some files were not shown because too many files have changed in this diff.