Set KubeletConfiguration runtimeRequestTimeout to 600s mainly for CoCo
(Confidential Containers) tests, so container creation (attestation,
policy, image pull, VM start) does not hit the default CRI timeout.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Treat waiting.reason RunContainerError and terminated.reason StartError/Error
as container failure, so tests that expect guest image-pull failure (e.g.
wrong credentials) pass when the container fails with those states instead
of only BackOff.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Later on we may even think about merging those, but for now let's at
least make sure the envs used are the same / declared in a similar place
for each job.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Run all CoCo non-TEE variants in a single job on the free runner with an
explicit environment matrix (vmm, snapshotter, pull_type, kbs,
containerd_version).
Here we're testing CoCo only with the "active" version of containerd.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
We were running most of the k8s integration tests on AKS. The ones that
don't actually depend on AKS's environment now run on normal
ubuntu-24.04 GitHub runners instead: we bring up a kubeadm cluster
there, test with both containerd lts and active, and skip attestation
tests since those runtimes don't need them. AKS is left only for the
jobs that do depend on it.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
Disable mariner host testing in CI, and auto-generated policy testing
for the temporary replacements of these hosts (based on ubuntu), to work
around missing:
1. cloud-hypervisor/cloud-hypervisor@0a5e79a, that will allow Kata
in the future to disable the nested property of guest VPs. Nested
is enabled by default and doesn't work yet with mariner's MSHV.
2. cloud-hypervisor/cloud-hypervisor@bf6f0f8, exposed by the large
ttrpc replies intentionally produced by the Kata CI Policy tests.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Make `az aks create` command easier to change when needed, by moving the
arguments specific to mariner nodes onto a separate line of this script.
This change also removes the need for `shellcheck disable=SC2046` here.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
```
This release has been tracked in v50.0 group of our roadmap project.
Configurable Nested Virtualization Option on x86_64
The nested=on|off option has been added to --cpu to allow users
to configure nested virtualization support in the guest on x86_64
hosts (for both KVM and MSHV). The default value is on to maintain
consistency with existing behavior. (#7408)
Compression Support for QCOW2
QCOW2 support has been extended to handle compression clusters based on
zlib and zstd. (#7462)
Notable Performance Improvements
Performance of live migration has been improved via an optimized
implementation of dirty bitmap maintenance. (#7468)
Live Disk Resizing Support for Raw Images
The /vm.resize-disk API has been introduced to allow users to resize block
devices backed by raw images while a guest is running. (#7476)
Developer Experience Improvements
Significant improvements have been made to developer experience and
productivity. These include a simplified root manifest, codified and
tightened Clippy lints, and streamlined workflows for cargo clippy and
cargo test. (#7489)
Improved File-level Locking Support
Block devices now use byte-range advisory locks instead of whole-file
locks. While both approaches prevent multiple Cloud Hypervisor instances
from simultaneously accessing the same disk image with write
permissions, byte-range locks provide better compatibility with network
storage backends. (#7494)
Logging Improvements
Logs now include event information generated by the event-monitor
module. (#7512)
Notable Bug Fixes
* Fix several issues around CPUID in the guest (#7485, #7495, #7508)
* Fix snapshot/restore for Windows Guest (#7492)
* Respect queue size in block performance tests (#7515)
* Fix several Serial Manager issues (#7502)
* Fix several seccomp violation issues (#7477, #7497, #7518)
* Fix various issues around block and qcow (#7526, #7528, #7537, #7546,
#7549)
* Retrieve MSRs list correctly on MSHV (#7543)
* Fix live migration (and snapshot/restore) with AMX state (#7534)
```
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Issue 10838 is resolved by the prior commit, enabling the -m
option of the kernel build for confidential guests which are
not users of the measured rootfs, and by commit
976df22119, which ensures
relevant user space packages are present.
Not every confidential guest has the measured rootfs option
enabled. In contrast, every confidential guest is assumed to
support CDH's secure storage features.
We also adjust test timeouts to account for occasional spikes on
our bare metal runners (e.g., SNP, TDX, s390x).
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
After containerd 2.0.4, privileged containers handle sysfs mounts a bit
differently, so we can end up with the policy expecting RO and the input
having RW.
The sandbox needs to get privileged mounts when any container in the pod
is privileged, not only when the pause container itself is marked
privileged. So we now compute that and pass it into get_mounts.
One downside: we’re relaxing policy checks (accepting RO/RW mismatch for
sysfs) and giving the pause container privileged mounts whenever the pod
has any privileged workload. For Kata, that means a slightly broader
attack surface for privileged pods—the pause container sees more than it
strictly needs, and we’re being more permissive on sysfs.
It’s a trade-off for compatibility with newer containerd; if you need
maximum isolation, you may want to avoid privileged pods or tighten
policy elsewhere.
Fixes: #12532
Signed-off-by: Markus Rudy <mr@edgeless.systems>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
When kata-deploy is deployed with cloud-api-adaptor, it
defaults to qemu instead of configuring the remote shim.
Add support for ppc64le so it is enabled correctly when shims.remote.enabled=true.
Signed-off-by: Amulyam24 <amulmek1@in.ibm.com>
The VcpuThreadIds struct expects a mapping from vcpu_id to thread_id,
but get_ch_vcpu_tids() was inserting (tid, vcpu_id) instead of
(vcpu_id, tid).
This caused move_vcpus_to_sandbox_cgroup() to interpret vcpu IDs
(0, 1, 2...) as process IDs when sandbox_cgroup_only=false, leading
to failed attempts to read /proc/0/status.
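A minimal sketch of the corrected mapping (the helper name and types are
illustrative, not the actual runtime-rs code):
```rust
use std::collections::HashMap;

fn build_vcpu_thread_ids(pairs: &[(u32, u32)]) -> HashMap<u32, u32> {
    let mut vcpus = HashMap::new();
    for &(vcpu_id, tid) in pairs {
        // VcpuThreadIds expects vcpu_id as the key and the host thread id as
        // the value; the buggy version inserted (tid, vcpu_id) instead.
        vcpus.insert(vcpu_id, tid);
    }
    vcpus
}
```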
Fixes: #12479
Signed-off-by: Chiranjeevi Uddanti <244287281+chiranjeevi-max@users.noreply.github.com>
Nowadays on arm64 we use a modern QEMU version which supports the features we
require for NVDIMM, so we remove the arm64-specific code and use the generic
implementation.
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
This disables virtio-pmem support for Cloud Hypervisor by changing
Kata config defaults and removing the relevant code paths.
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
Use rstest parameterized tests for QEMU variants, other hypervisors,
and unknown/empty shim cases.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
When copying artifacts from the container to the host, detect source
entries that are symlinks and recreate them as symlinks at the
destination instead of copying the target file.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
This test uses YAML files from a different directory than the other
k8s CI tests, so annotations have to be added into these separate
files.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
This change adds the CONFIDENTIAL_GUEST variable to the kernel
build logic. Similar to commit
976df22119, we would like to enable
the cryptsetup functionalities not only when building a measured
root file system, but also when building for a confidential guest.
The current state is that not all confidential guests use a
measured root filesystem, and as a matter of fact, we should
indeed decouple these aspects.
With the current convention, a confidential guest is a user of CDH
and its storage features. A better name for the
CONFIDENTIAL_GUEST variable would have been one related to CDH
storage functionality. Further, the kernel build script's -m
parameter could be improved too - as this change shows, measured
rootfs builds are not the only ones that need the cryptsetup.conf file.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Similar to k8s-guest-pull-image-authenticated and to
k8s-guest-pull-image-signature, enabling k8s-guest-pull-image to
run against the experimental force guest pull method.
Only k8s-guest-pull-image-encrypted requires nydus.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
When qemu exits prematurely, we usually see a message like
msg="Cannot start VM" error="exiting QMP loop, command cancelled"
This is an indirect hint, caused by the QMP server shutting down. It
takes experience to understand what it even means, and it still does not
show what's actually the problem.
With this commit, we take the error returned by the qemu
subprocess and surface it in the logs if it's not nil. This means we
automatically capture any non-zero exit codes in the logs.
Signed-off-by: Markus Rudy <mr@edgeless.systems>
Turn test_toml_value_types into a parameterized test with one case per type
(string, bool, int). Merge the two invalid-TOML tests (get and set) into one
rstest with two cases, and the two "not an array" tests into one rstest
with two cases.
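A minimal sketch of the rstest pattern being adopted, using the toml crate
purely for illustration rather than the project's own TOML utilities:
```rust
use rstest::rstest;

#[rstest]
#[case::string("key = \"value\"")]
#[case::bool("key = true")]
#[case::int("key = 42")]
fn test_toml_value_types(#[case] snippet: &str) {
    // Each case feeds one TOML value type through the same assertions.
    let doc: toml::Value = snippet.parse().expect("valid TOML");
    assert!(doc.get("key").is_some());
}
```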
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
When writing containerd drop-in or other TOML (e.g. initially empty file),
the serialized document could start with many newlines.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Replace multiple #[test] functions for snapshotter and erofs version
checks with parameterized #[rstest] #[case] tests for consistency and
easier extension.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
With some older kernels, some fs implementations don't handle empty
options strings well. This leads to failures in the "setup rootfs" step.
E.g. `cgroup: cgroup2: unknown option ""`.
This is fixed by mapping empty string to `None` before passing to
`nix::mount`.
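A minimal sketch of the fix, assuming the nix crate's mount() wrapper; the
helper name is illustrative:
```rust
use nix::mount::{mount, MsFlags};

fn do_mount(source: &str, target: &str, fstype: &str, options: &str) -> nix::Result<()> {
    // An empty options string is mapped to None so the kernel never sees
    // data = "", which older fs implementations (e.g. cgroup2) reject.
    let data = if options.is_empty() { None } else { Some(options) };
    mount(Some(source), target, Some(fstype), MsFlags::empty(), data)
}
```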
Signed-off-by: Jacek Tomasiak <jtomasiak@arista.com>
Signed-off-by: Jacek Tomasiak <jacek.tomasiak@gmail.com>
Disable provenance and SBOM when building per-arch kata-deploy images so
each tag is a single image manifest. quay.io rejects pushing multi-arch
manifest lists that include attestation manifests (400 manifest invalid).
Add a note in the release script documenting this.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The default RuntimeClass (e.g. kata) is meant to point at the default shim
handler (e.g. kata-qemu-$tee). We were building it in a separate block and
only sometimes adding the same TEE nodeSelectors as the shim-specific
RuntimeClass, leading to kata ending up without the SE/SNP/TDX
nodeSelector while kata-qemu-$tee had it.
The fix is to stop duplicating the RuntimeClass definition, having a
single template that renders one RuntimeClass (name, handler, overhead,
nodeSelectors).
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
When NFD is detected (deployed by the chart or existing in the cluster),
apply shim-specific nodeSelectors only for TEE runtime classes (snp,
tdx, and se).
Non-TEE shims keep existing behavior (e.g. runtimeClass.nodeSelector for
nvidia GPU from f3bba0885 is unchanged).
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Per-arch images were failing publish-multiarch-manifest with 'X is a manifest
list' because Buildx now enables attestations by default, so each arch tag
became an image index. Use 'docker buildx imagetools create' instead of
'docker manifest create' so we can merge those indexes into the final
multi-arch manifest while keeping provenance and SBOM on per-arch images.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
We depend on GPU Operator v26.3 release, which is not out yet.
Although we have been testing with it, it's not yet publicly available,
which would break anyone actually trying to use the GPU runtime classes.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Enhance k8s-configmap.bats and k8s-credentials-secrets.bats to test that ConfigMap and Secret updates propagate to volume-mounted pods.
- Enhanced k8s-configmap.bats to test ConfigMap propagation
* Added volume mount test for ConfigMap consumption
* Added verification that ConfigMap updates propagate to volume-mounted pods
- Enhanced k8s-credentials-secrets.bats to test Secret propagation
* Added verification that Secret updates propagate to volume-mounted pods
Fixes #8015
Signed-off-by: Ajay Victor <ajvictor@in.ibm.com>
In containerd config v3 the CRI plugin is split into runtime and images,
and setting the snapshotter only on the runtime plugin is not enough for image
pull/prepare.
The images plugin must have runtime_platform.<runtime>.snapshotter so it
uses the correct snapshotter per runtime (e.g. nydus, erofs).
A PR on the containerd side is open so we can rely on the runtime plugin
snapshotter alone: https://github.com/containerd/containerd/pull/12836
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
In CI the default GPG keyring is often read-only or missing, so
'gpg --import' of the cached keyring fails and verification cannot
succeed. Use a temporary GNUPGHOME for import and verify so cached
gperf can be verified without writing to the system keyring.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The CC runtime classes kata-qemu-nvidia-gpu-snp and kata-qemu-nvidia-gpu-tdx
are mutually exclusive with kata-qemu-nvidia-gpu, as dictated by the gpu
cc mode setting. In order to properly support a cluster that has both CC and
non-CC nodes, we use a node selector so the scheduling is consistent with the
GPU mode. The GPU operator sets a label nvidia.com/cc.ready.state=[true, false]
to indicate the GPU mode setting.
Fixes #12431
Signed-off-by: Joji Mekkattuparamban <jojim@nvidia.com>
genpolicy pulls image manifests from nvcr.io to generate policy and was
failing with 'UnauthorizedError' because it had no registry credentials.
Genpolicy (src/tools/genpolicy) uses docker_credential::get_credential()
in registry.rs, which reads from DOCKER_CONFIG/config.json. Add
setup_genpolicy_registry_auth() to create a Docker config with nvcr.io
auth (NGC_API_KEY) and set DOCKER_CONFIG before running genpolicy so it
can authenticate when pulling manifests.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Add push-oras-tarball-cache workflow that runs on push to main when
versions.yaml changes (and on workflow_dispatch). It populates the
ghcr.io ORAS cache with gperf and busybox tarballs from versions.yaml.
Remove the push_to_cache call from download-with-oras-cache.sh since
it was never triggered in CI. Cache population is now done solely by the
new workflow and by populate-oras-tarball-cache.sh when run manually.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Add hypervisor devices to the cgroup separately, to avoid irrelevant
warnings and to fail if none of them are available.
Also fix a test case to reload removed kernel modules for later test cases,
and skip some tests on ARM because of the lack of virtualization support.
Fixes #6656
Signed-off-by: Balint Tobik <btobik@redhat.com>
Add tests for the split_non_toml_header helper that strips Go template
directives before TOML parsing, and for every TOML operation (set, get,
append, remove, set_array) on files that start with {{ template "base" . }}.
Also converts the containerd version detection tests in manager.rs from
individual #[test] functions with helper wrappers to parametrized #[rstest]
cases, which is more readable and easier to extend.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
K3s docs (https://docs.k3s.io/advanced#configuring-containerd) say that the
right way to customize containerd is to extend the base template with
{{ template "base" . }} and append your own TOML blocks, rather than copying a
prerendered config.toml into the template file.
We were copying config.toml into config.toml.tmpl / config-v3.toml.tmpl, which
meant we were replacing the K3s defaults with a snapshot that gets stale as
soon as K3s is upgraded.
Now we create the template files with just the base directive and let our
regular set_toml_value code path append the Kata runtime configuration on top.
To make that work, the TOML utils learned to handle files that start with a
Go template line ({{ ... }}): strip it before parsing, put it back when writing.
This keeps the K3s/RKE2 path identical to every other runtime -- no special
append logic needed.
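A minimal sketch of the idea (illustrative only, not the actual TOML utils
code): split off the template preamble before parsing, and prepend it again
when writing.
```rust
// Split a file into its Go-template preamble (lines starting with "{{") and
// the TOML body; the preamble is prepended again when writing the file back.
fn split_non_toml_header(content: &str) -> (String, String) {
    let mut header = String::new();
    let mut body = String::new();
    let mut in_header = true;
    for line in content.lines() {
        if in_header && line.trim_start().starts_with("{{") {
            header.push_str(line);
            header.push('\n');
        } else {
            in_header = false;
            body.push_str(line);
            body.push('\n');
        }
    }
    (header, body)
}

// Writing back: template preamble first, then the serialized TOML document.
fn join_with_header(header: &str, toml_body: &str) -> String {
    format!("{header}{toml_body}")
}
```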
refs:
* k3s: https://docs.k3s.io/advanced#configuring-containerd
* rke2: https://docs.rke2.io/advanced?_highlight=conyainerd#configuring-containerd
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The mirror introduced by #11178 still breaks quite often so apply this as a
quick fix.
A proper solution would probably be to load balance like in #12453.
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
We've seen a lot of spurious issues when deploying the infra needed for
the tests. Let's give it a few tries before actually failing.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
The gperf-3.3 tarball frequently fails to download on my end with
cryptic error messages such as: "tar: This does not look like a tar
archive". This change tightens the download logic a bit: We fail at
the point in time when we're supposed to fail. This way we detect
rate limiting issues right away, and this way, the actual hashsum
and signature checks are effective, not only printouts.
This change also updates the key reference and allows for an array,
for instance, when a different signer was used for a cache vs
upstream version.
The change also makes it clear that signature verification is only
implemented for the gperf tarball. Improvements can be made in a
subsequent change.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Custom runtimes whose base config lives under runtime-rs/ (e.g. dragonball,
cloud-hypervisor) were not found because the path was always built under
share/defaults/kata-containers/. Use get_kata_containers_original_config_path
for the handler so rust shim configs are read from .../runtime-rs/.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
On helm uninstall let's rely on a preStop hook to run kata-deploy
cleanup so each pod cleans its node before exiting.
We **must** keep RBAC (resource-policy: keep) so pods retain API access
during termination, and then can properly delete the NodeFeatureRules
and remove the labels from the nodes.
The post-delete hook Job, which runs on a single node, now is only
responsible for cleaning the kept RBAC (cluster-wide resource) after
uninstall, not leaving any resource or artefact behind.
The changes in this commit lead to a "resources were kept" message when
running `helm uninstall`, which we document as being normal, as the
post-delete job will remove those.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
When removing a node label, JSON merge patch semantics require setting
the key to null; omitting the key leaves it unchanged.
Fix label_node to send a patch with the label key set to null so the API
server actually removes katacontainers.io/kata-runtime.
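A minimal sketch of the patch body, using serde_json for illustration:
```rust
use serde_json::json;

// With JSON merge patch semantics, a label is only removed when its key is
// present in the patch and explicitly set to null.
fn remove_runtime_label_patch() -> serde_json::Value {
    json!({
        "metadata": {
            "labels": {
                "katacontainers.io/kata-runtime": null
            }
        }
    })
}
```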
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Wait for SIGTERM after install and exit(0) so the container terminates
cleanly. If registering the SIGTERM handler fails, log a warning and
sleep forever instead of exiting with an error (fallback to the old
behaviour).
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
After the move to Linux 6.17 and QEMU 10.2 from Kata,
k8s-sandbox-vcpus-allocation.bats started failing on TDX.
2026-02-10T16:39:39.1305813Z # pod/vcpus-less-than-one-with-no-limits created
2026-02-10T16:39:39.1306474Z # pod/vcpus-less-than-one-with-limits created
2026-02-10T16:39:39.1307090Z # pod/vcpus-more-than-one-with-limits created
2026-02-10T16:39:39.1307672Z # pod/vcpus-less-than-one-with-limits condition met
2026-02-10T16:39:39.1308373Z # timed out waiting for the condition on pods/vcpus-less-than-one-with-no-limits
2026-02-10T16:39:39.1309132Z # timed out waiting for the condition on pods/vcpus-more-than-one-with-limits
2026-02-10T16:39:39.1310370Z # Error from server (BadRequest): container "vcpus-less-than-one-with-no-limits" in pod "vcpus-less-than-one-with-no-limits" is waiting to start: ContainerCreating
A manual test without agent policies added seems to work OK, but disable
the test for now to get CI stable.
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
- We don't use containerd.latest as the comment on it suggests
- We also don't have any references to `sriov-network-device`
so remove that and the plugins section.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
This will suppress yaml output only if the input is passed via
stdin. If {base64/raw}-out is passed in alongside a yaml file, the
encoded annotation or the policy data respectively will be printed
to stdout as before.
Fixes #12438
Signed-off-by: Spyros Seimenis <sse@edgeless.systems>
As s390x and ppc64 use a flat CPU topology without sockets and threads,
this commit skips the socket_id and thread_id properties for vCPU hotplug
on these architectures instead of aborting the operation.
This change is in line with the following from the Go runtime:
- isSocketIDSupported()
- isThreadIDSupported()
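A minimal sketch of the arch check, with illustrative helper names rather
than the actual runtime-rs functions:
```rust
// s390x and ppc64 use a flat CPU topology, so socket-id/thread-id are simply
// not included in the hotplug properties instead of aborting the operation.
fn supports_socket_and_thread_ids(arch: &str) -> bool {
    !matches!(arch, "s390x" | "ppc64")
}

fn vcpu_hotplug_props(arch: &str, core_id: u32, socket_id: u32, thread_id: u32) -> Vec<(&'static str, u32)> {
    let mut props = vec![("core-id", core_id)];
    if supports_socket_and_thread_ids(arch) {
        props.push(("socket-id", socket_id));
        props.push(("thread-id", thread_id));
    }
    props
}
```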
Fixes: #12155
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
- Trim trailing whitespace and ensure final newline in non-vendor files
- Add .editorconfig-checker.json excluding vendor dirs, *.patch, *.img,
*.dtb, *.drawio, *.svg, and pkg/cloud-hypervisor/client so CI only
checks project code
- Leave generated and binary assets unchanged (excluded from checker)
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
The runtime-rs shim was failing to load its configuration when deployed
via kata-deploy because it couldn't correctly parse the ConfigPath passed
by containerd. The previous implementation naively skipped the first 2
bytes of the options and interpreted the rest as a UTF-8 string, which
doesn't work since containerd passes a properly serialized protobuf
message of type runtimeoptions.v1.Options.
This change adds the runtimeoptions.proto definition to the protocols
crate and updates the load_config function to correctly deserialize the
protobuf message and extract the config_path field, matching how the Go
runtime handles this via typeurl.UnmarshalAny.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Update the enable_nvrc_trace() function to use the new drop-in
configuration mechanism instead of directly modifying the base
configuration file. The function now creates a 90-nvrc-trace.toml
drop-in file that properly combines existing kernel parameters
with the nvrc.log=trace setting.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
When kata-deploy installs Kata Containers, the base configuration files
should not be modified directly. This change adds documentation explaining
how to use drop-in configuration files for customization, and prepends a
warning comment to all deployed configuration files reminding users to use
drop-in files instead.
The warning is added to both standard shim configurations and custom
runtime configurations. It includes a brief explanation of how drop-in
files work and points users to the documentation for more details.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Add clear INFO-level messages when creating drop-in configuration
files, making it easy to understand what kata-deploy is doing during
installation:
- "Setting up runtime directory for shim: X"
- "Generating drop-in configuration files for shim: X"
- "Created drop-in file: <path>"
When DEBUG mode is enabled (via DEBUG=true environment variable),
also log the full content of each drop-in file to aid troubleshooting.
The log level is now automatically set to Debug when the DEBUG
environment variable is set, ensuring debug messages are visible.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Deduplicate the drop-in file generation logic between configure_shim_config
and install_custom_runtime_configs by extracting it into a shared
write_common_drop_ins helper function.
This ensures both standard and custom runtimes use the same code path
for generating drop-in configuration files.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Add a combined drop-in file (30-kernel-params.toml) that handles all
kernel_params modifications. This approach reads the base kernel_params
from the original untouched config file and combines them with:
- Proxy settings (agent.https_proxy, agent.no_proxy)
- Debug settings (agent.log=debug, initcall_debug)
Using a single drop-in file for kernel_params avoids the TOML merge
behavior where scalar values are replaced rather than appended.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
When debug mode is enabled, generate a drop-in configuration file
(20-debug.toml) with the boolean debug flags for hypervisor, runtime,
and agent sections.
Note: kernel_params for debug (agent.log=debug, initcall_debug) will
be handled by a separate combined kernel_params drop-in file to avoid
the TOML merge replacement behavior.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
When the installation prefix differs from the default /opt/kata,
generate a drop-in configuration file (10-installation-prefix.toml)
with the adjusted paths instead of modifying the original config file.
This removes the need for adjust_installation_prefix and
adjust_qemu_cmdline functions which are now deleted along with
their tests.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Instead of modifying original config files directly, set up a per-shim
directory structure that uses symlinks to the original configs and
config.d/ directories for drop-in overrides.
This enables cleaner configuration management where the original files
remain untouched and all kata-deploy customizations are in separate
drop-in files that can be easily inspected and removed.
Directory structure:
{config_path}/runtimes/{shim}/
{config_path}/runtimes/{shim}/configuration-{shim}.toml -> symlink
{config_path}/runtimes/{shim}/config.d/
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
There is code to disable this at runtime when confidential_guest
is enabled anyway[^1], but it will emit a warning every time. All
the touched configuration files set confidential_guest to true,
so we already know nvdimm isn't supported.
[^1]: 16a7ed6e14/src/runtime/virtcontainers/qemu_amd64.go (L144-L148)
Signed-off-by: Paul Meyer <katexochen0@gmail.com>
The number of workflows increased to over 30, so we need to paginate them as
well as the jobs. This commit extracts the existing pagination logic from
jobs and uses it for both jobs and workflows.
Signed-off-by: Lukáš Doktor <ldoktor@redhat.com>
The previous implementation failed to correctly propagate the network
multiqueue configuration, causing the effective queue number to remain
0.
It also mixed up "queue pairs" with "queue number", so tap devices were
opened without proper multiqueue initialization, which caused CLH
netconfig validation to fail.
This commit fixes the configuration mapping and initializes tap devices
with the correct multiqueue semantics, ensuring Cloud Hypervisor
receives a valid netconfig.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
To make the number of network queues a configurable build item, a dedicated
DEFNETQUEUES variable is added.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
This commit introduces a new annotation for users to easily set network
queues via "io.katacontainers.config.hypervisor.network_queues".
And the annotation will be mapped into `NetworkInfo.network_queues`
within the configuration.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
This adds a basic configuration for editorconfig checker. The
supplied configuration checks against trailing whitespaces and
issues with newlines.
Example:
| tools/packaging/kernel/configs/fragments/x86_64/numa.conf:
| Wrong line endings or no final newline
| tools/packaging/release/generate_vendor.sh:
| 44: Trailing whitespace
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Update time to resolve CVE-2026-25727.
Note: this involved bumping the versions of slog-term and slog-json
and bumping the MSRV to 1.88.0 which time 0.3.47 requires.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Skip serializing anno/value regexes and the NVIDIA VFIO device type since they
are generation-time only.
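A minimal sketch of the serde attribute involved; the struct and field names
here are hypothetical, not the actual genpolicy settings types:
```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct NvidiaDeviceSettings {
    // Regular settings are (de)serialized as usual.
    pgpu_resource_keys: Vec<String>,

    // Generation-time-only data is never written out.
    #[serde(skip)]
    annotation_regex: Option<String>,
    #[serde(skip)]
    vfio_device_type: Option<String>,
}
```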
Signed-off-by: Park.Jiyeon <jiyeonnn2@icloud.com>
- Moved VFIO-related config from "device_annotations" to a new "devices" section.
- Introduced structured "nvidia" subfield for NVIDIA-specific VFIO settings.
- Replaced hardcoded "nvidia.com/pgpu" with configurable "pgpu_resource_keys".
- Adjusted Rego rules and code to match new config schema.
Signed-off-by: Park.Jiyeon <jiyeonnn2@icloud.com>
Allow specifying multiple NVIDIA GPU resource keys via an explicit allowlist.
Keys are now configured under `device_annotations.vfio.nvidia_pgpu_resource_keys`
in genpolicy-settings.json. This removes the previous hardcoded reliance on
`nvidia.com/pgpu` and supports model-specific resource names.
Fixes #12322
Signed-off-by: Park.Jiyeon <jiyeonnn2@icloud.com>
This annotation was required for GPU cold-plug before using a
newer device plugin and before querying the pod resources API.
As this annotation is no longer required, cleaning it up.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
With enable_numa=true the hypervisor will expose the host NUMA topology as is:
map VM NUMA nodes to host nodes 1:1 and bind vCPUs to the related CPUs.
The "numa_mapping" option allows redefining the NUMA node mapping:
- map each VM node to a particular host node or several NUMA nodes
- emulate NUMA on a host without NUMA (useful for tests)
Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com>
Co-authored-by: Zvonko Kaiser <zkaiser@nvidia.com>
Build a single kernel for both kernel and kernel-confidential on x86_64
and s390x. The kernel is built with TEE support (-x) on those arches only.
This helps to simplify and maintain the code, and having a single
kernel was the original plan since forever.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Build a single kernel for both nvidia-gpu and nvidia-gpu-confidential,
simplifying and reducing code maintenance.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
This commit does the following:
(1) Rename pod_exec_with_retries() to pod_exec().
(2) Update implementation to call container_exec().
(3) Replace all usages of pod_exec_with_retries across tests
with pod_exec.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
This commit aims to drop retries when running kubectl exec in a container:
(1) Rename container_exec_with_retries() to container_exec().
(2) Remove the retry loop and sleep backoff around kubectl exec.
Keep the same logging and container-selection logic and return
kubectl exec exit status directly.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
After the kata-agent "drain-after-exit" change, stdout/stderr EOF is
signaled by a successful ReadStdout/ReadStderr reply with empty Data
(len==0), instead of an RPC error. However, runtime-go currently
returns (0, nil) to io.CopyBuffer() when resp.Data is empty, which
violates Go io.Reader semantics and can cause `kubectl exec` to
hang after the command output is already printed.
To avoid exec hang:
In readProcessStream(), map an empty response (len(resp.Data)==0)
into (0, io.EOF). This allows the stdout/stderr copy goroutines to
terminate, closes exitIOch, and unblocks the wait path so exec can
complete normally.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
The previous comment incorrectly implied that `biased` prevents data
loss and the exit notifier would never be polled before all buffered
data is read. And the detailed info can be seen from the document:
https://docs.rs/tokio/latest/src/tokio/macros/select.rs.html#67
Tokio's `biased` only makes polling order deterministic(top-to-bottom)
when multiple branches are ready in the same poll, and it makes fairness
the caller's responsibility. Output can still be truncated if the exit
notification becomes ready while `read_stream` is pending.
This change updates the comment to reflect the actual semantics and
caveats. No functional behavior change.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Short-lived processes (e.g., `kubectl exec echo`) in legacy-io mode
occasionally lose the last segments of their output.
The root cause is a race condition where the `term_exit_notifier`
triggers before the pipe buffers are fully drained. In the previous
implementation, once the exit notification was received, the agent
immediately returned an EOF, causing the runtime's `run_io_copy` to
terminate and drop any residual data in the pipe.
This patch introduces a "drain after exit" mechanism:
- Upon receiving an exit notification, the agent enters a 500ms window
  for polling `read_stream` to flush remaining data from the buffer.
- A true EOF is only returned if the stream is confirmed empty or the
timeout is reached.
This ensures reliable output delivery for transient exec tasks under
high concurrency.
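A minimal sketch of the window, assuming tokio; the function name and
signature are illustrative, not the actual agent code:
```rust
use std::time::Duration;
use tokio::io::AsyncReadExt;
use tokio::time::timeout;

// After the exit notification fires, keep reading for up to 500ms so bytes
// still buffered in the pipe are delivered before reporting a true EOF.
async fn drain_after_exit<R>(reader: &mut R, max: usize) -> std::io::Result<Vec<u8>>
where
    R: tokio::io::AsyncRead + Unpin,
{
    let mut buf = vec![0u8; max];
    match timeout(Duration::from_millis(500), reader.read(&mut buf)).await {
        Ok(Ok(n)) if n > 0 => {
            buf.truncate(n);
            Ok(buf) // residual data: not EOF yet
        }
        Ok(Err(e)) => Err(e),
        _ => Ok(Vec::new()), // stream empty or window elapsed: report EOF
    }
}
```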
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Legacy IO uses shim polling via read_stdout/read_stderr. The agent
previously mapped pipe EOF (read() == 0) and term_exit_notifier to
errors ("read meet eof"/"eof"), which became ttrpc INTERNAL failures.
This caused runtime IO copy to abort early, leading to lost
stdout/stderr for short-lived exec (e.g. "echo") and spurious failures.
Normalize EOF semantics: read_stream now returns Ok(empty) on EOF
instead of Err("read meet eof").
This makes legacy IO behave like a proper stream: data until EOF, no
INTERNAL errors for normal termination.
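A minimal sketch of the normalized behavior, with an illustrative signature
rather than the actual agent code:
```rust
use tokio::io::{AsyncRead, AsyncReadExt};

// Pipe EOF (read() == 0) is surfaced as Ok(empty) rather than an error, so
// the shim sees a normal end-of-stream instead of a ttrpc INTERNAL failure.
async fn read_stream<R: AsyncRead + Unpin>(reader: &mut R, max: usize) -> std::io::Result<Vec<u8>> {
    let mut buf = vec![0u8; max];
    let n = reader.read(&mut buf).await?;
    buf.truncate(n);
    Ok(buf)
}
```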
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
We're introducing a root_complex to assign each
and every device to a NUMA node or to the default
root_complex="00" aka pcie.0. This patch introduces
the proper handling of the current qom path being
bus/device == "00/02" with NUMAA we need to extend it
with the root_complex/bus/device == "10/00/02".
We're defaulting to root_complex="00".
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Use OVMF path configuration for Intel TDX consistently:
$ git grep FIRMWARETD
src/runtime-rs/Makefile:FIRMWARETDXPATH := $(PREFIXDEPS)/share/ovmf/OVMF.inteltdx.fd
src/runtime-rs/Makefile:USER_VARS += FIRMWARETDXPATH
src/runtime-rs/config/configuration-qemu-tdx-runtime-rs.toml.in:firmware = "@FIRMWARETDXPATH@"
src/runtime/Makefile:FIRMWARETDVFPATH := $(PREFIXDEPS)/share/ovmf/OVMF.inteltdx.fd
The Go runtime has used *TDVF*, so just make runtime-rs follow. This
keeps the behavior consistent when downstreams switch from Go runtime
to runtime-rs.
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
Introduce a new function to install additional packages into the
devkit flavor. With modprobe, we avoid errors on pod startup
related to loading nvidia kernel modules in the NVRC phase.
Note, the production flavor gets modprobe from busybox, see its
configuration file containing CONFIG_MODPROBE=y.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Remove the initramfs folder, its build steps, and use the kernel
based dm-verity enforcement for the handlers which used the
initramfs mode. Also, remove the initramfs verity mode
capability from the shims and their configs.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Read the kernel_verity_params from the shim config and adjust
the root hash for the negative test.
Further, improve some of the test logic by using shared
functions. This especially ensures we don't read the full
journalctl logs on a node but only the portion of the logs we are
actually supposed to look at.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Similar to the kernel_params annotation, add a
kernel_verity_params annotation and add logic to make these
parameters overwritable. For instance, this can be used in test
logic to provide bogus dm-verity hashes for negative tests.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
This change introduces the kernel_verity_parameters knob to the
rust based shim, picking up dm-verity information in a new config
field (the corresponding build variable is already produced by
the shim build). The change extends the shim to parse dm-verity
information from this parameter and to construct the kernel command
line appropriately, based on the indicated initramfs or kernelinit
build variant.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
This change introduces the kernel_verity_parameters knob to the
Go based shim, picking up dm-verity information in a new config
field (the corresponding build variable is already produced by
the shim build). The change extends the shim to parse dm-verity
information from this parameter and to construct the kernel command
line appropriately, based on the indicated initramfs or kernelinit
build variant.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
With dm-mod.create parameters using quotes, we remove the
backslashes used to escape these quotes from the output we
retrieve. This will enable attestation tests to work with the
kernelinit dm-verity mode.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Measured rootfs mode and the CDH secure storage feature require the
cryptsetup-bin and e2fsprogs components in the guest.
This change makes this more explicit - confidential guests are
users of the CDH secure container image layer storage feature.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
This change introduces the kernelinit dm-verity mode, allowing
initramfs-less dm-verity enforcement against the rootfs image.
For this, the change introduces a new variable with dm-verity
information. This variable will be picked up by shim
configurations in subsequent commits.
This will allow the shims to build the kernel command line
with dm-verity information based on the existing
kernel_parameters configuration knob and a new
kernel_verity_params configuration knob. The latter
specifically provides the relevant dm-verity information.
This new configuration knob avoids merging the verity
parameters into the kernel_params field. By avoiding this, no
cumbersome escape logic is required, as we do not need to pass the
dm-mod.create="..." parameter directly in the kernel_parameters,
but only the relevant dm-verity parameters in a semi-structured manner
(see above). The only place where the final command line is
assembled is in the shims. Further, this line is easy to comment
out for developers to disable dm-verity enforcement (or for CI
tasks).
This change produces the new kernelinit dm-verity parameters for
the NVIDIA runtime handlers, and modifies the format of how
these parameters are prepared for all handlers. With this, the
parameters are currently no longer provided to the
kernel_params configuration knob for any runtime handler.
This change alone should thus not be used as dm-verity
information will no longer be picked up by the shims.
systemd-analyze on the coco-dev handler shows that using the
kernelinit mode on a local machine, less time is spent in the
kernel phase, slightly speeding up pod start-up. On that machine,
the average of 172.5ms was reduced to 141ms (4 measurements, each
with a basic pod manifest), i.e., the kernel phase duration is
improved by about 18 percent.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
This reverts commit 923f97bc66 in
order to re-instantiate the logic from commit
e4a13b9a4a.
The latter commit was previously reverted due to the NVIDIA GPU TEE
handler using an initrd, not an image.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Shift NVIDIA shim configurations to use an image instead of an initrd,
and remove trailing whitespaces from the configs.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Allow using an image instead of an initrd. For confidential
guests using images, the assumption is that the guest kernel uses
dm-verity protection, implicitly measuring the rootfs image via
the kernel command line's dm-verity information.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Convert the NGC_API_KEY from a regular Kubernetes secret to a sealed
secret for the CC GPU tests. This ensures the API key is only accessible
within the confidential enclave after successful attestation.
The sealed secret uses the "vault" type which points to a resource stored
in the Key Broker Service (KBS). The Confidential Data Hub (CDH) inside
the guest will unseal this secret by fetching it from KBS after
attestation.
The initdata file is created AFTER create_tmp_policy_settings_dir()
copies the empty default file, and BEFORE auto_generate_policy() runs.
This allows genpolicy to add the generated policy.rego to our custom
CDH configuration.
The sealed secret format follows the CoCo specification:
sealed.<JWS header>.<JWS payload>.<signature>
Where the payload contains:
- version: "0.1.0"
- type: "vault" (pointer to KBS resource)
- provider: "kbs"
- resource_uri: KBS path to the actual secret
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Increase the sleep time after kata-deploy deployment from 10s to 60s
to give more time for runtimes to be configured. This helps avoid
race conditions on slower K8s distributions like k3s where the
RuntimeClass may not be immediately available after the DaemonSet
rollout completes.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Merge the two E2E tests ("Custom RuntimeClass exists with correct
properties" and "Custom runtime can run a pod") into a single test, as
those two are very much dependent on each other.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Replace fail() calls with die() which is already provided by
common.bash. The fail() function doesn't exist in the test
infrastructure, causing "command not found" errors when tests fail.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
We cannot overwrite a binary that's currently in use, and that's the
reason that elsewhere we remove / unlink the binary (the running process
keeps its file descriptor, so we're good doing that) and only then we
copy the binary. However, we missed doing this for the
nydus-snapshotter deployment.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Clean up trailing whitespaces, making life easier for those who
have configured their IDE to clean these up.
We suggest not adding new code with trailing whitespace.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Add support for CRI-O annotations when fetching pod identifiers for
device cold plug. The code now checks containerd CRI annotations first,
then falls back to CRI-O annotations if they are empty.
This enables device cold plug to work with both containerd and CRI-O
container runtimes.
Annotations supported:
- containerd: io.kubernetes.cri.sandbox-name, io.kubernetes.cri.sandbox-namespace
- CRI-O: io.kubernetes.cri-o.KubeName, io.kubernetes.cri-o.Namespace
Signed-off-by: Pradipta Banerjee <pradipta.banerjee@gmail.com>
Clean up existing nydus-snapshotter state to ensure fresh start with new
version.
This is safe across all K8s distributions (k3s, rke2, k0s, microk8s,
etc.) because we only touch the nydus data directory, not containerd's
internals.
When containerd tries to use non-existent snapshots, it will
re-pull/re-unpack.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
As we have moved to use QEMU (and OVMF already earlier) from
kata-deploy, the custom tdx configurations and distro checks
are no longer needed.
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
Currently, a working TDX setup expects users to install special
TDX support builds from Canonical/CentOS virt-sig for TDX to
work. kata-deploy configured TDX runtime handler to use QEMU
from the distro's paths.
With TDX support now being available in upstream Linux and
Ubuntu 24.04 having an install candidate (linux-image-generic-6.17)
for a new enough kernel, move TDX configuration to use QEMU from
kata-deploy.
While this is the new default, going back to the original
setup is possible by making manual changes to TDX runtime handlers.
Note: runtime-rs is already using QEMUPATH for TDX.
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
This allows the updateStrategy to be configured for the kata-deploy helm
chart, enabling administrators to control the aggressiveness of
updates. For a less aggressive approach, the strategy can be set to
`OnDelete`. Alternatively, the update process can be made more
aggressive by adjusting the `maxUnavailable` parameter.
Signed-off-by: Nikolaj Lindberg Lerche <nlle@ambu.com>
Avoid redundant and confusing teardown_common() debug output for
k8s-policy-pod.bats and k8s-policy-pvc.bats.
The Policy tests skip the Message field when printing information about
their pods, because unfortunately that field might contain a truncated
Policy log - for the test cases that intentionally cause Policy
failures. The non-truncated Policy log is already available from other
"kubectl describe" fields.
So, avoid the redundant pod information from teardown_common(), that
also included the confusing Message field.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Delete the pause_bundle directory before running the umoci unpack
operation. This will make builds idempotent and not fail with
errors like "create runtime bundle: config.json already exists in
.../build/pause-image/destdir/pause_bundle". This will make life
better when building locally.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Update Go from 1.24.11 to 1.24.12 to address security vulnerabilities
in the standard library:
- GO-2026-4342: Excessive CPU consumption in archive/zip
- GO-2026-4341: Memory exhaustion in net/url query parsing
- GO-2026-4340: TLS handshake encryption level issue in crypto/tls
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
1. Add disable_block_device_use to CLH settings file, for parity with
the already existing QEMU settings.
2. Set DEFDISABLEBLOCK := true by default for both QEMU and CLH. After
this change, Kata Guests will use by default virtio-fs to access
container rootfs directories from their Hosts. Hosts that were
designed to use Host block devices attached to the Guests can
re-enable these rootfs block devices by changing the value of
disable_block_device_use back to false in their settings files.
3. Add test using container image without any rootfs layers. Depending
on the container runtime and image snapshotter being used, the empty
container rootfs image might get stored on a host block device that
cannot be safely hotplugged to a guest VM, because the host is using
the same block device.
4. Add block device hotplug safety warning into the Kata Shim
configuration files.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Signed-off-by: Cameron McDermott <cameron@northflank.com>
Remove the initrd function and add the image function to align
with the actually existing functions in this file.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Confidential guests cannot use traditional IOMMU Group based VFIO.
Instead, they need to use IMMUFD. This is mainly because the group
abstraction is incompatible with a confidential device model.
If traditional VFIO is specified for a confidential guest, detect
the error and bail out early.
Fixes #12393
Signed-off-by: Joji Mekkattuparamban <jojim@nvidia.com>
In CI we are testing the latest kata-deploy, which requires the latest
helm chart. The previous query doesn't work anymore, but these days we
should be able to rely on the "0.0.0-dev" tag and on helm printing the
to-be-installed version to the console.
Signed-off-by: Lukáš Doktor <ldoktor@redhat.com>
I keep struggling to find the debug images, so let's include them in the
peer-pods-azure.sh script so people can find them more easily.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: Lukáš Doktor <ldoktor@redhat.com>
This comment was first introduced in e111093 with secure_join()
but then we forgot to remove it when we switched to the safe-path
lib in c0ceaf6
Signed-off-by: Qingyuan Hou <lenohou@gmail.com>
We want to enable local and remote CUDA repository builds.
Moving the cuda and tools repo to versions.yaml with a
unified build for both types.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Fix empty string handling in format conversion
When HELM_ALLOWED_HYPERVISOR_ANNOTATIONS, HELM_AGENT_HTTPS_PROXY, or
HELM_AGENT_NO_PROXY are empty, the pattern matching condition
`!= *:*` or `!= *=*` evaluates to true, causing the conversion loop
to create invalid entries like "qemu-tdx: qemu-snp:".
Add -n checks to ensure conversion only runs when variables are
non-empty.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Update the CI and functional test helpers to use the new
shims.disableAll option instead of iterating over every shim
to disable them individually.
Also adds helm repo for node-feature-discovery before building
dependencies to fix CI failures on some distributions.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Update the Helm chart README to document the new shims.disableAll
option and simplify the examples that previously required listing
every shim to disable.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Simplify the example values files by using the new shims.disableAll
option instead of listing every shim to disable.
Before (try-kata-nvidia-gpu.values.yaml):
shims:
clh:
enabled: false
cloud-hypervisor:
enabled: false
# ... 15 more lines ...
After:
shims:
disableAll: true
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Add a new `shims.disableAll` option that disables all standard shims
at once. This is useful when:
- Enabling only specific shims without listing every other shim
- Using custom runtimes only mode (no standard Kata shims)
Usage:
shims:
disableAll: true
qemu:
enabled: true # Only qemu is enabled
All helper templates are updated to check for this flag before
iterating over shims.
One thing that's super important to note here is that helm recursively
merges user values with chart defaults, making a simple
`disableAll` flag problematic: if defaults have `enabled: true`, user's
`disableAll: true` gets merged with those defaults, resulting in all
shims still being enabled.
The workaround found is to use null (`~`) as the default for `enabled`
field. The template logic interprets null differently based on
disableAll:
| enabled value | disableAll: false | disableAll: true |
|---------------|-------------------|------------------|
| ~ (null) | Enabled | Disabled |
| true | Enabled | Enabled |
| false | Disabled | Disabled |
This is backward compatible:
- Default behavior unchanged: all shims enabled when disableAll: false
- Users can set `disableAll: true` to disable all, then explicitly
enable specific shims with `enabled: true`
- Explicit `enabled: false` always disables, regardless of disableAll
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Add Bats tests to verify the custom runtimes Helm template rendering,
and that we can start a pod with the custom runtime.
Tests were written with Cursor's help.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Add functions to install and remove custom runtime configuration files.
Each custom runtime gets an isolated directory structure:
custom-runtimes/{handler}/
configuration-{baseConfig}.toml # Copied from base config
config.d/
50-overrides.toml # User's drop-in overrides
The base config is copied AFTER kata-deploy has applied its modifications
(debug settings, proxy configuration, annotations), so custom runtimes
inherit these settings.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Add functions to configure custom runtimes in containerd and CRI-O.
Custom runtimes use an isolated config directory under:
custom-runtimes/{handler}/
Custom runtimes automatically derive the shim binary path from the
baseConfig field using the existing is_rust_shim() logic.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Add support for parsing custom runtime configurations from a mounted
ConfigMap. This allows users to define their own RuntimeClasses with
custom Kata configurations.
The ConfigMap format uses a custom-runtimes.list file with entries:
handler:baseConfig:containerd_snapshotter:crio_pulltype
Drop-in files are read from dropin-{handler}.toml, if present.
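A minimal sketch of parsing one such entry; the struct and field names are
illustrative, not the actual kata-deploy code:
```rust
// One custom-runtimes.list entry has the form:
//   handler:baseConfig:containerd_snapshotter:crio_pulltype
struct CustomRuntime {
    handler: String,
    base_config: String,
    containerd_snapshotter: String,
    crio_pulltype: String,
}

fn parse_entry(line: &str) -> Option<CustomRuntime> {
    let mut parts = line.trim().splitn(4, ':');
    Some(CustomRuntime {
        handler: parts.next()?.to_string(),
        base_config: parts.next()?.to_string(),
        containerd_snapshotter: parts.next()?.to_string(),
        crio_pulltype: parts.next()?.to_string(),
    })
}
```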
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Let's extract the common logic from configure_containerd_runtime and
configure_crio_runtime into reusable helper functions. This reduces
code duplication and prepares for adding custom runtime support.
For containerd:
- Add ContainerdRuntimeParams struct to encapsulate common parameters
- Add get_containerd_pluginid() to extract version detection logic
- Add get_containerd_output_path() to extract file path resolution
- Add write_containerd_runtime_config() to write common TOML values
For CRI-O:
- Add CrioRuntimeParams struct to encapsulate common parameters
- Add write_crio_runtime_config() to write common configuration
While here, let's also simplify pod_annotations to always use
"[\"io.katacontainers.*\"]" for all runtimes, as the NVIDIA specific
case has been removed from the shell script, but we forgot to do so
here.
No functional changes intended.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Add -info flag handling to containerd-shim-kata-v2 (Rust version).
This outputs RuntimeInfo protobuf (name, version, revision) to stdout,
providing compatibility with containerd v2.0+ which queries runtime
information via this flag.
This is the runtime-rs counterpart to the Go implementation.
Fixes #12133
Signed-off-by: tak-ka3 <takumi.hiraoka@acompany-ac.com>
This aims to make QMP initialization robust by retrying the QMP handshake
with a global deadline, to handle slow QEMU bring-up.
Qmp::new() used DEFAULT_QMP_READ_TIMEOUT as the effective deadline
for the QMP handshake read. When QEMU initialization is slow (e.g.
heavy host load, large memory/device init, slow storage, confidential
guests, etc.), the QMP greeting may not become readable within a small
per-read timeout (e.g. 250ms). This caused QMP init to fail with
"Resource temporarily unavailable (os error 11)" and spam
"couldn't initialise QMP", while subsequent retries might eventually
succeed once QEMU became ready.
To address this issue, keep a short per-read timeout to avoid
indefinite blocking, but add a global "wait for QMP ready" deadline
that retries the handshake with a small backoff. This improves startup
reliability under load and avoids unnecessary reconnect failures.
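A minimal sketch of the retry shape, with illustrative names and signatures
rather than the actual runtime-rs code:
```rust
use std::time::{Duration, Instant};

// Keep a short per-attempt handshake (passed in as `try_handshake`) but retry
// with a small backoff until a global "wait for QMP ready" deadline expires.
fn wait_for_qmp_ready<T, E: std::fmt::Display>(
    mut try_handshake: impl FnMut() -> Result<T, E>,
    global_deadline: Duration,
    backoff: Duration,
) -> Result<T, String> {
    let start = Instant::now();
    loop {
        match try_handshake() {
            Ok(qmp) => return Ok(qmp),
            // e.g. EAGAIN because the QMP greeting is not yet readable
            Err(_) if start.elapsed() < global_deadline => std::thread::sleep(backoff),
            Err(e) => return Err(format!("QMP not ready within deadline: {e}")),
        }
    }
}
```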
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
HashMap cannot guarantee ordering, so the command line always changed
between runs. This commit changes the kv map of get_agent_kernel_params to
a BTreeMap to make sure the command line stays stable.
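A minimal sketch of why a BTreeMap helps, assuming the parameters are
rendered by joining key=value pairs (names are illustrative):
```rust
use std::collections::BTreeMap;

// A BTreeMap iterates its keys in sorted order, so the rendered agent kernel
// parameter string is deterministic across runs.
fn render_kernel_params(params: &BTreeMap<String, String>) -> String {
    params
        .iter()
        .map(|(k, v)| if v.is_empty() { k.clone() } else { format!("{k}={v}") })
        .collect::<Vec<_>>()
        .join(" ")
}
```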
Fixes: #10977
Signed-off-by: Hui Zhu <teawater@antgroup.com>
It aims to address the issue:
"run_io_copy[Stdout]: failed to copy stream: Not a socket (os error 88)"
The `Not a socket (os error 88)` error was caused by incorrectly wrapping
a FIFO file descriptor in a `UnixStream`. The following changes:
(1) Refactor `open_fifo_write` to return `tokio::fs::File` (or a generic
async reader/writer) instead of `AsyncUnixStream`.
(2) Ensure IO copying logic treats stdout/stderr streams as file-like
objects rather than sockets.
This fix eliminates the "failed to copy stream" errors in the IO loop
and ensures reliable log forwarding for legacy-io.
Fixes: #12387
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Move the private closure out and make it a public method which is
responsible for clearing O_NONBLOCK on an fd and turning it into blocking
mode.
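A minimal sketch of clearing O_NONBLOCK, assuming the nix crate (with the
older fcntl signature that takes a RawFd); names are illustrative:
```rust
use nix::fcntl::{fcntl, FcntlArg, OFlag};
use std::os::unix::io::RawFd;

// Clear O_NONBLOCK so subsequent reads/writes on the fd behave in blocking mode.
fn set_blocking(fd: RawFd) -> nix::Result<()> {
    let bits = fcntl(fd, FcntlArg::F_GETFL)?;
    let mut flags = OFlag::from_bits_truncate(bits);
    flags.remove(OFlag::O_NONBLOCK);
    fcntl(fd, FcntlArg::F_SETFL(flags))?;
    Ok(())
}
```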
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
This reverts commit c0d7222194.
Soon, guest components will switch to using a DB instead of
storing resources in the filesystem. Further, I don't see any
more indicators why kbs-client would struggle to set simple
resources.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Add the necessary configuration and code changes to support QEMU
on arm64 architecture in runtime-rs.
Changes:
- Set MACHINETYPE to "virt" for arm64
- Add machine accelerators "usb=off,gic-version=host" required for
proper arm64 virtualization
- Add arm64-specific kernel parameter "iommu.passthrough=0"
- Guard vIOMMU (Intel IOMMU) to skip on arm64 since it's not supported
These changes align runtime-rs with the Go runtime's arm64 QEMU support.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Signed-off-by: Kevin Zhao <kevin.zhao@linaro.org>
Add support for the -info flag that containerd v2.0+ passes to shims.
The flag outputs RuntimeInfo protobuf to stdout containing the shim
name and version information.
Fixes #12133
Signed-off-by: tak-ka3 <takumi.hiraoka@acompany-ac.com>
The enable_debug parameter was explicitly set to false rather than
being commented out (e.g., # enable_debug = true). As the previous
enabling method failed to account for this explicit setting, it was
rendered invalid. This commit updates the matching logic to correctly
handle and toggle the explicit false value.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
It was observed that some kata-deploy cleanup steps could hang,
causing the workflow to never finish properly. In these cases,
a QEMU process was not cleaned up and kept printing debug logs
to the journal. Over time, this maxed out the runner’s disk
usage and caused the runner service to stop.
Set timeouts for the relevant cleanup steps to avoid this.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
The verification job mounts a ConfigMap containing the pod spec for
the Kata runtime test. Previously, both the ConfigMap and the Job were
Helm hooks with different weights (-5 and 0 respectively).
On k3s, a race condition was observed where the Job pod would be
scheduled before the kubelet's informer cache had registered the
ConfigMap, causing a FailedMount error:
MountVolume.SetUp failed for volume "pod-spec": object
"kube-system"/"kata-deploy-verification-spec" not registered
This happened because k3s's lightweight architecture schedules pods
very quickly, and the hook weight difference only controls Helm's
ordering, not actual timing between resource creation and cache sync.
By making the ConfigMap a regular chart resource (removing hook
annotations), it is created during the main chart installation phase,
well before any post-install hooks run. This guarantees the ConfigMap
is fully propagated to all kubelets before the verification Job starts.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The verification job needs to list nodes to check for the
katacontainers.io/kata-runtime label and list events to detect
FailedCreatePodSandBox errors during pod creation.
This was discovered when testing with k0s, where the service account
lacked the required cluster-scope permissions to list nodes.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Remove k0s-worker and k0s-controller from
RUNTIMES_WITHOUT_CONTAINERD_DROP_IN_SUPPORT and always return true for
k0s in is_containerd_capable_of_using_drop_in_files since k0s auto-loads
from containerd.d/ directory regardless of containerd version.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Add microk8s case to get_containerd_paths() method and remove microk8s
from RUNTIMES_WITHOUT_CONTAINERD_DROP_IN_SUPPORT to enable dynamic
containerd version checking.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Introduce ContainerdPaths struct and get_containerd_paths() method to
centralize the complex logic for determining containerd configuration
file paths across different Kubernetes distributions.
The new ContainerdPaths struct includes (see the sketch after this list):
- config_file: File to read containerd version from and write to
- backup_file: Backup file path before modification
- imports_file: File to add/remove drop-in imports from (Option<String>)
- drop_in_file: Path to the drop-in configuration file
- use_drop_in: Whether drop-in files can be used
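A rough Rust shape of the struct as described above; the field types are
assumptions based on this description:
```
/// Paths and capabilities for configuring containerd on a given distribution.
pub struct ContainerdPaths {
    /// File to read the containerd version from and write configuration to.
    pub config_file: String,
    /// Backup file path taken before modification.
    pub backup_file: String,
    /// File to add/remove drop-in imports from, when one exists.
    pub imports_file: Option<String>,
    /// Path to the drop-in configuration file.
    pub drop_in_file: String,
    /// Whether drop-in files can be used at all.
    pub use_drop_in: bool,
}
```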
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The JSONPath parser was incorrectly splitting on escaped dots (\.)
causing microk8s detection to fail. Labels like "microk8s.io/cluster"
were being split into ["microk8s\", "io/cluster"] instead of being
treated as a single key.
This adds a split_jsonpath() helper that properly handles escaped dots,
allowing the automatic microk8s detection via the node label to work
correctly.
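For illustration, a minimal sketch of a helper that splits only on unescaped
dots (not necessarily the exact implementation used by the binary):
```
/// Split a JSONPath-like key on '.', treating "\." as a literal dot that is
/// part of the segment (e.g. labels like "microk8s.io/cluster").
fn split_jsonpath(path: &str) -> Vec<String> {
    let mut segments = Vec::new();
    let mut current = String::new();
    let mut chars = path.chars().peekable();
    while let Some(c) = chars.next() {
        match c {
            '\\' if chars.peek() == Some(&'.') => {
                current.push('.');
                chars.next(); // consume the escaped dot
            }
            '.' => segments.push(std::mem::take(&mut current)),
            _ => current.push(c),
        }
    }
    segments.push(current);
    segments
}

fn main() {
    // One single key, not two: the escaped dot is preserved.
    assert_eq!(
        split_jsonpath(r"labels.microk8s\.io/cluster"),
        vec!["labels", "microk8s.io/cluster"]
    );
    println!("{:?}", split_jsonpath(r"labels.microk8s\.io/cluster"));
}
```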
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The kata-deploy test was using helm_helper which made it hard to debug
failures (die() calls would cause "Executed 0 tests" errors) and added
unnecessary complexity.
The test now calls helm directly like a user would, making it simpler
and more representative of real-world usage. The verification job status
is explicitly checked with proper failure detection instead of relying
on helm --wait.
Timeouts are configurable via environment variables to account for
different network speeds and image sizes:
- KATA_DEPLOY_TIMEOUT (default: 600s)
- KATA_DEPLOY_DAEMONSET_TIMEOUT (default: 300s)
- KATA_DEPLOY_VERIFICATION_TIMEOUT (default: 120s)
Documentation has been added to explain what each timeout controls and
how to customize them.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The verification job now supports configurable timeouts to accommodate
different environments and network conditions. The daemonset timeout
defaults to 1200 seconds (20 minutes) to allow for large image downloads,
while the verification pod timeout defaults to 180 seconds.
The job now waits for the DaemonSet to exist, pods to be scheduled,
rollout to complete, and nodes to be labeled before creating the
verification pod. A 15-second delay is added after node labeling to
allow kubelet time to refresh runtime information.
Retry logic with 3 attempts and a 10-second delay handles transient
FailedCreatePodSandBox errors that can occur during runtime
initialization. The job only fails on pod errors after a 30-second
grace period to avoid false positives from timing issues.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The retry loop in helm_helper had two bugs:
1. Counter initialized to 10 instead of 0, causing immediate failure
2. Exit condition used -eq instead of -ge, incorrect for loop logic
These bugs would cause helm_helper to fail immediately on the first
retry attempt instead of properly retrying up to max_tries times.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
When looking into stale bot more for issues, I realised that our existing
stale job would need permissions to work. Unfortunately the behaviour
of the actions without these permissions is to log, but still finish as successful.
This means it was hard to spot we had an issue.
Add the required permissions to get this working again and improve the
message. Also add a concurrency rule to make zizmor happy.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
We've had a couple of occasions where Cargo.lock has been out of sync
with Cargo.toml, so try and extend our rust check to pick this up in the CI.
There is probably a more elegant way than doing `cargo check` and
checking for changes, but I'll start with this approach
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Downstream builders at Red Hat complain that `Cargo.lock` doesn't match
`Cargo.toml`.
Run `cargo check` to refresh `Cargo.lock`.
`git bisect` shows that 7cfb97d41b is the first commit where
`cargo check` has an effect in `src/agent`.
Signed-off-by: Greg Kurz <groug@kaod.org>
Add run_bats_tests() function to common.bash that provides consistent
test execution and reporting across all test suites (k8s, nvidia,
kata-deploy).
This removes duplicated test runner code from run_kubernetes_tests.sh,
run_kubernetes_nv_tests.sh, and run-kata-deploy-tests.sh.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The NVIDIA GPU test runner script was not generating test reports,
causing the report_tests() function in gha-run.sh to have nothing
to display. This aligns the script with run_kubernetes_tests.sh by:
- Adding set -o pipefail for proper pipeline error handling
- Creating a reports directory with timestamped subdirectory
- Capturing test output to files with ok-/not_ok- prefixes
- Adding --timing flag to bats for timing information
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Let's just point to the official documentation rather than explaining
exactly how to deploy (and the current text was very outdated).
Removing fluentd / minikube examples is out of context of this commit.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The runk tool hasn't been supported for a few years, with no maintainers
since ManaSugi stopped being involved in the project and the CI was
disabled in 2024.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
This reverts commit 6130d7330f, as we're
officially switching to the rust version of kata-deploy.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
a2534e7bc8 introduced the logic to also
release a kata-tools tarball, but it missed allowing
KATA_TOOLS_STATIC_TARBALL env var to be passed to the release script,
leading to the following error during the release process:
```
ERROR: Invalid environment variable "KATA_TOOLS_STATIC_TARBALL"
```
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
In startVM(), for VMMs without hotplug support (e.g., Firecracker or
QEMU microvm), the runtime runs prestart hooks but misses rescanning
the network namespace. This causes VMs to boot with uninitialized
network configs, as updates from CNI plugins are not captured.
This patch adds a network rescan via AddEndpoints after prestart hooks
for the non-hotplug path, ensuring correct network info is passed to
the VMM configuration before the VM starts.
Fixes #11500
Signed-off-by: XanderC <xanderc@qq.com>
virtio-9p has not been supported for a long time, especially within
runtime-rs, and we have no plan to support it. Removal of the
related items is reasonable.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
As the Memory Agent feature is not used within CoCo (TDX/SNP) scenarios,
it's better to just remove the related sections.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
It aims to introduce some related items within the Makefile to enable
SEV-SNP settings in the configuration when running make build, and make it
possible to generate the rendered qemu-snp-runtime-rs configuration
based on the *.in template.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
To make qemu-runtime-rs with CoCo work well on SEV-SNP platforms, a
dedicated SEV-SNP configuration should be introduced to help prepare the
related CVM resources.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Enable measured rootfs in the configuration at make build time, and add
some other important items to make the configuration work well.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
It aims to introduce some related items within the Makefile to enable
Intel TDX settings in the configuration when running make build, and make
it possible to generate the rendered qemu-tdx-runtime-rs configuration
based on the *.in template.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
To make qemu-runtime-rs with CoCo work well on TDX platforms, a
dedicated TDX configuration should be introduced to help prepare the
related CVM resources.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Systemd-managed cgroups use the slice:prefix:name format, which is
not a filesystem path. Calling MoveTo() on such paths fails with
"invalid group path" and can abort cleanup before Delete() runs.
In some cases, this causes pod teardown delays.
Skip MoveTo for systemd-formatted sandbox/overhead cgroup paths when
sandbox_cgroup_only is true; systemd moves tasks on unit deletion.
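A rough illustration of the distinction, using a hypothetical helper name;
systemd-managed paths use the colon-separated unit form rather than a
cgroupfs path:
```
/// Return true if the cgroup path looks like a systemd "slice:prefix:name"
/// unit description rather than a cgroupfs path.
fn is_systemd_cgroup(path: &str) -> bool {
    !path.starts_with('/') && path.split(':').count() == 3
}

fn main() {
    // Skip MoveTo() for systemd-formatted paths; systemd moves the tasks
    // itself when the unit is deleted.
    for p in ["system.slice:kata:abcd1234", "/kata/sandbox-abcd1234"] {
        println!("{p} -> systemd managed: {}", is_systemd_cgroup(p));
    }
}
```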
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
With cold-plug becoming, by design, the only supported mode with the
update of NVRC to v0.1.1, resolve the remaining references to hot-plug.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Enable post-install verification in kata-deploy CI tests. When
HELM_VERIFY_DEPLOYMENT is set, a simple verification pod is created
that runs with the Kata runtime to confirm deployment succeeded.
The verification pod prints kernel info and exits - success indicates
the Kata runtime is properly configured and functional.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Add optional verification that runs after kata-deploy installation.
When a pod spec is provided via --set-file verification.pod=<file>,
a verification job runs after install/upgrade to validate deployment.
The user is fully responsible for the verification pod content:
- Pod name, runtimeClassName, annotations, and verification logic
- Pod must exit 0 on success, non-zero on failure
The verification job simply:
1. Waits for kata-deploy DaemonSet to be ready
2. Applies the user-provided pod spec
3. Waits for the pod to complete
4. Shows logs and cleans up
Usage:
helm install kata-deploy ... \
--set-file verification.pod=/path/to/your-pod.yaml
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
To unlock the release, move the job that publishes the kata payload after push to an alternate runner (IBM-owned) for ppc64le.
Signed-off-by: Amulyam24 <amulmek1@in.ibm.com>
The new NVRC version works for CC and non-CC use cases,
no --feature confidential needed anymore.
Bump versions.yaml and adjust deployment instructions.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Disable NVDIMM. When using GPU passthrough, using NVDIMM would create
a r/o file-backed memory region. When using a GPU, QEMU tries to DMA-
map guest memory for the device, resulting in a mapping error:
memory listener initialization failed: Region mem0:
vfio_container_dma_map ... -22 (Invalid argument).
For the CC configs, NVDIMM is disabled by default in qemu_amd64.go
with a warning, but we also explicitly disable the setting in the
shim configuration file.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
We don't need to store the kernel headers anymore. We do need to store
the kernel modules, instead.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
We've been doing some bad file-based driver determination;
now, with versions.yaml, there is a single source of truth.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
We need to package the built modules so the rootfs is able to
consume them. We package the whole
/lib/modules/$(uname -r) directory with strip=2.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
We want to have deterministic behaviour and only
one valid driver version acceptable via versions.yaml
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
We actually never installed yq for the kernel build;
there are some paths that use yq but were never hit.
For the GPU use-case we need to read values from versions.yaml.
In preparation for coco v0.18.0, bump the version of image-rs we use in
agent-ctl to match what we have in versions.yaml.
Drop the snapshotter-overlayfs feature. This was dropped from image-rs
when we removed enclave-cc support.
Signed-off-by: Tobin Feldman-Fitzthum <tfeldmanfitz@nvidia.com>
Before cutting the Kata release that will be used with CoCo v0.18.0,
let's bump the versions of Trustee and guest-components to latest.
Signed-off-by: Tobin Feldman-Fitzthum <tfeldmanfitz@nvidia.com>
This is needed as the 580 driver doesn't build against 6.18.x, and the
590 driver is not yet fully working for our case, thus we stick to the
previous version that worked before.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Bump both the kernel and kernel-confidential versions from v6.12.x and
v6.16.x to v6.18.4, aligning with the new LTS release.
Kernel 6.18 introduced several configuration changes that required
updates to our kernel config fragments:
* CRYPTO_FIPS dependencies changed:
- In 6.12: depended on !CRYPTO_MANAGER_DISABLE_TESTS
- In 6.18: now depends on CRYPTO_SELFTESTS (which requires EXPERT)
Added CONFIG_EXPERT=y and CONFIG_CRYPTO_SELFTESTS=y to crypto.conf
to satisfy the new dependency chain.
* CONFIG_EXPERT is a naughty one, as it disables / enables a bunch
of things behind one's back, probably just to prove a point that
it is for experts ;-) ... regardless, a reasonable amount of
options had to be re-added in order to make sure nothing ends
up broken.
* Legacy iptables support:
Kernel 6.18 requires explicit legacy xtables/iptables configs for
IP_NF_* options. Added CONFIG_NETFILTER_XTABLES_LEGACY,
CONFIG_IP_NF_IPTABLES_LEGACY, and CONFIG_IP6_NF_IPTABLES_LEGACY
to netfilter.conf.
* Module signing dependencies:
Added CONFIG_MODULES=y and other required dependencies to
module_signing.conf to ensure MODULE_SIG can be properly enabled.
* Whitelist updates:
- Added CONFIG_NF_CT_PROTO_DCCP (removed in 6.18+)
- Added CONFIG_CRYPTO_SELFTESTS, CONFIG_NETFILTER_XTABLES_LEGACY,
CONFIG_IP_NF_IPTABLES_LEGACY, CONFIG_IP6_NF_IPTABLES_LEGACY
(added in 6.18+, not present in older kernels like 6.12)
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
A few minor changes to the Zensical config that make navigation easier. Also
fixed a couple of bugs with local serving and added some quality-of-life
features to Zensical.
Signed-off-by: LandonTClipp <11232769+LandonTClipp@users.noreply.github.com>
This commit adds a GitHub workflow for building a GitHub Pages site for the markdown
files in the docs/ directory. Zensical is a new markdown-based static site generation
framework built by the creators of Material for MkDocs. https://zensical.org/
This commit does not clean the doc structure, so site navigation is initially going to
be messy.
Signed-off-by: LandonTClipp <11232769+LandonTClipp@users.noreply.github.com>
Remove the agent hotplug timeout parameter from the kernel
command line. Having shifted to VFIO cold-plug, this parameter is
no longer needed.
Remove the no longer required parameter for TDX and thus align the
SNP and TDX configurations.
Add a parameter to prevent the kernel from mounting the /dev tmpfs. NVRC
and later on kata-agent attempt this. While kata-agent does not
panic when mounting /dev fails, NVRC makes mounting /dev a hard
requirement.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
set_container_command() previously appended command arguments
one-by-one with
'.command += [...]'. This makes the helper non-idempotent and can
lead to unexpected command arrays when invoked multiple times.
Update the helper to set the full command array in a single yq v4
expression and print the target YAML path plus the command being
applied to simplify debugging when tests fail.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
The pod config file created by new_pod_config() was generated via
mktemp using the template "pod-config.yaml.in.XXX", which produces
filenames that do not end with ".yaml" (e.g. pod-config.yaml.in.ABC).
If the random suffix happens to form a special combination such as
".Csv" or ".Xml", the following yq operations will fail.
Some helpers and tooling assume the config path ends with ".yaml".
Switch the mktemp template to place the random suffix before the
extension so the returned path always ends with ".yaml".
Fixes: #12268, #12319
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
This is a suggestion from Choi, so we can easily test with a specific
kubectl version and also easily understand which kubectl version is
being used in case of failure.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
This image will be used by our helm charts to verify that a
kata-containers deployment is correct.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Enhance the wait_for_migration implementation to reliably wait for
QEMU migration completion and avoid the previous `sleep(280ms)`
delay.
(1) Add an initial fast-path query to return immediately if
migration is already completed/failed/cancelled.
(2) Use a hard deadline to enforce timeouts deterministically.
(3) Implement adaptive polling with backoff and a maximum interval
to reduce QMP load while keeping responsiveness.
(4) Unify migration status handling and return clear errors on
failed/cancelled states.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Return information about the current migration process. The input
and output are as below:
{ 'command': 'query-migrate', 'returns': 'MigrationInfo' }
Note that this QEMU API is available within qapi-rs (v0.15+).
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
The detailed information about the updated versions is as below:
```
qapi = { version = "0.15", features = ["qmp", "async-tokio-all"] }
qapi-spec = "0.3.2"
qapi-qmp = "0.15.0"
```
and it will correct some corresponding structures.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Change the secure_storage_integrity option's default value to true.
With this, integrity protection for encrypted block device contents
will be requested from the confidential data hub by default, see the
agent's cdh_handler_trusted_storage function in rpc.rs.
This behavior can be disabled by explicitly setting the
agent.secure_storage_integrity parameter to 0 or false via kernel
command line parameters.
This will affect the trusted storage implementation for the guest-pull
mechanism, and it will affect future implementations using this code
path, such as implementations for ephemeral secure storage.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
In some builds we are seeing:
```
error: could not create temp file /opt/rustup/tmp/r2xu46kwuyc7k2kr_file: Permission denied (os error 13)
```
in the agent-ctl build, so try and port a fix from #12313 to the tools build
to try and resolve this.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Fixes deploying kata-containers using k3s. The deploy script fails with /opt/kata-artifacts/scripts/kata-deploy.sh: line 397: [: too many arguments
Signed-off-by: Federico A. Corazza <git@facorazza.com>
yamllint complains that there is only one space before the comment,
so add a second to prevent this annoying message showing up.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Create a new page for a reference implementation for Kubernetes
using QEMU, the go shim and an NVIDIA rootfs. The new page
contains information on:
- components involved in the NVIDIA (TEE) GPU scenario
- orchestration flow for GPU passthrough scenarios
- deployment guidance
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
- Apply a few structural/grouping changes and improve flow
- Group build sections together
- Move usage examples to last section
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
The following error was observed during virtiofsd static build:
```
error: could not create temp file /opt/rustup/tmp/p44enysfaxwdbvw4_file:
Permission denied (os error 13)
```
This occurs because RUSTUP_HOME and CARGO_HOME were initialized by the
root user during `docker build`, but `cargo build` is executed as a
non-root user via 'docker run --user'.
Ensure these directories are writable by adjusting the permission after
the toolchain installation is complete.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
OVMF build for Intel TDX (aka "TDVF") was disabled in favor of Ubuntu/
CentOS pre-upstream releases of Intel TDX.
See 4292c4c3b1.
It's time to re-enable the build and move runtime configurations to
use it (the latter will be done in a later commit).
This is a partial revert of 4292c4c3b with the following changes:
- Stop calling OVMF for Intel TDX "TDVF" and follow the naming distros
use for TDX enabled build: OVMF.inteltdx.fd.
- Single binary OVMF.inteltdx.fd is supported using -bios QEMU param.
- Secure Boot infrastructure is disabled since Kata does not support it.
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
Actually this method is indeed called; just add the `#[allow(dead_code)]`
attribute to allow the UT to pass. The warning looks like:
warning: method `send_message_with_payload` is never used
|
224 | impl<R: Req> Endpoint<R> {
| ------------------------ method in this implementation
...
522 | pub fn send_message_with_payload<T: Sized, P: Sized>(
| ^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: `#[warn(dead_code)]` on by default
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
warning: unused `std::result::Result` that must be used
-->
src/dragonball/dbs_virtio_devices/src/vhost/vhost_user/net.rs:679:9
|
679 | / VirtioDevice::<Arc<GuestMemoryMmap<()>>, QueueSync,
GuestRegionMmap>::write_config(
680 | | &mut dev, 0, &config,
681 | | );
| |_________^
|
= note: this `Result` may be an `Err` variant, which should be
handled
= note: `#[warn(unused_must_use)]` on by default
help: use `let _ = ...` to ignore the resulting value
|
679 | let _ = VirtioDevice::<Arc<GuestMemoryMmap<()>>,
QueueSync, GuestRegionMmap>::write_config(
| +++++++
warning: unused `std::result::Result` that must be used
-->
src/dragonball/dbs_virtio_devices/src/vhost/vhost_user/net.rs:683:9
|
683 | / VirtioDevice::<Arc<GuestMemoryMmap<()>>, QueueSync,
GuestRegionMmap>::read_config(
684 | | &mut dev, 0, &mut data,
685 | | );
| |_________^
|
= note: this `Result` may be an `Err` variant, which should be
handled
help: use `let _ = ...` to ignore the resulting value
|
683 | let _ = VirtioDevice::<Arc<GuestMemoryMmap<()>>,
QueueSync, GuestRegionMmap>::read_config(
| +++++++
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
The warning looks like:
...
warning: variable does not need to be mutable
--> src/dragonball/dbs_virtio_devices/src/vsock/csm/txbuf.rs:217:13
|
217 | let mut tmp: Vec<u8> = vec![0; TxBuf::SIZE - 2];
| ----^^^
| |
| help: remove this `mut`
|
= note: `#[warn(unused_mut)]` on by default
...
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Till k8s 1.34 we could grep by "Started containerd". From k8s 1.35
onwards the event message changed and we should, instead, grep by
"Container started".
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
QEMU v10.2.0 was released on December 24th, 2025.
The experimental GPU SNP / TDX are also pointing to v10.2.0 release with
their gpu-{snp,tdx}-20260107 branch.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
sha2 0.9.3 includes the use of cpuid-bool, which was renamed to cpufeatures
around 5 years ago. Try moving to a workspace dependency of sha2
and bumping to the latest version to remediate RUSTSEC-2021-0064
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
While the use-case of Intel QuickAssist (QAT) accelerated crypto
and/or compression with k8s and Kata Containers is still valid,
the setup instructions are outdated:
Starting with Intel Xeon Gen4 (Sapphire Rapids), the QAT driver
stack moved to in-tree drivers without a separate SR-IOV VF
driver.
Drop all the setup instructions but keep the use-cases doc
for reference. Users wanting to enable the use-case should consult
with the Intel QAT Device plugins or Intel QAT DRA driver authors.
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
The nontee job (run-k8s-tests-coco-nontee) for qemu-coco-dev-runtime-rs
is running well and it's time to make it required when the CI runs.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Using the built-in size_of_val is easier to read and less error-prone
than doing this calculation manually.
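For illustration, a tiny example with made-up values comparing the manual
calculation with the built-in:
```
fn main() {
    let regions = [0u64; 4];

    // Manual calculation: easy to get wrong if the element type changes.
    let manual = regions.len() * std::mem::size_of::<u64>();

    // Built-in: follows the value's actual type automatically.
    let built_in = std::mem::size_of_val(&regions);

    assert_eq!(manual, built_in);
    println!("{built_in} bytes");
}
```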
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
#[cfg(feature = "cargo-clippy")] has been deprecated for years,
so should be replaced with `#[cfg(clippy)]`
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
There are many, many null pointer dereferences in the bindgen code
when moving between rust 1.85.1 and 1.86, and no docs of the source
that it was generated from, so try and skip
these tests from running until an SME can look at them @lifupan
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Clippy is recommending that format args are inlined for
better clarity, so update our code to remove these warnings
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Clippy is recommending that format args are inlined for
better clarity, so update our code to remove these warnings
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Clippy is recommending that format args are inlined for
better clarity, so update our code to remove these warnings
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
runtime-rs crates are pulled into kata-ctl and some of these have
bumped recently, so update these in kata-ctl as well
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Clippy is recommending that format args are inlined for
better clarity, so update our code to remove these warnings
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Clippy is recommending that format args are inlined for
better clarity, so update our code to remove these warnings
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Clippy is recommending that format args are inlined for
better clarity, so update our code to remove these warnings
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Clippy is recommending that format args are inlined for
better clarity, so update our code to remove these warnings
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Clippy is recommending that format args are inlined for
better clarity, so ensure our docs include this
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Clippy is recommending that format args are inlined for
better clarity, so update our code to remove these warnings
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
In #12151 the version was bumped in Cargo.toml, but the corresponding update
was not done, so run `cargo update -p container-device-interface` to apply it.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Since #12204 was merged, the following error has been observed:
```
bats warning: Executed 1 instead of expected 2 tests
[run_kubernetes_tests.sh:162] ERROR: Tests FAILED from suites: k8s-empty-dirs.bats
```
The cause is that `pod_logs_file` is re-declared as a local variable
in the second test before skipping, which makes it inaccessible
in `teardown()` and leads to an error.
This commit removes the re-declaration of the variable.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
The Rust kata-deploy binary calls list_runtimeclasses() during NFD
setup, but the ClusterRole only granted get and patch permissions.
Add the list verb to the runtimeclasses resource permissions to fix
the RBAC error:
runtimeclasses.node.k8s.io is forbidden: User
\"system:serviceaccount:kube-system:kata-deploy-sa\" cannot list
resource \"runtimeclasses\" in API group \"node.k8s.io\" at the
cluster scope
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
KVM is not available in our ARM runners, so let's skip those tests
accordingly, while keeping the rest of the test cases running on machines
with KVM present and access to the KVM device.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
There are test cases that require interaction with the KVM device;
introduce the skip_if_kvm_unaccessable macro to skip them.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Changes in NIM/RAG samples:
- update image references
- update memory requirements, timeouts, model name
- sanitize some of the probes and print-out
Further refinements can be made in the future.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
cargo test was trying to evaluate the documentation comment and failing,
so try and make the comment explicitly text to avoid this
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
A few structs in genpolicy are never constructed, so add
`#[allow(dead_code)]` to prevent this clippy warning.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
In Unicode you can have multi-byte characters, so it's better to
use char_indices than to enumerate the bytes.
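A tiny illustration (not the actual genpolicy code) of why byte enumeration
and character positions drift apart:
```
fn main() {
    let s = "naïve";

    // The 'ï' occupies two bytes in UTF-8, so byte positions and
    // character positions drift apart.
    let byte_count = s.bytes().count(); // 6
    let char_count = s.chars().count(); // 5

    // char_indices yields (byte_offset, char) pairs that are always
    // valid boundaries for slicing.
    for (i, c) in s.char_indices() {
        println!("{i}: {c}");
    }
    println!("{byte_count} bytes, {char_count} chars");
}
```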
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
VirtioBlkCcwDeviceHandler and VirtioBlkCcwHandler
are only constructed on s390x, so add #[cfg(target_arch = "s390x")]
to all the code
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
We can use the new Error::other helper rather than
Error::new(ErrorKind::Other, ...) and drop our own macro that did this mapping.
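For illustration, the old and new forms side by side (hypothetical error
message, not an actual call site):
```
use std::io::{Error, ErrorKind};

fn main() {
    // Old form, previously hidden behind a local macro:
    let old = Error::new(ErrorKind::Other, "something went wrong");

    // New form, available since Rust 1.74:
    let new = Error::other("something went wrong");

    println!("{old} / {new}");
}
```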
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Fix the warning thrown up:
```
warning: hiding a lifetime that's elided elsewhere is confusing
--> /root/go/src/github.com/kata-containers/kata-containers/src/libs/kata-types/src/utils/u32_set.rs:50:17
|
50 | pub fn iter(&self) -> Iter<u32> {
| ^^^^^ --------- the same lifetime is hidden here
| |
| the lifetime is elided here
|
= help: the same lifetime is referred to in inconsistent ways, making the signature confusing
= note: `#[warn(mismatched_lifetime_syntaxes)]` on by default
help: use `'_` for type paths
|
50 | pub fn iter(&self) -> Iter<'_, u32> {
| +++
```
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Update virtiofsd to its latest release.
Here we also need to update the Alpine version used by the builder, as we
need a version of musl-dev new enough to have wrappers for pread2 and
pwrite2. While we're at it, bump to the latest.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Add two attestation tests. The first one sets a resource policy that
requires CPU0 to have an affirming trust level. This is a negative test
which can run on any platform. Setting this policy without setting any
reference values should result in an attestation failure.
Next, a second test will set the same policy, but this time it will use
the journal log to find the QEMU command line from the previous test and
calculate the expected reference values. Currently this is only
supported on SNP using the sev-snp-measure tool, but the same flow
should work on other platforms.
Signed-off-by: Tobin Feldman-Fitzthum <tfeldmanfitz@nvidia.com>
The five tests are set to the same vhost socket path, which could lead
to them racing with one another. Use unique names to avoid this.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Let's bump the experimental {tdx,snp} QEMU to the tags created today in the
Confidential Containers repo, which match with QEMU 10.2.0-rc3.
This bump is mostly for early testing what will become 10.2.0, which
will be bumped everywhere then.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
It will address the issue:
"# bats warning: Executed 0 instead of expected 1 tests"
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
As each case needs the get_pod_config_dir preparation, a better
method is to move it directly into the setup_common method.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
To measure the journal duration, we need to clearly print the journal
start time and end time for each case, which helps ensure the journal
log covers the specified period for the case.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
For failure cases within CI, we need to dump the kata log to help
address issues, but currently large log messages mean we can only
see a partial log.
We remove the initdata log output and increase the log level to reduce
log output.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Currently policy_settings_dir is created only when
BATS_TEST_NUMBER == "1",
but delete_tmp_policy_settings_dir "${policy_settings_dir}" is
called in teardown() for every test. This means that for tests
after the first one teardown() may attempt to delete a directory
that was already removed by a previous test, or rely on a value
that does not belong to the current test execution.
Adjust teardown logic so that policy_settings_dir is only deleted
for the first test case (BATS_TEST_NUMBER == "1") and ignored for
subsequent tests. This keeps the original optimization of running
genpolicy only once, while avoiding unnecessary or confusing cleanup
attempts in later test cases.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
The previous pod_name was set as local, which cannot be captured
within the teardown() function, causing failures.
This commit just removes the `local pod_name` to make it a global
variable.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Otherwise we may hit a `no space left on device` when building the rust
kata-deploy binary.
This happens mostly because of the multi-stage build used to generate a
distroless final container.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
There we ensure labels are added to better deal with ownership of the
runtimeclasses. It's not strictly needed here as helm does take care of
the ownership, but also doesn't hurt to follow what seems to be a common
practice.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Let's shamelessly duplicate the nightly job to have at least nightly
runs using the rust implementation of kata-deploy.
The reason for doing that is to be pragmatic, as pragmatic as possible,
and avoid switching away from the scripts before the 3.24.0 release, while
still testing both ways till the switch happens.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Differently from the scripts, which are called as `bash -c ...`, the
kata-deploy rust binary must be invoked directly, as we do not even have
a shell in its container.
For now, when the rust version is used, the image used has the "-rust"
suffix, which will help us to have both ways being used / tested for a
little while.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
kata-deploy shell script is not THAT bad and, to be honest, it's quite
handy for quick hacks and quick changes. However, it's been
increasingly becoming harder to maintain as it's grown its scope from a
testing tool to the proper project's front door, lacking unit tests, and
with an abundance of complex regular expressions and bashisms to be able
to properly parse the environment variables it consumes.
Moreover, the fact it is a Frankenstein's monster glued together using
python packages, golang binaries, and a distro-dependent container makes
it VERY HARD to use from a distroless container (thus,
avoiding security issues), preventing further integration with
components that require a higher standard of security than we've been
requiring.
With everything said, with the help of Cursor (mostly on generating the
tests cases), here comes the oxidized version of the script, which runs
from a distroless container image.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The ORAS cache helper needs PUSH_TO_REGISTRY to be set to 'yes' to
push new artifacts to the cache. However, this environment variable
was not being passed to the Docker container during agent, tools, and
busybox builds.
Moreover, for ghcr.io authentication, add support for using GH_TOKEN and
GITHUB_ACTOR as fallbacks when explicit credentials
(ARTEFACT_REGISTRY_USERNAME/PASSWORD) are not provided.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The GPG key used for gperf was incorrectly set to the busybox
maintainer's key (Denis Vlasenko) instead of the gperf maintainer's
key (Marcel Schaible).
Wrong key (busybox): C9E9416F76E610DBD09D040F47B70C55ACC9965B
Denis Vlasenko <vda.linux@googlemail.com>
Correct key (gperf): EDEB87A500CC0A211677FBFD93C08C88471097CD
Marcel Schaible <marcel.schaible@studium.fernuni-hagen.de>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
kata-remote is a runtime class that cloud-api-adaptor relies on to work.
kata-remote by itself does nothing, and that's the reason it's disabled
by default. We're only adding it here so cloud-api-adaptor charts can
simply do something like `--set shims.remote.enabled=true`.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
When updating ephemeral storages, MS_REMOUNT is explicitly passed as,
for instance, `/dev/shm` should be remounted after memory is hotplugged.
Till now Kata Containers has been explicitly ignoring such updates,
leading to the containers' `/dev/shm` having the size of "half of the
memory allocated, during the startup time", which goes against the
expected behaviour.
Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
We're only releasing those for amd64 as that's the only architecture
we've been building the packages for.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Let's ensure we can create a specific "tools" tarball, which will help
those who only need to pull those either for testing or production
usage.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
After the runtime-rs workspace was merged into the root workspace, the
features passed when building runtime-rs need to be refactored to be
correctly propagated. Taking dragonball for example, runtime-rs requires
runtimes to depend on the virt_containers feature, and virt_containers
needs to handle hypervisor features specifically.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
After the workspace integration of runtime-rs, the output of
runtime-rs is now under the repo root, instead of src/runtime-rs. Change the
TARGET_PATH accordingly to tell the Makefile where to look up the output.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Some cases in the dragonball crates require interaction with the KVM module
to complete, which requires root privileges. Skip those tests when running
as a non-root user.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
MMIODeviceInfo inside the test module of dbs_boot on aarch64 is used for
testing purposes, but the `pub` attribute requires it to have documentation.
Since this is used only for testing purposes, let's allow missing_docs
for it.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
The test set of dbs_utils's tap module is missing the test attribute, which
makes dev-dependencies unusable. Mark the tap tests as a test module.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
This is a follow-up of 3fbe693.
Remove runtime-rs from the exclude list and make it a member of the root
workspace.
Specify shim and shim-ctl as the binaries of the runtime-rs package,
bringing runtime-rs and all its members into the root workspace.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Make runtime-rs a package that produces shim and shim-ctl as its binary
products, which enables the Makefile to work after it's incorporated into
the root workspace.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Init the storage options with the original rootfs options.
Additionally, for XFS, append nouuid to the mount options if it does not
already exist.
Signed-off-by: shezhang.lau <shezhang.lau@antgroup.com>
To protect against upstream download failures for gperf and busybox,
implement ORAS-based caching to GHCR.
This adds:
- download-with-oras-cache.sh: Core helper for downloading with cache
- populate-oras-tarball-cache.sh: Script to manually populate cache
- warn() function to lib.sh for consistency
Modified build scripts to:
- Try ORAS cache first (from ghcr.io/kata-containers/kata-containers)
- Fall back to upstream download on cache miss
- Automatically push to cache when PUSH_TO_REGISTRY=yes
The cache is automatically populated during CI builds, and parallel
architecture builds check for existing versions before pushing to avoid
race conditions.
Forks benefit from upstream cache but can override with their own:
ARTEFACT_REPOSITORY=myorg/kata make agent-tarball
Generated-By: Cursor IDE with Claude
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The runtime handles the fsGroup field of the pod security context by
adding a mount option to the generated storage object [1]. This commit
changes genpolicy to expect this option.
Instead of passing another side input to
yaml::get_container_mounts_and_storages, we pass the entire PodSpec.
This reduces the necessary changes in the pod-generating resources and
allows for possible future use of other PodSpec fields.
[1]: https://github.com/kata-containers/kata-containers/blob/0c6fcde1/src/runtime/virtcontainers/kata_agent.go#L1620-L1625
Fixes: #11934
Signed-off-by: Markus Rudy <mr@edgeless.systems>
I've seen this happening with the GPU SNP CI every now and then, but I
don't really understand how this was not caught by the TDX / SNP CI
themselves before.
In any case, the error seen is:
```
Error from server (Forbidden): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"nfd.k8s-sigs.io/v1alpha1\",\"kind\":\"NodeFeatureRule\",\"metadata\":{\"annotations\":{},\"name\":\"amd64-tee-keys\"},\"spec\":{\"rules\":[{\"extendedResources\":{\"sev-snp.amd.com/esids\":\"@cpu.security.sev.encrypted_state_ids\"},\"labels\":{\"amd.feature.node.kubernetes.io/snp\":\"true\"},\"matchFeatures\":[{\"feature\":\"cpu.security\",\"matchExpressions\":{\"sev.snp.enabled\":{\"op\":\"Exists\"}}}],\"name\":\"amd.sev-snp\"},{\"extendedResources\":{\"tdx.intel.com/keys\":\"@cpu.security.tdx.total_keys\"},\"labels\":{\"intel.feature.node.kubernetes.io/tdx\":\"true\"},\"matchFeatures\":[{\"feature\":\"cpu.security\",\"matchExpressions\":{\"tdx.enabled\":{\"op\":\"Exists\"}}}],\"name\":\"intel.tdx\"}]}}\n"}}}
to:
Resource: "nfd.k8s-sigs.io/v1alpha1, Resource=nodefeaturerules", GroupVersionKind: "nfd.k8s-sigs.io/v1alpha1, Kind=NodeFeatureRule"
Name: "amd64-tee-keys", Namespace: ""
for: "/opt/kata-artifacts/node-feature-rules/x86_64-tee-keys.yaml": error when patching "/opt/kata-artifacts/node-feature-rules/x86_64-tee-keys.yaml": nodefeaturerules.nfd.k8s-sigs.io "amd64-tee-keys" is forbidden: User "system:serviceaccount:kube-system:kata-deploy-sa" cannot patch resource "nodefeaturerules" in API group "nfd.k8s-sigs.io" at the cluster scope
```
And the fix is as simple as allowing patching and updating a
nodefeaturerule in our service account RBAC.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Since the CI issue for s390x was resolved on Dec 5th,
the nightly test result has gone green for 10 consecutive days.
This commit puts the e2e tests for s390x again into the required job list.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
Let's remove the deprecated features that were marked for removal
after Kata Containers 3.23.0:
kata-deploy.sh:
- Remove non-arch-specific variable fallbacks (SHIMS, DEFAULT_SHIM,
SNAPSHOTTER_HANDLER_MAPPING, ALLOWED_HYPERVISOR_ANNOTATIONS,
PULL_TYPE_MAPPING, EXPERIMENTAL_FORCE_GUEST_PULL). Each arch now
has its own default value.
- Remove CREATE_RUNTIMECLASSES and CREATE_DEFAULT_RUNTIMECLASS
variables and associated functions (create_runtimeclasses,
delete_runtimeclasses, adjust_shim_for_nfd). RuntimeClasses are
now managed by Helm chart, not the daemonset script.
- Unsupported architectures now fail with an error instead of
falling back to non-arch-specific defaults.
Helm chart:
- Remove all deprecated env values (createRuntimeClasses,
createDefaultRuntimeClass, debug, shims, shims_*, defaultShim,
defaultShim_*, allowedHypervisorAnnotations, snapshotterHandlerMapping,
snapshotterHandlerMapping_*, agentHttpsProxy, agentNoProxy,
pullTypeMapping, pullTypeMapping_*, _experimentalSetupSnapshotter,
_experimentalForceGuestPull, _experimentalForceGuestPull_*).
- Remove backward compatibility code from _helpers.tpl that checked
for legacy env values.
- Remove legacy env.shims check from runtimeclasses.yaml.
- Remove CREATE_RUNTIMECLASSES and CREATE_DEFAULT_RUNTIMECLASS env
vars from kata-deploy.yaml and post-delete-job.yaml.
- Update RBAC to only include runtimeclasses get/patch permissions
(needed for NFD patching), removing create/delete/list/update/watch.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
- Replace generic errors in sandbox operations with typed SandboxError variants (InvalidContainerId, InitProcessNotFound, InvalidExecId).
- This enables the kata shim to handle specific failure cases differently.
Fixes #12120
Signed-off-by: Adeet Phanse <adeet.phanse@mongodb.com>
Add better error handling to runtime rs to handle when the sandbox itself is killed and recreated.
- Update the kill_process function to skip sending a signal when the process is stopped.
- Always set ProcessStatus::Stopped even when wait_process fails
- In state_process return synthetic state for sandbox container when using Sandbox API
Fixes #12120
Signed-off-by: Adeet Phanse <adeet.phanse@mongodb.com>
Align with other test logic - declare the KATA_HYPERVISOR in the
run bash script, then declare the RUNTIME_CLASS_NAME variable in
the bats files.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Now that we have a more restrictive resource policy for KBS, let
us start adopting it across all NVIDIA test cases. This policy was
previously introduced by the NVIDIA attestation test.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
It aims to upgrade rtnetlink to mitigate netlink log noise.
This commit upgrades the `rtnetlink` dependency (and corresponding
libraries like `netlink-packet-route`) to address excessive and
unnecessary netlink-related logging during sandbox startup.
Problem:
The previously used `rtnetlink v0.16` (depending on `netlink-proto
v0.11.3`) generates a high volume of DEBUG/INFO level netlink messages
during sandbox initialization. This noise:
1. Overloads the logging system, often leading to warnings like
"slog-async: logger dropped messages due to channel overflow."
2. Interferes with effective troubleshooting by distracting developers
from legitimate Kata errors.
Solution:
We upgrade to `rtnetlink v0.19` (and `netlink-proto v0.12`), as testing
confirms that the latest versions have correctly elevated the verbosity
of these netlink internal events to the TRACE level.
This change significantly enhances the log analysis experience by
suppressing unnecessary network-related logs during startup.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
With these changes, we create pod security policies when running
against NVIDIA TEE GPU handlers where AUTO_GENERATE_POLICY is set.
For the non-TEE GPU tests, the added functions bail out by design.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Following existing patterns, we adapt the common policy settings
for NVIDIA GPU CI platforms. For instance, for our CI runners, we
use containerd 2.x.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Enable auto-generate policy for qemu-nvidia-gpu-* if the user
didn't specify an AUTO_GENERATE_POLICY value.
Setting this in run_kubernetes_nv_tests.sh is too late as
gha-run.sh calls into run_tests, setup.sh, and then into
create_common_genpolicy_settings() where the rules.rego and
genpolicy-settings file are being copied to the right locations.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Add one valid test case with 2 GPUs with proper VFIO device
entries and CDI annotations.
Add seven test cases with invalid combinations of VFIO device
entries and CDI annotations.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Add rules for vfio passthrough GPUs. When creating the security
policy document, parse GPU resource limits and derive CDI
annotation patterns and VFIO device entries.
With various values for CDI annotations and device paths being
runtime-dependent, use regular expressions.
For now, this enables passthrough of NVIDIA GPUs, but the changes
are designed to allow for other VFIO device types.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Add the block device specific annotations, dedicated within runtime-rs,
for num_queues and queue_size to the document to help users set the two
parameters.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
This commit introduces the capability to dynamically configure
`queue_size` and `num_queues` parameters via Pod annotations.
Currently, `kata-runtime` allows for static configuration of
`queue_size` and `num_queues` for block devices through its config
file. However, a critical issue arises when a Pod is allocated fewer
CPU cores than the statically configured `num_queues` value. In such
scenarios, the Pod fails to start, leading to operational instability
and limiting flexibility in resource allocation.
To address this, this feature enables users to override the default
queue_size and num_queues parameters by specifying them in Pod
annotations. This allows for fine-grained control and dynamic adjustment
of these parameters based on the specific resource allocation of a Pod.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
The runner is down for a few weeks. I may end up bringing in my personal
runner, but I'm not confident I can easily do this before the holidays,
thus I'm skipping the tests for now.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Set the attestation policy for GPU0 to affirming. This requires
the GPU, for instance, to have production properties, such as
properly signed VBIOS firmware.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
For some reasons this CI is continuously failing, so we'd like to
temporarily skip it for the s390x platform. It will be re-enabled
once we have addressed the related issues.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
As the default enable_annotations in runtime-rs is different from
runtime-go, we should make it align with the configuration in runtime-go.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
This commit refactors the vCPU resource management within runtime's
`CpuResource` structure and related calculation logic to use
floating-point numbers (`f32`) instead of integers (`u32`).
This migration is necessary to fully support the fractional vCPU
allocation introduced in the `kata-types` library, ensuring better
precision in:
1.Allocation Tracking: `current_vcpu` now tracks the precise
fractional value (e.g., 1.5 vCPUs).
2.Resource Calculation: `calc_cpu_resources` now returns a precise
`f32` sum of container vCPU requests, including normalization logic
based on the maximum period, removing the previous integer rounding
steps in the calculation.
3.Hypervisor Interaction: The integer vCPU requirement for the
hypervisor remains, so `ceil()` is now explicitly applied only when
interacting with the hypervisor or agent APIs
(`do_update_cpu_resources`, `current_vcpu`, `online_cpu_mem`).
And key changes as below:
1. `CpuResource::current_vcpu` updated from `u32` to `f32`.
2. `calc_cpu_resources` return type changed from `u32` to `f32`.
3. CPU hotplug logic now uses `f32` for the target vCPU count.
4. `ceil()` is applied before calling `hypervisor.resize_vcpu()` (see the
sketch below).
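A minimal sketch of the fractional accounting and the ceil() at the
hypervisor boundary, using made-up quota/period values rather than the actual
CpuResource code:
```
/// Fractional vCPUs requested by one container: quota / period.
fn container_vcpus(quota: i64, period: u64) -> f32 {
    if quota <= 0 || period == 0 {
        return 0.0;
    }
    quota as f32 / period as f32
}

fn main() {
    // Two containers asking for 0.5 and 1.0 vCPU respectively.
    let requests = [(50_000i64, 100_000u64), (100_000, 100_000)];
    let total: f32 = requests.iter().map(|&(q, p)| container_vcpus(q, p)).sum();

    // The hypervisor still needs an integer vCPU count, so ceil() is
    // applied only at that boundary.
    let hotplug_target = total.ceil() as u32;
    println!("fractional total = {total}, vCPUs to plug = {hotplug_target}");
}
```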
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Refactors `LinuxContainerCpuResources` and `LinuxSandboxCpuResources`
to track calculated vCPU allocation using `f64` (fractional float)
instead of `u64` (milliseconds).
This ensures more precise resource calculation (`quota / period`) and
aggregation by avoiding rounding errors inherent in millisecond-based
integer tracking.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
This commit updates the non-TEE tests to disable two specific test
cases: `k8s-number-cpus.bats` and `k8s-sandbox-vcpus-allocation.bats`.
These tests are designed to cover CPU elasticity/dynamic scaling
capabilities. In the non-TEE scenario, we are enforcing the disabling of
this capability by setting the default configuration to
`static_sandbox_resource_mgmt=true`.
Although the tests currently pass, allowing them to run is logically
inconsistent with the intended non-TEE configuration. Therefore, we are
disabling them for all non-TEE runtimes, specifically targeting:
- `qemu-coco-dev`
- `qemu-coco-dev-runtime-rs`
This change ensures that our non-TEE CI accurately reflects the static
resource management policy and prevents misleading test results.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
As runtime-rs doesn't support block device hotplug on the s390 arch,
we just disable or skip the test when running on s390.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
To support this feature, the corresponding item in the Makefile should be
enabled; it can be set at make build time, just like this:
`DEFSTATICRESOURCEMGMT_QEMU := false`
When users don't want this feature, they can set it to true via
configuration.toml.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Enable the cpu hotplug tests within the k8s-number-cpus.bats for both
cloud-hypervisor and qemu-runtime-rs.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
We have supported CPU hotplug features within dragonball and clh; this
commit enables the test within the CI.
Fixes: #8660
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Due to a previous failure within this case, we chose to skip it, but now
CPU hotplug has been corrected, and it's time to re-enable it.
Add additional cases for the IOMMUFDID method to check what happens
when non-IOMMUFD paths are passed. The method should do the right
thing.
Signed-off-by: LandonTClipp <11232769+LandonTClipp@users.noreply.github.com>
Logging the QMP commands gives us a lot of flexibility to
troubleshoot issues with what is being sent to QEMU.
Signed-off-by: LandonTClipp <11232769+LandonTClipp@users.noreply.github.com>
An import cycle was introduced because of a mutual need
for the constant that describes the prefix of IOMMUFD files.
We need to extract this out into a higher-level package.
Signed-off-by: LandonTClipp <11232769+LandonTClipp@users.noreply.github.com>
The QMP commands sent to QEMU did not properly set up
IOMMUFD objects in the codepath that handles VFIO device
hot-plugging. This is mainly relevant in the Kubernetes
use-case where the VFIO devices are not available when
QEMU is first launched.
Signed-off-by: LandonTClipp <11232769+LandonTClipp@users.noreply.github.com>
The function assumes that the runner is an Ubuntu machine, which so far
has been true as part of our CI.
However, the new ARM runner is running on Debian, and those mirror
additions would simply break.
With this in mind, for any distro that's not ubuntu, let's just make
sure to inform the owner of the system to have bats already installed as
part of the environment provided.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
This reverts commit 5a81b010f2, as we now
have all the infrastructure properly set up as part of our CI node.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Remove the existing containerd guest pull stability tests workflow
as we're going to rebuild all the VMs used for testing and introduce
new, more focused stability tests for nydus-snapshotter.
The new tests will be added soon, as part of another PR.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Now that we've bumped to QEMU 10.2.0-rc1, we can take advantage of a fix
that's present there, which fixes the double memory allocation for the
cases where GPUs are being cold-plugged.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
We've made the pods require a ridiculous amount of memory, just for the
sake of getting them running.
Now that those are running, tests are passing, and CI is required, let's
work on lowering the amount of memory needed, as everything else is working
as expected.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Clean-up shellcheck warnings:
SC2030 (info): Modification of cmd_out is local (to subshell caused by (..) group).
SC2031 (info): cmd_out was modified in a subshell. That change might be lost.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Clean-up shellcheck warnings:
SC2250 (style): Prefer putting braces around variable references even
when not strictly required.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Let's add a simple backup and restore logic for the CDI configuration
file nvidia.com-pgpu.yaml in the k8s-nvidia-*.bats and
k8s-confidential-attestation.bats test files.
Although not optimal, this is a temporary workaround needed until
NVIDIA releases what's needed for the GPU Operator to properly deal with
cold plugged devices for the Confidential Containers cases, which is
work in progress right now.
After that's released, we can revert/drop this patch.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Let's bump experimental {tdx,snp} QEMU to the tags created today in the
Confidential Containers repo, which match QEMU 10.2.0-rc1.
This bump is especially beneficial for us, as we can get rid of QEMU's
double memory allocation when **cold plugging** a GPU.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
If the sandbox has cold-plugged an IOMMUFD device but the
device plugin sends us a /dev/vfio/<NUM> device, we need to
check whether the IOMMUFD device and the VFIO device are the same.
We have the sibling.BDF; we now need to extract the BDF of the
devPath, which is either /dev/vfio/<NUM> or /dev/vfio/devices/vfio<NUM>.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Bump the github.com/sirupsen/logrus version to 1.9.3
across our components where it is back-level to bring us
up-to-date and resolve high severity CVE-2025-65637
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Add the attestation bats test case to the NVIDIA CI and provide a
second pod manifest for the attestation test with a GPU. This will
enable composite attestation in a subsequent step.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Bump to pull in a fix for composite attestation with GPUs. The new
commit ID corresponds to the fix (change for default GPU policy),
currently being the top commit of the main branch.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
This brings two fixes:
- use the test_key variable to check against the aatest value.
- properly check the run command invocation (run without bash does not
seem to like the pipe, which leads to the status result ALWAYS being
evaluated to 1; with this, the deny-all test would ALWAYS succeed
regardless of whether aatest was actually returned or not).
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
When running these tests repeatedly locally, the default policy is not
being reset after the test completes, then subsequent runs fail.
Similar to k8s-sealed-secrets.bats, we set the default policy in an if
condition.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
This allows setting a GPU0 resource policy, enabling GPU
attestation tests to not use the default resource policy.
For now, the policy requires attestation's ear status to
not be contraindicated. In a future change we will require
this to be affirming once our CI runners' vBIOS version is
properly configured.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
This enables attestation tests to figure out whether composite
attestation with a GPU can be executed.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Add the NVIDIA TEE hypervisors. With this, attestation tests can be run
against the NVIDIA handlers, for instance.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
This reverts commit e4a13b9a4a, as it
caused some issues with the GPU workflows.
Reverting it is better, as it unblocks other PRs.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
vfio-ap passthrough has been introduced for runtime-rs,
requiring that the existing test verify this new functionality.
This commit adds:
- containerd config specific to runtime-rs
- extensions to the existing test functions to cover vfio-ap
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
The following have been made for the enablement:
1. Make `MediatedPci` and `MediatedAp` in `VfioDeviceType`
2. Make HostDevice without BDF for `MediatedAp`
3. Add `CCW` to VFioBusMode and set it to VfioConfig as `bus_type`
4. Return `vfio-ap` driver type for `CCW` bus type
5. Set `bus_mode` for `VfioDevice` based on `bus_type`
6. Set `vfio-ap` to the agent device's `field_type`
7. Prepare a different argument for `vfio-ap` for QMP command
8. Set None to all PCI relevant fields
Please keep in mind that `vfio-ap` does not belong to any
type of port topology like PCI (e.g., root or switch)
because devices on s390x are controlled by CCW.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
Until now, we relied on `VMROOTFSDRIVER` to determine
whether a system uses a native CCW bus.
However, this method is not canonical and can be error-prone
depending on the configuration.
This commit introduces a new function that checks
for the presence of CCW bus infrastructure in sysfs
and verifies that native mainframe drivers are available.
It replaces all previous uses of the old detection method.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
Add the small and normal variants of the qemu-runtime-rs
tests to the required-tests list now that they are stable.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
An oci-spec can be passed to the runtime without annotations
(e.g., `ctr run`). In this case, the runtime panics with:
```
src/runtime-rs/crates/runtimes/src/manager.rs:391: called `Option::unwrap()` on a `None` value
```
This commit checks if the annotation is None, and instantiates
the hashmap as an empty map if it is missing. It also adds a None
check for `netns`.
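A minimal sketch of the pattern described above, with hypothetical types (the real spec and netns types in runtime-rs differ): fall back to an empty value instead of unwrapping a `None`.
```
use std::collections::HashMap;

// Minimal sketch (hypothetical types): tolerate an oci-spec without
// annotations (e.g. `ctr run`) instead of unwrap()-ing a None value.
struct Spec {
    annotations: Option<HashMap<String, String>>,
}

fn sandbox_annotations(spec: &Spec) -> HashMap<String, String> {
    // Fall back to an empty map when the spec carries no annotations.
    spec.annotations.clone().unwrap_or_default()
}

fn main() {
    let spec = Spec { annotations: None };
    let annotations = sandbox_annotations(&spec);
    assert!(annotations.is_empty());

    // The same pattern applies to an optional netns path.
    let netns: Option<String> = None;
    let netns_path = netns.unwrap_or_default();
    println!("annotations: {annotations:?}, netns: {netns_path:?}");
}
```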
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
Currently, the protection device configuration is constructed
automatically even if `confidential_guest` is not set.
This commit puts a condition to check the flag and allows the
construction accordingly.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
Updates to the shim-v2 build and the binaries.sh script.
Making sure that both variants "confidential" AND
"nvidia-gpu-confidential" are handled.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Create an initial version of our toolchain policy as agreed in
Architecture Committee meetings and the PTG
Fixes: #9841
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
As tags are mutable and digests are not, let's pin our image
by digest to give our CI a better chance of stability.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
- Swap out the hard-coded nginx registry and versions for reading
the test image details from version.yaml,
which can also ensure that the quay.io mirror is used
rather than the docker hub versions which can hit pull limits
- Try setting imagePullPolicy Always to fix issues with the arm CI
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Using make tarball targets for tools locally, binaries may exist
for both debug and release builds. In this case, cryptic errors
are shown as we try to install multiple binaries.
This change requires exactly one binary to be found and errors out
in other cases.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
When tests regress, the CI wait time can increase significantly
with the current kubectl_retry attempt logic. Thus, align with
other tests and remove kubectl_retry invocations. Instead, rely on
proper timeouts.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
SEV-SNP machine is failing due to nydus not being deployed in the
machine.
We cannot easily contact the maintainers due to the US holidays, and I
think this should become a criterion for a machine not to be added as
required again (coverage across different regions).
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
So far we've only been building the initrd for the nvidia rootfs.
However, we're also interested in having the image being used for a few
use-cases.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
We hit a case where gatekeeper was failing because it thought the WIP check
had failed, but since that run the PR had been edited to remove WIP from
the title. We should listen to edits and unlabels of the PR to ensure that
gatekeeper doesn't get outdated in situations like this.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
When using the multiInstallSuffix we must be cautious about using the shim
name, as qemu-nvidia-gpu* doesn't actually have a matching QEMU itself,
but should rather be mapped to:
qemu-nvidia-gpu -> qemu
qemu-nvidia-gpu-snp -> qemu-snp-experimental
qemu-nvidia-gpu-tdx -> qemu-tdx-experimental
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Fixes: #12123
`include` in #12069, introduced to choose a different runner
based on component, leads to another set of redundant jobs
where `matrix.command` is empty.
This commit gets back to the `runs-on` solution, but makes
the condition human-readable.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
Containerd configuration syntax (`config.toml`) varies across versions,
requiring per-version logic for fields like `runtime`.
However, testing confirms that containerd LTS (1.7.x) and newer
versions fully support the v3 schema for the nydus remote snapshotter.
This commit changes the previous containerd v1 settings in `config.toml`.
Instead, it introduces a unified v3-style configuration for nydus, which
is valid for both the lts and active containerd versions.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
In the CoCo tests jobs @wainersm created a report-tests step
that summarises the jobs, so they are easier to understand and
get results for. This is very useful, so let's roll it out to all the bats
tests.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
In order to have a better way to set things up using a toml editor, we
should take the containerd approach and actually have everything
uncommented. This will help us to unify how we deal with such values in
the future from the kata-deploy POV.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
We need to ensure that we do not blindly append nor blindly override the
kernel parameters set by default, but rather modify the values in case
they exist, and append in case they do not.
Now we're actually making golang and rust runtime behave the same, as so
far they were behaving differently, each version wrong in its own way.
:-p.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
QEMU netdev_add QMP command requires the 'mq' (multi-queue) argument
to be of boolean type (`true` / `false`). In runtime-rs the virtio-net
device hotplug logic currently passes a string value (e.g. "on"/"off"),
which causes QEMU to reject the command:
```
Invalid parameter type for 'mq', expected: boolean
```
This patch modifies `hotplug_network_device` to insert 'mq' as a proper
boolean value of `true`. This fixes sandbox startup failures when
multi-queue is enabled.
Fixes: #12136
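A minimal sketch of the fix described above, assuming the QMP arguments are built as a `serde_json` map (an external crate; the argument layout here is illustrative, not the actual runtime-rs code):
```
use serde_json::{json, Map, Value};

// Minimal sketch (hypothetical argument layout): build netdev_add arguments
// with 'mq' as a JSON boolean rather than the string "on"/"off".
fn netdev_add_args(id: &str, multi_queue: bool) -> Value {
    let mut args = Map::new();
    args.insert("type".to_string(), json!("tap"));
    args.insert("id".to_string(), json!(id));
    // QEMU rejects a string here:
    // "Invalid parameter type for 'mq', expected: boolean".
    args.insert("mq".to_string(), Value::Bool(multi_queue));
    Value::Object(args)
}

fn main() {
    println!("{}", netdev_add_args("net0", true));
}
```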
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Allow users to build the Kata Agent using INIT_DATA=no to disable the
detect_initdata_device() code loop and associated debug log output.
Future additional improvements related to Init Data are tracked by #11532.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
In 69c4fc4e76, I've mistakenly changed the
nvidia-gpu podOverhead, while I should only have changed the TEE
nvidia-gpu ones.
Let's move it back to its original value.
Reported-by: Joji Mekkattuparamban <jojim@nvidia.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
This setting is not needed anymore with Ubuntu 25.10
and the newest QEMU releases for TDX by Ubuntu.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
With issue 11777 being resolved, this commit enables openvpn
policy testing. The remaining work on the security policy
required to successfully run this test case was to enable UDP
ports for Service kinds and to use the mount path's last component
instead of the volume name to construct the expected storage
source path.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Use the mount path's last component instead of the volume name to
construct the expected storage source path. Example: Name of a
volumeMount is 'openvpn-config' and its mountPath is
'/etc/openvpn/'. Without this change, we use 'openvpn-config' to
calculate the expected storage source path. However, we need to
use 'openvpn', because the shim uses the basename of the
destination path as the source suffix and not the volume name.
For reference, see the 'ShareFile' function in 'fs_share_linux.go',
where the filename variable uses 'filepath.Base(m.Destination)'.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
This is for Service kinds using the UDP protocol for a port. An example is
the openvpn-server-service.yaml file that is part of the openvpn CI test.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
We've added logic to properly do the bookkeeping of the TEE keys when
using NFD **AND** creating the runtime classes. However, we need to also
take into consideration the case where the runtimeclasses are being
created by the helm template, and in that case we just update what helm
has deployed.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Remove the nvrc.smi.srs=1 parameter from the kernel command line.
In CC use cases, the attestation agent is expected to set the GPU
ready state. For the CUDA vectorAdd case where attestation agent
is not being used, we set the ready state by adding the kernel
command line parameter through an annotation.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Add an allow-all policy for the CC GPU tests and ensure the init-data
device is being created (hypervisor annotations).
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
The add_allow_all_policy_to_yaml in tests_common.sh needs some
improvements so that this function can support pod manifests with
different resource kinds. For now, moving the Secret definition
to the bottom so that we can create a default policy for the Pod.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
The qemu-nvidia-gpu handlers should not cause is_aks_cluster to
return 1. Otherwise, CI logic will assume these hypervisors run on
AKS hosts, see the following message in CI w/o this change:
INFO: Adapting common policy settings for AKS Hosts
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
We currently start a pod that does a `wget` to the KBS address, and
fails after 5 seconds.
By the time it fails and reports back, we can see that KBS is actually
running, but the workflow failed as the checker failed. :-/
Let's give the KBS more time to show up, and the flakiness should
go away.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Utilize Kubelet's Pod Resource API to determine device allocations
for the Pod during sandbox creation. Use CDI files to translate the device
IDs to corresponding device paths and perform device injection.
Fixes: #12009
Signed-off-by: Joji Mekkattuparamban <jojim@nvidia.com>
Use the pod name variable so that kubectl wait finds the pod. Currently,
kubectl waits for nvidia-nim-llama-3-2-nv-embedqa-1b-v2, not for
nvidia-nim-llama-3-2-nv-embedqa-1b-v2-tee
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Introduce a new devkit parameter which will produce a rootfs
without chiselling. This results in a larger rootfs with various
packages and binaries being included, for instance, enabling the
use of the debug console.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
There are rust packages being cloned and built inside
tools/packaging/kata-deploy/local-build/build folder, which may mislead
those packages to think they are part of the kata root workspace.
Exclude the directory to avoid that.
Reported-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
The person who introduced the check, someone named Fabiano Fidêncio,
forgot a `$` in a variable assignment.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The snp CI has not been required for a while and has recently been
broken, so comment it out from the list of required jobs.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
The run-nydus tests are not stable and blocking PRs, so make them
non-required temporarily until they can be looked at
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Enable auto-generate policy on cbl-mariner Hosts for
qemu-coco-dev-runtime-rs if the user didn't specify an
AUTO_GENERATE_POLICY value.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
We will re-enable this one later on once the changes to properly cold
plug multi GPUs are merged.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Let's just move the podOverhead to a gigantic value, as we do need pod
sandboxes as big as that, and we've noticed QEMU being OOM killed with
smaller overheads.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Those need to pull the models inside the guest, and the guest has 50% of
its memory "allowed" to be used as tmpfs, so, we gotta usa the RAM that
we have.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Yes, we're dealing with a combination of large images and image-rs
concurrent image layers being not optimal.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
We cannot use the same format used for docker, as it includes username
and password, while what's expected when using Trustee does not.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Now that we've bumped Trustee to a version that supports the NVIDIA
remote verifier, let's re-enable the tests.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Right now we have only been passing the env var to the deployment
script, but we really need to pass it to the tests script as well.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Try and reduce the page limit of each job request to avoid the chances of
us tripping over github's 10s api limit.
All credit to @burgerdev for the investigation and suggestion!
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Add the related block queue_size and num_queues to volumes based on
block devices. This is very important for IO performance.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Previously, Clh's disk queue_size and num_queues settings were
hardcoded; they should be configurable with user-defined values.
This commit addresses that by passing these settings through.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Usually, we pass the related block config via BlockConfig, so to reach
the goal of user-friendly queue_size and num_queues settings for users,
queue_size and num_queues are introduced in BlockConfig.
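A minimal sketch of what such a config structure could look like; the struct shape and field names below are assumptions for illustration, not the actual runtime-rs definition:
```
// Minimal sketch (hypothetical struct shape): surface queue_size and
// num_queues on BlockConfig so user-provided values reach the VMM.
#[derive(Debug, Clone)]
struct BlockConfig {
    path_on_host: String,
    // None means "let the hypervisor pick its default".
    queue_size: Option<u16>,
    num_queues: Option<u16>,
}

fn main() {
    let cfg = BlockConfig {
        path_on_host: "/dev/vdb".to_string(),
        queue_size: Some(1024),
        num_queues: Some(4),
    };
    println!("{cfg:?}");
}
```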
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Add two fields, queue_size and num_queues, to BlockDeviceInfo to allow
users to set the related items via configurations.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Add related items for block device queue size and num queues to the
configurations, so users can set the related items via configurations.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
The current implementation causes issues with the Agent Policy
nontee CI tests, as Kata-Agent does not allow any configuration
for `count(Linux.Resources.Devices) == 0`.
This commit ensures that Linux.Resources.Devices, including all its
values, is completely cleared from the OCI Runtime Specification before
being passed to the Kata-Agent.
This addresses the CI failure by enforcing the required empty state for
the Devices cgroup configuration.
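A minimal sketch of the clearing step, using a stand-in struct (the real OCI spec types in runtime-rs differ; field names here are assumptions):
```
// Minimal sketch (assumed spec shape): clear Linux.Resources.Devices before
// the spec is sent to the agent, so the policy condition
// `count(Linux.Resources.Devices) == 0` is satisfied.
#[derive(Default, Debug)]
struct LinuxResources {
    devices: Vec<String>, // stand-in for the real device cgroup entries
}

fn clear_devices(resources: &mut LinuxResources) {
    resources.devices.clear();
}

fn main() {
    let mut res = LinuxResources {
        devices: vec!["allow a *:* rwm".to_string()],
    };
    clear_devices(&mut res);
    assert!(res.devices.is_empty());
    println!("{res:?}");
}
```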
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Previously, the CopyFile implementation attempted to reuse existing guest
paths for subsequent containers within the same Pod. This prevented
correct bind mounting of shared configurations (e.g., ConfigMaps,
Service Accounts) into the later containers within a multi-container
pod, as they lacked their own allocated guest path.
This commit modifies the logic to create a unique guest path for every
container that requires file propagation.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Crates with no workspace setup would consider themselves part of the root
workspace, which is not yet ready for them. Exclude
them for now.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Add Cargo.toml at the repo root, and use this root workspace for as many
Rust components of Kata Containers as possible. This would enable us to
share a common Cargo.lock file, and reduce the noise from dependabot.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Similar to #12075, bump backtrace to 0.3.76 to remove the dependency
on adler, which is unmaintained - contributing to mitigating RUSTSEC-2025-0056
As a side effect this brought in loads of other crate changes, which I think are due
to it bumping the local dependencies that this package builds on.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Similar to #12075, bump backtrace to remove the dependency
on adler, which is unmaintained - contributing to mitigating RUSTSEC-2025-0056
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Similar to #12075, bump backtrace to remove the dependency
on adler, which is unmaintained - contributing to mitigating RUSTSEC-2025-0056
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Similar to #12075, bump backtrace to remove the dependency
on adler, which is unmaintained - contributing to mitigating RUSTSEC-2025-0056
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Similar to #12075, bump flate2 and backtrace to remove the dependency
on adler, which is unmaintained - contributing to mitigating RUSTSEC-2025-0056
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Similar to #12075, bump flate2 and backtrace to remove the dependency
on adler, which is unmaintained - contributing to mitigating RUSTSEC-2025-0056
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Since the network device hotplug is an asynchronous operation,
it's possible that the hotplug operation has returned but
the network device isn't ready in the guest yet, so it's better to
retry the operation and wait until the device is ready in the guest.
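A minimal sketch of such a retry loop, with a hypothetical readiness check (the function name and check are illustrative, not the actual runtime code):
```
use std::thread::sleep;
use std::time::Duration;

// Minimal sketch (hypothetical check): poll until the hotplugged network
// device shows up in the guest, since the hotplug call returning does not
// guarantee the device is ready yet.
fn wait_for_netdev_ready(check: impl Fn() -> bool, retries: u32, interval: Duration) -> bool {
    for _ in 0..retries {
        if check() {
            return true;
        }
        sleep(interval);
    }
    false
}

fn main() {
    // Example: pretend the device becomes ready immediately.
    let ready = wait_for_netdev_ready(|| true, 10, Duration::from_millis(100));
    println!("device ready: {ready}");
}
```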
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
This makes the user experience better, as the admin can deploy Kata
Containers without having to download / set up any additional file.
Of course, if the admin wants something more specific, examples are
provided.
Tests and documentation are updated to reflect this change.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The current format of genpolicy request logs looks a bit like JSON, but
it does not parse out of the box and needs post-processing with sed, for
example.
This commit changes the log format to jsonlines[1], which is basically
newline-delimited compact JSON values. Compared to standard JSON, this
allows streaming output. The resulting file can be converted and
processed programmatically, for example with `jq -s`.
The fields are also adjusted to match the field names of TestRequest, so
that the logged requests can be used immediately in tests.
[1]: https://jsonlines.org/
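As an illustration of the jsonlines idea, a minimal Rust sketch using the `serde_json` crate; the `ep`/`request` field names follow the description above but the function itself is hypothetical, not the actual genpolicy code:
```
use serde_json::json;
use std::io::Write;

// Minimal sketch (hypothetical field names): emit one compact JSON object
// per request on its own line, so the log can be consumed as jsonlines
// (e.g. collected back into an array with `jq -s`).
fn log_request(mut sink: impl Write, endpoint: &str, request: serde_json::Value) -> std::io::Result<()> {
    let line = json!({ "ep": endpoint, "request": request });
    // to_string() via Display produces compact JSON with no embedded newlines.
    writeln!(sink, "{}", line)
}

fn main() -> std::io::Result<()> {
    let stdout = std::io::stdout();
    log_request(
        stdout.lock(),
        "CreateContainerRequest",
        json!({ "container_id": "abc" }),
    )
}
```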
Signed-off-by: Markus Rudy <mr@edgeless.systems>
This should allow keeping future diffs minimal.
The files were formatted with `jq -S`, which should be used after future
updates to the test case files.
Signed-off-by: Markus Rudy <mr@edgeless.systems>
Storing the request type outside the request object has two benefits:
* The request JSON passed to the Rego engine matches more closely what
would be passed by the agent (no `type` field).
* If we want to update the requests, it's easier to insert them into a
dedicated field, rather than inserting them and amending the type
field.
This is a first step towards programmatic updates of testcase files.
This commit also adds the 'Request' suffix to the test case enum, such
that we can use the 'ep' input for allow_request directly.
Signed-off-by: Markus Rudy <mr@edgeless.systems>
Add comments to document the "EnableIOThreads" flag as a switch
for the virtio-blk driver (based on IndepIOThreads).
Signed-off-by: zhangchen.kidd <zhangchen.kidd@jd.com>
Make hotplugged virtio-blk devices attach to independent IOThread 0 by default
when EnableIOThreads and IndepIOThreads are enabled.
Signed-off-by: zhangchen.kidd <zhangchen.kidd@jd.com>
QEMU already supports device_add with iothread args.
Give Kata the ability to hotplug PCI devices with IOThreads.
Currently, only QEMU is supported as the hypervisor; not sure whether it
works for stratovirt.
Signed-off-by: zhangchen.kidd <zhangchen.kidd@jd.com>
Move the original virtio-scsi iothread and the new independent
iothread into a dedicated method for handling the related logic.
Signed-off-by: zhangchen.kidd <zhangchen.kidd@jd.com>
Introduce an independent IOThread framework for Kata Containers.
What is indep_iothreads:
This new feature introduces a way to pre-allocate IOThreads
for the QEMU hypervisor (other hypervisors may be able to support it too).
Independent IOThreads enable IO to be processed in a separate thread,
which generally improves the performance of each module by avoiding
running them in the QEMU main loop.
Why we need indep_iothreads:
In the Kata Containers implementation, many devices are based on the
hotplug mechanism. The real workload container may not share the same
lifecycle as the VM; it may be required to hotplug/unplug new disks
or other devices without destroying the VM. So we can keep the
IOThreads with the VM as an IOThread pool (some devices need multiple
iothreads for performance, like virtio-blk vq-mapping), and the hotplugged
devices can attach/detach to/from an IOThread according to business needs.
At the same time, QEMU also supports "x-blockdev-set-iothread"
to change iothreads (but it needs to stop the VM for data safety).
Current QEMU has many devices supporting iothreads: virtio-blk,
virtio-scsi, virtio-balloon, monitor, colo-compare... etc.
How it works:
Add a new item in the hypervisor struct named "indep_iothreads" in the toml.
The default value is 0, and it reuses the original "enable_iothreads" as
the switch. If "indep_iothreads" != 0 and "enable_iothreads" = true,
it will add the QMP object -iothread indepIOThreadsPrefix_No when the VM starts up.
The first user is virtio-blk, which will attach indep_iothread_0
by default when iothreads are enabled for virtio-blk.
Thanks
Chen
Signed-off-by: zhangchen.kidd <zhangchen.kidd@jd.com>
In commit 1f95d9401b
(runtime-rs: change representation of default_vcpus from i32 to f32),
when the vCPU number is less than 1.0, directly converting the
floating-point number to an integer will truncate it to 0. Therefore,
it needs to be rounded up before being converted back to an integer.
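A minimal sketch of the rounding behavior (the helper name is hypothetical, not the actual runtime-rs function):
```
// Minimal sketch: a fractional default_vcpus below 1.0 must be rounded up
// before being converted back to an integer, otherwise it truncates to 0.
fn effective_vcpus(default_vcpus: f32) -> u32 {
    default_vcpus.ceil() as u32
}

fn main() {
    assert_eq!(effective_vcpus(0.25), 1); // a plain cast would give 0
    assert_eq!(effective_vcpus(2.0), 2);
    println!("ok");
}
```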
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
Update the `cpath` variable in the policy template to support the
optional `/passthrough` subpath used by runtime-rs. This ensures
that mount source path validation works correctly for both runtime
implementations.
By changing `cpath` to include the `(?:/passthrough)?` regular
expression fragment, we make the `/passthrough` segment optional.
The updated `cpath`:
`/run/kata-containers/shared/containers(?:/passthrough)?`
This single regex pattern now correctly matches both:
1. `/run/kata-containers/shared/containers/<sandbox-id>/...`
(runtime-go)
2. `/run/kata-containers/shared/containers/passthrough/<sandbox-id>/...`
(runtime-rs)
This elegantly resolves the compatibility issue without needing to add
separate or conditional logic to the policy rules, making the policy
more robust and maintainable.
Fixes: #12063
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Add three example values files to make it easier for users to try out
different Kata Containers configurations:
- try-kata.values.yaml: Enables all available shims
- try-kata-tee.values.yaml: Enables only TEE/confidential computing shims
- try-kata-nvidia-gpu.values.yaml: Enables only NVIDIA GPU shims
These files use the new structured configuration format and serve as
ready-to-use examples for common deployment scenarios.
Also update the README.md to document these example files and how to use them.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Update the helm_helper function in gha-run-k8s-common.sh to use the
new structured configuration format instead of the legacy env.* format.
All possible settings have been migrated to the structured format:
- HELM_DEBUG now sets root-level 'debug' boolean
- HELM_SHIMS now enables shims in structured format with automatic
architecture detection based on shim name
- HELM_DEFAULT_SHIM now sets per-architecture defaultShim mapping
- HELM_EXPERIMENTAL_SETUP_SNAPSHOTTER now sets snapshotter.setup array
- HELM_ALLOWED_HYPERVISOR_ANNOTATIONS now sets per-shim allowedHypervisorAnnotations
- HELM_SNAPSHOTTER_HANDLER_MAPPING now sets per-shim containerd.snapshotter
- HELM_AGENT_HTTPS_PROXY and HELM_AGENT_NO_PROXY now set per-shim agent proxy settings
- HELM_PULL_TYPE_MAPPING now sets per-shim forceGuestPull/guestPull settings
- HELM_EXPERIMENTAL_FORCE_GUEST_PULL now sets per-shim forceGuestPull/guestPull
The test helper automatically determines supported architectures for
each shim (e.g., qemu-se supports s390x, qemu-cca supports arm64,
qemu-snp/qemu-tdx support amd64, etc.) and applies per-shim settings
to the appropriate shims based on HELM_SHIMS.
Only HELM_HOST_OS remains in legacy env.* format as it doesn't have
a structured equivalent yet.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Add comprehensive documentation for the new structured configuration
format, including:
- Migration guide from legacy env.* format
- List of deprecated fields with removal timeline (2 releases)
- Examples of the new structured format
- Explanation of key benefits
- Backward compatibility notes
The documentation makes it clear that the legacy format is deprecated
but will continue to work during the transition period.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
This commit adds backward compatibility support to ensure existing
configurations using the legacy env.* format continue to work.
The helper functions now check for legacy env.* values first, and
only fall back to the new structured format if legacy values are
not set. This allows for gradual migration without breaking
existing deployments.
Backward compatibility is maintained for:
- env.shims, env.shims_* (per architecture)
- env.defaultShim, env.defaultShim_* (per architecture)
- env.allowedHypervisorAnnotations
- env.snapshotterHandlerMapping_* (per architecture)
- env.pullTypeMapping_* (per architecture)
- env.agentHttpsProxy, env.agentNoProxy
- env._experimentalSetupSnapshotter
- env._experimentalForceGuestPull_* (per architecture)
- env.debug
Legacy env vars (SHIMS, DEFAULT_SHIM, etc.) are still set in the
DaemonSet when using the old format to maintain full compatibility.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
This commit introduces a new structured configuration format for
configuring Kata Containers shims in the Helm chart. The new format
provides:
- Per-shim configuration with enabled/supportedArches
- Per-shim snapshotter, guest pull, and agent proxy settings
- Architecture-aware default shim configuration
- Root-level debug and snapshotter setup configuration
All shims are disabled by default and must be explicitly enabled.
This provides better type safety and clearer organization compared
to the legacy env.* string-based format.
The templates are updated to use the new structure exclusively.
Backward compatibility will be added in a follow-up commit.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
As some of the global vars can be empty, we should actually check
their _FOR_ARCH version instead.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
As we're making the values.yaml more user friendly, we actually have to
handle the https_proxy and no_proxy entries per shim, instead of having
this globally available, as this will only affect images being pulled
inside the guest (as in, when using TEE variations of the shims).
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Adds a practical set of kernel config used by docker-in-docker and kind
for network bridging and filtering. It also includes the matching IPv6
support to allow tools like kind that require IPv6 network policies to
work out of the box.
This support includes:
- nftables reject and filtering support for inet/ipv4/ipv6
- Bridge filtering for container-to-container traffic
- IPv6 NAT, filtering, and packet matching rules for network policies
- VXLAN and IPsec crypto support for network tunneling
- TMPFS POSIX ACL support for filesystem permissions
The configs are organized across fragment files:
- common/fs.conf: TMPFS ACL support
- common/crypto.conf: IPsec/VXLAN crypto algorithms
- common/network.conf: VXLAN, IPsec ESP, nftables bridge/ARP/netdev
- common/netfilter.conf: IPv6 netfilter stack and nftables advanced features
Fixes: #11886
Signed-off-by: Simon Kaegi <simon.kaegi@gmail.com>
Re-enable AUTO_GENERATE_POLICY for coco-dev Hosts, unless PULL_TYPE is
"experimental-force-guest-pull", or the caller specified a different
value for AUTO_GENERATE_POLICY.
Auto-generated Policy has been disabled accidentally and recently for
these Hosts, by a GHA workflow change.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Don't skip anymore parsing the pause container image when using the
recently updated AKS pause container handling - i.e. when
pause_container_id_policy == "v2".
This was the easiest CI fix for guest pull + new AKS given the *current*
tests. When adding *new* UID/GID/AdditionalGids tests in the future,
these workarounds might need additional updates.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
The update removes the deprecated adler crate from our dependencies. In
addition, we're switching to the default backend (miniz_oxide), which is
a pure Rust implementation and thus much more portable. The performance
impact is negligible, because flate2 is only used for initdata
decompression, which is limited to a couple of MiB anyway.
Signed-off-by: Markus Rudy <mr@edgeless.systems>
The update removes the deprecated adler crate from our dependencies. In
addition, we're switching to the default backend (miniz_oxide), which is
a pure Rust implementation and thus much more portable. The performance
impact is acceptable for a developer tool.
Signed-off-by: Markus Rudy <mr@edgeless.systems>
The GitHub API suggests that `Authorization: Bearer <YOUR-TOKEN>`
is the way to set the auth token, but it also mentions that `token`
should work, so it's unclear if this will help much, but it shouldn't harm.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
The formatting wasn't quite right, so the `qemu-coco-dev-runtime-rs`
hypervisor wasn't skipping this test
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Introduce a `DEFSTATICRESOURCEMGMT_COCO` flag for setting static sandbox
resource management, defaulting to true, and then wire it to the
`static_sandbox_resource_mgmt` item in the configuration.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
The teardown_common will print the description of the running pods, kill
them all and print the system's syslogs afterwards.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
When testing this branch, on several occasions the Delete
AKS cluster step has hung for multiple hours, so add a timeout
to prevent this.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
To align with the create_container_timeout definition in
runtime-go, we need to set the value in configuration.toml in seconds,
not milliseconds. We must also convert it to milliseconds when the
configuration is loaded for request_timeout_ms.
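As an illustration only, a minimal sketch of the unit conversion (the function and variable names are hypothetical, not the actual configuration-loading code):
```
// Minimal sketch (hypothetical names): the TOML value is in seconds, matching
// runtime-go's create_container_timeout, and is converted to milliseconds
// when the configuration is loaded for request_timeout_ms.
fn request_timeout_ms(create_container_timeout_secs: u64) -> u64 {
    create_container_timeout_secs.saturating_mul(1000)
}

fn main() {
    assert_eq!(request_timeout_ms(600), 600_000);
    println!("ok");
}
```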
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Create non-tee runtime class for runtime-rs qemu CoCo development
without requiring TEE hardware. Based on the qemu-runtime-rs
config, but with updated guest image, kernel and shared_fs
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
The new environment of Power runners for agent checks is causing two test case failures
w.r.t. selinux and inode, which need further understanding and are mostly an issue
due to the environment change and not to do with the agent.
Fall back to running agent checks on the original ppc64le self-hosted runners.
Signed-off-by: Amulyam24 <amulmek1@in.ibm.com>
As the arm 22.04 runner isn't working at the moment, let's test the
24.04 version to see if that is better.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
The fact that we were not explicitly setting the VMM was leading to us
testing with the default runtime class (qemu). :-/
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
By doing this, the ones interested in RISC-V support can still have
good visibility of its state, without the extra noise in our CI.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
We have had those tests broken for months. It's time to get rid of
those.
NOTE that we could easily revert this commit and re-add those tests as
soon as we find someone to maintain and be responsible for such
integration.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
As stratovirt CI was removed in #12006 we should remove the
jobs from required.
Also the docker tests have been commented out for months, and
we are considering removing them, so clean this file up.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Sometimes it's hard to enumerate all blacklisted namespaces, so let's add a
filter based only on a regular expression, to allow specifying namespaces that
should be mutated.
Signed-off-by: Lukáš Doktor <ldoktor@redhat.com>
Previously setting the Mount.type to `bind` was wrong; for local
storage, the type of the Mount should be `local`.
This commit corrects the type to "local".
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
The disable_guest_empty_dir argument order is wrong, which causes
the bool value to be incorrect and produces a wrong result.
This commit corrects the parameter order.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
This is a bump pre-release, which brings several fixes and some
improvements related to initData, and NVIDIA's remote verifier.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The test case designed to verify policy failures due to an "unexpected
capability" was misconfigured. It was using "CAP_SYS_CHROOT" as the
unexpected capability to be added.
This configuration was flawed for two main reasons:
1. Incorrect Syntax: Kubernetes Pod specs expect capability names without
the "CAP_" prefix (e.g., "SYS_CHROOT", not "CAP_SYS_CHROOT").
This made the test case's premise incorrect from a K8s API perspective.
2. Part of Default Set: "SYS_CHROOT" is already included in the
`default_caps` list for a standard container. Therefore, adding it would
not trigger a policy violation, defeating the purpose of the
"unexpected capability" test.
Furthermore, a related issue was observed where a malformed capability
like "CAP_CAP_SYS_CHROOT" was being generated, causing parsing failures
in the `oci-spec-rs` library. This was a symptom of incorrect string
manipulation when handling capabilities.
This commit corrects the test by selecting "SYS_NICE" as the unexpected
capability. "SYS_NICE" is a more suitable choice because:
- It is a valid Linux capability.
- It is relatively harmless.
- It is **not** part of the default capability set defined in
`genpolicy-settings.json`.
By using "SYS_NICE", the test now accurately simulates a scenario where
a Pod requests a legitimate but non-default capability, which the policy
(generated from a baseline Pod without this capability) should correctly
reject. This change fixes the test's logic and also resolves the
downstream `oci-spec-rs` parsing error by ensuring only valid capability
names are processed.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Detected a format mismatch in OCI Spec Capabilities fields between
`runtime-rs` (no `CAP_` prefix) and `runtime-go` (with `CAP_` prefix).
This introduces a normalization of caps in match_caps(p_caps, i_caps).
This ensures robust and consistent processing of Capabilities regardless
of whether the OCI Spec originates from `runtime-rs` or `runtime-go`.
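A minimal sketch of the normalization idea, with hypothetical helper names and simplified match semantics (the real match_caps logic in the policy code may differ):
```
// Minimal sketch (hypothetical helpers): normalize capability names so that
// "CAP_SYS_NICE" (runtime-go style) and "SYS_NICE" (runtime-rs style)
// compare equal when matching policy caps against input caps.
fn normalize_cap(cap: &str) -> String {
    cap.strip_prefix("CAP_").unwrap_or(cap).to_uppercase()
}

fn match_caps(p_caps: &[&str], i_caps: &[&str]) -> bool {
    let mut p: Vec<String> = p_caps.iter().map(|&c| normalize_cap(c)).collect();
    let mut i: Vec<String> = i_caps.iter().map(|&c| normalize_cap(c)).collect();
    p.sort();
    i.sort();
    p == i
}

fn main() {
    assert!(match_caps(&["CAP_SYS_NICE", "CAP_CHOWN"], &["CHOWN", "SYS_NICE"]));
    println!("caps match");
}
```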
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Currently, the initdata module only detects virtio-blk devices
(/dev/vd*) when searching for the initdata block device. However,
when using virtio-scsi, the devices appear as /dev/sd* in the
guest, causing the initdata detection to fail.
This commit extends the device detection logic to support both
device types:
- virtio-blk devices: /dev/vda, /dev/vdb, etc.
- virtio-scsi devices: /dev/sda, /dev/sdb, etc.
This commit aims to address the issue of the initdata device not being
found when using virtio-scsi.
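A minimal sketch of the widened device-name check (the helper name is hypothetical, not the actual initdata module code):
```
// Minimal sketch: accept both virtio-blk (/dev/vd*) and virtio-scsi
// (/dev/sd*) device names when scanning for the initdata block device.
fn is_initdata_candidate(dev_name: &str) -> bool {
    dev_name.starts_with("vd") || dev_name.starts_with("sd")
}

fn main() {
    for name in ["vda", "vdb", "sda", "nvme0n1"] {
        println!("{name}: candidate = {}", is_initdata_candidate(name));
    }
}
```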
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Thankfully there's only one piece that's still SNP specific (for the
supported TEEs). Let's adjust it so we can have an easy and smooth
execution when adding a TDX CI machine.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
There are several changes needed in order to get this test working with
CC, and yet we still are skipping it.
Basically, we need to:
* Pull an authenticated image inside the guest, which requires:
* Using Trustee to release the credential
* We still depend on a PR to be merged on Trustee side
* https://github.com/confidential-containers/trustee/pull/1035
* We still depend on a Trustee bump (including the PR above) on our
side
Apart from those changes, I ended up "duplicating" the tests by adding a
"-tee" version of those, which already have:
* The proper kbs annotations set up
* Dropped host mounts
* Increased the memory needed
Last but not least, as "bats" probably means "being a terrible script",
I had to re-arrange a few things otherwise the tests would not even run
due to bats-isms that I am sincerely not able to pin-point.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
We added the tests using virtio-9p as we knew it'd require incremental
changes to be able to use any kind of guest-pull method.
Now, as in the coming commits we'll be actually ensuring that guest-pull
works and is in use, we can enforce the experimental_force_guest_pull
usage for the nvidia cases.
Note: We're using experimental_force_guest_pull instead of
nydus-snapshotter due to stability concerns with the snapshotter.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
It was just missed when adding those configurations.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
It takes either a shim name or "", but we were treating this (thankfully
only in this specific file) as a boolean.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Adjust output to the setup_file and teardown_file behavior.
With this, we will be able to observe relevant logging rather than
adding to the output variable.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
The previous commit enabled getting the physical address reduction from
the processor but just stored it for later use. This commit adds handling
of the value to ProtectionDevice and enables the QEMU driver to use it.
Signed-off-by: Pavel Mores <pmores@redhat.com>
An implementation of cbitpos acquisition is supplied that was missing
so far. We also get the physical address reduction value from the same
source (CPUID Fn8000_001f function). This has been hardcoded at 1 so far,
following the Go runtime example, but it's better to get it from the
processor.
Signed-off-by: Pavel Mores <pmores@redhat.com>
- version.rs gets generated from version.rs.in
- version.rs.in contains values read from VERSION
- so version.rs (and maybe other Agent files too) must be
re-generated when the VERSION file changes
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
The new image reference has changed to mcr.microsoft.com/oss/v2/kubernetes/pause:3.6
from mcr.microsoft.com/oss/kubernetes/pause:3.6.
The new image uses UID=0, GID=0 by default, while the older image had:
UID=65535, GID=65535.
There is a new pause_container_id_policy field in genpolicy-settings.json, informing
genpolicy about the way AdditionalGids gets updated - "v1" for the older behavior
and "v2" for the newer AKS version:
- When using v1, the default value of AdditionalGids is {65535}.
- When using v2, the default value of AdditionalGids is {}.
UID=65535 and GID=65535 are still hard-coded by default in genpolicy-settings.json.
We might be able to remove/ignore these fields in the future, if we'll stop relying
on policy::KataSpec::get_process_fields to use these fields.
A new CI function adapt_common_policy_settings_for_aks() changes the pause container
UID, GID, pause_container_id_policy, and image ref settings values when testing on
AKS Hosts - i.e., when testing coco-dev or mariner Hosts.
The genpolicy workarounds for the unexpected behavior with guest pull enabled have
been improved to use the current container's GID instead of hard-coding GID=0 as the
guest pull default. Also, AdditionalGids gets updated when the current container's GID is
changing, instead of always changing the AdditionalGids at the very end of
policy::AgentPolicy::get_container_process(), when the relevant evolution of the GID
value was no longer available.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Make it easier to understand the source of the UID/GID/AdditionalGids
values from the container in the auto-generated policy.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Parallelize busybox builds to build a bit faster and create the
build directory prior to Docker execution, which on my
environment, helps with permission issues when building busybox
without the kata-containers/build directory existing beforehand.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Correct the hardcoded value of disable_guest_empty_dir; instead,
we use the real value of it, which comes from the configuration.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
A sandbox annotation that determines if it should create Kubernetes
emptyDir mounts on the guest filesystem.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
It determines whether the runtime should create Kubernetes emptyDir mounts
on the guest filesystem. If enabled, the runtime will not create Kubernetes
emptyDir mounts on the guest filesystem. Instead, emptyDir mounts will
be created on the host and shared via virtio-fs.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Let's ensure Trustee is deployed as some of the tests rely on images that
live behind authentication. /o\
The approach taken here to deploy Trustee is exactly the same one taken
on the other CoCo tests, apart from an env var passed to ensure we're
using the NVIDIA remote verifier (which will be in handy very very
soon).
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
This isn't really related to remote hypervisor though it was useful for
its debugging. It's a small helper I've been using regularly during
development for quite some time that I think might be useful more broadly.
Signed-off-by: Pavel Mores <pmores@redhat.com>
The remote hypervisor launches no VM, it just instructs the Cloud API
Adaptor to do so, therefore it has no need for an image or initrd to boot
from and should be exempt from the mandate for one or the other to be
specified.
Signed-off-by: Pavel Mores <pmores@redhat.com>
The go runtime's .proto file - which is also used by the Cloud API
Adaptor - puts the Hypervisor service into the "hypervisor" package.
runtime-rs has to do the same to avoid an "unimplemented" error.
Signed-off-by: Pavel Mores <pmores@redhat.com>
With the change made to the matrix when the CC GPU runner was added,
there was a change in the job name (@sprt saw that coming, but I
didn't).
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Same deal as the previous commit, just enabling the tests here, with the
same list of improvements that we will need to go through in order to
get it working in a perfect way.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
While the primary goal of this change is to detect regressions to the
NVIDIA SNP GPU scenario, various improvements to reflect a more
realistic CC setting are planned in subsequent changes, such as:
* moving away from the overlayfs snapshotter
* disabling filesystem sharing
* applying a pod security policy
* activating the GPUs only after attestation
* using a refined approach for GPU cold-plugging without requiring
annotations
* revisiting pod timeout and overhead parameters (the podOverhead value
was increased due to CUDA vectorAdd requiring about 6Gi of
podOverhead, as well as the inference and embedqa requiring at least
12Gi, respectively, 14Gi of podOverhead to run without invoking the
host's oom-killer. We will revisit this aspect after addressing
points 1. and 2.)
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
For nvidia-gpu-snp and nvidia-gpu-tdx we must set containerd to
allow the CDI annotation to be passed down.
This solution may become obsolete soon enough, but the cleanest way to
have it properly working is by adding it here (even if we remove it
before the next release).
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
It's been noticed that as more RAM is needed to run the CC tests, we
also need to update the podOverhead of the NVIDIA CC runtime classes to
avoid getting OOM Killed.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Since there's something wrong with the cpu hotplug
on qemu-runtime-rs, disable this test temporarily.
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
It should do nothing instead of returning an error when
hot-unplugging memory to a size smaller than the statically
plugged memory size.
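A minimal sketch of the guard described above, with hypothetical field names and units (not the actual hypervisor resize logic):
```
// Minimal sketch (hypothetical fields): treat a request to shrink below the
// statically plugged memory size as a no-op instead of an error.
fn resize_memory(requested_mb: u64, static_mb: u64, hotplugged_mb: &mut u64) {
    if requested_mb <= static_mb {
        // Nothing left to unplug: everything above `static_mb` is already gone.
        *hotplugged_mb = 0;
        return;
    }
    *hotplugged_mb = requested_mb - static_mb;
}

fn main() {
    let mut hotplugged = 512;
    resize_memory(1024, 2048, &mut hotplugged); // below static size: no error, no-op
    assert_eq!(hotplugged, 0);
    println!("hotplugged: {hotplugged} MiB");
}
```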
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
Since nerdctl's network hook calls the pselect6 syscall
via xtables-nft-multi, we'd better add it to the seccomp
whitelist.
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
Let's add a new NVIDIA machine, which later on will be used for CC
related tests.
For now the current tests are skipped in the CC capable machine.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Let's now make sure that we don't add duplicated values to any of our
entries, making the script as sane as possible for sequential runs.
Vibed with Cursor's help!
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Let's add some helper functions, not yet used, to avoid adding
duplicated items.
This idea is an expansion of Choi's idea to avoid setting duplicated
items, and it'll help on making the whole script idempotent on
sequential runs.
Vibed with Cursor's help!
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
I know, this is not simplifying things much for now, but it has a good
intent in the background and will serve as base for making the
kata-deploy helm chart more user friendly.
With that said, let's add ALLOWED_HYPERVISOR_ANNOTATIONS per arch, while
adding support to set something like "qemu:foo,bar clh:bar foobar
barfoo". Why? Because in the future we'll have a better way to set this
per shim (and the shim is per arch ...).
More details of what we'll do in the future are being discussed here:
https://github.com/kata-containers/kata-containers/issues/12024
Anyways, the variables are **DELIBERATELY** not exposed to the chart for
now, as those will be later on when addressing the issue mentioned
above.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
When the runtimeClasses were added, as part of 7cfa826804, the
firecracker runtimeClass ended up missing from the dictionary.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The Firecracker installation docs had an outdated containerd configuration for the devmapper plugin.
This commit updates the instructions so that they are compatible with more recent versions of containerd.
Signed-off-by: Anton Ippolitov <anton.ippolitov@datadoghq.com>
When added, I've mistakenly used the wrong test-type name, which is now
fixed and should be enough to trigger the tests correctly.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
On IBM actionspz P/Z runners, the following error was observed during
runtime tests:
```
host system doesn't support vsock: stat /dev/vhost-vsock: no such file or directory
```
Since loading the vsock module on the fly is not permitted, this commit
moves the runtime tests back to self-hosted runners for P/Z.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
On IBM actionspz Z runners, the following error occurs when running
`modprobe`:
```
modprobe: FATAL: Module bridge not found in directory /lib/modules/6.8.0-85-generic
```
Additionally, there are no files under `/lib/modules`, for example:
```
total 0
drwxr-xr-x 1 root root 0 Aug 5 13:09 .
drwxr-xr-x 1 root root 2.0K Oct 1 22:59 ..
```
This commit skips the `test_load_kernel_module` test if the module is
not found or if running `modprobe` is not permitted.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
On IBM actionspz Z runners, write operations on network interfaces
are not allowed, even for the root user.
This commit skips the `add_update_addresses` test if the operation
fails with EACCES (-13, permission denied).
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
On IBM actionspz Z runners, the ioctl system call is not allowed even
for the root user. There is likely an additional security mechanism
(such as AppArmor or seccomp) in place on Ubuntu runners.
This commit introduces a new helper, `is_permission_error()`,
which skips the test if ioctl operations in `reseed_rng()` are not
permitted.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
The IBM actionspz Z runners mount /dev as tmpfs, while other systems
use devtmpfs. This difference causes an assertion failure for
test_already_baremounted.
This commit sets the detected filesystem for bare-mounted points
as the expected value.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
The root filesystem for IBM actionspz Z runners is `btrfs` instead of `ext4`.
The error message differs when an unprivileged user tries to perform a bind mount.
This commit adjusts the handling of error messages based on the detected root
filesystem type.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
Since the qemu & cloud-hypervisor support the cpu & memory
hotplug now, thus disable the static resource management
for qemu and cloud-hypervisor by default.
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
Since qemu-coco-dev-runtime-rs and qemu-coco-dev have disabled
cpu & memory hotplug by enabling static_sandbox_resource_mgmt,
we should disable the cpu hotplug test for those two runtimes.
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
Since qemu, cloud-hypervisor and dragonball now support
cpu hotplug on runtime-rs, enable the cpu hotplug test in CI.
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
This commit introduces the configuration flag `disable_guest_empty_dir`
to control the placement of Kubernetes emptyDir volumes.
By default, the value is set to `false`, maintaining the current
behavior of creating emptyDirs within the guest VM.
When set to `true`, emptyDirs will be created on the host filesystem.
This is essential for scenarios where users need to share data between
the host and the guest VM via an emptyDir.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
When handling a memory-based emptyDir, the runtime creates a tmpfs
mount inside the guest VM. The previous implementation only supported
the "rbind" mount option, which does not explicitly guarantee
the desired mount propagation behavior.
This commit hardens the mounting process by explicitly adding the
`rprivate` and `rw` mount flags.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
This commit introduces the 'local' volume, which is specifically
designed to create and manage Kubernetes emptyDir volumes directly
within the VM's sandbox directory.
The core functionality ensures that local volumes are handled
correctly in the volume-handling procedure.
This capability is essential for allowing containers to leverage the
storage backend for shared volumes.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
This commit implements the new 'local' storage type, enabling Kubernetes
emptyDir volumes to be created and managed directly inside the Kata VM
(in the sandbox directory).
The 'local' type instructs the kata-agent to provision the empty
directory within the VM.
This approach allows containers to share storage inside the VM, which
is especially useful in CoCo emptyDir scenarios.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Separated the checks for tmpfs and disk-based emptyDirs from an
`if-else if` block into two distinct `if` statements. This clarifies
the logic by treating each volume type detection as an independent task.
Additionally, updated the type for disk-based emptyDirs to the more
semantically accurate `KATA_K8S_LOCAL_STORAGE_TYPE`. This allows for
more specific handling downstream, distinguishing them from generic
host path mounts.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
In fact, with the previous logic the emptyDir is usually not found in
the proc mounts, so the previous implementation failed.
This change follows the related implementation within runtime-go.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
This introduces a new storage type: local. The local storage type tells
kata-agent to create an empty directory with the LocalStorage handler
in the sandbox directory within the VM.
It also aligns with runtime-go's `KataLocalDevType = "local"`.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Pod annotations from the outer runtime are being used for cold-plugging
CDI devices. We need to ensure that these annotations don't leak into
the inner runtime for which specific container (sibling) annotations
are being created. Without this change, the inner runtime receives both
annotations, leading to failing CDI injection as an outer runtime
annotation observed in the guest translates to an unresolvable CDI
device, for example, cdi.k8s.io/gpu: "nvidia.com/pgpu=0".
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Stratovirt has been failing for a considerable amount of time, with no
sign of anyone watching it or actively working on a fix.
With this we also stop building and shipping stratovirt as part of our
release as we cannot test it.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
A few weeks ago we've tested nydus-snapshotter with this approach, and
we DID find issues with it.
Now, let's also test this with `experimental_force_guest_pull`.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
It's just a follow-up on the previous commit where we move away from the
runtimeClass creation inside the script, and instead we do it using the
chart itself.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
This reverts commit be05e1370c, which is
not a problem as we never released such option.
Conflicts:
tools/packaging/kata-deploy/helm-chart/README.md
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
We had this logic inside the script when we didn't use the helm chart.
However, this only makes the shim script more convoluted for no reason.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
In order to fix:
```
=== Running govulncheck on containerd-shim-kata-v2 ===
Vulnerabilities found in containerd-shim-kata-v2:
=== Symbol Results ===
Vulnerability #1: GO-2025-4015
Excessive CPU consumption in Reader.ReadResponse in net/textproto
More info: https://pkg.go.dev/vuln/GO-2025-4015
Standard library
Found in: net/textproto@go1.24.6
Fixed in: net/textproto@go1.24.8
Vulnerable symbols found:
#1: textproto.Reader.ReadResponse
Vulnerability #2: GO-2025-4014
Unbounded allocation when parsing GNU sparse map in archive/tar
More info: https://pkg.go.dev/vuln/GO-2025-4014
Standard library
Found in: archive/tar@go1.24.6
Fixed in: archive/tar@go1.24.8
Vulnerable symbols found:
#1: tar.Reader.Next
Vulnerability #3: GO-2025-4013
Panic when validating certificates with DSA public keys in crypto/x509
More info: https://pkg.go.dev/vuln/GO-2025-4013
Standard library
Found in: crypto/x509@go1.24.6
Fixed in: crypto/x509@go1.24.8
Vulnerable symbols found:
#1: x509.Certificate.Verify
#2: x509.Certificate.Verify
Vulnerability #4: GO-2025-4012
Lack of limit when parsing cookies can cause memory exhaustion in net/http
More info: https://pkg.go.dev/vuln/GO-2025-4012
Standard library
Found in: net/http@go1.24.6
Fixed in: net/http@go1.24.8
Vulnerable symbols found:
#1: http.Client.Do
#2: http.Client.Get
#3: http.Client.Head
#4: http.Client.Post
#5: http.Client.PostForm
Use '-show traces' to see the other 9 found symbols
Vulnerability #5: GO-2025-4011
Parsing DER payload can cause memory exhaustion in encoding/asn1
More info: https://pkg.go.dev/vuln/GO-2025-4011
Standard library
Found in: encoding/asn1@go1.24.6
Fixed in: encoding/asn1@go1.24.8
Vulnerable symbols found:
#1: asn1.Unmarshal
#2: asn1.UnmarshalWithParams
Vulnerability #6: GO-2025-4010
Insufficient validation of bracketed IPv6 hostnames in net/url
More info: https://pkg.go.dev/vuln/GO-2025-4010
Standard library
Found in: net/url@go1.24.6
Fixed in: net/url@go1.24.8
Vulnerable symbols found:
#1: url.JoinPath
#2: url.Parse
#3: url.ParseRequestURI
#4: url.URL.Parse
#5: url.URL.UnmarshalBinary
Vulnerability #7: GO-2025-4009
Quadratic complexity when parsing some invalid inputs in encoding/pem
More info: https://pkg.go.dev/vuln/GO-2025-4009
Standard library
Found in: encoding/pem@go1.24.6
Fixed in: encoding/pem@go1.24.8
Vulnerable symbols found:
#1: pem.Decode
Vulnerability #8: GO-2025-4008
ALPN negotiation error contains attacker controlled information in
crypto/tls
More info: https://pkg.go.dev/vuln/GO-2025-4008
Standard library
Found in: crypto/tls@go1.24.6
Fixed in: crypto/tls@go1.24.8
Vulnerable symbols found:
#1: tls.Conn.Handshake
#2: tls.Conn.HandshakeContext
#3: tls.Conn.Read
#4: tls.Conn.Write
#5: tls.Dial
Use '-show traces' to see the other 4 found symbols
Vulnerability #9: GO-2025-4007
Quadratic complexity when checking name constraints in crypto/x509
More info: https://pkg.go.dev/vuln/GO-2025-4007
Standard library
Found in: crypto/x509@go1.24.6
Fixed in: crypto/x509@go1.24.9
Vulnerable symbols found:
#1: x509.CertPool.AppendCertsFromPEM
#2: x509.Certificate.CheckCRLSignature
#3: x509.Certificate.CheckSignature
#4: x509.Certificate.CheckSignatureFrom
#5: x509.Certificate.CreateCRL
Use '-show traces' to see the other 27 found symbols
Vulnerability #10: GO-2025-4006
Excessive CPU consumption in ParseAddress in net/mail
More info: https://pkg.go.dev/vuln/GO-2025-4006
Standard library
Found in: net/mail@go1.24.6
Fixed in: net/mail@go1.24.8
Vulnerable symbols found:
#1: mail.AddressParser.Parse
#2: mail.AddressParser.ParseList
#3: mail.Header.AddressList
#4: mail.ParseAddress
#5: mail.ParseAddressList
```
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
During the ${wait_time} for an expected condition, if
CreateContainerRequest was NOT expected to fail: detect possible
CreateContainerRequest failures early and abort the wait.
For example, before this change:
not ok 1 Successful replication controller with auto-generated policy in 123335ms
ok 2 Policy failure: unexpected container command in 14601ms
ok 3 Policy failure: unexpected volume mountPath in 14443ms
ok 4 Policy failure: unexpected host device mapping in 14515ms
ok 5 Policy failure: unexpected securityContext.allowPrivilegeEscalation in 14485ms
ok 6 Policy failure: unexpected capability in 14382ms
ok 7 Policy failure: unexpected UID = 1000 in 14578ms
After this change:
not ok 1 Successful replication controller with auto-generated policy in 17108ms
ok 2 Policy failure: unexpected container command in 14427ms
ok 3 Policy failure: unexpected volume mountPath in 14636ms
ok 4 Policy failure: unexpected host device mapping in 14493ms
ok 5 Policy failure: unexpected securityContext.allowPrivilegeEscalation in 14554ms
ok 6 Policy failure: unexpected capability in 15087ms
ok 7 Policy failure: unexpected UID = 1000 in 14371ms
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
During the ${wait_time} for an expected condition, if
CreateContainerRequest was NOT expected to fail: detect possible
CreateContainerRequest failures early and abort the wait.
For example, before this change:
not ok 1 Successful pod with auto-generated policy in 94852ms
ok 2 Policy failure: unexpected device mount in 17807ms
After this change:
not ok 1 Successful pod with auto-generated policy in 35194ms
ok 2 Policy failure: unexpected device mount in 21355ms
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
During the ${wait_time} for an expected condition, if
CreateContainerRequest was NOT expected to fail: detect possible
CreateContainerRequest failures early and abort the wait.
For example, before this change:
not ok 1 Logs empty when ReadStreamRequest is blocked in 102257ms
After this change:
not ok 1 Logs empty when ReadStreamRequest is blocked in 17339ms
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
During the ${wait_time} for an expected condition, if
CreateContainerRequest was NOT expected to fail: detect possible
CreateContainerRequest failures early and abort the wait.
For example, before this change:
not ok 1 Successful job with auto-generated policy in 107111ms
ok 2 Policy failure: unexpected environment variable in 7920ms
ok 3 Policy failure: unexpected command line argument in 7874ms
ok 4 Policy failure: unexpected emptyDir volume in 7823ms
ok 5 Policy failure: unexpected projected volume in 7812ms
ok 6 Policy failure: unexpected readOnlyRootFilesystem in 7903ms
ok 7 Policy failure: unexpected UID = 222 in 7720ms
After this change:
not ok 1 Successful job with auto-generated policy in 10271ms
ok 2 Policy failure: unexpected environment variable in 8018ms
ok 3 Policy failure: unexpected command line argument in 7886ms
ok 4 Policy failure: unexpected emptyDir volume in 7621ms
ok 5 Policy failure: unexpected projected volume in 7843ms
ok 6 Policy failure: unexpected readOnlyRootFilesystem in 7632ms
ok 7 Policy failure: unexpected UID = 222 in 7619ms
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
During the ${wait_time} for an expected condition, if
CreateContainerRequest was NOT expected to fail: detect possible
CreateContainerRequest failures early and abort the wait.
For example, before this change:
ok 1 Successful sc deployment with auto-generated policy and container image volumes in 14769ms
ok 2 Successful sc with fsGroup/supplementalGroup deployment with auto-generated policy and container image volumes in 8384ms
not ok 3 Successful sc deployment with security context choosing another valid user in 136149ms
ok 4 Successful layered sc deployment with auto-generated policy and container image volumes in 8862ms
ok 5 Policy failure: unexpected GID = 0 for layered securityContext deployment in 7941ms
ok 6 Policy failure: malicious root group added via supplementalGroups deployment in 11612ms
After:
ok 1 Successful sc deployment with auto-generated policy and container image volumes in 15230ms
ok 2 Successful sc with fsGroup/supplementalGroup deployment with auto-generated policy and container image volumes in 9364ms
not ok 3 Successful sc deployment with security context choosing another valid user in 11060ms
ok 4 Successful layered sc deployment with auto-generated policy and container image volumes in 9124ms
ok 5 Policy failure: unexpected GID = 0 for layered securityContext deployment in 7919ms
ok 6 Policy failure: malicious root group added via supplementalGroups deployment in 11666ms
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
During the ${wait_time} for an expected condition, if
CreateContainerRequest was NOT expected to fail: detect possible
CreateContainerRequest failures early and abort the wait.
For example, before this change:
not ok 1 Successful pod with auto-generated policy in 110801ms
not ok 2 Able to read env variables sourced from configmap using envFrom in 94104ms
not ok 3 Successful pod with auto-generated policy and runtimeClassName filter in 95838ms
not ok 4 Successful pod with auto-generated policy and custom layers cache path in 110712ms
ok 5 Policy failure: unexpected container image in 8113ms
ok 6 Policy failure: unexpected privileged security context in 7943ms
ok 7 Policy failure: unexpected terminationMessagePath in 11530ms
ok 8 Policy failure: unexpected hostPath volume mount in 7970ms
ok 9 Policy failure: unexpected config map in 7933ms
not ok 10 Policy failure: unexpected lifecycle.postStart.exec.command in 112677ms
ok 11 RuntimeClassName filter: no policy in 2302ms
not ok 12 ExecProcessRequest tests in 93946ms
not ok 13 Successful pod: runAsUser having the same value as the UID from the container image in 94003ms
ok 14 Policy failure: unexpected UID = 0 in 8016ms
ok 15 Policy failure: unexpected UID = 1234 in 7850ms
After:
not ok 1 Successful pod with auto-generated policy in 12182ms
not ok 2 Able to read env variables sourced from configmap using envFrom in 10121ms
not ok 3 Successful pod with auto-generated policy and runtimeClassName filter in 11738ms
not ok 4 Successful pod with auto-generated policy and custom layers cache path in 26592ms
ok 5 Policy failure: unexpected container image in 7742ms
ok 6 Policy failure: unexpected privileged security context in 7949ms
ok 7 Policy failure: unexpected terminationMessagePath in 7789ms
ok 8 Policy failure: unexpected hostPath volume mount in 7887ms
ok 9 Policy failure: unexpected config map in 7818ms
not ok 10 Policy failure: unexpected lifecycle.postStart.exec.command in 9120ms
ok 11 RuntimeClassName filter: no policy in 2081ms
not ok 12 ExecProcessRequest tests in 9883ms
not ok 13 Successful pod: runAsUser having the same value as the UID from the container image in 9870ms
ok 14 Policy failure: unexpected UID = 0 in 11161ms
ok 15 Policy failure: unexpected UID = 1234 in 7814ms
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
We've seen a few cases where we fail the test due to timeout and when we
print the pods we just see that they've been created.
With that in mind, let's just increase the timeout a little bit.
Example:
```
not ok 1 Parallel jobs in 6250ms
(in test file k8s-parallel.bats, line 41)
`kubectl wait --for=condition=Ready --timeout=$timeout pod -l jobgroup=${job_name}' failed
No resources found in kata-containers-k8s-tests namespace.
[bats-exec-test:71] INFO: k8s configured to use runtimeclass
job.batch/process-item-test1 created
job.batch/process-item-test2 created
job.batch/process-item-test3 created
NAME STATUS COMPLETIONS DURATION AGE
process-item-test1 Running 0/1 0s
process-item-test2 Running 0/1 0s
process-item-test3 Running 0/1 0s
error: no matching resources found
No resources found in kata-containers-k8s-tests namespace.
No resources found in kata-containers-k8s-tests namespace.
DEBUG: system logs of node 'aks-nodepool1-25989463-vmss000000' since test start time (2025-11-01 16:39:03)
-- No entries --
job.batch "process-item-test1" deleted
job.batch "process-item-test2" deleted
job.batch "process-item-test3" deleted
```
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Otherwise we'll face issues like:
```
Error: found in Chart.yaml, but missing in charts/ directory: node-feature-discovery
```
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
We're failing on the uninstall, which seems related to a bug in NFD
itself, but I don't have access to an s390x machine to debug it.
Let's skip the enablement for now and re-enable it once we've
experimented with it further on s390x.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
As we're failing to install NFD on CBL Mariner, let's skip the
enablement there and re-enable it once we've experimented with it
further on that platform.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
As we have the ability to deploy NFD as a sub-chart of our chart, let's
make sure we test it during our CI.
We had to increase the timeout values, where we had timeouts set, to
deploy / undeploy kata, as now NFD is also deployed / undeployed.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Let's ensure that we add NFD as a weak dependency of the kata-deploy
helm chart.
What we're doing for now is leaving it up to the user / admin to enable
it, and if enabled we do an explicit check for virtualization support
(x86_64 only for now).
In case NFD is already deployed, we fail the installation (in case it's
enabled on the kata-deploy helm chart) with a clear error message to the
user.
While I know that kata-remote **DOES NOT** require virtualization, I've
left this out (with a comment for when we add a peer-pods dependency on
kata-deploy) in order to simplify things for now, as kata-remote is not
a deployed shim by default.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
As Kata Containers can be consumed by other helm-charts, hard coding the
default runtime class name to `kata` is not optimal.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
All the options that take a specific shim as an argument MUST have
specific per arch settings, as not all the shims are available for all
the arches, leading to issues when setting up multi-arch deployments.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Let's ensure that we consume NVRC releases straight from GitHub instead
of building the binaries ourselves.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
We have here either /dev/vfio/<num> or, for the IOMMUFD format,
/dev/vfio/devices/vfio<num>; in the latter case, strip the "vfio" prefix:
/dev/vfio/123 - basename "123" - vfioNum = "123" - cdi.k8s.io/vfio123
/dev/vfio/devices/vfio123 - basename "vfio123" - strip - vfioNum = "123" - cdi.k8s.io/vfio123
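A hedged sketch of that mapping (illustrative only; the function name is an assumption, not the actual implementation):
```rust
/// Illustrative: derive the CDI annotation key from a VFIO device path.
/// /dev/vfio/123              -> "cdi.k8s.io/vfio123"
/// /dev/vfio/devices/vfio123  -> "cdi.k8s.io/vfio123"
fn vfio_annotation_key(dev_path: &str) -> String {
    // Take the basename, then drop a leading "vfio" if present (IOMMUFD format).
    let base = dev_path.rsplit('/').next().unwrap_or(dev_path);
    let vfio_num = base.strip_prefix("vfio").unwrap_or(base);
    format!("cdi.k8s.io/vfio{vfio_num}")
}
```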
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
This allows us to test privileged containers when using the webhook.
We can do this because kata-deploy sets privileged_without_host_devices = true for kata runtime by default.
Signed-off-by: Saul Paredes <saulparedes@microsoft.com>
We have 2 tests running on GitHub provided runners:
* devmapper
* CRI-O
- devmapper situation
For devmapper, we're currently testing devmapper with s390x as part of
one of its jobs.
More than that, this test has been failing here due to a lack of space
in the machine for quite some time, and no-action was taken to bring it
back either via GARM or some other way.
With that said, let's rely on the s390x CI to test devmapper and avoid
one extra failure on our CI by removing this one.
- cri-o situation
CRI-O is being tested with a fixed version of kubernetes that's already
reached its EOL, and a CRI-O version that matches that k8s version.
There have been attempts to raise issues, and also to provide a PR that
does at least part of the work ... leaving the debugging part for the
maintainers of the CI. However, there was no action on those from the
maintainers.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
- Explained the concept and benefits of VM templating
- Provided step-by-step instructions for enabling VM templating
- Detailed the setup for using snapshotter in place of VirtioFS for template-based VM creation
- Added performance test results comparing template-based and direct VM creation
Signed-off-by: ssc <741026400@qq.com>
- init: initialize the VM template factory
- status: check the current factory status
- destroy: clean up and remove factory resources
These commands provide basic lifecycle management for VM templates.
Signed-off-by: ssc <741026400@qq.com>
Use `ioctl_with_mut_ref` instead of `ioctl_with_ref` in the
`create_device` method as it needs to write to the `kvm_create_device`
struct passed to it, which was released in v0.12.1.
Signed-off-by: Siyu Tao <taosiyu2024@163.com>
Fix the cargo fmt issues and then we can make the libs tests required
again to avoid this regression happening again.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
The kata-deploy helm chart is *THE* way to deploy kata-containers on
kubernetes environments, and kubernetes environments are basically the
only reliably tested deployment we have.
For now, let's just drop documentation that is outdated / incorrect, and
in the future let's ensure we update the linked docs, as we work on
update / upgrade for the helm chart.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The libs in question were added when moving to developer.nvidia.com,
but after switching back to Ubuntu-only based builds they are no longer needed.
Remove them to keep the rootfs as minimal as possible.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
In the case of CC we need additional libraries in the rootfs.
Add them conditionally if type == confidential.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Add build_vm_from_template() that flips boot_from_template flag,
wires factory.template_path/{memory,state} into the hypervisor config,
and returns ready-to-use hypervisor & agent instances.
When factory.template is enabled, VirtContainer bypasses normal creation
and directly boots the VM by restoring the template through incoming migration,
completing the "create → save → clone" loop.
Fixes: #11413
Signed-off-by: ssc <741026400@qq.com>
Introduced factory::FactoryConfig with init/destroy/status commands to manage template pools.
Added template::Template to fetch, create and persist base VMs.
Introduced vm::{VM, VMConfig} exposing create, pause, save, resume, stop,
disconnect and migration helpers for sandbox integration.
Extended QemuInner to execute QMP incoming migration, pause/resume and status tracking.
Fixes: #11413
Signed-off-by: ssc <741026400@qq.com>
Added new fields in Hypervisor struct to support VM template creation,
template boot, memory and device state paths, shared path, and store
paths. Introduced a Factory struct in config to manage template path,
cache endpoint, cache number, and template enable flag. Integrated
Factory into TomlConfig for runtime configuration parsing.
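A rough sketch of what such a config section could look like when parsed from TOML; the field names here are assumptions based on the description above, not the actual definitions:
```rust
// Illustrative only: a factory section as it might appear inside TomlConfig.
#[derive(Clone, Debug, Default, serde::Deserialize)]
pub struct Factory {
    /// Where the template VM's memory and device state are persisted.
    pub template_path: String,
    /// Endpoint of the VM cache service, if any.
    pub vm_cache_endpoint: String,
    /// Number of cached VMs to keep ready.
    pub vm_cache_number: u32,
    /// Whether template-based VM creation is enabled.
    pub enable_template: bool,
}
```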
Fixes: #11413
Signed-off-by: ssc <741026400@qq.com>
This change enables running the Cloud Hypervisor VMM as a non-root user
when the rootless flag is set to true in the configuration.
Fixes: #11414
Signed-off-by: stevenfryto <sunzitai_1832@bupt.edu.cn>
Pass the file descriptors of the tuntap device to the Cloud Hypervisor VMM process
so that the process can open the device without cap_net_admin.
Signed-off-by: stevenfryto <sunzitai_1832@bupt.edu.cn>
There's no reason to keep the env var / input as it's never been used
and now kata-deploy detects automatically whether NFD is deployed or
not.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
When the NodeFeatureRule CRD is detected kata-deploy will:
* Create the specific NodeFeatureRules for the x86_64 TEEs
* Adapt the TEE runtime classes to take into account the number of keys
available in the system when spawning the pod sandbox.
Note, we still do not have NFD as a sub-dependency of the helm chart, and
I'm not even sure we ever will. However, it's important to integrate
better with the scenarios where NFD is already present.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Change NIM bats file logic to allow skipping test cases which
require multiple GPUs. This can be helpful for test clusters where
there is only one node with a single GPU, or for local test
environments with a single-node cluster with a single GPU.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Temporarily disable the new runners for the build-artifacts jobs. They will be re-enabled once they are stable.
Signed-off-by: Amulyam24 <amulmek1@in.ibm.com>
This partially reverts 8dcd91c for the s390x because the
CI jobs are currently blocking the release. The new runners
will be re-introduced once they are stable and no longer
impact critical paths.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
This allows us to do a full multi-arch deployment, as the user can
easily select which shim can be deployed per arch, as some of the VMMs
are not supported on all architectures, which would lead to a broken
installation.
Now, passing shims per arch, we can easily have a heterogeneous
deployment where, for instance, we can set qemu-se-runtime-rs for s390x,
qemu-cca for aarch64, and qemu-snp / qemu-tdx for x86_64 and call all of
those a default kata-confidential ... and have everything working with
the same deployment.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The previous procedure failed to reliably ensure that all unused Device
Cgroups were completely removed, a failure consistently verified by CI
tests.
This change introduces a more robust and thorough cleanup mechanism. The
goal is to prevent previous issues—likely stemming from improper use of
Rust mutable references—that caused the modifications to be ineffective
or incomplete.
This ensures a clean environment and reliable CI test execution.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Build only from Ubuntu repositories; do not mix with developer.nvidia.com.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Update tools/osbuilder/rootfs-builder/nvidia/nvidia_chroot.sh
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Migrate the k8s job to a different runner and use a long running cluster
instead of creating the cluster on every run.
Signed-off-by: Amulyam24 <amulmek1@in.ibm.com>
This will immensely help projects consuming the kata-deploy helm chart
to use configuration options added during the development cycle that are
waiting for a release to be out ... allowing very early tests of the
stack.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
By default, `kubectl exec` inherits some capabilities from the
container, which could pose a security risk in a confidential
environment.
This change modifies the agent policy to strictly enforce that any
process started via `ExecProcessRequest` has no Linux capabilities.
This prevents potential privilege escalation within an exec session,
adhering to the principle of least privilege.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
In CoCo cases, the ApparmorProfile setting within runtime-go is set to None,
so we should align with runtime-go.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Temporarily disable the auto-generated Agent Policy on Mariner hosts,
to work around the new test failures on these hosts.
When re-enabling auto-generated policy in the future, that would be
better achieved with a tests/integration/kubernetes/gha-run.sh change.
Those changes are easier to test compared with GHA YAML changes.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
If a ConfigMap has more than 8 files it will not be mounted watchable
[1]. However, genpolicy assumes that ConfigMaps are always mounted at a
watchable path, so containers with large ConfigMap mounts fail
verification.
This commit allows mounting ConfigMaps from watchable and non-watchable
directories. ConfigMap mounts can't be meaningfully verified anyway, so
the exact location of the data does not matter, except that we stay in
the sandbox data dirs.
[1]: 0ce3f5fc6f/docs/design/inotify.md (L11-L21)
Fixes: #11777
Signed-off-by: Markus Rudy <mr@edgeless.systems>
Every now and then, in case a failure happens, helm leaves the secret
behind without cleaning it up, leading to issues in the consecutive
runs.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Aurélien has moved to a reliable mirror for our tests, but we missed
that our tools Dockerfiles could benefit from the same change, which is
added now.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Although we saw this happening, we expected it to NOT happen ...
As the kernel is not signed, but we expect it to be (the cached
version), we're bailing out. :-/
Let's ensure a full rebuild of the kernels happens and we'll be good
from that point onwards.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Add TDX QGS quote-generation-socket TDX QEMU object params for
attestation to work in NVGPU+TDX environment.
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
By doing this we can ensure that more than one instance of
nydus-snapshotter can be running inside the cluster, which is super
useful for doing A-B "upgrades" (where we install a new version of
kata-containers + nydus on B, while A is still running, and then only
uninstall A after making sure that B is working as expected).
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
We've been wrongly trying to set up the `${shim}` (such as qemu-snp, for
instance) as the hypervisor name in the kata-containers configuration
file, leading to `tomlq` breaking, as all the .hypervisors.qemu* shims
are tied to the `qemu` hypervisor. This happens regardless of the shim
having a different name, or the hypervisor being experimental or not.
```sh
$ grep "hypervisor.qemu*" src/runtime/config/configuration-*
src/runtime/config/configuration-qemu-cca.toml.in:[hypervisor.qemu]
src/runtime/config/configuration-qemu-coco-dev.toml.in:[hypervisor.qemu]
src/runtime/config/configuration-qemu-nvidia-gpu-snp.toml.in:[hypervisor.qemu]
src/runtime/config/configuration-qemu-nvidia-gpu-tdx.toml.in:[hypervisor.qemu]
src/runtime/config/configuration-qemu-nvidia-gpu.toml.in:[hypervisor.qemu]
src/runtime/config/configuration-qemu-se.toml.in:[hypervisor.qemu]
src/runtime/config/configuration-qemu-snp.toml.in:[hypervisor.qemu]
src/runtime/config/configuration-qemu-tdx.toml.in:[hypervisor.qemu]
src/runtime/config/configuration-qemu.toml.in:[hypervisor.qemu]
$ grep "hypervisor.qemu*" src/runtime-rs/config/configuration-*
src/runtime-rs/config/configuration-qemu-runtime-rs.toml.in:[hypervisor.qemu]
src/runtime-rs/config/configuration-qemu-se-runtime-rs.toml.in:[hypervisor.qemu]
```
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The `tests` module inside the `memcg` module should be gated behind the
`test` configuration; add `#[cfg(test)]` to make those tests work properly.
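For reference, the usual gating pattern looks like this:
```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn smoke_test() {
        // Test-only code is compiled (and run) only under `cargo test`.
    }
}
```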
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Some tests from mem-agent require root privileges; use
`skip_if_not_root` to skip those tests if they are not executed as the
root user.
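A sketch of how such a guard is typically used; the macro body below is an illustrative stand-in, not the project's actual helper:
```rust
/// Illustrative stand-in for the real helper: return early from the test
/// when the effective UID is not 0.
macro_rules! skip_if_not_root {
    () => {
        if !nix::unistd::Uid::effective().is_root() {
            println!("skipping test: requires root");
            return;
        }
    };
}

#[test]
fn test_needs_root() {
    skip_if_not_root!();
    // ... assertions that require root privileges ...
}
```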
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Prefix with `#[allow(clippy::type_complexity)]` to silence this
warning; the return type is documented in comments.
```console
error: very complex type used. Consider factoring parts into `type` definitions
--> mem-agent/src/mglru.rs:184:6
|
184 | ) -> Result<HashMap<String, (usize, HashMap<usize, MGenLRU>)>> {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#type_complexity
= note: `-D clippy::type-complexity` implied by `-D warnings`
= help: to override `-D warnings` add `#[allow(clippy::type_complexity)]`
```
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Manually fix the `absurd_extreme_comparisons` clippy warning by testing
equality against 0, as suggested by rust 1.85.1, since `mem-agent` is
now a member of the `libs` workspace.
```console
error: this comparison involving the minimum or maximum element for this type contains a case that is always true or always false
--> mem-agent/src/psi.rs:62:8
|
62 | if reader
| ________^
63 | | .read_line(&mut first_line)
64 | | .map_err(|e| anyhow!("reader.read_line failed: {}", e))?
65 | | <= 0
| |____________^
|
= help: because `0` is the minimum value for this type, the case where the two sides are not equal never occurs, consider using `reader
.read_line(&mut first_line)
.map_err(|e| anyhow!("reader.read_line failed: {}", e))? == 0` instead
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#absurd_extreme_comparisons
= note: `#[deny(clippy::absurd_extreme_comparisons)]` on by default
```
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Manually fix `redundant_field_names` clippy warning as suggested by rust
1.85.1, since `mem-agent` is now a member of `libs` workspace.
```console
error: redundant field names in struct initialization
--> mem-agent/src/memcg.rs:441:13
|
441 | numa_id: numa_id,
| ^^^^^^^^^^^^^^^^ help: replace it with: `numa_id`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#redundant_field_names
= note: `-D clippy::redundant-field-names` implied by `-D warnings`
= help: to override `-D warnings` add `#[allow(clippy::redundant_field_names)]`
```
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Manually fix `manual_strip` clippy warning as suggested by rust 1.85.1,
since `mem-agent` is now a member of `libs` workspace.
```console
error: stripping a prefix manually
--> mem-agent/src/mglru.rs:284:29
|
284 | u32::from_str_radix(&content[2..], 16)
| ^^^^^^^^^^^^^
|
note: the prefix was tested here
--> mem-agent/src/mglru.rs:283:13
|
283 | let r = if content.starts_with("0x") {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#manual_strip
= note: `-D clippy::manual-strip` implied by `-D warnings`
= help: to override `-D warnings` add `#[allow(clippy::manual_strip)]`
help: try using the `strip_prefix` method
|
283 ~ let r = if let Some(<stripped>) = content.strip_prefix("0x") {
284 ~ u32::from_str_radix(<stripped>, 16)
|
```
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Manually fix `field_reassign_with_default` clippy warning as suggested
by rust 1.85.1, since `mem-agent` is now a member of `libs` workspace.
```console
error: field assignment outside of initializer for an instance created with Default::default()
--> mem-agent/src/memcg.rs:874:21
|
874 | numa_cg.numa_id = numa;
| ^^^^^^^^^^^^^^^^^^^^^^^
|
note: consider initializing the variable with `memcg::CgroupConfig { numa_id: numa, ..Default::default() }` and removing relevant reassignments
--> mem-agent/src/memcg.rs:873:21
|
873 | let mut numa_cg = CgroupConfig::default();
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#field_reassign_with_default
= note: `-D clippy::field-reassign-with-default` implied by `-D warnings`
= help: to override `-D warnings` add `#[allow(clippy::field_reassign_with_default)]`
```
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Fix `redundant_pattern_matching` clippy warning as suggested by rust
1.85.1, since `mem-agent` is now a member of `libs` workspace.
```console
error: redundant pattern matching, consider using `is_some()`
--> mem-agent/src/memcg.rs:595:40
|
595 | ... if let Some(_) = config_map.get_mut(path) {
| -------^^^^^^^--------------------------- help: try: `if config_map.get_mut(path).is_some()`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#redundant_pattern_matching
= note: `-D clippy::redundant-pattern-matching` implied by `-D warnings`
= help: to override `-D warnings` add `#[allow(clippy::redundant_pattern_matching)]`
```
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Fix `needless_bool` clippy warning as suggested by rust 1.85.1, since
`mem-agent` is now a member of `libs` workspace.
```console
error: this if-then-else expression returns a bool literal
--> mem-agent/src/memcg.rs:855:17
|
855 | / if configs.is_empty() {
856 | | true
857 | | } else {
858 | | false
859 | | }
| |_________________^ help: you can reduce it to: `configs.is_empty()`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_bool
= note: `-D clippy::needless-bool` implied by `-D warnings`
= help: to override `-D warnings` add `#[allow(clippy::needless_bool)]`
```
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Fix `for_kv_map` clippy warning as suggested by rust 1.85.1, since
`mem-agent` is now a member of `libs` workspace.
```console
error: you seem to want to iterate on a map's keys
--> mem-agent/src/memcg.rs:822:43
|
822 | for (single_config, _) in &secs_map.cgs {
| ^^^^^^^^^^^^^
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#for_kv_map
help: use the corresponding method
|
822 | for single_config in secs_map.cgs.keys() {
| ~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~
```
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Fix `into_iter_on_ref` clippy warning as suggested by rust 1.85.1, since
`mem-agent` is now a member of `libs` workspace.
```console
error: this `.into_iter()` call is equivalent to `.iter_mut()` and will not consume the `Vec`
--> mem-agent/src/memcg.rs:1122:27
|
1122 | for info in infov.into_iter() {
| ^^^^^^^^^ help: call directly: `iter_mut`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#into_iter_on_ref
= note: `-D clippy::into-iter-on-ref` implied by `-D warnings`
= help: to override `-D warnings` add `#[allow(clippy::into_iter_on_ref)]`
```
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Fix `legacy_numeric_constants` clippy warning as suggested by rust
1.85.1, since `mem-agent` is now a member of `libs` workspace.
```console
error: usage of a legacy numeric constant
--> mem-agent/src/compact.rs:132:47
|
132 | if self.config.compact_force_times == std::u64::MAX {
| ^^^^^^^^^^^^^
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#legacy_numeric_constants
help: use the associated constant instead
|
132 | if self.config.compact_force_times == u64::MAX {
| ~~~~~~~~
```
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Fix `single_component_path_imports` clippy warning as suggested by rust
1.85.1, since `mem-agent` is now a member of `libs` workspace.
```console
error: this import is redundant
--> mem-agent/src/mglru.rs:345:5
|
345 | use slog_term;
| ^^^^^^^^^^^^^^ help: remove it entirely
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#single_component_path_imports
```
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Fix `from_str_radix_10` clippy warning as suggested by rust 1.85.1,
since `mem-agent` is now a member of `libs` workspace.
```console
error: this call to `from_str_radix` can be replaced with a call to `str::parse`
--> mem-agent/src/mglru.rs:29:14
|
29 | let id = usize::from_str_radix(words[1], 10)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: try: `words[1].parse::<usize>()`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#from_str_radix_10
= note: `-D clippy::from-str-radix-10` implied by `-D warnings`
= help: to override `-D warnings` add `#[allow(clippy::from_str_radix_10)]`
```
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Fix `needless_borrow` clippy warning as suggested by rust 1.85.1, since
`mem-agent` is now a member of `libs` workspace.
```console
error: this expression creates a reference which is immediately dereferenced by the compiler
--> mem-agent/src/memcg.rs:1100:52
|
1100 | self.run_eviction_single_config(infov, &config)?;
| ^^^^^^^ help: change this to: `config`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_borrow
```
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Fix `ptr_arg` clippy warning as suggested by rust 1.85.1, since
`mem-agent` is now a member of `libs` workspace.
```console
error: writing `&PathBuf` instead of `&Path` involves a new object where a slice will do
--> mem-agent/src/memcg.rs:367:19
|
367 | psi_path: &PathBuf,
| ^^^^^^^^ help: change this to: `&Path`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#ptr_arg
= note: requested on the command line with `-D clippy::ptr-arg`
```
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Fix `crate_in_macro_def` clippy warning as suggested by rust 1.85.1,
since `mem-agent` is now a member of `libs` workspace.
```console
error: `crate` references the macro call's crate
--> mem-agent/src/misc.rs:12:22
|
12 | slog::error!(crate::misc::sl(), "{}", format_args!($($arg)*))
| ^^^^^ help: to reference the macro definition's crate, use: `$crate`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#crate_in_macro_def
= note: `-D clippy::crate-in-macro-def` implied by `-D warnings`
= help: to override `-D warnings` add `#[allow(clippy::crate_in_macro_def)]`
```
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Fix `len_zero` clippy warning as suggested by rust 1.85.1, since
`mem-agent` is now a member of `libs` workspace.
```console
error: length comparison to zero
--> mem-agent/src/memcg.rs:225:61
|
225 | let (keep, moved) = vec.drain(..).partition(|c| c.numa_id.len() > 0);
| ^^^^^^^^^^^^^^^^^^^ help: using `!is_empty` is clearer and more explicit: `!c.numa_id.is_empty()`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#len_zero
```
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Fix `bool_assert_comparison` clippy warning as suggested by rust 1.85.1,
since `mem-agent` is now a member of `libs` workspace.
```console
error: used `assert_eq!` with a literal bool
--> mem-agent/src/memcg.rs:1378:9
|
1378 | assert_eq!(m.get_timeout_list().len() > 0, true);
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#bool_assert_comparison
= note: `-D clippy::bool-assert-comparison` implied by `-D warnings`
= help: to override `-D warnings` add `#[allow(clippy::bool_assert_comparison)]`
help: replace it with `assert!(..)`
|
1378 - assert_eq!(m.get_timeout_list().len() > 0, true);
1378 + assert!(m.get_timeout_list().len() > 0);
|
```
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
`mem-agent` now does not ship example binaries and serves as a library
for `agent` to reference, so we move it into `libs` to better manage it.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Auto-generate policy in k8s-optional-empty-secret.bats, now that
genpolicy supports optional secret-based volumes.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
We've recently added support for:
* deploying and setting up a snapshotter, via
_experimentalSetupSnapshotter
* enabling experimental_force_guest_pull, via
_experimentalForceGuestPull
However, we never updated the documentation for those, thus let's do it
now.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Downloading Go from storage.googleapis.com fails intermittently with a 403
(see error below) so we switch to go.dev as referenced at
https://go.dev/dl/.
/tmp/install-go-tmp.Rw5Q4thEWr ~/work/kata-containers/kata-containers
/usr/bin/go
[install_go.sh:85] INFO: removing go version go1.24.9 linux/amd64
[install_go.sh:94] INFO: Download go version 1.24.6
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 298 100 298 0 0 2610 0 --:--:-- --:--:-- --:--:-- 2614
[install_go.sh:97] INFO: Install go
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
[install_go.sh:99] ERROR: sudo tar -C /usr/local/ -xzf go1.24.6.linux-amd64.tar.gz
https://github.com/kata-containers/kata-containers/actions/runs/18602801597/job/53045072109?pr=11947#step:5:17
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
This change ensures that the NVIDIA package repository for nvidia-imex
and libnvidia-nspc is being used as the source.
The NVIDIA repository does not publish these packages with a -580
version suffix, which made us fall back to the packages from the
Ubuntu repository.
These two packages were recently updated by Ubuntu to depend on
nvidia-kernel-common-580-server (this happened from version
580.82.07-0ubuntu1 to version 580.95.05-0ubuntu1). This conflicts
with nvidia-kernel-common-580 which gets installed by
nvidia-headless-no-dkms-580-open, thus causing a build failure.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
In CI, helm is not yet installed and we don't have root access. Let's use
the current dir, which should be writable, and the --no-sudo option to
install it.
Note that when helm is already installed this should not change anything
and will simply use the system-wide installation.
Signed-off-by: Lukáš Doktor <ldoktor@redhat.com>
While the local-build folder's Makefile dependencies for the
confidential nvidia rootfs targets already declare the pause image
and coco-guest-components dependencies, the actual rootfs
composition does not contain the pause image bundle and relevant
certificates for guest pull. This change ensures the rootfs gets
composed with the relevant files.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
TOML was chosen for initdata particularly for the ability to include
policy docs and other configuration files without mangling them. The
default TOML encoding renders string values as single-line,
double-quoted strings, effectively depriving us of this feature.
This commit changes the encoding to use `to_string_pretty`, and includes
a test that verifies the desirable aspect of encoding: newlines are kept
verbatim.
Fixes: #11943
Signed-off-by: Markus Rudy <mr@edgeless.systems>
After adding Arm CCA support, building the runtime relies on the kernel's kvm.h
headers. The required kernel headers are currently quite new compared with the
traditional ones, so we build the kernel headers first and then inject them into
the shim-v2 build container.
Signed-off-by: Kevin Zhao <kevin.zhao@linaro.org>
Co-authored-by: Seunguk Shin <seunguk.shin@arm.com>
The new initdata variants of the tests are failing on the tdx
runner, so as discussed, skip them for now: Issue #11945
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Now that we have wider coverage of initdata testing in
k8s-guest-pull-image-signature.bats, remove
the old tests.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Our current set of authenticated registry tests involves setting
kernel_params to configure the image pull process, but as of
kata-containers#11197
this approach is not the main way to set this configuration and the agent
config has been removed. Instead we should set the configuration in the
`cdh.toml` part of the initdata, so add new test cases for this. In future, when
we have been through the deprecation process, we should remove the old tests
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Our current set of signature tests involves setting kernel_parameters to
configure the image pull process, but as of
https://github.com/kata-containers/kata-containers/pull/11197
this approach is not the main way to set this configuration and the agent
config has been removed. Instead we should set the configuration in the
`cdh.toml` part of the initdata, so add new test cases for this. In future, when
we have been through the deprecation process, we should remove the old tests
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Create a shared get_initdata method that injects a cdh image
section, so we don't duplicate the initdata structure everywhere
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
One problem that we've been having for a reasonable amount of time, is
containerd not behaving very well when we have multiple snapshotters.
Although I'm adding this test with my "CoCo" hat in mind, the issue can
happen easily with any other case that requires a different snapshotter
(such as, for instance, firecracker + devmapper).
With this in mind, let's do some stability tests, checking every hour a
simple case of running a few pre-defined containers with runc, and then
running the same containers with kata.
This should be enough to put us in the situation where containerd gets
confused about which snapshotter owns the image layers, and break on us
(or not break and show us that this has been solved ...).
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
With this change we namespace the stage one rootfs tarball name
and use the same name across all uses. This will help overcome
several subtle local build problems.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
We fix the following error:
```
thread 'sandbox::tests::add_and_get_container' panicked at src/sandbox.rs:901:10:
called `Result::unwrap()` on an `Err` value: Create cgroupfs manager
Caused by:
0: fs error caused by: Os { code: 17, kind: AlreadyExists, message: "File exists" }
1: File exists (os error 17)
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
by ensuring that the cgroup path is unique for tests run in the same millisecond.
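A minimal sketch of one way to achieve this, assuming a helper that combines the timestamp with a per-process counter (names are illustrative, not the actual fix):
```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{SystemTime, UNIX_EPOCH};

static NEXT_ID: AtomicU64 = AtomicU64::new(0);

/// Illustrative: two tests created within the same millisecond still get
/// distinct cgroup paths thanks to the monotonically increasing counter.
fn unique_test_cgroup_path(prefix: &str) -> String {
    let ms = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_millis();
    let id = NEXT_ID.fetch_add(1, Ordering::Relaxed);
    format!("{prefix}-{ms}-{id}")
}
```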
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
Use CDI exclusively from crates.io and not from a GH repository.
Cargo can easily check if a new version is available, and we can
bump it far more easily if needed.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
With the shellcheck fixes we accidentally quoted the "-n NAMESPACE"
argument where we should have used an array instead, which led to oc
considering this as a pod name and returning an error.
Signed-off-by: Lukáš Doktor <ldoktor@redhat.com>
We need to ensure that any change on the Dockerfile (and its dir) leads
to the build being retriggered, rather than using the cached version.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
We are seeing more protoc related failures on the new
runners, so try adding the protobuf-compiler dependency
to these steps to see if it helps.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
- copy default-initdata.toml in create_tmp_policy_settings_dir, so it can be modified by other tests if needed
- make auto_generate_policy use default-initdata.toml by default
- add auto_generate_policy_no_added_flags, so it may be used by tests that don't want to use default-initdata.toml by default
Signed-off-by: Saul Paredes <saulparedes@microsoft.com>
On commit 9602ba6ccc, from February this
year, we've introduced a check to ensure that the files needed for
signing the kernel build are present. However, we've noticed last week
that there were a reasonable number of wrong assumptions in the
workflow. :-)
Zvonko fixed the majority of those, but this bit was left and it'd cause
breakages when using a kernel that was cached ... although passing when
building new kernels.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
This is needed so the kernel setup picks up the correct
config values from our fragments directories.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
We need to make sure that the kernel we're using has the
correct configs set, otherwise the module signing will not work.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Now that we have added the ability to deploy kata-containers with
experimental_force_guest_pull configured, let's make sure we test it to
avoid any kind of regressions.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Otherwise we have no way to differentiate running tests on qemu-coco-dev
with different snapshotters.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
What was done in the past, trying to set the env var on the same step
it'd be used, simply does not work.
Instead, we need to properly set it through the `env` set up, as done
now.
We're also bumping the kata_config_version to ensure we retrigger the
kernel builds.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
We have some scalable s390x and ppc runners, so
start to use them for build and test, to improve
the throughput of our CI
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Co-authored-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
For some reason we didn't have the "Report tests" step as part of the
TEE jobs. This step immensely helps to check which tests are failing and
why, so let's add it while touching the workflow.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
There's no reason to have the code duplication between the SNP / TDX
tests for CoCo, as those are basically using the same configuration
nowadays.
Note that for the TEEs case, as the nydus-snapshotter is deployed by the
admin, once, instead of deploying it on every run ... I'm actually
removing the nydus-snapshotter steps so we make it clear that those
steps are not performed by the CI.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
As virtio-scsi has been set as the default block device driver, the
runtime also needs to correctly handle the virtio-scsi info, especially
the SCSI address required by the kata-agent handling logic.
Getting and assigning the scsi_addr to the kata-agent device id is
enough; this commit does just that.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Since runtime-rs supports block device hotplug when creating new
containers, and the device should also be removed when the container
is stopped, add block device unplug support for clh.
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
This commit introduces support for selecting `virtio-scsi` as the
block device driver for QEMU during initial setup.
The primary goal is to resolve a conflict in non-TEE environments:
1. The global block device configuration defaults to `virtio-scsi`.
2. The `initdata` device driver was previously designed and hardcoded
to `virtio-blk-pci`.
3. This conflict prevented unified block device usage.
By allowing `virtio-scsi` to be configured at cold boot, the `initdata`
device can now correctly adhere to the global setting, eliminating the
need for a hardcoded driver and ensuring consistent block device
configuration across all supported devices (excluding rootfs).
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
The implementation of the seccomp feature in Dragonball currently has only a basic
framework, and the actual restriction rules are empty.
This pull request includes the following changes:
- Modify the relevant configuration files.
- Modify the seccomp framework to support different restrictions for different threads.
- Add new seccomp rules for the modified framework.
This commit primarily implements changes 1 and 3 for runtime-rs.
Fixes: #11673
Signed-off-by: wangxinge <wangxinge@bupt.edu.cn>
For DGX-like systems we need additional binaries and libraries;
enable both the Kata AND CoCo use-cases.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Update tools/osbuilder/rootfs-builder/nvidia/nvidia_rootfs.sh
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Fix all instances of template injection by using environment variables as
recommended by Zizmor, instead of directly injecting values into the
commands.
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
The two ignored cases are strictly necessary for the CI to work today, and we
have various security mitigations in place.
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
There are 62 such warnings and addressing them would take quite a bit of
time so just disable them for now.
help[undocumented-permissions]: permissions without explanatory comments
--> ./.github/workflows/release.yaml:71:7
|
71 | packages: write
| ^^^^^^^^^^^^^^^ needs an explanatory comment
72 | id-token: write
| ^^^^^^^^^^^^^^^ needs an explanatory comment
73 | attestations: write
| ^^^^^^^^^^^^^^^^^^^ needs an explanatory comment
|
= note: audit confidence → High
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
We can't test this PR because the workflow needs this trigger, so adding
this will allow testing future PRs.
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
For those who are not willing to use the nydus-snapshotter for pulling
the image inside the guest, let's allow them to set
experimental_force_guest_pull, introduced by Edgeless, as part of our
helm-chart.
This option can be set as:
_experimentalForceGuestPull: "qemu-tdx,qemu-coco-dev"
This would then ensure that the configuration for `qemu-tdx` and
`qemu-coco-dev` would have the option enabled.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
As the kata-deploy helm chart has been the only way we've been testing
kata-containers deployment as part of our CI, it's time to finally get
rid of the kustomize yamls and avoid us having to maintain two different
methods (with one of those not being tested).
Here I removed:
* kata-deploy yamls and kustomize yamls
* kata-cleanup yamls and kustomize yamls
* kata-rbac yamls and kustomize yamls
* README.md for the kustomize yamls was removed
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
erofs-snapshotter can be used to leverage sharing the image from the
host to the guest without the need of a shared filesystem (such as
virtio-fs or virtio-9p).
This case is ideal for Confidential Computing enabled on Kata
Containers, and we can immensely benefit from this snapshotter, thus
let's test it as soon as possible so we can find issues, report bugs,
and ask for enhancement requests.
There are at least a few things that we know for sure to be problematic
now:
* Policy has to be adjusted to the erofs-snapshotter
* There is no support for signed or encrypted images
* Tests that use the KBS are disabled for now
Even with these limitations, I do believe we should be testing the
snapshotter, so we can team up and get those limitations addressed.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
As done in the previous commit, let's expand the vanilla k8s deployment
to also allow the erofs host side configuration.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
We already have support for deploying a few flavours of k8s that are
required for different tests we perform.
Let's also add the ability to deploy vanilla k8s, as that will be very
useful in the next commits in this series.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The active version is 2.1.x, and the latest is 2.2.0-beta.0.
The latest is what we'll be using to test if the "to be released"
version of containerd works well for our use-cases.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Let's make sure that we can get non-official releases as well, otherwise
we won't be able to test a coming release of containerd, to know whether
it solves issues that we face or not, before it's actually released.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
NVRC introduced the confidential feature flag and we
haven't updated the rootfs build to accommodate it.
If rootfs_type==confidential, use --feature=confidential.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Canonical TDX release is not needed for vanilla Ubuntu 25.10 but
GRUB_CMDLINE_LINUX_DEFAULT needs to contain `nohibernate` and
`kvm_intel.tdx=1`
Signed-off-by: Szymon Klimek <szymon.klimek@intel.com>
Use grep_pod_exec_output to retry possible failing "kubectl exec"
commands. Other tests have been hitting such errors during CI in
the past.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
This adds an integration test to verify that privileged containers work
properly when deploying Kata with kata-deploy.
This is a follow-up to #11878.
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
Let's rely on kata-deploy setting up the nydus snapshotter for us,
instead of doing this with external code.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
This allows us to stop setting up the snapshotter ourselves, and just
rely on kata-deploy to do so.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Let's expose the EXPERIMENTAL_SETUP_SNAPSHOTTER script environment
variable to our chart, allowing users of our helm chart to take
advantage of this experimental feature.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
We may deploy in scenarios where we want to have both snapshotters set
up, sometimes even for a simple test of which one behaves better.
With this in mind, let's allow EXPERIMENTAL_SETUP_SNAPSHOTTER to receive a
comma-separated list of snapshotters, such as:
```
EXPERIMENTAL_SETUP_SNAPSHOTTER="erofs,nydus"
```
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Similarly to what's been done for the nydus-snapshotter, let's allow
users to have erofs-snapshotter set up by simply passing:
```
EXPERIMENTAL_SETUP_SNAPSHOTTER="erofs".
```
Mind that erofs, although a built-in containerd snapshotter, has system
dependencies that we will *NOT* install; it's up to the admin to do so.
These dependencies are:
* erofs-utils
* fsverity
* erofs module loaded
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
In the previous commit we added the assumption that the
nydus-snapshotter version should be the same in two different places.
Now, with this test, we ensure those will always be in sync.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Let's introduce a new EXPERIMENTAL_SETUP_SNAPSHOTTER environment
variable that, when set, allows kata-deploy to put the nydus snapshotter
in the correct place, and configure containerd accordingly.
Mind, this is a stopgap until the nydus-snapshotter helm chart is ready
to be used and behaves well enough to become a weak dependency of our
helm chart. When that happens this code can be deleted entirely.
Users can have nydus-snapshotter deployed and configured for the
guest-pull use case by simply passing:
```
EXPERIMENTAL_SETUP_SNAPSHOTTER="nydus"
```
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Otherwise we'd end up adding the file several times, which could lead
to problems when removing the entry, leaving containerd unable to
start due to an import file not being present.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
The code, as it was, would lead to the following broken command:
`--header "Authorization: Bearer: "`
Let's only expand that part of the command if ${GH_TOKEN} is passed,
otherwise we don't even bother adding it.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Document that privileged containers with
privileged_without_host_devices=false are not generally supported.
When you try the above, the runtime will pass all the host devices to Kata
in the OCI spec, and Kata will fail to create the container for various
reasons depending on the setup, e.g.:
- Attempting to hotplug uninitialized loop devices.
- Attempting to remount /dev devices on themselves when the agent had
already created them as default devices (e.g. /dev/full).
- "Conflicting device updates" errors.
- And more...
privileged_without_host_devices was originally created to support
Kata [1][2] and lots of people are having issues when it's set to
false [3].
[1] https://github.com/kata-containers/runtime/issues/1568
[2] https://github.com/containerd/cri/pull/1225
[3] https://github.com/kata-containers/kata-containers/issues?q=is%3Aissue%20%20in%3Atitle%20privileged
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
We have noticed in the CI that the `gen_init_cpio ...` was returning 255
and breaking the build. Why? I am not sure.
When chatting with Steve, he suggested to split the command, so it'd be
easier to see what's actually breaking. But guess what? There's no
breakage when we split the command.
So, let's try it out and see whether the CI passes after it.
If someone is willing to educate us on this one, please, that would be
helpful! :-)
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Moving the CUDA repo to the top for all essential packages
and adding a repo priority favouring NVIDIA based repos.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Introduce new test case which verifies that openvpn clients and servers
can run as Kata pods and can successfully establish a connection.
Volatile certificates and keys are generated by an initialization
container and injected into the client and server containers.
This scenario requires TUN/TAP support for the UVM kernel.
Signed-off-by: Manuel Huber <mahuber@microsoft.com>
Co-authored-by: Manuel Huber <manuelh@nvidia.com>
No need to die when a Kind that does not require a policy annotation is
found in a pod manifest. Print an informational message instead.
Signed-off-by: Manuel Huber <mahuber@microsoft.com>
Currently, use of openvpn clients/servers is not possible in Kata UVMs.
The following error message can be expected:
ERROR: Cannot open TUN/TAP dev /dev/net/tun: No such device (errno=19)
To support openvpn scenarios using bridging and TAP, we enable various
kernel networking config options.
Signed-off-by: Manuel Huber <mahuber@microsoft.com>
Manually added "hostPath" to main.txt then regenerated the dictionary
with `./kata-spell-check.sh make-dict`.
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
This change crystallizes and simplifies the current handling of /dev
hostPath mounts with virtually no functional change.
Before this change:
- If a mount DESTINATION is in /dev and it is a non-regular file on the HOST,
the shim passes the OCI bind mount as is to the guest (e.g.
/dev/kmsg:/dev/kmsg). The container rightfully sees the GUEST device.
- If the mount DESTINATION does not exist on the host, the shim relies on
k8s/containerd to automatically create a directory (i.e. non-regular file) on
the HOST. The shim then also passes the OCI bind mount as is to the guest. The
container rightfully sees the GUEST device.
- For other /dev mounts, the shim passes the device major/minor to the guest
over virtio-fs. The container rightfully sees the GUEST device.
After this change:
- If a mount SOURCE is in /dev and it is a non-regular file on the HOST,
the shim passes the OCI bind mount as is to the guest. The container
rightfully sees the GUEST device.
- The shim no longer relies on k8s/containerd to create missing mount
directories. Instead it explicitly handles missing mount SOURCES, and
treats them like the previous bullet point.
- The shim no longer uses virtio-fs to pass /dev device major/minor to the
guest, instead it passes the OCI bind mount as is.
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
test_add_one_arp_neighbor modifies the root network namespace, so we
should ensure that it does not interfere with normal network setup.
Adding an IP to a device results in automatic routes, which may affect
routing to non-test endpoints. Thus, we change the addresses used in the
test to come from TEST-NET-1 (192.0.2.0/24), which is designated for tests
and usually not routable.
Signed-off-by: Markus Rudy <mr@edgeless.systems>
list_routes and test_add_one_arp_neighbor have been flaky in the past
(#10856), but it's been hard to tell what exactly is going wrong.
This commit adds debug information for the most likely problem in
list_routes: devices being added/removed/modified concurrently.
Furthermore, it adds the exit code and stderr of the ip command, in case
it failed to list the ARP neighborhood.
Signed-off-by: Markus Rudy <mr@edgeless.systems>
The previous code only checked the result of with_nix_path(), not statfs(),
thus leading to an uninitialized memory read if statfs() failed.
No functional change otherwise.
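As a rough sketch of the corrected pattern (using the libc crate directly; the function name and error mapping here are illustrative, not the agent's actual helper):
```
use std::ffi::CString;
use std::mem::MaybeUninit;
use std::os::unix::ffi::OsStrExt;
use std::path::Path;

fn statfs_checked(path: &Path) -> std::io::Result<libc::statfs> {
    // Path -> C string conversion can fail (interior NUL); this was the only
    // error the previous code checked.
    let c_path = CString::new(path.as_os_str().as_bytes())
        .map_err(|_| std::io::Error::new(std::io::ErrorKind::InvalidInput, "path contains NUL"))?;

    let mut st = MaybeUninit::<libc::statfs>::uninit();
    // statfs() itself can also fail; without checking its return code, `st`
    // would be read while still uninitialized.
    let rc = unsafe { libc::statfs(c_path.as_ptr(), st.as_mut_ptr()) };
    if rc != 0 {
        return Err(std::io::Error::last_os_error());
    }
    Ok(unsafe { st.assume_init() })
}
```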
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
85f3391bc added the support for TDX QGS port=0 but missed
defaultQgsPort in the default config. defaultQgsPort overrides the
user-provided tdx_quote_generation_service_socket_port=0.
After this change, defaultQgsPort is not needed anymore since
there's no default: any positive integer is OK and a negative or
unset value becomes a parse error.
QEMUTDXQUOTEGENERATIONSERVICESOCKETPORT in the Makefile is used
to provide a sane default when tdx_quote_generation_service_socket_port
gets set in the configuration.
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
genpolicy is a developer tool that should be usable on MacOS. Adding it
to the darwin CI job ensures that it can still be built after changes.
On an Apple M2, the output of `uname -m` is `arm64`, which is why a new
case is needed in the arch_to_* functions.
We're not going to cross-compile binaries on darwin, so don't install
any additional Rust targets.
Fixes: #11635
Signed-off-by: Markus Rudy <mr@edgeless.systems>
Most of the kata-types code is reusable across platforms. However, some
functions in the mount module require safe-path, which is Linux-specific
and can't be used on other platforms, notably darwin.
This commit adds a new feature `safe-path` to kata-types, which enables
the functions that use safe-path. The Linux-only callers kata-ctl and
runtime-rs enable this feature, whereas genpolicy only needs initdata
and does not need the functions from the mount module. Using a feature
instead of a target_os restriction ensures that the developer experience
for genpolicy remains the same.
Signed-off-by: Markus Rudy <mr@edgeless.systems>
This commit adds changes to enable fs sharing between host/guest
using virtio-fs when booting a pod VM for testing. This primarily
enables sharing container rootfs for testing container lifecycle
commands.
Summary of changes is as below:
- adds minimal virtiofsd code to start userspace daemon (based on
`runtime-rs/crates/resource/src/share_fs`)
- adds the virtiofs device to the test vm
- prepares and mounts the container rootfs on host
- modifies container storage & oci specs
Signed-off-by: Sumedh Alok Sharma <sumsharma@microsoft.com>
Fixing the shellcheck issues first so that they are not coupled to the
subsequent commit introducing Darwin support to the script.
Signed-off-by: Markus Rudy <mr@edgeless.systems>
Auto-generate policy for nginx-deployment pods, instead of hard-coding
the "allow all" policy.
Note that the `busybox_pod` - created using `kubectl run` - still
doesn't have an Init Data annotation, so it is using the default policy
built into the Kata Guest rootfs image file.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Auto-generate agent policy in k8s-liveness-probes.bats, instead of using
the non-confidential "allow all" policy.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Auto-generate the agent policy for pod-secret-env.yaml, using
"genpolicy -c inject_secret.yaml".
Support for passing Secret specification files as "-c" arguments of
genpolicy has been added when fixing #10033 with PR #10986.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Since we cannot build all components with libc=musl and a
static RUSTFLAG we still need to ship libgcc for AA or other guest
components.
Without this change the guest components do not work and we see
/usr/local/bin/attestation-agent: error while loading shared
libraries: libgcc_s.so.1: cannot open shared object file: No such file or directory
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
This fixes that error everywhere by adding a `name:` field to all jobs that
were missing it. We keep the same name as the job ID to ensure no
disturbance to the required job names.
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
-o pipefail in particular ensures that exec_host() returns the right exit
code.
-u is also added for good measure. Note that $BATS_TEST_DIRNAME is set by
bats so we move its usage inside the function.
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
As a consequence of moving away from Advanced Security for Zizmor, it now
checks the entire codebase and will error out on this PR and future ones.
To be reverted once we address all Zizmor findings in a future PR.
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
This PR fixes a test that failed on platforms like ppc64le due to a hardcoded mount option length.
* Test was failing on ppc64le due to larger system page size (e.g., 65536 bytes)
* Original test used a hardcoded 4097-byte string assuming 4KB page size
* Replaced with *MAX_MOUNT_PARAM_SIZE + 1 to reflect actual system limit
* Ensures test fails correctly across all architectures
Fixes: #11852
Signed-off-by: shwetha-s-poojary <shwetha.s-poojary@ibm.com>
The Hadolint warning DL3007 (pin the version explicitly) is no
longer applicable.
We have updated the base image to use a specific version
digest, which satisfies the linter's requirement for reproducible
builds. This commit removes the corresponding inline ignore comment.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
This change enables running the QEMU VMM as a non-root user when the rootless flag is set to true in the configuration.
Signed-off-by: stevenfryto <sunzitai_1832@bupt.edu.cn>
This commit introduces generic support for running the VMM in rootless mode in runtime-rs:
1. Detect whether the VMM is running in rootless mode.
2. Before starting the VMM process, create a non-root user and launch the VMM with that user's UID and GID; also add the kvm group's GID to the VMM process's supplementary groups so the VMM process can access /dev/kvm.
3. Set up the rootless directory under /run/user/<uid>, and modify some path variables to be functions that return the path with the rootless directory prefix when running in rootless mode.
Fixes: #11414
Signed-off-by: stevenfryto <sunzitai_1832@bupt.edu.cn>
We recently hit the following error during build:
```
RUN ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key -P ""
OpenSSL version mismatch. Built against 3050003f, you have 30500010
```
This happened because `alpine:latest` moved forward and the `ssh-keygen`
binary in the base image was compiled against a newer OpenSSL version
that is not available at runtime.
Pinning the base image to the stable release (3.20) avoids the mismatch
and ensures consistent builds.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
This change fixes the cleanup logic when running tests
in a vm booted with qemu, with respect to the qmp.sock & console.sock
files, and no longer assumes any path for them.
Signed-off-by: Sumedh Alok Sharma <sumsharma@microsoft.com>
Change the default block driver to virtio-scsi.
The latest qemu commit
https://gitlab.com/qemu-project/qemu/-/commit/984a32f17e8dab0dc3d2328c46cb3e0c0a472a73
brings a bug for virtio-blk-pci with io_uring mode at line:
https://gitlab.com/qemu-project/qemu/-/commit/984a32f17e8dab0dc3d2328c46cb3e0c0a472a73#ce8eeb01f8b84f8cb8d3c35684d473fe1ee670f9_345_352
In order to avoid this issue, change the default block driver
to virtio-scsi.
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
As the OCI Spec annotations have been updated by adding or removing items,
we should use the updated annotations as the passed argument.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
This commit removes the InitData annotation from the OCI Spec's
annotations.
Similar to the Policy annotation, InitData is now exclusively handled
and transmitted to the guest via the sandbox's init data mechanism.
Removing this redundant and potentially large annotation simplifies the
OCI Spec and streamlines the guest initialization process.
This change aligns the handling of InitData with existing practices
within runtime-go.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
The repro below would show this error in the logs (in debug mode only):
fatal runtime error: IO Safety violation: owned file descriptor already closed
The issue was that the `pseudo.slave` file descriptor was being owned by
multiple variables simultaneously. When any of those variables would go out
of scope, they would close the same file descriptor, which is undefined
behavior.
To fix this, we clone: we create a new file descriptOR that refers to the same
file descriptION as the original. When the cloned descriptor is closed, this
affects neither the original descriptor nor the description. Only when the last
descriptor is closed does the kernel clean up the description.
Note that we purposely consume (not clone) the original descriptor with
`child_stdin` as `pseudo` is NOT dropped automatically.
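A minimal sketch of the descriptor-vs-description distinction using std's owned FD types (`slave` stands in for `pseudo.slave`; this is not the agent's actual code):
```
use std::fs::File;
use std::os::fd::OwnedFd;

fn wire_up(slave: OwnedFd) -> std::io::Result<()> {
    // try_clone() dup()s the descriptor: each clone is a NEW descriptor that
    // refers to the SAME open file description.
    let stdout_fd: OwnedFd = slave.try_clone()?;
    let stderr_fd: OwnedFd = slave.try_clone()?;

    // Each File now owns its own descriptor; dropping one closes only that
    // descriptor, not the shared description.
    let _stdout = File::from(stdout_fd);
    let _stderr = File::from(stderr_fd);

    // The original is consumed exactly once (mirroring the `child_stdin`
    // case); the kernel releases the description when the last one closes.
    let _stdin = File::from(slave);
    Ok(())
}
```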
Repro
-----
Prerequisites:
- Use Rust 1.80+.
- Build the agent in debug mode.
$ cat busybox.yaml
apiVersion: v1
kind: Pod
metadata:
name: busybox
spec:
containers:
- image: busybox:latest
name: busybox
runtimeClassName: kata
$ kubectl apply -f busybox.yaml
pod/busybox created
$ kubectl exec -it busybox -- sh
error: Internal error occurred: Internal error occurred: error executing
command in container: failed to exec in container: failed to start exec
"e6c602352849647201860c1e1888d99ea3166512f1cc548b9d7f2533129508a9":
cannot enter container 76a499cbf747b9806689e51f6ba35e46d735064a3f176f9be034777e93a242d5,
with err ttrpc: closed
Fixes: #11054
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
Log how much time "kubectl get pods" and each test case are taking,
just in case that will reveal unusually slow test clusters, and/or
opportunities to improve tests.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
In the previous commit we've added some code that broke `cargo fmt --
--check` without even noticing, as the code didn't go through the CI
process (due to it being a security advisory).
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
Trailing slash in DEFAULT_KATA_GUEST_SANDBOX_DIR caused double slashes
in mount_point (e.g. "/run/kata-containers/sandbox//shm"), which failed
OPA strict equality checks against policy mount_point. Removing it aligns
generated paths with policy and fixes CreateSandboxRequest denial.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
coco-guest-components tarball is used as is for both vanilla coco
rootfs and the nvidia enabled rootfs. nvidia-attester can be built
without nvml so make it globally enabled for coco-guest-components.
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
`allow_interactive_exec` requires a sandbox-name annotation, however
this is only added for pods by genpolicy. Other pod-generating resources
have unpredictable sandbox names.
This patch instead uses a regex for the sandbox name in genpolicy, based
on the specified metadata and following Kubernetes' naming logic. The
generated regex is then used in the policy to correctly match the
sandbox name.
Fixes: #11823
Signed-off-by: Charlotte Hartmann Paludo <git@charlotteharludo.com>
Co-authored-by: Paul Meyer <katexochen0@gmail.com>
Co-authored-by: Markus Rudy <mr@edgeless.systems>
This commit addresses an issue where base64 output, when used with a
default configuration, would introduce newlines, causing decoding to
fail on the runtime.
The fix ensures base64 output is a single, continuous line using the -w0
flag. This guarantees the encoded string is a valid Base64 sequence,
preventing potential runtime errors caused by invalid characters.
Note that when you use the base64 command without any parameters, it
typically adds newlines to the output automatically, usually every 76 chars.
In contrast, base64 -w0 explicitly tells the command not to add any
newlines (-w for wrap, and 0 for a width of zero), which results in a
continuous string with no whitespace.
This is a critical distinction because if you pass a Base64 string with
newlines to a runtime, it may be treated as an invalid string, causing
the decoding process to fail.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Correctly set dir's permissions and mode. This update ensures:
The dir_mode field of CopyFileRequest is set to DIR_MODE_PERMS
(equivalent to Go's 0o750 | os.ModeDir), which is primarily used for the
top-level directory creation permissions.
The file_mode field now directly uses metadata.mode() (equivalent to
Go's st.Mode) for the target entry.
This change aims to resolve potential permission issues or inconsistencies
during directory and file creation within the guest environment by precisely
matching the expected mode propagation of the Kata agent.
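As a rough sketch of the intent, with a stand-in struct for the two CopyFileRequest fields mentioned above (constant and names follow this commit message, not necessarily the actual protocol code):
```
use std::os::unix::fs::MetadataExt;

/// Stand-in for the two CopyFileRequest fields discussed above.
struct CopyFileModes {
    dir_mode: u32,
    file_mode: u32,
}

fn modes_for(metadata: &std::fs::Metadata) -> CopyFileModes {
    // Permission bits for creating the top-level directory; per the commit
    // text the real constant also carries the directory-type bit, the
    // analogue of Go's os.ModeDir (only the permission bits are shown here).
    const DIR_MODE_PERMS: u32 = 0o750;
    CopyFileModes {
        dir_mode: DIR_MODE_PERMS,
        // The target entry keeps the mode reported by the filesystem,
        // mirroring Go's st.Mode.
        file_mode: metadata.mode(),
    }
}
```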
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
The core purpose of introducing volume_manager to VolumeResource is to
centralize the management of shared file system volumes. By creating a
single VolumeManager instance within VolumeResource, all shared file
volumes are managed by one central entity. This single volume_manager
can accurately track the references of all ShareFsVolume instances to
the shared volumes, ensuring correct reference counting, proper volume
lifecycle management, and preventing issues like volumes being
overwritten.
This new design ensures that all shared volumes are managed by a central
entity, which:
(1) Guarantees correct reference counting.
(2) Manages the volume lifecycle correctly, avoiding issues like volumes
being overwritten.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
This commit integrates the new `VolumeManager` into the `ShareFsVolume`
lifecycle. Instead of directly copying files, `ShareFsVolume::new` now
uses the `VolumeManager` to get a guest path and determine if the volume
needs to be copied. It also updates the `cleanup` function to release
the volume's reference count, allowing the `VolumeManager` to manage its
state and clean up resources when no longer in use.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
This commit introduces a new `VolumeManager` to track the state of shared
volumes, including their reference counts and their corresponding container
ids.
The manager's goal is to handle the lifecycle of shared filesystem volumes,
including:
(1) Volume State Tracking: Tracks the mapping from host source paths to guest
destination paths.
(2) Reference Counting: Manages reference counts for each volume, preventing
premature cleanup when multiple containers share the same source.
(3) Unique guest paths: Generates unique guest paths using a random string
to avoid naming conflicts.
(4) Improved Management: Provides a centralized way to handle volume creation,
copying, and release, including aborting file watchers when volumes are no longer
in use.
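A condensed sketch of the state and reference counting described above (field names, the guest path prefix, and the random-suffix helper are illustrative, not the runtime-rs definitions):
```
use std::collections::HashMap;

struct SharedVolume {
    guest_path: String,         // unique guest destination for the volume
    ref_count: usize,           // how many containers currently use it
    container_ids: Vec<String>, // who holds a reference
}

#[derive(Default)]
struct VolumeManager {
    // host source path -> shared volume state
    volumes: HashMap<String, SharedVolume>,
}

impl VolumeManager {
    /// Returns the guest path plus whether the caller must copy data in:
    /// only the first reference to a host path triggers a copy.
    fn acquire(&mut self, host_path: &str, container_id: &str) -> (String, bool) {
        if let Some(v) = self.volumes.get_mut(host_path) {
            v.ref_count += 1;
            v.container_ids.push(container_id.to_string());
            return (v.guest_path.clone(), false);
        }
        let guest_path = format!("/run/kata-containers/shared/volumes/{}", rand_suffix());
        self.volumes.insert(
            host_path.to_string(),
            SharedVolume {
                guest_path: guest_path.clone(),
                ref_count: 1,
                container_ids: vec![container_id.to_string()],
            },
        );
        (guest_path, true)
    }

    /// Drops one reference; state (and e.g. file watchers) is cleaned up
    /// only when the count reaches zero.
    fn release(&mut self, host_path: &str) {
        if let Some(v) = self.volumes.get_mut(host_path) {
            v.ref_count -= 1;
            if v.ref_count == 0 {
                self.volumes.remove(host_path);
            }
        }
    }
}

fn rand_suffix() -> String {
    "a1b2c3".to_string() // placeholder for a real random string
}
```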
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
This commit refactors the `CopyFile` related code to streamline the
logic for creating guest directories and make the code structure
clearer.
Its main goal is to improve the overall maintainability and facilitate
future feature extensions.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
This commit is designed to perform a full sync before starting monitoring
to ensure that files which exist before monitoring starts are also synced.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
This commit updates the configuration for the initdata block
device to use the BlockDeviceAio::Native mode.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
This commit enhances control over block device AIO modes via hotplug.
Previously, hotplugging block devices was set with default AIO mode (io_uring).
Even if users reset the AIO mode in the configuration file, the changes would
not be correctly applied to individual block devices.
With this update, users can now explicitly configure the AIO mode for hot-plugging
block devices via the configuration, and those settings will be correctly applied.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
We need more information about the block device, so just replace the original
method get_block_driver with get_block_device_info and return its
BlockDeviceInfo.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
- Add disable_selinux and selinux_label fields to hypervisor for SELinux support.
- Implement related SELinux support functions.
Fixes: #9866
Signed-off-by: Caspian443 <scrisis843@gmail.com>
Add container_exec_with_retries(), useful for retrying, when needed,
commands similar to:
kubectl exec <pod_name> -c <container_name> -- <command>
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
On s390x, QEMU fails if maxmem is set to 0:
```
invalid value of maxmem: maximum memory size (0x0) must be at least the initial memory size
```
This commit sets maxmem to the initial memory size for s390x when hotplug is disabled,
resolving the error while still ensuring that memory hotplug remains off.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
The setting '-m xM,slots=y,maxmem=zM' where maxmem is from
the host's memory capacity is failing with confidential VMs
on hosts having 1T+ of RAM.
slots/maxmem are necessary for setups where the container
memory is hotplugged to the VM during container creation based
on createContainer info.
This is not the case with CoCo since StaticResourceManagement
is enabled and memory hotplug flows have not been checked.
To avoid unexpected errors with maxmem, disable slots/maxmem
in case ConfidentialGuest is requested.
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
Let's make sure that whenever we try to access the attestation agent
binary, we only proceed with the startup in case:
* the binary is found (CoCo case)
* the binary is not present (non-CoCo case)
In case of any error that's not `NotFound`, we should simply abort, as that
could mean potential tampering with the binary (which would be
reported as an EIO).
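A minimal sketch of that startup check (the path handling and function name are illustrative):
```
use std::io::ErrorKind;

fn attestation_agent_present(path: &str) -> std::io::Result<bool> {
    match std::fs::metadata(path) {
        // CoCo case: the binary is there, proceed with attestation enabled.
        Ok(_) => Ok(true),
        // Non-CoCo case: the binary simply isn't installed, proceed without it.
        Err(e) if e.kind() == ErrorKind::NotFound => Ok(false),
        // Anything else (e.g. EIO) could indicate tampering: abort startup.
        Err(e) => Err(e),
    }
}
```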
Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
Add the annotation of the OCI bundle path to store its path.
As it'll be checked within the agent policy, we need to add it
to pass agent policy validations.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
With the help of `update_ocispec_annotations`, we'll add the container
type key "io.katacontainers.pkg.oci.container_type" with its
corresponding type: "pod_sandbox" when it's the pause container and
"pod_container" for any other container.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
It updates OCI annotations by removing specified keys and adding
new ones. This function creates a new `HashMap` containing the updated
annotations, ensuring that the original map remains unchanged.
It is optimized for performance by pre-allocating the necessary capacity
and handling removals and additions efficiently.
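A sketch consistent with the description above (the actual runtime-rs signature may differ):
```
use std::collections::HashMap;

fn update_ocispec_annotations(
    annotations: &HashMap<String, String>,
    remove_keys: &[&str],
    add: HashMap<String, String>,
) -> HashMap<String, String> {
    // Pre-allocate for the surviving entries plus the additions.
    let mut updated = HashMap::with_capacity(annotations.len() + add.len());
    for (k, v) in annotations {
        if !remove_keys.contains(&k.as_str()) {
            updated.insert(k.clone(), v.clone());
        }
    }
    // Additions overwrite any key that survived removal; the input map is
    // left untouched.
    updated.extend(add);
    updated
}
```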
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
To enable access to the constants `POD_CONTAINER` and `POD_SANDBOX` from
other crates, their visibility has been updated to public. This change
addresses the previous limitation of restricted access and ensures these
values can be utilized across the codebase.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Add the annotation of nerdctl network namespace to let nerdctl know which namespace
to use when calling the selected CNI plugin with "nerdctl/network-namespace".
As it'll be checked within the agent policy, we need to add it to pass agent policy validations.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
On 63f6dcdeb9 we added the support to
download either a .xz or a .zst tarball file. However, we missed adding
the code to properly unpack a .zst tarball file.
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
agent-ctl's make check has been failing with:
```
Checking kata-agent-ctl v0.0.1 (/home/ubuntu/runner/_layout/_work/kata-containers/kata-containers/src/tools/agent-ctl)
error[E0432]: unresolved import `hypervisor::ch`
--> src/vm/vm_ops.rs:10:5
|
10 | ch::CloudHypervisor,
| ^^ could not find `ch` in `hypervisor`
|
note: found an item that was configured out
--> /home/ubuntu/runner/_layout/_work/kata-containers/kata-containers/src/runtime-rs/crates/hypervisor/src/lib.rs:30:9
|
30 | pub mod ch;
| ^^
note: the item is gated here
--> /home/ubuntu/runner/_layout/_work/kata-containers/kata-containers/src/runtime-rs/crates/hypervisor/src/lib.rs:26:1
|
26 | / #[cfg(all(
27 | | feature = "cloud-hypervisor",
28 | | any(target_arch = "x86_64", target_arch = "aarch64")
29 | | ))]
| |___^
```
Let's just make sure that we include ch conditionally as well.
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
Docker containers support specifying the shm size using the --shm-size
option and support sandbox-level shm volumes, so we've added support for
shm volumes. Since Kubernetes doesn't support specifying the shm size,
it typically uses a memory-based emptydir as the container's shm, and
its size can be specified.
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
Since the rate limiter would be shared by cloud-hypervisor,
firecracker, etc., move it from clh's config to the
hypervisor config crate, which can be shared by other VMMs.
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
Given that Rust-based VMMs like cloud-hypervisor, Firecracker, and
Dragonball naturally offer user-level block I/O rate limiting, I/O
throttling has been implemented to leverage this capability for these
VMMs. This PR specifically introduces support for cloud-hypervisor.
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
This commit introduces changes to parse the PciDeviceInfo received
in response payload when adding a network device to the VM with cloud
hypervisor. When hotplugging a network device for a given endpoint,
it rightly sets the PciPath of the plugged-in device in the endpoint.
In calls like virtcontainers/sandbox.go:AddInterface, the later call
to agent sends the pci info for uevents (instead of empty value) to
rightly update the interfaces instead of failing with `Link not found`
Signed-off-by: Sumedh Alok Sharma <sumsharma@microsoft.com>
Support for the share-rw=true parameter has been added. While this
parameter is essential for maintaining data consistency across multiple
QEMU instances sharing a backend disk image, its implementation also
serves to standardize parameters with the block device hotplug
functionality in kata-runtime/qemu.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Docker tests have been broken for a while and should be removed if we
cannot maintain those.
For now, though, let's limit it to run only with one hypervisor and
avoid wasting resources for no reason.
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
devmapper tests have been failing for a while. It's been breaking on the
kata-deploy deployment, which is most likely related to Disk Pressure.
Removing files was not enough to get the tests to run, so we'll just run
those with QEMU as a way to test fixes. Once we get the test working,
we can re-enable the other VMMs, but for now let's just not waste
resources for no reason.
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
add_allow_all_policy_to_yaml now also sets the initdata annotation. So don't overwrite the
initdata annotation that was previously set by create_coco_pod_yaml_with_annotations.
Signed-off-by: Saul Paredes <saulparedes@microsoft.com>
Delete annotation from OCI spec and sandbox config. This is done after the optional initdata annotation value has been read.
Signed-off-by: Saul Paredes <saulparedes@microsoft.com>
Bump cri-o to 1.34.0 to try and remediate security advisories
CVE-2025-0750 and CVE-2025-4437.
Note: Running
```
go get github.com/cri-o/cri-o@v1.34.0
```
seems to bump a lot of other go modules, hence the size of the
vendor diff
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
In google.golang.org/grpc v1.72.0, `DialContext` is deprecated, so
switch to using `NewClient` instead.
`grpc.WithBlock()` is deprecated and not recommended, so remove it.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
In `github.com/prometheus/common v0.62.0` expfmt.FmtText
is deprecated, so replace with `expfmt.NewFormat(expfmt.TypeTextPlain)`.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
This is following Steve's suggestion, based on what's been done on
cloud-api-adaptor.
The reason we're doing it here is because we've seen pods being evicted
due to disk pressure.
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
I've hit this when using a machine with slow internet connection, which
took ages to download the kata-cleanup image, and then helm timed out in
the middle of the cleanup, leading to the cleanup job being restarted
and then bailing with an error as the runtimeclasses that kata-deploy
tries to delete were already deleted.
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
Since the cloud hypervisor's resize vCPU is an asynchronous operation,
it's possible that the previous resize operation hasn't completed when
the request is sent, causing the current call to return an error.
Therefore, several retries can be performed to avoid this error.
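A schematic of the retry idea (the retry count, delay, and the resize callback are placeholders, not the actual Cloud Hypervisor client code):
```
use std::thread::sleep;
use std::time::Duration;

fn resize_vcpus_with_retry<F>(mut resize: F, retries: u32) -> Result<(), String>
where
    F: FnMut() -> Result<(), String>,
{
    let mut last_err = String::new();
    for _ in 0..retries {
        match resize() {
            Ok(()) => return Ok(()),
            // A previous asynchronous resize may still be in flight; back off
            // briefly and try again.
            Err(e) => {
                last_err = e;
                sleep(Duration::from_millis(100));
            }
        }
    }
    Err(last_err)
}
```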
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
A minor release of QEMU is out, so update to it for fixes and features.
QEMU changelog: https://wiki.qemu.org/ChangeLog/10.1
Notes:
* AVX support is not an option to be enabled / disabled anymore.
* Passt requires Glibc 2.40+, which means a dependency on Ubuntu 25.04
or newer, thus we're disabling it.
Signed-off-by: Alex Tibbles <alex@bleg.org>
Although versions of slab prior to 0.4.10 don't have a security
vulnerability, we can bump them all to keep things in sync.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
```
Experimental fw_cfg Device Support
This feature enables passing configuration data and files, such as VM
boot configurations (kernel, kernel cmdline, e820 memory map, and ACPI
tables), from the host to the guest. (#7117)
Experimental ivshmem Device Support
Support for inter-VM shared memory has been added. For more information,
please refer to the ivshmem documentation. (#6703)
Firmware Boot Support on riscv64
In addition to direct kernel boot, firmware boot support has been added
on riscv64 hosts. (#7249)
Increased vCPU Limit on x86_64/kvm
The maximum number of supported vCPUs on x86_64 hosts using KVM has been
raised from 254 to 8192. (#7299)
Improved Block Performance with Small Block Sizes
Performance for virtio-blk with small block sizes (16KB and below)
is enhanced via submitting async IO requests in batches. (#7146)
Faster VM Pause Operation
The VM pause operation now is significantly faster particularly for VMs
with a large number of vCPUs. (#7290)
Updated Documentation on Windows Guest Support
Our Windows documentation now includes instructions to run Windows 11
guests, in addition to Windows Server guests. (#7218)
Policy on AI Generated Code
We will decline any contributions known to contain contents generated or
derived from using Large Language Models (LLMs). Details can be found
in our contributing documentation. (#7162)
Removed SGX Support
The SGX support has been removed, as announced in the deprecation notice two
release cycles ago. (#7093)
Notable Bug Fixes
Seccomp filter fixes with glibc v2.42 (#7327)
Various fixes related to (#7331, #7334, #7335)
```
From https://github.com/cloud-hypervisor/cloud-hypervisor/releases/tag/v48.0
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
Exclude 'cgroup' namespace from namespace checks during `allow_linux`
validation. This complements the existing exclusion of the 'network'
namespace.
As runtime-rs has specific cgroup namespace configurations, excluding it from
policy validation ensures parity between the runtime-rs and runtime-go implementations.
This allows focusing validation on critical namespaces like PID, IPC, and MNT, while
avoiding potential policy mismatches due to the different cgroup namespace management
by runtime-rs.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Add `normalize_namespace_type()` function to map "mount"
(case-insensitive) to "mnt" while keeping other values unchanged.
This ensures namespace comparisons treat "mount" and "mnt" as
equivalent.
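A minimal sketch of that mapping (the signature is assumed):
```
fn normalize_namespace_type(ns_type: &str) -> String {
    // Treat "mount" (any case) and "mnt" as the same namespace type;
    // every other value is returned unchanged.
    if ns_type.eq_ignore_ascii_case("mount") {
        "mnt".to_string()
    } else {
        ns_type.to_string()
    }
}
```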
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
- Use set comparison to ignore ordering differences when matching
capabilities.
- Add normalization to strip "CAP_" prefix to support both CAP_XXX and
XXX formats.
This makes capability matching more robust against different ordering
and naming formats.
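A sketch of the normalization and set comparison (the helper name and exact normalization are illustrative):
```
use std::collections::HashSet;

fn capabilities_match(policy: &[String], actual: &[String]) -> bool {
    let normalize = |caps: &[String]| -> HashSet<String> {
        caps.iter()
            // Accept both "CAP_NET_ADMIN" and "NET_ADMIN" spellings.
            .map(|c| c.strip_prefix("CAP_").unwrap_or(c.as_str()).to_uppercase())
            .collect()
    };
    // Comparing sets ignores ordering differences between the two lists.
    normalize(policy) == normalize(actual)
}
```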
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
The previous configuration with overlayfs is incorrect, which causes the agent
policy validation to fail. It is also different from runtime-go's
configuration. In this patch, we'll correct its fstype to overlay and
align with the Go runtime on this matter.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
When hot-removing a block device, the kernel must first unmount the
device and then destroy it on the VM. Therefore, a
prepare_remove_block_device procedure must be added to wait for the
kernel to unmount the device before destroying it on the VM.
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
When hot-removing a block device, the kernel must first remove the
device and then destroy it on the VM. Therefore, a
prepare_remove_block_device procedure must be added to wait for the
kernel to unmount the device before destroying it on the VM.
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
Since Dragonball's MMIO bus only supports legacy interrupts, while
the PCI bus supports MSIX interrupts, to improve performance for block
devices, virtio-blk devices are set to PCI bus mode by default.
We had tested the virtio-blk's performance using the fio with the
following commands:
fio -filename=./test -direct=1 -iodepth 32 -thread -rw=randrw
-rwmixread=50 -ioengine=libaio -bs=4k -size=10G -numjobs=4
-group_reporting -name=mytest
When using the legacy interrupt, the io test is as below:
read : io=20485MB, bw=195162KB/s, iops=48790, runt=107485msec
write: io=20475MB, bw=195061KB/s, iops=48765, runt=107485msec
Once switched to MSIX interrupt, the io test is as below:
read : io=20485MB, bw=260862KB/s, iops=65215, runt= 80414msec
write: io=20475MB, bw=260727KB/s, iops=65181, runt= 80414msec
We can get 34% performance improvement.
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
Added support for PCI buses for virtio-blk devices. This commit adds
support for PCI buses for both cold-plugged and hot-plugged
virtio-blk devices. Furthermore, during hot-plugging, support is added for
synchronous waiting for hot-plug completion. This ensures that multiple devices
can be hot-plugged successfully without causing upcall busy errors.
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
In order to support the pci bus for virtio devices,
move the pci system manager from the vfio manager to
the device manager, so it can be shared by both
vfio and virtio pci devices.
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
Use @DEFENABLEANNOTATIONS_COCO@ in configuration-qemu-snp.toml,
for consistency with the tdx and coco-dev configuration files.
k8s-initdata.bats was failing during CI on SNP without this change,
because the cc_init_data annotation was disabled.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
This runs Zizmor on pushes to any branch, not just main.
This is useful for:
1. Testing changes in feature branches with the manually-triggered CI.
2. Forked repos that may use a different name than "main" for their
default branch.
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
Test case for `get_uds_with_sid` with an empty run directory would not
hit the 0 match arm, i.e. "sandbox with the provided prefix {short_id:?}
is not found", because `get_uds_with_sid` will try to create the
directory with provided short id before detecting `target_id`.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Preset directory `kata98654sandboxpath1` will produce more than one
`target_id` in `get_uds_with_sid`, which causes the test to fail. Remove
that directory to make this test work.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
`test_arch_guest_protection_*` test cases get triggered simultaneously,
which is impossible for a single machine to pass. Modify the tests to detect
the protection file before proceeding.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Case 4 of `test_execute_hook` would fail because `args` could not be
empty, while providing `build_oci_hook` with `vec![]` would result in
empty args at the execution stage.
Modify `build_oci_hook` to set args to `None` when an empty vector is
provided.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
The variable `addr` was used to store the log level string read from the
`LOG_LEVEL_ENV_VAR` environment variable. This name is misleading as it
implies a network address rather than a log level value.
This commit renames the variable to `level` to more accurately reflect
its purpose, improving the overall readability of the configuration code.
A minor whitespace formatting fix in a macro is also included.
Signed-off-by: Liang, Ma <liang3.ma@intel.com>
A new internal nightly test has been established for runtime-rs.
This commit adds a new entry `cc-se-e2e-tests-rs` to the existing
matrix and renames the existing entry `cc-se-e2e-tests` to
`cc-se-e2e-tests-go`.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
Tests skipped because tests for `qemu-se` are skipped:
- k8s-empty-dirs.bats
- k8s-inotify.bats
- k8s-shared-volume.bats
Tests skipped because tests for `qemu-runtime-rs` are skipped:
- k8s-block-volume.bats
- k8s-cpu-ns.bats
- k8s-number-cpus.bats
Let's skip the tests above to run the nightly test
for runtime-rs on IBM SEL.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
There are still some issues to be addressed before we can mark `make test`
for `libs` as required. Mark this case as not required temporarily.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
SNP launch was failing after the confidential guest kernel was upgraded to 6.16.1.
Enabled the required config option CONFIG_MTRR.
Enabled the required config option CONFIG_X86_PAT.
Fixes: #11779
Signed-off-by: Ryan Savino <ryan.savino@amd.com>
Bump the version of runtime-rs' hypervisor crate
to upgrade (indirectly) protobuf and remediate vulnerability
RUSTSEC-2024-0437.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
The previous documentation of the default create_container_timeout
stated 30,000 milliseconds, which does not keep alignment with runtime-go.
In this commit, we'll change it to 30 seconds.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Since it aligns with the create_container_timeout definition in
runtime-go, we need to set the value in configuration.toml in seconds,
not milliseconds. We must also convert it to milliseconds when the
configuration is loaded for request_timeout_ms.
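In other words, the conversion at load time amounts to (illustrative variable names):
```
fn main() {
    // configuration.toml value, in seconds (aligned with runtime-go).
    let create_container_timeout_secs: u64 = 30;
    // Converted once when the configuration is loaded.
    let request_timeout_ms: u64 = create_container_timeout_secs * 1000;
    assert_eq!(request_timeout_ms, 30_000);
}
```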
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
It's possible that tests take a long time to run and hence that the access
token expires before we delete the cluster. In this case `az cli` will try
to refresh the access token using the OIDC token (which will have
definitely also expired because its lifetime is ~5 minutes).
To address this we refresh the OIDC token manually instead. Automatic
refresh isn't supported per Azure/azure-cli#28708.
Fixes: #11758
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
Introduce new test case in k8s-iptables.bats which verifies that
workloads can configure iptables in the UVM.
Users discovered that they weren't able to do this for common usecases
such as istio. Proper support for this should be built into UVM
kernels. This test ensures that current and future kernel
configurations don't regress this functionality.
Signed-off-by: Cameron Baird <cameronbaird@microsoft.com>
Currently, the UVM kernel fails for istio deployments (at least with the
version we tested, 1.27.0). This is because the istio sidecar container
uses ip6tables and the required kernel configs are not built-in:
```
iptables binary ip6tables has no loaded kernel support and cannot be used, err: exit status 3 out: ip6tables v1.8.10 (legacy):
can't initialize ip6tables table `filter': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
```
Signed-off-by: Cameron Baird <cameronbaird@microsoft.com>
In certain scenarios, particularly under CoCo/Agent Policy enforcement,
the default initial value of `Linux.Resources.Devices` is considered
non-compliant, leading to container creation failures. To address this
issue and ensure consistency with the behavior in `runtime-go`, this
commit removes the default value of `Linux.Resources.Devices` from the
OCI Spec.
This cleanup ensures that the OCI Spec aligns with runtime expectations
and prevents policy violations during container creation.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Only the StartContainer hook needs to be reserved for execution in the
guest, but we also make sure that the setting happens only when the OCI
Hooks do exist, otherwise we do nothing.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
In k8s-guest-pull-image.bats, `failed to pull image` is
not caught by assert_logs_contain() for runtime-rs.
To ensure consistency, this commit changes `failed` to
`Failed`, which works for both runtimes.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
Mount validation for sealed secret requires the base path to start with
`/run/kata-containers/shared/containers`. Previously, it used
`/run/kata-containers/sandbox/passthrough`, which caused test
failures where volume mounts are used.
This commit renames the path to satisfy the validation check.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
ef642fe890 added a special case to avoid
moving cgroups that are on the "default" slice in case of deletion.
However, this special check should be done in the Parent() method
instead, which ensures that the default resource controller ID is
returned, instead of ".".
Fixes: #11599
Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
- Set guest Storage.options for block rootfs to empty (do not propagate host mount options).
- Align behavior with Go runtime: only add xfs nouuid when needed.
Signed-off-by: Caspian443 <scrisis843@gmail.com>
We moved to `.zst`, but users still use the upstream kata-manager to
download older versions of the project, thus we need to support both
suffixes.
Fixes: #11714
Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
Similar to what we've done for Cloud Hypervisor in the commit
9f76467cb7, we're backporting a runtime-rs
feature that would be beneficial to have as part of the go runtime.
This allows users to use virtio-balloon for the hypervisor to reclaim
memory freed by the guest.
Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
The default suggestion for top-level permissions was
`contents: read`, but scorecard notes anything other than empty,
so try updating it and see if there are any issues. I think it's
only needed if we run workflows from other repos.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Since the previous tightening a few workflow updates have
gone in and the zizmor job isn't flagging them as issues,
so address this to remove potential attack vectors
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
This reverts commit cb5f143b1b, as the
cached packages have been regenerated after the switch to using zstd.
Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
As part of the go 1.24.6 bump there are errors about the incorrect
use of Errorf, so switch to the non-formatting version, or add
the format string as appropriate.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
golang 1.25 has been released, so 1.23 is EoL,
and we should update to ensure we don't end up with security issues.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Update the two workflows that used setup-go to
instead call `install_go.sh` script, which handles
installing the correct version of golang
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
`${kernel_name,,}` is bash 4.0 and not POSIX compliant, so it doesn't
work on macOS; switch to `tr`, which is more widely
supported.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
In #11693 the cc_init_data annotation was changed to be hypervisor
scoped, so each hypervisor needs to explicitly allow it in order to
use it now; add this to both the go and rust runtime's remote
configurations.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
We need to get the root_hash.txt file from the image build, otherwise
there's no way to build the shim using those values for the
configuration files.
Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
Although the compress ratio is not as optimal as using xz, it's way
faster to compress / uncompress, and it's "good enough".
This change is not small, but it's still self-contained, and has to get
in at once, in order to help bisects in the future.
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
As 3.18 is already EOL.
We need to add `--break-system-packages` to enforce the installation of
the yq version that we rely on. The tests have shown
that no breakage actually happens, fortunately.
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
Currently, we change vm_rootfs_driver as the initdata device driver
with block_device_driver.
Fixes: #11697
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
We also need to support initdata even when the
platform is detected as NonProtection, usually called a non-TEE
host. In these cases, there's no need to validate
`confidential_guest=true`; we trust the result of the method
`available_guest_protection()?`.
Fixes: #11697
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
The default `reconnect_timeout` (3 seconds) was found to be insufficient for
IBM SEL when using VSOCK. This commit updates the timeouts as follows:
- `dial_timeout_ms`: Set to 90ms to match the value used in go-runtime for IBM SEL
- `reconnect_timeout_ms`: Increased to 5000ms based on empirical testing
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
Add support for the `InitData` resource config on IBM SEL,
so that a corresponding block device is created and the
initdata is passed to the guest through this device.
Note that we skip passing the initdata hash via QEMU’s
object, since the hypervisor does not yet support this
mechanism for IBM SEL. It will be introduced separately
once QEMU adds the feature.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
Linux v6.16 brings some useful features for the confidential guests.
Most importantly, it adds an ABI to extend runtime measurement registers
(RTMR) for the TEE platforms supporting it. This is currently enabled
on Intel TDX only.
The kernel version bump from v6.12.x to v6.16 forces some CONFIG_*
changes too:
MEMORY_HOTPLUG_DEFAULT_ONLINE was dropped in favor of more config
choices. The equivalent option is MHP_DEFAULT_ONLINE_TYPE_ONLINE_AUTO.
X86_5LEVEL was made unconditional. Since this was only a TDX
configuration, dropping it completely as part of v6.16 is fine.
CRYPTO_NULL2 was merged with CRYPTO_NULL. This was only added in
confidential guest fragments (cryptsetup) so we can drop it in this update.
CRYPTO_FIPS now depends on CRYPTO_SELFTESTS which further depends on
EXPERT which we don't have. Enable both in a separate config fragment
for confidential guests. This can be moved to a common setting once
other targets bump to post v6.16.
CRYPTO_SHA256_SSE3 arch optimizations were reworked and are now enabled
by default. Instead of adding it to whitelist.conf, just drop it completely
since it was only enabled as part of "measured boot" feature for
confidential guests. CONFIG_CRYPTO_CRC32_S390 was reworked the same way.
In this case, whitelist.conf is needed.
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
This reverts commit ede773db17.
`cc_init_data` should be under a hypervisor category because
it is a hypervisor-specific feature. The annotation including
`runtime` also breaks a logic for `is_annotation_enabled()`.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
We need to include `cc_init_data` in the enable_annotations
array to pass the data. Since initdata is a CoCo-specific
feature, this commit introduces a new array,
`DEFENABLEANNOTATIONS_COCO`, which contains the required
string and applies it to the relevant CoCo configuration.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
Currently, there are 2 issues for the empty initdata annotation
test:
- Empty string handling
- "\[CDH\] \[ERROR\]: Get Resource failed" not appearing
`add_hypervisor_initdata_overrides()` does not handle
an empty string, which might lead to panic like:
```
called `Result::unwrap()` on an `Err` value: gz decoder failed
Caused by:
failed to fill whole buffer
```
This commit makes the function return an empty string
for a given empty input and updates the assertion string
to one that appears in both go-runtime and runtime-rs.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
Currently, runtime-rs related code within the libs directory lacks
sufficient CI protection. We frequently observe the following issues:
- Inconsistent Code Formatting: Code that has not been properly formatted
is merged.
- Failing Tests: Code with failing unit or integration tests is merged.
To address these issues, we need to introduce stricter CI checks for the
libs directory. This may specifically include:
- Code Formatting Checks
- Mandatory Test Runs
Fixes: #11512
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
To make it aligned with the setting of runtime-go, we should keep
it empty when users don't enable it and set its specific path.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
We need to make sure that we use the latest kernel
and rebuild the initrd and image for the nvidia-gpu
use-cases, otherwise the tests will fail since
the modules are not built against the new kernel and
they simply fail to load.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
OSV-Scanner highlights go.mod references to go stdlib 1.23.0 contrary to intention in versions.yaml, so synchronize them.
Make a converse comment for versions.yaml.
Fixes: #11700
Signed-off-by: Alex Tibbles <alex@bleg.org>
Let's rename the runtime-rs initdata annotation from
`io.katacontainers.config.runtime.cc_init_data` to
`io.katacontainers.config.hypervisor.cc_init_data`.
Rationale:
- initdata itself is a hypervisor-specific feature
- the new name aligns with the annotation handling logic:
c92bb1aa88/src/libs/kata-types/src/annotations/mod.rs (L514-L968)
This commit updates the annotation for go-runtime and tests accordingly.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
Enable testing of initdata on the qemu-coco-dev and qemu-se
runtime classes, so we can validate the function on s390x
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
This commit supports the seccomp_sandbox option from the configuration.toml file
and adds the logic for appending command-line arguments based on this new configuration parameter.
Fixes: #11524
Signed-off-by: wangxinge <wangxinge@bupt.edu.cn>
Previously it was reusing the ovmf, which runs into some
issues with path checking, so move to aavmf as it should
be.
Signed-off-by: Kevin Zhao <kevin.zhao@linaro.org>
Read only the sealed secret prefix instead of the whole file.
Improves performance and reduces memory usage in I/O-heavy environments.
Fixes: #11643
Signed-off-by: Park.Jiyeon <jiyeonnn2@icloud.com>
Depending on the platform configuration, users might want to
set a more secure policy than the QEMU default.
Signed-off-by: Paul Meyer <katexochen0@gmail.com>
This change introduces a new command line option `--vm`
to boot up a pod VM for testing. The tool connects with
kata agent running inside the VM to send the test commands.
The tool uses `hypervisor` crates from runtime-rs for VM
lifecycle management. Current implementation supports
Qemu & Cloud Hypervisor as VMMs.
In summary:
- tool parses the VMM specific runtime-rs kata config file in
/opt/kata/share/defaults/kata-containers/runtime-rs/*
- prepares and starts a VM using runtime-rs::hypervisor vm APIs
- retrieves agent's server address to setup connection
- tests the requested commands & shuts down the VM
Fixes #11566
Signed-off-by: Sumedh Alok Sharma <sumsharma@microsoft.com>
The seccomp feature for Cloud Hypervisor and Firecracker is enabled by default.
This commit introduces an option to disable seccomp for both and updates the built-in configuration.toml file accordingly.
Fixes: #11535
Signed-off-by: wangxinge <wangxinge@bupt.edu.cn>
Route kata-shim logs directly to systemd-journald under 'kata' identifier.
This refactoring enables `kata-shim` logs to be properly attributed to
'kata' in systemd-journald, instead of inheriting the 'containerd'
identifier.
Previously, `kata-shim` logs were challenging to filter and debug as
they appeared under the `containerd.service` unit.
This commit resolves this by:
1. Introducing a `LogDestination` enum to explicitly define logging
targets (File or Journal).
2. Modifying logger creation to set `SYSLOG_IDENTIFIER=kata` when
logging to Journald.
3. Ensuring type safety and correct ownership handling for different
logging backends.
This significantly enhances the observability and debuggability of Kata
Containers, making it easier to monitor and troubleshoot Kata-specific
events.
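As a rough illustration of the destination type described in point 1 (names follow the commit description; the surrounding shim wiring is not shown):
```
// Illustrative only: an explicit destination type lets the journald backend
// carry its own identifier instead of inheriting containerd's.
enum LogDestination {
    // Log to a plain file on the host.
    File(std::path::PathBuf),
    // Log to systemd-journald, tagged with SYSLOG_IDENTIFIER, e.g. "kata".
    Journal { syslog_identifier: String },
}
```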
Fixes: #11590
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
After moving the Arm64 CI nodes to new ones, we faced an interesting
timeout issue when executing the command with crictl runp;
the error is usually: code = DeadlineExceeded
Fixes: #11662
Signed-off-by: Kevin Zhao <kevin.zhao@linaro.org>
This series should make runtime-rs's vcpu allocation behaviour match the
behaviour of runtime-go, so we can now enable pertinent tests which were
skipped so far due to the difference between both shims.
Signed-off-by: Pavel Mores <pmores@redhat.com>
Configuration information is adjusted after loading from file but so
far, there has been no similar check for configuration coming from
annotations. This commit introduces re-adjusting config after
annotations have been processed.
A small refactor was necessary as a prerequisite which introduces
function TomlConfig::adjust_config() to make it easier to invoke
the adjustment for a whole TomlConfig instance. This function is
analogous to the existing validate() function.
The immediate motivation for this change is to make sure that 0
in "default_vcpus" annotation will be properly adjusted to 1 as
is the case if 0 is loaded from a config file. This is required
to match the golang runtime behaviour.
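A minimal sketch of the resulting ordering; `adjust_config()` is the function mentioned above, while the field and helper names around it are purely illustrative:
```
struct TomlConfig {
    default_vcpus: i32,
}

impl TomlConfig {
    fn adjust_config(&mut self) {
        // 0 (or negative) vCPUs is not usable; bump it to 1, exactly as
        // happens when 0 is loaded from the configuration file.
        if self.default_vcpus <= 0 {
            self.default_vcpus = 1;
        }
    }
}

fn apply_default_vcpus_annotation(config: &mut TomlConfig, value: i32) {
    config.default_vcpus = value;
    // Re-adjust after annotation processing so annotation values receive the
    // same normalization as values loaded from the config file.
    config.adjust_config();
}
```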
Signed-off-by: Pavel Mores <pmores@redhat.com>
Also included (as commented out) is a test that does not pass although
it should. See source code comment for explanation why fixing this seems
beyond the scope of this PR.
Signed-off-by: Pavel Mores <pmores@redhat.com>
This commit focuses purely on the formal change of type. If any subsequent
changes in semantics are needed they are purposely avoided here so that the
commit can be reviewed as a 100% formal and 0% semantic change.
Signed-off-by: Pavel Mores <pmores@redhat.com>
This commit addresses a part of the same problem as PR #7623 did for the
golang runtime. So far we've been rounding up individual containers'
vCPU requests and then summing them up which can lead to allocation of
excess vCPUs as described in the mentioned PR's cover letter. We address
this by reversing the order of operations: we sum the (possibly fractional)
container requests and only then round up the total.
We also align runtime-rs's behaviour with runtime-go in that we now
include the default vcpu request from the config file ('default_vcpu')
in the total.
We diverge from PR #7623 in that `default_vcpu` is still treated as an
integer (this will be a topic of a separate commit), and that this
implementation avoids relying on 32-bit floating point arithmetic as there
are some potential problems with using f32. For instance, some numbers
commonly used in decimal, notably all of single-decimal-digit numbers
0.1, 0.2 .. 0.9 except 0.5, are periodic in binary and thus fundamentally
not representable exactly. Arithmetic performed on such numbers can lead
to surprising results, e.g. adding 0.1 ten times gives 1.0000001, not 1,
and taking a ceil() results in 2, clearly a wrong answer in vcpu
allocation.
So instead, we take advantage of the fact that container requests happen
to be expressed as a quota/period fraction so we can sum up quotas,
fundamentally integral numbers (possibly fractional only due to the need
to rewrite them with a common denominator) with much less danger of
precision loss.
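To make the integer-based approach concrete, here is a minimal sketch (not the actual runtime-rs code) that sums quota/period requests without floating point; periods are assumed to be non-zero:
```
fn gcd(a: u64, b: u64) -> u64 {
    if b == 0 { a } else { gcd(b, a % b) }
}

fn lcm(a: u64, b: u64) -> u64 {
    a / gcd(a, b) * b
}

/// Sum per-container (quota, period) CPU requests exactly and round up once.
fn vcpus_needed(requests: &[(u64, u64)], default_vcpus: u64) -> u64 {
    // Rewrite all quotas over a common period so their sum stays integral.
    let common_period = requests.iter().map(|&(_, p)| p).fold(1, lcm);
    let total_quota: u64 = requests
        .iter()
        .map(|&(q, p)| q * (common_period / p))
        .sum();
    // ceil(total_quota / common_period) using integer arithmetic only.
    let rounded = (total_quota + common_period - 1) / common_period;
    default_vcpus + rounded
}
```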
Signed-off-by: Pavel Mores <pmores@redhat.com>
When hot-plugging CPUs on QEMU, we send a QMP command with JSON
arguments. QEMU 9.2 recently became more strict[1] in enforcing the
JSON schema for QMP parameters. As a result, running Kata Containers
with QEMU 9.2 results in a message complaining that the core-id
parameter is expected to be an integer:
```
qmp hotplug cpu, cpuID=cpu-0 socketID=1, error:
QMP command failed:
Invalid parameter type for 'core-id', expected: integer
```
Fix that by changing the core-id, socket-id and thread-id to be
integer values.
[1]: be93fd5372
Fixes: #11633
Signed-off-by: Christophe de Dinechin <dinechin@redhat.com>
As we have changed the initdata annotation definition, we accordingly also
need to correct its const definition, KATA_ANNO_CFG_RUNTIME_INIT_DATA.
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
This adds SECURITY.md to the list of GH-native files that should be excluded by
the reference checker.
Today this is useful for downstreams who already have a SECURITY.md file for
compliance reasons. When Kata onboards that file, this commit will also be
required.
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
When the network interface provisioned by the CNI has static ARP table entries,
the runtime calls AddARPNeighbor to propagate these to the agent. As of today,
these calls are simply rejected.
In order to allow the calls, we do some sanity checks on the arguments:
We must ensure that we don't unexpectedly route traffic to the host that was
not intended to leave the VM. In a first approximation, this applies to
loopback IPs and devices. However, there may be other sensitive ranges (for
example, VPNs between VMs), so there should be some flexibility for users to
restrict this further. This is why we introduce a setting, similar to
UpdateRoutes, that allows restricting the neighbor IPs further.
The only valid state of an ARP neighbor entry is NUD_PERMANENT, which has a
value of 128 [1]. This is already enforced by the runtime.
According to rtnetlink(7), valid flag values are 8 and 128, respectively [2],
thus we allow any combination of these.
[1]: https://github.com/torvalds/linux/blob/4790580/include/uapi/linux/neighbour.h#L72
[2]: https://github.com/torvalds/linux/blob/4790580/include/uapi/linux/neighbour.h#L49C20-L53
Fixes: #11664
Signed-off-by: Markus Rudy <mr@edgeless.systems>
To make it work within CI, we align with kata-runtime's definition,
"io.katacontainers.config.runtime.cc_init_data".
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Print more details about the behavior of "kubectl logs", trying to understand
errors like:
https://github.com/kata-containers/kata-containers/actions/runs/16662887973/job/47164791712
not ok 1 Check the number vcpus are correctly allocated to the sandbox
(in test file k8s-sandbox-vcpus-allocation.bats, line 37)
`[ `kubectl logs ${pods[$i]}` -eq ${expected_vcpus[$i]} ]' failed with status 2
No resources found in kata-containers-k8s-tests namespace.
...
k8s-sandbox-vcpus-allocation.bats: line 37: [: -eq: unary operator expected
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
This auto-detects the repo by default (instead of having to specify
KATA_DEV_MODE=true) so that forked repos can leverage the static-checks.yaml CI
check without modification.
An alternative would have been to pass the repo in static-checks.yaml. However,
because of the matrix, this would've changed the check name, which is a pain to
handle in either the gatekeeper/GH UI.
Example fork failure:
https://github.com/microsoft/kata-containers/actions/runs/16656407213/job/47142421739#step:8:75
I've tested this change to work in a fork.
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
In order to have a reproducible code generation process, we need to pin
the versions of the tools used. This is accomplished easiest by
generating inside a container.
This commit adds a container image definition with fixed dependencies
for Golang proto/ttrpc code generation, and changes the agent Makefile
to invoke the update-generated-proto.sh script from within that
container.
Signed-off-by: Markus Rudy <mr@edgeless.systems>
The generated Go bindings for the agent are out of date. This commit
was produced by running
src/agent/src/libs/protocols/hack/update-generated-proto.sh with
protobuf compiler versions matching those of the last run, according to
the generated code comments.
Since there are new RPC methods, those needed to be added to the
HybridVSockTTRPCMockImp.
Signed-off-by: Markus Rudy <mr@edgeless.systems>
Updated versions.yaml to use Firecracker v1.12.1.
Replaced firecracker and jailer binaries under /opt/kata/bin.
Tested with kata-fc runtime on Kubernetes:
- Deployed pods using gitpod/openvscode-server
- Verified microVM startup, container access, and Firecracker usage
- Confirmed Firecracker and jailer versions via CLI
Signed-off-by: Kumar Mohit <68772712+itsmohitnarayan@users.noreply.github.com>
- "confidential_emptyDir" becomes "emptyDir" in the settings file.
- "confidential_configMap" becomes "configMap" in settings.
- "mount_source_cpath" becomes "cpath".
- The new "root_path" gets used instead of the old "cpath" to point to
the container root path..
- "confidential_guest" is no longer used. By default it gets replaced
by "enable_configmap_secret_storages"=false, because CoCo is using
CopyFileRequest instead of the Storage data structures for ConfigMap
and/or Secret volume mounts during CreateContainerRequest.
- The value of "guest_pull" becomes true by default.
- "image_layer_verification" is no longer used - just CoCo's guest pull
is supported.
- The Request input files from unit tests are changing to reflect the
new default settings values described above.
- tests/integration/kubernetes/tests_common.sh adjusts the settings for
platforms that are not set-up for CoCo during CI (i.e., platforms
other than SNP, TDX, and CoCo Dev).
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Skip pulling container image layers when guest-pull=true. The contents
of these layers were ignored due to:
- #11162, and
- tarfs snapshotter support having been removed from genpolicy.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
AKS Confidential Containers are using the tarfs snapshotter. CoCo
upstream doesn't use this snapshotter, so remove this Policy complexity
from upstream.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
`mem-agent` here is now a library and does not contain examples; ignore
Cargo.lock to get rid of the untracked-file noise produced by `cargo run` or
`cargo test`.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Re-generates the client code against Cloud Hypervisor v47.0.
Note: The client code of cloud-hypervisor's OpenAPI is automatically
generated by openapi-generator.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
`MmapRegion` is only used while `virtio-fs` is enabled when testing
dragonball; gate the import behind the `virtio-fs` feature.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Some variables go unused if certain features are not enabled; use
`#[allow(unused)]` to suppress those warnings for the time being.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
`VcpuManagerError` is only needed when the `host-device` feature is enabled,
so gate the import behind that feature.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
The code inside the `test_mac_addr_serialization_and_deserialization` test does
not actually require the `with-serde` feature, so remove the
assertion here to enable this test.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Add full cgroups support on the host. Cgroups are managed by `FsManager` and
`SystemdManager`. As the names imply, the `FsManager` manages cgroups
through cgroupfs, while the `SystemdManager` manages cgroups through
systemd. Both managers support cgroup v1 and cgroup v2.
Two types of cgroups path are supported:
1. For colon paths, for example "foo.slice:bar:baz", the runtime manages
cgroups by `SystemdManager`;
2. For relative/absolute paths, the runtime manages cgroups by
`FsManager`.
vCPU threads are added into the sandbox cgroups only with cgroup v1 + cgroupfs;
in the other combinations (cgroup v1 + systemd, cgroup v2 + cgroupfs,
cgroup v2 + systemd) the VMM process is added into the cgroups.
Systemd doesn't provide a way to add a thread to a unit, so `add_thread()`
in `SystemdManager` is equivalent to `add_process()`.
Cgroup v2 supports threaded mode. However, we should enable threaded mode
from leaf node to the root node (`/`) iteratively [1]. This means the
runtime needs to modify the cgroups created by container runtime (e.g.
containerd). Considering cgroupfs + cgroup v2 is not a common combination,
its behavior is aligned with systemd + cgroup v2, which is not allowed to
manage process at the thread level.
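A minimal sketch of the path-based dispatch described above (names are illustrative, not the actual runtime-rs API):
```
/// Which cgroup manager handles a given cgroups path.
enum CgroupManagerKind {
    Systemd, // "slice:prefix:name" style paths, e.g. "foo.slice:bar:baz"
    Fs,      // relative or absolute cgroupfs paths, e.g. "/kata/<sandbox-id>"
}

fn manager_for_path(cgroups_path: &str) -> CgroupManagerKind {
    // A colon-separated path selects the systemd manager; anything else is
    // managed directly through cgroupfs.
    if cgroups_path.contains(':') {
        CgroupManagerKind::Systemd
    } else {
        CgroupManagerKind::Fs
    }
}
```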
1: https://www.kernel.org/doc/html/v4.18/admin-guide/cgroup-v2.html#threads
Fixes: #11356
Signed-off-by: Xuewei Niu <niuxuewei.nxw@antgroup.com>
For some reasons, we should first make it align with runtime-go; this
commit does that work.
Fixes #11543
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
The actual memory usage on the host is equal to the hypervisor memory usage
plus the user memory usage. The OOM killer might kill the shim when the
memory limit on the host is the same as that of the container and the container
consumes all available memory. In this case, containerd will never
receive an OOM event, but gets a "task exit" event. That makes the `k8s-oom.bats`
test fail.
The fix is to add a new container to increase the sandbox memory limit.
When the container "oom-test" is killed by OOM killer, there is still
available memory for the shim, so it will not be killed.
Signed-off-by: Xuewei Niu <niuxuewei.nxw@antgroup.com>
When enabling systemd cgroup driver and sandbox cgroup only, the shim is
under a systemd unit. When the unit is stopping, systemd sends SIGTERM to
the shim. The shim can't exit immediately, as there are some cleanups to
do. Therefore, ignoring SIGTERM is required here. The shim should complete
the work within a period (Kata sets it to 300s by default). Once a timeout
occurs, systemd will send SIGKILL.
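A minimal sketch of that behaviour, assuming a tokio-based shim; the real signal handling in the shim may be wired differently:
```
use tokio::signal::unix::{signal, SignalKind};

// Install a SIGTERM handler that deliberately does nothing, so in-flight
// cleanup can finish; systemd escalates to SIGKILL after its stop timeout.
async fn ignore_sigterm() -> std::io::Result<()> {
    let mut sigterm = signal(SignalKind::terminate())?;
    tokio::spawn(async move {
        while sigterm.recv().await.is_some() {
            // Ignore: shutdown is driven by the normal cleanup path.
        }
    });
    Ok(())
}
```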
Signed-off-by: Xuewei Niu <niuxuewei.nxw@antgroup.com>
Our CI keeps on getting
```
jq: error (at <stdin>:1): Cannot index string with string "tag_name"
```
during the install dependencies phase, which I suspect
might be due to github rate limits being reduced, so try
to pass through the `GH_TOKEN` env and use it in the auth header.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
It is important that we continue to support VirtIO-SCSI. While
VirtIO-BLK is a common choice, virtio-scsi offers significant
performance advantages in specific scenarios, particularly when
utilizing iothreads and with NVMe Fabrics.
By supporting both virtio-blk and virtio-scsi we maintain flexibility and
choice, giving users the ability to pick the optimal storage interface
(virtio-blk or virtio-scsi) based on their specific workload requirements
and hardware configurations.
The virtio-scsi controller is already created when the QEMU VM starts with the
block device driver set to `virtio-scsi`. This commit does blockdev_add for
the backend block device and device_add for the frontend virtio-scsi device via QMP.
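For illustration only, this is roughly the JSON shape of the two QMP commands mentioned above, built here with serde_json; the real runtime-rs code goes through its QMP client and the exact argument set may differ:
```
use serde_json::json;

fn main() {
    // Add the backend block device node...
    let blockdev_add = json!({
        "execute": "blockdev-add",
        "arguments": {
            "driver": "raw",
            "node-name": "drive-0",
            "file": { "driver": "file", "filename": "/path/to/disk.img" }
        }
    });
    // ...then attach the frontend virtio-scsi disk to the existing controller.
    let device_add = json!({
        "execute": "device_add",
        "arguments": {
            "driver": "scsi-hd",
            "drive": "drive-0",
            "id": "virtio-scsi-disk-0",
            "lun": 0
        }
    });
    println!("{blockdev_add}\n{device_add}");
}
```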
Fixes #11516
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
The block device index is a very important unique id of a block device
and can indicate a block device equivalently to device_id.
Since the index is required when calculating the SCSI LUN, and to reduce
useless arguments when reusing `hotplug_block_device`, we'd better
replace the device_id with the block device index.
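As an illustration of how an index can stand in for a SCSI address, here is a sketch of the conventional index-to-(scsi-id, lun) mapping; treat it as illustrative rather than the exact runtime-rs code:
```
/// Map a block device index to a (scsi-id, lun) pair; each SCSI target
/// carries up to 256 LUNs, so the index splits cleanly into the two parts.
fn scsi_addr_from_index(index: u32) -> (u32, u32) {
    (index / 256, index % 256)
}
```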
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
In this commit, block device AIO is introduced within hotplug_block_device
for QEMU via QMP, and "io_uring" is set as the default.
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
It should be correctly handled within the device manager when doing
create_block_device if the driver_option is virtio-scsi.
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
It supports handling SCSI devices when the block device driver is `scsi`,
and it will ensure a correct storage source with the LUN.
Fixes #11516
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
It's used to help discover SCSI devices inside the guest, and also adds a
new const value `KATA_SCSI_DEV_TYPE` to help pass information.
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
AIO is the I/O mechanism used by QEMU, with the following options:
- threads
  Pthread-based disk I/O.
- native
  Native Linux I/O.
- io_uring (default mode)
  Linux io_uring API. This provides the fastest I/O operations on Linux.
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
Although the previous implementation of hotplugging a block device via QMP
can successfully hot-plug a regular-file-based block device, it
fails when the backend is /dev/xxx (e.g. /dev/loop0). From analyzing
it, we can see that it lacks the ability to hotplug host block devices.
This commit fills that gap and makes it work well for host block
devices.
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
A few moderate security vulnerability fixes were missed as part
of the 3.19.0 release.
Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
For the release itself, let's simply copy the VERSION file to the
tarball.
To do so, we had to change the logic that merges the build, as at that
point the tag is not yet pushed to the repo.
Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
In commit 90bc749a19, we've changed the
QEMUTDXPATH in order to get it to work with GPUs, but the change broke
the non-GPU TDX use-case, which depends on the distro binary.
Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
- Add nodeSelector configuration to values.yaml with empty default
- Update DaemonSet template to conditionally include nodeSelector
- Add documentation and examples for nodeSelector usage in README
- Allows users to restrict kata-containers deployment to specific nodes by labeling them
Signed-off-by: Gus Minto-Cowcher <gus@basecamp-research.com>
According to the issue [1], Tokio will panic when we are giving a blocking
socket to Tokio's `from_std()` method, the information is as follows:
```
A panic occurred at crates/agent/src/sock/vsock.rs:59: Registering a
blocking socket with the tokio runtime is unsupported. If you wish to do
anyways, please add `--cfg tokio_allow_from_blocking_fd` to your RUSTFLAGS.
```
A workaround is to set the socket to non-blocking.
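A minimal sketch of the workaround, assuming the libc crate; the agent's actual code applies this to its vsock socket before calling from_std():
```
use std::io;
use std::os::unix::io::RawFd;

// Mark an inherited fd non-blocking before handing it to Tokio's from_std().
fn set_nonblocking(fd: RawFd) -> io::Result<()> {
    // SAFETY: plain fcntl calls on a caller-provided, valid file descriptor.
    unsafe {
        let flags = libc::fcntl(fd, libc::F_GETFL);
        if flags < 0 {
            return Err(io::Error::last_os_error());
        }
        if libc::fcntl(fd, libc::F_SETFL, flags | libc::O_NONBLOCK) < 0 {
            return Err(io::Error::last_os_error());
        }
    }
    Ok(())
}
```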
1: https://github.com/tokio-rs/tokio/issues/7172
Signed-off-by: Xuewei Niu <niuxuewei.nxw@antgroup.com>
The KERNEL_DEBUG_ENABLED was missing in the outer shell script
so overrides via make were not possible.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Bump these crates to remove the unmaintained dependency
proc-macro-error and remediate RUSTSEC-2024-0370
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Bump these crates across various components to remove the
dependency on unmaintained instant crate and remediate
RUSTSEC-2024-0384
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
- The github generated template had an old version which
isn't valid for the pr-scan, so update to the latest
- The action also needs `actions: read` and `contents: read` to run in kata-containers
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Some of the nix apis we are using are now enabled by features,
so add these to resolve the compilation issues
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
This new version of gc fixes s390x attestation, and also introduces setting
registry configuration directly via initdata.
Signed-off-by: Xynnn007 <xynnn@linux.alibaba.com>
The peer pods project is using the agent-ctl tool in some
tests, so tagging our cache will let them more easily identify
development versions of kata for testing between releases.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Sometimes, containers or execs do not use stdin, so there is no chance
to add parent stdin to the process's writer hashmap, resulting in the
parent stdin's fd not being closed when the process is cleaned up later.
Therefore, when creating a process, first explicitly add parent stdin to
the writer hashmap. This makes sure that the parent stdin's fd can be closed
when the process is cleaned up later.
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
We want to be able to build a debug version of the kernel for various
use-cases like debugging, tracing and others.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
The convention for rootfs-* names is:
* rootfs-${image_type}-${special_build}
If this is not followed, the cache will never work as expected, leading to
building the initrd / image on every single build, which is especially
costly when building the nvidia specific targets.
Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
The init data could not be read properly within kata-agent because the
data length field was omitted, a consequence of a mismatch in the data
write format.
Fixes #11556
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
Now AA supports receiving the initdata TOML plaintext and delivering it in the
attestation. This patch creates a file under
'/run/confidential-containers/initdata'
to store the initdata TOML and gives it to the AA process.
When we have a separate component to handle initdata, we will move the
logic to that component.
Signed-off-by: Xynnn007 <xynnn@linux.alibaba.com>
Update to https://github.com/teawater/mem-agent/tree/kata-20250627.
The commit list:
3854b3a Update nix version from 0.23.2 to 0.30.1
d9a4ced Update tokio version from 1.33 to 1.45.1
9115c4d run_eviction_single_config: Simplify check evicted pages after
eviction
68b48d2 get_swappiness: Use a rounding method to obtain the swappiness
value
14c4508 run_eviction_single_config: Add max_seq and min_seq check with
each info
8a3a642 run_eviction_single_config: Move infov update to main loop
b6d30cf memcg.rs: run_aging_single_config: Fix error of last_inc_time
check
54fce7e memcg.rs: Update anon eviction code
41c31bf cgroup.rs: Fix build issue with musl
0d6aa77 Remove lazy_static from dependencies
a66711d memcg.rs: update_and_add: Fix memcg not work after set memcg
issue
cb932b1 Add logs and change some level of some logs
93c7ad8 Add per-cgroup and per-numa config support
092a75b Remove all Cargo.lock to support different versions of rust
540bf04 Update mem-agent-srv, mem-agent-ctl and mem-agent-lib to
v0.2.0
81f39b2 compact.rs: Change default value of compact_sec_max to 300
c455d47 compact.rs: Fix psi_path error with cgroup v2 issue
6016e86 misc.rs: Fix log error
ded90e9 Set mem-agent-srv and mem-agent-ctl as bin
Fixes: #11478
Signed-off-by: teawater <zhuhui@kylinos.cn>
As the following job has passed 10 days in a row for the nightly test:
```
kata-containers-ci-on-push / run-k8s-tests-on-zvsi / run-k8s-tests (nydus, qemu-coco-dev, kubeadm)
```
this commit makes the job required again.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
The teardown_common will print the description of the running pods, kill
them all and print the system's syslogs afterwards.
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Set the node in the spec template of a Job manifest, allowing to use
set_node() on tests like k8s-parallel.bats
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
The teardown_common will print the description of the running pods, kill
them all and print the system's syslogs afterwards.
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
The teardown_common will print the description of the running pods, kill
them all and print the system's syslogs afterwards.
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
The previous description for the `block_device_driver` was inaccurate or
outdated. This commit updates the documentation to provide a more
precise explanation of its function.
Fixes #11488
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
When we run a kata pod with runtime-rs/qemu and with a default
configuration toml, it will fail with error "unsupported driver type
virtio-scsi".
As virtio-scsi within runtime-rs is not so popular, we set the default block
device driver to `virtio-blk-*`.
Fixes #11488
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
This patch changes the container process HashMap to use exec_id as the primary
key instead of PID, preventing exec_id collisions that could be exploited in
Confidential Computing scenarios where the host is less trusted than the guest.
Key changes:
- Changed `processes: HashMap<pid_t, Process>` to `HashMap<String, Process>`
- Added exec_id collision detection in `start()` method
- Updated process lookup operations to use exec_id directly
- Simplified `get_process()` with direct HashMap access
This prevents multiple exec operations from reusing the same exec_id, which
could be problematic in CoCo use cases where process isolation and unique
identification are critical for security.
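A much-reduced sketch of the keying and collision check (types are trimmed down; this is not the actual agent code):
```
use std::collections::HashMap;

struct Process; // stand-in for the real agent process type

#[derive(Default)]
struct Container {
    // Keyed by exec_id rather than by the host PID.
    processes: HashMap<String, Process>,
}

impl Container {
    fn start(&mut self, exec_id: String, p: Process) -> Result<(), String> {
        // Reject reuse of an exec_id instead of silently overwriting an entry.
        if self.processes.contains_key(&exec_id) {
            return Err(format!("exec_id {exec_id} already in use"));
        }
        self.processes.insert(exec_id, p);
        Ok(())
    }
}
```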
Signed-off-by: Ankita Pareek <ankitapareek@microsoft.com>
The `/opt/kata/VERSION` file, which is created using `git describe
--tags`, requires the newly released tag to be updated in order to be
accurate.
To do so, let's add a `fetch-tags: true` to the checkout action used
during the `create-kata-tarball` job.
Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
The CoCo non-TEE job (run-k8s-tests-coco-nontee) used to be required but
we had to withdraw it to fix a problem (#11156). Now the job is back
running and stable, so time to make it required again.
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
tempdir hasn't been updated for seven years and pulls in
remove_dir_all@0.5.3 which has security advisory
GHSA-mc8h-8q98-g5hr, so replace it with tempfile,
which the crate was merged into and which we use elsewhere in the
project.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Ignore Cargo.lock in `libs` to prevent developers from accidentally
tracking lock files in the `libs` workspace.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
This PR adds support for adding a network device before starting the
cloud-hypervisor VM.
This commit gets the host devices from NamedHypervisorConfig and
assigns them to VmConfig's devices field, which is for vfio devices, when clh
starts launching.
With this, it successfully finishes the vfio device conversion from
a generic Hypervisor config to a clh-specific VmConfig.
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
This commit introduces `host_devices` to help convert vfio devices from
a generic hypervisor config to a cloud-hypervisor specific VmConfig.
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
This PR adds support for adding a vfio device before starting the
cloud-hypervisor VM (or cold-plug vfio device).
This commit changes "pending_devices" for clh implementation via adding
DeviceType::Vfio() into pending_devices. And it will get shared host devices
after correctly handling vfio devices (Specially for primary device).
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
Crates in the `libs` workspace do not ship binaries; they are just libraries
for other workspaces to reference, so the `Cargo.lock` file would not
take effect. Remove Cargo.lock from the `libs` workspace.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
In line with configuration for other TEEs, shared_fs should
be set to none for IBM SEL. This commit updates the value for
runtime/runtime-rs.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
As we're using a `kubectl wait --timeout ...` to check whether the
kata-deploy pod's been deleted or not, let's remove the `--wait` from
the `helm uninstall ...` call as k0s tests were failing because the
`kubectl wait --timeout...` was starting after the pod was deleted,
making the test fail.
Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
We've been pinning a specific version of k0s for CRI-O tests, which may
make sense for CRI-O, but doesn't make sense at all when it comes to
testing that we can install kata-deploy on latest k0s (and currently our
test for that is broken).
Let's bump to the latest, and from this point we start debugging,
instead of debugging on an ancient version of the project.
Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
Bump the url, reqwest and idna crates in order to move away from
idna <1.0.3 and remediate CVE-2024-12224.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Previously, the rootlessDir variable in `src/runtime/virtcontainers/pkg/rootless.go` was initialized at
package load time using `os.Getenv("XDG_RUNTIME_DIR")`. However, in rootless
VMM mode, the correct value of $XDG_RUNTIME_DIR is set later during runtime
using os.Setenv(), so rootlessDir remained empty.
This patch defers the initialization of rootlessDir until the first call
to `GetRootlessDir()`, ensuring it always reflects the current environment
value of $XDG_RUNTIME_DIR.
Fixes: #11526
Signed-off-by: stevenfryto <sunzitai_1832@bupt.edu.cn>
There are workflows that rely on `az aks install-cli` to get kubectl
installed. There is a well-known problem with install-cli, related to the
API usage rate limit, that has recently caused the command to fail
quite often.
This replaces install-cli with the azure/setup-kubectl github
action, which has no such rate limit problem.
While here, the install_cli() function was removed from gha-run-k8s-common.sh
to avoid developers using it by mistake in the future.
Fixes #11463
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Removing kernel config files relating
to SEV as part of the SEV deprecation
efforts.
Co-authored-by: Adithya Krishnan Kannan <AdithyaKrishnan.Kannan@amd.com>
Signed-off-by: Arvind Kumar <arvinkum@amd.com>
Removing runtime SEV functionality,
such as the kbs, ovmf, VMSA handling,
and SEV configs as part of deprecating
SEV from kata.
Co-authored-by: Adithya Krishnan Kannan <AdithyaKrishnan.Kannan@amd.com>
Signed-off-by: Arvind Kumar <arvinkum@amd.com>
Removing files related to SEV, responsible for
installing and configuring Kata containers.
Co-authored-by: Adithya Krishnan Kannan <AdithyaKrishnan.Kannan@amd.com>
Signed-off-by: Arvind Kumar <arvinkum@amd.com>
Add the init data annotation when preparing the remote hypervisor annotations
in prepare_vm, so that it can be passed within CreateVMRequest.
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
679cc9d47c was merged and bumped the
podoverhead for the gpu related runtimeclasses. However, the bump on the
`kata-runtimeClasses.yaml` was overlooked, making our tests fail due to
that discrepancy.
Let's just adjust the values here and move on.
Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
We cannot rely only on default_cpu and default_memory in the
config; the defaults are 1 and 2Gi, but we need some overhead for QEMU and
the other related binaries running as the pod overhead. Especially
when QEMU is hot-plugging GPUs, CPUs, and memory, it can consume more
memory.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
atty is unmaintained, with the last release almost 3 years
ago, so we don't need to check for updates, but instead will
remove it from our dependency tree.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
structopt's features were integrated into clap v3, so it is no longer
actively updated and pulls in the atty crate which has a security
advisory. Update clap, remove structopt, and update the code that
used it, to remove the outdated dependencies.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
slog-term 2.9.0 included atty, which is unmaintained
and has a security advisory GHSA-g98v-hv3f-hcfr,
so bump the version across our components to remove
this dependency.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
We had the proper config.toml configuration for static builds
but were building the glibc target and not the musl target.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
The way GH works, we can only require Zizmor results on ALL PR runs, or
none, so remove the path filter.
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
Previously, the source field was subject to mandatory checks. However,
in guest-pull mode, this field doesn't consistently provide useful
information. Our practical experience has shown that relying on this
field for critical data isn't always necessary.
In another aspect, not all cases need a mandatory check for KataVirtualVolume.
Based on this fact, it's better to make from_base64 do only one thing and
remove the validate() call. Of course, we also keep the previous capability, to
make it easy for possible cases which use such a method, and we rename it
clearly as from_base64_and_validate.
This commit relaxes the mandatory checks on the KataVirtualVolume specifically
for guest-pull mode, acknowledging its diminished utility in this context.
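A rough sketch of the resulting split, with the struct reduced to a stand-in and error handling via anyhow assumed:
```
use serde::Deserialize;

// Reduced stand-in for the real kata-types struct; most fields omitted.
#[derive(Deserialize)]
struct KataVirtualVolume {
    volume_type: String,
}

impl KataVirtualVolume {
    // Decode only: no mandatory validation, as wanted for guest-pull mode.
    fn from_base64(value: &str) -> anyhow::Result<Self> {
        let bytes = base64::decode(value)?; // base64 0.13-style API
        Ok(serde_json::from_slice(&bytes)?)
    }

    // Former behaviour, kept under an explicit name for callers that want it.
    fn from_base64_and_validate(value: &str) -> anyhow::Result<Self> {
        let vol = Self::from_base64(value)?;
        vol.validate()?;
        Ok(vol)
    }

    fn validate(&self) -> anyhow::Result<()> {
        anyhow::ensure!(!self.volume_type.is_empty(), "volume_type is required");
        Ok(())
    }
}
```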
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
When hot plugging vcpu in dragonball hypervisor, use the synchronization
interface and wait until the hot plug cpu is executed in the guest
before returning. This ensures that the subsequent device hot plug will
not conflict with the previous call.
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
Let dragonball's resize_vcpu api support synchronization, and only
return after the hot-plug of the CPU is successfully executed in the
guest kernel. This ensures that the subsequent device hot-plug operation
can also proceed smoothly.
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
Got the following warning with make test of kata-agent:
Compiling rustjail v0.1.0 (/data/teawater/kata-containers/src/agent/rustjail)
Compiling kata-agent v0.1.0 (/data/teawater/kata-containers/src/agent)
warning: unused import: `std::os::unix::fs`
--> rustjail/src/mount.rs:1147:9
|
1147 | use std::os::unix::fs;
| ^^^^^^^^^^^^^^^^^
|
= note: `#[warn(unused_imports)]` on by default
This commit fixes it.
Fixes: #11508
Signed-off-by: teawater <zhuhui@kylinos.cn>
Introduce a const value `KATA_VIRTUAL_VOLUME_PREFIX` defined in libs/kata-types;
it is better to import such a const value from there.
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
This was originally implemented as a Jenkins skip and is only used in a few
workflows. Nowadays this would be better implemented via the gatekeeper.
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
This patch fixes the rules.rego file to ensure that the
policy is correctly parsed and applied by opa.
Signed-off-by: Archana Choudhary <archana1@microsoft.com>
This commit updates the `tests_common.sh` script
to enable the `confidential_guest`
setting for the coco tests in the Kubernetes
integration tests.
Signed-off-by: Archana Choudhary <archana1@microsoft.com>
This patch removes storages from the testcases.json file for execprocess.
This is because input storage objects are invalid for two reasons:
1. "io.katacontainers.fs-opt.layer=" is missing option in annotations.
2. by default, we don't have host-tarfs-dm-verity enabled, so the storage
objects are not created in policy.
Signed-off-by: Archana Choudhary <archana1@microsoft.com>
This patch introduces some basic checks for the
`image_guest_pull` storage type in the genpolicy tool.
Signed-off-by: Archana Choudhary <archana1@microsoft.com>
This patch improves the test framework for the
genpolicy tool by enabling the use of config maps.
Signed-off-by: Archana Choudhary <archana1@microsoft.com>
Add the definition of the variable DEFCREATECONTAINERTIMEOUT into the
Makefile target with a default timeout of 30s.
Fixes: #485
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
It's used to indicate the timeout value set for image pulling in the
guest during container creation.
This allows users to set this timeout with an annotation according to the
size of the image to be pulled.
Fixes #10692
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
It allows users to set this create container timeout within
configuration.toml according to the size of the image to be pulled
inside the guest.
Fixes #10692
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
To better understand the impact of different timeout values on system
behavior, this section provides a more comprehensive explanation of the
request_timeout_ms:
This timeout value is used to set the maximum duration for the agent to
process a CreateContainerRequest. It's also used to ensure that workloads,
especially those involving large image pulls within the guest, have sufficient
time to complete.
Based on the explanation above, it's renamed to `create_container_timeout` and,
specifically, exposed in 'configuration.toml'.
Fixes #10692
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
This helps considerably to avoid patching the code, and just adjusting
the build environment to use a smaller alignment than the default one.
Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
These tests are not passing, or being maintained,
so as discussed on the AC meeting, we will skip them
from automatically running until they can be reviewed
and re-worked, so avoid wasting CI cycles.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
This adds Zizmor GHA security scanning as a PR gate.
Note that this does NOT require that Zizmor returns 0 alerts, but rather
that Zizmor's invocation completes successfully (regardless of how many
alerts it raises).
I will set up the former after this commit is merged (through the GH UI).
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
Enable GPU annotations by adding `default_gpus` and `default_gpu_model`
into the list of valid annotations `enable_annotations`.
Fixes #10484
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
Add GPU specific annotations used by remote hypervisor for instance
selection during `prepare_vm`.
Fixes #10484
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
Two annotations, `default_gpus` and `default_gpu_model`, as GPU annotations
are introduced for Kata VM configurations to improve instance selection on
remote hypervisors. By adding these annotations:
(1) `default_gpus`: Allows users to specify the minimum number of GPUs a VM
requires. This ensures that the remote hypervisor selects an instance
with at least that many GPUs, preventing resource under-provisioning.
(2) `default_gpu_model`: Lets users define the specific GPU model needed for
the VM. This is crucial for workloads that depend on particular GPU archs or
features, ensuring compatibility and optimal performance.
Fixes #10484
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
To provide the remote hypervisor with the necessary intelligence
to select the most appropriate instance for a given GPU instance,
leading to better resource allocation, two fields `default_gpus`
and `default_gpu_model` are introduced in `RemoteInfo`.
Fixes #10484
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
To better support containerd 2.1 and later versions, remove the
hardcoded `layer.erofs` and instead parse `/proc/mounts` to obtain the
real mount source (and `/sys/block/loopX/loop/backing_file` if needed).
If the mount source doesn't end with `layer.erofs`, it should be marked
as unsupported, as it may be a filesystem meta file generated by later
containerd versions for the EROFS flattened filesystem feature.
Also check whether the filesystem type is `overlay` or not, since the
containerd mount manager [1] may change it after being introduced.
[1] https://github.com/containerd/containerd/issues/11303
Fixes: f63ec50ba3 ("runtime: Add EROFS snapshotter with block device support")
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
The current Dockerfile fails when trying to build from the root of the repo with
docker build -t kata-monitor -f tools/packaging/kata-monitor/Dockerfile .
failing with "invalid go version '1.23.0': must match format 1.23".
Using go 1.23 in the Dockerfile fixes the build error.
Signed-off-by: Saul Paredes <saulparedes@microsoft.com>
I noticed that agent-ctl is including a 9-month-old version of
image-rs and the libs crates haven't been updated for potentially
many years, so bump all of these.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
This commit introduces the ability to run Pods without shared fs
mechanism in Kata.
The default shared fs can lead to unnecessary resource consumption
and security risks for certain use cases. Specifically, scenarios
where files only need to be copied into the VM once at Pod creation
(e.g., non-tee envs) and don't require dynamic updates make the shared
fs redundant and inefficient.
By explicitly disabling shared fs functionality, we reduce resource
overhead and shrink the attack surface. Users will need to employ
alternative methods (e.g. guest-pull) to ensure container images are
shared into the guest VM for these specific scenarios.
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
In the previous commit 74eccc54e7b31cc4c9abd8b6e4007c3a4c1d4dd4,
it missed returning the right rootfs volume.
In the is_block_rootfs fn, if the rootfs is based on a
block device such as devicemapper, it should clear the
volume's source and let the device_manager use the
dev_id to get the device's host path.
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
For containerd's Blockfile Snapshotter, it will pass
a rootfs mount with a raw file as the mount source
and mount options with "loop" embedded.
To support this type of rootfs, it is necessary to identify this as a
blockfile rootfs through the "loop" flag, and then use the volume source
of the rootfs as the source of the block device to hot-insert it into
the guest.
Fixes: #11464
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
Instead of building it every time, we can store the regorus
binary in OCI registry using oras and download it from there.
This reduces the install time from ~1m40s to ~15s.
Signed-off-by: Paul Meyer <katexochen0@gmail.com>
This commit adds support of resize_vcpu for cloud-hypervisor
using its vm.resize API. It supports both vCPU hotplug
and hot-unplug.
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
For cloud-hypervisor, currently only hot plugging of memory is
supported, but hot unplugging of memory is not supported. In addition,
by default, cloud-hypervisor uses ACPI-based memory hot-plugging instead
of virtio-mem based memory hot-plugging.
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
Add API interfaces for get-vminfo and resize. Get-vminfo can obtain the
memory size and number of vCPUs from the cloud hypervisor vmm in real
time. This interface provides information for subsequently resizing
memory and vCPUs.
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
The system's own Deserialize cannot implement parsing from string to
MacAddr, so we need to implement this trait ourselves.
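A minimal sketch of such a manual Deserialize implementation; the real MacAddr type and its validation are more involved:
```
use serde::{Deserialize, Deserializer};

// Reduced stand-in for the real type: keep only the textual form here.
#[derive(Debug)]
struct MacAddr {
    addr: String,
}

impl<'de> Deserialize<'de> for MacAddr {
    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
    where
        D: Deserializer<'de>,
    {
        // Accept the plain "aa:bb:cc:dd:ee:ff" string form; a real
        // implementation would also validate the format.
        let addr = String::deserialize(deserializer)?;
        Ok(MacAddr { addr })
    }
}
```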
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
To make it flexible and extensible, this change modifies the Kata
Agent's handling of `InitData` to allow for unrecognized key-value pairs.
The `InitData` field now directly utilizes `HashMap<String, String>`,
enabling it to carry arbitrary metadata and information that may be
consumed by other components.
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
During sandbox preparation, initdata should be specified to TdxConfig,
especially mrconfigid, which is passed to the TDX guest report for
measurement.
Fixes #11180
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
SEV-SNP guest configuration utilizes a different set of properties
compared to the existing 'sev-guest' object. This change introduces
the `host-data` property within the sev-snp-guest object. This property
allows for configuring an SEV-SNP guest with host-provided data, which
is crucial for data integrity verification during attestation.
The `host-data` property is specifically valid for SEV-SNP guests running
on a capable platform. It is configured as a base64-encoded string when
using the sev-snp-guest object.
The example cmdline looks like:
```shell
-object sev-snp-guest,id=sev-snp0,host-data=CGNkCHoBC5CcdGXir...
```
Fixes #11180
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
To facilitate the transfer of initdata generated during
`prepare_initdata_device_config`, a new parameter has been
introduced into the `prepare_protection_device_config` function.
Furthermore, to specifically pass initdata to SEV-SNP Guests, a
`host_data` field has been added to the `SevSnpConfig` structure.
However, this field is exclusively applicable to the SEV-SNP platform.
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
Retrieve the Initdata string content from the security_info of the
Configuration. Based on the Protection Platform type, calculate the
digest of the Initdata. Write the Initdata content to the block
device. Subsequently, construct the BlockConfig based on this block
device information.
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
To correctly manage initdata as a block device, a new InitData
Resource type, inherently a block device, has been introduced
within the ResourceManager. As a component of the Sandbox's
resources, this InitData Resource needs to be appropriately
handled by the Device Manager's handler.
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
This commit implements the retrieval and processing of InitData provided
via a Pod annotation. Specifically, it enables runtime-rs to:
(1) Parse the "io.katacontainers.config.hypervisor.cc_init_data"
annotation from the Pod YAML.
(2) Perform reverse operations on the annotation value: base64 decoding
followed by gzip decompression.
(3) Deserialize the decompressed data into the internal InitData
structure.
(4) Serialize the resulting InitData into a string and store it in the
Configuration.
This allows users to inject configuration data into the TEE Guest by
encoding and compressing it and passing it as an annotation in the Pod
configuration. This mechanism supports scenarios where dynamic config
is required for Confidential Containers.
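A minimal sketch of step (2), assuming the base64 and flate2 crates; the actual runtime-rs code may differ:
```
use std::io::Read;

// Reverse the encoding applied to the annotation value: base64-decode, then
// gunzip, yielding the initdata TOML text.
fn decode_initdata_annotation(value: &str) -> anyhow::Result<String> {
    let compressed = base64::decode(value.trim())?; // base64 0.13-style API
    let mut decoder = flate2::read::GzDecoder::new(compressed.as_slice());
    let mut toml_text = String::new();
    decoder.read_to_string(&mut toml_text)?;
    Ok(toml_text)
}
```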
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
This commit introduces the Initdata Spec and the logic for
calculating its digest. It includes:
(1) Define a `ProtectedPlatform` enum to represent major TEE platform
types.
(2) Create an `InitData` struct to support building and serializing
initialization data in TOML format.
(3) Implement adaptation for SHA-256, SHA-384, and SHA-512 digest
algorithms.
(4) Provide a platform-specific mechanism for adjusting digest lengths
(zero-padding).
(5) Support the decoding and verification of base64+gzip encoded
Initdata.
The core functionality ensures the integrity of data injected by the
host through trusted algorithms, while also accommodating the
measurement requirements of different TEE platforms.
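A rough sketch of the digest-plus-padding idea, assuming the sha2 crate; the exact field sizes are decided by the platform:
```
use sha2::{Digest, Sha256, Sha384, Sha512};

// Hash the serialized initdata with the requested algorithm and zero-pad the
// digest to the size the platform's measurement field expects.
fn initdata_digest(toml_bytes: &[u8], algorithm: &str, field_len: usize) -> Vec<u8> {
    let mut digest = match algorithm {
        "sha256" => Sha256::digest(toml_bytes).to_vec(),
        "sha512" => Sha512::digest(toml_bytes).to_vec(),
        _ => Sha384::digest(toml_bytes).to_vec(),
    };
    if digest.len() < field_len {
        digest.resize(field_len, 0u8);
    }
    digest
}
```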
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
This commit introduces a new `initdata` field of type String to
hypervisor `SecurityInfo`.
In accordance with the Initdata Specification, this field will
facilitate the injection of well-defined data from an untrusted host
into the TEE. To ensure the integrity of this injected data, the TEE
evidence's hostdata capability or the (v)TPM dynamic measurement
capability will be leveraged, as outlined in the specification.
Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
Don't use local launched_pods variable in test_rc_policy(), because
teardown() needs to use this variable to print a description of the
pods, for debugging purposes.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
The locking mechanism around the layers cache file was insufficient to
prevent corruption of the file. This commit moves the layers cache's
management in-memory, only reading the cache file once at the beginning
of `genpolicy`, and only writing to it once, at the end of `genpolicy`.
In the case that obtaining a lock on the cache file fails,
reading/writing to it is skipped, and the cache is not used/persisted.
Signed-off-by: charludo <git@charlotteharludo.com>
`vmm-sys-util` was duplicated while updating the `ignore` list of
`rust-vmm` crates in #11431, remove duplicated one and sort the list.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
When moving from clap v2 to v4 a bunch of
functions have been removed, so update the code
to handle these replacements
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
When moving from clap v2 to v4 a bunch of
functions have been removed, so update the code
to handle these replacements
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Update dependabot ignore list in cargo ecosystem to ignore upgrades from
rust-vmm crates, since those crates need to be managed carefully and
manually.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Pin GitHub-owned actions to specific hashes as recommended,
as tags are mutable; see https://pin-gh-actions.kammel.dev/.
This is one of the recommendations that scorecard gives us.
Note this was generated with `frizbee actions`
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Fedora 40 is EoL, and I've seen the registry pull fail
a few times recently, so let's bump to fedora 42 which
has 10 months of support left.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Now we are decoupled from the image-rs crate,
we can bump the protobuf version across our project
to resolve the GHSA-2gh3-rmm4-6rq5 advisory
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
This patch updates the container image for the CI test workloads:
- `k8s-layered-sc-deployment.yaml`
- `k8s-pod-sc-deployment.yaml`
- `k8s-pod-sc-nobodyupdate-deployment.yaml`
- `k8s-pod-sc-supplementalgroups-deployment.yaml`
- `k8s-policy-deployment.yaml`
Also updates unit tests:
- `test_create_container_security_context`
- `test_create_container_security_context_supplemental_groups`
This fixes tests failing due to an image pull error as the previous image is no longer available in
the container registry.
Signed-off-by: Archana Choudhary <archana1@microsoft.com>
Signed-off-by: Saul Paredes <saulparedes@microsoft.com>
After commit a3f973db3b was merged, protection::GuestProtection::[Snp,Sev]
changed to tuple variants, and can no longer be used in the assert_eq
macro without tuple values, or errors like the following will be raised:
```
assert_eq!(actual.unwrap(), GuestProtection::Snp);
| ^^^^^^^^^^^^^^^^^^^^ expected \
`GuestProtection`, found enum constructor
```
Signed-off-by: Lei Liu <liulei.pt@bytedance.com>
# Note: uses pushd to change into the cloned directory!
git_sparse_clone() {
    local origin="$1"
    local revision="$2"
    shift 2
    local sparse_paths=("$@")
    local repo
    repo=$(basename -s .git "${origin}")
    git init "${repo}"
    pushd "${repo}" || exit 1
    git remote add origin "${origin}"
    git fetch --depth 1 origin "${revision}"
    git sparse-checkout init --cone
    git sparse-checkout set "${sparse_paths[@]}"
    git checkout FETCH_HEAD
}
###############################
# Disable security to allow e2e
###############################
for NODE_NAME in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do [[ "${NODE_NAME}" =~ 'worker' ]] && kubectl label node "${NODE_NAME}" node.kubernetes.io/worker=; done
# Install a hypervisor
When setting up Kata using a [packaged installation method](install/README.md#installing-on-a-linux-system), the
`QEMU` VMM is installed automatically. Cloud-Hypervisor, Firecracker and StratoVirt VMMs are available from the [release tarballs](https://github.com/kata-containers/kata-containers/releases), as well as through [`kata-deploy`](../tools/packaging/kata-deploy/helm-chart/README.md).
You may choose to manually build your VMM/hypervisor.
As a community we want to strike a balance between having up-to-date toolchains, to receive the
latest security fixes and to be able to benefit from new features and packages, whilst not being
too bleeding edge and disrupting downstream and other consumers. As a result we have the following
guidelines (note, not hard rules) for our go and rust toolchains that we are attempting to try out:
## Go toolchain
Go is released [every six months](https://go.dev/wiki/Go-Release-Cycle) with support for the
[last two major release versions](https://go.dev/doc/devel/release#policy). We always want to
ensure that we are on a supported version so we receive security fixes. To try and make
things easier for some of our users, we aim to be using the older of the two supported major
versions, unless there is a compelling reason to adopt the newer version.
In practice this means that we bump our major version of the go toolchain every six months to
version (1.x-1) in response to a new version (1.x) coming out, which makes our current version
(1.x-2) no longer supported. We will bump the minor version whenever required to satisfy
dependency updates, or security fixes.
Our go toolchain version is recorded in [`versions.yaml`](../versions.yaml) under
`.languages.golang.version` and should match with the version in our `go.mod` files.
## Rust toolchain
Rust has a [six week](https://doc.rust-lang.org/book/appendix-05-editions.html#:~:text=The%20Rust%20language%20and%20compiler,these%20tiny%20changes%20add%20up.)
release cycle and they only support the latest stable release, so if we wanted to remain on a
supported release we would only ever build with the latest stable and bump every 6 weeks.
However feedback from our community has indicated that this is a challenge as downstream consumers
often want to get rust from their distro, or downstream fork and these struggle to keep up with
the six week release schedule. As a result the community has agreed to try out a policy of
"stable-2", where we aim to build with a rust version that is two versions behind the latest stable
version.
In practice this should mean that we bump our rust toolchain every six weeks, to version
1.x-2 when 1.x is released as stable and we should be picking up the latest point release
of that version, if there were any.
The rust-toolchain that we are using is recorded in [`rust-toolchain.toml`](../rust-toolchain.toml).
## usage
For systemd, kata agent configures cgroups according to the following `linux.cgroupsPath` format standard provided by `runc` (`[slice]:[prefix]:[name]`). If you don't provide a valid `linux.cgroupsPath`, kata agent will treat it as `"system.slice:kata_agent:<container-id>"`.
> Here slice is a systemd slice under which the container is placed. If empty, it defaults to system.slice, except when cgroup v2 is used and rootless container is created, in which case it defaults to user.slice.
>
## Systemd Interface
`session.rs` and `system.rs` in `src/agent/rustjail/src/cgroups/systemd/interface` are automatically generated by `zbus-xmlgen`, which is an accompanying tool provided by `zbus` to generate Rust code from `D-Bus XML interface descriptions`. The specific commands to generate these two files are as follows:
- Are the "service", "message dispatcher" and "runtime handler" all part of the single Kata 3.x runtime binary?
Yes. They are components of the Kata 3.x runtime and will be packaged into one binary.
1. The service is an interface responsible for handling multiple services, such as the task service, image service, etc.
2. The message dispatcher is used to dispatch the requests coming from the service module.
3. The runtime handler deals with the operations on sandboxes and containers.
- What is the name of the Kata 3.x runtime binary?
Apparently we can't use `containerd-shim-v2-kata` because it's already used. We are facing the hardest issue of "naming" again. Any suggestions are welcome.
Internally we use `containerd-shim-v2-rund`.
- Is the Kata 3.x design compatible with the containerd shimv2 architecture?
Yes. It is designed to match the functionality of the Go version of Kata, and it implements the `containerd shim v2` interface/protocol.
- How will users migrate to the Kata 3.x architecture?
The migration plan will be provided before Kata 3.x is merged into the main branch.
- Is `Dragonball` limited to its own built-in VMM? Can the `Dragonball` system be configured to work using an external `Dragonball` VMM/hypervisor?
`runD` is the `containerd-shim-v2` counterpart of `runC` and can run a pod/containers. `Dragonball` is a `microvm`/VMM that is designed to run container workloads. Instead of `microvm`/VMM, we sometimes refer to it as a secure sandbox.
- QEMU, Cloud Hypervisor and Firecracker support is planned, but how would that work? Would they run in a separate process?
Yes. They cannot work as a built-in VMM, so they run in separate processes.
- What is `upcall`?
The `upcall` is used to hotplug CPU/memory/MMIO devices, and it solves two issues:
1. it avoids a dependency on PCI/ACPI;
2. it avoids a dependency on `udevd` within the guest and gives deterministic results for hotplug operations.
So `upcall` is an alternative to ACPI based CPU/memory/device hotplug, and we may cooperate with the community to add support for ACPI based CPU/memory/device hotplug if needed.
`Dbs-upcall` is a `vsock-based` direct communication tool between the VMM and the guest. The server side of the `upcall` is a driver in the guest kernel (kernel patches are needed for this feature) and it starts serving requests once the kernel has started. The client side lives in the VMM as a thread that communicates over VSOCK through a `uds`. We implemented device hotplug / hot-unplug directly through `upcall` in order to avoid emulating ACPI and to minimize the virtual machine's overhead, and there could be many other uses for this direct communication channel. It's already open source.
- The URL below says the kernel patches work with 4.19, but do they also work with 5.15+?
Forward compatibility should be achievable; we have already ported it to a 5.10-based kernel.
- Are these patches platform-specific or would they work for any architecture that supports VSOCK?
It's almost platform independent, but some messages related to CPU hotplug are platform dependent.
- Could the kernel driver be replaced with a userland daemon in the guest using loopback VSOCK?
We need to create device nodes for hot-added CPU/memory/devices, so it's not easy for a userspace daemon to do these tasks.
- The fact that `upcall` allows communication between the VMM and the guest suggests that this architecture might be incompatible with https://github.com/confidential-containers where the VMM should have no knowledge of what happens inside the VM.
1. `TDX` doesn't support CPU/memory hotplug yet.
2. ACPI based device hotplug depends on the ACPI `DSDT` table, and the guest kernel has to execute `ASL` code to handle those hotplug events. It should be easier to audit VSOCK based communication than ACPI `ASL` methods.
- What is the security boundary for the monolithic / "Built-in VMM" case?
It has the security boundary of virtualization. More details will be provided in the next stage.
Today, there exist a few gaps between Container Storage Interface (CSI) and virtual machine (VM) based runtimes such as Kata Containers
that prevent them from working together smoothly.
First, it’s cumbersome to use a persistent volume (PV) with Kata Containers. Today, for a PV with Filesystem volume mode, Virtio-fs
is the only way to surface it inside a Kata Container guest VM. But often mounting the filesystem (FS) within the guest operating system (OS) is
desired due to performance benefits, availability of native FS features and security benefits over the Virtio-fs mechanism.
Second, it’s difficult if not impossible to resize a PV online with Kata Containers. While a PV can be expanded on the host OS,
the updated metadata needs to be propagated to the guest OS in order for the application container to use the expanded volume.
Currently, there is not a way to propagate the PV metadata from the host OS to the guest OS without restarting the Pod sandbox.
# Proposed Solution
Because of the OS boundary, these features cannot be implemented in the CSI node driver plugin running on the host OS
as is normally done in the runc container. Instead, they can be done by the Kata Containers agent inside the guest OS,
but it requires the CSI driver to pass the relevant information to the Kata Containers runtime.
An ideal long term solution would be to have the `kubelet` coordinating the communication between the CSI driver and
the container runtime, as described in [KEP-2857](https://github.com/kubernetes/enhancements/pull/2893/files).
However, as the KEP is still under review, we would like to propose a short/medium term solution to unblock our use case.
The proposed solution is built on top of a previous [proposal](https://github.com/egernst/kata-containers/blob/da-proposal/docs/design/direct-assign-volume.md)
described by Eric Ernst. The previous proposal has two gaps:
1. Writing a `csiPlugin.json` file to the volume root path introduced a security risk. A malicious user can gain unauthorized
access to a block device by writing their own `csiPlugin.json` to the above location through an ephemeral CSI plugin.
2. The proposal didn't describe how to establish a mapping between a volume and a kata sandbox, which is needed for
implementing CSI volume resize and volume stat collection APIs.
This document particularly focuses on how to address these two gaps.
## Assumptions and Limitations
1. The proposal assumes that a block device volume will only be used by one Pod on a node at a time, which we believe
is the most common pattern in Kata Containers use cases. It’s also unsafe to have the same block device attached to more than
one Kata pod. In the context of Kubernetes, the `PersistentVolumeClaim` (PVC) needs to have the `accessMode` as `ReadWriteOncePod`.
2. More advanced Kubernetes volume features, such as `fsGroup`, `fsGroupChangePolicy`, and `subPath`, are not supported.
## End User Interface
1. The user specifies a PV as a direct-assigned volume. How a PV is specified as a direct-assigned volume is left for each CSI implementation to decide.
There are a few options for reference:
1. A storage class parameter specifies whether it's a direct-assigned volume. This avoids any lookups of PVC
or Pod information from the CSI plugin (as external provisioner takes care of these). However, all PVs in the storage class with the parameter set
will have host mounts skipped.
2. Use a PVC annotation. This approach requires the CSI plugin to have `--extra-create-metadata` [set](https://kubernetes-csi.github.io/docs/external-provisioner.html#persistentvolumeclaim-and-persistentvolume-parameters)
to be able to perform a lookup of the PVC annotations from the API server. Pro: API server lookup of annotations is only required during creation of the PV.
Con: The CSI plugin will always skip host mounting of the PV.
3. The CSI plugin can also look up the pod's `runtimeclass` during `NodePublish`. This approach can be found in the [Alibaba Cloud CSI plugin](https://github.com/kubernetes-sigs/alibaba-cloud-csi-driver/blob/master/pkg/disk/nodeserver.go#L248).
2. The CSI node driver delegates the direct-assigned volume to the Kata Containers runtime. The CSI node driver APIs need to
be modified to pass volume mount information to, and collect volume information from, the Kata Containers runtime by invoking `kata-runtime` command line commands (a sketch of this delegation follows the list below).
* **`NodePublishVolume`** -- It invokes `kata-runtime direct-volume add --volume-path [volumePath] --mount-info [mountInfo]`
to propagate the volume mount information to the Kata Containers runtime for it to carry out the filesystem mount operation.
The `volumePath` is the [target_path](https://github.com/container-storage-interface/spec/blob/master/csi.proto#L1364) in the CSI `NodePublishVolumeRequest`.
The `mountInfo` is a serialized JSON string.
* **`NodeGetVolumeStats`** -- It invokes `kata-runtime direct-volume stats --volume-path [volumePath]` to retrieve the filesystem stats of the direct-assigned volume.
* **`NodeExpandVolume`** -- It invokes `kata-runtime direct-volume resize --volume-path [volumePath] --size [size]` to send a resize request to the Kata Containers runtime to
resize the direct-assigned volume.
* **`NodeStageVolume`/`NodeUnstageVolume`** -- It invokes `kata-runtime direct-volume remove --volume-path [volumePath]` to remove the persisted metadata of a direct-assigned volume.
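For concreteness, here is a minimal sketch of how a CSI node driver might delegate a direct-assigned volume to Kata by shelling out to the `kata-runtime` CLI as described above. It is not taken from any existing driver; the package name, helper name, and lowercase JSON keys are assumptions.

```go
package csidriver

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// addDirectVolume tells the Kata Containers runtime about a direct-assigned
// volume by invoking the kata-runtime CLI, as described in this proposal.
func addDirectVolume(volumePath, device, fsType string) error {
	mountInfo, err := json.Marshal(map[string]string{
		"device": device,
		"fstype": fsType,
	})
	if err != nil {
		return err
	}
	// Equivalent to:
	//   kata-runtime direct-volume add --volume-path <target_path> --mount-info <json>
	cmd := exec.Command("kata-runtime", "direct-volume", "add",
		"--volume-path", volumePath,
		"--mount-info", string(mountInfo))
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("kata-runtime direct-volume add failed: %v: %s", err, out)
	}
	return nil
}
```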
The `mountInfo` object is defined as follows:
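What follows is a sketch of that definition, reconstructed here for illustration: the field and JSON key names are assumptions based on the fields referenced throughout this proposal, not the authoritative definition.

```go
// Illustrative sketch of the mountInfo object; field names and JSON keys are
// assumptions, not the authoritative definition.
package types

// MountInfo carries the information the CSI node driver hands to the Kata
// Containers runtime for a direct-assigned volume.
type MountInfo struct {
	// Type of the volume (e.g. "block").
	VolumeType string `json:"volume-type"`
	// Host device backing the volume (e.g. "/dev/sdf").
	Device string `json:"device"`
	// Filesystem to be mounted inside the guest (e.g. "ext4").
	FsType string `json:"fstype"`
	// Additional mount options.
	Options []string `json:"options,omitempty"`
	// Free-form metadata passed along to the agent.
	Metadata map[string]string `json:"metadata,omitempty"`
}
```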
## Implementation Details
### Kata runtime
Instead of the CSI node driver writing the mount info into a `csiPlugin.json` file under the volume root,
as described in the original proposal, here we propose that the CSI node driver passes the mount information to
the Kata Containers runtime through a new `kata-runtime` commandline command. The `kata-runtime` then writes the mount
information to a `mountInfo.json` file in a predefined location (`/run/kata-containers/shared/direct-volumes/[volume_path]/`).
When the Kata Containers runtime starts a container, it verifies whether a volume mount is a direct-assigned volume by checking
whether there is a `mountInfo` file under the computed Kata `direct-volumes` directory. If it is, the runtime parses the `mountInfo` file,
updates the mount spec with the data in `mountInfo`. The updated mount spec is then passed to the Kata agent in the guest VM together
with other mounts. The Kata Containers runtime also creates a file named by the sandbox id under the `direct-volumes/[volume_path]/`
directory. The reason for adding a sandbox id file is to establish a mapping between the volume and the sandbox using it.
Later, when the Kata Containers runtime handles the `get-stats` and `resize` commands, it uses the sandbox id to identify
the endpoint of the corresponding `containerd-shim-kata-v2`.
The shim then forwards the corresponding request to the `kata-agent` to carry out the operations inside the guest VM. For `resize` operation,
the Kata runtime also needs to notify the hypervisor to resize the block device (e.g. call `block_resize` in QEMU).
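A minimal sketch of this bookkeeping is shown below. It is not the actual runtime code: the package name, helper names, and the use of a plain map for the parsed mount information are assumptions; only the directory layout follows the description above.

```go
package katavolume

import (
	"encoding/json"
	"os"
	"path/filepath"
)

const directVolumesDir = "/run/kata-containers/shared/direct-volumes"

// loadMountInfo returns the mount information for volumePath, or (nil, nil)
// if the volume is not a direct-assigned volume.
func loadMountInfo(volumePath string) (map[string]string, error) {
	data, err := os.ReadFile(filepath.Join(directVolumesDir, volumePath, "mountInfo.json"))
	if os.IsNotExist(err) {
		return nil, nil // not a direct-assigned volume
	}
	if err != nil {
		return nil, err
	}
	var info map[string]string
	if err := json.Unmarshal(data, &info); err != nil {
		return nil, err
	}
	return info, nil
}

// recordSandbox maps the volume to the sandbox using it, so that later
// get-stats and resize requests can be routed to the right shim endpoint.
func recordSandbox(volumePath, sandboxID string) error {
	return os.WriteFile(filepath.Join(directVolumesDir, volumePath, sandboxID), nil, 0600)
}
```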
### Kata agent changes
The mount spec of a direct-assigned volume is passed to `kata-agent` through the existing `Storage` GRPC object.
Two new APIs and three new gRPC objects are added to the gRPC protocol between the shim and agent for resizing volumes and getting volume stats:
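The protobuf additions themselves are not shown here. The sketch below gives an assumed, Go-level view of what the two APIs look like from the shim's side; the type and method names are placeholders for illustration, not the real protocol definitions.

```go
// Assumed, illustrative Go-level view of the two new agent APIs described
// above; the real protocol is defined in protobuf, and the names here are
// placeholders rather than authoritative definitions.
package agentapi

import "context"

// VolumeStats is a placeholder for the filesystem usage statistics returned
// by the agent (capacity, used bytes, and so on).
type VolumeStats struct {
	CapacityBytes  uint64
	UsedBytes      uint64
	AvailableBytes uint64
}

type volumeClient interface {
	// GetVolumeStats asks the agent for filesystem stats of the volume
	// mounted at volumeGuestPath inside the guest.
	GetVolumeStats(ctx context.Context, volumeGuestPath string) (*VolumeStats, error)
	// ResizeVolume asks the agent to grow the filesystem on the guest block
	// device backing volumeGuestPath, after the hypervisor has resized the
	// underlying block device on the host side.
	ResizeVolume(ctx context.Context, volumeGuestPath string, sizeBytes uint64) error
}
```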
Let's assume that changes have been made in the `aws-ebs-csi-driver` node driver as described above. The end-to-end workflow then looks like this:
**Node publish volume**
1. In the node CSI driver, the `NodePublishVolume` API invokes: `kata-runtime direct-volume add --volume-path "/kubelet/a/b/c/d/sdf" --mount-info "{\"Device\": \"/dev/sdf\", \"fstype\": \"ext4\"}"`.
2. The Kata runtime writes the mount-info JSON to a file called `mountInfo.json` under `/run/kata-containers/shared/direct-volumes/kubelet/a/b/c/d/sdf`.
**Node unstage volume**
1. In the node CSI driver, the `NodeUnstageVolume` API invokes: `kata-runtime direct-volume remove --volume-path "/kubelet/a/b/c/d/sdf"`.
2. The Kata runtime deletes the directory `/run/kata-containers/shared/direct-volumes/kubelet/a/b/c/d/sdf`.
The table below summarizes when and where those different hooks will be executed.
+ `Hook Path` specifies where the hook's path is resolved.
+ `Exec Place` specifies in which namespace those hooks can be executed.
+ For `CreateContainer` hooks, OCI requires them to run inside the container namespace while the hook executable path is resolved on the host runtime side, which is a non-starter for VM-based containers. So we chose to keep them running in the *host vmm namespace*.
+ `Exec Time` specifies at which point in time those hooks can be executed.
The metrics service also doesn't hold any metrics in memory.
*Metrics size*: response size of one Prometheus scrape request.
It's easy to estimate the size of one metrics fetch request issued by Prometheus.
The formula to calculate the expected size when no gzip compression is in place is:
9 + (144 - 9) * `number of kata sandboxes` (KB)
Prometheus supports `gzip compression`. When enabled, the response size of each request will be smaller:
2 + (10 - 2) * `number of kata sandboxes` (KB)
**Example**
We have 10 sandboxes running on a node. The expected size of one metrics fetch request issued by Prometheus against the kata-monitor agent running on that node will be:
9 + (144 - 9) * 10 = **1.35M**
If `gzip compression` is enabled:
2 + (10 - 2) * 10 = **82K**
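A quick way to sanity-check these figures (assuming, as the example above implies, that the formulas are expressed in KB):

```go
package main

import "fmt"

// expectedScrapeSizeKB applies the sizing formulas above; values are in KB,
// which is an assumption inferred from the 1.35M / 82K examples.
func expectedScrapeSizeKB(sandboxes int, gzip bool) float64 {
	if gzip {
		return 2 + (10-2)*float64(sandboxes)
	}
	return 9 + (144-9)*float64(sandboxes)
}

func main() {
	fmt.Printf("plain: %.0f KB\n", expectedScrapeSizeKB(10, false)) // 1359 KB, i.e. roughly 1.35M
	fmt.Printf("gzip:  %.0f KB\n", expectedScrapeSizeKB(10, true))  // 82 KB
}
```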
#### Metrics delay ####