This improves the user experience, as the admin can deploy Kata
Containers without having to download or set up any additional files.
Of course, if the admin wants something more specific, examples are
provided.
Tests and documentation are updated to reflect this change.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Add three example values files to make it easier for users to try out
different Kata Containers configurations:
- try-kata.values.yaml: Enables all available shims
- try-kata-tee.values.yaml: Enables only TEE/confidential computing shims
- try-kata-nvidia-gpu.values.yaml: Enables only NVIDIA GPU shims
These files use the new structured configuration format and serve as
ready-to-use examples for common deployment scenarios.
Also update the README.md to document these example files and how to use them.
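For instance, trying out all shims with the ready-to-use example could look like this (chart path and release name below are illustrative):
```sh
# try-kata.values.yaml is the example values file shipped alongside the chart
helm install kata-deploy ./tools/packaging/kata-deploy/helm-chart/kata-deploy \
  -f try-kata.values.yaml
```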
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Add comprehensive documentation for the new structured configuration
format, including:
- Migration guide from legacy env.* format
- List of deprecated fields with removal timeline (2 releases)
- Examples of the new structured format
- Explanation of key benefits
- Backward compatibility notes
The documentation makes it clear that the legacy format is deprecated
but will continue to work during the transition period.
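For a feel of the migration, a legacy setting such as the one below (value format illustrative, chart path hypothetical) becomes a per-shim `enabled` entry plus a root-level `debug` flag in the structured format:
```sh
# Legacy, deprecated format: still works during the transition period.
helm install kata-deploy ./kata-deploy \
  --set "env.shims=qemu qemu-tdx" \
  --set env.debug=true
```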
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
This commit adds backward compatibility support to ensure existing
configurations using the legacy env.* format continue to work.
The helper functions now check for legacy env.* values first, and
only fall back to the new structured format if legacy values are
not set. This allows for gradual migration without breaking
existing deployments.
Backward compatibility is maintained for:
- env.shims, env.shims_* (per architecture)
- env.defaultShim, env.defaultShim_* (per architecture)
- env.allowedHypervisorAnnotations
- env.snapshotterHandlerMapping_* (per architecture)
- env.pullTypeMapping_* (per architecture)
- env.agentHttpsProxy, env.agentNoProxy
- env._experimentalSetupSnapshotter
- env._experimentalForceGuestPull_* (per architecture)
- env.debug
Legacy env vars (SHIMS, DEFAULT_SHIM, etc.) are still set in the
DaemonSet when using the old format to maintain full compatibility.
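A minimal shell sketch of that precedence (the variable names are illustrative; the real logic lives in the chart's template helpers):
```sh
#!/usr/bin/env bash
# If a legacy env.* value was rendered, it wins; otherwise fall back to the
# value resolved from the new structured format.
legacy_shims=""                      # e.g. what .Values.env.shims would render to
structured_shims="qemu qemu-tdx"     # e.g. what the structured helpers would resolve
shims="${legacy_shims:-${structured_shims}}"
echo "resolved SHIMS=${shims}"
```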
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
This commit introduces a new structured configuration format for
configuring Kata Containers shims in the Helm chart. The new format
provides:
- Per-shim configuration with enabled/supportedArches
- Per-shim snapshotter, guest pull, and agent proxy settings
- Architecture-aware default shim configuration
- Root-level debug and snapshotter setup configuration
All shims are disabled by default and must be explicitly enabled.
This provides better type safety and clearer organization compared
to the legacy env.* string-based format.
The templates are updated to use the new structure exclusively.
Backward compatibility will be added in a follow-up commit.
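A hedged sketch of such a values file (the key names below are assumptions inferred from this description, not the chart's authoritative schema; the chart path is also illustrative):
```sh
cat > structured.values.yaml <<'EOF'
debug: false                  # root-level debug
snapshotterSetup:
  enabled: true               # root-level snapshotter setup
defaultShim:
  x86_64: qemu                # architecture-aware default shim
shims:
  qemu:
    enabled: true
    supportedArches: ["x86_64", "aarch64"]
  qemu-tdx:
    enabled: true
    supportedArches: ["x86_64"]
    snapshotter: nydus        # per-shim snapshotter
    forceGuestPull: true      # per-shim guest pull
EOF
helm install kata-deploy ./kata-deploy -f structured.values.yaml
```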
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
As some of the global vars can be empty, we should actually check
their _FOR_ARCH version instead.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
As we're making the values.yaml more user friendly, we actually have to
handle the https_proxy and no_proxy entries per shim, instead of having
them globally available, as these settings only affect images being pulled
inside the guest (as in, when using the TEE variants of the shims).
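A hedged sketch of the per-shim shape this takes in the values file (key names are illustrative; only the guest-pull traffic of TEE shims is affected):
```sh
cat > tee-proxy.values.yaml <<'EOF'
shims:
  qemu-tdx:
    enabled: true
    agentHttpsProxy: "http://proxy.internal:3128"   # proxy used inside the guest
    agentNoProxy: "10.0.0.0/8,.cluster.local"
EOF
```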
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Adds a practical set of kernel config options used by docker-in-docker and kind
for network bridging and filtering. It also includes the matching IPv6
support to allow tools like kind that require IPv6 network policies to
work out of the box.
This support includes:
- nftables reject and filtering support for inet/ipv4/ipv6
- Bridge filtering for container-to-container traffic
- IPv6 NAT, filtering, and packet matching rules for network policies
- VXLAN and IPsec crypto support for network tunneling
- TMPFS POSIX ACL support for filesystem permissions
The configs are organized across fragment files:
- common/fs.conf: TMPFS ACL support
- common/crypto.conf: IPsec/VXLAN crypto algorithms
- common/network.conf: VXLAN, IPsec ESP, nftables bridge/ARP/netdev
- common/netfilter.conf: IPv6 netfilter stack and nftables advanced features
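An illustrative subset of the kind of options involved (the authoritative list is in the fragment files themselves; the fragment path below is only an example):
```sh
# Append a few of the nftables/IPv6 related options to the netfilter fragment.
cat >> common/netfilter.conf <<'EOF'
CONFIG_NFT_REJECT=y
CONFIG_NFT_REJECT_INET=y
CONFIG_IP6_NF_FILTER=y
CONFIG_IP6_NF_NAT=y
EOF
```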
Fixes: #11886
Signed-off-by: Simon Kaegi <simon.kaegi@gmail.com>
The GitHub API documentation suggests that `Authorization: Bearer <YOUR-TOKEN>`
is the way to set the auth token, but it also mentions that `token`
should work, so it's unclear if this will help much; it shouldn't hurt, though.
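For reference, the documented form looks like this (any endpoint works; /rate_limit is just a cheap one to test with):
```sh
curl -sS \
  -H "Authorization: Bearer ${GITHUB_TOKEN}" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/rate_limit
```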
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
As the stratovirt CI was removed in #12006, we should remove its
jobs from the required list.
Also, the docker tests have been commented out for months, and
we are considering removing them, so let's clean this file up.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Sometimes it's hard to enumerate all blacklisted namespaces, so let's add a
regular-expression-based "only" filter to allow specifying the namespaces that
should be mutated.
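A minimal sketch of what such an "only" filter amounts to (the option name and wiring in the webhook are not shown; this just illustrates the regex match):
```sh
# Hypothetical pattern: mutate only namespaces starting with "kata-" or "coco-".
only_regex='^(kata|coco)-.*$'
for ns in kata-workloads default coco-tests; do
    if [[ "${ns}" =~ ${only_regex} ]]; then
        echo "${ns}: would be mutated"
    else
        echo "${ns}: skipped"
    fi
done
```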
Signed-off-by: Lukáš Doktor <ldoktor@redhat.com>
Parallelize busybox builds to build a bit faster, and create the
build directory prior to Docker execution, which, in my
environment, helps with permission issues when building busybox
without the kata-containers/build directory existing beforehand.
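Roughly, the two tweaks amount to something like this (paths and job count are illustrative):
```sh
# Create the build directory up front so Docker doesn't create it as root,
# then build busybox in parallel.
mkdir -p kata-containers/build
make -j "$(nproc)"
```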
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
With the change made to the matrix when the CC GPU runner was added,
there was a change in the job name (@sprt saw that coming, but I
didn't).
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
For the nvidia-gpu-snp and nvidia-gpu-tdx shims we must configure containerd to
allow the CDI annotation to be passed down.
This solution may become obsolete soon enough, but the cleanest way to
get it properly working is by adding it here (even if we remove it
before the next release).
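A hedged sketch of the containerd side of this (runtime handler name and the config file location are illustrative):
```sh
cat >> /etc/containerd/config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-qemu-nvidia-gpu-snp]
  # Let the CDI annotations reach the Kata shim.
  pod_annotations = ["cdi.k8s.io/*"]
EOF
```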
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
It's been noticed that, as more RAM is needed to run the CC tests, we
also need to bump the podOverhead of the NVIDIA CC runtime classes to
avoid getting OOM-killed.
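For illustration only (the actual overhead values belong to the chart), the change is of this shape:
```sh
kubectl apply -f - <<'EOF'
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-qemu-nvidia-gpu-snp
handler: kata-qemu-nvidia-gpu-snp
overhead:
  podFixed:
    memory: "2Gi"     # bumped so CC tests don't get OOM-killed; value is illustrative
    cpu: "250m"
EOF
```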
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Let's now make sure that we don't add duplicated values to any of our
entries, making the script as sane as possible for sequential runs.
Vibed with Cursor's help!
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Let's add some helper functions, not yet used, to avoid adding
duplicated items.
This is an expansion of Choi's idea to avoid setting duplicated
items, and it'll help make the whole script idempotent across
sequential runs.
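A minimal sketch of such a helper, assuming space-separated lists (names are illustrative, not the script's actual functions):
```sh
# Append an entry to a space-separated list only if it isn't already there.
add_if_missing() {
    local entry="$1" list="$2"
    case " ${list} " in
        *" ${entry} "*) echo "${list}" ;;
        *) echo "${list:+${list} }${entry}" ;;
    esac
}
add_if_missing "qemu" "clh qemu"   # -> "clh qemu"
add_if_missing "qemu" "clh"        # -> "clh qemu"
```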
Vibed with Cursor's help!
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
I know, this is not simplifying things much for now, but it has a good
intent behind it and will serve as a base for making the
kata-deploy helm chart more user friendly.
With that said, let's add ALLOWED_HYPERVISOR_ANNOTATIONS per arch, while
adding support to set something like "qemu:foo,bar clh:bar foobar
barfoo". Why? Because in the future we'll have a better way to set this
per shim (and the shim is per arch ...).
More details of what we'll do in the future are being discussed here:
https://github.com/kata-containers/kata-containers/issues/12024
Anyway, the variables are **DELIBERATELY** not exposed to the chart for
now, as they will be later on when addressing the issue mentioned
above.
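One plausible reading of that format is that prefixed entries target a shim while bare entries apply to every shim; the parsing below is only an illustration of that reading, not the script's actual code:
```sh
allowed="qemu:foo,bar clh:bar foobar barfoo"
for entry in ${allowed}; do
    case "${entry}" in
        *:*) echo "shim '${entry%%:*}' gets annotations: ${entry#*:}" ;;
        *)   echo "every shim gets annotation: ${entry}" ;;
    esac
done
```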
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
When the runtimeClasses were added, as part of 7cfa826804, the
firecracker runtimeClass ended up missing from the dictionary.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
This reverts commit be05e1370c, which is
not a problem as we never released that option.
Conflicts:
tools/packaging/kata-deploy/helm-chart/README.md
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
We had this logic inside the script back when we didn't use the helm chart.
However, keeping it now only makes the shim script more convoluted for no reason.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
In order to fix:
```
=== Running govulncheck on containerd-shim-kata-v2 ===
Vulnerabilities found in containerd-shim-kata-v2:
=== Symbol Results ===
Vulnerability #1: GO-2025-4015
Excessive CPU consumption in Reader.ReadResponse in net/textproto
More info: https://pkg.go.dev/vuln/GO-2025-4015
Standard library
Found in: net/textproto@go1.24.6
Fixed in: net/textproto@go1.24.8
Vulnerable symbols found:
#1: textproto.Reader.ReadResponse
Vulnerability #2: GO-2025-4014
Unbounded allocation when parsing GNU sparse map in archive/tar
More info: https://pkg.go.dev/vuln/GO-2025-4014
Standard library
Found in: archive/tar@go1.24.6
Fixed in: archive/tar@go1.24.8
Vulnerable symbols found:
#1: tar.Reader.Next
Vulnerability #3: GO-2025-4013
Panic when validating certificates with DSA public keys in crypto/x509
More info: https://pkg.go.dev/vuln/GO-2025-4013
Standard library
Found in: crypto/x509@go1.24.6
Fixed in: crypto/x509@go1.24.8
Vulnerable symbols found:
#1: x509.Certificate.Verify
#2: x509.Certificate.Verify
Vulnerability #4: GO-2025-4012
Lack of limit when parsing cookies can cause memory exhaustion in net/http
More info: https://pkg.go.dev/vuln/GO-2025-4012
Standard library
Found in: net/http@go1.24.6
Fixed in: net/http@go1.24.8
Vulnerable symbols found:
#1: http.Client.Do
#2: http.Client.Get
#3: http.Client.Head
#4: http.Client.Post
#5: http.Client.PostForm
Use '-show traces' to see the other 9 found symbols
Vulnerability #5: GO-2025-4011
Parsing DER payload can cause memory exhaustion in encoding/asn1
More info: https://pkg.go.dev/vuln/GO-2025-4011
Standard library
Found in: encoding/asn1@go1.24.6
Fixed in: encoding/asn1@go1.24.8
Vulnerable symbols found:
#1: asn1.Unmarshal
#2: asn1.UnmarshalWithParams
Vulnerability #6: GO-2025-4010
Insufficient validation of bracketed IPv6 hostnames in net/url
More info: https://pkg.go.dev/vuln/GO-2025-4010
Standard library
Found in: net/url@go1.24.6
Fixed in: net/url@go1.24.8
Vulnerable symbols found:
#1: url.JoinPath
#2: url.Parse
#3: url.ParseRequestURI
#4: url.URL.Parse
#5: url.URL.UnmarshalBinary
Vulnerability #7: GO-2025-4009
Quadratic complexity when parsing some invalid inputs in encoding/pem
More info: https://pkg.go.dev/vuln/GO-2025-4009
Standard library
Found in: encoding/pem@go1.24.6
Fixed in: encoding/pem@go1.24.8
Vulnerable symbols found:
#1: pem.Decode
Vulnerability #8: GO-2025-4008
ALPN negotiation error contains attacker controlled information in
crypto/tls
More info: https://pkg.go.dev/vuln/GO-2025-4008
Standard library
Found in: crypto/tls@go1.24.6
Fixed in: crypto/tls@go1.24.8
Vulnerable symbols found:
#1: tls.Conn.Handshake
#2: tls.Conn.HandshakeContext
#3: tls.Conn.Read
#4: tls.Conn.Write
#5: tls.Dial
Use '-show traces' to see the other 4 found symbols
Vulnerability #9: GO-2025-4007
Quadratic complexity when checking name constraints in crypto/x509
More info: https://pkg.go.dev/vuln/GO-2025-4007
Standard library
Found in: crypto/x509@go1.24.6
Fixed in: crypto/x509@go1.24.9
Vulnerable symbols found:
#1: x509.CertPool.AppendCertsFromPEM
#2: x509.Certificate.CheckCRLSignature
#3: x509.Certificate.CheckSignature
#4: x509.Certificate.CheckSignatureFrom
#5: x509.Certificate.CreateCRL
Use '-show traces' to see the other 27 found symbols
Vulnerability #10: GO-2025-4006
Excessive CPU consumption in ParseAddress in net/mail
More info: https://pkg.go.dev/vuln/GO-2025-4006
Standard library
Found in: net/mail@go1.24.6
Fixed in: net/mail@go1.24.8
Vulnerable symbols found:
#1: mail.AddressParser.Parse
#2: mail.AddressParser.ParseList
#3: mail.Header.AddressList
#4: mail.ParseAddress
#5: mail.ParseAddressList
```
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Otherwise we'll face issues like:
```
Error: found in Chart.yaml, but missing in charts/ directory: node-feature-discovery
```
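The usual fix is to vendor the declared dependencies into charts/ before linting or packaging (the chart path below is illustrative):
```sh
helm dependency update tools/packaging/kata-deploy/helm-chart/kata-deploy
```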
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Let's ensure that we add NFD as a weak dependency of the kata-deploy
helm chart.
What we're doing for now is leaving it up to the user / admin to enable
it, and, if enabled, we do an explicit check for virtualization
support (x86_64 only for now).
If NFD is already deployed and it's also enabled in the kata-deploy
helm chart, we fail the installation with a clear error message to the
user.
While I know that kata-remote **DOES NOT** require virtualization, I've
left it out of that check (with a comment for when we add a peer-pods
dependency on kata-deploy) in order to simplify things for now, as
kata-remote is not a shim deployed by default.
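Enabling it at install time would then look roughly like this (the value name is an assumption, not necessarily the chart's exact key; chart path illustrative):
```sh
helm install kata-deploy ./kata-deploy \
  --set nodeFeatureDiscovery.enabled=true
```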
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
As Kata Containers can be consumed by other helm charts, hard-coding the
default runtime class name to `kata` is not optimal.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
All the options that take a specific shim as an argument MUST have
specific per-arch settings, as not all the shims are available for all
the arches, which otherwise leads to issues when setting up multi-arch
deployments.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Let's ensure that we consume NVRC releases straight from GitHub instead
of building the binaries ourselves.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
This allows us to test privileged containers when using the webhook.
We can do this because kata-deploy sets privileged_without_host_devices = true for the kata runtime by default.
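For reference, that default corresponds to a containerd runtime configuration of this shape (handler name and file path are illustrative):
```sh
cat >> /etc/containerd/config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata]
  # Privileged Kata pods don't get the host's devices passed through.
  privileged_without_host_devices = true
EOF
```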
Signed-off-by: Saul Paredes <saulparedes@microsoft.com>
The libs in question were added when moving to developer.nvidia.com,
but now that we're switching back to Ubuntu-only builds, they are no
longer needed. Remove them to keep the rootfs as minimal as possible.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
In the case of CC we need additional libraries in the rootfs.
Add them conditionally if type == confidential.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
When the NodeFeatureRule CRD is detected, kata-deploy will:
* Create the specific NodeFeatureRules for the x86_64 TEEs
* Adapt the TEE runtime classes to take into account the number of keys
available in the system when spawning the pod sandbox.
Note, we still do not have NFD as a sub-dependency of the helm chart, and
I'm not even sure we ever will. However, it's important to integrate
better with scenarios where NFD is already present.
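Detection of the CRD can be as simple as checking whether the NFD NodeFeatureRule resource is registered, e.g.:
```sh
if kubectl get crd nodefeaturerules.nfd.k8s.io >/dev/null 2>&1; then
    echo "NodeFeatureRule CRD present: creating the TEE NodeFeatureRules"
fi
```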
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
This allows us to do a full multi-arch deployment, as the user can
easily select which shims are deployed per arch; some of the VMMs are
not supported on all architectures, which would otherwise lead to a
broken installation.
Now, by passing shims per arch, we can easily have a heterogeneous
deployment where, for instance, we set qemu-se-runtime-rs for s390x,
qemu-cca for aarch64, and qemu-snp / qemu-tdx for x86_64, call all of
those the default kata-confidential ... and have everything working with
the same deployment.
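Expressed via the arch-suffixed values (key names follow the env.* naming mentioned elsewhere in this series and may differ in detail; chart path illustrative), such a deployment could look like:
```sh
helm install kata-deploy ./kata-deploy \
  --set "env.shims_s390x=qemu-se-runtime-rs" \
  --set "env.shims_aarch64=qemu-cca" \
  --set "env.shims_x86_64=qemu-snp qemu-tdx" \
  --set "env.defaultShim_s390x=qemu-se-runtime-rs" \
  --set "env.defaultShim_aarch64=qemu-cca" \
  --set "env.defaultShim_x86_64=qemu-snp"
```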
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Build only from Ubuntu repositories; do not mix with developer.nvidia.com.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Update tools/osbuilder/rootfs-builder/nvidia/nvidia_chroot.sh
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Aurélien has moved to a reliable mirror for our tests, but we missed
that our tools Dockerfiles could benefit from the same change, which is
added now.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Although we saw this happening, we expected it NOT to happen ...
The cached kernel is not signed, but we expect it to be, so we end up
bailing. :-/
Let's ensure a full rebuild of the kernels happens and we'll be good from
that point onwards.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
By doing this we can ensure that more than one instance of
nydus-snapshotter can be running inside the cluster, which is super
useful for doing A-B "upgrades" (where we install a new version of
kata-containers + nydus on B, while A is still running, and then only
uninstall A after making sure that B is working as expected).
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
We've been wrongly trying to set the `${shim}` (qemu-snp, for
instance) as the hypervisor name in the kata-containers configuration
file, leading to `tomlq` breaking, as all the qemu* shims are tied to
the `[hypervisor.qemu]` section, regardless of the shim having a
different name or the hypervisor being experimental or not.
```sh
$ grep "hypervisor.qemu*" src/runtime/config/configuration-*
src/runtime/config/configuration-qemu-cca.toml.in:[hypervisor.qemu]
src/runtime/config/configuration-qemu-coco-dev.toml.in:[hypervisor.qemu]
src/runtime/config/configuration-qemu-nvidia-gpu-snp.toml.in:[hypervisor.qemu]
src/runtime/config/configuration-qemu-nvidia-gpu-tdx.toml.in:[hypervisor.qemu]
src/runtime/config/configuration-qemu-nvidia-gpu.toml.in:[hypervisor.qemu]
src/runtime/config/configuration-qemu-se.toml.in:[hypervisor.qemu]
src/runtime/config/configuration-qemu-snp.toml.in:[hypervisor.qemu]
src/runtime/config/configuration-qemu-tdx.toml.in:[hypervisor.qemu]
src/runtime/config/configuration-qemu.toml.in:[hypervisor.qemu]
$ grep "hypervisor.qemu*" src/runtime-rs/config/configuration-*
src/runtime-rs/config/configuration-qemu-runtime-rs.toml.in:[hypervisor.qemu]
src/runtime-rs/config/configuration-qemu-se-runtime-rs.toml.in:[hypervisor.qemu]
```
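With the fix, queries target the hypervisor section name (`qemu`) rather than the shim name; for example (file path illustrative):
```sh
tomlq -r '.hypervisor.qemu.path' \
  /opt/kata/share/defaults/kata-containers/configuration-qemu-snp.toml
```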
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
We've recently added support for:
* deploying and setting up a snapshotter, via
_experimentalSetupSnapshotter
* enabling experimental_force_guest_pull, via
_experimentalForceGuestPull
However, we never updated the documentation for those, so let's do it
now.
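For example (value formats and chart path are illustrative):
```sh
helm install kata-deploy ./kata-deploy \
  --set env._experimentalSetupSnapshotter=true \
  --set env._experimentalForceGuestPull=true
```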
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>