# Helm Configuration

## Parameters

The Helm chart provides a comprehensive set of configuration options. You can view the parameters and their descriptions in the GitHub source or with `helm`:
```bash
# List available kata-deploy chart versions:
# helm search repo kata-deploy-charts/kata-deploy --versions
#
# Then replace X.Y.Z below with the desired chart version:
helm show values --version X.Y.Z oci://ghcr.io/kata-containers/kata-deploy-charts/kata-deploy
```
## `shims`

Kata ships with a number of pre-built artifacts and runtimes. You may selectively enable or disable specific shims. For example:
```yaml
shims:
  disableAll: true
  qemu:
    enabled: true
  qemu-nvidia-gpu:
    enabled: true
  qemu-nvidia-gpu-snp:
    enabled: false
```
Shims can also have configuration options specific to them:
```yaml
qemu-nvidia-gpu:
  enabled: ~
  supportedArches:
    - amd64
    - arm64
  allowedHypervisorAnnotations: []
  containerd:
    snapshotter: ""
  runtimeClass:
    # This label is automatically added by gpu-operator. Override it
    # if you want to use a different label.
    # Uncomment once GPU Operator v26.3 is out
    # nodeSelector:
    #   nvidia.com/cc.ready.state: "false"
```
Refer to the default `values.yaml` (shown by `helm show values` above) for full details.
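As a sketch, a values file combining the keys shown above might disable everything except the NVIDIA GPU shim and pin its containerd snapshotter (the file name `values-gpu.yaml` and the choice of `erofs` are illustrative):

```yaml
# values-gpu.yaml (hypothetical file name): enable only qemu-nvidia-gpu
# and select the erofs snapshotter for it.
shims:
  disableAll: true
  qemu-nvidia-gpu:
    enabled: true
    containerd:
      snapshotter: "erofs"
```

Pass it at install time with `helm install ... -f values-gpu.yaml`.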
## Custom Runtimes

Kata allows you to create custom runtime configurations by overlaying user-provided settings onto one of the pre-existing runtime configs. For example, we can use `qemu-nvidia-gpu` as a base config and overlay our own parameters onto it:
```yaml
customRuntimes:
  enabled: false
  runtimes:
    my-gpu-runtime:
      baseConfig: "qemu-nvidia-gpu"  # Required: existing config to use as base
      dropIn: |                      # Optional: overrides via config.d mechanism
        [hypervisor.qemu]
        default_memory = 1024
        default_vcpus = 4
      runtimeClass: |
        kind: RuntimeClass
        apiVersion: node.k8s.io/v1
        metadata:
          name: kata-my-gpu-runtime
          labels:
            app.kubernetes.io/managed-by: kata-deploy
        handler: kata-my-gpu-runtime
        overhead:
          podFixed:
            memory: "640Mi"
            cpu: "500m"
        scheduling:
          nodeSelector:
            katacontainers.io/kata-runtime: "true"
      # Optional: CRI-specific configuration
      containerd:
        snapshotter: "nydus"  # Configure containerd snapshotter (nydus, erofs, etc.)
      crio:
        pullType: "guest-pull"  # Configure CRI-O runtime_pull_image = true
```
Again, view the default `values.yaml` file for more details.
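Once the chart has created the `kata-my-gpu-runtime` RuntimeClass from the example above, workloads opt in via `runtimeClassName`. A minimal sketch (the pod name and image are illustrative, not part of the chart):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test          # illustrative name
spec:
  runtimeClassName: kata-my-gpu-runtime
  containers:
    - name: app
      image: busybox      # illustrative image
      command: ["sleep", "infinity"]
```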
## Examples

We provide a few examples that you can pass to helm via the `-f`/`--values` flag.
### try-kata-tee.values.yaml

This file enables only the TEE (Trusted Execution Environment) shims for confidential computing:
```bash
helm install kata-deploy oci://ghcr.io/kata-containers/kata-deploy-charts/kata-deploy \
  --version VERSION \
  -f try-kata-tee.values.yaml
```
Includes:

- `qemu-snp` - AMD SEV-SNP (amd64)
- `qemu-tdx` - Intel TDX (amd64)
- `qemu-se` - IBM Secure Execution for Linux (SEL) (s390x)
- `qemu-se-runtime-rs` - IBM Secure Execution for Linux (SEL) Rust runtime (s390x)
- `qemu-cca` - Arm Confidential Compute Architecture (arm64)
- `qemu-coco-dev` - Confidential Containers development (amd64, s390x)
- `qemu-coco-dev-runtime-rs` - Confidential Containers development Rust runtime (amd64, s390x)
### try-kata-nvidia-gpu.values.yaml

This file enables only the NVIDIA GPU-enabled shims:
```bash
helm install kata-deploy oci://ghcr.io/kata-containers/kata-deploy-charts/kata-deploy \
  --version VERSION \
  -f try-kata-nvidia-gpu.values.yaml
```
Includes:

- `qemu-nvidia-gpu` - Standard NVIDIA GPU support (amd64, arm64)
- `qemu-nvidia-gpu-snp` - NVIDIA GPU with AMD SEV-SNP (amd64)
- `qemu-nvidia-gpu-tdx` - NVIDIA GPU with Intel TDX (amd64)
## nodeSelector

We can deploy Kata only to specific nodes using `nodeSelector`:
```bash
# First, label the nodes where you want kata-containers to be installed
$ kubectl label nodes worker-node-1 kata-containers=enabled
$ kubectl label nodes worker-node-2 kata-containers=enabled

# Then install the chart with `nodeSelector`
$ helm install kata-deploy \
    --set nodeSelector.kata-containers="enabled" \
    "${CHART}" --version "${VERSION}"
```
You can also use a values file:

```yaml
nodeSelector:
  kata-containers: "enabled"
  node-type: "worker"
```

```bash
$ helm install kata-deploy -f values.yaml "${CHART}" --version "${VERSION}"
```
## Multiple Kata Installations on the Same Node

For debugging, testing, and other use cases, it is possible to deploy multiple versions of Kata on the very same node. Every installed artifact gets the `multiInstallSuffix` appended so that each installation can be distinguished. Beware that this requires at least containerd 2.0, since that version introduced the drop-in configuration support that `multiInstallSuffix` depends on.
```bash
$ helm install kata-deploy-cicd \
    -n kata-deploy-cicd \
    --set env.multiInstallSuffix=cicd \
    --set env.debug=true \
    "${CHART}" --version "${VERSION}"
```
Note: runtimeClasses are automatically created by Helm (via `runtimeClasses.enabled=true`, which is the default).
Now verify the installation by examining the runtimeClasses:

```bash
$ kubectl get runtimeClasses
NAME                            HANDLER                         AGE
kata-clh-cicd                   kata-clh-cicd                   77s
kata-cloud-hypervisor-cicd      kata-cloud-hypervisor-cicd      77s
kata-dragonball-cicd            kata-dragonball-cicd            77s
kata-fc-cicd                    kata-fc-cicd                    77s
kata-qemu-cicd                  kata-qemu-cicd                  77s
kata-qemu-coco-dev-cicd         kata-qemu-coco-dev-cicd         77s
kata-qemu-nvidia-gpu-cicd       kata-qemu-nvidia-gpu-cicd       77s
kata-qemu-nvidia-gpu-snp-cicd   kata-qemu-nvidia-gpu-snp-cicd   77s
kata-qemu-nvidia-gpu-tdx-cicd   kata-qemu-nvidia-gpu-tdx-cicd   76s
kata-qemu-runtime-rs-cicd       kata-qemu-runtime-rs-cicd       77s
kata-qemu-se-runtime-rs-cicd    kata-qemu-se-runtime-rs-cicd    77s
kata-qemu-snp-cicd              kata-qemu-snp-cicd              77s
kata-qemu-tdx-cicd              kata-qemu-tdx-cicd              77s
kata-stratovirt-cicd            kata-stratovirt-cicd            77s
```
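Workloads then target a specific installation by referencing its suffixed RuntimeClass. A minimal sketch (the pod name and image are illustrative); note the `-cicd` suffix matching the listing above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: suffix-test      # illustrative name
spec:
  runtimeClassName: kata-qemu-cicd
  containers:
    - name: app
      image: busybox     # illustrative image
      command: ["sleep", "infinity"]
```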
## RuntimeClass Node Selectors for TEE Shims
**Manual configuration:** Any `nodeSelector` you set under `shims.<shim>.runtimeClass.nodeSelector` is always applied to that shim's RuntimeClass, whether or not NFD is present. Use this when you want to pin TEE workloads to specific nodes (e.g. without NFD, or with custom labels).

**Auto-inject when NFD is present:** If you do not set a `runtimeClass.nodeSelector` for a TEE shim, the chart can automatically inject NFD-based labels when NFD is detected in the cluster (deployed by this chart with `node-feature-discovery.enabled=true` or found externally):
- AMD SEV-SNP shims: `amd.feature.node.kubernetes.io/snp: "true"`
- Intel TDX shims: `intel.feature.node.kubernetes.io/tdx: "true"`
- IBM Secure Execution for Linux (SEL) shims (s390x): `feature.node.kubernetes.io/cpu-security.se.enabled: "true"`
The chart uses Helm's `lookup` function to detect NFD (by looking for the `node-feature-discovery-worker` DaemonSet). Auto-inject only runs when NFD is detected and no manual `runtimeClass.nodeSelector` is set for that shim.

Note: NFD detection requires cluster access. During `helm template` (a dry-run without a cluster), external NFD is not seen, so auto-injected labels are not added. Manual `runtimeClass.nodeSelector` values are still applied in all cases.
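The manual-configuration path described above can be sketched as a values fragment. This example pins the TDX shim using the same NFD label the chart would auto-inject, but any custom label would work:

```yaml
# Manually pin the qemu-tdx RuntimeClass to TDX-capable nodes.
# Setting this disables auto-injection for this shim.
shims:
  qemu-tdx:
    runtimeClass:
      nodeSelector:
        intel.feature.node.kubernetes.io/tdx: "true"
```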
## Customizing Configuration with Drop-in Files
When kata-deploy installs Kata Containers, the base configuration files should not be modified directly. Instead, use drop-in configuration files to customize settings. This approach ensures your customizations survive kata-deploy upgrades.
### How Drop-in Files Work

The Kata runtime reads the base configuration file and then applies any `.toml` files found in the `config.d/` directory alongside it. Files are processed in alphabetical order, with later files overriding earlier settings.
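The ordering rule can be illustrated with a throwaway directory standing in for `config.d/` (this is only a sketch of the precedence behavior, not run on a real node):

```bash
# Simulate a config.d/ directory with two drop-in files.
dir=$(mktemp -d)
printf '[hypervisor.qemu]\ndefault_memory = 2048\n' > "$dir/10-base.toml"
printf '[hypervisor.qemu]\ndefault_memory = 4096\n' > "$dir/50-custom.toml"
# Files are applied in this (alphabetical) order, so the
# default_memory value in 50-custom.toml wins:
ls "$dir"
```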
### Creating Custom Drop-in Files

To add custom settings, create a `.toml` file in the appropriate `config.d/` directory. Use a numeric prefix to control the order of application.

Reserved prefixes (used by kata-deploy):

- `10-*`: Core kata-deploy settings
- `20-*`: Debug settings
- `30-*`: Kernel parameters

Recommended prefixes for custom settings: `50`-`89`
### Drop-In Config Examples

#### Adding Custom Kernel Parameters

```bash
# SSH into the node or use kubectl exec
sudo mkdir -p /opt/kata/share/defaults/kata-containers/runtimes/qemu/config.d/
# Use `sudo tee` rather than `sudo cat >`: a redirection runs in the
# unprivileged shell, so `sudo cat > file` cannot write a root-owned file.
sudo tee /opt/kata/share/defaults/kata-containers/runtimes/qemu/config.d/50-custom.toml << 'EOF'
[hypervisor.qemu]
kernel_params = "my_param=value"
EOF
```
#### Changing Default Memory Size

```bash
sudo tee /opt/kata/share/defaults/kata-containers/runtimes/qemu/config.d/50-memory.toml << 'EOF'
[hypervisor.qemu]
default_memory = 4096
EOF
```