Compare commits

..

42 Commits

Author SHA1 Message Date
Fabiano Fidêncio
f97388b0d9 versions: bump containerd active version to 2.2
SSIA

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-02-07 19:12:49 +01:00
Fabiano Fidêncio
481aed7886 tests: cri: Re-enable podsandboxapi tests
SSIA

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-02-07 19:12:30 +01:00
Manuel Huber
d9d1073cf1 gpu: Install packages for devkit
Introduce a new function to install additional packages into the
devkit flavor. With modprobe, we avoid errors on pod startup
related to loading nvidia kernel modules in the NVRC phase.
Note that the production flavor gets modprobe from busybox; see its
configuration file, which contains CONFIG_MODPROBE=y.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-02-06 09:58:32 +01:00
Manuel Huber
a786582d0b rootfs: deprecate initramfs dm-verity mode
Remove the initramfs folder and its build steps, and use the
kernel-based dm-verity enforcement for the handlers that used the
initramfs mode. Also, remove the initramfs verity mode
capability from the shims and their configs.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-02-05 23:04:35 +01:00
Manuel Huber
cf7f340b39 tests: Read and overwrite kernel_verity_parameters
Read the kernel_verity_parameters from the shim config and adjust
the root hash for the negative test.
Further, improve some of the test logic by using shared
functions. This especially ensures we don't read the full
journalctl logs on a node but only the portion of the logs we are
actually supposed to look at.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-02-05 23:04:35 +01:00
Manuel Huber
7958be8634 runtime: Make kernel_verity_params overwritable
Similar to the kernel_params annotation, add a
kernel_verity_params annotation and add logic to make these
parameters overwritable. For instance, this can be used in test
logic to provide bogus dm-verity hashes for negative tests.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-02-05 23:04:35 +01:00
Manuel Huber
7700095ea8 runtime-rs: Make kernel_verity_params overwritable
Similar to the kernel_params annotation, add a
kernel_verity_params annotation and add logic to make these
parameters overwritable. For instance, this can be used in test
logic to provide bogus dm-verity hashes for negative tests.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-02-05 23:04:35 +01:00
Manuel Huber
472b50fa42 runtime-rs: Enable kernelinit dm-verity variant
This change introduces the kernel_verity_parameters knob to the
rust based shim, picking up dm-verity information in a new config
field (the corresponding build variable is already produced by
the shim build). The change extends the shim to parse dm-verity
information from this parameter and to construct the kernel command
line appropriately, based on the indicated initramfs or kernelinit
build variant.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-02-05 23:04:35 +01:00
Manuel Huber
f639c3fa17 runtime: Enable kernelinit dm-verity variant
This change introduces the kernel_verity_parameters knob to the
Go based shim, picking up dm-verity information in a new config
field (the corresponding build variable is already produced by
the shim build). The change extends the shim to parse dm-verity
information from this parameter and to construct the kernel command
line appropriately, based on the indicated initramfs or kernelinit
build variant.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-02-05 23:04:35 +01:00
Manuel Huber
e120dd4cc6 tests: cc: Remove quotes from kernel command line
Since the dm-mod.create parameters use quotes, remove the
backslashes used to escape these quotes from the output we
retrieve. This will enable attestation tests to work with the
kernelinit dm-verity mode.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-02-05 23:04:35 +01:00
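The unescaping described in the commit above can be sketched as follows. This is a minimal illustration, not the test suite's actual (bash-based) logic, and the function name is hypothetical:

```rust
// Minimal sketch: strip the backslashes that escape quotes in a
// retrieved kernel command line, so that dm-mod.create=\"...\"
// becomes dm-mod.create="...". Illustrative only.
fn unescape_quotes(cmdline: &str) -> String {
    cmdline.replace("\\\"", "\"")
}

fn main() {
    let raw = "dm-mod.create=\\\"dm-verity,,,ro,0 736328 verity 1\\\"";
    let clean = unescape_quotes(raw);
    assert_eq!(clean, "dm-mod.create=\"dm-verity,,,ro,0 736328 verity 1\"");
    println!("{clean}");
}
```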
Manuel Huber
976df22119 rootfs: Change condition for cryptsetup-bin
Measured rootfs mode and CDH secure storage feature require the
cryptsetup-bin and e2fsprogs components in the guest.
This change makes this more explicit: confidential guests are
users of the CDH secure container image layer storage feature.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-02-05 23:04:35 +01:00
Manuel Huber
a3c4e0b64f rootfs: Introduce kernelinit dm-verity mode
This change introduces the kernelinit dm-verity mode, allowing
initramfs-less dm-verity enforcement against the rootfs image.
For this, the change introduces a new variable with dm-verity
information. This variable will be picked up by shim
configurations in subsequent commits.
This will allow the shims to build the kernel command line
with dm-verity information based on the existing
kernel_parameters configuration knob and a new
kernel_verity_params configuration knob. The latter
specifically provides the relevant dm-verity information.
This new configuration knob avoids merging the verity
parameters into the kernel_params field. This spares us the
cumbersome escape logic that would otherwise be required, as we
do not need to pass the dm-mod.create="..." parameter directly
in kernel_params, but only the relevant dm-verity parameters in
a semi-structured manner (see above). The final command line is
assembled in one place only: the shims. Further, the knob is a
single line that developers (or CI tasks) can easily comment out
to disable dm-verity enforcement.

This change produces the new kernelinit dm-verity parameters for
the NVIDIA runtime handlers, and modifies the format of how
these parameters are prepared for all handlers. With this, the
parameters are currently no longer provided to the
kernel_params configuration knob for any runtime handler.
This change alone should thus not be used, as dm-verity
information will no longer be picked up by the shims.

systemd-analyze on the coco-dev handler shows that, with the
kernelinit mode on a local machine, less time is spent in the
kernel phase, slightly speeding up pod start-up. On that machine,
the average kernel phase duration dropped from 172.5ms to 141ms
(4 measurements, each with a basic pod manifest), an improvement
of about 18 percent.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-02-05 23:04:35 +01:00
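The semi-structured kernel_verity_params format described above (a comma-separated key=value list) can be illustrated with a small parsing sketch. The field names follow the commit messages, but the function is a simplified, hypothetical illustration rather than the shims' real parser (which also validates each field):

```rust
use std::collections::HashMap;

// Sketch: split a comma-separated key=value string such as the
// kernel_verity_params knob into a map. No validation is done here;
// the real parser rejects missing or malformed fields.
fn parse_verity_knob(params: &str) -> HashMap<String, String> {
    params
        .split(',')
        .filter_map(|field| {
            let mut kv = field.trim().splitn(2, '=');
            match (kv.next(), kv.next()) {
                (Some(k), Some(v)) if !k.is_empty() => Some((k.to_string(), v.to_string())),
                _ => None,
            }
        })
        .collect()
}

fn main() {
    // Illustrative values only.
    let knob = "root_hash=abc123,salt=ff00,data_blocks=736328,\
                data_block_size=512,hash_block_size=512";
    let parsed = parse_verity_knob(knob);
    assert_eq!(parsed["root_hash"], "abc123");
    assert_eq!(parsed["data_blocks"], "736328");
    println!("parsed {} verity fields", parsed.len());
}
```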
Manuel Huber
83a0bd1360 gpu: use dm-verity for the non-TEE GPU handler
Use a dm-verity protected rootfs image for the non-TEE NVIDIA
GPU handler as well.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-02-05 23:04:35 +01:00
Manuel Huber
02ed4c99bc rootfs: Use maxdepth=1 to search for kata tarballs
These tarballs are in the top layer of the build directory,
no need to traverse all sub-directories.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-02-05 23:04:35 +01:00
Manuel Huber
d37db5f068 rootfs: Restore "gpu: Handle root_hash.txt ..."
This reverts commit 923f97bc66 in
order to reinstate the logic from commit
e4a13b9a4a.

The latter commit was previously reverted due to the NVIDIA GPU TEE
handler using an initrd, not an image.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-02-05 23:04:35 +01:00
Manuel Huber
f1ca547d66 initramfs: introduce log function
Log to /dev/kmsg so that log messages show up and do not get lost.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-02-05 23:04:35 +01:00
Manuel Huber
6d0bb49716 runtime: nvidia: Use img and sanitize whitespaces
Shift NVIDIA shim configurations to use an image instead of an initrd,
and remove trailing whitespaces from the configs.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-02-05 23:04:35 +01:00
Manuel Huber
282014000f tests: cc: support initrd, image for attestation
Allow using an image instead of an initrd. For confidential
guests using images, the assumption is that the guest kernel uses
dm-verity protection, implicitly measuring the rootfs image via
the kernel command line's dm-verity information.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-02-05 23:04:35 +01:00
Greg Kurz
e430b2641c Merge pull request #12435 from bpradipt/crio-annotation
shim: Add CRI-O annotation support for device cold plug
2026-02-05 09:29:19 +01:00
Alex Lyn
e257430976 Merge pull request #12433 from manuelh-dev/mahuber/cfg-sanitize-whitespaces
runtimes: Sanitize trailing whitespaces
2026-02-05 09:31:21 +08:00
Fabiano Fidêncio
dda1b30c34 tests: nvidia-nim: Use sealed secrets for NGC_API_KEY
Convert the NGC_API_KEY from a regular Kubernetes secret to a sealed
secret for the CC GPU tests. This ensures the API key is only accessible
within the confidential enclave after successful attestation.

The sealed secret uses the "vault" type which points to a resource stored
in the Key Broker Service (KBS). The Confidential Data Hub (CDH) inside
the guest will unseal this secret by fetching it from KBS after
attestation.

The initdata file is created AFTER create_tmp_policy_settings_dir()
copies the empty default file, and BEFORE auto_generate_policy() runs.
This allows genpolicy to add the generated policy.rego to our custom
CDH configuration.

The sealed secret format follows the CoCo specification:
sealed.<JWS header>.<JWS payload>.<signature>

Where the payload contains:
- version: "0.1.0"
- type: "vault" (pointer to KBS resource)
- provider: "kbs"
- resource_uri: KBS path to the actual secret
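A minimal check of the sealed-secret envelope shape quoted above might look like the sketch below. The helper is hypothetical; real CoCo tooling also base64url-decodes the JWS payload and validates its fields:

```rust
// Sketch: validate the outer shape of a CoCo sealed secret,
// "sealed.<JWS header>.<JWS payload>.<signature>".
// This only checks the envelope; it does not verify the signature
// or decode the payload.
fn is_sealed_secret(s: &str) -> bool {
    let parts: Vec<&str> = s.split('.').collect();
    parts.len() == 4 && parts[0] == "sealed" && parts.iter().all(|p| !p.is_empty())
}

fn main() {
    // Header/payload/signature values here are placeholders.
    assert!(is_sealed_secret("sealed.fakejwsheader.fakejwspayload.fakesignature"));
    assert!(!is_sealed_secret("plain-text-secret"));
    assert!(!is_sealed_secret("sealed..payload.sig"));
    println!("envelope checks passed");
}
```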

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-02-04 12:34:44 +01:00
Fabiano Fidêncio
c9061f9e36 tests: kata-deploy: Increase post-deployment wait time
Increase the sleep time after kata-deploy deployment from 10s to 60s
to give more time for runtimes to be configured. This helps avoid
race conditions on slower K8s distributions like k3s where the
RuntimeClass may not be immediately available after the DaemonSet
rollout completes.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-02-04 12:13:53 +01:00
Fabiano Fidêncio
0fb2c500fd tests: kata-deploy: Merge E2E tests to avoid timing issues
Merge the two E2E tests ("Custom RuntimeClass exists with correct
properties" and "Custom runtime can run a pod") into a single test, as
those two are strongly dependent on each other.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-02-04 12:13:53 +01:00
Fabiano Fidêncio
fef93f1e08 tests: kata-deploy: Use die() instead of fail() for error handling
Replace fail() calls with die() which is already provided by
common.bash. The fail() function doesn't exist in the test
infrastructure, causing "command not found" errors when tests fail.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-02-04 12:13:53 +01:00
Fabiano Fidêncio
f90c12d4df kata-deploy: Avoid text file busy error with nydus-snapshotter
We cannot overwrite a binary that's currently in use; that's why
elsewhere we remove/unlink the binary first (the running process
keeps its file descriptor, so this is safe) and only then copy
the new binary. However, we missed doing this for the
nydus-snapshotter deployment.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-02-04 10:24:49 +01:00
Manuel Huber
30c7325e75 runtimes: Sanitize trailing whitespaces
Clean up trailing whitespace, making life easier for those who
have configured their IDE to strip it automatically.
Going forward, avoid adding new code with trailing whitespace.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-02-03 11:46:30 -08:00
Steve Horsman
30494abe48 Merge pull request #12426 from kata-containers/dependabot/github_actions/zizmorcore/zizmor-action-0.4.1
build(deps): bump zizmorcore/zizmor-action from 0.2.0 to 0.4.1
2026-02-03 14:38:54 +00:00
Pradipta Banerjee
8a449d358f shim: Add CRI-O annotation support for device cold plug
Add support for CRI-O annotations when fetching pod identifiers for
device cold plug. The code now checks containerd CRI annotations first,
then falls back to CRI-O annotations if they are empty.

This enables device cold plug to work with both containerd and CRI-O
container runtimes.

Annotations supported:
- containerd: io.kubernetes.cri.sandbox-name, io.kubernetes.cri.sandbox-namespace
- CRI-O: io.kubernetes.cri-o.KubeName, io.kubernetes.cri-o.Namespace
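The containerd-first, CRI-O-fallback lookup described above can be sketched as follows. The helper function and the plain map stand in for the shim's real OCI-spec annotation handling; only the annotation keys are taken from the commit:

```rust
use std::collections::HashMap;

// Sketch of the fallback: prefer the containerd CRI annotation,
// fall back to the CRI-O one when it is absent or empty.
fn pod_name(annotations: &HashMap<String, String>) -> Option<String> {
    ["io.kubernetes.cri.sandbox-name", "io.kubernetes.cri-o.KubeName"]
        .iter()
        .find_map(|key| annotations.get(*key).filter(|v| !v.is_empty()).cloned())
}

fn main() {
    let mut annos = HashMap::new();
    // Only the CRI-O annotation is set, so the fallback kicks in.
    annos.insert(
        "io.kubernetes.cri-o.KubeName".to_string(),
        "my-pod".to_string(),
    );
    assert_eq!(pod_name(&annos), Some("my-pod".to_string()));
    assert_eq!(pod_name(&HashMap::new()), None);
}
```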

Signed-off-by: Pradipta Banerjee <pradipta.banerjee@gmail.com>
2026-02-03 04:51:15 +00:00
Steve Horsman
6bb77a2f13 Merge pull request #12390 from mythi/tdx-updates-2026-2
runtime: tdx QEMU configuration changes
2026-02-02 16:58:44 +00:00
Zvonko Kaiser
6702b48858 Merge pull request #12428 from fidencio/topic/nydus-snapshotter-start-from-a-clean-state
kata-deploy: nydus: Always start from a clean state
2026-02-02 11:21:26 -05:00
Steve Horsman
0530a3494f Merge pull request #12415 from nlle/make-helm-updatestrategy-configurable
kata-deploy: Make update strategy configurable for kata-deploy DaemonSet
2026-02-02 10:29:01 +00:00
Steve Horsman
93dcaee965 Merge pull request #12423 from manuelh-dev/mahuber/pause-build-fix
packaging: Delete pause_bundle dir before unpack
2026-02-02 10:26:30 +00:00
Fabiano Fidêncio
62ad0814c5 kata-deploy: nydus: Always start from a clean state
Clean up existing nydus-snapshotter state to ensure a fresh start
with the new version.

This is safe across all K8s distributions (k3s, rke2, k0s, microk8s,
etc.) because we only touch the nydus data directory, not containerd's
internals.

When containerd tries to use non-existent snapshots, it will
re-pull/re-unpack.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-02-02 11:06:37 +01:00
Mikko Ylinen
870630c421 kata-deploy: drop custom TDX installation steps
As we have moved to use QEMU (and OVMF already earlier) from
kata-deploy, the custom tdx configurations and distro checks
are no longer needed.

Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
2026-02-02 11:11:26 +02:00
Mikko Ylinen
927be7b8ad runtime: tdx: move to use QEMU from kata-deploy
Currently, a working TDX setup expects users to install special
TDX support builds from Canonical/CentOS virt-sig for TDX to
work. kata-deploy configured the TDX runtime handler to use QEMU
from the distro's paths.

With TDX support now being available in upstream Linux and
Ubuntu 24.04 having an install candidate (linux-image-generic-6.17)
for a new enough kernel, move TDX configuration to use QEMU from
kata-deploy.

While this is the new default, going back to the original
setup is possible by making manual changes to TDX runtime handlers.

Note: runtime-rs is already using QEMUPATH for TDX.

Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
2026-02-02 11:10:52 +02:00
Nikolaj Lindberg Lerche
6e98df2bac kata-deploy: Make update strategy configurable for kata-deploy DaemonSet
Allow the updateStrategy of the kata-deploy Helm chart's DaemonSet
to be configured, enabling administrators to control the
aggressiveness of updates. For a less aggressive approach, the
strategy can be set to `OnDelete`. Alternatively, the update
process can be made more aggressive by adjusting the
`maxUnavailable` parameter.

Signed-off-by: Nikolaj Lindberg Lerche <nlle@ambu.com>
2026-02-01 20:14:29 +01:00
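As an illustration of the commit above, a Helm values override might look like the fragment below; the exact key names are an assumption here, so verify them against the chart's own values.yaml:

```yaml
# Hypothetical values.yaml fragment for the kata-deploy chart:
# a less aggressive rollout replaces pods only when they are deleted.
updateStrategy:
  type: OnDelete

# Alternatively, keep RollingUpdate but tune how many pods may be
# unavailable at once:
# updateStrategy:
#   type: RollingUpdate
#   rollingUpdate:
#     maxUnavailable: 1
```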
Dan Mihai
d7ff54769c tests: policy: remove the need for using sudo
Modify the copy of root user's settings file, instead of modifying the
original file.

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2026-02-01 20:09:50 +01:00
Dan Mihai
4d860dcaf5 tests: policy: avoid redundant debug output
Avoid redundant and confusing teardown_common() debug output for
k8s-policy-pod.bats and k8s-policy-pvc.bats.

The Policy tests skip the Message field when printing information about
their pods, because unfortunately that field might contain a truncated
Policy log - for the test cases that intentionally cause Policy
failures. The non-truncated Policy log is already available from other
"kubectl describe" fields.

So, omit the redundant pod information from teardown_common(), which
also included the confusing Message field.

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2026-02-01 20:09:50 +01:00
dependabot[bot]
dc8d9e056d build(deps): bump zizmorcore/zizmor-action from 0.2.0 to 0.4.1
Bumps [zizmorcore/zizmor-action](https://github.com/zizmorcore/zizmor-action) from 0.2.0 to 0.4.1.
- [Release notes](https://github.com/zizmorcore/zizmor-action/releases)
- [Commits](e673c3917a...135698455d)

---
updated-dependencies:
- dependency-name: zizmorcore/zizmor-action
  dependency-version: 0.4.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-01 15:08:10 +00:00
Manuel Huber
8b0c199f43 packaging: Delete pause_bundle dir before unpack
Delete the pause_bundle directory before running the umoci unpack
operation. This will make builds idempotent and not fail with
errors like "create runtime bundle: config.json already exists in
.../build/pause-image/destdir/pause_bundle". This will make life
better when building locally.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-01-31 19:43:11 +01:00
Steve Horsman
4d1095e653 Merge pull request #12350 from manuelh-dev/mahuber/term-grace-period
tests: Remove terminationGracePeriod in manifests
2026-01-29 15:17:17 +00:00
Manuel Huber
6438fe7f2d tests: Remove terminationGracePeriod in manifests
Do not kill containers immediately, instead use Kubernetes'
default termination grace period.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-01-23 16:18:44 -08:00
143 changed files with 1244 additions and 844 deletions


@@ -26,8 +26,6 @@ jobs:
matrix:
containerd_version: ['active']
vmm: ['dragonball', 'cloud-hypervisor', 'qemu-runtime-rs']
# TODO: enable me when https://github.com/containerd/containerd/issues/11640 is fixed
if: false
runs-on: ubuntu-22.04
env:
CONTAINERD_VERSION: ${{ matrix.containerd_version }}


@@ -26,8 +26,6 @@ jobs:
matrix:
containerd_version: ['active']
vmm: ['qemu-runtime-rs']
# TODO: enable me when https://github.com/containerd/containerd/issues/11640 is fixed
if: false
runs-on: s390x-large
env:
CONTAINERD_VERSION: ${{ matrix.containerd_version }}
@@ -48,7 +46,7 @@ jobs:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Install dependencies
run: bash tests/integration/cri-containerd/gha-run.sh
run: bash tests/integration/cri-containerd/gha-run.sh install-dependencies
env:
GH_TOKEN: ${{ github.token }}


@@ -21,7 +21,7 @@ jobs:
persist-credentials: false
- name: Run zizmor
uses: zizmorcore/zizmor-action@e673c3917a1aef3c65c972347ed84ccd013ecda4 # v0.2.0
uses: zizmorcore/zizmor-action@135698455da5c3b3e55f73f4419e481ab68cdd95 # v0.4.1
with:
advanced-security: false
annotations: true


@@ -149,6 +149,9 @@ pub const KATA_ANNO_CFG_HYPERVISOR_KERNEL_HASH: &str =
/// A sandbox annotation for passing additional guest kernel parameters.
pub const KATA_ANNO_CFG_HYPERVISOR_KERNEL_PARAMS: &str =
"io.katacontainers.config.hypervisor.kernel_params";
/// A sandbox annotation for passing guest dm-verity parameters.
pub const KATA_ANNO_CFG_HYPERVISOR_KERNEL_VERITY_PARAMS: &str =
"io.katacontainers.config.hypervisor.kernel_verity_params";
/// A sandbox annotation for passing a container guest image path.
pub const KATA_ANNO_CFG_HYPERVISOR_IMAGE_PATH: &str = "io.katacontainers.config.hypervisor.image";
/// A sandbox annotation for passing a container guest image SHA-512 hash value.
@@ -630,6 +633,9 @@ impl Annotation {
KATA_ANNO_CFG_HYPERVISOR_KERNEL_PARAMS => {
hv.boot_info.replace_kernel_params(value);
}
KATA_ANNO_CFG_HYPERVISOR_KERNEL_VERITY_PARAMS => {
hv.boot_info.replace_kernel_verity_params(value)?;
}
KATA_ANNO_CFG_HYPERVISOR_IMAGE_PATH => {
hv.boot_info.validate_boot_path(value)?;
hv.boot_info.image = value.to_string();


@@ -76,6 +76,134 @@ const VIRTIO_FS_INLINE: &str = "inline-virtio-fs";
const MAX_BRIDGE_SIZE: u32 = 5;
const KERNEL_PARAM_DELIMITER: &str = " ";
/// Block size (in bytes) used by dm-verity block size validation.
pub const VERITY_BLOCK_SIZE_BYTES: u64 = 512;
/// Parsed kernel dm-verity parameters.
#[derive(Clone, Debug, Default, Deserialize, Serialize)]
pub struct KernelVerityParams {
/// Root hash value.
pub root_hash: String,
/// Salt used to generate verity hash tree.
pub salt: String,
/// Number of data blocks in the verity mapping.
pub data_blocks: u64,
/// Data block size in bytes.
pub data_block_size: u64,
/// Hash block size in bytes.
pub hash_block_size: u64,
}
/// Parse and validate kernel dm-verity parameters.
pub fn parse_kernel_verity_params(params: &str) -> Result<Option<KernelVerityParams>> {
if params.trim().is_empty() {
return Ok(None);
}
let mut values = HashMap::new();
for field in params.split(',') {
let field = field.trim();
if field.is_empty() {
continue;
}
let mut parts = field.splitn(2, '=');
let key = parts.next().unwrap_or("");
let value = parts.next().ok_or_else(|| {
io::Error::new(
io::ErrorKind::InvalidData,
format!("Invalid kernel_verity_params entry: {field}"),
)
})?;
if key.is_empty() {
return Err(io::Error::new(
io::ErrorKind::InvalidData,
format!("Invalid kernel_verity_params entry: {field}"),
));
}
values.insert(key.to_string(), value.to_string());
}
let root_hash = values
.get("root_hash")
.ok_or_else(|| {
io::Error::new(
io::ErrorKind::InvalidData,
"Missing kernel_verity_params root_hash",
)
})?
.to_string();
let salt = values.get("salt").cloned().unwrap_or_default();
let parse_uint_field = |name: &str| -> Result<u64> {
match values.get(name) {
Some(value) if !value.is_empty() => value.parse::<u64>().map_err(|e| {
io::Error::new(
io::ErrorKind::InvalidData,
format!("Invalid kernel_verity_params {} '{}': {}", name, value, e),
)
}),
_ => Err(io::Error::new(
io::ErrorKind::InvalidData,
format!("Missing kernel_verity_params {name}"),
)),
}
};
let data_blocks = parse_uint_field("data_blocks")?;
let data_block_size = parse_uint_field("data_block_size")?;
let hash_block_size = parse_uint_field("hash_block_size")?;
if salt.is_empty() {
return Err(io::Error::new(
io::ErrorKind::InvalidData,
"Missing kernel_verity_params salt",
));
}
if data_blocks == 0 {
return Err(io::Error::new(
io::ErrorKind::InvalidData,
"Invalid kernel_verity_params data_blocks: must be non-zero",
));
}
if data_block_size == 0 {
return Err(io::Error::new(
io::ErrorKind::InvalidData,
"Invalid kernel_verity_params data_block_size: must be non-zero",
));
}
if hash_block_size == 0 {
return Err(io::Error::new(
io::ErrorKind::InvalidData,
"Invalid kernel_verity_params hash_block_size: must be non-zero",
));
}
if data_block_size % VERITY_BLOCK_SIZE_BYTES != 0 {
return Err(io::Error::new(
io::ErrorKind::InvalidData,
format!(
"Invalid kernel_verity_params data_block_size: must be multiple of {}",
VERITY_BLOCK_SIZE_BYTES
),
));
}
if hash_block_size % VERITY_BLOCK_SIZE_BYTES != 0 {
return Err(io::Error::new(
io::ErrorKind::InvalidData,
format!(
"Invalid kernel_verity_params hash_block_size: must be multiple of {}",
VERITY_BLOCK_SIZE_BYTES
),
));
}
Ok(Some(KernelVerityParams {
root_hash,
salt,
data_blocks,
data_block_size,
hash_block_size,
}))
}
lazy_static! {
static ref HYPERVISOR_PLUGINS: Mutex<HashMap<String, Arc<dyn ConfigPlugin>>> =
@@ -294,6 +422,10 @@ pub struct BootInfo {
#[serde(default)]
pub kernel_params: String,
/// Guest kernel dm-verity parameters.
#[serde(default)]
pub kernel_verity_params: String,
/// Path to initrd file on host.
#[serde(default)]
pub initrd: String,
@@ -441,6 +573,17 @@ impl BootInfo {
self.kernel_params = all_params.join(KERNEL_PARAM_DELIMITER);
}
/// Replace kernel dm-verity parameters after validation.
pub fn replace_kernel_verity_params(&mut self, new_params: &str) -> Result<()> {
if new_params.trim().is_empty() {
return Ok(());
}
parse_kernel_verity_params(new_params)?;
self.kernel_verity_params = new_params.to_string();
Ok(())
}
/// Validate guest kernel image annotation.
pub fn validate_boot_path(&self, path: &str) -> Result<()> {
validate_path!(path, "path {} is invalid{}")?;


@@ -148,8 +148,7 @@ ifneq (,$(QEMUCMD))
endif
endif
ROOTMEASURECONFIG ?= ""
KERNELTDXPARAMS += $(ROOTMEASURECONFIG)
KERNELVERITYPARAMS ?= ""
# TDX
DEFSHAREDFS_QEMU_TDX_VIRTIOFS := none
@@ -176,8 +175,8 @@ DEFMEMSLOTS := 10
DEFMAXMEMSZ := 0
##VAR DEFBRIDGES=<number> Default number of bridges
DEFBRIDGES := 1
DEFENABLEANNOTATIONS := [\"enable_iommu\", \"virtio_fs_extra_args\", \"kernel_params\", \"default_vcpus\", \"default_memory\"]
DEFENABLEANNOTATIONS_COCO := [\"enable_iommu\", \"virtio_fs_extra_args\", \"kernel_params\", \"default_vcpus\", \"default_memory\", \"cc_init_data\"]
DEFENABLEANNOTATIONS := [\"enable_iommu\", \"virtio_fs_extra_args\", \"kernel_params\", \"kernel_verity_params\", \"default_vcpus\", \"default_memory\"]
DEFENABLEANNOTATIONS_COCO := [\"enable_iommu\", \"virtio_fs_extra_args\", \"kernel_params\", \"kernel_verity_params\", \"default_vcpus\", \"default_memory\", \"cc_init_data\"]
DEFDISABLEGUESTSECCOMP := true
DEFDISABLEGUESTEMPTYDIR := false
##VAR DEFAULTEXPFEATURES=[features] Default experimental features enabled
@@ -527,6 +526,7 @@ USER_VARS += MACHINEACCELERATORS
USER_VARS += CPUFEATURES
USER_VARS += DEFMACHINETYPE_CLH
USER_VARS += KERNELPARAMS
USER_VARS += KERNELVERITYPARAMS
USER_VARS += KERNELPARAMS_DB
USER_VARS += KERNELPARAMS_FC
USER_VARS += LIBEXECDIR


@@ -19,7 +19,7 @@ image = "@IMAGEPATH@"
# - xfs
# - erofs
rootfs_type = @DEFROOTFSTYPE@
# Block storage driver to be used for the VM rootfs is backed
# by a block device.
vm_rootfs_driver = "@VMROOTFSDRIVER_CLH@"
@@ -41,7 +41,7 @@ valid_hypervisor_paths = @CLHVALIDHYPERVISORPATHS@
# List of valid annotations values for ctlpath
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends:
# Your distribution recommends:
valid_ctlpaths = []
# Optional space-separated list of options to pass to the guest kernel.


@@ -23,7 +23,7 @@ image = "@IMAGEPATH@"
# - erofs
rootfs_type = @DEFROOTFSTYPE@
# Block storage driver to be used for the VM rootfs is backed
# by a block device. This is virtio-blk-pci, virtio-blk-mmio or nvdimm
vm_rootfs_driver = "@VMROOTFSDRIVER_DB@"
@@ -41,7 +41,7 @@ valid_hypervisor_paths = @DBVALIDHYPERVISORPATHS@
# List of valid annotations values for ctlpath
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends:
# Your distribution recommends:
valid_ctlpaths = []
# Optional space-separated list of options to pass to the guest kernel.


@@ -373,16 +373,16 @@ disable_image_nvdimm = false
# Default false
hotplug_vfio_on_root_bus = false
# Enable hot-plugging of VFIO devices to a bridge-port,
# root-port or switch-port.
# Enable hot-plugging of VFIO devices to a bridge-port,
# root-port or switch-port.
# The default setting is "no-port"
hot_plug_vfio = "no-port"
# In a confidential compute environment hot-plugging can compromise
# security.
# Enable cold-plugging of VFIO devices to a bridge-port,
# root-port or switch-port.
# The default setting is "no-port", which means disabled.
# security.
# Enable cold-plugging of VFIO devices to a bridge-port,
# root-port or switch-port.
# The default setting is "no-port", which means disabled.
cold_plug_vfio = "no-port"
# Before hot plugging a PCIe device, you need to add a pcie_root_port device.


@@ -767,4 +767,4 @@ dan_conf = "@DEFDANCONF@"
# to non-k8s cases)
# cold_plug_vfio != no_port AND pod_resource_api_sock != "" => kubelet
# based cold plug.
pod_resource_api_sock = "@DEFPODRESOURCEAPISOCK@"
pod_resource_api_sock = "@DEFPODRESOURCEAPISOCK@"


@@ -39,7 +39,7 @@ vm_rootfs_driver = "virtio-blk-pci"
#
# Known limitations:
# * Does not work by design:
# - CPU Hotplug
# - CPU Hotplug
# - Memory Hotplug
# - NVDIMM devices
#
@@ -74,6 +74,11 @@ valid_hypervisor_paths = @QEMUVALIDHYPERVISORPATHS@
# container and look for 'default-kernel-parameters' log entries.
kernel_params = "@KERNELTDXPARAMS@"
# Optional dm-verity parameters (comma-separated key=value list):
# root_hash=...,salt=...,data_blocks=...,data_block_size=...,hash_block_size=...
# These are used by the runtime to assemble dm-verity kernel params.
kernel_verity_params = "@KERNELVERITYPARAMS@"
# Path to the firmware.
# If you want that qemu uses the default firmware leave this option empty
firmware = "@FIRMWARETDXPATH@"


@@ -304,7 +304,7 @@ debug_console_enabled = false
# Agent connection dialing timeout value in seconds
# (default: 45)
dial_timeout = 45
dial_timeout = 45
# Confidential Data Hub API timeout value in seconds
# (default: 50)


@@ -151,7 +151,11 @@ impl CloudHypervisorInner {
#[cfg(target_arch = "aarch64")]
let console_param_debug = KernelParams::from_string("console=ttyAMA0,115200n8");
let mut rootfs_param = KernelParams::new_rootfs_kernel_params(rootfs_driver, rootfs_type)?;
let mut rootfs_params = KernelParams::new_rootfs_kernel_params(
&cfg.boot_info.kernel_verity_params,
rootfs_driver,
rootfs_type,
)?;
let mut console_params = if enable_debug {
if confidential_guest {
@@ -165,8 +169,7 @@ impl CloudHypervisorInner {
params.append(&mut console_params);
// Add the rootfs device
params.append(&mut rootfs_param);
params.append(&mut rootfs_params);
// Now add some additional options required for CH
let extra_options = [


@@ -144,13 +144,14 @@ impl DragonballInner {
let mut kernel_params = KernelParams::new(self.config.debug_info.enable_debug);
if self.config.boot_info.initrd.is_empty() {
// get rootfs driver
// When booting from the image, add rootfs and verity parameters here.
let rootfs_driver = self.config.blockdev_info.block_device_driver.clone();
kernel_params.append(&mut KernelParams::new_rootfs_kernel_params(
let mut rootfs_params = KernelParams::new_rootfs_kernel_params(
&self.config.boot_info.kernel_verity_params,
&rootfs_driver,
&self.config.boot_info.rootfs_type,
)?);
)?;
kernel_params.append(&mut rootfs_params);
}
kernel_params.append(&mut KernelParams::from_string(


@@ -86,12 +86,12 @@ impl FcInner {
let mut kernel_params = KernelParams::new(self.config.debug_info.enable_debug);
kernel_params.push(Param::new("pci", "off"));
kernel_params.push(Param::new("iommu", "off"));
let rootfs_driver = self.config.blockdev_info.block_device_driver.clone();
kernel_params.append(&mut KernelParams::new_rootfs_kernel_params(
&rootfs_driver,
let mut rootfs_params = KernelParams::new_rootfs_kernel_params(
&self.config.boot_info.kernel_verity_params,
&self.config.blockdev_info.block_device_driver,
&self.config.boot_info.rootfs_type,
)?);
)?;
kernel_params.append(&mut rootfs_params);
kernel_params.append(&mut KernelParams::from_string(
&self.config.boot_info.kernel_params,
));


@@ -11,6 +11,7 @@ use crate::{
VM_ROOTFS_ROOT_BLK, VM_ROOTFS_ROOT_PMEM,
};
use kata_types::config::LOG_VPORT_OPTION;
use kata_types::config::hypervisor::{parse_kernel_verity_params, VERITY_BLOCK_SIZE_BYTES};
use kata_types::fs::{
VM_ROOTFS_FILESYSTEM_EROFS, VM_ROOTFS_FILESYSTEM_EXT4, VM_ROOTFS_FILESYSTEM_XFS,
};
@@ -20,7 +21,76 @@ use kata_types::fs::{
const VSOCK_LOGS_PORT: &str = "1025";
const KERNEL_KV_DELIMITER: &str = "=";
const KERNEL_PARAM_DELIMITER: &str = " ";
const KERNEL_PARAM_DELIMITER: char = ' ';
// Split kernel params on spaces, but keep quoted substrings intact.
// Example: dm-mod.create="dm-verity,,,ro,0 736328 verity 1 /dev/vda1 /dev/vda2 ...".
fn split_kernel_params(params_string: &str) -> Vec<String> {
let mut params = Vec::new();
let mut current = String::new();
let mut in_quote = false;
for c in params_string.chars() {
if c == '"' {
in_quote = !in_quote;
current.push(c);
continue;
}
if c == KERNEL_PARAM_DELIMITER && !in_quote {
let trimmed = current.trim();
if !trimmed.is_empty() {
params.push(trimmed.to_string());
}
current.clear();
continue;
}
current.push(c);
}
let trimmed = current.trim();
if !trimmed.is_empty() {
params.push(trimmed.to_string());
}
params
}
struct KernelVerityConfig {
root_hash: String,
salt: String,
data_blocks: u64,
data_block_size: u64,
hash_block_size: u64,
}
fn new_kernel_verity_params(params_string: &str) -> Result<Option<KernelVerityConfig>> {
let cfg = parse_kernel_verity_params(params_string)
.map_err(|err| anyhow!(err.to_string()))?;
Ok(cfg.map(|params| KernelVerityConfig {
root_hash: params.root_hash,
salt: params.salt,
data_blocks: params.data_blocks,
data_block_size: params.data_block_size,
hash_block_size: params.hash_block_size,
}))
}
fn kernel_verity_root_flags(rootfs_type: &str) -> Result<String> {
let normalized = if rootfs_type.is_empty() {
VM_ROOTFS_FILESYSTEM_EXT4
} else {
rootfs_type
};
match normalized {
VM_ROOTFS_FILESYSTEM_EXT4 => Ok("data=ordered,errors=remount-ro ro".to_string()),
VM_ROOTFS_FILESYSTEM_XFS | VM_ROOTFS_FILESYSTEM_EROFS => Ok("ro".to_string()),
_ => Err(anyhow!("Unsupported rootfs type {}", rootfs_type)),
}
}
#[derive(Debug, Clone, PartialEq)]
pub struct Param {
@@ -71,7 +141,11 @@ impl KernelParams {
Self { params }
}
pub(crate) fn new_rootfs_kernel_params(rootfs_driver: &str, rootfs_type: &str) -> Result<Self> {
pub(crate) fn new_rootfs_kernel_params(
kernel_verity_params: &str,
rootfs_driver: &str,
rootfs_type: &str,
) -> Result<Self> {
let mut params = vec![];
match rootfs_driver {
@@ -119,7 +193,52 @@ impl KernelParams {
params.push(Param::new("rootfstype", rootfs_type));
Ok(Self { params })
let mut params = Self { params };
let cfg = match new_kernel_verity_params(kernel_verity_params)? {
Some(cfg) => cfg,
None => return Ok(params),
};
let (root_device, hash_device) = match rootfs_driver {
VM_ROOTFS_DRIVER_PMEM => ("/dev/pmem0p1", "/dev/pmem0p2"),
VM_ROOTFS_DRIVER_BLK | VM_ROOTFS_DRIVER_BLK_CCW | VM_ROOTFS_DRIVER_MMIO => {
("/dev/vda1", "/dev/vda2")
}
_ => return Err(anyhow!("Unsupported rootfs driver {}", rootfs_driver)),
};
let data_sectors = (cfg.data_block_size / VERITY_BLOCK_SIZE_BYTES) * cfg.data_blocks;
let root_flags = kernel_verity_root_flags(rootfs_type)?;
let dm_cmd = format!(
"dm-verity,,,ro,0 {} verity 1 {} {} {} {} {} 0 sha256 {} {}",
data_sectors,
root_device,
hash_device,
cfg.data_block_size,
cfg.hash_block_size,
cfg.data_blocks,
cfg.root_hash,
cfg.salt
);
params.remove_all_by_key("root".to_string());
params.remove_all_by_key("rootflags".to_string());
params.remove_all_by_key("rootfstype".to_string());
params.push(Param {
key: "dm-mod.create".to_string(),
value: format!("\"{}\"", dm_cmd),
});
params.push(Param::new("root", "/dev/dm-0"));
params.push(Param::new("rootflags", &root_flags));
if rootfs_type.is_empty() {
params.push(Param::new("rootfstype", VM_ROOTFS_FILESYSTEM_EXT4));
} else {
params.push(Param::new("rootfstype", rootfs_type));
}
Ok(params)
}
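The sector arithmetic above can be checked by hand, assuming `VERITY_BLOCK_SIZE_BYTES` (imported from `kata_types` and not shown in this diff) is the 512-byte Linux sector size:

```rust
// Standalone check of the data_sectors computation, assuming the constant
// is the 512-byte Linux sector size (an assumption; the real value comes
// from kata_types::config::hypervisor).
const VERITY_BLOCK_SIZE_BYTES: u64 = 512;

fn data_sectors(data_blocks: u64, data_block_size: u64) -> u64 {
    (data_block_size / VERITY_BLOCK_SIZE_BYTES) * data_blocks
}

fn main() {
    // 92041 data blocks of 4096 bytes -> 8 sectors per block -> 736328
    // sectors, matching the "0 736328 verity" example comment above.
    assert_eq!(data_sectors(92041, 4096), 736328);
    println!("ok");
}
```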
pub(crate) fn append(&mut self, params: &mut KernelParams) {
@@ -138,7 +257,7 @@ impl KernelParams {
pub(crate) fn from_string(params_string: &str) -> Self {
let mut params = vec![];
let parameters_vec: Vec<&str> = params_string.split(KERNEL_PARAM_DELIMITER).collect();
let parameters_vec = split_kernel_params(params_string);
for param in parameters_vec.iter() {
if param.is_empty() {
@@ -170,7 +289,7 @@ impl KernelParams {
parameters.push(param.to_string()?);
}
Ok(parameters.join(KERNEL_PARAM_DELIMITER))
Ok(parameters.join(&KERNEL_PARAM_DELIMITER.to_string()))
}
}
@@ -347,7 +466,8 @@ mod tests {
for (i, t) in tests.iter().enumerate() {
let msg = format!("test[{i}]: {t:?}");
let result = KernelParams::new_rootfs_kernel_params(t.rootfs_driver, t.rootfs_type);
let result =
KernelParams::new_rootfs_kernel_params("", t.rootfs_driver, t.rootfs_type);
let msg = format!("{msg}, result: {result:?}");
if t.result.is_ok() {
assert!(result.is_ok(), "{}", msg);
@@ -359,4 +479,55 @@ mod tests {
}
}
}
#[test]
fn test_kernel_verity_params() -> Result<()> {
let params = KernelParams::new_rootfs_kernel_params(
"root_hash=abc,salt=def,data_blocks=1,data_block_size=4096,hash_block_size=4096",
VM_ROOTFS_DRIVER_BLK,
VM_ROOTFS_FILESYSTEM_EXT4,
)?;
let params_string = params.to_string()?;
assert!(params_string.contains("dm-mod.create="));
assert!(params_string.contains("root=/dev/dm-0"));
assert!(params_string.contains("rootfstype=ext4"));
let err = KernelParams::new_rootfs_kernel_params(
"root_hash=abc,data_blocks=1,data_block_size=4096,hash_block_size=4096",
VM_ROOTFS_DRIVER_BLK,
VM_ROOTFS_FILESYSTEM_EXT4,
)
.err()
.expect("expected missing salt error");
assert!(format!("{err}").contains("Missing kernel_verity_params salt"));
let err = KernelParams::new_rootfs_kernel_params(
"root_hash=abc,salt=def,data_block_size=4096,hash_block_size=4096",
VM_ROOTFS_DRIVER_BLK,
VM_ROOTFS_FILESYSTEM_EXT4,
)
.err()
.expect("expected missing data_blocks error");
assert!(format!("{err}").contains("Missing kernel_verity_params data_blocks"));
let err = KernelParams::new_rootfs_kernel_params(
"root_hash=abc,salt=def,data_blocks=foo,data_block_size=4096,hash_block_size=4096",
VM_ROOTFS_DRIVER_BLK,
VM_ROOTFS_FILESYSTEM_EXT4,
)
.err()
.expect("expected invalid data_blocks error");
assert!(format!("{err}").contains("Invalid kernel_verity_params data_blocks"));
let err = KernelParams::new_rootfs_kernel_params(
"root_hash=abc,salt=def,data_blocks=1,data_block_size=4096,hash_block_size=4096,badfield",
VM_ROOTFS_DRIVER_BLK,
VM_ROOTFS_FILESYSTEM_EXT4,
)
.err()
.expect("expected invalid entry error");
assert!(format!("{err}").contains("Invalid kernel_verity_params entry"));
Ok(())
}
}

@@ -179,16 +179,13 @@ impl Kernel {
let mut kernel_params = KernelParams::new(config.debug_info.enable_debug);
if config.boot_info.initrd.is_empty() {
// QemuConfig::validate() has already made sure that if initrd is
// empty, image cannot be so we don't need to re-check that here
kernel_params.append(
&mut KernelParams::new_rootfs_kernel_params(
&config.boot_info.vm_rootfs_driver,
&config.boot_info.rootfs_type,
)
.context("adding rootfs params failed")?,
);
let mut rootfs_params = KernelParams::new_rootfs_kernel_params(
&config.boot_info.kernel_verity_params,
&config.boot_info.vm_rootfs_driver,
&config.boot_info.rootfs_type,
)
.context("adding rootfs/verity params failed")?;
kernel_params.append(&mut rootfs_params);
}
kernel_params.append(&mut KernelParams::from_string(

@@ -152,9 +152,9 @@ FIRMWARETDVFVOLUMEPATH :=
FIRMWARESNPPATH := $(PREFIXDEPS)/share/ovmf/AMDSEV.fd
ROOTMEASURECONFIG ?= ""
KERNELTDXPARAMS += $(ROOTMEASURECONFIG)
KERNELQEMUCOCODEVPARAMS += $(ROOTMEASURECONFIG)
KERNELVERITYPARAMS ?= ""
KERNELVERITYPARAMS_NV ?= ""
KERNELVERITYPARAMS_CONFIDENTIAL_NV ?= ""
# Name of default configuration file the runtime will use.
CONFIG_FILE = configuration.toml
@@ -174,10 +174,6 @@ HYPERVISORS := $(HYPERVISOR_FC) $(HYPERVISOR_QEMU) $(HYPERVISOR_CLH) $(HYPERVISO
QEMUPATH := $(QEMUBINDIR)/$(QEMUCMD)
QEMUVALIDHYPERVISORPATHS := [\"$(QEMUPATH)\"]
#QEMUTDXPATH := $(QEMUBINDIR)/$(QEMUTDXCMD)
QEMUTDXPATH := PLACEHOLDER_FOR_DISTRO_QEMU_WITH_TDX_SUPPORT
QEMUTDXVALIDHYPERVISORPATHS := [\"$(QEMUTDXPATH)\"]
QEMUTDXEXPERIMENTALPATH := $(QEMUBINDIR)/$(QEMUTDXEXPERIMENTALCMD)
QEMUTDXEXPERIMENTALVALIDHYPERVISORPATHS := [\"$(QEMUTDXEXPERIMENTALPATH)\"]
@@ -221,8 +217,8 @@ DEFMEMSLOTS := 10
DEFMAXMEMSZ := 0
#Default number of bridges
DEFBRIDGES := 1
DEFENABLEANNOTATIONS := [\"enable_iommu\", \"virtio_fs_extra_args\", \"kernel_params\"]
DEFENABLEANNOTATIONS_COCO := [\"enable_iommu\", \"virtio_fs_extra_args\", \"kernel_params\", \"default_vcpus\", \"default_memory\", \"cc_init_data\"]
DEFENABLEANNOTATIONS := [\"enable_iommu\", \"virtio_fs_extra_args\", \"kernel_params\", \"kernel_verity_params\"]
DEFENABLEANNOTATIONS_COCO := [\"enable_iommu\", \"virtio_fs_extra_args\", \"kernel_params\", \"kernel_verity_params\", \"default_vcpus\", \"default_memory\", \"cc_init_data\"]
DEFDISABLEGUESTSECCOMP := true
DEFDISABLEGUESTEMPTYDIR := false
#Default experimental features enabled
@@ -659,6 +655,8 @@ USER_VARS += DEFAULTMEMORY_NV
USER_VARS += DEFAULTVFIOPORT_NV
USER_VARS += DEFAULTPCIEROOTPORT_NV
USER_VARS += KERNELPARAMS_NV
USER_VARS += KERNELVERITYPARAMS_NV
USER_VARS += KERNELVERITYPARAMS_CONFIDENTIAL_NV
USER_VARS += DEFAULTTIMEOUT_NV
USER_VARS += DEFSANDBOXCGROUPONLY_NV
USER_VARS += DEFROOTFSTYPE
@@ -685,6 +683,7 @@ USER_VARS += TDXCPUFEATURES
USER_VARS += DEFMACHINETYPE_CLH
USER_VARS += DEFMACHINETYPE_STRATOVIRT
USER_VARS += KERNELPARAMS
USER_VARS += KERNELVERITYPARAMS
USER_VARS += KERNELTDXPARAMS
USER_VARS += KERNELQEMUCOCODEVPARAMS
USER_VARS += LIBEXECDIR
@@ -702,18 +701,15 @@ USER_VARS += PROJECT_TYPE
USER_VARS += PROJECT_URL
USER_VARS += QEMUBINDIR
USER_VARS += QEMUCMD
USER_VARS += QEMUTDXCMD
USER_VARS += QEMUTDXEXPERIMENTALCMD
USER_VARS += QEMUCCAEXPERIMENTALCMD
USER_VARS += QEMUSNPCMD
USER_VARS += QEMUPATH
USER_VARS += QEMUTDXPATH
USER_VARS += QEMUTDXEXPERIMENTALPATH
USER_VARS += QEMUTDXQUOTEGENERATIONSERVICESOCKETPORT
USER_VARS += QEMUSNPPATH
USER_VARS += QEMUCCAEXPERIMENTALPATH
USER_VARS += QEMUVALIDHYPERVISORPATHS
USER_VARS += QEMUTDXVALIDHYPERVISORPATHS
USER_VARS += QEMUTDXEXPERIMENTALVALIDHYPERVISORPATHS
USER_VARS += QEMUCCAVALIDHYPERVISORPATHS
USER_VARS += QEMUCCAEXPERIMENTALVALIDHYPERVISORPATHS

@@ -251,9 +251,9 @@ guest_hook_path = ""
# and we strongly advise users to refer to the Cloud Hypervisor official
# documentation for a better understanding of its internals:
# https://github.com/cloud-hypervisor/cloud-hypervisor/blob/main/docs/io_throttling.md
#
#
# Bandwidth rate limiter options
#
#
# net_rate_limiter_bw_max_rate controls network I/O bandwidth (size in bits/sec
# for SB/VM).
# The same value is used for inbound and outbound bandwidth.
@@ -287,9 +287,9 @@ net_rate_limiter_ops_one_time_burst = 0
# and we strongly advise users to refer to the Cloud Hypervisor official
# documentation for a better understanding of its internals:
# https://github.com/cloud-hypervisor/cloud-hypervisor/blob/main/docs/io_throttling.md
#
#
# Bandwidth rate limiter options
#
#
# disk_rate_limiter_bw_max_rate controls disk I/O bandwidth (size in bits/sec
# for SB/VM).
# The same value is used for inbound and outbound bandwidth.
@@ -476,9 +476,9 @@ enable_pprof = false
# Indicates the CreateContainer request timeout needed for the workload(s)
# If using guest_pull, this includes the time to pull the image inside the guest
# Defaults to @DEFCREATECONTAINERTIMEOUT@ second(s)
# Note: The effective timeout is determined by the lesser of two values: runtime-request-timeout from kubelet config
# (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout.
# Defaults to @DEFCREATECONTAINERTIMEOUT@ second(s)
# Note: The effective timeout is determined by the lesser of two values: runtime-request-timeout from kubelet config
# (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout.
# In essence, the timeout used for guest pull=runtime-request-timeout<create_container_timeout?runtime-request-timeout:create_container_timeout.
create_container_timeout = @DEFCREATECONTAINERTIMEOUT@

@@ -367,9 +367,9 @@ enable_pprof = false
# Indicates the CreateContainer request timeout needed for the workload(s)
# If using guest_pull, this includes the time to pull the image inside the guest
# Defaults to @DEFCREATECONTAINERTIMEOUT@ second(s)
# Note: The effective timeout is determined by the lesser of two values: runtime-request-timeout from kubelet config
# (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout.
# Defaults to @DEFCREATECONTAINERTIMEOUT@ second(s)
# Note: The effective timeout is determined by the lesser of two values: runtime-request-timeout from kubelet config
# (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout.
# In essence, the timeout used for guest pull=runtime-request-timeout<create_container_timeout?runtime-request-timeout:create_container_timeout.
create_container_timeout = @DEFCREATECONTAINERTIMEOUT@

@@ -636,9 +636,9 @@ enable_pprof = false
# Indicates the CreateContainer request timeout needed for the workload(s)
# If using guest_pull, this includes the time to pull the image inside the guest
# Defaults to @DEFCREATECONTAINERTIMEOUT@ second(s)
# Note: The effective timeout is determined by the lesser of two values: runtime-request-timeout from kubelet config
# (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout.
# Defaults to @DEFCREATECONTAINERTIMEOUT@ second(s)
# Note: The effective timeout is determined by the lesser of two values: runtime-request-timeout from kubelet config
# (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout.
# In essence, the timeout used for guest pull=runtime-request-timeout<create_container_timeout?runtime-request-timeout:create_container_timeout.
create_container_timeout = @DEFCREATECONTAINERTIMEOUT@

@@ -52,6 +52,11 @@ valid_hypervisor_paths = @QEMUVALIDHYPERVISORPATHS@
# container and look for 'default-kernel-parameters' log entries.
kernel_params = "@KERNELQEMUCOCODEVPARAMS@"
# Optional dm-verity parameters (comma-separated key=value list):
# root_hash=...,salt=...,data_blocks=...,data_block_size=...,hash_block_size=...
# These are used by the runtime to assemble dm-verity kernel params.
kernel_verity_params = "@KERNELVERITYPARAMS@"
# Path to the firmware.
# If you want that qemu uses the default firmware leave this option empty
firmware = "@FIRMWAREPATH@"
@@ -362,17 +367,17 @@ msize_9p = @DEFMSIZE9P@
# nvdimm is not supported when `confidential_guest = true`.
disable_image_nvdimm = @DEFDISABLEIMAGENVDIMM@
# Enable hot-plugging of VFIO devices to a bridge-port,
# root-port or switch-port.
# Enable hot-plugging of VFIO devices to a bridge-port,
# root-port or switch-port.
# The default setting is "no-port"
hot_plug_vfio = "no-port"
hot_plug_vfio = "no-port"
# In a confidential compute environment hot-plugging can compromise
# security.
# Enable cold-plugging of VFIO devices to a bridge-port,
# root-port or switch-port.
# The default setting is "no-port", which means disabled.
cold_plug_vfio = "no-port"
# security.
# Enable cold-plugging of VFIO devices to a bridge-port,
# root-port or switch-port.
# The default setting is "no-port", which means disabled.
cold_plug_vfio = "no-port"
# Before hot plugging a PCIe device, you need to add a pcie_root_port device.
# Use this parameter when using some large PCI bar devices, such as Nvidia GPU
@@ -694,9 +699,9 @@ enable_pprof = false
# Indicates the CreateContainer request timeout needed for the workload(s)
# If using guest_pull, this includes the time to pull the image inside the guest
# Defaults to @DEFCREATECONTAINERTIMEOUT@ second(s)
# Note: The effective timeout is determined by the lesser of two values: runtime-request-timeout from kubelet config
# (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout.
# Defaults to @DEFCREATECONTAINERTIMEOUT@ second(s)
# Note: The effective timeout is determined by the lesser of two values: runtime-request-timeout from kubelet config
# (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout.
# In essence, the timeout used for guest pull=runtime-request-timeout<create_container_timeout?runtime-request-timeout:create_container_timeout.
create_container_timeout = @DEFCREATECONTAINERTIMEOUT@

@@ -15,7 +15,7 @@
[hypervisor.qemu]
path = "@QEMUSNPPATH@"
kernel = "@KERNELPATH_CONFIDENTIAL_NV@"
initrd = "@INITRDPATH_CONFIDENTIAL_NV@"
image = "@IMAGEPATH_CONFIDENTIAL_NV@"
machine_type = "@MACHINETYPE@"
@@ -34,7 +34,7 @@ rootfs_type = @DEFROOTFSTYPE@
#
# Known limitations:
# * Does not work by design:
# - CPU Hotplug
# - CPU Hotplug
# - Memory Hotplug
# - NVDIMM devices
#
@@ -75,7 +75,7 @@ snp_id_auth = ""
# SNP Guest Policy, the POLICY parameter to the SNP_LAUNCH_START command.
# If unset, the QEMU default policy (0x30000) will be used.
# Notice that the guest policy is enforced at VM launch, and your pod VMs
# Notice that the guest policy is enforced at VM launch, and your pod VMs
# won't start at all if the policy denies it. This will be indicated by a
# 'SNP_LAUNCH_START' error.
snp_guest_policy = 196608
@@ -92,6 +92,11 @@ snp_guest_policy = 196608
# container and look for 'default-kernel-parameters' log entries.
kernel_params = "@KERNELPARAMS_NV@"
# Optional dm-verity parameters (comma-separated key=value list):
# root_hash=...,salt=...,data_blocks=...,data_block_size=...,hash_block_size=...
# These are used by the runtime to assemble dm-verity kernel params.
kernel_verity_params = "@KERNELVERITYPARAMS_CONFIDENTIAL_NV@"
# Path to the firmware.
# If you want qemu to use the default firmware, leave this option empty
firmware = "@FIRMWARESNPPATH@"
@@ -394,10 +399,10 @@ disable_image_nvdimm = @DEFDISABLEIMAGENVDIMM_NV@
pcie_root_port = 0
# In a confidential compute environment hot-plugging can compromise
# security.
# Enable cold-plugging of VFIO devices to a bridge-port,
# root-port or switch-port.
# The default setting is "no-port", which means disabled.
# security.
# Enable cold-plugging of VFIO devices to a bridge-port,
# root-port or switch-port.
# The default setting is "no-port", which means disabled.
cold_plug_vfio = "@DEFAULTVFIOPORT_NV@"
# If vhost-net backend for virtio-net is not desired, set to true. Default is false, which trades off
@@ -710,9 +715,9 @@ enable_pprof = false
# Indicates the CreateContainer request timeout needed for the workload(s)
# If using guest_pull, this includes the time to pull the image inside the guest
# Defaults to @DEFCREATECONTAINERTIMEOUT@ second(s)
# Note: The effective timeout is determined by the lesser of two values: runtime-request-timeout from kubelet config
# (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout.
# Defaults to @DEFCREATECONTAINERTIMEOUT@ second(s)
# Note: The effective timeout is determined by the lesser of two values: runtime-request-timeout from kubelet config
# (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout.
# In essence, the timeout used for guest pull=runtime-request-timeout<create_container_timeout?runtime-request-timeout:create_container_timeout.
create_container_timeout = @DEFAULTTIMEOUT_NV@

@@ -14,7 +14,7 @@
[hypervisor.qemu]
path = "@QEMUTDXEXPERIMENTALPATH@"
kernel = "@KERNELPATH_CONFIDENTIAL_NV@"
initrd = "@INITRDPATH_CONFIDENTIAL_NV@"
image = "@IMAGEPATH_CONFIDENTIAL_NV@"
machine_type = "@MACHINETYPE@"
tdx_quote_generation_service_socket_port = @QEMUTDXQUOTEGENERATIONSERVICESOCKETPORT@
@@ -34,7 +34,7 @@ rootfs_type = @DEFROOTFSTYPE@
#
# Known limitations:
# * Does not work by design:
# - CPU Hotplug
# - CPU Hotplug
# - Memory Hotplug
# - NVDIMM devices
#
@@ -69,6 +69,11 @@ valid_hypervisor_paths = @QEMUTDXEXPERIMENTALVALIDHYPERVISORPATHS@
# container and look for 'default-kernel-parameters' log entries.
kernel_params = "@KERNELPARAMS_NV@"
# Optional dm-verity parameters (comma-separated key=value list):
# root_hash=...,salt=...,data_blocks=...,data_block_size=...,hash_block_size=...
# These are used by the runtime to assemble dm-verity kernel params.
kernel_verity_params = "@KERNELVERITYPARAMS_CONFIDENTIAL_NV@"
# Path to the firmware.
# If you want qemu to use the default firmware, leave this option empty
firmware = "@FIRMWARETDVFPATH@"
@@ -371,10 +376,10 @@ disable_image_nvdimm = @DEFDISABLEIMAGENVDIMM_NV@
pcie_root_port = 0
# In a confidential compute environment hot-plugging can compromise
# security.
# Enable cold-plugging of VFIO devices to a bridge-port,
# root-port or switch-port.
# The default setting is "no-port", which means disabled.
# security.
# Enable cold-plugging of VFIO devices to a bridge-port,
# root-port or switch-port.
# The default setting is "no-port", which means disabled.
cold_plug_vfio = "@DEFAULTVFIOPORT_NV@"
# If vhost-net backend for virtio-net is not desired, set to true. Default is false, which trades off
@@ -687,9 +692,9 @@ enable_pprof = false
# Indicates the CreateContainer request timeout needed for the workload(s)
# If using guest_pull, this includes the time to pull the image inside the guest
# Defaults to @DEFCREATECONTAINERTIMEOUT@ second(s)
# Note: The effective timeout is determined by the lesser of two values: runtime-request-timeout from kubelet config
# (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout.
# Defaults to @DEFCREATECONTAINERTIMEOUT@ second(s)
# Note: The effective timeout is determined by the lesser of two values: runtime-request-timeout from kubelet config
# (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout.
# In essence, the timeout used for guest pull=runtime-request-timeout<create_container_timeout?runtime-request-timeout:create_container_timeout.
create_container_timeout = @DEFAULTTIMEOUT_NV@

@@ -14,7 +14,7 @@
[hypervisor.qemu]
path = "@QEMUPATH@"
kernel = "@KERNELPATH_NV@"
initrd = "@INITRDPATH_NV@"
image = "@IMAGEPATH_NV@"
machine_type = "@MACHINETYPE@"
# rootfs filesystem type:
@@ -51,6 +51,11 @@ valid_hypervisor_paths = @QEMUVALIDHYPERVISORPATHS@
# container and look for 'default-kernel-parameters' log entries.
kernel_params = "@KERNELPARAMS_NV@"
# Optional dm-verity parameters (comma-separated key=value list):
# root_hash=...,salt=...,data_blocks=...,data_block_size=...,hash_block_size=...
# These are used by the runtime to assemble dm-verity kernel params.
kernel_verity_params = "@KERNELVERITYPARAMS_NV@"
# Path to the firmware.
# If you want qemu to use the default firmware, leave this option empty
firmware = "@FIRMWAREPATH@"
@@ -361,16 +366,16 @@ msize_9p = @DEFMSIZE9P@
# nvdimm is not supported when `confidential_guest = true`.
disable_image_nvdimm = @DEFDISABLEIMAGENVDIMM_NV@
# Enable hot-plugging of VFIO devices to a bridge-port,
# root-port or switch-port.
# Enable hot-plugging of VFIO devices to a bridge-port,
# root-port or switch-port.
# The default setting is "no-port"
hot_plug_vfio = "no-port"
# In a confidential compute environment hot-plugging can compromise
# security.
# Enable cold-plugging of VFIO devices to a bridge-port,
# root-port or switch-port.
# The default setting is "no-port", which means disabled.
# security.
# Enable cold-plugging of VFIO devices to a bridge-port,
# root-port or switch-port.
# The default setting is "no-port", which means disabled.
cold_plug_vfio = "@DEFAULTVFIOPORT_NV@"
# Before hot plugging a PCIe device, you need to add a pcie_root_port device.
@@ -689,9 +694,9 @@ enable_pprof = false
# Indicates the CreateContainer request timeout needed for the workload(s)
# If using guest_pull, this includes the time to pull the image inside the guest
# Defaults to @DEFCREATECONTAINERTIMEOUT@ second(s)
# Note: The effective timeout is determined by the lesser of two values: runtime-request-timeout from kubelet config
# (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout.
# Defaults to @DEFCREATECONTAINERTIMEOUT@ second(s)
# Note: The effective timeout is determined by the lesser of two values: runtime-request-timeout from kubelet config
# (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout.
# In essence, the timeout used for guest pull=runtime-request-timeout<create_container_timeout?runtime-request-timeout:create_container_timeout.
create_container_timeout = @DEFAULTTIMEOUT_NV@

@@ -25,7 +25,7 @@ machine_type = "@MACHINETYPE@"
#
# Known limitations:
# * Does not work by design:
# - CPU Hotplug
# - CPU Hotplug
# - Memory Hotplug
# - NVDIMM devices
#
@@ -349,7 +349,7 @@ msize_9p = @DEFMSIZE9P@
# nvdimm is not supported when `confidential_guest = true`.
disable_image_nvdimm = @DEFDISABLEIMAGENVDIMM@
# Enable hot-plugging of VFIO devices to a bridge-port,
# Enable hot-plugging of VFIO devices to a bridge-port,
# root-port or switch-port.
# The default setting is "no-port"
hot_plug_vfio = "no-port"
@@ -677,9 +677,9 @@ enable_pprof = false
# Indicates the CreateContainer request timeout needed for the workload(s)
# If using guest_pull, this includes the time to pull the image inside the guest
# Defaults to @DEFCREATECONTAINERTIMEOUT@ second(s)
# Note: The effective timeout is determined by the lesser of two values: runtime-request-timeout from kubelet config
# (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout.
# Defaults to @DEFCREATECONTAINERTIMEOUT@ second(s)
# Note: The effective timeout is determined by the lesser of two values: runtime-request-timeout from kubelet config
# (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout.
# In essence, the timeout used for guest pull=runtime-request-timeout<create_container_timeout?runtime-request-timeout:create_container_timeout.
create_container_timeout = @DEFCREATECONTAINERTIMEOUT@

@@ -33,7 +33,7 @@ rootfs_type = @DEFROOTFSTYPE@
#
# Known limitations:
# * Does not work by design:
# - CPU Hotplug
# - CPU Hotplug
# - Memory Hotplug
# - NVDIMM devices
#
@@ -74,7 +74,7 @@ snp_id_auth = ""
# SNP Guest Policy, the POLICY parameter to the SNP_LAUNCH_START command.
# If unset, the QEMU default policy (0x30000) will be used.
# Notice that the guest policy is enforced at VM launch, and your pod VMs
# Notice that the guest policy is enforced at VM launch, and your pod VMs
# won't start at all if the policy denies it. This will be indicated by a
# 'SNP_LAUNCH_START' error.
snp_guest_policy = 196608
@@ -702,9 +702,9 @@ enable_pprof = false
# Indicates the CreateContainer request timeout needed for the workload(s)
# If using guest_pull, this includes the time to pull the image inside the guest
# Defaults to @DEFCREATECONTAINERTIMEOUT@ second(s)
# Note: The effective timeout is determined by the lesser of two values: runtime-request-timeout from kubelet config
# (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout.
# Defaults to @DEFCREATECONTAINERTIMEOUT@ second(s)
# Note: The effective timeout is determined by the lesser of two values: runtime-request-timeout from kubelet config
# (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout.
# In essence, the timeout used for guest pull=runtime-request-timeout<create_container_timeout?runtime-request-timeout:create_container_timeout.
create_container_timeout = @DEFCREATECONTAINERTIMEOUT@

@@ -12,7 +12,7 @@
# XXX: Type: @PROJECT_TYPE@
[hypervisor.qemu]
path = "@QEMUTDXPATH@"
path = "@QEMUPATH@"
kernel = "@KERNELCONFIDENTIALPATH@"
image = "@IMAGECONFIDENTIALPATH@"
machine_type = "@MACHINETYPE@"
@@ -33,7 +33,7 @@ rootfs_type = @DEFROOTFSTYPE@
#
# Known limitations:
# * Does not work by design:
# - CPU Hotplug
# - CPU Hotplug
# - Memory Hotplug
# - NVDIMM devices
#
@@ -54,7 +54,7 @@ enable_annotations = @DEFENABLEANNOTATIONS_COCO@
# Each member of the list is a path pattern as described by glob(3).
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: @QEMUVALIDHYPERVISORPATHS@
valid_hypervisor_paths = @QEMUTDXVALIDHYPERVISORPATHS@
valid_hypervisor_paths = @QEMUVALIDHYPERVISORPATHS@
# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
@@ -68,6 +68,11 @@ valid_hypervisor_paths = @QEMUTDXVALIDHYPERVISORPATHS@
# container and look for 'default-kernel-parameters' log entries.
kernel_params = "@KERNELTDXPARAMS@"
# Optional dm-verity parameters (comma-separated key=value list):
# root_hash=...,salt=...,data_blocks=...,data_block_size=...,hash_block_size=...
# These are used by the runtime to assemble dm-verity kernel params.
kernel_verity_params = "@KERNELVERITYPARAMS@"
# Path to the firmware.
# If you want qemu to use the default firmware, leave this option empty
firmware = "@FIRMWARETDVFPATH@"
@@ -679,9 +684,9 @@ enable_pprof = false
# Indicates the CreateContainer request timeout needed for the workload(s)
# If using guest_pull, this includes the time to pull the image inside the guest
# Defaults to @DEFCREATECONTAINERTIMEOUT@ second(s)
# Note: The effective timeout is determined by the lesser of two values: runtime-request-timeout from kubelet config
# (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout.
# Defaults to @DEFCREATECONTAINERTIMEOUT@ second(s)
# Note: The effective timeout is determined by the lesser of two values: runtime-request-timeout from kubelet config
# (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout.
# In essence, the timeout used for guest pull=runtime-request-timeout<create_container_timeout?runtime-request-timeout:create_container_timeout.
create_container_timeout = @DEFCREATECONTAINERTIMEOUT@

@@ -361,17 +361,17 @@ msize_9p = @DEFMSIZE9P@
# nvdimm is not supported when `confidential_guest = true`.
disable_image_nvdimm = @DEFDISABLEIMAGENVDIMM@
# Enable hot-plugging of VFIO devices to a bridge-port,
# root-port or switch-port.
# Enable hot-plugging of VFIO devices to a bridge-port,
# root-port or switch-port.
# The default setting is "no-port"
hot_plug_vfio = "no-port"
# In a confidential compute environment hot-plugging can compromise
# security.
# Enable cold-plugging of VFIO devices to a bridge-port,
# root-port or switch-port.
# The default setting is "no-port", which means disabled.
cold_plug_vfio = "no-port"
# security.
# Enable cold-plugging of VFIO devices to a bridge-port,
# root-port or switch-port.
# The default setting is "no-port", which means disabled.
cold_plug_vfio = "no-port"
# Before hot plugging a PCIe device, you need to add a pcie_root_port device.
# Use this parameter when using some large PCI bar devices, such as Nvidia GPU
@@ -693,9 +693,9 @@ enable_pprof = false
# Indicates the CreateContainer request timeout needed for the workload(s)
# If using guest_pull, this includes the time to pull the image inside the guest
# Defaults to @DEFCREATECONTAINERTIMEOUT@ second(s)
# Note: The effective timeout is determined by the lesser of two values: runtime-request-timeout from kubelet config
# (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout.
# Defaults to @DEFCREATECONTAINERTIMEOUT@ second(s)
# Note: The effective timeout is determined by the lesser of two values: runtime-request-timeout from kubelet config
# (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout.
# In essence, the timeout used for guest pull=runtime-request-timeout<create_container_timeout?runtime-request-timeout:create_container_timeout.
create_container_timeout = @DEFCREATECONTAINERTIMEOUT@

@@ -410,9 +410,9 @@ enable_pprof = false
# Indicates the CreateContainer request timeout needed for the workload(s)
# If using guest_pull, this includes the time to pull the image inside the guest
# Defaults to @DEFCREATECONTAINERTIMEOUT@ second(s)
# Note: The effective timeout is determined by the lesser of two values: runtime-request-timeout from kubelet config
# (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout.
# Defaults to @DEFCREATECONTAINERTIMEOUT@ second(s)
# Note: The effective timeout is determined by the lesser of two values: runtime-request-timeout from kubelet config
# (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout) and create_container_timeout.
# In essence, the timeout used for guest pull=runtime-request-timeout<create_container_timeout?runtime-request-timeout:create_container_timeout.
create_container_timeout = @DEFCREATECONTAINERTIMEOUT@

View File

@@ -19,8 +19,13 @@ import (
)
const (
// containerd CRI annotations
nameAnnotation = "io.kubernetes.cri.sandbox-name"
namespaceAnnotation = "io.kubernetes.cri.sandbox-namespace"
// CRI-O annotations
crioNameAnnotation = "io.kubernetes.cri-o.KubeName"
crioNamespaceAnnotation = "io.kubernetes.cri-o.Namespace"
)
// coldPlugDevices handles cold plug of CDI devices into the sandbox
@@ -78,8 +83,7 @@ func coldPlugWithAPI(ctx context.Context, s *service, ociSpec *specs.Spec) error
// the Kubelet does not pass the device information via CRI during
// Sandbox creation.
func getDeviceSpec(ctx context.Context, socket string, ann map[string]string) ([]string, error) {
podName := ann[nameAnnotation]
podNs := ann[namespaceAnnotation]
podName, podNs := getPodIdentifiers(ann)
// create dialer for unix socket
dialer := func(ctx context.Context, target string) (net.Conn, error) {
@@ -111,7 +115,7 @@ func getDeviceSpec(ctx context.Context, socket string, ann map[string]string) ([
}
resp, err := client.Get(ctx, prr)
if err != nil {
return nil, fmt.Errorf("cold plug: GetPodResources failed: %w", err)
return nil, fmt.Errorf("cold plug: GetPodResources failed for pod(%s) in namespace(%s): %w", podName, podNs, err)
}
podRes := resp.PodResources
if podRes == nil {
@@ -141,6 +145,24 @@ func formatCDIDevIDs(specName string, devIDs []string) []string {
return result
}
func debugPodID(ann map[string]string) string {
return fmt.Sprintf("%s/%s", ann[namespaceAnnotation], ann[nameAnnotation])
// getPodIdentifiers returns the pod name and namespace from annotations.
// It first checks containerd CRI annotations, then falls back to CRI-O annotations.
func getPodIdentifiers(ann map[string]string) (podName, podNamespace string) {
podName = ann[nameAnnotation]
podNamespace = ann[namespaceAnnotation]
// Fall back to CRI-O annotations if containerd annotations are empty
if podName == "" {
podName = ann[crioNameAnnotation]
}
if podNamespace == "" {
podNamespace = ann[crioNamespaceAnnotation]
}
return podName, podNamespace
}
func debugPodID(ann map[string]string) string {
podName, podNamespace := getPodIdentifiers(ann)
return fmt.Sprintf("%s/%s", podNamespace, podName)
}
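The fallback order introduced here can be exercised in isolation. A self-contained sketch that mirrors `getPodIdentifiers` (the `podIdentifiers` name is ours, for illustration):

```go
package main

import "fmt"

// Annotation keys as used by the shim: containerd CRI first, CRI-O fallback.
const (
	nameAnnotation          = "io.kubernetes.cri.sandbox-name"
	namespaceAnnotation     = "io.kubernetes.cri.sandbox-namespace"
	crioNameAnnotation      = "io.kubernetes.cri-o.KubeName"
	crioNamespaceAnnotation = "io.kubernetes.cri-o.Namespace"
)

// podIdentifiers mirrors the fallback: containerd annotations win, and
// CRI-O annotations fill in whichever fields are missing.
func podIdentifiers(ann map[string]string) (name, namespace string) {
	name = ann[nameAnnotation]
	namespace = ann[namespaceAnnotation]
	if name == "" {
		name = ann[crioNameAnnotation]
	}
	if namespace == "" {
		namespace = ann[crioNamespaceAnnotation]
	}
	return name, namespace
}

func main() {
	// A CRI-O sandbox carries only the CRI-O keys.
	name, ns := podIdentifiers(map[string]string{
		crioNameAnnotation:      "nginx",
		crioNamespaceAnnotation: "default",
	})
	fmt.Printf("%s/%s\n", ns, name)
}
```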

View File

@@ -93,6 +93,7 @@ type hypervisor struct {
MachineAccelerators string `toml:"machine_accelerators"`
CPUFeatures string `toml:"cpu_features"`
KernelParams string `toml:"kernel_params"`
KernelVerityParams string `toml:"kernel_verity_params"`
MachineType string `toml:"machine_type"`
QgsPort uint32 `toml:"tdx_quote_generation_service_socket_port"`
BlockDeviceDriver string `toml:"block_device_driver"`
@@ -387,6 +388,10 @@ func (h hypervisor) kernelParams() string {
return h.KernelParams
}
func (h hypervisor) kernelVerityParams() string {
return h.KernelVerityParams
}
func (h hypervisor) machineType() string {
if h.MachineType == "" {
return defaultMachineType
@@ -814,6 +819,7 @@ func newFirecrackerHypervisorConfig(h hypervisor) (vc.HypervisorConfig, error) {
RootfsType: rootfsType,
FirmwarePath: firmware,
KernelParams: vc.DeserializeParams(vc.KernelParamFields(kernelParams)),
KernelVerityParams: h.kernelVerityParams(),
NumVCPUsF: h.defaultVCPUs(),
DefaultMaxVCPUs: h.defaultMaxVCPUs(),
MemorySize: h.defaultMemSz(),
@@ -948,6 +954,7 @@ func newQemuHypervisorConfig(h hypervisor) (vc.HypervisorConfig, error) {
MachineAccelerators: machineAccelerators,
CPUFeatures: cpuFeatures,
KernelParams: vc.DeserializeParams(vc.KernelParamFields(kernelParams)),
KernelVerityParams: h.kernelVerityParams(),
HypervisorMachineType: machineType,
QgsPort: h.qgsPort(),
NumVCPUsF: h.defaultVCPUs(),
@@ -1088,6 +1095,7 @@ func newClhHypervisorConfig(h hypervisor) (vc.HypervisorConfig, error) {
FirmwarePath: firmware,
MachineAccelerators: machineAccelerators,
KernelParams: vc.DeserializeParams(vc.KernelParamFields(kernelParams)),
KernelVerityParams: h.kernelVerityParams(),
HypervisorMachineType: machineType,
NumVCPUsF: h.defaultVCPUs(),
DefaultMaxVCPUs: h.defaultMaxVCPUs(),
@@ -1165,16 +1173,17 @@ func newDragonballHypervisorConfig(h hypervisor) (vc.HypervisorConfig, error) {
kernelParams := h.kernelParams()
return vc.HypervisorConfig{
KernelPath: kernel,
ImagePath: image,
RootfsType: rootfsType,
KernelParams: vc.DeserializeParams(vc.KernelParamFields(kernelParams)),
NumVCPUsF: h.defaultVCPUs(),
DefaultMaxVCPUs: h.defaultMaxVCPUs(),
MemorySize: h.defaultMemSz(),
MemSlots: h.defaultMemSlots(),
EntropySource: h.GetEntropySource(),
Debug: h.Debug,
KernelPath: kernel,
ImagePath: image,
RootfsType: rootfsType,
KernelParams: vc.DeserializeParams(vc.KernelParamFields(kernelParams)),
KernelVerityParams: h.kernelVerityParams(),
NumVCPUsF: h.defaultVCPUs(),
DefaultMaxVCPUs: h.defaultMaxVCPUs(),
MemorySize: h.defaultMemSz(),
MemSlots: h.defaultMemSlots(),
EntropySource: h.GetEntropySource(),
Debug: h.Debug,
}, nil
}
@@ -1249,6 +1258,7 @@ func newStratovirtHypervisorConfig(h hypervisor) (vc.HypervisorConfig, error) {
ImagePath: image,
RootfsType: rootfsType,
KernelParams: vc.DeserializeParams(strings.Fields(kernelParams)),
KernelVerityParams: h.kernelVerityParams(),
HypervisorMachineType: machineType,
NumVCPUsF: h.defaultVCPUs(),
DefaultMaxVCPUs: h.defaultMaxVCPUs(),

View File

@@ -636,6 +636,15 @@ func addHypervisorPathOverrides(ocispec specs.Spec, config *vc.SandboxConfig, ru
}
}
if value, ok := ocispec.Annotations[vcAnnotations.KernelVerityParams]; ok {
if value != "" {
if _, err := vc.ParseKernelVerityParams(value); err != nil {
return fmt.Errorf("invalid kernel_verity_params in annotation: %w", err)
}
config.HypervisorConfig.KernelVerityParams = value
}
}
return nil
}

View File

@@ -466,8 +466,8 @@ func (clh *cloudHypervisor) enableProtection() error {
}
}
func getNonUserDefinedKernelParams(rootfstype string, disableNvdimm bool, dax bool, debug bool, confidential bool, iommu bool) ([]Param, error) {
params, err := GetKernelRootParams(rootfstype, disableNvdimm, dax)
func getNonUserDefinedKernelParams(rootfstype string, disableNvdimm bool, dax bool, debug bool, confidential bool, iommu bool, kernelVerityParams string) ([]Param, error) {
params, err := GetKernelRootParams(rootfstype, disableNvdimm, dax, kernelVerityParams)
if err != nil {
return []Param{}, err
}
@@ -587,7 +587,7 @@ func (clh *cloudHypervisor) CreateVM(ctx context.Context, id string, network Net
disableNvdimm := (clh.config.DisableImageNvdimm || clh.config.ConfidentialGuest)
enableDax := !disableNvdimm
params, err := getNonUserDefinedKernelParams(hypervisorConfig.RootfsType, disableNvdimm, enableDax, clh.config.Debug, clh.config.ConfidentialGuest, clh.config.IOMMU)
params, err := getNonUserDefinedKernelParams(hypervisorConfig.RootfsType, disableNvdimm, enableDax, clh.config.Debug, clh.config.ConfidentialGuest, clh.config.IOMMU, hypervisorConfig.KernelVerityParams)
if err != nil {
return err
}

View File

@@ -699,7 +699,12 @@ func (fc *firecracker) fcInitConfiguration(ctx context.Context) error {
return err
}
params, err := GetKernelRootParams(fc.config.RootfsType, true, false)
params, err := GetKernelRootParams(
fc.config.RootfsType,
true,
false,
fc.config.KernelVerityParams,
)
if err != nil {
return err
}

View File

@@ -16,6 +16,7 @@ import (
"os"
"path/filepath"
"runtime"
"strconv"
"strings"
"github.com/pkg/errors"
@@ -126,18 +127,56 @@ const (
EROFS RootfsType = "erofs"
)
func GetKernelRootParams(rootfstype string, disableNvdimm bool, dax bool) ([]Param, error) {
var kernelRootParams []Param
func GetKernelRootParams(rootfstype string, disableNvdimm bool, dax bool, kernelVerityParams string) ([]Param, error) {
cfg, err := ParseKernelVerityParams(kernelVerityParams)
if err != nil {
return []Param{}, err
}
// EXT4 filesystem is used by default.
if rootfstype == "" {
rootfstype = string(EXT4)
}
if cfg != nil {
rootDevice := "/dev/pmem0p1"
hashDevice := "/dev/pmem0p2"
if disableNvdimm {
rootDevice = "/dev/vda1"
hashDevice = "/dev/vda2"
}
dataSectors := (cfg.dataBlockSize / 512) * cfg.dataBlocks
verityCmd := fmt.Sprintf(
"dm-verity,,,ro,0 %d verity 1 %s %s %d %d %d 0 sha256 %s %s",
dataSectors,
rootDevice,
hashDevice,
cfg.dataBlockSize,
cfg.hashBlockSize,
cfg.dataBlocks,
cfg.rootHash,
cfg.salt,
)
rootFlags, err := kernelVerityRootFlags(rootfstype)
if err != nil {
return []Param{}, err
}
return []Param{
{Key: "dm-mod.create", Value: fmt.Sprintf("\"%s\"", verityCmd)},
{Key: "root", Value: "/dev/dm-0"},
{Key: "rootflags", Value: rootFlags},
{Key: "rootfstype", Value: rootfstype},
}, nil
}
if disableNvdimm && dax {
return []Param{}, fmt.Errorf("Virtio-Blk does not support DAX")
}
kernelRootParams := []Param{}
if disableNvdimm {
// Virtio-Blk
kernelRootParams = append(kernelRootParams, Param{"root", string(VirtioBlk)})
@@ -171,10 +210,116 @@ func GetKernelRootParams(rootfstype string, disableNvdimm bool, dax bool) ([]Par
}
kernelRootParams = append(kernelRootParams, Param{"rootfstype", rootfstype})
return kernelRootParams, nil
}
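The `dm-mod.create` construction above can be checked with concrete numbers: one 4096-byte data block is `(4096/512)*1 = 8` sectors. A sketch reproducing the formatting logic for the virtio-blk case (devices `/dev/vda1`/`/dev/vda2`; the helper name is ours):

```go
package main

import "fmt"

// verityCreateParam reproduces the dm-mod.create table built above for the
// virtio-blk layout: root data on /dev/vda1, hash tree on /dev/vda2. The
// whole table is quoted so the kernel parses it as a single argument.
func verityCreateParam(rootHash, salt string, dataBlocks, dataBlockSize, hashBlockSize uint64) string {
	// dm-verity sector count: data size expressed in 512-byte sectors.
	dataSectors := (dataBlockSize / 512) * dataBlocks
	cmd := fmt.Sprintf(
		"dm-verity,,,ro,0 %d verity 1 /dev/vda1 /dev/vda2 %d %d %d 0 sha256 %s %s",
		dataSectors, dataBlockSize, hashBlockSize, dataBlocks, rootHash, salt,
	)
	return fmt.Sprintf("dm-mod.create=%q", cmd)
}

func main() {
	fmt.Println(verityCreateParam("abc", "def", 1, 4096, 4096))
	// dm-mod.create="dm-verity,,,ro,0 8 verity 1 /dev/vda1 /dev/vda2 4096 4096 1 0 sha256 abc def"
}
```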
const (
verityBlockSizeBytes = 512
)
type kernelVerityConfig struct {
rootHash string
salt string
dataBlocks uint64
dataBlockSize uint64
hashBlockSize uint64
}
func ParseKernelVerityParams(params string) (*kernelVerityConfig, error) {
if strings.TrimSpace(params) == "" {
return nil, nil
}
values := map[string]string{}
for _, field := range strings.Split(params, ",") {
field = strings.TrimSpace(field)
if field == "" {
continue
}
parts := strings.SplitN(field, "=", 2)
if len(parts) != 2 {
return nil, fmt.Errorf("invalid kernel_verity_params entry: %q", field)
}
values[parts[0]] = parts[1]
}
cfg := &kernelVerityConfig{
rootHash: values["root_hash"],
salt: values["salt"],
}
if cfg.rootHash == "" {
return nil, fmt.Errorf("missing kernel_verity_params root_hash")
}
parseUintField := func(name string) (uint64, error) {
value, ok := values[name]
if !ok || value == "" {
return 0, fmt.Errorf("missing kernel_verity_params %s", name)
}
parsed, err := strconv.ParseUint(value, 10, 64)
if err != nil {
return 0, fmt.Errorf("invalid kernel_verity_params %s %q: %w", name, value, err)
}
return parsed, nil
}
dataBlocks, err := parseUintField("data_blocks")
if err != nil {
return nil, err
}
dataBlockSize, err := parseUintField("data_block_size")
if err != nil {
return nil, err
}
hashBlockSize, err := parseUintField("hash_block_size")
if err != nil {
return nil, err
}
if cfg.salt == "" {
return nil, fmt.Errorf("missing kernel_verity_params salt")
}
if dataBlocks == 0 {
return nil, fmt.Errorf("invalid kernel_verity_params data_blocks: must be non-zero")
}
if dataBlockSize == 0 {
return nil, fmt.Errorf("invalid kernel_verity_params data_block_size: must be non-zero")
}
if hashBlockSize == 0 {
return nil, fmt.Errorf("invalid kernel_verity_params hash_block_size: must be non-zero")
}
if dataBlockSize%verityBlockSizeBytes != 0 {
return nil, fmt.Errorf("invalid kernel_verity_params data_block_size: must be multiple of %d", verityBlockSizeBytes)
}
if hashBlockSize%verityBlockSizeBytes != 0 {
return nil, fmt.Errorf("invalid kernel_verity_params hash_block_size: must be multiple of %d", verityBlockSizeBytes)
}
cfg.dataBlocks = dataBlocks
cfg.dataBlockSize = dataBlockSize
cfg.hashBlockSize = hashBlockSize
return cfg, nil
}
func kernelVerityRootFlags(rootfstype string) (string, error) {
// EXT4 filesystem is used by default.
if rootfstype == "" {
rootfstype = string(EXT4)
}
switch RootfsType(rootfstype) {
case EROFS, XFS:
return "ro", nil
case EXT4:
return "data=ordered,errors=remount-ro ro", nil
default:
return "", fmt.Errorf("unsupported rootfs type %q", rootfstype)
}
}
// DeviceType describes a virtualized device type.
type DeviceType int
@@ -483,6 +628,9 @@ type HypervisorConfig struct {
// KernelParams are additional guest kernel parameters.
KernelParams []Param
// KernelVerityParams are additional guest dm-verity parameters.
KernelVerityParams string
// HypervisorParams are additional hypervisor parameters.
HypervisorParams []Param

View File

@@ -22,6 +22,7 @@ func TestGetKernelRootParams(t *testing.T) {
expected []Param
disableNvdimm bool
dax bool
verityParams string
error bool
}{
// EXT4
@@ -34,6 +35,7 @@ func TestGetKernelRootParams(t *testing.T) {
},
disableNvdimm: false,
dax: false,
verityParams: "",
error: false,
},
{
@@ -45,6 +47,7 @@ func TestGetKernelRootParams(t *testing.T) {
},
disableNvdimm: false,
dax: true,
verityParams: "",
error: false,
},
{
@@ -56,6 +59,7 @@ func TestGetKernelRootParams(t *testing.T) {
},
disableNvdimm: true,
dax: false,
verityParams: "",
error: false,
},
@@ -69,6 +73,7 @@ func TestGetKernelRootParams(t *testing.T) {
},
disableNvdimm: false,
dax: false,
verityParams: "",
error: false,
},
{
@@ -80,6 +85,7 @@ func TestGetKernelRootParams(t *testing.T) {
},
disableNvdimm: false,
dax: true,
verityParams: "",
error: false,
},
{
@@ -91,6 +97,7 @@ func TestGetKernelRootParams(t *testing.T) {
},
disableNvdimm: true,
dax: false,
verityParams: "",
error: false,
},
@@ -104,6 +111,7 @@ func TestGetKernelRootParams(t *testing.T) {
},
disableNvdimm: false,
dax: false,
verityParams: "",
error: false,
},
{
@@ -115,6 +123,7 @@ func TestGetKernelRootParams(t *testing.T) {
},
disableNvdimm: false,
dax: true,
verityParams: "",
error: false,
},
{
@@ -126,6 +135,7 @@ func TestGetKernelRootParams(t *testing.T) {
},
disableNvdimm: true,
dax: false,
verityParams: "",
error: false,
},
@@ -139,6 +149,7 @@ func TestGetKernelRootParams(t *testing.T) {
},
disableNvdimm: false,
dax: false,
verityParams: "",
error: true,
},
@@ -152,12 +163,61 @@ func TestGetKernelRootParams(t *testing.T) {
},
disableNvdimm: true,
dax: true,
verityParams: "",
error: true,
},
{
rootfstype: string(EXT4),
expected: []Param{
{
Key: "dm-mod.create",
Value: "\"dm-verity,,,ro,0 8 verity 1 /dev/vda1 /dev/vda2 4096 4096 1 0 sha256 abc def\"",
},
{Key: "root", Value: "/dev/dm-0"},
{Key: "rootflags", Value: "data=ordered,errors=remount-ro ro"},
{Key: "rootfstype", Value: string(EXT4)},
},
disableNvdimm: true,
dax: false,
verityParams: "root_hash=abc,salt=def,data_blocks=1,data_block_size=4096,hash_block_size=4096",
error: false,
},
{
rootfstype: string(EXT4),
expected: []Param{},
disableNvdimm: false,
dax: false,
verityParams: "root_hash=abc,data_blocks=1,data_block_size=4096,hash_block_size=4096",
error: true,
},
{
rootfstype: string(EXT4),
expected: []Param{},
disableNvdimm: false,
dax: false,
verityParams: "root_hash=abc,salt=def,data_block_size=4096,hash_block_size=4096",
error: true,
},
{
rootfstype: string(EXT4),
expected: []Param{},
disableNvdimm: false,
dax: false,
verityParams: "root_hash=abc,salt=def,data_blocks=foo,data_block_size=4096,hash_block_size=4096",
error: true,
},
{
rootfstype: string(EXT4),
expected: []Param{},
disableNvdimm: false,
dax: false,
verityParams: "root_hash=abc,salt=def,data_blocks=1,data_block_size=4096,hash_block_size=4096,badfield",
error: true,
},
}
for _, t := range tests {
kernelRootParams, err := GetKernelRootParams(t.rootfstype, t.disableNvdimm, t.dax)
kernelRootParams, err := GetKernelRootParams(t.rootfstype, t.disableNvdimm, t.dax, t.verityParams)
if t.error {
assert.Error(err)
continue

View File

@@ -84,6 +84,9 @@ const (
// KernelParams is a sandbox annotation for passing additional guest kernel parameters.
KernelParams = kataAnnotHypervisorPrefix + "kernel_params"
// KernelVerityParams is a sandbox annotation for passing guest dm-verity parameters.
KernelVerityParams = kataAnnotHypervisorPrefix + "kernel_verity_params"
// MachineType is a sandbox annotation to specify the type of machine being emulated by the hypervisor.
MachineType = kataAnnotHypervisorPrefix + "machine_type"

View File

@@ -773,7 +773,12 @@ func (q *qemuArchBase) setEndpointDevicePath(endpoint Endpoint, bridgeAddr int,
func (q *qemuArchBase) handleImagePath(config HypervisorConfig) error {
if config.ImagePath != "" {
kernelRootParams, err := GetKernelRootParams(config.RootfsType, q.disableNvdimm, false)
kernelRootParams, err := GetKernelRootParams(
config.RootfsType,
q.disableNvdimm,
false,
config.KernelVerityParams,
)
if err != nil {
return err
}
@@ -781,7 +786,12 @@ func (q *qemuArchBase) handleImagePath(config HypervisorConfig) error {
q.qemuMachine.Options = strings.Join([]string{
q.qemuMachine.Options, qemuNvdimmOption,
}, ",")
kernelRootParams, err = GetKernelRootParams(config.RootfsType, q.disableNvdimm, q.dax)
kernelRootParams, err = GetKernelRootParams(
config.RootfsType,
q.disableNvdimm,
q.dax,
config.KernelVerityParams,
)
if err != nil {
return err
}

View File

@@ -83,7 +83,12 @@ func newQemuArch(config HypervisorConfig) (qemuArch, error) {
}
if config.ImagePath != "" {
kernelParams, err := GetKernelRootParams(config.RootfsType, true, false)
kernelParams, err := GetKernelRootParams(
config.RootfsType,
true,
false,
config.KernelVerityParams,
)
if err != nil {
return nil, err
}

View File

@@ -337,7 +337,12 @@ func (s *stratovirt) getKernelParams(machineType string, initrdPath string) (str
var kernelParams []Param
if initrdPath == "" {
params, err := GetKernelRootParams(s.config.RootfsType, true, false)
params, err := GetKernelRootParams(
s.config.RootfsType,
true,
false,
s.config.KernelVerityParams,
)
if err != nil {
return "", err
}

View File

@@ -155,13 +155,13 @@ EOF
# End-to-End Tests (require cluster with kata-deploy)
# =============================================================================
@test "E2E: Custom RuntimeClass exists with correct properties" {
@test "E2E: Custom RuntimeClass exists and can run a pod" {
# Check RuntimeClass exists
run kubectl get runtimeclass "${CUSTOM_RUNTIME_HANDLER}" -o name
if [[ "${status}" -ne 0 ]]; then
echo "# RuntimeClass not found. kata-deploy logs:" >&3
kubectl -n kube-system logs -l name=kata-deploy --tail=50 2>/dev/null || true
fail "Custom RuntimeClass ${CUSTOM_RUNTIME_HANDLER} not found"
die "Custom RuntimeClass ${CUSTOM_RUNTIME_HANDLER} not found"
fi
echo "# RuntimeClass ${CUSTOM_RUNTIME_HANDLER} exists" >&3
@@ -195,15 +195,6 @@ EOF
echo "# Label app.kubernetes.io/managed-by: ${label}" >&3
[[ "${label}" == "Helm" ]]
BATS_TEST_COMPLETED=1
}
@test "E2E: Custom runtime can run a pod" {
# Check if the custom RuntimeClass exists
if ! kubectl get runtimeclass "${CUSTOM_RUNTIME_HANDLER}" &>/dev/null; then
skip "Custom RuntimeClass ${CUSTOM_RUNTIME_HANDLER} not found"
fi
# Create a test pod using the custom runtime
cat <<EOF | kubectl apply -f -
apiVersion: v1
@@ -239,7 +230,7 @@ EOF
Failed)
echo "# Pod failed" >&3
kubectl describe pod "${TEST_POD_NAME}" >&3
fail "Pod failed to run with custom runtime"
die "Pod failed to run with custom runtime"
;;
*)
local current_time
@@ -247,7 +238,7 @@ EOF
if (( current_time - start_time > timeout )); then
echo "# Timeout waiting for pod" >&3
kubectl describe pod "${TEST_POD_NAME}" >&3
fail "Timeout waiting for pod to be ready"
die "Timeout waiting for pod to be ready"
fi
sleep 5
;;
@@ -262,7 +253,7 @@ EOF
echo "# Pod ran successfully with custom runtime" >&3
BATS_TEST_COMPLETED=1
else
fail "Pod did not complete successfully (exit code: ${exit_code})"
die "Pod did not complete successfully (exit code: ${exit_code})"
fi
}

View File

@@ -115,7 +115,7 @@ deploy_kata() {
kubectl -n "${HELM_NAMESPACE}" rollout status daemonset/kata-deploy --timeout=300s
# Give it a moment to configure runtimes
sleep 10
sleep 60
return 0
}

View File

@@ -151,26 +151,30 @@ setup() {
[[ -n "$qemu_cmd" ]] || { echo "Could not find QEMU command line"; return 1; }
kernel_path=$(echo "$qemu_cmd" | grep -oP -- '-kernel \K[^ ]+')
initrd_path=$(echo "$qemu_cmd" | grep -oP -- '-initrd \K[^ ]+')
initrd_path=$(echo "$qemu_cmd" | grep -oP -- '-initrd \K[^ ]+' || true)
firmware_path=$(echo "$qemu_cmd" | grep -oP -- '-bios \K[^ ]+')
vcpu_count=$(echo "$qemu_cmd" | grep -oP -- '-smp \K\d+')
append=$(echo "$qemu_cmd" | sed -n 's/.*-append \(.*\) -bios.*/\1/p')
# Remove escape backslashes for quotes from output for dm-mod.create parameters
append="${append//\\\"/\"}"
launch_measurement=$(PATH="${PATH}:${HOME}/.local/bin" sev-snp-measure \
--mode=snp \
--vcpus="${vcpu_count}" \
--vcpu-type=EPYC-v4 \
--output-format=base64 \
--ovmf="${firmware_path}" \
--kernel="${kernel_path}" \
--initrd="${initrd_path}" \
--append="${append}" \
measure_args=(
--mode=snp
--vcpus="${vcpu_count}"
--vcpu-type=EPYC-v4
--output-format=base64
--ovmf="${firmware_path}"
--kernel="${kernel_path}"
--append="${append}"
)
if [[ -n "${initrd_path}" ]]; then
measure_args+=(--initrd="${initrd_path}")
fi
launch_measurement=$(PATH="${PATH}:${HOME}/.local/bin" sev-snp-measure "${measure_args[@]}")
# set launch measurement as reference value
kbs_config_command set-sample-reference-value snp_launch_measurement "${launch_measurement}"
# Get the reported firmware version(s) for this machine
firmware=$(sudo snphost show tcb | grep -A 5 "Reported TCB")
@@ -204,7 +208,6 @@ setup() {
kbs_config_command set-sample-reference-value snp_launch_measurement abcd
[ "$result" -eq 0 ]
}

View File

@@ -9,6 +9,11 @@ load "${BATS_TEST_DIRNAME}/../../common.bash"
load "${BATS_TEST_DIRNAME}/lib.sh"
load "${BATS_TEST_DIRNAME}/tests_common.sh"
# Currently only the Go runtime provides the config path used here.
# If a Rust hypervisor runs this test, mirror the enabling_hypervisor
# pattern in tests/common.bash to select the correct runtime-rs config.
shim_config_file="/opt/kata/share/defaults/kata-containers/configuration-${KATA_HYPERVISOR}.toml"
check_and_skip() {
case "${KATA_HYPERVISOR}" in
qemu-tdx|qemu-coco-dev)
@@ -29,26 +34,29 @@ setup() {
setup_common || die "setup_common failed"
}
@test "Test cannnot launch pod with measured boot enabled and incorrect hash" {
@test "Test cannot launch pod with measured boot enabled and incorrect hash" {
pod_config="$(new_pod_config nginx "kata-${KATA_HYPERVISOR}")"
auto_generate_policy "${pod_config_dir}" "${pod_config}"
incorrect_hash="1111111111111111111111111111111111111111111111111111111111111111"
# To avoid editing that file on the worker node, here it will be
# enabled via pod annotations.
# Read verity parameters from config, then override via annotations.
kernel_verity_params=$(exec_host "$node" "sed -n 's/^kernel_verity_params = \"\\(.*\\)\"/\\1/p' ${shim_config_file}" || true)
[ -n "${kernel_verity_params}" ] || die "Missing kernel_verity_params in ${shim_config_file}"
kernel_verity_params=$(printf '%s\n' "$kernel_verity_params" | sed -E "s/root_hash=[^,]*/root_hash=${incorrect_hash}/")
set_metadata_annotation "$pod_config" \
"io.katacontainers.config.hypervisor.kernel_params" \
"rootfs_verity.scheme=dm-verity rootfs_verity.hash=$incorrect_hash"
"io.katacontainers.config.hypervisor.kernel_verity_params" \
"${kernel_verity_params}"
# Run on a specific node so we know from where to inspect the logs
set_node "$pod_config" "$node"
# For debug sake
echo "Pod $pod_config file:"
cat $pod_config
kubectl apply -f $pod_config
waitForProcess "60" "3" "exec_host $node journalctl -t kata | grep \"verity: .* metadata block .* is corrupted\""
assert_pod_container_creating "$pod_config"
assert_logs_contain "$node" kata "${node_start_time}" "verity: .* metadata block .* is corrupted"
}
teardown() {

View File

@@ -48,12 +48,59 @@ KBS_AUTH_CONFIG_JSON=$(
)
export KBS_AUTH_CONFIG_JSON
# Base64 encoding for use as Kubernetes Secret in pod manifests
# Base64 encoding for use as Kubernetes Secret in pod manifests (non-TEE)
NGC_API_KEY_BASE64=$(
echo -n "${NGC_API_KEY}" | base64 -w0
)
export NGC_API_KEY_BASE64
# Sealed secret format for TEE pods (vault type pointing to KBS resource)
# Format: sealed.<base64url JWS header>.<base64url payload>.<base64url signature>
# IMPORTANT: JWS uses base64url encoding WITHOUT padding (no trailing '=')
# We use tr to convert standard base64 (+/) to base64url (-_) and remove padding (=)
# For vault type, header and signature can be placeholders since the payload
# contains the KBS resource path where the actual secret is stored.
#
# Vault type sealed secret payload for instruct pod:
# {
# "version": "0.1.0",
# "type": "vault",
# "name": "kbs:///default/ngc-api-key/instruct",
# "provider": "kbs",
# "provider_settings": {},
# "annotations": {}
# }
NGC_API_KEY_SEALED_SECRET_INSTRUCT_PAYLOAD=$(
echo -n '{"version":"0.1.0","type":"vault","name":"kbs:///default/ngc-api-key/instruct","provider":"kbs","provider_settings":{},"annotations":{}}' |
base64 -w0 | tr '+/' '-_' | tr -d '='
)
NGC_API_KEY_SEALED_SECRET_INSTRUCT="sealed.fakejwsheader.${NGC_API_KEY_SEALED_SECRET_INSTRUCT_PAYLOAD}.fakesignature"
export NGC_API_KEY_SEALED_SECRET_INSTRUCT
# Base64 encode the sealed secret for use in Kubernetes Secret data field
# (genpolicy only supports the 'data' field which expects base64 values)
NGC_API_KEY_SEALED_SECRET_INSTRUCT_BASE64=$(echo -n "${NGC_API_KEY_SEALED_SECRET_INSTRUCT}" | base64 -w0)
export NGC_API_KEY_SEALED_SECRET_INSTRUCT_BASE64
# Vault type sealed secret payload for embedqa pod:
# {
# "version": "0.1.0",
# "type": "vault",
# "name": "kbs:///default/ngc-api-key/embedqa",
# "provider": "kbs",
# "provider_settings": {},
# "annotations": {}
# }
NGC_API_KEY_SEALED_SECRET_EMBEDQA_PAYLOAD=$(
echo -n '{"version":"0.1.0","type":"vault","name":"kbs:///default/ngc-api-key/embedqa","provider":"kbs","provider_settings":{},"annotations":{}}' |
base64 -w0 | tr '+/' '-_' | tr -d '='
)
NGC_API_KEY_SEALED_SECRET_EMBEDQA="sealed.fakejwsheader.${NGC_API_KEY_SEALED_SECRET_EMBEDQA_PAYLOAD}.fakesignature"
export NGC_API_KEY_SEALED_SECRET_EMBEDQA
NGC_API_KEY_SEALED_SECRET_EMBEDQA_BASE64=$(echo -n "${NGC_API_KEY_SEALED_SECRET_EMBEDQA}" | base64 -w0)
export NGC_API_KEY_SEALED_SECRET_EMBEDQA_BASE64
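The shell pipeline above (`base64 -w0 | tr '+/' '-_' | tr -d '='`) is hand-rolled unpadded base64url. The same sealed-secret assembly can be sketched in Go, where `base64.RawURLEncoding` gives the JWS-style encoding directly (the `sealedSecret` helper is ours, for illustration):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// sealedSecret builds a vault-type sealed secret: header and signature are
// placeholders, and only the payload (pointing at a KBS resource path) is
// meaningful. RawURLEncoding is base64url WITHOUT padding, as JWS requires.
func sealedSecret(kbsResourcePath string) string {
	payload := fmt.Sprintf(
		`{"version":"0.1.0","type":"vault","name":"%s","provider":"kbs","provider_settings":{},"annotations":{}}`,
		kbsResourcePath,
	)
	encoded := base64.RawURLEncoding.EncodeToString([]byte(payload))
	return "sealed.fakejwsheader." + encoded + ".fakesignature"
}

func main() {
	fmt.Println(sealedSecret("kbs:///default/ngc-api-key/instruct"))
}
```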
setup_langchain_flow() {
# shellcheck disable=SC1091 # Sourcing virtual environment activation script
source "${HOME}"/.cicd/venv/bin/activate
@@ -66,18 +113,56 @@ setup_langchain_flow() {
[[ "$(pip show beautifulsoup4 2>/dev/null | awk '/^Version:/{print $2}')" = "4.13.4" ]] || pip install beautifulsoup4==4.13.4
}
setup_kbs_credentials() {
# Get KBS address and export it for pod template substitution
export CC_KBS_ADDR="$(kbs_k8s_svc_http_addr)"
# Create initdata TOML file for genpolicy with CDH configuration.
# This file is used by genpolicy via --initdata-path. Genpolicy will add the
# generated policy.rego to it and set it as the cc_init_data annotation.
# We must overwrite the default empty file AFTER create_tmp_policy_settings_dir()
# copies it to the temp directory.
create_nim_initdata_file() {
local output_file="$1"
local cc_kbs_address
cc_kbs_address=$(kbs_k8s_svc_http_addr)
kbs_set_gpu0_resource_policy
cat > "${output_file}" << EOF
version = "0.1.0"
algorithm = "sha256"
[data]
"aa.toml" = '''
[token_configs]
[token_configs.kbs]
url = "${cc_kbs_address}"
'''
"cdh.toml" = '''
[kbc]
name = "cc_kbc"
url = "${cc_kbs_address}"
[image]
authenticated_registry_credentials_uri = "kbs:///default/credentials/nvcr"
'''
EOF
}
setup_kbs_credentials() {
# Export KBS address for use in pod YAML templates (aa_kbc_params)
CC_KBS_ADDR=$(kbs_k8s_svc_http_addr)
export CC_KBS_ADDR
# Set up Kubernetes secret for the containerd metadata pull
kubectl delete secret ngc-secret-instruct --ignore-not-found
kubectl create secret docker-registry ngc-secret-instruct --docker-server="nvcr.io" --docker-username="\$oauthtoken" --docker-password="${NGC_API_KEY}"
kbs_set_gpu0_resource_policy
# KBS_AUTH_CONFIG_JSON is already base64 encoded
kbs_set_resource_base64 "default" "credentials" "nvcr" "${KBS_AUTH_CONFIG_JSON}"
# Store the actual NGC_API_KEY in KBS for sealed secret unsealing.
# The sealed secrets in the pod YAML point to these KBS resource paths.
kbs_set_resource "default" "ngc-api-key" "instruct" "${NGC_API_KEY}"
kbs_set_resource "default" "ngc-api-key" "embedqa" "${NGC_API_KEY}"
}
create_inference_pod() {
@@ -122,10 +207,6 @@ setup_file() {
export POD_EMBEDQA_YAML_IN="${pod_config_dir}/${POD_NAME_EMBEDQA}.yaml.in"
export POD_EMBEDQA_YAML="${pod_config_dir}/${POD_NAME_EMBEDQA}.yaml"
if [ "${TEE}" = "true" ]; then
setup_kbs_credentials
fi
dpkg -s jq >/dev/null 2>&1 || sudo apt -y install jq
export PYENV_ROOT="${HOME}/.pyenv"
@@ -140,6 +221,14 @@ setup_file() {
policy_settings_dir="$(create_tmp_policy_settings_dir "${pod_config_dir}")"
add_requests_to_policy_settings "${policy_settings_dir}" "ReadStreamRequest"
if [ "${TEE}" = "true" ]; then
setup_kbs_credentials
# Overwrite the empty default-initdata.toml with our CDH configuration.
# This must happen AFTER create_tmp_policy_settings_dir() copies the empty
# file and BEFORE auto_generate_policy() runs.
create_nim_initdata_file "${policy_settings_dir}/default-initdata.toml"
fi
create_inference_pod
if [ "${SKIP_MULTI_GPU_TESTS}" != "true" ]; then

View File

@@ -282,7 +282,7 @@ teardown() {
# Debugging information. Don't print the "Message:" line because it contains a truncated policy log.
kubectl describe pod "${pod_name}" | grep -v "Message:"
teardown_common "${node}" "${node_start_time:-}"
# Clean-up
kubectl delete pod "${pod_name}"
kubectl delete configmap "${configmap_name}"
@@ -291,4 +291,6 @@ teardown() {
rm -f "${incorrect_configmap_yaml}"
rm -f "${testcase_pre_generate_pod_yaml}"
rm -f "${testcase_pre_generate_configmap_yaml}"
teardown_common "${node}" "${node_start_time:-}"
}

View File

@@ -62,9 +62,11 @@ teardown() {
# Debugging information. Don't print the "Message:" line because it contains a truncated policy log.
kubectl describe pod "${pod_name}" | grep -v "Message:"
teardown_common "${node}" "${node_start_time:-}"
# Clean-up
kubectl delete -f "${correct_pod_yaml}"
kubectl delete -f "${pvc_yaml}"
rm -f "${incorrect_pod_yaml}"
teardown_common "${node}" "${node_start_time:-}"
}

View File

@@ -179,7 +179,7 @@ assert_pod_fail() {
local container_config="$1"
local duration="${2:-120}"
echo "In assert_pod_fail: $container_config"
echo "In assert_pod_fail: ${container_config}"
echo "Attempt to create the container but it should fail"
retry_kubectl_apply "${container_config}"
@@ -192,13 +192,13 @@ assert_pod_fail() {
local sleep_time=5
while true; do
echo "Waiting for a container to fail"
sleep ${sleep_time}
sleep "${sleep_time}"
elapsed_time=$((elapsed_time+sleep_time))
if [[ $(kubectl get pod "${pod_name}" \
-o jsonpath='{.status.containerStatuses[0].state.waiting.reason}') = *BackOff* ]]; then
return 0
fi
if [ $elapsed_time -gt $duration ]; then
if [[ "${elapsed_time}" -gt "${duration}" ]]; then
echo "The container does not get into a failing state" >&2
break
fi
@@ -207,6 +207,46 @@ assert_pod_fail() {
}
# Create a pod then assert it remains in ContainerCreating.
#
# Parameters:
# $1 - the pod configuration file.
# $2 - the duration to wait (seconds). Defaults to 60. (optional)
#
assert_pod_container_creating() {
local container_config="$1"
local duration="${2:-60}"
echo "In assert_pod_container_creating: ${container_config}"
echo "Attempt to create the container but it should stay in creating state"
retry_kubectl_apply "${container_config}"
if ! pod_name=$(kubectl get pods -o jsonpath='{.items..metadata.name}'); then
echo "Failed to create the pod"
return 1
fi
local elapsed_time=0
local sleep_time=5
while true; do
sleep "${sleep_time}"
elapsed_time=$((elapsed_time+sleep_time))
reason=$(kubectl get pod "${pod_name}" -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}' 2>/dev/null || true)
phase=$(kubectl get pod "${pod_name}" -o jsonpath='{.status.phase}' 2>/dev/null || true)
if [[ "${phase}" != "Pending" ]]; then
echo "Expected pod to remain Pending, got phase: ${phase}" >&2
return 1
fi
if [[ -n "${reason}" && "${reason}" != "ContainerCreating" ]]; then
echo "Expected ContainerCreating, got: ${reason}" >&2
return 1
fi
if [[ "${elapsed_time}" -ge "${duration}" ]]; then
return 0
fi
done
}
# Check the pulled rootfs on host for given node and sandbox_id
#
# Parameters:
@@ -381,4 +421,3 @@ get_node_kata_sandbox_id() {
done
echo $kata_sandbox_id
}

View File

@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: busybox
 spec:
-  terminationGracePeriodSeconds: 0
   shareProcessNamespace: true
   runtimeClassName: kata
   containers:


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: POD_NAME
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   shareProcessNamespace: true
   containers:


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: initcontainer-shared-volume
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   initContainers:
   - name: first


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: busybox
 spec:
-  terminationGracePeriodSeconds: 0
   shareProcessNamespace: true
   runtimeClassName: kata
   initContainers:


@@ -16,7 +16,6 @@ spec:
       labels:
         jobgroup: jobtest
     spec:
-      terminationGracePeriodSeconds: 0
       runtimeClassName: kata
       containers:
       - name: test


@@ -10,7 +10,6 @@ metadata:
 spec:
   template:
     spec:
-      terminationGracePeriodSeconds: 0
       runtimeClassName: kata
       containers:
       - name: pi


@@ -23,7 +23,6 @@ spec:
         role: master
         tier: backend
     spec:
-      terminationGracePeriodSeconds: 0
       runtimeClassName: kata
       securityContext:
         runAsUser: 2000


@@ -23,7 +23,6 @@ spec:
         role: master
         tier: backend
     spec:
-      terminationGracePeriodSeconds: 0
       runtimeClassName: kata
       securityContext:
         runAsUser: 2000


@@ -23,7 +23,6 @@ spec:
         role: master
         tier: backend
     spec:
-      terminationGracePeriodSeconds: 0
       runtimeClassName: kata
       securityContext:
         runAsUser: 65534


@@ -23,7 +23,6 @@ spec:
         role: master
         tier: backend
     spec:
-      terminationGracePeriodSeconds: 0
       runtimeClassName: kata
       securityContext:
         runAsUser: 2000


@@ -23,7 +23,6 @@ spec:
         role: master
         tier: backend
     spec:
-      terminationGracePeriodSeconds: 0
       runtimeClassName: kata
       securityContext:
         runAsUser: 1000


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: hard-coded-policy-pod
 spec:
-  terminationGracePeriodSeconds: 0
   shareProcessNamespace: true
   runtimeClassName: kata
   containers:


@@ -10,7 +10,6 @@ metadata:
 spec:
   template:
     spec:
-      terminationGracePeriodSeconds: 0
       runtimeClassName: kata
       containers:
       - name: hello


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: policy-pod-pvc
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   containers:
   - name: busybox


@@ -9,7 +9,6 @@ metadata:
   name: policy-pod
   uid: policy-pod-uid
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   containers:
   - name: prometheus


@@ -17,7 +17,6 @@ spec:
     labels:
       app: policy-nginx-rc
   spec:
-    terminationGracePeriodSeconds: 0
     runtimeClassName: kata
     containers:
     - name: nginxtest


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: set-keys-test
 spec:
-  terminationGracePeriodSeconds: 0
   shareProcessNamespace: true
   runtimeClassName: kata
   containers:


@@ -9,7 +9,6 @@ kind: Pod
 metadata:
   name: handlers
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   containers:
   - name: handlers-container


@@ -17,7 +17,6 @@ spec:
       labels:
         app: nginx
     spec:
-      terminationGracePeriodSeconds: 0
       runtimeClassName: kata
       containers:
       - name: nginx


@@ -10,7 +10,11 @@ metadata:
   labels:
     app: ${POD_NAME_INSTRUCT}
   annotations:
-    io.katacontainers.config.hypervisor.kernel_params: "agent.image_registry_auth=kbs:///default/credentials/nvcr agent.aa_kbc_params=cc_kbc::${CC_KBS_ADDR}"
+    # Start CDH process and configure AA for KBS communication
+    # aa_kbc_params tells the Attestation Agent where KBS is located
+    io.katacontainers.config.hypervisor.kernel_params: "agent.guest_components_procs=confidential-data-hub agent.aa_kbc_params=cc_kbc::${CC_KBS_ADDR}"
+    # cc_init_data annotation will be added by genpolicy with CDH configuration
+    # from the custom default-initdata.toml created by create_nim_initdata_file()
 spec:
   restartPolicy: Never
   runtimeClassName: kata
@@ -58,7 +62,7 @@ spec:
         - name: NGC_API_KEY
           valueFrom:
             secretKeyRef:
-              name: ngc-api-key-instruct
+              name: ngc-api-key-sealed-instruct
               key: api-key
         # GPU resource limit (for NVIDIA GPU)
         resources:
@@ -78,7 +82,9 @@ data:
 apiVersion: v1
 kind: Secret
 metadata:
-  name: ngc-api-key-instruct
+  name: ngc-api-key-sealed-instruct
 type: Opaque
 data:
-  api-key: "${NGC_API_KEY_BASE64}"
+  # Sealed secret pointing to kbs:///default/ngc-api-key/instruct
+  # CDH will unseal this by fetching the actual key from KBS
+  api-key: "${NGC_API_KEY_SEALED_SECRET_INSTRUCT_BASE64}"
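One way the `${NGC_API_KEY_SEALED_SECRET_INSTRUCT_BASE64}` value referenced above could be produced is sketched below. The `sealed.<header>.<payload>.<signature>` layout and the JSON fields follow the Confidential Containers sealed-secret convention as I understand it; treat the exact field set, the `fakejwsheader`/`fakesignature` placeholders, and the resource URI as assumptions rather than this repository's actual helper:

```shell
# Hypothetical KBS resource the guest CDH would fetch when unsealing
resource_uri="kbs:///default/ngc-api-key/instruct"

# Sealed-secret payload (assumed CoCo "vault"-type JSON; fields are illustrative)
payload_json=$(printf '{"version":"0.1.0","type":"vault","name":"%s","provider":"kbs","provider_settings":{},"annotations":{}}' "${resource_uri}")

# base64url-encode the payload without padding, JWS-style
payload_b64=$(printf '%s' "${payload_json}" | base64 -w0 | tr '+/' '-_' | tr -d '=')

# Header and signature are unused placeholders for unsigned sealed secrets
sealed_secret="sealed.fakejwsheader.${payload_b64}.fakesignature"

# The Secret manifest stores the sealed string itself base64-encoded
NGC_API_KEY_SEALED_SECRET_INSTRUCT_BASE64=$(printf '%s' "${sealed_secret}" | base64 -w0)
```

Because the real credential never appears in the manifest, only the KBS reference does, the Secret is safe to commit; CDH resolves it inside the guest after attestation.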


@@ -10,7 +10,11 @@ metadata:
   labels:
     app: ${POD_NAME_EMBEDQA}
   annotations:
-    io.katacontainers.config.hypervisor.kernel_params: "agent.image_registry_auth=kbs:///default/credentials/nvcr agent.aa_kbc_params=cc_kbc::${CC_KBS_ADDR}"
+    # Start CDH process and configure AA for KBS communication
+    # aa_kbc_params tells the Attestation Agent where KBS is located
+    io.katacontainers.config.hypervisor.kernel_params: "agent.guest_components_procs=confidential-data-hub agent.aa_kbc_params=cc_kbc::${CC_KBS_ADDR}"
+    # cc_init_data annotation will be added by genpolicy with CDH configuration
+    # from the custom default-initdata.toml created by create_nim_initdata_file()
 spec:
   restartPolicy: Always
   runtimeClassName: kata
@@ -29,7 +33,7 @@ spec:
         - name: NGC_API_KEY
           valueFrom:
             secretKeyRef:
-              name: ngc-api-key-embedqa
+              name: ngc-api-key-sealed-embedqa
               key: api-key
         - name: NIM_HTTP_API_PORT
           value: "8000"
@@ -88,7 +92,9 @@ data:
 apiVersion: v1
 kind: Secret
 metadata:
-  name: ngc-api-key-embedqa
+  name: ngc-api-key-sealed-embedqa
 type: Opaque
 data:
-  api-key: "${NGC_API_KEY_BASE64}"
+  # Sealed secret pointing to kbs:///default/ngc-api-key/embedqa
+  # CDH will unseal this by fetching the actual key from KBS
+  api-key: "${NGC_API_KEY_SEALED_SECRET_EMBEDQA_BASE64}"


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: besteffort-test
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   containers:
   - name: qos-besteffort


@@ -3,7 +3,6 @@ kind: Pod
 metadata:
   name: pod-block-pv
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   containers:
   - name: my-container


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: burstable-test
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   containers:
   - name: qos-burstable


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: pod-caps
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   containers:
   - name: test-container


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: config-env-test-pod
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   containers:
   - name: test-container


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: default-cpu-test
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   containers:
   - name: default-cpu-demo-ctr


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: constraints-cpu-test
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   containers:
   - name: first-cpu-container


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: custom-dns-test
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   containers:
   - name: test


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: sharevol-kata
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   containers:
   - name: test


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: test-env
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   containers:
   - name: test-container


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: test-file-volume
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   restartPolicy: Never
   nodeName: NODE


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: footubuntu
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   volumes:
   - name: runv


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: qos-test
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   containers:
   - name: qos-guaranteed


@@ -9,7 +9,6 @@ kind: Pod
 metadata:
   name: test-pod-hostname
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   restartPolicy: Never
   containers:


@@ -9,7 +9,6 @@ kind: Pod
 metadata:
   name: hostpath-kmsg
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   restartPolicy: Never
   volumes:


@@ -10,7 +10,6 @@ metadata:
     test: liveness-test
   name: liveness-http
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   containers:
   - name: liveness


@@ -10,7 +10,6 @@ metadata:
     test: liveness
   name: liveness-exec
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   containers:
   - name: liveness


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: memory-test
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   containers:
   - name: memory-test-ctr


@@ -23,7 +23,6 @@ kind: Pod
 metadata:
   name: nested-configmap-secret-pod
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   containers:
   - name: test-container


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: cpu-test
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   containers:
   - name: c1


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: optional-empty-config-test-pod
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   containers:
   - name: test-container


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: optional-empty-secret-test-pod
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   containers:
   - name: test-container


@@ -9,7 +9,6 @@ kind: Pod
 metadata:
   name: privileged
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   restartPolicy: Never
   containers:


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: test-projected-volume
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   containers:
   - name: test-projected-volume


@@ -17,7 +17,6 @@ spec:
       labels:
         purpose: quota-demo
     spec:
-      terminationGracePeriodSeconds: 0
      runtimeClassName: kata
      containers:
      - name: pod-quota-demo


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: test-readonly-volume
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   restartPolicy: Never
   volumes:


@@ -11,7 +11,6 @@ metadata:
     io.katacontainers.config.runtime.disable_guest_seccomp: "false"
 spec:
   runtimeClassName: kata
-  terminationGracePeriodSeconds: 0
   restartPolicy: Never
   containers:
   - name: busybox


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: secret-envars-test-pod
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   containers:
   - name: envars-test-container


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: secret-test-pod
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   containers:
   - name: test-container


@@ -8,7 +8,6 @@ kind: Pod
 metadata:
   name: security-context-test
 spec:
-  terminationGracePeriodSeconds: 0
   runtimeClassName: kata
   securityContext:
     runAsUser: 1000
