Compare commits

...

136 Commits

Author SHA1 Message Date
Alex Lyn
8c2e32f075 kata-types: Support create_container_timeout set within configuration
To align with the create_container_timeout definition in
runtime-go, we need to set the value in configuration.toml in seconds,
not milliseconds, and convert it to milliseconds for request_timeout_ms
when the configuration is loaded.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-08-29 17:17:23 +08:00
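A rough illustration of the conversion above as a minimal Rust sketch; the field and method names are assumptions, not the actual kata-types API:

```rust
// Minimal sketch, assuming a config field that holds seconds as described above.
#[derive(Default)]
struct RuntimeConfig {
    /// Set in configuration.toml in seconds, matching runtime-go.
    create_container_timeout: u64,
}

impl RuntimeConfig {
    /// Convert the configured value to the milliseconds used internally.
    fn request_timeout_ms(&self) -> u64 {
        self.create_container_timeout * 1000
    }
}

fn main() {
    let cfg = RuntimeConfig { create_container_timeout: 60 };
    assert_eq!(cfg.request_timeout_ms(), 60_000);
}
```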
Xuewei Niu
f6ff9cf717 Merge pull request #11689 from Caspian443/fix-devmapper-selinux-mount-issue
runtime-rs: Empty block-rootfs Storage.options and align with Go runtime
2025-08-29 15:29:46 +08:00
Aurélien Bombo
754f07cff2 Merge pull request #11614 from kata-containers/workflow-permissions-tightening
Workflow permissions tightening
2025-08-28 10:56:03 -05:00
Fabiano Fidêncio
08d2ba1969 cgroups: Fix "." parent cgroup special case
ef642fe890 added a special case to avoid
moving cgroups that are on the "default" slice in case of deletion.

However, this special check should be done in the Parent() method
instead, which ensures that the default resource controller ID is
returned, instead of ".".

Fixes: #11599

Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
2025-08-27 08:15:15 +02:00
Caspian443
617af4cb3b runtime-rs: Empty block-rootfs Storage.options and align with Go runtime
- Set guest Storage.options for block rootfs to empty (do not propagate host mount options).
- Align behavior with Go runtime: only add xfs nouuid when needed.

Signed-off-by: Caspian443 <scrisis843@gmail.com>
2025-08-26 01:27:21 +00:00
Caspian443
9a7aadaaca libs: Introduce rootfs fs types
- Add new kata-types::fs module with:
  - VM_ROOTFS_FILESYSTEM_EXT4
  - VM_ROOTFS_FILESYSTEM_XFS
  - VM_ROOTFS_FILESYSTEM_EROFS
- Export fs module in src/libs/kata-types/src/lib.rs
- Remove duplicated filesystem constants from src/runtime-rs/crates/hypervisor/src/lib.rs
- Update src/runtime-rs/crates/hypervisor/src/kernel_param.rs (and tests) to import from kata_types::fs

Signed-off-by: Caspian443 <scrisis843@gmail.com>
2025-08-26 01:26:53 +00:00
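A minimal sketch of what the new module might look like; the constant values are assumed from the filesystem names, not copied from the source:

```rust
// src/libs/kata-types/src/fs.rs (illustrative sketch)
pub const VM_ROOTFS_FILESYSTEM_EXT4: &str = "ext4";
pub const VM_ROOTFS_FILESYSTEM_XFS: &str = "xfs";
pub const VM_ROOTFS_FILESYSTEM_EROFS: &str = "erofs";
```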
Fabiano Fidêncio
63f6dcdeb9 kata-manager: Support xz and zst suffixes for the kata tarball
We moved to `.zst`, but users still use the upstream kata-manager to
download older versions of the project, thus we need to support both
suffixes.

Fixes: #11714

Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
2025-08-25 21:15:06 +02:00
Fupan Li
687d0bf94a Merge pull request #11715 from fidencio/topic/backport-qemu-reclaim-guest-freed-memory
runtime: qemu: Add reclaim_guest_freed_memory [BACKPORT]
2025-08-25 16:59:29 +08:00
Fabiano Fidêncio
fd1b8ceed1 runtime: qemu: Add reclaim_guest_freed_memory [BACKPORT]
Similar to what we've done for Cloud Hypervisor in the commit
9f76467cb7, we're backporting a runtime-rs
feature that would be beneficial to have as part of the go runtime.

This allows users to use virtio-balloon for the hypervisor to reclaim
memory freed by the guest.

Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
2025-08-22 23:56:47 +02:00
stevenhorsman
b4545da15d workflows: Set top-level permissions to empty
The default suggestion for top-level permissions was
`contents: read`, but scorecard flags anything other than empty,
so try updating it and see if there are any issues. I think it's
only needed if we run workflows from other repos.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-08-22 14:13:21 +01:00
stevenhorsman
f79e453313 workflows: Tighten up workflow permissions
Since the previous tightening, a few workflow updates have
gone in that the zizmor job isn't flagging as issues,
so address them to remove potential attack vectors.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-08-22 14:13:21 +01:00
Fabiano Fidêncio
e396a460bc Revert "local-build: Enforce USE_CACHE=no"
This reverts commit cb5f143b1b, as the
cached packages have been regenerated after the switch to using zstd.

Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
2025-08-22 14:03:36 +02:00
Steve Horsman
23d2dfaedc Merge pull request #11707 from fidencio/topic/switch-to-use-zstd-when-possible
kata-deploy: local-build: Use zstd instead of xz
2025-08-22 10:06:00 +01:00
stevenhorsman
8cbb1a4357 runtime: Fix non constant Errorf formatting
As part of the go 1.24.6 bump there are errors about the incorrect
use of an Errorf, so switch to the non-formatting version, or add
the format string as appropriate.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-08-22 10:44:15 +02:00
stevenhorsman
381da9e603 versions: Bump golang to 1.24.6
golang 1.25 has been released, so 1.23 is EoL;
we should update to ensure we don't end up with security issues.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-08-22 10:44:15 +02:00
stevenhorsman
0ccf429a3d workflows: Switch workflows to use install_go.sh
Update the two workflows that used setup-go to
instead call the `install_go.sh` script, which handles
installing the correct version of golang.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-08-22 10:44:15 +02:00
stevenhorsman
5f7525f099 build: Add darwin support to arch_to_golang
Avoid the error `ERROR: unsupported architecture: arm64`
in install_go.sh on darwin

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-08-22 10:44:15 +02:00
stevenhorsman
3391c6f1c5 ci: Make install_go.sh more portable
`${kernel_name,,}` is a bash 4.0 feature and not POSIX compliant, so it
doesn't work on macOS; switch to `tr`, which is more widely
supported.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-08-22 10:44:15 +02:00
Alex Lyn
91913f9e82 Merge pull request #11711 from stevenhorsman/remote-allow-cc_init_data-annotation
runtime: Enable init_data annotation
2025-08-22 14:41:53 +08:00
Fupan Li
1a0fbbfa32 Merge pull request #11699 from Apokleos/support-nonprotection
runtime-rs: Support initdata within NonProtection scenarios
2025-08-22 10:24:47 +08:00
Hyounggyu Choi
41dcfb4a9f Merge pull request #11321 from BbolroC/reconnect-timeout-qemu-se
runtime-rs: Adjust VSOCK timeouts for IBM SEL
2025-08-22 00:34:05 +02:00
Fabiano Fidêncio
cb5f143b1b local-build: Enforce USE_CACHE=no
We need that to regenerate the tarballs that are already cached in the
zstd format.

Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
2025-08-21 21:00:20 +02:00
stevenhorsman
081823b388 runtime: Enable init_data annotation
In #11693 the cc_init_data annotation was changed to be hypervisor
scoped, so each hypervisor now needs to explicitly allow it in order
to use it; add this to both the go and rust runtimes' remote
configurations.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-08-21 19:26:10 +01:00
Fabiano Fidêncio
f8d7ff40b4 local-build: Fix shim-v2 no cache build with measured rootfs
We need to get the root_hash.txt file from the image build, otherwise
there's no way to build the shim using those values for the
configuration files.

Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
2025-08-21 19:56:01 +02:00
Fabiano Fidêncio
ad240a39e6 kata-deploy: tools: tests: Use zstd instead of xz
Although the compression ratio is not as good as with xz, it's way
faster to compress / uncompress, and it's "good enough".

This change is not small, but it's still self-contained, and has to get
in at once, in order to help bisects in the future.

Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
2025-08-21 19:53:55 +02:00
Fabiano Fidêncio
9cc97ad35c kata-deploy: Bump image to use alpine 3.22
As 3.18 is already EOL.

We need to add `--break-system-packages` to force the installation of
the yq version that we rely on.  The tests have shown
that no breakage actually happens, fortunately.

Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
2025-08-21 19:53:55 +02:00
Fabiano Fidêncio
1329ce355e versions: image / initrd: Bump to alpine 3.22
As 3.18 is EOL.

Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
2025-08-21 19:53:55 +02:00
Fabiano Fidêncio
c32fc409ec rootfs-builder: Bump alpine to 3.22
As we were using a very old, unsupported version.

Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
2025-08-21 19:53:55 +02:00
Zvonko Kaiser
60d87b7785 gpu: Add more debugging to CI/CD
Capture NVRC logs via journalctl

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-08-21 18:09:20 +02:00
Alex Lyn
e430727cb6 runtime-rs: Change the initdata device driver with block_device_driver
Currently, vm_rootfs_driver is used as the initdata device driver;
replace it with block_device_driver.

Fixes #11697

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-08-21 18:56:26 +08:00
Alex Lyn
5cc028a8b1 runtime-rs: Support initdata within NonProtection scenarios
We also need to support initdata even when the platform is detected as
NonProtection, usually called a non-TEE host. In these cases, there's
no need to validate `confidential_guest=true`; we trust the result of
the method `available_guest_protection()?`.

Fixes #11697

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-08-21 18:56:23 +08:00
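A hedged sketch of the relaxed check described above; the type and function names are illustrative, not the real runtime-rs API:

```rust
// Illustrative only: skip the confidential_guest check on non-TEE hosts.
enum GuestProtection {
    NoProtection,
    Tdx,
    Snp,
    Se,
}

fn initdata_allowed(protection: &GuestProtection, confidential_guest: bool) -> bool {
    match protection {
        // On a NonProtection (non-TEE) host we trust the detection result and
        // don't require confidential_guest=true in the configuration.
        GuestProtection::NoProtection => true,
        _ => confidential_guest,
    }
}

fn main() {
    assert!(initdata_allowed(&GuestProtection::NoProtection, false));
    assert!(!initdata_allowed(&GuestProtection::Tdx, false));
}
```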
Hyounggyu Choi
faf5aed965 runtime-rs: Adjust VSOCK timeouts for IBM SEL
The default `reconnect_timeout` (3 seconds) was found to be insufficient for
IBM SEL when using VSOCK. This commit updates the timeouts as follows:

- `dial_timeout_ms`: Set to 90ms to match the value used in go-runtime for IBM SEL
- `reconnect_timeout_ms`: Increased to 5000ms based on empirical testing

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2025-08-21 12:35:44 +02:00
Hyounggyu Choi
b7d2973ce5 Merge pull request #11696 from BbolroC/enable-initdata-ibm-sel-runtime-rs
runtime-rs Enable initdata IBM SEL
2025-08-21 09:23:46 +02:00
Hyounggyu Choi
c4b4a3d8bb tests: Add hypervisor qemu-se-runtime-rs for initdata
This commit adds a new hypervisor `qemu-se-runtime-rs`
to test initdata for IBM SEL (s390x).

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2025-08-20 18:57:50 +02:00
Hyounggyu Choi
2ec70bc8e2 runtime-rs: Enable initdata spec for IBM SEL
Add support for the `InitData` resource config on IBM SEL,
so that a corresponding block device is created and the
initdata is passed to the guest through this device.

Note that we skip passing the initdata hash via QEMU’s
object, since the hypervisor does not yet support this
mechanism for IBM SEL. It will be introduced separately
once QEMU adds the feature.

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2025-08-20 18:57:50 +02:00
Zvonko Kaiser
c980b6e191 release: Bump version to 3.20.0
Bump VERSION and helm-chart versions

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-08-20 18:18:05 +02:00
Markus Rudy
30aff429df Merge pull request #11647 from Park-Jiyeonn/opt/sealed-secret-prefix-check
Optimize sealed secret scanning to avoid full file reads
2025-08-20 17:18:20 +02:00
Alex Lyn
014ab2fce6 Merge pull request #11693 from BbolroC/revert-initdata-annotation
runtime-rs: Fix issues for initdata
2025-08-20 21:17:52 +08:00
Fabiano Fidêncio
dd1752ac1c Merge pull request #11634 from mythi/coco-kernel-v6.16
versions: update kernel-confidential to Linux v6.16.1
2025-08-20 13:01:05 +02:00
Fupan Li
29ab8df881 Merge pull request #11514 from Apokleos/ci-for-libs
CI: Introduce CI for libs to Improve code quality and reduce noises
2025-08-20 18:59:27 +08:00
Hyounggyu Choi
0ac8f1f70e Merge pull request #11705 from Apokleos/remove-default-guesthookpath
kata-types: remove default setting of guest_hook_path
2025-08-20 11:15:25 +02:00
Mikko Ylinen
a0ae1b6608 packaging: kernel: libdw-dev and python3 to builder image
These new dependencies are needed by Linux 6.16+.

Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
2025-08-20 11:34:09 +03:00
Mikko Ylinen
412a384aad versions: update kernel-confidential to Linux v6.16.1
Linux v6.16 brings some useful features for the confidential guests.
Most importantly, it adds an ABI to extend runtime measurement registers
(RTMR) for the TEE platforms supporting it. This is currently enabled
on Intel TDX only.

The kernel version bump from v6.12.x to v6.16 forces some CONFIG_*
changes too:

MEMORY_HOTPLUG_DEFAULT_ONLINE was dropped in favor of more config
choices. The equivalent option is MHP_DEFAULT_ONLINE_TYPE_ONLINE_AUTO.

X86_5LEVEL was made unconditional. Since this was only a TDX
configuration, dropping it completely as part of v6.16 is fine.

CRYPTO_NULL2 was merged with CRYPTO_NULL. This was only added in
confidential guest fragments (cryptsetup) so we can drop it in this update.

CRYPTO_FIPS now depends on CRYPTO_SELFTESTS which further depends on
EXPERT which we don't have. Enable both in a separate config fragment
for confidential guests. This can be moved to a common setting once
other targets bump to post v6.16.

CRYPTO_SHA256_SSE3 arch optimizations were reworked and are now enabled
by default. Instead of adding it to whitelist.conf, just drop it completely
since it was only enabled as part of "measured boot" feature for
confidential guests. CONFIG_CRYPTO_CRC32_S390 was reworked the same way.
In this case, whitelist.conf is needed.

Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
2025-08-20 11:32:48 +03:00
Hyounggyu Choi
0daafecef2 Revert "runtime-rs: Correct the corresponding initdata annotation const"
This reverts commit 37685c41c7.

This renames the relevant constant for initdata.

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2025-08-20 10:15:23 +02:00
Hyounggyu Choi
f0db4032f2 Revert "kata-types: Align the initdata annotation with kata-runtime's definition"
This reverts commit ede773db17.

`cc_init_data` should be under a hypervisor category because
it is a hypervisor-specific feature. The annotation including
`runtime` also breaks the logic of `is_annotation_enabled()`.

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2025-08-20 10:15:23 +02:00
Hyounggyu Choi
208cec429a runtime-rs: Introduce CoCo-specific enable_annotations
We need to include `cc_init_data` in the enable_annotations
array to pass the data. Since initdata is a CoCo-specific
feature, this commit introduces a new array,
`DEFENABLEANNOTATIONS_COCO`, which contains the required
string and applies it to the relevant CoCo configuration.

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2025-08-20 10:15:23 +02:00
Hyounggyu Choi
1f978ecc31 runtime-rs: Fix issues for empty initdata annotation test
Currently, there are 2 issues for the empty initdata annotation
test:

- Empty string handling
- "\[CDH\] \[ERROR\]: Get Resource failed" not appearing

`add_hypervisor_initdata_overrides()` does not handle
an empty string, which can lead to a panic like:

```
called `Result::unwrap()` on an `Err` value: gz decoder failed
Caused by:
    failed to fill whole buffer
```

This commit makes the function return an empty string
for a given empty input and updates the assertion string
to one that appears in both go-runtime and runtime-rs.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2025-08-20 10:15:23 +02:00
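A minimal sketch of the empty-input guard, assuming a gzip-compressed payload and the flate2 crate; the real helper in runtime-rs may differ:

```rust
use std::io::Read;

use flate2::read::GzDecoder;

/// Return an empty string for empty initdata instead of handing it to the gzip
/// decoder, which would otherwise fail with "failed to fill whole buffer".
fn decode_initdata(raw: &[u8]) -> std::io::Result<String> {
    if raw.is_empty() {
        return Ok(String::new());
    }
    let mut out = String::new();
    GzDecoder::new(raw).read_to_string(&mut out)?;
    Ok(out)
}
```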
alex.lyn
b23d094928 CI: Introduce CI for libs to Improve code quality and reduce noises
Currently, runtime-rs related code within the libs directory lacks
sufficient CI protection. We frequently observe the following issues:
- Inconsistent Code Formatting: code that has not been properly
  formatted is merged.
- Failing Tests: code with failing unit or integration tests is merged.

To address these issues, we need to introduce stricter CI checks for the
libs directory. This may specifically include:
- Code Formatting Checks
- Mandatory Test Runs

Fixes #11512

Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
2025-08-20 15:36:09 +08:00
alex.lyn
0f19465b3a shim-interface: Do cargo check and reduce warnings
Reduce shim-interface's warnings caused by unformatted or unchecked operations.

Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
2025-08-20 15:36:09 +08:00
alex.lyn
e05197e81c safe-path: Do cargo check and reduce warnings
Reduce warnings caused by unformatted or unchecked operations.

Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
2025-08-20 15:36:09 +08:00
alex.lyn
683d673f4f protocols: Do cargo format to make codes clean
Fix protocols' warnings by correctly running cargo check/format.

Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
2025-08-20 15:36:09 +08:00
alex.lyn
38242d3a61 kata-types: Do cargo check and reduce warnings
Reduce noise caused by unformatted code.

Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
2025-08-20 15:36:09 +08:00
alex.lyn
283fd45045 kata-sys-utils: fix warnings for s390x
The warning reports as below:
```
   --> kata-sys-util/src/protection.rs:145:9
    |
145 |         return Err(ProtectionError::NoPerms)?;
    |         ^^^^^^^ help: remove it
    |
...
error: `to_string` applied to a type that implements `Display` in
`format!` args
   --> kata-sys-util/src/protection.rs:151:16
    |
151 |             err.to_string()
    |                ^^^^^^^^^^^^ help: remove this
```

Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
2025-08-20 15:36:09 +08:00
alex.lyn
730b0f1769 kata-sys-utils: Do cargo check codes and reduce warnings
Fix kata-sys-utils warnings by correctly running cargo check and testing it well.

Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
2025-08-20 15:35:42 +08:00
Fabiano Fidêncio
585d0be342 Merge pull request #11691 from alextibbles/update-lts-kernel
versions: update to latest LTS kernel 6.12.42
2025-08-20 08:55:06 +02:00
Fupan Li
b748688e69 Merge pull request #11698 from Apokleos/filter-arpneibhors
runtime-rs: Add only static ARP entries with handle_neighours
2025-08-20 14:05:20 +08:00
Alex Lyn
c4af9be411 kata-types: remove default setting of guest_hook_path
To align with the runtime-go setting, we should keep
it empty when users don't enable it and set a specific path.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-08-20 13:56:42 +08:00
Zvonko Kaiser
bce8efca67 gpu: Rebuild initrd and image for kernel bump
We need to make sure that we use the latest kernel
and rebuild the initrd and image for the nvidia-gpu
use-cases; otherwise the tests will fail, since
the modules are not built against the new kernel and
simply fail to load.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-08-19 17:32:42 -04:00
Alex Tibbles
e20f6b2f9d versions: update to latest LTS kernel 6.12.42
Fixes #11690

Signed-off-by: Alex Tibbles <alex@bleg.org>
2025-08-19 17:32:42 -04:00
Fabiano Fidêncio
3503bcdb50 Merge pull request #11701 from alextibbles/go-stdlib-#11700
versions: sync go.mod with versions.yaml for go 1.23.12
2025-08-19 22:14:57 +02:00
Alex Tibbles
a03dc3129d versions: sync go.mod with versions.yaml for go 1.23.12
OSV-Scanner highlights go.mod references to go stdlib 1.23.0, contrary to the intention in versions.yaml, so synchronize them.
Add a corresponding comment to versions.yaml.

Fixes: #11700

Signed-off-by: Alex Tibbles <alex@bleg.org>
2025-08-19 11:30:19 -04:00
Hyounggyu Choi
93ec470928 runtime/tests: Update annotation for initdata
Let's rename the runtime-rs initdata annotation from
`io.katacontainers.config.runtime.cc_init_data` to
`io.katacontainers.config.hypervisor.cc_init_data`.

Rationale:
- initdata itself is a hypervisor-specific feature
- the new name aligns with the annotation handling logic:
c92bb1aa88/src/libs/kata-types/src/annotations/mod.rs (L514-L968)

This commit updates the annotation for go-runtime and tests accordingly.

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2025-08-19 15:17:01 +02:00
Alex Lyn
903e608c23 runtime-rs: Add only static ARP entries with handle_neighours
To align with runtime-go, we need to add only static ARP
entries to the targets.

Fixes #11697

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-08-19 20:09:20 +08:00
Steve Horsman
c92bb1aa88 Merge pull request #11684 from zvonkok/gpu-required
gatekeeper: GPU test required
2025-08-15 10:30:19 +01:00
Hyounggyu Choi
28bd0cf405 Merge pull request #11640 from rafsal-rahim/bm-initdata-s390x
Feat | Implement initdata for bare-metal/qemu for s390x
2025-08-15 10:42:32 +02:00
Zvonko Kaiser
3a4e1917d2 gatekeeper: Make GPU test required
We now run a simple RAG pipeline with each PR to make
sure we do not break GPU support.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-08-14 18:35:39 -04:00
Aurélien Bombo
3a5e2060aa Merge pull request #11683 from kata-containers/sprt/static-checks-default-branch
ci: static-checks: Don't hardcode default repo branch
2025-08-14 17:01:18 -05:00
Zvonko Kaiser
55ee8abf0b Merge pull request #11658 from kata-containers/amd64-nvidia-gpu-cicd-step2
gpu: AMD64 NVIDIA GPU CI/CD Part 2
2025-08-14 17:51:26 -04:00
Aurélien Bombo
0fa7d5b293 ci: static-checks: Don't hardcode default repo branch
This would cause weird issues for downstreams whose default branch is not
"main".

Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
2025-08-14 13:22:20 -05:00
Zvonko Kaiser
dcb62a7f91 Merge pull request #11525 from was-saw/qemu-seccomp
runtime-rs: add seccomp support for qemu
2025-08-14 12:35:32 -04:00
Zvonko Kaiser
8be41a4e80 gpu: Add embedding service
For a simple RAG pipeline, add an embedding service.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-08-14 16:34:21 +00:00
RuoqingHe
65a9fe0063 Merge pull request #11670 from kevinzs2048/add-aavmf
CI: change the directory for Arm64 firmware
2025-08-14 21:30:21 +08:00
stevenhorsman
43cdde4c5d test/k8s: Extend initdata tests to run on s390x
Enable testing of initdata on the qemu-coco-dev and qemu-se
runtime classes, so we can validate the function on s390x

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-08-14 17:10:58 +05:30
rafsalrahim
9891b111d1 runtime: Add initdata support to s390x
- Added support for initdata device on s390x.
- Generalized devno generation for QEMU CCW devices.

Signed-off-by: rafsalrahim <rafsal.rahim@ibm.com>
2025-08-14 17:10:58 +05:30
wangxinge
d147e2491b runtime-rs: add seccomp support for qemu
This commit supports the seccomp_sandbox option from the configuration.toml file
and adds the logic for appending command-line arguments based on this new configuration parameter.

Fixes: #11524

Signed-off-by: wangxinge <wangxinge@bupt.edu.cn>
2025-08-14 18:45:03 +08:00
Xuewei Niu
479cce8406 Merge pull request #11536 from was-saw/clh/fc-seccomp
runtime-rs: add seccomp support for cloud hypervisor and firecracker
2025-08-14 18:23:14 +08:00
Dan Mihai
ea74024b93 Merge pull request #11663 from burgerdev/arp
genpolicy: support AddARPNeighbors
2025-08-13 14:54:36 -07:00
Kevin Zhao
aadad0c9b6 CI: change the directory for Arm64 firmware
Previously it was reusing the ovmf directory, which ran into
issues with path checking, so move to aavmf as it should
be.

Signed-off-by: Kevin Zhao <kevin.zhao@linaro.org>
2025-08-13 23:39:44 +02:00
Fabiano Fidêncio
cfd0ebe85f Merge pull request #11675 from katexochen/snp-guest-policy
runtime: make SNP guest policy configurable
2025-08-13 22:20:51 +02:00
Steve Horsman
c7f4c9a3bb Merge pull request #11676 from stevenhorsman/golang-1.23.12-bump
versions: Bump golang to 1.23.12
2025-08-13 15:24:17 +01:00
Park.Jiyeon
2f50c85b12 agent: avoid full file reads when scanning sealed secrets.
Read only the sealed secret prefix instead of the whole file.
Improves performance and reduces memory usage in I/O-heavy environments.

Fixes: #11643

Signed-off-by: Park.Jiyeon <jiyeonnn2@icloud.com>
2025-08-13 20:32:03 +08:00
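A hedged sketch of the prefix check: read only as many bytes as the prefix needs, instead of the whole file. The prefix constant and error handling here are assumptions, not the agent's actual code:

```rust
use std::fs::File;
use std::io::{ErrorKind, Read};
use std::path::Path;

const SEALED_SECRET_PREFIX: &[u8] = b"sealed."; // assumed prefix, for illustration

fn is_sealed_secret(path: &Path) -> std::io::Result<bool> {
    let mut buf = vec![0u8; SEALED_SECRET_PREFIX.len()];
    match File::open(path)?.read_exact(&mut buf) {
        Ok(()) => Ok(buf == SEALED_SECRET_PREFIX),
        // A file shorter than the prefix cannot be a sealed secret.
        Err(e) if e.kind() == ErrorKind::UnexpectedEof => Ok(false),
        Err(e) => Err(e),
    }
}
```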
Paul Meyer
5635410dd3 runtime: make SNP guest policy configurable
Depending on the platform configuration, users might want to
set a more secure policy than the QEMU default.

Signed-off-by: Paul Meyer <katexochen0@gmail.com>
2025-08-13 09:06:36 +02:00
stevenhorsman
1a6f1fc3ac versions: Bump golang to 1.23.12
Bump go version to remediate vuln GO-2025-3849

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-08-12 14:46:29 +01:00
Dan Mihai
9379a18c8a Merge pull request #11565 from Sumynwa/sumsharma/agent_ctl_vm_boot_support
agent-ctl: Add option "--vm" to boot pod VM for testing.
2025-08-11 09:36:23 -07:00
Sumedh Alok Sharma
c7c811071a agent-ctl: Add option --vm to boot pod VM for testing.
This change introduces a new command line option `--vm`
to boot up a pod VM for testing. The tool connects to the
kata agent running inside the VM to send the test commands.
The tool uses `hypervisor` crates from runtime-rs for VM
lifecycle management. The current implementation supports
Qemu & Cloud Hypervisor as VMMs.

In summary, the tool:
- parses the VMM-specific runtime-rs kata config file in
/opt/kata/share/defaults/kata-containers/runtime-rs/*
- prepares and starts a VM using runtime-rs::hypervisor vm APIs
- retrieves the agent's server address to set up the connection
- tests the requested commands & shuts down the VM

Fixes #11566

Signed-off-by: Sumedh Alok Sharma <sumsharma@microsoft.com>
2025-08-11 11:03:18 +00:00
wangxinge
f3a669ee2d runtime-rs: add seccomp support for cloud hypervisor and firecracker
The seccomp feature for Cloud Hypervisor and Firecracker is enabled by default.
This commit introduces an option to disable seccomp for both and updates the built-in configuration.toml file accordingly.

Fixes: #11535

Signed-off-by: wangxinge <wangxinge@bupt.edu.cn>
2025-08-11 17:59:30 +08:00
Hyounggyu Choi
407252a863 Merge pull request #11641 from Apokleos/kata-log
runtime-rs: Label system journal log with kata
2025-08-11 08:44:31 +02:00
Alex Lyn
196d7d674d runtime-rs: Label system journal log with kata
Route kata-shim logs directly to systemd-journald under 'kata' identifier.

This refactoring enables `kata-shim` logs to be properly attributed to
'kata' in systemd-journald, instead of inheriting the 'containerd'
identifier.

Previously, `kata-shim` logs were challenging to filter and debug as they
appeared under the `containerd.service` unit.

This commit resolves this by:
1.  Introducing a `LogDestination` enum to explicitly define logging
    targets (File or Journal).
2.  Modifying logger creation to set `SYSLOG_IDENTIFIER=kata` when
    logging to Journald.
3.  Ensuring type safety and correct ownership handling for different
    logging backends.

This significantly enhances the observability and debuggability of Kata
Containers, making it easier to monitor and troubleshoot Kata-specific
events.

Fixes: #11590

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-08-10 16:00:36 +08:00
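A minimal sketch of the LogDestination idea; the real enum and logger wiring in runtime-rs are more involved:

```rust
use std::path::PathBuf;

/// Illustrative only: where shim logs should go.
enum LogDestination {
    File(PathBuf),
    /// Journald, logged with SYSLOG_IDENTIFIER=kata instead of inheriting
    /// the "containerd" identifier.
    Journal,
}

fn syslog_identifier(dest: &LogDestination) -> Option<&'static str> {
    match dest {
        LogDestination::Journal => Some("kata"),
        LogDestination::File(_) => None,
    }
}
```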
Aurélien Bombo
be148c7f72 Merge pull request #11666 from kata-containers/sprt/static-check-exclude-security-md
ci: static-checks: add SECURITY.md to exclude list
2025-08-08 12:50:29 -05:00
Fabiano Fidêncio
dcbdf56281 Merge pull request #11660 from zvonkok/remove-stable
ci: Remove stable
2025-08-08 14:18:25 +02:00
Xuewei Niu
1d2f2d6350 Merge pull request #11219 from fidencio/topic/version-qemu-bump-to-10.0.0
version: Bump QEMU to v10.0.0
2025-08-08 19:04:45 +08:00
RuoqingHe
aaf8de3dbf Merge pull request #11669 from kevinzs2048/add-timeout
ci: cri-containerd: add 5s timeout for creating sandbox with crictl
2025-08-08 18:25:58 +08:00
Alex Lyn
9816ffdac7 Merge pull request #11653 from Apokleos/align-initdata-annoation
Align initdata annotation with kata-runtime
2025-08-08 16:24:09 +08:00
Kevin Zhao
1aa65167d7 CI: cri-containerd: add 5s timeout for creating sandbox with crictl
After moving the Arm64 CI nodes to new ones, we faced an interesting
timeout issue when executing the crictl runp command;
the error is usually: code = DeadlineExceeded

Fixes: #11662

Signed-off-by: Kevin Zhao <kevin.zhao@linaro.org>
2025-08-08 15:41:39 +08:00
Fupan Li
b50777a174 Merge pull request #10580 from pmores/make-vcpu-allocation-more-accurate
runtime-rs: make vcpu allocation more accurate
2025-08-08 14:14:40 +08:00
Xuewei Niu
beea0c34c5 Merge pull request #11060 from kata-containers/sprt/vfsd-metadata
runtime: virtio-fs: Support "metadata" cache mode
2025-08-08 11:13:57 +08:00
Fabiano Fidêncio
f9e16431c1 version: Bump QEMU to v10.0.3
As the new release of QEMU is out, let's switch to it and take advantage
of bug fixes and improvements.

QEMU changelog: https://wiki.qemu.org/ChangeLog/10.0

Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
2025-08-07 22:31:30 +02:00
Greg Kurz
f9a6359674 Merge pull request #11667 from c3d/bug/11633-qmp
qemu: Respect the JSON schema for hot plug
2025-08-07 16:04:12 +02:00
Aurélien Bombo
6d96875d04 runtime: virtio-fs: Support "metadata" cache mode
The Rust virtiofsd supports a "metadata" cache mode [1] that wasn't
present in the C version [2], so this PR adds support for that.

 [1] https://gitlab.com/virtio-fs/virtiofsd
 [2] https://qemu.weilnetz.de/doc/5.1/tools/virtiofsd.html#cmdoption-virtiofsd-cache

Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
2025-08-07 21:24:40 +08:00
Pavel Mores
69f21692ed runtime-rs: enable vcpu allocation tests in CI
This series should make runtime-rs's vcpu allocation behaviour match the
behaviour of runtime-go, so we can now enable pertinent tests which were
skipped so far due to the difference between both shims.

Signed-off-by: Pavel Mores <pmores@redhat.com>
2025-08-07 10:32:44 +02:00
Pavel Mores
00bfa3fa02 runtime-rs: re-adjust config after modifying it with annotations
Configuration information is adjusted after loading from file but so
far, there has been no similar check for configuration coming from
annotations.  This commit introduces re-adjusting config after
annotations have been processed.

A small refactor was necessary as a prerequisite which introduces
function TomlConfig::adjust_config() to make it easier to invoke
the adjustment for a whole TomlConfig instance.  This function is
analogous to the existing validate() function.

The immediate motivation for this change is to make sure that 0
in "default_vcpus" annotation will be properly adjusted to 1 as
is the case if 0 is loaded from a config file.  This is required
to match the golang runtime behaviour.

Signed-off-by: Pavel Mores <pmores@redhat.com>
2025-08-07 10:32:44 +02:00
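A hedged sketch of the re-adjustment step, with illustrative names rather than the exact kata-types API:

```rust
struct TomlConfig {
    default_vcpus: f32,
}

impl TomlConfig {
    /// Same adjustment that runs after loading from file: a non-positive
    /// default_vcpus becomes 1, matching the golang runtime behaviour.
    fn adjust_config(&mut self) {
        if self.default_vcpus <= 0.0 {
            self.default_vcpus = 1.0;
        }
    }
}

fn main() {
    let mut cfg = TomlConfig { default_vcpus: 0.0 };
    // Annotations were applied above (e.g. "default_vcpus" = 0); re-adjust.
    cfg.adjust_config();
    assert_eq!(cfg.default_vcpus, 1.0);
}
```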
Pavel Mores
e2156721fd runtime-rs: add tests to exercise floating-point 'default_vcpus'
Also included (as commented out) is a test that does not pass although
it should.  See source code comment for explanation why fixing this seems
beyond the scope of this PR.

Signed-off-by: Pavel Mores <pmores@redhat.com>
2025-08-07 10:32:44 +02:00
Pavel Mores
1f95d9401b runtime-rs: change representation of default_vcpus from i32 to f32
This commit focuses purely on the formal change of type.  If any subsequent
changes in semantics are needed they are purposely avoided here so that the
commit can be reviewed as a 100% formal and 0% semantic change.

Signed-off-by: Pavel Mores <pmores@redhat.com>
2025-08-07 10:32:44 +02:00
Pavel Mores
cdc0eab8e4 runtime-rs: make sandbox vcpu allocation more accurate
This commit addresses a part of the same problem as PR #7623 did for the
golang runtime.  So far we've been rounding up individual containers'
vCPU requests and then summing them up which can lead to allocation of
excess vCPUs as described in the mentioned PR's cover letter.  We address
this by reversing the order of operations, we sum the (possibly fractional)
container requests and only then round up the total.

We also align runtime-rs's behaviour with runtime-go in that we now
include the default vcpu request from the config file ('default_vcpu')
in the total.

We diverge from PR #7623 in that `default_vcpu` is still treated as an
integer (this will be a topic of a separate commit), and that this
implementation avoids relying on 32-bit floating point arithmetic as there
are some potential problems with using f32.  For instance, some numbers
commonly used in decimal, notably all of single-decimal-digit numbers
0.1, 0.2 .. 0.9 except 0.5, are periodic in binary and thus fundamentally
not representable exactly.  Arithmetic performed on such numbers can lead
to surprising results, e.g. adding 0.1 ten times gives 1.0000001, not 1,
and taking a ceil() results in 2, clearly a wrong answer in vcpu
allocation.

So instead, we take advantage of the fact that container requests happen
to be expressed as a quota/period fraction so we can sum up quotas,
fundamentally integral numbers (possibly fractional only due to the need
to rewrite them with a common denominator) with much less danger of
precision loss.

Signed-off-by: Pavel Mores <pmores@redhat.com>
2025-08-07 10:32:44 +02:00
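To illustrate the ordering change with made-up numbers, here is a sketch that uses f64 purely for clarity; as the message notes, the actual implementation sums quotas instead of floating-point fractions to avoid precision loss:

```rust
/// Old order: round each container's quota/period up, then sum.
fn vcpus_round_then_sum(requests: &[(i64, u64)]) -> u32 {
    requests
        .iter()
        .map(|&(quota, period)| (quota as f64 / period as f64).ceil() as u32)
        .sum()
}

/// New order: sum the fractional requests, then round up once.
fn vcpus_sum_then_round(requests: &[(i64, u64)]) -> u32 {
    let total: f64 = requests
        .iter()
        .map(|&(quota, period)| quota as f64 / period as f64)
        .sum();
    total.ceil() as u32
}

fn main() {
    // Three containers each requesting 0.3 vCPU (quota 30000, period 100000).
    let requests = [(30_000i64, 100_000u64); 3];
    assert_eq!(vcpus_round_then_sum(&requests), 3); // excess allocation
    assert_eq!(vcpus_sum_then_round(&requests), 1); // 0.9 rounds up to 1
}
```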
Christophe de Dinechin
ec480dc438 qemu: Respect the JSON schema for hot plug
When hot-plugging CPUs on QEMU, we send a QMP command with JSON
arguments. QEMU 9.2 recently became more strict[1] enforcing the
JSON schema for QMP parameters. As a result, running Kata Containers
with QEMU 9.2 results in a message complaining that the core-id
parameter is expected to be an integer:

```
qmp hotplug cpu, cpuID=cpu-0 socketID=1, error:
QMP command failed:
Invalid parameter type for 'core-id', expected: integer
```

Fix that by changing the core-id, socket-id and thread-id to be
integer values.

[1]: be93fd5372

Fixes: #11633

Signed-off-by: Christophe de Dinechin <dinechin@redhat.com>
2025-08-07 09:13:57 +02:00
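Illustrative only: the shape of the QMP CPU hot-plug arguments before and after the fix. serde_json is used here just to show the JSON; the Go runtime builds these arguments through its own QMP layer:

```rust
use serde_json::json;

fn main() {
    // Rejected by QEMU 9.2+: string-typed topology IDs violate the QMP schema.
    let rejected = json!({ "core-id": "0", "socket-id": "1", "thread-id": "0" });
    // Accepted: core-id, socket-id and thread-id as integers.
    let accepted = json!({ "core-id": 0, "socket-id": 1, "thread-id": 0 });
    println!("{rejected}\n{accepted}");
}
```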
Alex Lyn
37685c41c7 runtime-rs: Correct the corresponding initdata annotation const
As we have changed the initdata annotation definition, we also
need to correct its const definition to KATA_ANNO_CFG_RUNTIME_INIT_DATA.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-08-07 10:45:28 +08:00
Alex Lyn
163f04a918 Merge pull request #11651 from microsoft/danmihai1/debug-kubectl-logs
tests: k8s-sandbox-vcpus-allocation debug info
2025-08-07 10:27:29 +08:00
Aurélien Bombo
e3b4d87b6d ci: static-checks: add SECURITY.md to exclude list
This adds SECURITY.md to the list of GH-native files that should be excluded by
the reference checker.

Today this is useful for downstreams who already have a SECURITY.md file for
compliance reasons. When Kata onboards that file, this commit will also be
required.

Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
2025-08-06 11:24:52 -05:00
Markus Rudy
3eb0641431 genpolicy: add rule for AddARPNeighbors
When the network interface provisioned by the CNI has static ARP table entries,
the runtime calls AddARPNeighbor to propagate these to the agent. As of today,
these calls are simply rejected.

In order to allow the calls, we do some sanity checks on the arguments:

We must ensure that we don't unexpectedly route traffic to the host that was
not intended to leave the VM. In a first approximation, this applies to
loopback IPs and devices. However, there may be other sensitive ranges (for
example, VPNs between VMs), so there should be some flexibility for users to
restrict this further. This is why we introduce a setting, similar to
UpdateRoutes, that allows restricting the neighbor IPs further.

The only valid state of an ARP neighbor entry is NUD_PERMANENT, which has a
value of 128 [1]. This is already enforced by the runtime.

According to rtnetlink(7), valid flag values are 8 and 128, respectively [2],
thus we allow any combination of these.

[1]: https://github.com/torvalds/linux/blob/4790580/include/uapi/linux/neighbour.h#L72
[2]: https://github.com/torvalds/linux/blob/4790580/include/uapi/linux/neighbour.h#L49C20-L53

Fixes: #11664

Signed-off-by: Markus Rudy <mr@edgeless.systems>
2025-08-06 17:24:36 +02:00
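A hedged Rust mirror of the sanity checks described above; the generated policy rules are actually Rego, so this only sketches the logic:

```rust
use std::net::IpAddr;

const NUD_PERMANENT: u16 = 128; // the only valid neighbor state
const ALLOWED_FLAGS: u16 = 8 | 128; // the two flag values referenced above

fn neighbor_allowed(state: u16, flags: u16, ip: IpAddr) -> bool {
    // Reject loopback targets so traffic meant to stay in the VM isn't routed to the host.
    state == NUD_PERMANENT && (flags & !ALLOWED_FLAGS) == 0 && !ip.is_loopback()
}

fn main() {
    let ip: IpAddr = "10.0.0.2".parse().unwrap();
    assert!(neighbor_allowed(128, 0, ip));
    assert!(!neighbor_allowed(128, 0, "127.0.0.1".parse().unwrap()));
}
```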
Zvonko Kaiser
1b1b3af9ab ci: Remove trigger for stable branch
We do not support stable branches anymore,
remove the trigger for it.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-08-06 09:22:24 +08:00
Hyounggyu Choi
af01434226 Merge pull request #11646 from kata-containers/sprt/param-static-checks
ci: static-checks: Auto-detect repo by default
2025-08-05 22:13:20 +02:00
Alex Lyn
ede773db17 kata-types: Align the initdata annotation with kata-runtime's definition
To make it work within CI, we align with kata-runtime's definition,
"io.katacontainers.config.runtime.cc_init_data".

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-08-03 22:51:39 +08:00
Dan Mihai
05eca5ca25 tests: k8s-sandbox-vcpus-allocation debug info
Print more details about the behavior of "kubectl logs", trying to understand
errors like:

https://github.com/kata-containers/kata-containers/actions/runs/16662887973/job/47164791712

not ok 1 Check the number vcpus are correctly allocated to the sandbox
 (in test file k8s-sandbox-vcpus-allocation.bats, line 37)
   `[ `kubectl logs ${pods[$i]}` -eq ${expected_vcpus[$i]} ]' failed with status 2
 No resources found in kata-containers-k8s-tests namespace.
...
 k8s-sandbox-vcpus-allocation.bats: line 37: [: -eq: unary operator expected

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-08-01 20:09:17 +00:00
Aurélien Bombo
c47bff6d6a Merge pull request #11637 from kata-containers/sprt/remove-install-az-cli
gha: Remove unnecessary install-azure-cli step
2025-08-01 09:34:46 -05:00
Fabiano Fidêncio
82f141a02e Merge pull request #11632 from burgerdev/codegen
runtime: reproducible generation of Golang proto bindings
2025-07-31 23:49:18 +02:00
Fabiano Fidêncio
7198c8789e Merge pull request #11639 from zvonkok/gpu_guest_components
gpu: guest components
2025-07-31 21:42:31 +02:00
Aurélien Bombo
9585e608e5 ci: static-checks: Auto-detect repo by default
This auto-detects the repo by default (instead of having to specify
KATA_DEV_MODE=true) so that forked repos can leverage the static-checks.yaml CI
check without modification.

An alternative would have been to pass the repo in static-checks.yaml. However,
because of the matrix, this would've changed the check name, which is a pain to
handle in either the gatekeeper or the GH UI.

Example fork failure:
https://github.com/microsoft/kata-containers/actions/runs/16656407213/job/47142421739#step:8:75

I've tested this change to work in a fork.

Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
2025-07-31 14:33:24 -05:00
Zvonko Kaiser
8422411d91 gpu: Add coco guest components
The second stage needs to consider the coco guest components

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-07-31 17:11:21 +00:00
Markus Rudy
3fd354b991 ci: add codegen to static-checks
Signed-off-by: Markus Rudy <mr@edgeless.systems>

Fixes: #11631

Co-authored-by: Steve Horsman <steven@uk.ibm.com>
2025-07-31 17:58:25 +01:00
Markus Rudy
9e38fd2562 tools: add image for Go proto bindings
In order to have a reproducible code generation process, we need to pin
the versions of the tools used. This is most easily accomplished by
generating inside a container.

This commit adds a container image definition with fixed dependencies
for Golang proto/ttrpc code generation, and changes the agent Makefile
to invoke the update-generated-proto.sh script from within that
container.

Signed-off-by: Markus Rudy <mr@edgeless.systems>
2025-07-31 17:58:25 +01:00
Markus Rudy
f7a36df290 runtime: generate proto files
The generated Go bindings for the agent are out of date. This commit
was produced by running
src/agent/src/libs/protocols/hack/update-generated-proto.sh with
protobuf compiler versions matching those of the last run, according to
the generated code comments.

Since there are new RPC methods, those needed to be added to the
HybridVSockTTRPCMockImp.

Signed-off-by: Markus Rudy <mr@edgeless.systems>
2025-07-31 17:58:25 +01:00
Fabiano Fidêncio
d077ed4c1e Merge pull request #11645 from kata-containers/topic/fix-kbuild-sign-pin-issue
build: nvidia: Fix KBUILD_SIGN_PIN breakage
2025-07-31 18:31:34 +02:00
Fabiano Fidêncio
8d30b84abd build: nvidia: Fix KBUILD_SIGN_PIN breakage
We only need KBUILD_SIGN_PIN exported when building nvidia related
artefacts.

Signed-off-by: Fabiano Fidêncio <fidencio@northflank.com>
2025-07-31 16:39:20 +02:00
Fabiano Fidêncio
20bef41347 Merge pull request #11236 from kata-containers/amd64-nvidia-gpu-cicd
gpu: AMD64 NVIDIA GPU CI/CD
2025-07-31 14:52:01 +02:00
Aurélien Bombo
96f1d95de5 gha: Remove unnecessary install-azure-cli step
az cli is already installed by the azure/login action.

Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
2025-07-30 10:42:56 -05:00
Zvonko Kaiser
fbb0e7f2f2 gpu: Add secrets passthrough to the workflow
We need to pass through the secrets in all the needed workflows:
ci, ci-on-push, ci-nightly, ci-devel.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-07-30 13:51:01 +00:00
Zvonko Kaiser
30778594d0 gpu: Add arm64-nvidia-a100 to actionlint.yaml
Make zizmor happy about our custom runner label

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-07-30 13:45:59 +00:00
Zvonko Kaiser
8768e08258 gpu: Add embedding service
For a simple RAG pipeline, add an embedding service.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-07-30 13:45:59 +00:00
Zvonko Kaiser
254dbd9b45 gpu: Add Pod spec for NIM llama
Pod spec for the NIM inferencing service

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-07-30 13:45:59 +00:00
Zvonko Kaiser
568b13400a gpu: Add NIM bats test
We're running a simple NIM container to test if the GPUs
are working properly

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-07-30 13:45:59 +00:00
Zvonko Kaiser
6188b7f79f gpu: Add run_kubernetes_nv_tests.sh
Replicate what we have for run_tests and run .bats files

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-07-30 13:45:59 +00:00
Zvonko Kaiser
9a829107ba gpu: Add selector for k8s tests
We want to reuse the current run_tests with GPUs, so introduce a var
that will define what to run.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-07-30 13:45:59 +00:00
Zvonko Kaiser
7669f1fbd1 gpu: Add NVIDIA GPU test block for amd64
Once we have the amd64 artifacts we can run some amd64 k8s tests.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-07-30 13:45:59 +00:00
Zvonko Kaiser
97d7575d41 gpu: Disable metrics tests
We are not running the metrics tests anyway for now,
so let's make room to run the GPU tests.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-07-30 13:45:58 +00:00
Anastassios Nanos
00e0db99a3 Merge pull request #11627 from itsmohitnarayan/FirecrackerVersionUpdate 2025-07-30 13:59:55 +03:00
Kumar Mohit
5cccbb9f41 versions: Upgrade Firecracker Version to 1.12.1
Updated versions.yaml to use Firecracker v1.12.1.
Replaced firecracker and jailer binaries under /opt/kata/bin.

Tested with kata-fc runtime on Kubernetes:
- Deployed pods using gitpod/openvscode-server
- Verified microVM startup, container access, and Firecracker usage
- Confirmed Firecracker and jailer versions via CLI

Signed-off-by: Kumar Mohit <68772712+itsmohitnarayan@users.noreply.github.com>
2025-07-30 12:51:08 +05:30
215 changed files with 5350 additions and 1842 deletions

View File

@@ -23,3 +23,4 @@ self-hosted-runner:
- s390x
- s390x-large
- tdx
- amd64-nvidia-a100

View File

@@ -9,8 +9,7 @@ on:
- labeled
- unlabeled
permissions:
contents: read
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}

View File

@@ -11,8 +11,8 @@ on:
paths:
- '.github/workflows/**'
permissions:
contents: read
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}

View File

@@ -13,8 +13,7 @@ on:
type: string
default: ""
permissions:
contents: read
permissions: {}
jobs:
run-containerd-sandboxapi:

View File

@@ -13,8 +13,7 @@ on:
type: string
default: ""
permissions:
contents: read
permissions: {}
jobs:
run-containerd-sandboxapi:

View File

@@ -12,8 +12,7 @@ on:
required: true
type: string
permissions:
contents: read
permissions: {}
name: Build checks preview riscv64
jobs:

View File

@@ -5,8 +5,8 @@ on:
required: true
type: string
permissions:
contents: read
permissions: {}
name: Build checks
jobs:
@@ -42,6 +42,10 @@ jobs:
path: src/runtime-rs
needs:
- rust
- name: libs
path: src/libs
needs:
- rust
- name: agent-ctl
path: src/tools/agent-ctl
needs:

View File

@@ -23,9 +23,10 @@ on:
secrets:
QUAY_DEPLOYER_PASSWORD:
required: false
KBUILD_SIGN_PIN:
required: true
permissions:
contents: read
permissions: {}
jobs:
build-asset:
@@ -95,6 +96,7 @@ jobs:
- name: Build ${{ matrix.asset }}
id: build
run: |
[[ "${KATA_ASSET}" == *"nvidia"* ]] && echo "KBUILD_SIGN_PIN=${{ secrets.KBUILD_SIGN_PIN }}" >> "${GITHUB_ENV}"
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
@@ -141,7 +143,7 @@ jobs:
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-amd64-${{ matrix.asset }}${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
path: kata-build/kata-static-${{ matrix.asset }}.tar.zst
retention-days: 15
if-no-files-found: error
@@ -150,7 +152,7 @@ jobs:
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-amd64-${{ matrix.asset }}-headers${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}-headers.tar.xz
path: kata-build/kata-static-${{ matrix.asset }}-headers.tar.zst
retention-days: 15
if-no-files-found: error
@@ -201,6 +203,7 @@ jobs:
- name: Build ${{ matrix.asset }}
id: build
run: |
[[ "${KATA_ASSET}" == *"nvidia"* ]] && echo "KBUILD_SIGN_PIN=${{ secrets.KBUILD_SIGN_PIN }}" >> "${GITHUB_ENV}"
./tests/gha-adjust-to-use-prebuilt-components.sh kata-artifacts "${KATA_ASSET}"
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
@@ -220,7 +223,7 @@ jobs:
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-amd64-${{ matrix.asset }}${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
path: kata-build/kata-static-${{ matrix.asset }}.tar.zst
retention-days: 15
if-no-files-found: error
@@ -312,7 +315,7 @@ jobs:
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-amd64-shim-v2${{ inputs.tarball-suffix }}
path: kata-build/kata-static-shim-v2.tar.xz
path: kata-build/kata-static-shim-v2.tar.zst
retention-days: 15
if-no-files-found: error
@@ -349,6 +352,6 @@ jobs:
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-static.tar.xz
path: kata-static.tar.zst
retention-days: 15
if-no-files-found: error

View File

@@ -24,8 +24,7 @@ on:
QUAY_DEPLOYER_PASSWORD:
required: false
permissions:
contents: read
permissions: {}
jobs:
build-asset:
@@ -121,7 +120,7 @@ jobs:
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-arm64-${{ matrix.asset }}${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
path: kata-build/kata-static-${{ matrix.asset }}.tar.zst
retention-days: 15
if-no-files-found: error
@@ -130,7 +129,7 @@ jobs:
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-arm64-${{ matrix.asset }}-headers${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}-headers.tar.xz
path: kata-build/kata-static-${{ matrix.asset }}-headers.tar.zst
retention-days: 15
if-no-files-found: error
@@ -195,7 +194,7 @@ jobs:
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-arm64-${{ matrix.asset }}${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
path: kata-build/kata-static-${{ matrix.asset }}.tar.zst
retention-days: 15
if-no-files-found: error
@@ -282,7 +281,7 @@ jobs:
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-arm64-shim-v2${{ inputs.tarball-suffix }}
path: kata-build/kata-static-shim-v2.tar.xz
path: kata-build/kata-static-shim-v2.tar.zst
retention-days: 15
if-no-files-found: error
@@ -319,6 +318,6 @@ jobs:
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-static-tarball-arm64${{ inputs.tarball-suffix }}
path: kata-static.tar.xz
path: kata-static.tar.zst
retention-days: 15
if-no-files-found: error

View File

@@ -24,8 +24,7 @@ on:
QUAY_DEPLOYER_PASSWORD:
required: true
permissions:
contents: read
permissions: {}
jobs:
build-asset:
@@ -83,7 +82,7 @@ jobs:
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-ppc64le-${{ matrix.asset }}${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
path: kata-build/kata-static-${{ matrix.asset }}.tar.zst
retention-days: 1
if-no-files-found: error
@@ -148,7 +147,7 @@ jobs:
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-ppc64le-${{ matrix.asset }}${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
path: kata-build/kata-static-${{ matrix.asset }}.tar.zst
retention-days: 1
if-no-files-found: error
@@ -221,7 +220,7 @@ jobs:
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-ppc64le-shim-v2${{ inputs.tarball-suffix }}
path: kata-build/kata-static-shim-v2.tar.xz
path: kata-build/kata-static-shim-v2.tar.zst
retention-days: 1
if-no-files-found: error
@@ -262,6 +261,6 @@ jobs:
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-static-tarball-ppc64le${{ inputs.tarball-suffix }}
path: kata-static.tar.xz
path: kata-static.tar.zst
retention-days: 1
if-no-files-found: error

View File

@@ -24,8 +24,7 @@ on:
QUAY_DEPLOYER_PASSWORD:
required: true
permissions:
contents: read
permissions: {}
jobs:
build-asset:
@@ -81,6 +80,6 @@ jobs:
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-riscv64-${{ matrix.asset }}${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
path: kata-build/kata-static-${{ matrix.asset }}.tar.zst
retention-days: 15
if-no-files-found: error

View File

@@ -27,8 +27,7 @@ on:
required: true
permissions:
contents: read
permissions: {}
jobs:
build-asset:
@@ -115,7 +114,7 @@ jobs:
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-s390x-${{ matrix.asset }}${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
path: kata-build/kata-static-${{ matrix.asset }}.tar.zst
retention-days: 15
if-no-files-found: error
@@ -182,7 +181,7 @@ jobs:
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-s390x-${{ matrix.asset }}${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
path: kata-build/kata-static-${{ matrix.asset }}.tar.zst
retention-days: 15
if-no-files-found: error
@@ -230,7 +229,7 @@ jobs:
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-s390x${{ inputs.tarball-suffix }}
path: kata-build/kata-static-boot-image-se.tar.xz
path: kata-build/kata-static-boot-image-se.tar.zst
retention-days: 1
if-no-files-found: error
@@ -307,7 +306,7 @@ jobs:
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-s390x-shim-v2${{ inputs.tarball-suffix }}
path: kata-build/kata-static-shim-v2.tar.xz
path: kata-build/kata-static-shim-v2.tar.zst
retention-days: 15
if-no-files-found: error
@@ -348,6 +347,6 @@ jobs:
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-static-tarball-s390x${{ inputs.tarball-suffix }}
path: kata-static.tar.xz
path: kata-static.tar.zst
retention-days: 15
if-no-files-found: error

View File

@@ -11,8 +11,7 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
permissions:
contents: read
permissions: {}
jobs:
cargo-deny-runner:

View File

@@ -9,8 +9,7 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
permissions:
contents: read
permissions: {}
jobs:
kata-containers-ci-on-push:
@@ -31,3 +30,4 @@ jobs:
AZ_TENANT_ID: ${{ secrets.AZ_TENANT_ID }}
AZ_SUBSCRIPTION_ID: ${{ secrets.AZ_SUBSCRIPTION_ID }}
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
KBUILD_SIGN_PIN: ${{ secrets.KBUILD_SIGN_PIN }}

View File

@@ -2,8 +2,7 @@ name: Kata Containers CI (manually triggered)
on:
workflow_dispatch:
permissions:
contents: read
permissions: {}
jobs:
kata-containers-ci-on-push:
@@ -27,6 +26,8 @@ jobs:
CI_HKD_PATH: ${{ secrets.CI_HKD_PATH }}
ITA_KEY: ${{ secrets.ITA_KEY }}
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
NGC_API_KEY: ${{ secrets.NGC_API_KEY }}
KBUILD_SIGN_PIN: ${{ secrets.KBUILD_SIGN_PIN }}
build-checks:
uses: ./.github/workflows/build-checks.yaml

View File

@@ -4,8 +4,7 @@ on:
name: Nightly CI for s390x
permissions:
contents: read
permissions: {}
jobs:
check-internal-test-result:

View File

@@ -7,8 +7,7 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
permissions:
contents: read
permissions: {}
jobs:
kata-containers-ci-on-push:
@@ -31,3 +30,5 @@ jobs:
CI_HKD_PATH: ${{ secrets.CI_HKD_PATH }}
ITA_KEY: ${{ secrets.ITA_KEY }}
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
NGC_API_KEY: ${{ secrets.NGC_API_KEY }}
KBUILD_SIGN_PIN: ${{ secrets.KBUILD_SIGN_PIN }}

View File

@@ -3,7 +3,6 @@ on:
pull_request_target:
branches:
- 'main'
- 'stable-*'
types:
# Adding 'labeled' to the list of activity types that trigger this event
# (default: opened, synchronize, reopened) so that we can run this
@@ -14,8 +13,7 @@ on:
- reopened
- labeled
permissions:
contents: read
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
@@ -52,3 +50,5 @@ jobs:
CI_HKD_PATH: ${{ secrets.CI_HKD_PATH }}
ITA_KEY: ${{ secrets.ITA_KEY }}
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
NGC_API_KEY: ${{ secrets.NGC_API_KEY }}
KBUILD_SIGN_PIN: ${{ secrets.KBUILD_SIGN_PIN }}

View File

@@ -27,9 +27,10 @@ on:
required: true
QUAY_DEPLOYER_PASSWORD:
required: true
KBUILD_SIGN_PIN:
required: true
permissions:
contents: read
permissions: {}
jobs:
build-kata-static-tarball-amd64:
@@ -43,6 +44,8 @@ jobs:
tarball-suffix: -${{ inputs.tag }}
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
secrets:
KBUILD_SIGN_PIN: ${{ secrets.KBUILD_SIGN_PIN }}
publish-kata-deploy-payload-amd64:
needs: build-kata-static-tarball-amd64

View File

@@ -35,10 +35,12 @@ on:
required: true
QUAY_DEPLOYER_PASSWORD:
required: true
NGC_API_KEY:
required: true
KBUILD_SIGN_PIN:
required: true
permissions:
contents: read
id-token: write
permissions: {}
jobs:
build-kata-static-tarball-amd64:
@@ -52,6 +54,8 @@ jobs:
tarball-suffix: -${{ inputs.tag }}
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
secrets:
KBUILD_SIGN_PIN: ${{ secrets.KBUILD_SIGN_PIN }}
publish-kata-deploy-payload-amd64:
needs: build-kata-static-tarball-amd64
@@ -286,6 +290,10 @@ jobs:
if: ${{ inputs.skip-test != 'yes' }}
needs: publish-kata-deploy-payload-amd64
uses: ./.github/workflows/run-k8s-tests-on-aks.yaml
permissions:
contents: read
id-token: write # Used for OIDC access to log into Azure
with:
tarball-suffix: -${{ inputs.tag }}
registry: ghcr.io
@@ -323,6 +331,21 @@ jobs:
pr-number: ${{ inputs.pr-number }}
target-branch: ${{ inputs.target-branch }}
run-k8s-tests-on-nvidia-gpu:
if: ${{ inputs.skip-test != 'yes' }}
needs: publish-kata-deploy-payload-amd64
uses: ./.github/workflows/run-k8s-tests-on-nvidia-gpu.yaml
with:
registry: ghcr.io
repo: ${{ github.repository_owner }}/kata-deploy-ci
tag: ${{ inputs.tag }}-amd64
commit-hash: ${{ inputs.commit-hash }}
pr-number: ${{ inputs.pr-number }}
target-branch: ${{ inputs.target-branch }}
secrets:
NGC_API_KEY: ${{ secrets.NGC_API_KEY }}
run-kata-coco-tests:
if: ${{ inputs.skip-test != 'yes' }}
needs:
@@ -330,6 +353,9 @@ jobs:
- build-and-publish-tee-confidential-unencrypted-image
- publish-csi-driver-amd64
uses: ./.github/workflows/run-kata-coco-tests.yaml
permissions:
contents: read
id-token: write # Used for OIDC access to log into Azure
with:
tarball-suffix: -${{ inputs.tag }}
registry: ghcr.io
@@ -383,20 +409,6 @@ jobs:
pr-number: ${{ inputs.pr-number }}
target-branch: ${{ inputs.target-branch }}
run-metrics-tests:
# Skip metrics tests whilst runner is broken
if: false
# if: ${{ inputs.skip-test != 'yes' }}
needs: build-kata-static-tarball-amd64
uses: ./.github/workflows/run-metrics.yaml
with:
registry: ghcr.io
repo: ${{ github.repository_owner }}/kata-deploy-ci
tag: ${{ inputs.tag }}-amd64
commit-hash: ${{ inputs.commit-hash }}
pr-number: ${{ inputs.pr-number }}
target-branch: ${{ inputs.target-branch }}
run-basic-amd64-tests:
if: ${{ inputs.skip-test != 'yes' }}
needs: build-kata-static-tarball-amd64

View File

@@ -4,13 +4,13 @@ on:
- cron: "0 0 * * *"
workflow_dispatch:
permissions:
contents: read
id-token: write
permissions: {}
jobs:
cleanup-resources:
runs-on: ubuntu-22.04
permissions:
id-token: write # Used for OIDC access to log into Azure
environment: ci
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2

View File

@@ -19,8 +19,8 @@ on:
schedule:
- cron: '45 0 * * 1'
permissions:
contents: read
permissions: {}
jobs:
analyze:

View File

@@ -6,8 +6,7 @@ on:
- reopened
- synchronize
permissions:
contents: read
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}

View File

@@ -6,8 +6,7 @@ on:
- reopened
- synchronize
permissions:
contents: read
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
@@ -18,13 +17,15 @@ jobs:
test:
runs-on: macos-latest
steps:
- name: Install Go
uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
with:
go-version: 1.23.10
- name: Checkout code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Install golang
run: |
./tests/install_go.sh -f -p
echo "/usr/local/go/bin" >> "${GITHUB_PATH}"
- name: Build utils
run: ./ci/darwin-test.sh

View File

@@ -2,8 +2,7 @@ on:
schedule:
- cron: '0 23 * * 0'
permissions:
contents: read
permissions: {}
name: Docs URL Alive Check
jobs:
@@ -14,23 +13,21 @@ jobs:
env:
target_branch: ${{ github.base_ref }}
steps:
- name: Install Go
uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
with:
go-version: 1.23.10
env:
GOPATH: ${{ github.workspace }}/kata-containers
- name: Set env
run: |
echo "GOPATH=${{ github.workspace }}" >> "$GITHUB_ENV"
echo "${{ github.workspace }}/bin" >> "$GITHUB_PATH"
- name: Checkout code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: false
path: ./src/github.com/${{ github.repository }}
# docs url alive check
- name: Install golang
run: |
./tests/install_go.sh -f -p
echo "/usr/local/go/bin" >> "${GITHUB_PATH}"
- name: Docs URL Alive Check
run: |
cd "${GOPATH}/src/github.com/${{ github.repository }}" && make docs-url-alive-check

View File

@@ -31,8 +31,7 @@ on:
skip_static:
value: ${{ jobs.skipper.outputs.skip_static }}
permissions:
contents: read
permissions: {}
jobs:
skipper:

View File

@@ -12,8 +12,7 @@ on:
- reopened
- labeled
permissions:
contents: read
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}

View File

@@ -3,8 +3,7 @@ on:
name: Govulncheck
permissions:
contents: read
permissions: {}
jobs:
govulncheck:
@@ -14,12 +13,12 @@ jobs:
include:
- binary: "kata-runtime"
make_target: "runtime"
- binary: "containerd-shim-kata-v2"
- binary: "containerd-shim-kata-v2"
make_target: "containerd-shim-v2"
- binary: "kata-monitor"
make_target: "monitor"
fail-fast: false
steps:
- name: Checkout the code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4

View File

@@ -6,8 +6,7 @@ on:
- reopened
- synchronize
permissions:
contents: read
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}

View File

@@ -15,6 +15,8 @@ on:
push:
branches: [ "main" ]
permissions: {}
jobs:
scan-scheduled:
permissions:

View File

@@ -5,8 +5,7 @@ on:
- main
workflow_dispatch:
permissions:
contents: read
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
@@ -25,6 +24,7 @@ jobs:
target-branch: ${{ github.ref_name }}
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
KBUILD_SIGN_PIN: ${{ secrets.KBUILD_SIGN_PIN }}
build-assets-arm64:
permissions:

View File

@@ -34,8 +34,7 @@ on:
QUAY_DEPLOYER_PASSWORD:
required: true
permissions:
contents: read
permissions: {}
jobs:
kata-payload:
@@ -85,6 +84,6 @@ jobs:
TAG: ${{ inputs.tag }}
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
"$(pwd)/kata-static.tar.xz" \
"$(pwd)/kata-static.tar.zst" \
"${REGISTRY}/${REPO}" \
"${TAG}"

View File

@@ -8,9 +8,10 @@ on:
secrets:
QUAY_DEPLOYER_PASSWORD:
required: true
KBUILD_SIGN_PIN:
required: true
permissions:
contents: read
permissions: {}
jobs:
build-kata-static-tarball-amd64:
@@ -20,6 +21,7 @@ jobs:
stage: release
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
KBUILD_SIGN_PIN: ${{ secrets.KBUILD_SIGN_PIN }}
permissions:
contents: read
packages: write
@@ -71,9 +73,9 @@ jobs:
fi
for tag in "${tags[@]}"; do
./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
"$(pwd)"/kata-static.tar.xz "ghcr.io/kata-containers/kata-deploy" \
"$(pwd)"/kata-static.tar.zst "ghcr.io/kata-containers/kata-deploy" \
"${tag}-${TARGET_ARCH}"
./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
"$(pwd)"/kata-static.tar.xz "quay.io/kata-containers/kata-deploy" \
"$(pwd)"/kata-static.tar.zst "quay.io/kata-containers/kata-deploy" \
"${tag}-${TARGET_ARCH}"
done

View File

@@ -9,8 +9,7 @@ on:
QUAY_DEPLOYER_PASSWORD:
required: true
permissions:
contents: read
permissions: {}
jobs:
build-kata-static-tarball-arm64:
@@ -71,9 +70,9 @@ jobs:
fi
for tag in "${tags[@]}"; do
./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
"$(pwd)"/kata-static.tar.xz "ghcr.io/kata-containers/kata-deploy" \
"$(pwd)"/kata-static.tar.zst "ghcr.io/kata-containers/kata-deploy" \
"${tag}-${TARGET_ARCH}"
./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
"$(pwd)"/kata-static.tar.xz "quay.io/kata-containers/kata-deploy" \
"$(pwd)"/kata-static.tar.zst "quay.io/kata-containers/kata-deploy" \
"${tag}-${TARGET_ARCH}"
done

View File

@@ -9,8 +9,7 @@ on:
QUAY_DEPLOYER_PASSWORD:
required: true
permissions:
contents: read
permissions: {}
jobs:
build-kata-static-tarball-ppc64le:
@@ -71,9 +70,9 @@ jobs:
fi
for tag in "${tags[@]}"; do
./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
"$(pwd)"/kata-static.tar.xz "ghcr.io/kata-containers/kata-deploy" \
"$(pwd)"/kata-static.tar.zst "ghcr.io/kata-containers/kata-deploy" \
"${tag}-${TARGET_ARCH}"
./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
"$(pwd)"/kata-static.tar.xz "quay.io/kata-containers/kata-deploy" \
"$(pwd)"/kata-static.tar.zst "quay.io/kata-containers/kata-deploy" \
"${tag}-${TARGET_ARCH}"
done

View File

@@ -11,8 +11,7 @@ on:
QUAY_DEPLOYER_PASSWORD:
required: true
permissions:
contents: read
permissions: {}
jobs:
build-kata-static-tarball-s390x:
@@ -75,9 +74,9 @@ jobs:
fi
for tag in "${tags[@]}"; do
./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
"$(pwd)"/kata-static.tar.xz "ghcr.io/kata-containers/kata-deploy" \
"$(pwd)"/kata-static.tar.zst "ghcr.io/kata-containers/kata-deploy" \
"${tag}-${TARGET_ARCH}"
./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
"$(pwd)"/kata-static.tar.xz "quay.io/kata-containers/kata-deploy" \
"$(pwd)"/kata-static.tar.zst "quay.io/kata-containers/kata-deploy" \
"${tag}-${TARGET_ARCH}"
done

View File

@@ -2,8 +2,7 @@ name: Release Kata Containers
on:
workflow_dispatch
permissions:
contents: read
permissions: {}
jobs:
release:
@@ -35,6 +34,7 @@ jobs:
target-arch: amd64
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
KBUILD_SIGN_PIN: ${{ secrets.KBUILD_SIGN_PIN }}
build-and-push-assets-arm64:
needs: release
@@ -126,7 +126,7 @@ jobs:
- name: Set KATA_STATIC_TARBALL env var
run: |
tarball=$(pwd)/kata-static.tar.xz
tarball=$(pwd)/kata-static.tar.zst
echo "KATA_STATIC_TARBALL=${tarball}" >> "$GITHUB_ENV"
- name: Download amd64 artifacts

View File

@@ -1,7 +1,6 @@
name: CI | Run cri-containerd tests
permissions:
contents: read
permissions: {}
on:
workflow_call:

View File

@@ -34,9 +34,7 @@ on:
required: true
permissions:
contents: read
id-token: write
permissions: {}
jobs:
run-k8s-tests:
@@ -71,6 +69,9 @@ jobs:
instance-type: normal
auto-generate-policy: yes
runs-on: ubuntu-22.04
permissions:
contents: read
id-token: write # Used for OIDC access to log into Azure
environment: ci
env:
DOCKER_REGISTRY: ${{ inputs.registry }}

View File

@@ -22,8 +22,7 @@ on:
type: string
default: ""
permissions:
contents: read
permissions: {}
jobs:
run-k8s-tests-amd64:

View File

@@ -22,8 +22,7 @@ on:
type: string
default: ""
permissions:
contents: read
permissions: {}
jobs:
run-k8s-tests-on-arm64:

View File

@@ -0,0 +1,89 @@
name: CI | Run NVIDIA GPU kubernetes tests on amd64
on:
workflow_call:
inputs:
registry:
required: true
type: string
repo:
required: true
type: string
tag:
required: true
type: string
pr-number:
required: true
type: string
commit-hash:
required: false
type: string
target-branch:
required: false
type: string
default: ""
secrets:
NGC_API_KEY:
required: true
permissions: {}
jobs:
run-nvidia-gpu-tests-on-amd64:
strategy:
fail-fast: false
matrix:
vmm:
- qemu-nvidia-gpu
k8s:
- kubeadm
runs-on: amd64-nvidia-a100
env:
DOCKER_REGISTRY: ${{ inputs.registry }}
DOCKER_REPO: ${{ inputs.repo }}
DOCKER_TAG: ${{ inputs.tag }}
GH_PR_NUMBER: ${{ inputs.pr-number }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
KUBERNETES: ${{ matrix.k8s }}
USING_NFD: "false"
K8S_TEST_HOST_TYPE: all
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Deploy Kata
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata
- name: Install `bats`
run: bash tests/integration/kubernetes/gha-run.sh install-bats
- name: Run tests
timeout-minutes: 30
run: bash tests/integration/kubernetes/gha-run.sh run-nv-tests
env:
NGC_API_KEY: ${{ secrets.NGC_API_KEY }}
- name: Collect artifacts ${{ matrix.vmm }}
if: always()
run: bash tests/integration/kubernetes/gha-run.sh collect-artifacts
continue-on-error: true
- name: Archive artifacts ${{ matrix.vmm }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: k8s-tests-${{ matrix.vmm }}-${{ matrix.k8s }}-${{ inputs.tag }}
path: /tmp/artifacts
retention-days: 1
- name: Delete kata-deploy
if: always()
timeout-minutes: 5
run: bash tests/integration/kubernetes/gha-run.sh cleanup

View File

@@ -22,8 +22,7 @@ on:
type: string
default: ""
permissions:
contents: read
permissions: {}
jobs:
run-k8s-tests:

View File

@@ -25,8 +25,7 @@ on:
AUTHENTICATED_IMAGE_PASSWORD:
required: true
permissions:
contents: read
permissions: {}
jobs:
run-k8s-tests:

View File

@@ -35,9 +35,7 @@ on:
AUTHENTICATED_IMAGE_PASSWORD:
required: true
permissions:
contents: read
id-token: write
permissions: {}
jobs:
# Generate jobs for testing CoCo on non-TEE environments
@@ -52,6 +50,9 @@ jobs:
pull-type:
- guest-pull
runs-on: ubuntu-22.04
permissions:
id-token: write # Used for OIDC access to log into Azure
environment: ci
env:
DOCKER_REGISTRY: ${{ inputs.registry }}
@@ -91,9 +92,6 @@ jobs:
- name: Install kata
run: bash tests/integration/kubernetes/gha-run.sh install-kata-tools kata-artifacts
- name: Download Azure CLI
run: bash tests/integration/kubernetes/gha-run.sh install-azure-cli
- name: Log into the Azure account
uses: azure/login@a457da9ea143d694b1b9c7c869ebb04ebe844ef5 # v2.3.0
with:

View File

@@ -36,9 +36,7 @@ on:
ITA_KEY:
required: true
permissions:
contents: read
id-token: write
permissions: {}
jobs:
run-k8s-tests-on-tdx:
@@ -223,6 +221,8 @@ jobs:
pull-type:
- guest-pull
runs-on: ubuntu-22.04
permissions:
id-token: write # Used for OIDC access to log into Azure
environment: ci
env:
DOCKER_REGISTRY: ${{ inputs.registry }}
@@ -268,9 +268,6 @@ jobs:
- name: Install kata
run: bash tests/integration/kubernetes/gha-run.sh install-kata-tools kata-artifacts
- name: Download Azure CLI
run: bash tests/integration/kubernetes/gha-run.sh install-azure-cli
- name: Log into the Azure account
uses: azure/login@a457da9ea143d694b1b9c7c869ebb04ebe844ef5 # v2.3.0
with:

View File

@@ -29,9 +29,7 @@ on:
AZ_SUBSCRIPTION_ID:
required: true
permissions:
contents: read
id-token: write
permissions: {}
jobs:
run-kata-deploy-tests:
@@ -50,6 +48,8 @@ jobs:
vmm: clh
runs-on: ubuntu-22.04
environment: ci
permissions:
id-token: write # Used for OIDC access to log into Azure
env:
DOCKER_REGISTRY: ${{ inputs.registry }}
DOCKER_REPO: ${{ inputs.repo }}
@@ -72,9 +72,6 @@ jobs:
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Download Azure CLI
run: bash tests/functional/kata-deploy/gha-run.sh install-azure-cli
- name: Log into the Azure account
uses: azure/login@a457da9ea143d694b1b9c7c869ebb04ebe844ef5 # v2.3.0
with:

View File

@@ -22,8 +22,7 @@ on:
type: string
default: ""
permissions:
contents: read
permissions: {}
jobs:
run-kata-deploy-tests:

View File

@@ -13,8 +13,7 @@ on:
type: string
default: ""
permissions:
contents: read
permissions: {}
jobs:
run-monitor:

View File

@@ -22,8 +22,7 @@ on:
type: string
default: ""
permissions:
contents: read
permissions: {}
jobs:
run-metrics:

View File

@@ -13,8 +13,7 @@ on:
type: string
default: ""
permissions:
contents: read
permissions: {}
jobs:
run-runk:

View File

@@ -10,8 +10,7 @@ on:
- reopened
- synchronize
permissions:
contents: read
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}

View File

@@ -11,8 +11,7 @@ on:
- reopened
- synchronize
permissions:
contents: read
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}

View File

@@ -4,8 +4,7 @@ on:
- cron: '0 0 * * *'
workflow_dispatch:
permissions:
contents: read
permissions: {}
jobs:
stale:

View File

@@ -6,8 +6,7 @@ on:
- reopened
- labeled # a workflow runs only when the 'ok-to-test' label is added
permissions:
contents: read
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}

View File

@@ -7,8 +7,7 @@ on:
- synchronize
workflow_dispatch:
permissions:
contents: read
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
@@ -150,3 +149,36 @@ jobs:
needs: skipper
if: ${{ needs.skipper.outputs.skip_static != 'yes' }}
uses: ./.github/workflows/govulncheck.yaml
codegen:
runs-on: ubuntu-22.04
needs: skipper
if: ${{ needs.skipper.outputs.skip_static != 'yes' }}
permissions:
contents: read # for checkout
steps:
- name: Checkout code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: false
- name: generate
run: make -C src/agent generate-protocols
- name: check for diff
run: |
diff=$(git diff)
if [[ -z "${diff}" ]]; then
echo "No diff detected."
exit 0
fi
cat << EOF >> "${GITHUB_STEP_SUMMARY}"
Run \`make -C src/agent generate-protocols\` to update protobuf bindings.
\`\`\`diff
${diff}
\`\`\`
EOF
echo "::error::Golang protobuf bindings need to be regenerated (see Github step summary for diff)."
exit 1

View File

@@ -5,8 +5,7 @@ on:
branches: ["main"]
pull_request:
permissions:
contents: read
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}

View File

@@ -42,7 +42,7 @@ generate-protocols:
# Some static checks rely on generated source files of components.
static-checks: static-checks-build
bash tests/static-checks.sh github.com/kata-containers/kata-containers
bash tests/static-checks.sh
docs-url-alive-check:
bash ci/docs-url-alive-check.sh

View File

@@ -1 +1 @@
3.19.1
3.20.0

View File

@@ -306,7 +306,7 @@ tarball to the newly created VM that will be used for debugging purposes.
> [!NOTE]
> Those artifacts are only available (for 15 days) when all jobs are finished.
Once you have the `kata-static.tar.xz` in your VM, you can login to the VM with
Once you have the `kata-static.tar.zst` in your VM, you can login to the VM with
`kcli ssh debug-nerdctl-pr8070`, go ahead and then clone your development branch
```bash
@@ -323,15 +323,15 @@ $ git config --global user.name "Your Name"
$ git rebase upstream/main
```
Now copy the `kata-static.tar.xz` into your `kata-containers/kata-artifacts` directory
Now copy the `kata-static.tar.zst` into your `kata-containers/kata-artifacts` directory
```bash
$ mkdir kata-artifacts
$ cp ../kata-static.tar.xz kata-artifacts/
$ cp ../kata-static.tar.zst kata-artifacts/
```
> [!NOTE]
> If you downloaded the .zip from GitHub you need to uncompress first to see `kata-static.tar.xz`
> If you downloaded the .zip from GitHub you need to uncompress first to see `kata-static.tar.zst`
And finally run the tests following what's in the yaml file for the test you're
debugging.
@@ -363,11 +363,11 @@ and have fun debugging and hacking!
Steps for debugging the Kubernetes tests are very similar to the ones for
debugging non-Kubernetes tests, with the caveat that what you'll need, this
time, is not the `kata-static.tar.xz` tarball, but rather a payload to be used
time, is not the `kata-static.tar.zst` tarball, but rather a payload to be used
with kata-deploy.
In order to generate your own kata-deploy image you can generate your own
`kata-static.tar.xz` and then take advantage of the following script. Be aware
`kata-static.tar.zst` and then take advantage of the following script. Be aware
that the image generated and uploaded must be accessible by the VM where you'll
be performing your tests.

View File

@@ -89,16 +89,16 @@ However, if any of these components are absent, they must be built from the
$ # Assume that the project is cloned at $GOPATH/src/github.com/kata-containers
$ cd $GOPATH/src/github.com/kata-containers/kata-containers
$ make rootfs-initrd-confidential-tarball
$ tar -tf build/kata-static-kernel-confidential.tar.xz | grep vmlinuz
$ tar --zstd -tf build/kata-static-kernel-confidential.tar.zst | grep vmlinuz
./opt/kata/share/kata-containers/vmlinuz-confidential.container
./opt/kata/share/kata-containers/vmlinuz-6.7-136-confidential
$ kernel_version=6.7-136
$ tar -tf build/kata-static-rootfs-initrd-confidential.tar.xz | grep initrd
$ tar --zstd -tf build/kata-static-rootfs-initrd-confidential.tar.zst | grep initrd
./opt/kata/share/kata-containers/kata-containers-initrd-confidential.img
./opt/kata/share/kata-containers/kata-ubuntu-20.04-confidential.initrd
$ mkdir artifacts
$ tar -xvf build/kata-static-kernel-confidential.tar.xz -C artifacts ./opt/kata/share/kata-containers/vmlinuz-${kernel_version}-confidential
$ tar -xvf build/kata-static-rootfs-initrd-confidential.tar.xz -C artifacts ./opt/kata/share/kata-containers/kata-ubuntu-20.04-confidential.initrd
$ tar --zstd -xvf build/kata-static-kernel-confidential.tar.zst -C artifacts ./opt/kata/share/kata-containers/vmlinuz-${kernel_version}-confidential
$ tar --zstd -xvf build/kata-static-rootfs-initrd-confidential.tar.zst -C artifacts ./opt/kata/share/kata-containers/kata-ubuntu-20.04-confidential.initrd
$ ls artifacts/opt/kata/share/kata-containers/
kata-ubuntu-20.04-confidential.initrd vmlinuz-${kernel_version}-confidential
```
@@ -190,8 +190,8 @@ can be easily accomplished by issuing the following make target:
$ cd $GOPATH/src/github.com/kata-containers/kata-containers
$ mkdir hkd_dir && cp $host_key_document hkd_dir
$ HKD_PATH=hkd_dir SE_KERNEL_PARAMS="agent.log=debug" make boot-image-se-tarball
$ ls build/kata-static-boot-image-se.tar.xz
build/kata-static-boot-image-se.tar.xz
$ ls build/kata-static-boot-image-se.tar.zst
build/kata-static-boot-image-se.tar.zst
```
`SE_KERNEL_PARAMS` could be used to add any extra kernel parameters. If no additional kernel configuration is required, this can be omitted.
@@ -344,18 +344,18 @@ $ make virtiofsd-tarball
$ make shim-v2-tarball
$ mkdir kata-artifacts
$ build_dir=$(readlink -f build)
$ cp -r $build_dir/*.tar.xz kata-artifacts
$ cp -r $build_dir/*.tar.zst kata-artifacts
$ ls -1 kata-artifacts
kata-static-agent.tar.xz
kata-static-boot-image-se.tar.xz
kata-static-coco-guest-components.tar.xz
kata-static-kernel-confidential-modules.tar.xz
kata-static-kernel-confidential.tar.xz
kata-static-pause-image.tar.xz
kata-static-qemu.tar.xz
kata-static-rootfs-initrd-confidential.tar.xz
kata-static-shim-v2.tar.xz
kata-static-virtiofsd.tar.xz
kata-static-agent.tar.zst
kata-static-boot-image-se.tar.zst
kata-static-coco-guest-components.tar.zst
kata-static-kernel-confidential-modules.tar.zst
kata-static-kernel-confidential.tar.zst
kata-static-pause-image.tar.zst
kata-static-qemu.tar.zst
kata-static-rootfs-initrd-confidential.tar.zst
kata-static-shim-v2.tar.zst
kata-static-virtiofsd.tar.zst
$ ./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts
```
@@ -369,7 +369,7 @@ command before running `kata-deploy-merge-builds.sh`:
$ make rootfs-image-tarball
```
At this point, you should have an archive file named `kata-static.tar.xz` at the project root,
At this point, you should have an archive file named `kata-static.tar.zst` at the project root,
which will be used to build a payload image. If you are using a local container registry at
`localhost:5000`, proceed with the following:
@@ -381,7 +381,7 @@ Build and push a payload image with the name `localhost:5000/build-kata-deploy`
`latest` using the following:
```
$ ./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh kata-static.tar.xz localhost:5000/build-kata-deploy latest
$ ./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh kata-static.tar.zst localhost:5000/build-kata-deploy latest
... logs ...
Pushing the image localhost:5000/build-kata-deploy:latest to the registry
The push refers to repository [localhost:5000/build-kata-deploy]

src/agent/Cargo.lock (generated)
View File

@@ -508,6 +508,15 @@ dependencies = [
"wyz",
]
[[package]]
name = "block-buffer"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4152116fd6e9dadb291ae18fc1ec3575ed6d84c29642d97890f4b4a3417297e4"
dependencies = [
"generic-array",
]
[[package]]
name = "block-buffer"
version = "0.10.4"
@@ -889,6 +898,16 @@ dependencies = [
"typenum",
]
[[package]]
name = "crypto-mac"
version = "0.11.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b1d1a86f49236c215f271d40892d5fc950490551400b02ef360692c29815c714"
dependencies = [
"generic-array",
"subtle",
]
[[package]]
name = "darling"
version = "0.14.4"
@@ -1033,13 +1052,22 @@ dependencies = [
"syn 2.0.101",
]
[[package]]
name = "digest"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d3dd60d1080a57a05ab032377049e0591415d2b31afd7028356dbf3cc6dcb066"
dependencies = [
"generic-array",
]
[[package]]
name = "digest"
version = "0.10.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9ed9a281f7bc9b7576e61468ba615a66a5c8cfdff42420a70aa82701a3b1e292"
dependencies = [
"block-buffer",
"block-buffer 0.10.4",
"crypto-common",
]
@@ -1543,6 +1571,16 @@ version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70"
[[package]]
name = "hmac"
version = "0.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2a2a2320eb7ec0ebe8da8f744d7812d9fc4cb4d09344ac01898dbcb6a20ae69b"
dependencies = [
"crypto-mac",
"digest 0.9.0",
]
[[package]]
name = "home"
version = "0.5.9"
@@ -2049,7 +2087,7 @@ dependencies = [
"serde",
"serde_json",
"serial_test",
"sha2",
"sha2 0.10.9",
"slog",
"slog-scope",
"slog-stdlog",
@@ -2133,7 +2171,7 @@ dependencies = [
"serde",
"serde-enum-str",
"serde_json",
"sha2",
"sha2 0.10.9",
"slog",
"slog-scope",
"sysinfo",
@@ -2210,6 +2248,23 @@ version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9a7cbbd4ad467251987c6e5b47d53b11a5a05add08f2447a9e2d70aef1e0d138"
[[package]]
name = "libsystemd"
version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6f4f0b5b062ba67aa075e331de778082c09e66b5ef32970ea5a1e9c37c9555d1"
dependencies = [
"hmac",
"libc",
"log",
"nix 0.23.2",
"once_cell",
"serde",
"sha2 0.9.9",
"thiserror 1.0.69",
"uuid 0.8.2",
]
[[package]]
name = "libz-sys"
version = "1.1.22"
@@ -2273,6 +2328,7 @@ dependencies = [
"serde_json",
"slog",
"slog-async",
"slog-journald",
"slog-json",
"slog-scope",
"slog-term",
@@ -2734,6 +2790,12 @@ version = "1.21.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "42f5e15c9953c5e4ccceeb2e7382a716482c34515315f7b03532b8b4e8393d2d"
[[package]]
name = "opaque-debug"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c08d65885ee38876c4f86fa503fb49d7b507c2b62552df7c70b2fce627e06381"
[[package]]
name = "opentelemetry"
version = "0.14.0"
@@ -3498,7 +3560,7 @@ dependencies = [
"rkyv_derive",
"seahash",
"tinyvec",
"uuid",
"uuid 1.16.0",
]
[[package]]
@@ -3911,7 +3973,20 @@ checksum = "e3bf829a2d51ab4a5ddf1352d8470c140cadc8301b2ae1789db023f01cedd6ba"
dependencies = [
"cfg-if",
"cpufeatures",
"digest",
"digest 0.10.7",
]
[[package]]
name = "sha2"
version = "0.9.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4d58a1e1bf39749807d89cf2d98ac2dfa0ff1cb3faa38fbb64dd88ac8013d800"
dependencies = [
"block-buffer 0.9.0",
"cfg-if",
"cpufeatures",
"digest 0.9.0",
"opaque-debug",
]
[[package]]
@@ -3922,7 +3997,7 @@ checksum = "a7507d819769d01a365ab707794a4084392c824f54a7a6a7862f8c3d0892b283"
dependencies = [
"cfg-if",
"cpufeatures",
"digest",
"digest 0.10.7",
]
[[package]]
@@ -3994,6 +4069,16 @@ dependencies = [
"thread_local",
]
[[package]]
name = "slog-journald"
version = "2.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "83e14eb8c2f5d0c8fc9fbac40e6391095e4dc5cb334f7dce99c75cb1919eb39c"
dependencies = [
"libsystemd",
"slog",
]
[[package]]
name = "slog-json"
version = "2.6.1"
@@ -4133,6 +4218,12 @@ dependencies = [
"winapi",
]
[[package]]
name = "subtle"
version = "2.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6bdef32e8150c2a081110b42772ffe7d7c9032b606bc226c8260fd97e0976601"
[[package]]
name = "syn"
version = "1.0.109"
@@ -4694,6 +4785,15 @@ version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "06abde3611657adf66d383f00b093d7faecc7fa57071cce2578660c9f1010821"
[[package]]
name = "uuid"
version = "0.8.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bc5cf98d8186244414c848017f0e2676b3fcb46807f6668a97dfe67359a3c4b7"
dependencies = [
"serde",
]
[[package]]
name = "uuid"
version = "1.16.0"
@@ -4707,7 +4807,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "23b082222b4f6619906941c17eb2297fff4c2fb96cb60164170522942a200bd8"
dependencies = [
"outref",
"uuid",
"uuid 1.16.0",
"vsimd",
]

View File

@@ -217,4 +217,11 @@ codecov-html: check_tarpaulin
##TARGET generate-protocols: generate/update grpc agent protocols
generate-protocols:
image=$$(docker build -q \
--build-arg GO_VERSION=$$(yq '.languages.golang.version' $(CURDIR)/../../versions.yaml) \
--build-arg PROTOC_VERSION=$$(yq '.externals.protoc.version' $(CURDIR)/../../versions.yaml | grep -oE "[0-9.]+") \
--build-arg PROTOC_GEN_GO_VERSION=$$(yq '.externals.protoc-gen-go.version' $(CURDIR)/../../versions.yaml) \
--build-arg TTRPC_VERSION=$$(yq '.externals.ttrpc.version' $(CURDIR)/../../versions.yaml) \
$(CURDIR)/../../tools/packaging/static-build/codegen) && \
docker run --rm --workdir /kata/src/agent -v $(CURDIR)/../..:/kata --user $(shell id -u) $$image \
../libs/protocols/hack/update-generated-proto.sh all

View File

@@ -22,6 +22,8 @@ use protocols::{
};
use safe_path::scoped_join;
use std::fs;
use std::fs::File;
use std::io::{self, Read};
use std::path::Path;
use std::{os::unix::fs::symlink, path::PathBuf};
use tokio::sync::OnceCell;
@@ -235,8 +237,8 @@ pub async fn unseal_file(path: &str) -> Result<()> {
}
let secret_name = entry.file_name();
let contents = fs::read_to_string(&target_path)?;
if contents.starts_with(SEALED_SECRET_PREFIX) {
if content_starts_with_prefix(&target_path, SEALED_SECRET_PREFIX).await? {
let contents = fs::read_to_string(&target_path)?;
// Get the directory name of the sealed secret file
let dir_name = target_path
.parent()
@@ -262,6 +264,17 @@ pub async fn unseal_file(path: &str) -> Result<()> {
Ok(())
}
pub async fn content_starts_with_prefix(path: &Path, prefix: &str) -> io::Result<bool> {
let mut file = File::open(path)?;
let mut buffer = vec![0u8; prefix.len()];
match file.read_exact(&mut buffer) {
Ok(()) => Ok(buffer == prefix.as_bytes()),
Err(ref e) if e.kind() == io::ErrorKind::UnexpectedEof => Ok(false),
Err(e) => Err(e),
}
}
pub async fn secure_mount(
volume_type: &str,
options: &std::collections::HashMap<String, String>,
@@ -294,7 +307,7 @@ mod tests {
use std::fs::File;
use std::io::{Read, Write};
use std::sync::Arc;
use tempfile::tempdir;
use tempfile::{tempdir, NamedTempFile};
use test_utils::skip_if_not_root;
use tokio::signal::unix::{signal, SignalKind};
struct TestService;
@@ -416,4 +429,34 @@ mod tests {
rt.shutdown_background();
std::thread::sleep(std::time::Duration::from_secs(2));
}
#[tokio::test]
async fn test_content_starts_with_prefix() {
// Normal case: content matches the prefix
let mut f = NamedTempFile::new().unwrap();
write!(f, "sealed.hello_world").unwrap();
assert!(content_starts_with_prefix(f.path(), "sealed.")
.await
.unwrap());
// Does not match the prefix
let mut f2 = NamedTempFile::new().unwrap();
write!(f2, "notsealed.hello_world").unwrap();
assert!(!content_starts_with_prefix(f2.path(), "sealed.")
.await
.unwrap());
// File length < prefix.len()
let mut f3 = NamedTempFile::new().unwrap();
write!(f3, "seal").unwrap();
assert!(!content_starts_with_prefix(f3.path(), "sealed.")
.await
.unwrap());
// Empty file
let f4 = NamedTempFile::new().unwrap();
assert!(!content_starts_with_prefix(f4.path(), "sealed.")
.await
.unwrap());
}
}

View File

@@ -34,8 +34,7 @@ pub fn is_ephemeral_volume(mount: &Mount) -> bool {
mount.destination(),
),
(Some("bind"), Some(source), dest) if get_linux_mount_info(source)
.map_or(false, |info| info.fs_type == "tmpfs") &&
(Some("bind"), Some(source), dest) if get_linux_mount_info(source).is_ok_and(|info| info.fs_type == "tmpfs") &&
(is_empty_dir(source) || dest.as_path() == Path::new("/dev/shm"))
)
}

View File
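The change above swaps `Result::map_or(false, ...)` for the more direct `is_ok_and(...)`. A minimal sketch showing the two forms are equivalent for this check; the `fs_type` helper is a made-up stand-in for `get_linux_mount_info`, not the real function:

```rust
fn fs_type(path: &str) -> Result<String, std::io::Error> {
    // Hypothetical stand-in for get_linux_mount_info(): pretend /dev/shm is tmpfs.
    if path == "/dev/shm" {
        Ok("tmpfs".to_string())
    } else {
        Err(std::io::Error::from(std::io::ErrorKind::NotFound))
    }
}

fn main() {
    let old_style = fs_type("/dev/shm").map_or(false, |t| t == "tmpfs");
    let new_style = fs_type("/dev/shm").is_ok_and(|t| t == "tmpfs");
    // Both evaluate to true; is_ok_and expresses "Ok and predicate holds" directly.
    assert_eq!(old_style, new_style);
    assert!(new_style);
}
```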

@@ -823,11 +823,11 @@ mod tests {
#[test]
fn test_get_linux_mount_info() {
let info = get_linux_mount_info("/sys/fs/cgroup").unwrap();
let info = get_linux_mount_info("/dev/shm").unwrap();
assert_eq!(&info.device, "tmpfs");
assert_eq!(&info.fs_type, "tmpfs");
assert_eq!(&info.path, "/sys/fs/cgroup");
assert_eq!(&info.path, "/dev/shm");
assert!(matches!(
get_linux_mount_info(""),

View File

@@ -168,7 +168,7 @@ pub fn is_valid_numa_cpu(cpus: &[u32]) -> Result<bool> {
let numa_nodes = get_numa_nodes()?;
for cpu in cpus {
if numa_nodes.get(cpu).is_none() {
if !numa_nodes.contains_key(cpu) {
return Ok(false);
}
}

View File

@@ -66,7 +66,7 @@ impl PCIDevices for NvidiaPCIDevice {
}
}
return nvidia_devices;
nvidia_devices
}
}

View File

@@ -7,7 +7,7 @@
use std::collections::HashMap;
use std::fs;
use std::io;
use std::path::PathBuf;
use std::path::{Path, PathBuf};
use mockall::automock;
use pci_ids::{Classes, Vendors};
@@ -61,24 +61,22 @@ pub(crate) trait MemoryResourceTrait {
impl MemoryResourceTrait for MemoryResources {
fn get_total_addressable_memory(&self, round_up: bool) -> (u64, u64) {
let mut num_bar = 0;
let mut mem_size_32bit = 0u64;
let mut mem_size_64bit = 0u64;
let mut keys: Vec<_> = self.keys().cloned().collect();
keys.sort();
for key in keys {
if key as usize >= PCI_IOV_NUM_BAR || num_bar == PCI_IOV_NUM_BAR {
for (num_bar, key) in keys.into_iter().enumerate() {
if key >= PCI_IOV_NUM_BAR || num_bar == PCI_IOV_NUM_BAR {
break;
}
num_bar += 1;
if let Some(region) = self.get(&key) {
let flags = region.flags & PCI_BASE_ADDRESS_MEM_TYPE_MASK;
let mem_type_32bit = flags == PCI_BASE_ADDRESS_MEM_TYPE32;
let mem_type_64bit = flags == PCI_BASE_ADDRESS_MEM_TYPE64;
let mem_size = (region.end - region.start + 1) as u64;
let mem_size = region.end - region.start + 1;
if mem_type_32bit {
mem_size_32bit += mem_size;
@@ -138,10 +136,10 @@ impl PCIDeviceManager {
for entry in device_dirs {
let device_dir = entry?;
let device_address = device_dir.file_name().to_string_lossy().to_string();
if let Ok(device) = self.get_device_by_pci_bus_id(&device_address, vendor, &mut cache) {
if let Some(dev) = device {
pci_devices.push(dev);
}
if let Ok(Some(dev)) =
self.get_device_by_pci_bus_id(&device_address, vendor, &mut cache)
{
pci_devices.push(dev);
}
}
@@ -238,7 +236,7 @@ impl PCIDeviceManager {
Ok(Some(pci_device))
}
fn parse_resources(&self, device_path: &PathBuf) -> io::Result<MemoryResources> {
fn parse_resources(&self, device_path: &Path) -> io::Result<MemoryResources> {
let content = fs::read_to_string(device_path.join("resource"))?;
let mut resources: MemoryResources = MemoryResources::new();
for (i, line) in content.lines().enumerate() {
@@ -405,6 +403,8 @@ mod tests {
#[test]
fn test_parse_resources() {
setup_mock_device_files();
let manager = PCIDeviceManager::new(MOCK_PCI_DEVICES_ROOT);
let device_path = PathBuf::from(MOCK_PCI_DEVICES_ROOT).join("0000:ff:1f.0");
@@ -418,6 +418,8 @@ mod tests {
assert_eq!(resource.start, 0x00000000);
assert_eq!(resource.end, 0x0000ffff);
assert_eq!(resource.flags, 0x00000404);
cleanup_mock_device_files();
}
#[test]
@@ -435,10 +437,7 @@ mod tests {
file.write_all(&vec![0; 512]).unwrap();
// It should be true
assert!(is_pcie_device(
&format!("ff:00.0"),
MOCK_SYS_BUS_PCI_DEVICES
));
assert!(is_pcie_device("ff:00.0", MOCK_SYS_BUS_PCI_DEVICES));
// Clean up
let _ = fs::remove_file(config_path);

View File

@@ -142,14 +142,11 @@ pub fn arch_guest_protection(
#[allow(dead_code)]
pub fn available_guest_protection() -> Result<GuestProtection, ProtectionError> {
if !Uid::effective().is_root() {
return Err(ProtectionError::NoPerms)?;
Err(ProtectionError::NoPerms)?;
}
let facilities = crate::cpu::retrieve_cpu_facilities().map_err(|err| {
ProtectionError::CheckFailed(format!(
"Error retrieving cpu facilities file : {}",
err.to_string()
))
ProtectionError::CheckFailed(format!("Error retrieving cpu facilities file : {}", err))
})?;
// Secure Execution

View File

@@ -8,7 +8,6 @@ use std::collections::HashMap;
use std::fs::File;
use std::io::{self, BufReader, Result};
use std::result::{self};
use std::u32;
use serde::Deserialize;
@@ -463,12 +462,12 @@ impl Annotation {
/// update config info by annotation
pub fn update_config_by_annotation(&self, config: &mut TomlConfig) -> Result<()> {
if let Some(hv) = self.annotations.get(KATA_ANNO_CFG_RUNTIME_HYPERVISOR) {
if config.hypervisor.get(hv).is_some() {
if config.hypervisor.contains_key(hv) {
config.runtime.hypervisor_name = hv.to_string();
}
}
if let Some(ag) = self.annotations.get(KATA_ANNO_CFG_RUNTIME_AGENT) {
if config.agent.get(ag).is_some() {
if config.agent.contains_key(ag) {
config.runtime.agent_name = ag.to_string();
}
}
@@ -635,13 +634,13 @@ impl Annotation {
KATA_ANNO_CFG_HYPERVISOR_CPU_FEATURES => {
hv.cpu_info.cpu_features = value.to_string();
}
KATA_ANNO_CFG_HYPERVISOR_DEFAULT_VCPUS => match self.get_value::<i32>(key) {
KATA_ANNO_CFG_HYPERVISOR_DEFAULT_VCPUS => match self.get_value::<f32>(key) {
Ok(num_cpus) => {
let num_cpus = num_cpus.unwrap_or_default();
if num_cpus
> get_hypervisor_plugin(hypervisor_name)
.unwrap()
.get_max_cpus() as i32
.get_max_cpus() as f32
{
return Err(io::Error::new(
io::ErrorKind::InvalidData,
@@ -944,8 +943,7 @@ impl Annotation {
}
}
KATA_ANNO_CFG_HYPERVISOR_VIRTIO_FS_EXTRA_ARGS => {
let args: Vec<String> =
value.to_string().split(',').map(str::to_string).collect();
let args: Vec<String> = value.split(',').map(str::to_string).collect();
for arg in args {
hv.shared_fs.virtio_fs_extra_args.push(arg.to_string());
}
@@ -971,7 +969,7 @@ impl Annotation {
// update agent config
KATA_ANNO_CFG_KERNEL_MODULES => {
let kernel_mod: Vec<String> =
value.to_string().split(';').map(str::to_string).collect();
value.split(';').map(str::to_string).collect();
for modules in kernel_mod {
ag.kernel_modules.push(modules.to_string());
}
@@ -992,14 +990,16 @@ impl Annotation {
return Err(u32_err);
}
},
KATA_ANNO_CFG_RUNTIME_CREATE_CONTAINTER_TIMEOUT => match self.get_value::<u32>(key) {
Ok(v) => {
ag.request_timeout_ms = v.unwrap_or_default() * 1000;
KATA_ANNO_CFG_RUNTIME_CREATE_CONTAINTER_TIMEOUT => {
match self.get_value::<u32>(key) {
Ok(v) => {
ag.request_timeout_ms = v.unwrap_or_default() * 1000;
}
Err(_e) => {
return Err(u32_err);
}
}
Err(_e) => {
return Err(u32_err);
}
},
}
// update runtime config
KATA_ANNO_CFG_RUNTIME_NAME => {
let runtime = vec!["virt-container", "linux-container", "wasm-container"];
@@ -1032,8 +1032,7 @@ impl Annotation {
}
},
KATA_ANNO_CFG_EXPERIMENTAL => {
let args: Vec<String> =
value.to_string().split(',').map(str::to_string).collect();
let args: Vec<String> = value.split(',').map(str::to_string).collect();
for arg in args {
config.runtime.experimental.push(arg.to_string());
}
@@ -1079,6 +1078,9 @@ impl Annotation {
}
}
}
config.adjust_config()?;
Ok(())
}
}

View File

@@ -6,6 +6,7 @@
use std::io::Result;
use crate::config::{ConfigOps, TomlConfig};
use serde::{Deserialize, Deserializer};
pub use vendor::AgentVendor;
@@ -115,7 +116,11 @@ pub struct Agent {
/// This timeout value is used to set the maximum duration for the agent to process a CreateContainerRequest.
/// It's also used to ensure that workloads, especially those involving large image pulls within the guest,
/// have sufficient time to complete.
#[serde(default = "default_request_timeout", rename = "create_container_timeout")]
#[serde(
default = "default_request_timeout",
rename = "create_container_timeout",
deserialize_with = "deserialize_secs_to_millis"
)]
pub request_timeout_ms: u32,
/// Agent health check request timeout value in millisecond
@@ -127,12 +132,12 @@ pub struct Agent {
/// These modules will be loaded in the guest kernel using modprobe(8).
/// The following example can be used to load two kernel modules with parameters:
/// - kernel_modules=["e1000e InterruptThrottleRate=3000,3000,3000 EEE=1", "i915 enable_ppgtt=0"]
/// The first word is considered as the module name and the rest as its parameters.
/// Container will not be started when:
/// The first word is considered as the module name and the rest as its parameters.
/// Container will not be started when:
/// - A kernel module is specified and the modprobe command is not installed in the guest
/// or it fails loading the module.
/// - The module is not available in the guest or it doesn't met the guest kernel
/// requirements, like architecture and version.
/// requirements, like architecture and version.
#[serde(default)]
pub kernel_modules: Vec<String>,
@@ -202,6 +207,15 @@ fn default_health_check_timeout() -> u32 {
90_000
}
fn deserialize_secs_to_millis<'de, D>(deserializer: D) -> std::result::Result<u32, D::Error>
where
D: Deserializer<'de>,
{
let secs = u32::deserialize(deserializer)?;
Ok(secs.saturating_mul(1000))
}
impl Agent {
fn validate(&self) -> Result<()> {
if self.dial_timeout_ms == 0 {

View File
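The deserializer added above stores `create_container_timeout` (written in seconds in the TOML) into `request_timeout_ms` as milliseconds, using `saturating_mul` so an extreme value clamps at `u32::MAX` instead of wrapping. A standalone sketch of the same pattern; the struct and field names here are illustrative, not the real `Agent` type:

```rust
use serde::{Deserialize, Deserializer};

fn secs_to_millis<'de, D>(deserializer: D) -> Result<u32, D::Error>
where
    D: Deserializer<'de>,
{
    let secs = u32::deserialize(deserializer)?;
    // saturating_mul keeps a huge value at u32::MAX rather than overflowing.
    Ok(secs.saturating_mul(1000))
}

#[derive(Deserialize)]
struct AgentSketch {
    #[serde(rename = "create_container_timeout", deserialize_with = "secs_to_millis")]
    request_timeout_ms: u32,
}

fn main() {
    let cfg: AgentSketch = toml::from_str("create_container_timeout = 60").unwrap();
    assert_eq!(cfg.request_timeout_ms, 60_000); // 60 s -> 60 000 ms
}
```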

@@ -369,7 +369,7 @@ mod drop_in_directory_handling {
config.hypervisor["qemu"].path,
"/usr/bin/qemu-kvm".to_string()
);
assert_eq!(config.hypervisor["qemu"].cpu_info.default_vcpus, 2);
assert_eq!(config.hypervisor["qemu"].cpu_info.default_vcpus, 2.0);
assert_eq!(config.hypervisor["qemu"].device_info.default_bridges, 4);
assert_eq!(
config.hypervisor["qemu"].shared_fs.shared_fs.as_deref(),

View File

@@ -109,7 +109,7 @@ impl ConfigPlugin for CloudHypervisorConfig {
return Err(eother!("Both guest boot image and initrd for CH are empty"));
}
if (ch.cpu_info.default_vcpus > 0
if (ch.cpu_info.default_vcpus > 0.0
&& ch.cpu_info.default_vcpus as u32 > default::MAX_CH_VCPUS)
|| ch.cpu_info.default_maxvcpus > default::MAX_CH_VCPUS
{

View File

@@ -6,7 +6,6 @@
use std::io::Result;
use std::path::Path;
use std::sync::Arc;
use std::u32;
use super::{default, register_hypervisor_plugin};
use crate::config::default::MAX_DRAGONBALL_VCPUS;
@@ -66,7 +65,7 @@ impl ConfigPlugin for DragonballConfig {
}
if db.cpu_info.default_vcpus as u32 > db.cpu_info.default_maxvcpus {
db.cpu_info.default_vcpus = db.cpu_info.default_maxvcpus as i32;
db.cpu_info.default_vcpus = db.cpu_info.default_maxvcpus as f32;
}
if db.machine_info.entropy_source.is_empty() {
@@ -135,7 +134,7 @@ impl ConfigPlugin for DragonballConfig {
));
}
if (db.cpu_info.default_vcpus > 0
if (db.cpu_info.default_vcpus > 0.0
&& db.cpu_info.default_vcpus as u32 > default::MAX_DRAGONBALL_VCPUS)
|| db.cpu_info.default_maxvcpus > default::MAX_DRAGONBALL_VCPUS
{

View File

@@ -93,7 +93,7 @@ impl ConfigPlugin for FirecrackerConfig {
));
}
if (firecracker.cpu_info.default_vcpus > 0
if (firecracker.cpu_info.default_vcpus > 0.0
&& firecracker.cpu_info.default_vcpus as u32 > default::MAX_FIRECRACKER_VCPUS)
|| firecracker.cpu_info.default_maxvcpus > default::MAX_FIRECRACKER_VCPUS
{

File diff suppressed because it is too large.

View File

@@ -128,7 +128,7 @@ impl ConfigPlugin for QemuConfig {
}
}
if (qemu.cpu_info.default_vcpus > 0
if (qemu.cpu_info.default_vcpus > 0.0
&& qemu.cpu_info.default_vcpus as u32 > default::MAX_QEMU_VCPUS)
|| qemu.cpu_info.default_maxvcpus > default::MAX_QEMU_VCPUS
{

View File

@@ -9,7 +9,6 @@ use std::fs;
use std::io::{self, Result};
use std::path::{Path, PathBuf};
use std::sync::{Arc, Mutex};
use std::u32;
use lazy_static::lazy_static;
@@ -131,9 +130,7 @@ impl TomlConfig {
pub fn load_from_file<P: AsRef<Path>>(config_file: P) -> Result<(TomlConfig, PathBuf)> {
let mut result = Self::load_raw_from_file(config_file);
if let Ok((ref mut config, _)) = result {
Hypervisor::adjust_config(config)?;
Runtime::adjust_config(config)?;
Agent::adjust_config(config)?;
config.adjust_config()?;
info!(sl!(), "get kata config: {:?}", config);
}
@@ -175,13 +172,20 @@ impl TomlConfig {
/// drop-in config file fragments in config.d/.
pub fn load(content: &str) -> Result<TomlConfig> {
let mut config: TomlConfig = toml::from_str(content)?;
Hypervisor::adjust_config(&mut config)?;
Runtime::adjust_config(&mut config)?;
Agent::adjust_config(&mut config)?;
config.adjust_config()?;
info!(sl!(), "get kata config: {:?}", config);
Ok(config)
}
/// Adjust Kata configuration information.
pub fn adjust_config(&mut self) -> Result<()> {
Hypervisor::adjust_config(self)?;
Runtime::adjust_config(self)?;
Agent::adjust_config(self)?;
Ok(())
}
/// Validate Kata configuration information.
pub fn validate(&self) -> Result<()> {
Hypervisor::validate(self)?;

View File
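With the refactor above, the per-component adjustment calls are consolidated behind `TomlConfig::adjust_config`, so code that mutates an already-loaded configuration (for example after applying annotation overrides) can re-run the same step before validating. A hedged sketch of that call order, using a simplified local stand-in rather than the real `TomlConfig`:

```rust
use std::io::{Error, ErrorKind, Result};

// Minimal stand-in for kata_types::config::TomlConfig, just to show the call order.
#[derive(Debug, Default)]
struct TomlConfigSketch {
    hypervisor_name: String,
}

impl TomlConfigSketch {
    fn load(content: &str) -> Result<Self> {
        if content.is_empty() {
            return Err(Error::new(ErrorKind::InvalidData, "empty config"));
        }
        let mut config = Self { hypervisor_name: content.trim().to_string() };
        // Like the real load(), parsing is immediately followed by adjustment.
        config.adjust_config()?;
        Ok(config)
    }

    // Consolidated adjustment step (the real one delegates to
    // Hypervisor/Runtime/Agent::adjust_config).
    fn adjust_config(&mut self) -> Result<()> {
        if self.hypervisor_name.is_empty() {
            self.hypervisor_name = "qemu".to_string();
        }
        Ok(())
    }

    fn validate(&self) -> Result<()> {
        Ok(())
    }
}

fn main() -> Result<()> {
    let mut config = TomlConfigSketch::load("dragonball")?;
    // Callers that mutate the config afterwards (e.g. annotation overrides)
    // re-run the same consolidated step before validating.
    config.adjust_config()?;
    config.validate()?;
    Ok(())
}
```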

@@ -0,0 +1,15 @@
// Copyright 2025 Kata Contributors
//
// SPDX-License-Identifier: Apache-2.0
//
//! Filesystem-related constants shared across Kata components.
/// Root filesystem type: ext4
pub const VM_ROOTFS_FILESYSTEM_EXT4: &str = "ext4";
/// Root filesystem type: xfs
pub const VM_ROOTFS_FILESYSTEM_XFS: &str = "xfs";
/// Root filesystem type: erofs
pub const VM_ROOTFS_FILESYSTEM_EROFS: &str = "erofs";

View File
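A small sketch of how these shared constants might be consumed when choosing mount options for a block rootfs. Only the constant names and values come from the module above; the `nouuid` handling is an assumption for illustration:

```rust
// Shared rootfs filesystem type constants (as defined in kata-types::fs).
const VM_ROOTFS_FILESYSTEM_EXT4: &str = "ext4";
const VM_ROOTFS_FILESYSTEM_XFS: &str = "xfs";
const VM_ROOTFS_FILESYSTEM_EROFS: &str = "erofs";

// Hypothetical helper: decide extra mount options from the rootfs filesystem type.
fn rootfs_mount_options(fs_type: &str) -> Vec<&'static str> {
    match fs_type {
        // xfs images commonly need "nouuid" because cloned images share a UUID.
        VM_ROOTFS_FILESYSTEM_XFS => vec!["nouuid"],
        VM_ROOTFS_FILESYSTEM_EXT4 | VM_ROOTFS_FILESYSTEM_EROFS => vec![],
        other => panic!("unsupported rootfs filesystem: {other}"),
    }
}

fn main() {
    assert_eq!(rootfs_mount_options("xfs"), vec!["nouuid"]);
    assert!(rootfs_mount_options("ext4").is_empty());
}
```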

@@ -3,6 +3,7 @@
// SPDX-License-Identifier: Apache-2.0
//
use crate::sl;
use anyhow::{anyhow, Context, Result};
use flate2::read::GzDecoder;
use serde::{Deserialize, Serialize};
@@ -23,6 +24,8 @@ pub enum ProtectedPlatform {
Snp,
/// Cca platform for ARM CCA
Cca,
/// Se platform for IBM SEL
Se,
/// Default with no protection
#[default]
NoProtection,
@@ -129,20 +132,20 @@ fn calculate_digest(algorithm: &str, data: &str) -> Result<Vec<u8>> {
let digest = match algorithm {
"sha256" => {
let mut hasher = Sha256::new();
hasher.update(&data);
hasher.update(data);
hasher.finalize().to_vec()
}
"sha384" => {
let mut hasher = Sha384::new();
hasher.update(&data);
hasher.update(data);
hasher.finalize().to_vec()
}
"sha512" => {
let mut hasher = Sha512::new();
hasher.update(&data);
hasher.update(data);
hasher.finalize().to_vec()
}
_ => return Err(anyhow!("unsupported Hash algorithm: {}", algorithm).into()),
_ => return Err(anyhow!("unsupported Hash algorithm: {}", algorithm)),
};
Ok(digest)
@@ -154,6 +157,7 @@ fn adjust_digest(digest: &[u8], platform: ProtectedPlatform) -> Vec<u8> {
ProtectedPlatform::Tdx => 48,
ProtectedPlatform::Snp => 32,
ProtectedPlatform::Cca => 64,
ProtectedPlatform::Se => 256,
ProtectedPlatform::NoProtection => digest.len(),
};
@@ -172,7 +176,7 @@ fn adjust_digest(digest: &[u8], platform: ProtectedPlatform) -> Vec<u8> {
/// Parse initdata
fn parse_initdata(initdata_str: &str) -> Result<InitData> {
let initdata: InitData = toml::from_str(&initdata_str)?;
let initdata: InitData = toml::from_str(initdata_str)?;
initdata.validate()?;
Ok(initdata)
@@ -192,7 +196,7 @@ pub fn calculate_initdata_digest(
let algorithm: &str = &initdata.algorithm;
// 2. Calculate Digest
let digest = calculate_digest(algorithm, &initdata_toml).context("calculate digest")?;
let digest = calculate_digest(algorithm, initdata_toml).context("calculate digest")?;
// 3. Adjust Digest with Platform
let digest_platform = adjust_digest(&digest, platform);
@@ -203,12 +207,18 @@ pub fn calculate_initdata_digest(
Ok(b64encoded_digest)
}
/// The argument `initda_annotation` is a Standard base64 encoded string containing a TOML formatted content.
/// The argument `initdata_annotation` is a Standard base64 encoded string containing a TOML formatted content.
/// This function decodes the base64 string, parses the TOML content into an InitData structure.
pub fn add_hypervisor_initdata_overrides(initda_annotation: &str) -> Result<String> {
pub fn add_hypervisor_initdata_overrides(initdata_annotation: &str) -> Result<String> {
// If the initdata is empty, return an empty string
if initdata_annotation.is_empty() {
info!(sl!(), "initdata_annotation is empty");
return Ok("".to_string());
}
// Base64 decode the annotation value
let b64_decoded =
base64::decode_config(initda_annotation, base64::STANDARD).context("base64 decode")?;
base64::decode_config(initdata_annotation, base64::STANDARD).context("base64 decode")?;
// Gzip decompress the decoded data
let mut gz_decoder = GzDecoder::new(&b64_decoded[..]);
@@ -231,6 +241,139 @@ mod tests {
use flate2::Compression;
use std::io::Write;
// create gzipped and base64 encoded string
fn create_encoded_input(content: &str) -> String {
let mut encoder = GzEncoder::new(Vec::new(), Compression::default());
encoder.write_all(content.as_bytes()).unwrap();
let compressed = encoder.finish().unwrap();
base64::encode_config(&compressed, base64::STANDARD)
}
#[test]
fn test_empty_annotation() {
// Test with empty string input
let result = add_hypervisor_initdata_overrides("");
assert!(result.is_ok());
assert_eq!(result.unwrap(), "");
}
#[test]
fn test_empty_data_section() {
// Test with empty data section
let toml_content = r#"
algorithm = "sha384"
version = "0.1.0"
[data]
"#;
let encoded = create_encoded_input(toml_content);
let result = add_hypervisor_initdata_overrides(&encoded);
assert!(result.is_ok());
}
#[test]
fn test_valid_complete_initdata() {
// Test with complete InitData structure
let toml_content = r#"
algorithm = "sha384"
version = "0.1.0"
[data]
"aa.toml" = '''
[token_configs]
[token_configs.coco_as]
url = 'http://kbs-service.xxx.cluster.local:8080'
[token_configs.kbs]
url = 'http://kbs-service.xxx.cluster.local:8080'
'''
"cdh.toml" = '''
socket = 'unix:///run/guest-services/cdh.sock'
credentials = []
[kbc]
name = 'cc_kbc'
url = 'http://kbs-service.xxx.cluster.local:8080'
'''
"#;
let encoded = create_encoded_input(toml_content);
let result = add_hypervisor_initdata_overrides(&encoded);
assert!(result.is_ok());
let output = result.unwrap();
assert!(!output.is_empty());
assert!(output.contains("algorithm"));
assert!(output.contains("version"));
}
#[test]
fn test_invalid_base64() {
// Test with invalid base64 string
let invalid_base64 = "This is not valid base64!";
let result = add_hypervisor_initdata_overrides(invalid_base64);
assert!(result.is_err());
let error = result.unwrap_err();
assert!(error.to_string().contains("base64 decode"));
}
#[test]
fn test_valid_base64_invalid_gzip() {
// Test with valid base64 but invalid gzip content
let not_gzipped = "This is not gzipped content";
let encoded = base64::encode_config(not_gzipped.as_bytes(), base64::STANDARD);
let result = add_hypervisor_initdata_overrides(&encoded);
assert!(result.is_err());
let error = result.unwrap_err();
assert!(error.to_string().contains("gz decoder failed"));
}
#[test]
fn test_missing_algorithm() {
// Test with missing algorithm field
let toml_content = r#"
version = "0.1.0"
[data]
"test.toml" = '''
key = "value"
'''
"#;
let encoded = create_encoded_input(toml_content);
let result = add_hypervisor_initdata_overrides(&encoded);
// This might fail depending on whether algorithm is required
if result.is_err() {
assert!(result.unwrap_err().to_string().contains("parse initdata"));
}
}
#[test]
fn test_missing_version() {
// Test with missing version field
let toml_content = r#"
algorithm = "sha384"
[data]
"test.toml" = '''
key = "value"
'''
"#;
let encoded = create_encoded_input(toml_content);
let result = add_hypervisor_initdata_overrides(&encoded);
// This might fail depending on whether version is required
if result.is_err() {
assert!(result.unwrap_err().to_string().contains("parse initdata"));
}
}
/// Test InitData creation and serialization
#[test]
fn test_init_data() {
@@ -292,6 +435,12 @@ mod tests {
assert_eq!(cca_result.len(), 64);
assert_eq!(&cca_result[..32], &short_digest[..]);
assert_eq!(&cca_result[32..], vec![0u8; 32]);
// Test SE platform (requires 256 bytes)
let long_digest = vec![0xAA; 256];
let se_result = adjust_digest(&long_digest, ProtectedPlatform::Se);
assert_eq!(se_result.len(), 256);
assert_eq!(&se_result[..256], &long_digest[..256]);
}
/// Test hypervisor initdata processing with compression

View File
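The diff above adds an `Se` platform whose digest must be 256 bytes, alongside the existing lengths for Tdx (48), Snp (32) and Cca (64), and the new tests show shorter digests being zero-padded up to the target length. A self-contained sketch of that adjustment; it is a simplified stand-in for the real `adjust_digest` (whose body is not shown here), and truncating longer input is an assumption:

```rust
// Digest lengths per confidential platform, matching the table in the diff above.
#[derive(Clone, Copy)]
enum ProtectedPlatform {
    Tdx,
    Snp,
    Cca,
    Se,
    NoProtection,
}

// Simplified stand-in: pad the digest with zeros up to the platform's expected
// length (resize also truncates longer input, which is assumed here).
fn adjust_digest_sketch(digest: &[u8], platform: ProtectedPlatform) -> Vec<u8> {
    let want = match platform {
        ProtectedPlatform::Tdx => 48,
        ProtectedPlatform::Snp => 32,
        ProtectedPlatform::Cca => 64,
        ProtectedPlatform::Se => 256,
        ProtectedPlatform::NoProtection => digest.len(),
    };
    let mut out = digest.to_vec();
    out.resize(want, 0u8);
    out
}

fn main() {
    // A 32-byte digest adjusted for CCA becomes 64 bytes, zero-padded at the end.
    let short = vec![0xAAu8; 32];
    let cca = adjust_digest_sketch(&short, ProtectedPlatform::Cca);
    assert_eq!(cca.len(), 64);
    assert_eq!(&cca[..32], &short[..]);
    assert!(cca[32..].iter().all(|&b| b == 0));

    // IBM SE expects a 256-byte digest.
    assert_eq!(adjust_digest_sketch(&short, ProtectedPlatform::Se).len(), 256);
}
```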

@@ -40,6 +40,9 @@ pub(crate) mod utils;
/// hypervisor capabilities
pub mod capabilities;
/// Filesystem-related constants
pub mod fs;
/// The Initdata specification defines the key data structures and algorithms for injecting
/// any well-defined data from an untrusted host into a TEE (Trusted Execution Environment).
pub mod initdata;

View File

@@ -205,47 +205,48 @@ pub struct NydusImageVolume {
pub snapshot_dir: String,
}
/// Kata virtual volume to encapsulate information for extra mount options and direct volumes.
/// Represents a Kata virtual volume, encapsulating information for extra mount options and direct volumes.
///
/// It's very expensive to build direct communication channels to pass information:
/// - between snapshotters and kata-runtime/kata-agent/image-rs
/// - between CSI drivers and kata-runtime/kata-agent
/// Direct communication channels between components like snapshotters, `kata-runtime`, `kata-agent`,
/// `image-rs`, and CSI drivers are often expensive to build and maintain.
///
/// So `KataVirtualVolume` is introduced to encapsulate extra mount options and direct volume
/// information, so we can build a common infrastructure to handle them.
/// `KataVirtualVolume` is a superset of `NydusExtraOptions` and `DirectVolumeMountInfo`.
/// Therefore, `KataVirtualVolume` is introduced as a common infrastructure to encapsulate
/// additional mount options and direct volume information. It serves as a superset of
/// `NydusExtraOptions` and `DirectVolumeMountInfo`.
///
/// Value of `volume_type` determines how to interpret other fields in the structure.
/// The interpretation of other fields within this structure is determined by the `volume_type` field.
///
/// - `KATA_VIRTUAL_VOLUME_IGNORE`
/// -- all other fields should be ignored/unused.
/// # Volume Types:
///
/// - `KATA_VIRTUAL_VOLUME_DIRECT_BLOCK`
/// -- `source`: the directly assigned block device
/// -- `fs_type`: filesystem type
/// -- `options`: mount options
/// -- `direct_volume`: additional metadata to pass to the agent regarding this volume.
/// - `KATA_VIRTUAL_VOLUME_IGNORE`:
/// All other fields should be ignored/unused.
///
/// - `KATA_VIRTUAL_VOLUME_IMAGE_RAW_BLOCK` or `KATA_VIRTUAL_VOLUME_LAYER_RAW_BLOCK`
/// -- `source`: path to the raw block image for the container image or layer.
/// -- `fs_type`: filesystem type
/// -- `options`: mount options
/// -- `dm_verity`: disk dm-verity information
/// - `KATA_VIRTUAL_VOLUME_DIRECT_BLOCK`:
/// - `source`: The directly assigned block device path.
/// - `fs_type`: Filesystem type.
/// - `options`: Mount options.
/// - `direct_volume`: Additional metadata to pass to the agent regarding this volume.
///
/// - `KATA_VIRTUAL_VOLUME_IMAGE_NYDUS_BLOCK` or `KATA_VIRTUAL_VOLUME_LAYER_NYDUS_BLOCK`
/// -- `source`: path to nydus meta blob
/// -- `fs_type`: filesystem type
/// -- `nydus_image`: configuration information for nydus image.
/// -- `dm_verity`: disk dm-verity information
/// - `KATA_VIRTUAL_VOLUME_IMAGE_RAW_BLOCK` or `KATA_VIRTUAL_VOLUME_LAYER_RAW_BLOCK`:
/// - `source`: Path to the raw block image for the container image or layer.
/// - `fs_type`: Filesystem type.
/// - `options`: Mount options.
/// - `dm_verity`: Disk `dm-verity` information.
///
/// - `KATA_VIRTUAL_VOLUME_IMAGE_NYDUS_FS` or `KATA_VIRTUAL_VOLUME_LAYER_NYDUS_FS`
/// -- `source`: path to nydus meta blob
/// -- `fs_type`: filesystem type
/// -- `nydus_image`: configuration information for nydus image.
/// - `KATA_VIRTUAL_VOLUME_IMAGE_NYDUS_BLOCK` or `KATA_VIRTUAL_VOLUME_LAYER_NYDUS_BLOCK`:
/// - `source`: Path to nydus meta blob.
/// - `fs_type`: Filesystem type.
/// - `nydus_image`: Configuration information for nydus image.
/// - `dm_verity`: Disk `dm-verity` information.
///
/// - `KATA_VIRTUAL_VOLUME_IMAGE_GUEST_PULL`
/// -- `source`: image reference
/// -- `image_pull`: metadata for image pulling
/// - `KATA_VIRTUAL_VOLUME_IMAGE_NYDUS_FS` or `KATA_VIRTUAL_VOLUME_LAYER_NYDUS_FS`:
/// - `source`: Path to nydus meta blob.
/// - `fs_type`: Filesystem type.
/// - `nydus_image`: Configuration information for nydus image.
///
/// - `KATA_VIRTUAL_VOLUME_IMAGE_GUEST_PULL`:
/// - `source`: Image reference.
/// - `image_pull`: Metadata for image pulling.
#[derive(Debug, Clone, Eq, PartialEq, Default, Serialize, Deserialize)]
pub struct KataVirtualVolume {
/// Type of virtual volume.
@@ -275,7 +276,7 @@ pub struct KataVirtualVolume {
}
impl KataVirtualVolume {
/// Create a new instance of `KataVirtualVolume` with specified type.
/// Creates a new instance of `KataVirtualVolume` with the specified type.
pub fn new(volume_type: String) -> Self {
Self {
volume_type,
@@ -283,7 +284,7 @@ impl KataVirtualVolume {
}
}
/// Validate virtual volume object.
/// Validates the virtual volume object.
pub fn validate(&self) -> Result<()> {
match self.volume_type.as_str() {
KATA_VIRTUAL_VOLUME_DIRECT_BLOCK => {
@@ -365,25 +366,25 @@ impl KataVirtualVolume {
Ok(())
}
/// Serialize the virtual volume object to json.
/// Serializes the virtual volume object to a JSON string.
pub fn to_json(&self) -> Result<String> {
Ok(serde_json::to_string(self)?)
}
/// Deserialize a virtual volume object from json string.
/// Deserializes a virtual volume object from a JSON string.
pub fn from_json(value: &str) -> Result<Self> {
let volume: KataVirtualVolume = serde_json::from_str(value)?;
volume.validate()?;
Ok(volume)
}
/// Serialize the virtual volume object to json and encode the string with base64.
/// Serializes the virtual volume object to a JSON string and encodes the string with base64.
pub fn to_base64(&self) -> Result<String> {
let json = self.to_json()?;
Ok(base64::encode(json))
}
/// Decode and deserialize a virtual volume object from base64 encoded json string.
/// Decodes and deserializes a virtual volume object from a base64 encoded JSON string.
pub fn from_base64(value: &str) -> Result<Self> {
let json = base64::decode(value)?;
let volume: KataVirtualVolume = serde_json::from_slice(&json)?;
@@ -453,18 +454,18 @@ impl TryFrom<&NydusExtraOptions> for KataVirtualVolume {
}
}
/// Trait object for storage device.
/// Trait object for a storage device.
pub trait StorageDevice: Send + Sync {
/// Path
/// Returns the path of the storage device, if available.
fn path(&self) -> Option<&str>;
/// Clean up resources related to the storage device.
/// Cleans up resources related to the storage device.
fn cleanup(&self) -> Result<()>;
}
/// Join user provided volume path with kata direct-volume root path.
/// Joins a user-provided volume path with the Kata direct-volume root path.
///
/// The `volume_path` is base64-url-encoded and then safely joined to the `prefix`
/// The `volume_path` is base64-url-encoded and then safely joined to the `prefix`.
pub fn join_path(prefix: &str, volume_path: &str) -> Result<PathBuf> {
if volume_path.is_empty() {
return Err(anyhow!(std::io::ErrorKind::NotFound));
@@ -474,7 +475,7 @@ pub fn join_path(prefix: &str, volume_path: &str) -> Result<PathBuf> {
Ok(safe_path::scoped_join(prefix, b64_url_encoded_path)?)
}
/// get DirectVolume mountInfo from mountinfo.json.
/// Gets `DirectVolumeMountInfo` from `mountinfo.json`.
pub fn get_volume_mount_info(volume_path: &str) -> Result<DirectVolumeMountInfo> {
let volume_path = join_path(KATA_DIRECT_VOLUME_ROOT_PATH, volume_path)?;
let mount_info_file_path = volume_path.join(KATA_MOUNT_INFO_FILE_NAME);
@@ -484,28 +485,30 @@ pub fn get_volume_mount_info(volume_path: &str) -> Result<DirectVolumeMountInfo>
Ok(mount_info)
}
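
Editor's note: a minimal illustration of the lookup described above, assuming the base64 0.13-style API used elsewhere in this diff. The root path, file name, and encoding details are illustrative; the real `join_path` additionally goes through `safe_path::scoped_join` so the encoded component cannot escape the prefix.

use std::path::PathBuf;

// Illustrative sketch only, not the crate's implementation.
fn mount_info_path(root: &str, volume_path: &str) -> PathBuf {
    // The host volume path becomes a single base64-url-encoded directory component.
    let encoded = base64::encode_config(volume_path, base64::URL_SAFE);
    PathBuf::from(root).join(encoded).join("mountinfo.json")
}

// mount_info_path("/run/kata-containers/shared/direct-volumes", "/mnt/vol001")
//   -> ".../L21udC92b2wwMDE=/mountinfo.json"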
/// Check whether a mount type is a marker for Kata specific volume.
/// Checks whether a mount type is a marker for a Kata specific volume.
pub fn is_kata_special_volume(ty: &str) -> bool {
ty.len() > KATA_VOLUME_TYPE_PREFIX.len() && ty.starts_with(KATA_VOLUME_TYPE_PREFIX)
}
/// Check whether a mount type is a marker for Kata guest mount volume.
/// Checks whether a mount type is a marker for a Kata guest mount volume.
pub fn is_kata_guest_mount_volume(ty: &str) -> bool {
ty.len() > KATA_GUEST_MOUNT_PREFIX.len() && ty.starts_with(KATA_GUEST_MOUNT_PREFIX)
}
/// Check whether a mount type is a marker for Kata ephemeral volume.
/// Checks whether a mount type is a marker for a Kata ephemeral volume.
pub fn is_kata_ephemeral_volume(ty: &str) -> bool {
ty == KATA_EPHEMERAL_VOLUME_TYPE
}
/// Check whether a mount type is a marker for Kata hostdir volume.
/// Checks whether a mount type is a marker for a Kata hostdir volume.
pub fn is_kata_host_dir_volume(ty: &str) -> bool {
ty == KATA_HOST_DIR_VOLUME_TYPE
}
/// sandbox bindmount format: /path/to/dir, or /path/to/dir:ro[:rw]
/// the real path is without suffix ":ro" or ":rw".
/// Splits a sandbox bindmount string into its real path and mode.
///
/// The `bindmount` format is typically `/path/to/dir` or `/path/to/dir:ro[:rw]`.
/// This function extracts the real path (without the suffix ":ro" or ":rw") and the mode.
pub fn split_bind_mounts(bindmount: &str) -> (&str, &str) {
let (real_path, mode) = if bindmount.ends_with(SANDBOX_BIND_MOUNTS_RO) {
(
@@ -525,12 +528,14 @@ pub fn split_bind_mounts(bindmount: &str) -> (&str, &str) {
(real_path, mode)
}
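
Editor's note: a free-standing sketch of the parsing rule documented above, assuming the SANDBOX_BIND_MOUNTS_RO / _RW constants referenced in the hunk are the literal suffixes ":ro" and ":rw".

// Illustrative only, not the crate's implementation.
fn split(bindmount: &str) -> (&str, &str) {
    for mode in [":ro", ":rw"] {
        if let Some(real_path) = bindmount.strip_suffix(mode) {
            return (real_path, mode);
        }
    }
    (bindmount, "")
}

// split("/data/shared:ro") == ("/data/shared", ":ro")
// split("/data/shared")    == ("/data/shared", "")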
/// This function, adjust_rootfs_mounts, manages the root filesystem mounts based on guest-pull mechanism.
/// - the function disregards any provided rootfs_mounts.
/// Instead, it forcefully creates a single, default KataVirtualVolume specifically for guest-pull operations.
/// This volume's representation is then base64-encoded and added as the only option to a new, singular Mount entry,
/// which becomes the sole item in the returned Vec<Mount>.
/// This ensures that when guest pull is active, the root filesystem is exclusively configured via this virtual volume.
/// Adjusts the root filesystem mounts based on the guest-pull mechanism.
///
/// This function disregards any provided `rootfs_mounts`. Instead, it forcefully creates
/// a single, default `KataVirtualVolume` specifically for guest-pull operations.
/// This volume's representation is then base64-encoded and added as the only option
/// to a new, singular `Mount` entry, which becomes the sole item in the returned `Vec<Mount>`.
/// This ensures that when guest pull is active, the root filesystem is exclusively
/// configured via this virtual volume.
pub fn adjust_rootfs_mounts() -> Result<Vec<Mount>> {
// We enforce a single, default KataVirtualVolume as the exclusive rootfs mount.
let volume = KataVirtualVolume::new(KATA_VIRTUAL_VOLUME_IMAGE_GUEST_PULL.to_string());
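
Editor's note: a hedged sketch of how the serialization helpers above feed into this function: the guest-pull volume is base64-encoded and carried as a Mount option, and `from_base64` re-validates it on the way back. The `source` value and error handling are illustrative; field names follow the doc list earlier in this file.

// Illustrative only, not the crate's code.
fn guest_pull_option() -> anyhow::Result<String> {
    let mut vol = KataVirtualVolume::new(KATA_VIRTUAL_VOLUME_IMAGE_GUEST_PULL.to_string());
    vol.source = "registry.example.com/app:latest".to_string(); // illustrative image reference
    let opt = vol.to_base64()?; // JSON string, then base64
    // from_base64() also runs validate(), so a volume missing the metadata required
    // for its type errors out here instead of propagating silently.
    let _roundtrip = KataVirtualVolume::from_base64(&opt)?;
    Ok(opt)
}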

View File

@@ -186,7 +186,7 @@ mod tests {
"./test_hypervisor_hook_path"
);
assert!(!hv.memory_info.enable_mem_prealloc);
assert_eq!(hv.cpu_info.default_vcpus, 12);
assert_eq!(hv.cpu_info.default_vcpus, 12.0);
assert!(!hv.memory_info.enable_guest_swap);
assert_eq!(hv.memory_info.default_memory, 100);
assert!(!hv.enable_iothreads);

View File

@@ -22,6 +22,7 @@ slog-json = "2.4.0"
slog-term = "2.9.1"
slog-async = "2.7.0"
slog-scope = "4.4.0"
slog-journald = "2.2.0"
lazy_static = "1.3.0"
arc-swap = "1.5.0"

View File

@@ -81,6 +81,11 @@ pub fn create_term_logger(level: slog::Level) -> (slog::Logger, slog_async::Asyn
(logger, guard)
}
pub enum LogDestination {
File(Box<dyn Write + Send + Sync>),
Journal,
}
// Creates a logger which prints output as JSON
// XXX: 'writer' param used to make testing possible.
pub fn create_logger<W>(
@@ -92,13 +97,43 @@ pub fn create_logger<W>(
where
W: Write + Send + Sync + 'static,
{
let json_drain = slog_json::Json::new(writer)
.add_default_keys()
.build()
.fuse();
create_logger_with_destination(name, source, level, LogDestination::File(Box::new(writer)))
}
// Creates a logger which prints output as JSON or to systemd journal
pub fn create_logger_with_destination(
name: &str,
source: &str,
level: slog::Level,
destination: LogDestination,
) -> (slog::Logger, slog_async::AsyncGuard) {
// Check the destination type before consuming it.
// The `matches` macro performs a non-consuming check (it borrows).
let is_journal_destination = matches!(destination, LogDestination::Journal);
// The target type for boxed drain. Note that Err = slog::Never.
// Both `.fuse()` and `.ignore_res()` convert potential errors into a non-returning path
// (panic or ignore), so they never return an Err.
let drain: Box<dyn Drain<Ok = (), Err = slog::Never> + Send> = match destination {
LogDestination::File(writer) => {
// `destination` is `File`.
let json_drain = slog_json::Json::new(writer)
.add_default_keys()
.build()
.fuse();
Box::new(json_drain)
}
LogDestination::Journal => {
// `destination` is `Journal`.
let journal_drain = slog_journald::JournaldDrain.ignore_res();
Box::new(journal_drain)
}
};
// Ensure only a unique set of key/value fields is logged
let unique_drain = UniqueDrain::new(json_drain).fuse();
let unique_drain = UniqueDrain::new(drain).fuse();
// Adjust the level which will be applied to the log-system
// Info is the default level, but if Debug flag is set, the overall log level will be changed to Debug here
@@ -119,16 +154,28 @@ where
.thread_name("slog-async-logger".into())
.build_with_guard();
// Add some "standard" fields
let logger = slog::Logger::root(
// Create a base logger with common fields.
let base_logger = slog::Logger::root(
async_drain.fuse(),
o!("version" => env!("CARGO_PKG_VERSION"),
o!(
"version" => env!("CARGO_PKG_VERSION"),
"subsystem" => DEFAULT_SUBSYSTEM,
"pid" => process::id().to_string(),
"name" => name.to_string(),
"source" => source.to_string()),
"source" => source.to_string()
),
);
// If not journal destination, the logger remains the base_logger.
let logger = if is_journal_destination {
// Use the .new() method to build a child logger which inherits all existing
// key-value pairs from its parent and supplements them with additional ones.
// This is the idiomatic way.
base_logger.new(o!("SYSLOG_IDENTIFIER" => "kata"))
} else {
base_logger
};
(logger, guard)
}
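
Editor's note: a hedged usage sketch of the new destination-aware constructor; the name and source strings are illustrative.

// Illustrative only: route agent logs to the systemd journal instead of a JSON writer.
let (logger, _guard) = create_logger_with_destination(
    "kata-agent",            // name
    "agent",                 // source
    slog::Level::Info,
    LogDestination::Journal, // drained via slog-journald; child logger adds SYSLOG_IDENTIFIER=kata
);
slog::info!(logger, "hello from journald");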
@@ -502,7 +549,12 @@ mod tests {
let record_key = "record-key-1";
let record_value = "record-key-2";
let (logger, guard) = create_logger(name, source, level, writer);
let (logger, guard) = create_logger_with_destination(
name,
source,
level,
LogDestination::File(Box::new(writer)),
);
let msg = "foo, bar, baz";
@@ -661,7 +713,12 @@ mod tests {
.reopen()
.unwrap_or_else(|_| panic!("{:?}: failed to clone tempfile", msg));
let (logger, logger_guard) = create_logger(name, source, d.slog_level, writer);
let (logger, logger_guard) = create_logger_with_destination(
name,
source,
d.slog_level,
LogDestination::File(Box::new(writer)),
);
// Call the logger (which calls the drain)
(d.closure)(&logger, d.msg.to_owned());

View File

@@ -115,7 +115,7 @@ impl From<oci::PosixRlimit> for grpc::POSIXRlimit {
impl From<oci::Process> for grpc::Process {
fn from(from: oci::Process) -> Self {
grpc::Process {
Terminal: from.terminal().map_or(false, |t| t),
Terminal: from.terminal().is_some_and(|t| t),
ConsoleSize: from_option(from.console_size()),
User: from_option(Some(from.user().clone())),
Args: option_vec_to_vec(from.args()),
@@ -161,7 +161,7 @@ impl From<oci::LinuxMemory> for grpc::LinuxMemory {
Kernel: from.kernel().map_or(0, |t| t),
KernelTCP: from.kernel_tcp().map_or(0, |t| t),
Swappiness: from.swappiness().map_or(0, |t| t),
DisableOOMKiller: from.disable_oom_killer().map_or(false, |t| t),
DisableOOMKiller: from.disable_oom_killer().is_some_and(|t| t),
..Default::default()
}
}
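
Editor's note: both forms agree for every Option<bool> value, which is what makes this a behaviour-preserving clippy cleanup. A quick check, for illustration:

// Illustrative only.
let some_true: Option<bool> = Some(true);
assert_eq!(some_true.map_or(false, |t| t), some_true.is_some_and(|t| t)); // true
assert!(!None::<bool>.is_some_and(|t| t));                                // None -> false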

View File

@@ -355,6 +355,7 @@ mod tests {
.read(false)
.write(true)
.create(true)
.truncate(true)
.mode(0o200)
.open(&path)
.unwrap();
@@ -376,6 +377,7 @@ mod tests {
.read(false)
.write(true)
.create(true)
.truncate(true)
.mode(0o200)
.open(&path)
.unwrap();

View File

@@ -90,7 +90,7 @@ pub fn mgmt_socket_addr(sid: &str) -> Result<String> {
));
}
get_uds_with_sid(sid, &sb_storage_path()?)
get_uds_with_sid(sid, sb_storage_path()?)
}
#[cfg(test)]

View File

@@ -851,6 +851,16 @@ dependencies = [
"typenum",
]
[[package]]
name = "crypto-mac"
version = "0.11.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b1d1a86f49236c215f271d40892d5fc950490551400b02ef360692c29815c714"
dependencies = [
"generic-array",
"subtle",
]
[[package]]
name = "darling"
version = "0.14.4"
@@ -1787,6 +1797,16 @@ version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70"
[[package]]
name = "hmac"
version = "0.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2a2a2320eb7ec0ebe8da8f744d7812d9fc4cb4d09344ac01898dbcb6a20ae69b"
dependencies = [
"crypto-mac",
"digest 0.9.0",
]
[[package]]
name = "hmac"
version = "0.12.1"
@@ -2304,6 +2324,23 @@ version = "0.2.172"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d750af042f7ef4f724306de029d18836c26c1765a54a6a3f094cbd23a7267ffa"
[[package]]
name = "libsystemd"
version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6f4f0b5b062ba67aa075e331de778082c09e66b5ef32970ea5a1e9c37c9555d1"
dependencies = [
"hmac 0.11.0",
"libc",
"log",
"nix 0.23.2",
"once_cell",
"serde",
"sha2 0.9.3",
"thiserror 1.0.69",
"uuid 0.8.2",
]
[[package]]
name = "libz-sys"
version = "1.1.22"
@@ -2390,6 +2427,7 @@ dependencies = [
"serde_json",
"slog",
"slog-async",
"slog-journald",
"slog-json",
"slog-scope",
"slog-term",
@@ -2797,7 +2835,7 @@ dependencies = [
"bitflags 1.3.2",
"fuse-backend-rs",
"hex",
"hmac",
"hmac 0.12.1",
"httpdate",
"lazy_static",
"libc",
@@ -4464,6 +4502,16 @@ dependencies = [
"thread_local",
]
[[package]]
name = "slog-journald"
version = "2.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "83e14eb8c2f5d0c8fc9fbac40e6391095e4dc5cb334f7dce99c75cb1919eb39c"
dependencies = [
"libsystemd",
"slog",
]
[[package]]
name = "slog-json"
version = "2.6.1"
@@ -4617,9 +4665,9 @@ dependencies = [
[[package]]
name = "subtle"
version = "2.5.0"
version = "2.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "81cdd64d312baedb58e21336b31bc043b77e01cc99033ce76ef539f78e965ebc"
checksum = "6bdef32e8150c2a081110b42772ffe7d7c9032b606bc226c8260fd97e0976601"
[[package]]
name = "syn"
@@ -5195,6 +5243,15 @@ dependencies = [
"rand 0.3.23",
]
[[package]]
name = "uuid"
version = "0.8.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bc5cf98d8186244414c848017f0e2676b3fcb46807f6668a97dfe67359a3c4b7"
dependencies = [
"serde",
]
[[package]]
name = "uuid"
version = "1.16.0"

View File

@@ -147,6 +147,7 @@ DEFMAXMEMSZ := 0
##VAR DEFBRIDGES=<number> Default number of bridges
DEFBRIDGES := 0
DEFENABLEANNOTATIONS := [\"kernel_params\"]
DEFENABLEANNOTATIONS_COCO := [\"kernel_params\",\"cc_init_data\"]
DEFDISABLEGUESTSECCOMP := true
DEFDISABLEGUESTEMPTYDIR := false
##VAR DEFAULTEXPFEATURES=[features] Default experimental features enabled
@@ -482,6 +483,7 @@ USER_VARS += DEFVIRTIOFSCACHE
USER_VARS += DEFVIRTIOFSQUEUESIZE
USER_VARS += DEFVIRTIOFSEXTRAARGS
USER_VARS += DEFENABLEANNOTATIONS
USER_VARS += DEFENABLEANNOTATIONS_COCO
USER_VARS += DEFENABLEIOTHREADS
USER_VARS += DEFSECCOMPSANDBOXPARAM
USER_VARS += DEFGUESTSELINUXLABEL

View File

@@ -195,6 +195,9 @@ block_device_driver = "virtio-blk-pci"
# result in memory pre allocation
#enable_hugepages = true
# Disable the 'seccomp' feature from Cloud Hypervisor or firecracker, default false
# disable_seccomp = true
# This option changes the default hypervisor and kernel parameters
# to enable debug output where available.
#

View File

@@ -45,7 +45,7 @@ confidential_guest = true
# List of valid annotation names for the hypervisor
# Each member of the list is a regular expression, which is the base name
# of the annotation, e.g. "path" for io.katacontainers.config.hypervisor.path"
enable_annotations = @DEFENABLEANNOTATIONS@
enable_annotations = @DEFENABLEANNOTATIONS_COCO@
# List of valid annotations values for the hypervisor
# Each member of the list is a path pattern as described by glob(3).
@@ -541,7 +541,7 @@ kernel_modules=[]
# Agent dial timeout in millisecond.
# (default: 10)
dial_timeout_ms = 30
dial_timeout_ms = 90
# Agent reconnect timeout in millisecond.
# Retry times = reconnect_timeout_ms / dial_timeout_ms (default: 300)
@@ -550,7 +550,7 @@ dial_timeout_ms = 30
# You'd better not change the value of dial_timeout_ms, unless you have an
# idea of what you are doing.
# (default: 3000)
#reconnect_timeout_ms = 3000
reconnect_timeout_ms = 5000
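
Editor's note: with the values set here, the implied retry count is reconnect_timeout_ms / dial_timeout_ms, i.e. roughly 5000 / 90 ≈ 55 attempts, versus 3000 / 10 = 300 with the commented defaults.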
# Create Container Request Timeout
# This timeout value is used to set the maximum duration for the agent to process a CreateContainerRequest.

View File

@@ -29,7 +29,7 @@ remote_hypervisor_timeout = 600
#
# Known limitations:
# * Does not work by design:
# - CPU Hotplug
# - CPU Hotplug
# - Memory Hotplug
# - NVDIMM devices
#
@@ -41,7 +41,7 @@ remote_hypervisor_timeout = 600
# Each member of the list is a regular expression, which is the base name
# of the annotation, e.g. "path" for io.katacontainers.config.hypervisor.path"
# Note: Remote hypervisor is only handling the following annotations
enable_annotations = ["machine_type", "default_memory", "default_vcpus", "default_gpus", "default_gpu_model"]
enable_annotations = ["machine_type", "default_memory", "default_vcpus", "default_gpus", "default_gpu_model", "cc_init_data"]
# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
@@ -251,7 +251,7 @@ disable_guest_seccomp=true
# with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge
# (like OVS) directly.
# (default: false)
# Note: The remote hypervisor has a different networking model, which requires true
# Note: The remote hypervisor has a different networking model, which requires true
disable_new_netns = false
# if enabled, the runtime will add all the kata processes inside one dedicated cgroup.

View File

@@ -145,6 +145,9 @@ block_device_driver = "@DEFBLOCKSTORAGEDRIVER_FC@"
# result in memory pre allocation
#enable_hugepages = true
# Disable the 'seccomp' feature from Cloud Hypervisor or firecracker, default false
# disable_seccomp = true
# Enable vIOMMU, default false
# Enabling this will result in the VM having a vIOMMU device
# This will also add the following options to the kernel's

View File

@@ -319,12 +319,12 @@ impl TryFrom<(CpuInfo, GuestProtection)> for CpusConfig {
let guest_protection_to_use = args.1;
// This can only happen if runtime-rs fails to set default values.
if cpu.default_vcpus <= 0 {
if cpu.default_vcpus <= 0.0 {
return Err(CpusConfigError::BootVCPUsTooSmall);
}
let default_vcpus =
u8::try_from(cpu.default_vcpus).map_err(CpusConfigError::BootVCPUsTooBig)?;
let default_vcpus = u8::try_from(cpu.default_vcpus.ceil() as u32)
.map_err(CpusConfigError::BootVCPUsTooBig)?;
// This can only happen if runtime-rs fails to set default values.
if cpu.default_maxvcpus == 0 {
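
Editor's note: a hedged sketch of the fractional-vCPU handling introduced above: the value is rounded up before the u8 conversion, so a fractional request still boots at least one vCPU.

// Illustrative only.
let default_vcpus: f32 = 0.5;
assert!(default_vcpus > 0.0); // passes the <= 0.0 guard
let boot_vcpus = u8::try_from(default_vcpus.ceil() as u32).expect("fits in u8"); // ceil(0.5) == 1.0
assert_eq!(boot_vcpus, 1);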
@@ -611,7 +611,7 @@ mod tests {
};
let cpu_info = CpuInfo {
default_vcpus: cpu_default as i32,
default_vcpus: cpu_default as f32,
default_maxvcpus,
..Default::default()
@@ -1159,7 +1159,7 @@ mod tests {
},
TestData {
cpu_info: CpuInfo {
default_vcpus: -1,
default_vcpus: -1.0,
..Default::default()
},
@@ -1168,7 +1168,7 @@ mod tests {
},
TestData {
cpu_info: CpuInfo {
default_vcpus: 1,
default_vcpus: 1.0,
default_maxvcpus: 0,
..Default::default()
@@ -1178,7 +1178,7 @@ mod tests {
},
TestData {
cpu_info: CpuInfo {
default_vcpus: 9,
default_vcpus: 9.0,
default_maxvcpus: 7,
..Default::default()
@@ -1188,7 +1188,7 @@ mod tests {
},
TestData {
cpu_info: CpuInfo {
default_vcpus: 1,
default_vcpus: 1.0,
default_maxvcpus: 1,
..Default::default()
},
@@ -1208,7 +1208,7 @@ mod tests {
},
TestData {
cpu_info: CpuInfo {
default_vcpus: 1,
default_vcpus: 1.0,
default_maxvcpus: 3,
..Default::default()
},
@@ -1228,7 +1228,7 @@ mod tests {
},
TestData {
cpu_info: CpuInfo {
default_vcpus: 1,
default_vcpus: 1.0,
default_maxvcpus: 13,
..Default::default()
},
@@ -1823,7 +1823,7 @@ mod tests {
cfg: HypervisorConfig {
cpu_info: CpuInfo {
default_vcpus: 0,
default_vcpus: 0.0,
..cpu_info.clone()
},
@@ -1939,7 +1939,7 @@ mod tests {
vsock_socket_path: "vsock_socket_path".into(),
cfg: HypervisorConfig {
cpu_info: CpuInfo {
default_vcpus: 1,
default_vcpus: 1.0,
default_maxvcpus: 1,
..Default::default()
@@ -1963,7 +1963,7 @@ mod tests {
..Default::default()
},
cpu_info: CpuInfo {
default_vcpus: 1,
default_vcpus: 1.0,
default_maxvcpus: 1,
..Default::default()

View File

@@ -103,6 +103,9 @@ impl FcInner {
cmd.args(["--api-sock", &self.asock_path]);
}
}
if self.config.security_info.disable_seccomp {
cmd.arg("--no-seccomp");
}
debug!(sl(), "Exec: {:?}", cmd);
// Make sure we're in the correct Network Namespace

View File

@@ -8,10 +8,12 @@ use anyhow::{anyhow, Result};
use crate::{
VM_ROOTFS_DRIVER_BLK, VM_ROOTFS_DRIVER_BLK_CCW, VM_ROOTFS_DRIVER_MMIO, VM_ROOTFS_DRIVER_PMEM,
VM_ROOTFS_FILESYSTEM_EROFS, VM_ROOTFS_FILESYSTEM_EXT4, VM_ROOTFS_FILESYSTEM_XFS,
VM_ROOTFS_ROOT_BLK, VM_ROOTFS_ROOT_PMEM,
};
use kata_types::config::LOG_VPORT_OPTION;
use kata_types::fs::{
VM_ROOTFS_FILESYSTEM_EROFS, VM_ROOTFS_FILESYSTEM_EXT4, VM_ROOTFS_FILESYSTEM_XFS,
};
// Port where the agent will send the logs. Logs are sent through the vsock in cases
// where the hypervisor has no console.sock, i.e dragonball
@@ -179,9 +181,10 @@ mod tests {
use super::*;
use crate::{
VM_ROOTFS_DRIVER_BLK, VM_ROOTFS_DRIVER_PMEM, VM_ROOTFS_FILESYSTEM_EROFS,
VM_ROOTFS_FILESYSTEM_EXT4, VM_ROOTFS_FILESYSTEM_XFS, VM_ROOTFS_ROOT_BLK,
VM_ROOTFS_ROOT_PMEM,
VM_ROOTFS_DRIVER_BLK, VM_ROOTFS_DRIVER_PMEM, VM_ROOTFS_ROOT_BLK, VM_ROOTFS_ROOT_PMEM,
};
use kata_types::fs::{
VM_ROOTFS_FILESYSTEM_EROFS, VM_ROOTFS_FILESYSTEM_EXT4, VM_ROOTFS_FILESYSTEM_XFS,
};
#[test]

View File

@@ -47,11 +47,6 @@ const VM_ROOTFS_DRIVER_MMIO: &str = "virtio-blk-mmio";
const VM_ROOTFS_ROOT_BLK: &str = "/dev/vda1";
const VM_ROOTFS_ROOT_PMEM: &str = "/dev/pmem0p1";
// Config which filesystem to use as rootfs type
const VM_ROOTFS_FILESYSTEM_EXT4: &str = "ext4";
const VM_ROOTFS_FILESYSTEM_XFS: &str = "xfs";
const VM_ROOTFS_FILESYSTEM_EROFS: &str = "erofs";
// before using hugepages for VM, we need to mount hugetlbfs
// /dev/hugepages will be the mount point
// mkdir -p /dev/hugepages

View File

@@ -2182,6 +2182,14 @@ impl<'a> QemuCmdLine<'a> {
qemu_cmd_line.add_virtio_balloon();
}
if let Some(seccomp_sandbox) = &config
.security_info
.seccomp_sandbox
.as_ref()
.filter(|s| !s.is_empty())
{
qemu_cmd_line.add_seccomp_sandbox(seccomp_sandbox);
}
Ok(qemu_cmd_line)
}
@@ -2620,6 +2628,11 @@ impl<'a> QemuCmdLine<'a> {
Ok(())
}
pub fn add_seccomp_sandbox(&mut self, param: &str) {
let seccomp_sandbox = SeccompSandbox::new(param);
self.devices.push(Box::new(seccomp_sandbox));
}
pub async fn build(&self) -> Result<Vec<String>> {
let mut result = Vec::new();
@@ -2706,3 +2719,23 @@ impl ToQemuParams for DeviceVirtioBalloon {
])
}
}
#[derive(Debug)]
struct SeccompSandbox {
param: String,
}
impl SeccompSandbox {
fn new(param: &str) -> Self {
SeccompSandbox {
param: param.to_owned(),
}
}
}
#[async_trait]
impl ToQemuParams for SeccompSandbox {
async fn qemu_params(&self) -> Result<Vec<String>> {
Ok(vec!["-sandbox".to_owned(), self.param.clone()])
}
}
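
Editor's note: a hedged illustration of what the new device contributes to the final command line; the parameter string is an example, not a project default.

// Illustrative only.
let sandbox = SeccompSandbox::new("on,obsolete=deny,spawn=deny");
// sandbox.qemu_params().await? yields:
//   vec!["-sandbox".to_owned(), "on,obsolete=deny,spawn=deny".to_owned()]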

Some files were not shown because too many files have changed in this diff.