Compare commits

...

102 Commits

Author SHA1 Message Date
Fupan Li
3f45fcd3b3 tests: fix the issue of missing teardown pods
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2025-11-17 19:24:26 +08:00
Fupan Li
c74a2650e9 runtime-rs: fix the issue of wrong vcpu number
In commit 1f95d9401b
("runtime-rs: change representation of default_vcpus from i32 to f32"),
the representation of default_vcpus became a floating-point value.

When the vCPU number is less than 1.0, converting the floating-point value
directly back to an integer truncates it to 0. Therefore, it needs to be
rounded up before being converted back to an integer.
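
A minimal sketch of the rounding described above (hypothetical helper name; the
actual conversion lives in runtime-rs):

```rust
fn default_vcpus_to_int(default_vcpus: f32) -> u32 {
    // Round up so that fractional values such as 0.5 become 1 instead of
    // being truncated to 0 by a plain `as` cast.
    default_vcpus.ceil() as u32
}

fn main() {
    assert_eq!(default_vcpus_to_int(0.5), 1);
    assert_eq!(default_vcpus_to_int(2.0), 2);
}
```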

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2025-11-17 10:09:51 +08:00
Fabiano Fidêncio
2e000129a9 kata-deploy: tests: Add example values files for easy Kata deployment
Add three example values files to make it easier for users to try out
different Kata Containers configurations:

- try-kata.values.yaml: Enables all available shims
- try-kata-tee.values.yaml: Enables only TEE/confidential computing shims
- try-kata-nvidia-gpu.values.yaml: Enables only NVIDIA GPU shims

These files use the new structured configuration format and serve as
ready-to-use examples for common deployment scenarios.

Also update the README.md to document these example files and how to use them.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-15 09:36:14 +01:00
Fabiano Fidêncio
8717312599 tests: Migrate helm_helper to use new structured configuration
Update the helm_helper function in gha-run-k8s-common.sh to use the
new structured configuration format instead of the legacy env.* format.

All possible settings have been migrated to the structured format:
- HELM_DEBUG now sets root-level 'debug' boolean
- HELM_SHIMS now enables shims in structured format with automatic
  architecture detection based on shim name
- HELM_DEFAULT_SHIM now sets per-architecture defaultShim mapping
- HELM_EXPERIMENTAL_SETUP_SNAPSHOTTER now sets snapshotter.setup array
- HELM_ALLOWED_HYPERVISOR_ANNOTATIONS now sets per-shim allowedHypervisorAnnotations
- HELM_SNAPSHOTTER_HANDLER_MAPPING now sets per-shim containerd.snapshotter
- HELM_AGENT_HTTPS_PROXY and HELM_AGENT_NO_PROXY now set per-shim agent proxy settings
- HELM_PULL_TYPE_MAPPING now sets per-shim forceGuestPull/guestPull settings
- HELM_EXPERIMENTAL_FORCE_GUEST_PULL now sets per-shim forceGuestPull/guestPull

The test helper automatically determines supported architectures for
each shim (e.g., qemu-se supports s390x, qemu-cca supports arm64,
qemu-snp/qemu-tdx support amd64, etc.) and applies per-shim settings
to the appropriate shims based on HELM_SHIMS.

Only HELM_HOST_OS remains in legacy env.* format as it doesn't have
a structured equivalent yet.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-15 09:36:14 +01:00
Fabiano Fidêncio
aa89fda7fc kata-deploy: Document new structured configuration and deprecation
Add comprehensive documentation for the new structured configuration
format, including:

- Migration guide from legacy env.* format
- List of deprecated fields with removal timeline (2 releases)
- Examples of the new structured format
- Explanation of key benefits
- Backward compatibility notes

The documentation makes it clear that the legacy format is deprecated
but will continue to work during the transition period.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-15 09:36:14 +01:00
Fabiano Fidêncio
119893b8e8 kata-deploy: Add backward compatibility for legacy env.* configuration
This commit adds backward compatibility support to ensure existing
configurations using the legacy env.* format continue to work.

The helper functions now check for legacy env.* values first, and
only fall back to the new structured format if legacy values are
not set. This allows for gradual migration without breaking
existing deployments.

Backward compatibility is maintained for:
- env.shims, env.shims_* (per architecture)
- env.defaultShim, env.defaultShim_* (per architecture)
- env.allowedHypervisorAnnotations
- env.snapshotterHandlerMapping_* (per architecture)
- env.pullTypeMapping_* (per architecture)
- env.agentHttpsProxy, env.agentNoProxy
- env._experimentalSetupSnapshotter
- env._experimentalForceGuestPull_* (per architecture)
- env.debug

Legacy env vars (SHIMS, DEFAULT_SHIM, etc.) are still set in the
DaemonSet when using the old format to maintain full compatibility.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-15 09:36:14 +01:00
Fabiano Fidêncio
ae3fb45814 kata-deploy: Introduce structured configuration format for shims
This commit introduces a new structured configuration format for
configuring Kata Containers shims in the Helm chart. The new format
provides:

- Per-shim configuration with enabled/supportedArches
- Per-shim snapshotter, guest pull, and agent proxy settings
- Architecture-aware default shim configuration
- Root-level debug and snapshotter setup configuration

All shims are disabled by default and must be explicitly enabled.
This provides better type safety and clearer organization compared
to the legacy env.* string-based format.

The templates are updated to use the new structure exclusively.
Backward compatibility will be added in a follow-up commit.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-15 09:36:14 +01:00
Fabiano Fidêncio
e85d584e1c kata-deploy: script: Fix FOR_ARCH handling
As some of the global vars can be empty, we should actually check their
_FOR_ARCH version instead.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-15 09:36:14 +01:00
Fabiano Fidêncio
397289c67c kata-deploy: script: Handle {https,no}_proxy per shim
As we're making the values.yaml more user friendly, we actually have to
handle the https_proxy and no_proxy entries per shim, instead of having
them globally available, as they only affect images being pulled
inside the guest (as in, when using TEE variations of the shims).

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-15 09:36:14 +01:00
Fabiano Fidêncio
f62d9435a2 runtimeclasses: firecracker is not a valid one
At least not for now, and it was mistakenly added to the list.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-15 09:36:14 +01:00
nheinemans-asml
3380458269 kata-deploy: Add daemonsets to the RBAC
Add missing rules which are necessary for dealing with
daemonsets, as kata-deploy now checks for the NFD
daemonset as part of its script.

fixes #12083

Signed-off-by: nheinemans-asml <97238218+nheinemans-asml@users.noreply.github.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-14 17:16:58 +01:00
Simon Kaegi
716c55abdd kernel: adds nft bridging and filtering support for IPv4 and IPv6
Adds a practical set of kernel config used by docker-in-docker and kind
for network bridging and filtering. It also includes the matching IPv6
support to allow tools like kind that require IPv6 network policies to
work out of the box.

This support includes:
- nftables reject and filtering support for inet/ipv4/ipv6
- Bridge filtering for container-to-container traffic
- IPv6 NAT, filtering, and packet matching rules for network policies
- VXLAN and IPsec crypto support for network tunneling
- TMPFS POSIX ACL support for filesystem permissions

The configs are organized across fragment files:
- common/fs.conf: TMPFS ACL support
- common/crypto.conf: IPsec/VXLAN crypto algorithms
- common/network.conf: VXLAN, IPsec ESP, nftables bridge/ARP/netdev
- common/netfilter.conf: IPv6 netfilter stack and nftables advanced features

Fixes: #11886

Signed-off-by: Simon Kaegi <simon.kaegi@gmail.com>
2025-11-14 15:57:47 +01:00
Dan Mihai
5cc1024936 ci: k8s: AUTO_GENERATE_POLICY for coco-dev
Re-enable AUTO_GENERATE_POLICY for coco-dev Hosts, unless PULL_TYPE is
"experimental-force-guest-pull", or the caller specified a different
value for AUTO_GENERATE_POLICY.

Auto-generated policy was recently disabled by accident for these Hosts,
by a GHA workflow change.

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-11-14 15:53:34 +01:00
Dan Mihai
73ad83e1cc genpolicy: update workaround for guest pull
No longer skip parsing the pause container image when using the
recently updated AKS pause container handling - i.e., when
pause_container_id_policy == "v2".

This was the easiest CI fix for guest pull + new AKS given the *current*
tests. When adding *new* UID/GID/AdditionalGids tests in the future,
these workarounds might need additional updates.

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-11-14 15:53:34 +01:00
Steve Horsman
7bcb971398 Merge pull request #12075 from burgerdev/genpolicy-archived-deps
retire `adler` dependency
2025-11-14 14:51:47 +00:00
Steve Horsman
1d0d066869 Merge pull request #12069 from Amulyam24/static-checks-ppc
github: run agent checks for Power on ppc64le instead of ubuntu-24.04-ppc64le
2025-11-14 10:18:37 +00:00
Markus Rudy
dd59131924 runtime-rs: update flate2 to 1.1.5
The update removes the deprecated adler crate from our dependencies. In
addition, we're switching to the default backend (miniz_oxide), which is
a pure Rust implementation and thus much more portable. The performance
impact is negligible, because flate2 is only used for initdata
decompression, which is limited to a couple of MiB anyway.

Signed-off-by: Markus Rudy <mr@edgeless.systems>
2025-11-14 11:11:44 +01:00
Markus Rudy
3949492f19 genpolicy: update flate2 to 1.1.5
The update removes the deprecated adler crate from our dependencies. In
addition, we're switching to the default backend (miniz_oxide), which is
a pure Rust implementation and thus much more portable. The performance
impact is acceptable for a developer tool.

Signed-off-by: Markus Rudy <mr@edgeless.systems>
2025-11-14 11:10:29 +01:00
Steve Horsman
0ab71771ab Merge pull request #11447 from kata-containers/runtime-rs-qemu-coco-dev-config
Runtime rs qemu coco dev config
2025-11-13 19:12:57 +00:00
stevenhorsman
1ef3e3b929 ci: Switch gatekeeper auth header
The GitHub API docs suggest that `Authorization: Bearer <YOUR-TOKEN>`
is the way to set the auth token, but they also mention that `token`
should work, so it's unclear if this will help much, but it shouldn't hurt.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-13 19:01:21 +01:00
stevenhorsman
b7abcc4c37 tests: Fix wildcard skip in k8s-cpu-ns
The formatting wasn't quite right, so the `qemu-coco-dev-runtime-rs`
hypervisor wasn't skipping this test

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-13 14:21:05 +00:00
Alex Lyn
bda6bbcad3 runtime-rs: Set static_sandbox_resource_mgmt to true within nontee
Introduce a flag `DEFSTATICRESOURCEMGMT_COCO` for setting static sandbox
resource management, defaulting to true, and use it to set the
`static_sandbox_resource_mgmt` item in the configuration.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-13 14:18:43 +00:00
stevenhorsman
b51af53bc7 tests/k8s: call teardown_common in some policy tests
The teardown_common will print the description of the running pods, kill
them all and print the system's syslogs afterwards.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-13 14:18:43 +00:00
Alex Lyn
efc6aee4f6 runtime-rs: Support agent policy
Support agent policy within runtime-rs.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-13 14:18:43 +00:00
stevenhorsman
79082171ca workflows: Add Delete AKS cluster timeout
When testing this branch, on several occasions the Delete
AKS cluster step has hung for multiple hours, so add a timeout
to prevent this.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-13 14:18:43 +00:00
stevenhorsman
0335012824 tests/k8s: Enable tests for qemu-coco-dev-runtime-rs
Add the runtime class to the non-tee tests and
enable it to run in the test code

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-13 14:18:43 +00:00
stevenhorsman
a1ddd2c3dd kata-deploy: Add kata-qemu-coco-dev-runtime-rs runtime class
Add the runtime class and shim references for the new
 non-tee runtime-rs class

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-13 14:18:43 +00:00
Alex Lyn
64da581f6e kata-types: Support create_container_timeout set within configuration
To align with the create_container_timeout definition in runtime-go,
the value in configuration.toml must be set in seconds, not milliseconds.
We must also convert it to milliseconds for request_timeout_ms when the
configuration is loaded.
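
A rough sketch of the unit conversion described above (field and function
names are illustrative, not the actual kata-types definitions):

```rust
/// Illustrative config fragment: `create_container_timeout` is written in
/// seconds in configuration.toml, while the agent request timeout is
/// tracked internally in milliseconds.
struct AgentConfig {
    create_container_timeout: u64, // seconds, as set in configuration.toml
    request_timeout_ms: u64,       // milliseconds, as used internally
}

impl AgentConfig {
    /// Convert once at configuration-load time so the rest of the runtime
    /// only deals with milliseconds.
    fn adjust(&mut self) {
        if self.create_container_timeout > 0 {
            self.request_timeout_ms = self.create_container_timeout * 1000;
        }
    }
}

fn main() {
    let mut cfg = AgentConfig { create_container_timeout: 60, request_timeout_ms: 30_000 };
    cfg.adjust();
    assert_eq!(cfg.request_timeout_ms, 60_000);
}
```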

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-13 14:18:43 +00:00
stevenhorsman
af2c2d9d00 runtime-rs: Add qemu-coco-dev-runtime-rs
Create non-tee runtime class for runtime-rs qemu CoCo development
without requiring TEE hardware. Based on the qemu-runtime-rs
config, but with updated guest image, kernel and shared_fs

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-13 14:18:43 +00:00
Amulyam24
b32b54c4af github: do not run agent checks for Power on ubuntu-24.04-ppc64le
The new environment of the Power runners for agent checks is causing two test case
failures, w.r.t. SELinux and inode, which need further investigation and are most
likely due to the environment change rather than the agent itself.

Fall back to running agent checks on the original ppc64le self-hosted runners.

Signed-off-by: Amulyam24 <amulmek1@in.ibm.com>
2025-11-13 15:56:43 +05:30
Gao Xiang
657c4406cd runtime: Add preliminary support for EROFS native rwlayers
So that the writable data will be written to separate storage
instead of tmpfs in the guest.

Note that a cleaner way would be to use the new containerd custom mount
type, but I don't have time for this for now.

For more details, see:
https://github.com/containerd/containerd/blob/v2.2.0/docs/snapshotters/erofs.md#quota-support

Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
2025-11-13 09:55:06 +01:00
Steve Horsman
92758a17fe Merge pull request #12078 from kata-containers/switch-to-ubuntu-24.04-arm-runner
workflows: Switch to ubuntu-24.04-arm runner
2025-11-12 16:35:52 +00:00
stevenhorsman
ba56a2c372 workflows: Switch to ubuntu-24.04-arm runner
As the arm 22.04 runner isn't working at the moment, let's test the
24.04 version to see if that is better.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-12 15:37:09 +00:00
Fabiano Fidêncio
a04cdbc40f tests: Enforce qemu-coco-dev for experimental_force_guest_pull
The fact that we were not explicitly setting the VMM was leading to us
testing with the default runtime class (qemu). :-/

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-12 16:07:05 +01:00
Wainer Moschetta
e31313ce9e Merge pull request #11030 from ldoktor/webhook2
tools.kata-webhook: Add support for only-filter
2025-11-12 11:21:23 -03:00
Hyounggyu Choi
2dec247a54 Merge pull request #12038 from lifupan/fix_smaller-memeory
runtime-rs: fix the issue of hot-unplug memory smaller
2025-11-12 11:22:04 +01:00
Markus Rudy
2c8d0688f2 Merge pull request #12068 from katexochen/p/full-controllers
genpolicy: support full DeploymentSpec, JobSpec; cleanup CronJobSpec
2025-11-12 10:35:38 +01:00
Fabiano Fidêncio
6d3c20bc45 riscv: Introduce its own nightly tests
By doing this, those interested in RISC-V support can still have
good visibility of its state, without the extra noise in our CI.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-12 09:46:17 +01:00
Zvonko Kaiser
d783e59b42 Merge pull request #12055 from fidencio/topic/coco-bump-trustee
versions: Bump Trustee
2025-11-12 02:48:16 -05:00
dependabot[bot]
edacdcb0bc build(deps): bump github.com/opencontainers/selinux in /src/runtime
Bumps [github.com/opencontainers/selinux](https://github.com/opencontainers/selinux) from 1.12.0 to 1.13.0.
- [Release notes](https://github.com/opencontainers/selinux/releases)
- [Commits](https://github.com/opencontainers/selinux/compare/v1.12.0...v1.13.0)

---
updated-dependencies:
- dependency-name: github.com/opencontainers/selinux
  dependency-version: 1.13.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-11 23:15:40 +01:00
Steve Horsman
1954dfe349 Merge pull request #12071 from stevenhorsman/update-required-test-docker-and-stratovirt
ci: Remove stratovirt & docker tests from required
2025-11-11 21:19:25 +00:00
Zvonko Kaiser
76e4e6bc24 Merge pull request #12061 from Apokleos/correct-unexpected-cap
tests: Correct unexpected capability for policy failure test
2025-11-11 12:20:33 -05:00
Fabiano Fidêncio
d82eb8d0f1 ci: Drop docker tests
We have had those tests broken for months. It's time to get rid of
those.

NOTE that we could easily revert this commit and re-add those tests as
soon as we find someone to maintain and be responsible for such
integration.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-11 17:02:02 +01:00
stevenhorsman
8b5df4d360 ci: Remove stratovirt & docker tests from required
As stratovirt CI was removed in #12006 we should remove the
jobs from required.
Also the docker tests have been commented out for months, and
we are considering removing them, so clean this file up.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-11 15:38:51 +00:00
Steve Horsman
4b33000c56 Merge pull request #12067 from Apokleos/fix-guest-emptydir
runtime-rs: Fix several incorrect settings with guest empty dir.
2025-11-11 15:21:31 +00:00
Lukáš Doktor
ca91073d83 tools.kata-webhook: Add support for only-filter
Sometimes it's hard to enumerate all blacklisted namespaces; let's add a
regular-expression-based "only" filter to allow specifying the namespaces
that should be mutated.
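
A rough illustration of the filter semantics (the real webhook is written in
Go; this sketch uses the Rust `regex` crate and invented names):

```rust
use regex::Regex;

/// Mutate a pod only when its namespace matches the user-supplied
/// "only" regular expression; an unset filter keeps the old behavior.
fn should_mutate(namespace: &str, only_filter: Option<&str>) -> bool {
    match only_filter {
        None => true, // no filter configured: fall back to blacklist-only logic
        Some(pattern) => Regex::new(pattern)
            .map(|re| re.is_match(namespace))
            .unwrap_or(false), // an invalid pattern selects nothing
    }
}

fn main() {
    assert!(should_mutate("kata-tests-123", Some(r"^kata-tests-")));
    assert!(!should_mutate("kube-system", Some(r"^kata-tests-")));
    assert!(should_mutate("kube-system", None));
}
```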

Signed-off-by: Lukáš Doktor <ldoktor@redhat.com>
2025-11-11 15:21:15 +01:00
dependabot[bot]
281f69a540 build(deps): bump github.com/containerd/containerd in /src/runtime
Bumps [github.com/containerd/containerd](https://github.com/containerd/containerd) from 1.7.27 to 1.7.29.
- [Release notes](https://github.com/containerd/containerd/releases)
- [Changelog](https://github.com/containerd/containerd/blob/main/RELEASES.md)
- [Commits](https://github.com/containerd/containerd/compare/v1.7.27...v1.7.29)

---
updated-dependencies:
- dependency-name: github.com/containerd/containerd
  dependency-version: 1.7.29
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-11 14:23:47 +01:00
Paul Meyer
ec6896e96b genpolicy: remove non-existing field from CronJobSpec
There is no backoffLimit on CronJobSpec, nor any additional fields.

Signed-off-by: Paul Meyer <katexochen0@gmail.com>
2025-11-11 11:12:48 +01:00
Paul Meyer
258aed3cd3 genpolicy: support full JobSpec
Based on https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.33/#job-v1-batch

The JOB_COMPLETION_INDEX env will be set if completionMode is "indexed".

Signed-off-by: Paul Meyer <katexochen0@gmail.com>
2025-11-11 11:12:48 +01:00
Paul Meyer
f0ffaa9a6b genpolicy: support full DeploymentSpec
The added fields are relevant only to the controller, so they should
not impact security and aren't of interest for policies.

Adding according to https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.33/#deployment-v1-apps

Signed-off-by: Paul Meyer <katexochen0@gmail.com>
2025-11-11 11:07:18 +01:00
Alex Lyn
79d1a6ed8f runtime-rs: Correct the mount type for emptydir with local storage
Previously setting the Mount.type to `bind` was wrong; for local
storage, the Mount type should be `local`.

This commit corrects the type to "local".

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-11 17:09:33 +08:00
Alex Lyn
935ecf2765 runtime-rs: Fix disable_guest_empty_dir parameters order
The disable_guest_empty_dir parameter order was wrong, which caused
the bool value to be incorrect and produced a wrong result.

This commit corrects the parameter order.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-11 16:59:00 +08:00
Fabiano Fidêncio
9d6f6bac37 agent-ctl: Bump image-rs version
Bump to the same version as CoCo Guest Components.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-11 08:08:24 +01:00
Fabiano Fidêncio
a5629a5a6f versions: Bump coco-guest-components
Usual bump before a release that will be consumed by Confidential
Containers.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-11 08:08:24 +01:00
Fabiano Fidêncio
2d2b0de160 tests: kbs: Try to get the pod logs on deployment failure
As this helps immensely to figure out what went wrong with the
deployment.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-11 08:08:24 +01:00
Fabiano Fidêncio
58df06d90e versions: Bump Trustee
This is a pre-release bump, which brings several fixes and some
improvements related to initData and NVIDIA's remote verifier.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-11 08:08:05 +01:00
Alex Lyn
c225cba0e6 tests: Correct unexpected capability for policy failure test
The test case designed to verify policy failures due to an "unexpected
capability" was misconfigured. It was using "CAP_SYS_CHROOT" as the
unexpected capability to be added.

This configuration was flawed for two main reasons:
1. Incorrect Syntax: Kubernetes Pod specs expect capability names without
   the "CAP_" prefix (e.g., "SYS_CHROOT", not "CAP_SYS_CHROOT").
   This made the test case's premise incorrect from a K8s API perspective.
2. Part of Default Set: "SYS_CHROOT" is already included in the
   `default_caps` list for a standard container. Therefore, adding it would
   not trigger a policy violation, defeating the purpose of the
   "unexpected capability" test.

Furthermore, a related issue was observed where a malformed capability
like "CAP_CAP_SYS_CHROOT" was being generated, causing parsing failures
in the `oci-spec-rs` library. This was a symptom of incorrect string
manipulation when handling capabilities.

This commit corrects the test by selecting "SYS_NICE" as the unexpected
capability. "SYS_NICE" is a more suitable choice because:
- It is a valid Linux capability.
- It is relatively harmless.
- It is **not** part of the default capability set defined in
  `genpolicy-settings.json`.

By using "SYS_NICE", the test now accurately simulates a scenario where
a Pod requests a legitimate but non-default capability, which the policy
(generated from a baseline Pod without this capability) should correctly
reject. This change fixes the test's logic and also resolves the
downstream `oci-spec-rs` parsing error by ensuring only valid capability
names are processed.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-11 14:06:30 +08:00
Alex Lyn
9aaf41a71b Merge pull request #11985 from Apokleos/policy-caps-rs
genpolicy: Correct caps matcher for runtime-rs
2025-11-11 11:08:11 +08:00
Alex Lyn
29fe46bc06 genpolicy: Correct caps matcher for runtime-rs
Detected a format mismatch in OCI Spec Capabilities fields between
`runtime-rs` (no `CAP_` prefix) and `runtime-go` (with `CAP_` prefix).

This introduces a normalization of caps in match_caps(p_caps, i_caps).
This ensures robust and consistent processing of Capabilities regardless
of whether the OCI Spec originates from `runtime-rs` or `runtime-go`.
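
A minimal sketch of the normalization idea (hypothetical helper names; the
actual match_caps() lives in genpolicy's policy code):

```rust
/// Normalize a capability name so that specs from runtime-go (with the
/// `CAP_` prefix) and runtime-rs (without it) compare equal.
fn normalize_cap(cap: &str) -> String {
    format!("CAP_{}", cap.strip_prefix("CAP_").unwrap_or(cap))
}

/// Every capability requested in the input spec must appear in the policy.
fn match_caps(p_caps: &[&str], i_caps: &[&str]) -> bool {
    let policy: Vec<String> = p_caps.iter().map(|&c| normalize_cap(c)).collect();
    i_caps.iter().all(|&c| policy.contains(&normalize_cap(c)))
}

fn main() {
    // "SYS_CHROOT" (runtime-rs style) matches "CAP_SYS_CHROOT" (runtime-go style).
    assert!(match_caps(&["CAP_SYS_CHROOT"], &["SYS_CHROOT"]));
    assert!(!match_caps(&["CAP_SYS_CHROOT"], &["SYS_NICE"]));
}
```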

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-11 10:03:54 +08:00
Dan Mihai
f78584e868 Merge pull request #12048 from manuelh-dev/mahuber/bb-build
deploy: Improve busybox build
2025-11-10 11:32:07 -08:00
Alex Lyn
7423eb7a30 agent: Support both virtio-blk and virtio-scsi devices for initdata
Currently, the initdata module only detects virtio-blk devices
(/dev/vd*) when searching for the initdata block device. However,
when using virtio-scsi, the devices appear as /dev/sd* in the
guest, causing the initdata detection to fail.

This commit extends the device detection logic to support both
device types:
- virtio-blk devices: /dev/vda, /dev/vdb, etc.
- virtio-scsi devices: /dev/sda, /dev/sdb, etc.

This commit aims to address the issue of the initdata device not being
found when using virtio-scsi.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-10 18:03:23 +01:00
dependabot[bot]
f699f097f3 build(deps): bump github.com/opencontainers/runc in /src/runtime
Bumps [github.com/opencontainers/runc](https://github.com/opencontainers/runc) from 1.2.6 to 1.2.8.
- [Release notes](https://github.com/opencontainers/runc/releases)
- [Changelog](https://github.com/opencontainers/runc/blob/v1.2.8/CHANGELOG.md)
- [Commits](https://github.com/opencontainers/runc/compare/v1.2.6...v1.2.8)

---
updated-dependencies:
- dependency-name: github.com/opencontainers/runc
  dependency-version: 1.2.8
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-10 15:43:48 +01:00
Fabiano Fidêncio
92226d0a19 tests: nvidia: Be prepared for TDX
Thankfully there's only one piece that's still SNP specific (for the
supported TEEs).  Let's adjust it so we can have an easy and smooth
execution when adding a TDX CI machine.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-10 13:01:30 +01:00
Fabiano Fidêncio
4d314e8676 tests: nvidia: nims: Adjust to CC
There are several changes needed in order to get this test working with
CC, and yet we still are skipping it.

Basically, we need to:
* Pull an authenticated image inside the guest, which requires:
 * Using Trustee to release the credential
   * We still depend on a PR to be merged on Trustee side
     * https://github.com/confidential-containers/trustee/pull/1035
   * We still depend on a Trustee bump (including the PR above) on our
     side

Apart from those changes, I ended up "duplicating" the tests by adding a
"-tee" version of those, which already have:
* The proper kbs annotations set up
* Dropped host mounts
* Increased the memory needed

Last but not least, as "bats" probably means "being a terrible script",
I had to re-arrange a few things otherwise the tests would not even run
due to bats-isms that I am sincerely not able to pin-point.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-10 13:01:30 +01:00
Fabiano Fidêncio
8cedd96d54 tests: nvidia: k8s: Enforce experimental_force_guest_pull
We added the tests using virtio-9p as we knew it'd require incremental
changes to be able to use any kind of guest-pull method.

Now, as in the coming commits we'll be actually ensuring that guest-pull
works and is in use, we can enforce the experimental_force_guest_pull
usage for the nvidia cases.

Note: We're using experimental_force_guest_pull instead of
nydus-snapshotter due to stability concerns with the snapshotter.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-10 13:01:30 +01:00
Fabiano Fidêncio
464764c7e0 tests: nvidia: kbs: Ensure KBS_INGRESS=nodeport
I missed doing this during the KBS deployment setup.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-10 13:01:30 +01:00
Manuel Huber
a5cd7235cb runtime: Align nvidia TEEs enable_annotations with TEEs
It was just missed when adding those configurations.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-10 13:01:30 +01:00
Fabiano Fidêncio
e85cf83573 k8s: tests: Fix default for EXPERIMENTAL_FORCE_GUEST_PULL
It takes either a shim name or "", but we were treating this (thankfully
only in this specific file) as a boolean.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-10 13:01:30 +01:00
Manuel Huber
8b39468b36 tests: nvidia: Logging for NIM
Adjust output to the setup_file and teardown_file behavior.
With this, we will be able to observe relevant logging rather than
adding to the output variable.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-11-10 13:01:30 +01:00
Fabiano Fidêncio
812191c1f3 tests: nvidia: Do not deploy NFD on nvidia-gpu cases
As it'll come from the GPU Operator for now.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-10 13:01:30 +01:00
Pavel Mores
74f9fdb11f runtime-rs: remove hardcoding of SEV physical address reduction
The previous commit enabled getting the physical address reduction from the
processor but just stored it for later use. This commit adds handling
of the value to ProtectionDevice and enables the QEMU driver to use it.

Signed-off-by: Pavel Mores <pmores@redhat.com>
2025-11-10 13:01:03 +01:00
Pavel Mores
6f9178d290 runtime-rs: get SEV params using CPUID and store them in SevSnpDetails
An implementation of cbitpos acquisition, which was missing so far, is
supplied. We also get the physical address reduction value from the same
source (the CPUID Fn8000_001f function). This has been hardcoded at 1 so far,
following the Go runtime example, but it's better to get it from the
processor.

Signed-off-by: Pavel Mores <pmores@redhat.com>
2025-11-10 13:01:03 +01:00
Greg Kurz
5810279edf Merge pull request #12008 from microsoft/saulparedes/allow_priv
webhook: allow privileged containers
2025-11-10 11:13:41 +01:00
Zvonko Kaiser
df58972d41 Merge pull request #12051 from microsoft/danmihai1/agent-version
agent: update version.rs when VERSION file changed
2025-11-09 20:34:58 -05:00
Fabiano Fidêncio
37d4eb0b77 ci: nvidia: Ensure K8S_TEST_HOST_TYPE=baremetal
So the proper cleanups are performed in case something goes awry in a
previous run.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-09 10:51:33 +01:00
Dan Mihai
7b10f4c72a agent: update version.rs when VERSION file changed
- version.rs gets generated from version.rs.in
- version.rs.in contains values read from VERSION
- so version.rs (and maybe other Agent files too) must be
  re-generated when the VERSION file changes

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-11-08 17:53:09 +00:00
Alex Lyn
83b0a59215 Merge pull request #12046 from Apokleos/disable-guest-emptydir
Disable guest emptydir
2025-11-08 11:54:15 +08:00
Dan Mihai
df7ee2dd38 ci: k8s: AUTO_GENERATE_POLICY for cbl-mariner
Auto-generate policy on cbl-mariner Hosts if the user didn't
specify an AUTO_GENERATE_POLICY value.

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-11-08 00:00:09 +01:00
Dan Mihai
53acb74f26 genpolicy: adapt to new AKS pause container behavior
The new image reference has changed to mcr.microsoft.com/oss/v2/kubernetes/pause:3.6
from mcr.microsoft.com/oss/kubernetes/pause:3.6.

The new image uses UID=0, GID=0 by default, while the older image had
UID=65535, GID=65535.

There is a new pause_container_id_policy field in genpolicy-settings.json, informing
genpolicy about the way AdditionalGids gets updated - "v1" for the older behavior
and "v2" for the newer AKS version:
- When using v1, the default value of AdditionalGids is {65535}.
- When using v2, the default value of AdditionalGids is {}.
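
A tiny sketch of how the v1/v2 defaults listed above could be selected
(illustrative Rust only; the real logic lives in genpolicy and its settings
handling):

```rust
/// Illustrative only: pick the default AdditionalGids for the pause
/// container based on the pause_container_id_policy setting.
fn default_additional_gids(pause_container_id_policy: &str) -> Vec<u32> {
    match pause_container_id_policy {
        "v1" => vec![65535], // older AKS pause image: UID/GID 65535
        _ => Vec::new(),     // "v2" and newer: UID/GID 0, no extra GIDs
    }
}

fn main() {
    assert_eq!(default_additional_gids("v1"), vec![65535]);
    assert!(default_additional_gids("v2").is_empty());
}
```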

UID=65535 and GID=65535 are still hard-coded by default in genpolicy-settings.json.
We might be able to remove/ignore these fields in the future, if we'll stop relying
on policy::KataSpec::get_process_fields to use these fields.

A new CI function adapt_common_policy_settings_for_aks() changes the pause container
UID, GID, pause_container_id_policy, and image ref settings values when testing on
AKS Hosts - i.e., when testing coco-dev or mariner Hosts.

The genpolicy workarounds for the unexpected behavior with guest pull enabled have
been improved to use the current container's GID instead of hard-coding GID=0 as the
guest pull default. Also, AdditionalGids gets updated when the current container's GID is
changing, instead of always changing the AdditionalGids at the very end of
policy::AgentPolicy::get_container_process(), when the relevant evolution of the GID
value was no longer available.

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-11-08 00:00:09 +01:00
Dan Mihai
1f784bb770 genpolicy: improve policy generation comments
Make it easier to understand the source of the UID/GID/AdditionalGids
values from the container in the auto-generated policy.

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-11-08 00:00:09 +01:00
Dan Mihai
969b8e0fb8 genpolicy: more detailed UID/GID debug logs
Add more details to code paths handling UID/GID values, for easier
debugging.

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-11-08 00:00:09 +01:00
Dan Mihai
cacd37ee6e tests: genpolicy: restore test settings for non-Coco configMap
These settings got broken recently because the non-CoCo tests were
disabled for unrelated reasons.

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2025-11-08 00:00:09 +01:00
Manuel Huber
caff6df827 deploy: Improve busybox build
Parallelize busybox builds to build a bit faster, and create the
build directory prior to Docker execution, which, on my
environment, helps with permission issues when building busybox
if the kata-containers/build directory does not exist beforehand.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2025-11-07 10:09:57 -08:00
Alex Lyn
23024876b2 runtime-rs: Use the configurable disable_guest_empty_dir
Correct the hardcoded value of disable_guest_empty_dir; instead, use
the real value, which comes from the configuration.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-07 19:52:11 +08:00
Alex Lyn
382924bdf3 kata-sys-util: Introduce a sandbox annotation for disable guest emptydir
A sandbox annotation that determines if it should create Kubernetes
emptyDir mounts on the guest filesystem.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-07 19:48:42 +08:00
Alex Lyn
720a229579 kata-types: Introduce disable guest emptydir flag
It determines whether Kubernetes emptyDir mounts should be created on the
guest filesystem. If enabled, the runtime will not create Kubernetes
emptyDir mounts on the guest filesystem. Instead, emptyDir mounts will
be created on the host and shared via virtio-fs.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-07 19:45:55 +08:00
Fabiano Fidêncio
03e06fdf4d tests: nvidia: Deploy Trustee
Let's ensure Trustee is deployed, as some of the tests rely on images that
live behind authentication. /o\

The approach taken here to deploy Trustee is exactly the same one taken
for the other CoCo tests, apart from an env var passed to ensure we're
using the NVIDIA remote verifier (which will come in handy very very
soon).

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-07 12:32:11 +01:00
Pavel Mores
841fee28da runtime-rs: add a helper to run external command and capture its output
This isn't really related to remote hypervisor though it was useful for
its debugging.  It's a small helper I've been using regularly during
development for quite some time that I think might be useful more broadly.

Signed-off-by: Pavel Mores <pmores@redhat.com>
2025-11-07 10:49:14 +01:00
Pavel Mores
72c704b287 runtime-rs: make error reporting for CreateVM a bit more explicit
A naked ttrpc error with no context turns out to be rather hard to
understand or even notice in the log.

Signed-off-by: Pavel Mores <pmores@redhat.com>
2025-11-07 10:49:14 +01:00
Pavel Mores
45d8141edc runtime-rs: remote hv needs neither image nor initrd specified in config
The remote hypervisor launches no VM, it just instructs the Cloud API
Adaptor to do so, therefore it has no need for an image or initrd to boot
from and should be exempt from the mandate for one or the other to be
specified.

Signed-off-by: Pavel Mores <pmores@redhat.com>
2025-11-07 10:49:14 +01:00
Pavel Mores
80ef102a00 runtime-rs: fix scoping of the remote hv Hypervisor service
The go runtime's .proto file - which is also used by the Cloud API
Adaptor - puts the Hypervisor service into the "hypervisor" package.
runtime-rs has to do the same to avoid an "unimplemented" error.

Signed-off-by: Pavel Mores <pmores@redhat.com>
2025-11-07 10:49:14 +01:00
Alex Lyn
d5e2071869 Merge pull request #11921 from Apokleos/enhance-copyfile2
runtime-rs: Add support LocalStorage for emptyDir within nontee cases
2025-11-07 16:58:39 +08:00
Fupan Li
bfe8da6c8a tests: disable the qemu-runtime-rs cpu hotplug test
Since there's something wrong with CPU hotplug
on qemu-runtime-rs, disable this test temporarily.

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2025-11-06 21:37:01 +08:00
Fupan Li
3b1bfea609 runtime-rs: fix the issue of hot-unplug memory smaller
It should do nothing instead of returning an error when
hot-unplugging memory to a size smaller than the statically
plugged memory size.
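
A simplified sketch of the behavior described above, assuming the resize
request is expressed as a total guest memory size (names invented; not the
actual runtime-rs driver code):

```rust
/// Sketch: compute how much memory to hotplug for a requested total size.
/// When the request is at or below the statically plugged memory there is
/// nothing to hot-unplug, so the request is clamped instead of rejected
/// with an error.
fn hotplug_size_mb(requested_total_mb: u32, static_mb: u32) -> u32 {
    requested_total_mb.saturating_sub(static_mb)
}

fn main() {
    // 2048 MiB static memory: a 1024 MiB request keeps just the static memory.
    assert_eq!(hotplug_size_mb(1024, 2048), 0);
    // A 4096 MiB request hotplugs the 2048 MiB above the static memory.
    assert_eq!(hotplug_size_mb(4096, 2048), 2048);
}
```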

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2025-11-06 18:19:55 +08:00
Alex Lyn
8f0dd4c44b runtime-rs: Introduce disable_guest_empty_dir flag
This commit introduces the configuration flag `disable_guest_empty_dir`
to control the placement of Kubernetes emptyDir volumes.

By default, the value is set to `false`, maintaining the current
behavior of creating emptyDirs within the guest VM.

When set to `true`, emptyDirs will be created on the host filesystem.
This is essential for scenarios where users need to share data between
the host and the guest VM via an emptyDir.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-05 15:05:45 +08:00
Alex Lyn
205c3dac44 runtime-rs: Add rprivate and rw options for memory emptyDir mounts
When handling a memory-based emptyDir, the runtime creates a tmpfs
mount inside the guest VM. The previous implementation supported
only the "rbind" mount option, which does not explicitly guarantee
the desired mount propagation behavior.

This commit hardens the mounting process by explicitly adding the
`rprivate` and `rw` mount flags.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-05 15:05:45 +08:00
Alex Lyn
fac9c795c6 runtime-rs: Add 'local' volume to support k8s emptyDir
This commit introduces the 'local' volume, which is specifically
designed to create and manage Kubernetes emptyDir volumes directly
within the VM's sandbox directory.

The core functionality ensures that the local volume can be handled
correctly in the volume-handling procedure.

This capability is essential for allowing containers to leverage the
storage backend for shared volumes.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-05 15:05:45 +08:00
Alex Lyn
1696968eb1 runtime-rs: Implement 'local' storage type for k8s emptyDir volumes
This commit implements the new 'local' storage type, enabling Kubernetes
emptyDir volumes to be created and managed directly inside the Kata VM
(in the sandbox directory).

The 'local' type instructs the kata-agent to provision the empty
directory within the VM.

This approach allows containers to share storage inside the VM, which is
especially useful in CoCo emptyDir scenarios.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-05 15:05:22 +08:00
Alex Lyn
b58a53bfa4 kata-sys-util: Improve handling of Kubernetes emptyDir volumes
Separated the checks for tmpfs and disk-based emptyDirs from an
`if-else if` block into two distinct `if` statements. This clarifies
the logic by treating each volume type detection as an independent task.

Additionally, updated the type for disk-based emptyDirs to the more
semantically accurate `KATA_K8S_LOCAL_STORAGE_TYPE`. This allows for
more specific handling downstream, distinguishing them from generic
host path mounts.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-05 14:59:21 +08:00
Alex Lyn
c39c6f1ae4 kata-sys-utils: Correct the judgement of logic of host emptyDir
In fact, an emptyDir is not usually found in the proc mounts, so the
previous logic failed.

This change is based on the related implementation within runtime-go.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-05 14:59:21 +08:00
Alex Lyn
f278616bf7 kata-types: Introduce a new storage type of "local"
This introduces a new storage type: local. The local storage type tells
kata-agent to create an empty directory with the LocalStorage handler
in the sandbox directory within the VM.

And it also makes it align with runtime-go `KataLocalDevType = "local"`.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-05 14:59:21 +08:00
Saul Paredes
26396881cf webhook: allow privileged containers
This allows us to test privileged containers when using the webhook.
We can do this because kata-deploy sets privileged_without_host_devices = true for kata runtime by default.

Signed-off-by: Saul Paredes <saulparedes@microsoft.com>
2025-10-30 14:59:26 -07:00
268 changed files with 11488 additions and 2500 deletions

View File

@@ -31,4 +31,4 @@ self-hosted-runner:
- s390x
- s390x-large
- tdx
- ubuntu-22.04-arm
- ubuntu-24.04-arm

View File

@@ -279,50 +279,6 @@ jobs:
timeout-minutes: 15
run: bash tests/functional/vfio/gha-run.sh run
run-docker-tests:
name: run-docker-tests
strategy:
# We can set this to true whenever we're 100% sure that
# all the tests are not flaky, otherwise we'll fail them
# all due to a single flaky instance.
fail-fast: false
matrix:
vmm:
- qemu
runs-on: ubuntu-22.04
env:
KATA_HYPERVISOR: ${{ matrix.vmm }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Install dependencies
run: bash tests/integration/docker/gha-run.sh install-dependencies
env:
GH_TOKEN: ${{ github.token }}
- name: get-kata-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-artifacts
- name: Install kata
run: bash tests/integration/docker/gha-run.sh install-kata kata-artifacts
- name: Run docker smoke test
timeout-minutes: 5
run: bash tests/integration/docker/gha-run.sh run
run-nerdctl-tests:
name: run-nerdctl-tests
strategy:

View File

@@ -106,44 +106,3 @@ jobs:
- name: Run containerd-stability tests
timeout-minutes: 15
run: bash tests/stability/gha-run.sh run
run-docker-tests:
name: run-docker-tests
strategy:
# We can set this to true whenever we're 100% sure that
# all the tests are not flaky, otherwise we'll fail them
# all due to a single flaky instance.
fail-fast: false
matrix:
vmm: ['qemu']
runs-on: s390x-large
env:
KATA_HYPERVISOR: ${{ matrix.vmm }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Install dependencies
run: bash tests/integration/docker/gha-run.sh install-dependencies
- name: get-kata-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-s390x${{ inputs.tarball-suffix }}
path: kata-artifacts
- name: Install kata
run: bash tests/integration/docker/gha-run.sh install-kata kata-artifacts
- name: Run docker smoke test
timeout-minutes: 5
run: bash tests/integration/docker/gha-run.sh run

View File

@@ -12,7 +12,7 @@ name: Build checks
jobs:
check:
name: check
runs-on: ${{ matrix.component.name == 'runtime' && inputs.instance == 'ubuntu-24.04-s390x' && 's390x' || matrix.component.name == 'runtime' && inputs.instance == 'ubuntu-24.04-ppc64le' && 'ppc64le' || inputs.instance }}
runs-on: ${{ matrix.runner || inputs.instance }}
strategy:
fail-fast: false
matrix:
@@ -68,6 +68,38 @@ jobs:
needs:
- rust
- protobuf-compiler
instance:
- ${{ inputs.instance }}
include:
- component:
name: runtime
path: src/runtime
needs:
- golang
- XDG_RUNTIME_DIR
instance: ubuntu-24.04-s390x
runner: s390x
- component:
name: runtime
path: src/runtime
needs:
- golang
- XDG_RUNTIME_DIR
instance: ubuntu-24.04-ppc64le
runner: ppc64le
- component:
name: agent
path: src/agent
needs:
- rust
- libdevmapper
- libseccomp
- protobuf-compiler
- clang
instance: ubuntu-24.04-ppc64le
runner: ppc64le
steps:
- name: Adjust a permission for repo

View File

@@ -31,7 +31,7 @@ permissions: {}
jobs:
build-asset:
name: build-asset
runs-on: ubuntu-22.04-arm
runs-on: ubuntu-24.04-arm
permissions:
contents: read
packages: write
@@ -141,7 +141,7 @@ jobs:
build-asset-rootfs:
name: build-asset-rootfs
runs-on: ubuntu-22.04-arm
runs-on: ubuntu-24.04-arm
needs: build-asset
permissions:
contents: read
@@ -209,7 +209,7 @@ jobs:
# We don't need the binaries installed in the rootfs as part of the release tarball, so can delete them now we've built the rootfs
remove-rootfs-binary-artifacts:
name: remove-rootfs-binary-artifacts
runs-on: ubuntu-22.04-arm
runs-on: ubuntu-24.04-arm
needs: build-asset-rootfs
strategy:
matrix:
@@ -224,7 +224,7 @@ jobs:
# We don't need the binaries installed in the rootfs as part of the release tarball, so can delete them now we've built the rootfs
remove-rootfs-binary-artifacts-for-release:
name: remove-rootfs-binary-artifacts-for-release
runs-on: ubuntu-22.04-arm
runs-on: ubuntu-24.04-arm
needs: build-asset-rootfs
strategy:
matrix:
@@ -238,7 +238,7 @@ jobs:
build-asset-shim-v2:
name: build-asset-shim-v2
runs-on: ubuntu-22.04-arm
runs-on: ubuntu-24.04-arm
needs: [build-asset, build-asset-rootfs, remove-rootfs-binary-artifacts, remove-rootfs-binary-artifacts-for-release]
permissions:
contents: read
@@ -298,7 +298,7 @@ jobs:
create-kata-tarball:
name: create-kata-tarball
runs-on: ubuntu-22.04-arm
runs-on: ubuntu-24.04-arm
needs: [build-asset, build-asset-rootfs, build-asset-shim-v2]
permissions:
contents: read

View File

@@ -20,9 +20,6 @@ on:
required: false
type: string
default: ""
secrets:
QUAY_DEPLOYER_PASSWORD:
required: true
permissions: {}
@@ -41,14 +38,6 @@ jobs:
- kernel
- virtiofsd
steps:
- name: Login to Kata Containers quay.io
if: ${{ inputs.push-to-registry == 'yes' }}
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: quay.io
username: ${{ vars.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
@@ -82,5 +71,5 @@ jobs:
with:
name: kata-artifacts-riscv64-${{ matrix.asset }}${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}.tar.zst
retention-days: 15
retention-days: 3
if-no-files-found: error

.github/workflows/ci-nightly-riscv.yaml (new file, +34 lines)
View File

@@ -0,0 +1,34 @@
on:
schedule:
- cron: '0 5 * * *'
name: Nightly CI for RISC-V
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions: {}
jobs:
build-kata-static-tarball-riscv:
permissions:
contents: read
packages: write
id-token: write
attestations: write
uses: ./.github/workflows/build-kata-static-tarball-riscv64.yaml
with:
tarball-suffix: -${{ github.sha }}
commit-hash: ${{ github.sha }}
target-branch: ${{ github.ref_name }}
build-checks-preview:
strategy:
fail-fast: false
matrix:
instance:
- "riscv-builder"
uses: ./.github/workflows/build-checks-preview-riscv64.yaml
with:
instance: ${{ matrix.instance }}

View File

@@ -102,7 +102,7 @@ jobs:
tag: ${{ inputs.tag }}-arm64
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
runner: ubuntu-22.04-arm
runner: ubuntu-24.04-arm
arch: arm64
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
@@ -134,20 +134,6 @@ jobs:
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
build-kata-static-tarball-riscv64:
permissions:
contents: read
packages: write
id-token: write
attestations: write
uses: ./.github/workflows/build-kata-static-tarball-riscv64.yaml
with:
tarball-suffix: -${{ inputs.tag }}
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
publish-kata-deploy-payload-s390x:
needs: build-kata-static-tarball-s390x
permissions:

View File

@@ -97,7 +97,7 @@ jobs:
repo: kata-containers/kata-deploy-ci
tag: kata-containers-latest-arm64
target-branch: ${{ github.ref_name }}
runner: ubuntu-22.04-arm
runner: ubuntu-24.04-arm
arch: arm64
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}

View File

@@ -34,7 +34,7 @@ jobs:
permissions:
contents: read
packages: write
runs-on: ubuntu-22.04-arm
runs-on: ubuntu-24.04-arm
steps:
- name: Login to Kata Containers ghcr.io
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0

View File

@@ -45,7 +45,7 @@ jobs:
GH_PR_NUMBER: ${{ inputs.pr-number }}
KATA_HYPERVISOR: ${{ matrix.environment.vmm }}
KUBERNETES: kubeadm
K8S_TEST_HOST_TYPE: all
K8S_TEST_HOST_TYPE: baremetal
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
@@ -59,6 +59,24 @@ jobs:
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Uninstall previous `kbs-client`
if: matrix.environment.name != 'nvidia-gpu'
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh uninstall-kbs-client
- name: Deploy CoCo KBS
if: matrix.environment.name != 'nvidia-gpu'
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh deploy-coco-kbs
env:
NVIDIA_VERIFIER_MODE: remote
KBS_INGRESS: nodeport
- name: Install `kbs-client`
if: matrix.environment.name != 'nvidia-gpu'
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh install-kbs-client
- name: Deploy Kata
timeout-minutes: 20
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata
@@ -87,3 +105,8 @@ jobs:
if: always()
timeout-minutes: 15
run: bash tests/integration/kubernetes/gha-run.sh cleanup
- name: Delete CoCo KBS
if: always() && matrix.environment.name != 'nvidia-gpu'
run: |
bash tests/integration/kubernetes/gha-run.sh delete-coco-kbs

View File

@@ -131,12 +131,14 @@ jobs:
matrix:
vmm:
- qemu-coco-dev
- qemu-coco-dev-runtime-rs
snapshotter:
- nydus
pull-type:
- guest-pull
include:
- pull-type: experimental-force-guest-pull
vmm: qemu-coco-dev
snapshotter: ""
runs-on: ubuntu-22.04
permissions:
@@ -249,6 +251,7 @@ jobs:
- name: Delete AKS cluster
if: always()
timeout-minutes: 15
run: bash tests/integration/kubernetes/gha-run.sh delete-cluster
# Generate jobs for testing CoCo on non-TEE environments with erofs-snapshotter

View File

@@ -28,21 +28,9 @@ jobs:
fail-fast: false
matrix:
instance:
- "ubuntu-22.04-arm"
- "ubuntu-24.04-arm"
- "s390x"
- "ubuntu-24.04-ppc64le"
uses: ./.github/workflows/build-checks.yaml
with:
instance: ${{ matrix.instance }}
build-checks-preview:
needs: skipper
if: ${{ needs.skipper.outputs.skip_static != 'yes' }}
strategy:
fail-fast: false
matrix:
instance:
- "riscv-builder"
uses: ./.github/workflows/build-checks-preview-riscv64.yaml
with:
instance: ${{ matrix.instance }}

View File

@@ -122,7 +122,7 @@ $(TARGET): $(GENERATED_CODE) $(TARGET_PATH)
$(TARGET_PATH): show-summary
@RUSTFLAGS="$(EXTRA_RUSTFLAGS) --deny warnings" cargo build --target $(TRIPLE) $(if $(findstring release,$(BUILD_TYPE)),--release) $(EXTRA_RUSTFEATURES)
$(GENERATED_FILES): %: %.in
$(GENERATED_FILES): %: %.in $(VERSION_FILE)
@sed $(foreach r,$(GENERATED_REPLACEMENTS),-e 's|@$r@|$($r)|g') "$<" > "$@"
##TARGET optimize: optimized build

View File

@@ -39,6 +39,12 @@ pub const CDH_CONFIG_PATH: &str = concatcp!(INITDATA_PATH, "/cdh.toml");
/// Magic number of initdata device
pub const INITDATA_MAGIC_NUMBER: &[u8] = b"initdata";
/// initdata device with disk type 'vd*'
const INITDATA_PREFIX_DISK_VDX: &str = "vd";
/// initdata device with disk type 'sd*'
const INITDATA_PREFIX_DISK_SDX: &str = "sd";
async fn detect_initdata_device(logger: &Logger) -> Result<Option<String>> {
let dev_dir = Path::new("/dev");
let mut read_dir = tokio::fs::read_dir(dev_dir).await?;
@@ -46,9 +52,15 @@ async fn detect_initdata_device(logger: &Logger) -> Result<Option<String>> {
let filename = entry.file_name();
let filename = filename.to_string_lossy();
debug!(logger, "Initdata check device `{filename}`");
if !filename.starts_with("vd") {
// Currently there're two disk types supported:
// virtio-blk (vd*) and virtio-scsi (sd*)
if !filename.starts_with(INITDATA_PREFIX_DISK_VDX)
&& !filename.starts_with(INITDATA_PREFIX_DISK_SDX)
{
continue;
}
let path = entry.path();
debug!(logger, "Initdata find potential device: `{path:?}`");

View File

@@ -42,15 +42,15 @@ pub fn is_ephemeral_volume(mount: &Mount) -> bool {
/// K8s `EmptyDir` volumes are directories on the host. If the fs type is tmpfs, it's a ephemeral
/// volume instead of a `EmptyDir` volume.
pub fn is_host_empty_dir(path: &str) -> bool {
if is_empty_dir(path) {
if let Ok(info) = get_linux_mount_info(path) {
if info.fs_type != "tmpfs" {
return true;
}
}
if !is_empty_dir(path) {
return false;
}
false
match get_linux_mount_info(path) {
Ok(info) => info.fs_type != "tmpfs",
Err(crate::mount::Error::NoMountEntry(_)) => true,
Err(_) => false,
}
}
// update_ephemeral_storage_type sets the mount type to 'ephemeral'
@@ -58,7 +58,7 @@ pub fn is_host_empty_dir(path: &str) -> bool {
// For the given pod ephemeral volume is created only once
// backed by tmpfs inside the VM. For successive containers
// of the same pod the already existing volume is reused.
pub fn update_ephemeral_storage_type(oci_spec: &mut Spec) {
pub fn update_ephemeral_storage_type(oci_spec: &mut Spec, disable_guest_empty_dir: bool) {
if let Some(mounts) = oci_spec.mounts_mut() {
for m in mounts.iter_mut() {
if let Some(typ) = &m.typ() {
@@ -69,13 +69,12 @@ pub fn update_ephemeral_storage_type(oci_spec: &mut Spec) {
if let Some(source) = &m.source() {
let mnt_src = &source.display().to_string();
//here we only care about the "bind" mount volume.
// We only care about the "bind" mount volume here.
if is_ephemeral_volume(m) {
m.set_typ(Some(String::from(mount::KATA_EPHEMERAL_VOLUME_TYPE)));
} else if is_host_empty_dir(mnt_src) {
// FIXME support disable_guest_empty_dir
// https://github.com/kata-containers/kata-containers/blob/02a51e75a7e0c6fce5e8abe3b991eeac87e09645/src/runtime/pkg/katautils/create.go#L105
m.set_typ(Some(String::from(mount::KATA_HOST_DIR_VOLUME_TYPE)));
}
if is_host_empty_dir(mnt_src) && !disable_guest_empty_dir {
m.set_typ(Some(mount::KATA_K8S_LOCAL_STORAGE_TYPE.to_string()));
}
}
}

View File

@@ -6,6 +6,8 @@
#[cfg(any(target_arch = "s390x", target_arch = "x86_64", target_arch = "aarch64"))]
use anyhow::Result;
use serde::{Deserialize, Serialize};
#[cfg(target_arch = "x86_64")]
use std::arch::x86_64;
use std::fmt;
#[cfg(all(target_arch = "powerpc64", target_endian = "little"))]
use std::fs;
@@ -26,6 +28,7 @@ use std::fs;
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
pub struct SevSnpDetails {
pub cbitpos: u32,
pub phys_addr_reduction: u32,
}
#[allow(dead_code)]
@@ -117,17 +120,44 @@ pub fn arch_guest_protection(
Ok(false)
};
let retrieve_sev_cbitpos = || -> Result<u32, ProtectionError> {
Err(ProtectionError::CheckFailed(
"cbitpos retrieval NOT IMPLEMENTED YET".to_owned(),
))
let retrieve_sev_params = || -> Result<(u32, u32), ProtectionError> {
// The initial checks for AMD and SEV shouldn't be necessary due to
// the context this function is currently called from, however it
// shouldn't hurt to double-check and have better logging if anything
// goes wrong.
let fn0 = unsafe { x86_64::__cpuid(0) };
// The values in [ ebx, edx, ecx ] spell out "AuthenticAMD" when
// interpreted byte-wise as ASCII. No need to bother here with an
// actual conversion to string though.
// See also AMD64 Architecture Programmer's Manual pg. 600
// https://www.amd.com/content/dam/amd/en/documents/processor-tech-docs/programmer-references/24594.pdf
if fn0.ebx != 0x68747541 || fn0.edx != 0x69746e65 || fn0.ecx != 0x444d4163 {
return Err(ProtectionError::CheckFailed(
"Not an AMD processor".to_owned(),
));
}
// AMD64 Architecture Prgrammer's Manual Fn8000_001f docs on pg. 640
let fn8000_001f = unsafe { x86_64::__cpuid(0x8000_001f) };
if fn8000_001f.eax & 0x10 == 0 {
return Err(ProtectionError::CheckFailed("SEV not supported".to_owned()));
}
let cbitpos = fn8000_001f.ebx & 0b11_1111;
let phys_addr_reduction = (fn8000_001f.ebx & 0b1111_1100_0000) >> 6;
Ok((cbitpos, phys_addr_reduction))
};
let is_snp_available = check_contents(snp_path)?;
let is_sev_available = is_snp_available || check_contents(sev_path)?;
if is_snp_available || is_sev_available {
let cbitpos = retrieve_sev_cbitpos()?;
let sev_snp_details = SevSnpDetails { cbitpos };
let (cbitpos, phys_addr_reduction) = retrieve_sev_params()?;
let sev_snp_details = SevSnpDetails {
cbitpos,
phys_addr_reduction,
};
return Ok(if is_snp_available {
GuestProtection::Snp(sev_snp_details)
} else {

View File

@@ -28,7 +28,7 @@ toml = "0.5.8"
serde-enum-str = "0.4"
sysinfo = "0.34.2"
sha2 = "0.10.8"
flate2 = { version = "1.0", features = ["zlib"] }
flate2 = "1.1"
hex = "0.4"
oci-spec = { version = "0.8.1", features = ["runtime"] }

View File

@@ -296,6 +296,10 @@ pub const KATA_ANNO_CFG_RUNTIME_AGENT: &str = "io.katacontainers.config.runtime.
/// A sandbox annotation that determines if seccomp should be applied inside guest.
pub const KATA_ANNO_CFG_DISABLE_GUEST_SECCOMP: &str =
"io.katacontainers.config.runtime.disable_guest_seccomp";
/// A sandbox annotation that determines if it should create Kubernetes emptyDir mounts on the guest filesystem.
pub const KATA_ANNO_CFG_DISABLE_GUEST_EMPTY_DIR: &str =
"io.katacontainers.config.runtime.disable_guest_empty_dir";
/// A sandbox annotation that determines if pprof is enabled.
pub const KATA_ANNO_CFG_ENABLE_PPROF: &str = "io.katacontainers.config.runtime.enable_pprof";
/// A sandbox annotation that determines if experimental features are enabled.
@@ -1023,6 +1027,14 @@ impl Annotation {
return Err(bool_err);
}
},
KATA_ANNO_CFG_DISABLE_GUEST_EMPTY_DIR => match self.get_value::<bool>(key) {
Ok(r) => {
config.runtime.disable_guest_empty_dir = r.unwrap_or_default();
}
Err(_e) => {
return Err(bool_err);
}
},
KATA_ANNO_CFG_ENABLE_PPROF => match self.get_value::<bool>(key) {
Ok(r) => {
config.runtime.enable_pprof = r.unwrap_or_default();

View File

@@ -148,6 +148,10 @@ pub struct Agent {
/// Memory agent configuration
#[serde(default)]
pub mem_agent: MemAgent,
/// Agent policy
#[serde(default)]
pub policy: String,
}
fn deserialize_secs_to_millis<'de, D>(deserializer: D) -> std::result::Result<u32, D::Error>
@@ -176,6 +180,7 @@ impl std::default::Default for Agent {
kernel_modules: Default::default(),
container_pipe_size: 0,
mem_agent: MemAgent::default(),
policy: Default::default(),
}
}
}

View File

@@ -128,6 +128,12 @@ pub struct Runtime {
#[serde(default)]
pub disable_guest_seccomp: bool,
/// If enabled, the runtime will not create Kubernetes emptyDir mounts on the guest filesystem.
/// Instead, emptyDir mounts will be created on the host and shared via virtio-fs.
/// This is potentially slower, but allows sharing of files from host to guest.
#[serde(default)]
pub disable_guest_empty_dir: bool,
/// Determines how VFIO devices should be presented to the container.
///
/// Options:

View File

@@ -23,8 +23,9 @@ pub const KATA_SHAREDFS_GUEST_PREMOUNT_TAG: &str = "kataShared";
/// KATA_EPHEMERAL_VOLUME_TYPE creates a tmpfs backed volume for sharing files between containers.
pub const KATA_EPHEMERAL_VOLUME_TYPE: &str = "ephemeral";
/// KATA_HOST_DIR_TYPE use for host empty dir
pub const KATA_HOST_DIR_VOLUME_TYPE: &str = "kata:hostdir";
/// KATA_K8S_LOCAL_STORAGE_TYPE is used for k8s empty dir (a disk-backed volume),
/// to create a local directory inside the VM for sharing files between containers.
pub const KATA_K8S_LOCAL_STORAGE_TYPE: &str = "local";
/// KATA_MOUNT_INFO_FILE_NAME is used for the file that holds direct-volume mount info
pub const KATA_MOUNT_INFO_FILE_NAME: &str = "mountInfo.json";
@@ -521,11 +522,6 @@ pub fn is_kata_ephemeral_volume(ty: &str) -> bool {
ty == KATA_EPHEMERAL_VOLUME_TYPE
}
/// Checks whether a mount type is a marker for a Kata hostdir volume.
pub fn is_kata_host_dir_volume(ty: &str) -> bool {
ty == KATA_HOST_DIR_VOLUME_TYPE
}
/// Splits a sandbox bindmount string into its real path and mode.
///
/// The `bindmount` format is typically `/path/to/dir` or `/path/to/dir:ro[:rw]`.

View File

@@ -5,7 +5,7 @@
syntax = "proto3";
package remote;
package hypervisor;
service Hypervisor {
rpc CreateVM(CreateVMRequest) returns (CreateVMResponse) {}

View File

@@ -20,5 +20,7 @@ pub const IP_TABLE_URL: &str = "/iptables";
pub const IP6_TABLE_URL: &str = "/ip6tables";
/// URL for querying metrics inside shim
pub const METRICS_URL: &str = "/metrics";
/// URL for setting agent policy
pub const AGENT_POLICY_URL: &str = "/policy";
pub const ERR_NO_SHIM_SERVER: &str = "Failed to create shim management server";

View File

@@ -38,6 +38,12 @@ version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f26201604c87b1e01bd3d98f8d5d9a8fcbb815e8cedb41ffccbeb4bf593a35fe"
[[package]]
name = "adler2"
version = "2.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "320119579fcad9c21884f5c4861d16174d0e06250625266f50fe6898340abefa"
[[package]]
name = "agent"
version = "0.1.0"
@@ -346,7 +352,7 @@ dependencies = [
"cc",
"cfg-if 1.0.0",
"libc",
"miniz_oxide",
"miniz_oxide 0.7.1",
"object",
"rustc-demangle",
]
@@ -1436,13 +1442,13 @@ checksum = "37ab347416e802de484e4d03c7316c48f1ecb56574dfd4a46a80f173ce1de04d"
[[package]]
name = "flate2"
version = "1.0.26"
version = "1.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3b9429470923de8e8cbd4d2dc513535400b4b3fef0319fb5c4e1f520a7bef743"
checksum = "bfe33edd8e85a12a67454e37f8c75e730830d83e313556ab9ebf9ee7fbeb3bfb"
dependencies = [
"crc32fast",
"libz-sys",
"miniz_oxide",
"miniz_oxide 0.8.9",
]
[[package]]
@@ -2513,6 +2519,16 @@ dependencies = [
"adler",
]
[[package]]
name = "miniz_oxide"
version = "0.8.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1fa76a2c86f704bdb222d66965fb3d63269ce38518b83cb0575fca855ebb6316"
dependencies = [
"adler2",
"simd-adler32",
]
[[package]]
name = "mio"
version = "0.8.11"
@@ -4466,6 +4482,12 @@ dependencies = [
"libc",
]
[[package]]
name = "simd-adler32"
version = "0.3.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d66dc143e6b11c1eddc06d5c423cfc97062865baf299914ab64caa38182078fe"
[[package]]
name = "simdutf8"
version = "0.1.5"
@@ -5268,6 +5290,9 @@ name = "uuid"
version = "1.16.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "458f7a779bf54acc9f347480ac654f68407d3aab21269a6e3c9f922acd9e2da9"
dependencies = [
"getrandom 0.3.2",
]
[[package]]
name = "valuable"
@@ -5362,6 +5387,7 @@ dependencies = [
"toml 0.4.10",
"tracing",
"url",
"uuid 1.16.0",
]
[[package]]

View File

@@ -77,7 +77,9 @@ CLHBINDIR := $(PREFIXDEPS)/bin
QEMUBINDIR := $(PREFIXDEPS)/bin
PROJECT_DIR = $(PROJECT_TAG)
IMAGENAME = $(PROJECT_TAG).img
IMAGECONFIDENTIALNAME = $(PROJECT_TAG)-confidential.img
INITRDNAME = $(PROJECT_TAG)-initrd.img
INITRDCONFIDENTIALNAME = $(PROJECT_TAG)-initrd-confidential.img
TARGET = $(PROJECT_COMPONENT)
SYSCONFDIR := /etc
LOCALSTATEDIR := /var
@@ -112,7 +114,9 @@ PKGDATADIR := $(PREFIXDEPS)/share/$(PROJECT_DIR)
PKGRUNDIR := $(LOCALSTATEDIR)/run/$(PROJECT_DIR)
KERNELDIR := $(PKGDATADIR)
IMAGEPATH := $(PKGDATADIR)/$(IMAGENAME)
IMAGECONFIDENTIALPATH := $(PKGDATADIR)/$(IMAGECONFIDENTIALNAME)
INITRDPATH := $(PKGDATADIR)/$(INITRDNAME)
INITRDCONFIDENTIALPATH := $(PKGDATADIR)/$(INITRDCONFIDENTIALNAME)
ROOTFSTYPE_EXT4 := \"ext4\"
ROOTFSTYPE_XFS := \"xfs\"
@@ -186,6 +190,8 @@ QEMUTDXQUOTEGENERATIONSERVICESOCKETPORT := 4050
# Create Container Timeout in seconds
DEFCREATECONTAINERTIMEOUT ?= 30
DEFCREATECONTAINERTIMEOUT_COCO ?= 60
DEFSTATICRESOURCEMGMT_COCO = true
SED = sed
CLI_DIR = cmd
@@ -298,6 +304,18 @@ ifneq (,$(QEMUCMD))
CONFIGS += $(CONFIG_QEMU_SE)
CONFIG_FILE_QEMU_COCO_DEV = configuration-qemu-coco-dev-runtime-rs.toml
CONFIG_QEMU_COCO_DEV = config/$(CONFIG_FILE_QEMU_COCO_DEV)
CONFIG_QEMU_COCO_DEV_IN = $(CONFIG_QEMU_COCO_DEV).in
CONFIG_PATH_QEMU_COCO_DEV = $(abspath $(CONFDIR)/$(CONFIG_FILE_QEMU_COCO_DEV))
CONFIG_PATHS += $(CONFIG_PATH_QEMU_COCO_DEV)
SYSCONFIG_QEMU_COCO_DEV = $(abspath $(SYSCONFDIR)/$(CONFIG_FILE_QEMU_COCO_DEV))
SYSCONFIG_PATHS += $(SYSCONFIG_QEMU_COCO_DEV)
CONFIGS += $(CONFIG_QEMU_COCO_DEV)
KERNELTYPE_QEMU = uncompressed
KERNEL_NAME_QEMU = $(call MAKE_KERNEL_NAME,$(KERNELTYPE_QEMU))
KERNELPATH_QEMU = $(KERNELDIR)/$(KERNEL_NAME_QEMU)
@@ -305,6 +323,10 @@ ifneq (,$(QEMUCMD))
KERNEL_NAME_QEMU_SE = kata-containers-se.img
KERNELPATH_QEMU_SE = $(KERNELDIR)/$(KERNEL_NAME_QEMU_SE)
KERNEL_TYPE_COCO = compressed
KERNEL_NAME_COCO = $(call MAKE_KERNEL_NAME_COCO,$(KERNEL_TYPE_COCO))
KERNELPATH_COCO = $(KERNELDIR)/$(KERNEL_NAME_COCO)
# overriding options
DEFSTATICRESOURCEMGMT_QEMU := false
@@ -321,6 +343,7 @@ endif
DEFMAXVCPUS_QEMU := 0
DEFSHAREDFS_QEMU_VIRTIOFS := virtio-fs
DEFSHAREDFS_QEMU_SEL_VIRTIOFS := none
DEFSHAREDFS_QEMU_COCO_DEV_VIRTIOFS := none
DEFBLOCKDEVICEAIO_QEMU := io_uring
DEFNETWORKMODEL_QEMU := tcfilter
DEFDISABLEGUESTSELINUX := true
@@ -387,6 +410,7 @@ USER_VARS += CONFIG_PATH
USER_VARS += CONFIG_QEMU_IN
USER_VARS += CONFIG_QEMU_SE_IN
USER_VARS += CONFIG_REMOTE_IN
USER_VARS += CONFIG_QEMU_COCO_DEV_IN
USER_VARS += DESTDIR
USER_VARS += HYPERVISOR
USER_VARS += USE_BUILDIN_DB
@@ -412,8 +436,13 @@ USER_VARS += FCVALIDJAILERPATHS
USER_VARS += DEFMAXMEMSZ_FC
USER_VARS += SYSCONFIG
USER_VARS += IMAGENAME
USER_VARS += IMAGECONFIDENTIALNAME
USER_VARS += IMAGEPATH
USER_VARS += IMAGECONFIDENTIALPATH
USER_VARS += INITRDNAME
USER_VARS += INITRDCONFIDENTIALNAME
USER_VARS += INITRDPATH
USER_VARS += INITRDCONFIDENTIALPATH
USER_VARS += DEFROOTFSTYPE
USER_VARS += VMROOTFSDRIVER_DB
USER_VARS += VMROOTFSDRIVER_CLH
@@ -425,6 +454,7 @@ USER_VARS += KERNELPATH_DB
USER_VARS += KERNELPATH_QEMU
USER_VARS += KERNELPATH_QEMU_SE
USER_VARS += KERNELPATH_FC
USER_VARS += KERNELPATH_COCO
USER_VARS += KERNELPATH
USER_VARS += KERNELVIRTIOFSPATH
USER_VARS += FIRMWAREPATH
@@ -476,6 +506,7 @@ USER_VARS += DEFBLOCKSTORAGEDRIVER_FC
USER_VARS += DEFSHAREDFS_CLH_VIRTIOFS
USER_VARS += DEFSHAREDFS_QEMU_VIRTIOFS
USER_VARS += DEFSHAREDFS_QEMU_SEL_VIRTIOFS
USER_VARS += DEFSHAREDFS_QEMU_COCO_DEV_VIRTIOFS
USER_VARS += DEFVIRTIOFSDAEMON
USER_VARS += DEFVALIDVIRTIOFSDAEMONPATHS
USER_VARS += DEFVIRTIOFSCACHESIZE
@@ -504,6 +535,7 @@ USER_VARS += DEFSTATICRESOURCEMGMT_DB
USER_VARS += DEFSTATICRESOURCEMGMT_FC
USER_VARS += DEFSTATICRESOURCEMGMT_CLH
USER_VARS += DEFSTATICRESOURCEMGMT_QEMU
USER_VARS += DEFSTATICRESOURCEMGMT_COCO
USER_VARS += DEFBINDMOUNTS
USER_VARS += DEFVFIOMODE
USER_VARS += DEFVFIOMODE_SE
@@ -522,6 +554,7 @@ USER_VARS += DEFDANCONF
USER_VARS += DEFFORCEGUESTPULL
USER_VARS += QEMUTDXQUOTEGENERATIONSERVICESOCKETPORT
USER_VARS += DEFCREATECONTAINERTIMEOUT
USER_VARS += DEFCREATECONTAINERTIMEOUT_COCO
SOURCES := \
$(shell find . 2>&1 | grep -E '.*\.rs$$') \
@@ -610,6 +643,10 @@ define MAKE_KERNEL_NAME
$(if $(findstring uncompressed,$1),vmlinux.container,vmlinuz.container)
endef
define MAKE_KERNEL_NAME_COCO
$(if $(findstring uncompressed,$1),vmlinux-confidential.container,vmlinuz-confidential.container)
endef
.DEFAULT_GOAL := default
GENERATED_FILES += $(CONFIGS)

View File

@@ -0,0 +1,851 @@
# Copyright (c) 2017-2019 Intel Corporation
# Copyright (c) 2021 Adobe Inc.
# Copyright (c) 2024-2025 IBM Corp.
#
# SPDX-License-Identifier: Apache-2.0
#
# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "@CONFIG_QEMU_IN@"
# XXX: Project:
# XXX: Name: @PROJECT_NAME@
# XXX: Type: @PROJECT_TYPE@
[hypervisor.qemu]
path = "@QEMUPATH@"
kernel = "@KERNELPATH_COCO@"
image = "@IMAGECONFIDENTIALPATH@"
# initrd = "@INITRDCONFIDENTIALPATH@"
machine_type = "@MACHINETYPE@"
# rootfs filesystem type:
# - ext4 (default)
# - xfs
# - erofs
rootfs_type=@DEFROOTFSTYPE@
# Block storage driver to be used for the VM rootfs is backed
# by a block device. This is virtio-blk-pci, virtio-blk-mmio or nvdimm
vm_rootfs_driver = "@VMROOTFSDRIVER_QEMU@"
# Enable confidential guest support.
# Toggling that setting may trigger different hardware features, ranging
# from memory encryption to both memory and CPU-state encryption and integrity.
# The Kata Containers runtime dynamically detects the available feature set and
# aims at enabling the largest possible one, returning an error if none is
# available, or none is supported by the hypervisor.
#
# Known limitations:
# * Does not work by design:
# - CPU Hotplug
# - Memory Hotplug
# - NVDIMM devices
#
# Default false
# confidential_guest = true
# Choose AMD SEV-SNP confidential guests
# In case of using confidential guests on AMD hardware that supports both SEV
# and SEV-SNP, the following enables SEV-SNP guests. SEV guests are default.
# Default false
# sev_snp_guest = true
# Enable running QEMU VMM as a non-root user.
# By default the QEMU VMM runs as root. When this is set to true, the QEMU VMM process runs as
# a non-root random user. See documentation for the limitations of this mode.
# rootless = true
# List of valid annotation names for the hypervisor
# Each member of the list is a regular expression, which is the base name
# of the annotation, e.g. "path" for io.katacontainers.config.hypervisor.path"
enable_annotations = @DEFENABLEANNOTATIONS_COCO@
# List of valid annotations values for the hypervisor
# Each member of the list is a path pattern as described by glob(3).
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: @QEMUVALIDHYPERVISORPATHS@
valid_hypervisor_paths = @QEMUVALIDHYPERVISORPATHS@
# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = "@KERNELPARAMS@"
# Path to the firmware.
# If you want qemu to use the default firmware, leave this option empty
firmware = "@FIRMWAREPATH@"
# Path to the firmware volume.
# firmware TDVF or OVMF can be split into FIRMWARE_VARS.fd (UEFI variables
# as configuration) and FIRMWARE_CODE.fd (UEFI program image). UEFI variables
# can be customized per each user while UEFI code is kept same.
firmware_volume = "@FIRMWAREVOLUMEPATH@"
# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators="@MACHINEACCELERATORS@"
# Qemu seccomp sandbox feature
# comma-separated list of seccomp sandbox features to control the syscall access.
# For example, `seccompsandbox= "on,obsolete=deny,spawn=deny,resourcecontrol=deny"`
# Note: "elevateprivileges=deny" doesn't work with daemonize option, so it's removed from the seccomp sandbox
# Another note: enabling this feature may reduce performance, you may enable
# /proc/sys/net/core/bpf_jit_enable to reduce the impact. see https://man7.org/linux/man-pages/man8/bpfc.8.html
#seccompsandbox="@DEFSECCOMPSANDBOXPARAM@"
# CPU features
# comma-separated list of cpu features to pass to the cpu
# For example, `cpu_features = "pmu=off,vmx=off"
cpu_features="@CPUFEATURES@"
# Default number of vCPUs per SB/VM:
# unspecified or 0 --> will be set to @DEFVCPUS@
# < 0 --> will be set to the actual number of physical cores
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores --> will be set to the actual number of physical cores
default_vcpus = @DEFVCPUS_QEMU@
# Default maximum number of vCPUs per SB/VM:
# unspecified or == 0 --> will be set to the actual number of physical cores or to the maximum number
# of vCPUs supported by KVM if that number is exceeded
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores --> will be set to the actual number of physical cores or to the maximum number
# of vCPUs supported by KVM if that number is exceeded
# WARNING: Depending on the architecture, the maximum number of vCPUs supported by KVM is used when
# the actual number of physical cores is greater than it.
# WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU
# hotplug functionality. For example, `default_maxvcpus = 240` specifies that up to 240 vCPUs
# can be added to a SB/VM, but the memory footprint will be big. Another example, with
# `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of
# vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable,
# unless you know what you are doing.
# NOTICE: on arm platform with gicv2 interrupt controller, set it to 8.
default_maxvcpus = @DEFMAXVCPUS_QEMU@
# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only pci bridges are supported
# * Up to 30 devices per bridge can be hot plugged.
# * Up to 5 PCI bridges can be cold plugged per VM.
# This limitation could be a bug in qemu or in the kernel
# Default number of bridges per SB/VM:
# unspecified or 0 --> will be set to @DEFBRIDGES@
# > 1 <= 5 --> will be set to the specified number
# > 5 --> will be set to 5
default_bridges = @DEFBRIDGES@
# Reclaim guest freed memory.
# Enabling this will result in the VM balloon device having f_reporting=on set.
# Then the hypervisor will use it to reclaim guest freed memory.
# This is useful for reducing the amount of memory used by a VM.
# Enabling this feature may sometimes reduce the speed of memory access in
# the VM.
#
# Default false
#reclaim_guest_freed_memory = true
# Default memory size in MiB for SB/VM.
# If unspecified then it will be set @DEFMEMSZ@ MiB.
default_memory = @DEFMEMSZ@
#
# Default memory slots per SB/VM.
# If unspecified then it will be set @DEFMEMSLOTS@.
# This determines how many times memory can be hot-added to the sandbox/VM.
#memory_slots = @DEFMEMSLOTS@
# Default maximum memory in MiB per SB / VM
# unspecified or == 0 --> will be set to the actual amount of physical RAM
# > 0 <= amount of physical RAM --> will be set to the specified number
# > amount of physical RAM --> will be set to the actual amount of physical RAM
default_maxmemory = @DEFMAXMEMSZ@
# This size in MiB will be added to the hypervisor's maximum memory.
# It is the memory address space for the NVDIMM device.
# If the block storage driver (block_device_driver) is set to "nvdimm",
# memory_offset should be set to the size of the block device.
# Default 0
#memory_offset = 0
# Specifies whether virtio-mem will be enabled or not.
# Please note that this option should be used with the command
# "echo 1 > /proc/sys/vm/overcommit_memory".
# Default false
#enable_virtio_mem = true
# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor;
# virtio-fs is used instead to pass the rootfs.
disable_block_device_use = @DEFDISABLEBLOCK@
# Shared file system type:
# - virtio-fs (default)
# - virtio-9p
# - virtio-fs-nydus
# - none
shared_fs = "@DEFSHAREDFS_QEMU_COCO_DEV_VIRTIOFS@"
# Path to vhost-user-fs daemon.
virtio_fs_daemon = "@DEFVIRTIOFSDAEMON@"
# List of valid annotations values for the virtiofs daemon
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: @DEFVALIDVIRTIOFSDAEMONPATHS@
valid_virtio_fs_daemon_paths = @DEFVALIDVIRTIOFSDAEMONPATHS@
# Default size of DAX cache in MiB
virtio_fs_cache_size = @DEFVIRTIOFSCACHESIZE@
# Default size of virtqueues
virtio_fs_queue_size = @DEFVIRTIOFSQUEUESIZE@
# Extra args for virtiofsd daemon
#
# Format example:
# ["--arg1=xxx", "--arg2=yyy"]
# Examples:
# Set virtiofsd log level to debug : ["--log-level=debug"]
#
# see `virtiofsd -h` for possible options.
virtio_fs_extra_args = @DEFVIRTIOFSEXTRAARGS@
# Cache mode:
#
# - never
# Metadata, data, and pathname lookup are not cached in guest. They are
# always fetched from host and any changes are immediately pushed to host.
#
# - auto
# Metadata and pathname lookup cache expires after a configured amount of
# time (default is 1 second). Data is cached while the file is open (close
# to open consistency).
#
# - always
# Metadata, data, and pathname lookup are cached in guest and never expire.
virtio_fs_cache = "@DEFVIRTIOFSCACHE@"
# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is virtio-scsi, virtio-blk
# or nvdimm.
block_device_driver = "@DEFBLOCKSTORAGEDRIVER_QEMU@"
# aio is the I/O mechanism used by qemu
# Options:
#
# - threads
# Pthread based disk I/O.
#
# - native
# Native Linux I/O.
#
# - io_uring
# Linux io_uring API. This provides the fastest I/O operations on Linux, requires kernel>5.1 and
# qemu >=5.0.
block_device_aio = "@DEFBLOCKDEVICEAIO_QEMU@"
# Specifies whether cache-related options will be set on block devices.
# Default false
#block_device_cache_set = true
# Specifies cache-related options for block devices.
# Denotes whether use of O_DIRECT (bypass the host page cache) is enabled.
# Default false
#block_device_cache_direct = true
# Specifies cache-related options for block devices.
# Denotes whether flush requests for the device are ignored.
# Default false
#block_device_cache_noflush = true
# Enable iothreads (data-plane) to be used. This causes IO to be
# handled in a separate IO thread. This is currently only implemented
# for SCSI.
#
enable_iothreads = @DEFENABLEIOTHREADS@
# Enable pre allocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true
# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically
# result in memory pre allocation
#enable_hugepages = true
# Enable vhost-user storage device, default false
# Enabling this will result in some Linux reserved block type
# major range 240-254 being chosen to represent vhost-user devices.
enable_vhost_user_store = @DEFENABLEVHOSTUSERSTORE@
# The base directory specifically used for vhost-user devices.
# Its sub-path "block" is used for block devices; "block/sockets" is
# where we expect vhost-user sockets to live; "block/devices" is where
# simulated block device nodes for vhost-user devices live.
vhost_user_store_path = "@DEFVHOSTUSERSTOREPATH@"
# Enable vIOMMU, default false
# Enabling this will result in the VM having a vIOMMU device
# This will also add the following options to the kernel's
# command line: intel_iommu=on,iommu=pt
#enable_iommu = true
# Enable IOMMU_PLATFORM, default false
# Enabling this will result in the VM device having iommu_platform=on set
#enable_iommu_platform = true
# List of valid annotations values for the vhost user store path
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: @DEFVALIDVHOSTUSERSTOREPATHS@
valid_vhost_user_store_paths = @DEFVALIDVHOSTUSERSTOREPATHS@
# The timeout for reconnecting on non-server spdk sockets when the remote end goes away.
# qemu will delay this many seconds and then attempt to reconnect.
# Zero disables reconnecting, and the default is zero.
vhost_user_reconnect_timeout_sec = 0
# Enable file based guest memory support. The default is an empty string which
# will disable this feature. In the case of virtio-fs, this is enabled
# automatically and '/dev/shm' is used as the backing folder.
# This option will be ignored if VM templating is enabled.
#file_mem_backend = "@DEFFILEMEMBACKEND@"
# List of valid annotations values for the file_mem_backend annotation
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: @DEFVALIDFILEMEMBACKENDS@
valid_file_mem_backends = @DEFVALIDFILEMEMBACKENDS@
# -pflash can add image files to the VM. The arguments should be in the format
# of ["/path/to/flash0.img", "/path/to/flash1.img"]
pflashes = []
# This option changes the default hypervisor and kernel parameters
# to enable debug output where available.
#
# Default false
#enable_debug = true
# This option allows adding an extra HMP or QMP socket when `enable_debug = true`
#
# WARNING: Anyone with access to the extra socket can take full control of
# Qemu. This is for debugging purpose only and must *NEVER* be used in
# production.
#
# Valid values are :
# - "hmp"
# - "qmp"
# - "qmp-pretty" (same as "qmp" with pretty json formatting)
#
# If set to the empty string "", no extra monitor socket is added. This is
# the default.
#extra_monitor_socket = "hmp"
# Disable the customizations done in the runtime when it detects
# that it is running on top of a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
#
#disable_nesting_checks = true
# This is the msize used for 9p shares. It is the number of bytes
# used for 9p packet payload.
#msize_9p = @DEFMSIZE9P@
# If false and nvdimm is supported, use nvdimm device to plug guest image.
# Otherwise virtio-block device is used.
#
# nvdimm is not supported when `confidential_guest = true`.
#
# Default is false
#disable_image_nvdimm = true
# VFIO devices are hotplugged on a bridge by default.
# Enable hotplugging on root bus. This may be required for devices with
# a large PCI bar, as this is a current limitation with hotplugging on
# a bridge.
# Default false
#hotplug_vfio_on_root_bus = true
# Enable hot-plugging of VFIO devices to a bridge-port,
# root-port or switch-port.
# The default setting is "no-port"
#hot_plug_vfio = "root-port"
# In a confidential compute environment hot-plugging can compromise
# security.
# Enable cold-plugging of VFIO devices to a bridge-port,
# root-port or switch-port.
# The default setting is "no-port", which means disabled.
#cold_plug_vfio = "root-port"
# Before hot plugging a PCIe device, you need to add a pcie_root_port device.
# Use this parameter when using some large PCI bar devices, such as Nvidia GPU
# The value means the number of pcie_root_port
# This value is valid when hotplug_vfio_on_root_bus is true and machine_type is "q35"
# Default 0
#pcie_root_port = 2
# Before hot plugging a PCIe device onto a switch port, you need to add a pcie_switch_port device first.
# Use this parameter when using some large PCI bar devices, such as Nvidia GPU
# The value determines how many pcie_switch_port devices will be created.
# This value is valid when hotplug_vfio_on_root_bus is true, and machine_type is "q35"
# Default 0
#pcie_switch_port = 2
# If vhost-net backend for virtio-net is not desired, set to true. Default is false, which trades off
# security (vhost-net runs in ring0) for network I/O performance.
#disable_vhost_net = true
#
# Default entropy source.
# The path to a host source of entropy (including a real hardware RNG)
# /dev/urandom and /dev/random are two main options.
# Be aware that /dev/random is a blocking source of entropy. If the host
# runs out of entropy, the VM's boot time will increase, potentially leading to startup
# timeouts.
# The source of entropy /dev/urandom is non-blocking and provides a
# generally acceptable source of entropy. It should work well for pretty much
# all practical purposes.
#entropy_source= "@DEFENTROPYSOURCE@"
# List of valid annotations values for entropy_source
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: @DEFVALIDENTROPYSOURCES@
valid_entropy_sources = @DEFVALIDENTROPYSOURCES@
# Path to OCI hook binaries in the *guest rootfs*.
# This does not affect host-side hooks which must instead be added to
# the OCI spec passed to the runtime.
#
# You can create a rootfs with hooks by customizing the osbuilder scripts:
# https://github.com/kata-containers/kata-containers/tree/main/tools/osbuilder
#
# Hooks must be stored in a subdirectory of guest_hook_path according to their
# hook type, i.e. "guest_hook_path/{prestart,poststart,poststop}".
# The agent will scan these directories for executable files and add them, in
# lexicographical order, to the lifecycle of the guest container.
# Hooks are executed in the runtime namespace of the guest. See the official documentation:
# https://github.com/opencontainers/runtime-spec/blob/v1.0.1/config.md#posix-platform-hooks
# Warnings will be logged if any error is encountered while scanning for hooks,
# but it will not abort container execution.
#guest_hook_path = "/usr/share/oci/hooks"
# Enable connection to Quote Generation Service (QGS)
# The "tdx_quote_generation_service_socket_port" parameter configures how QEMU connects to the TDX Quote Generation Service (QGS).
# This connection is essential for Trusted Domain (TD) attestation, as QGS signs the TDREPORT sent by QEMU via the GetQuote hypercall.
# By default QGS runs on vsock port 4050, but can be modified by the host admin. For QEMU's tdx-guest object, this connection needs to
# be specified in a JSON format, for example:
# -object '{"qom-type":"tdx-guest","id":"tdx","quote-generation-socket":{"type":"vsock","cid":"2","port":"4050"}}'
# It's important to note that setting "tdx_quote_generation_service_socket_port" to 0 enables communication via Unix Domain Sockets (UDS).
# To activate UDS, the QGS service itself must be launched with the "-port=0" parameter and the UDS will always be located at /var/run/tdx-qgs/qgs.socket.
# -object '{"qom-type":"tdx-guest","id":"tdx","quote-generation-socket":{"type":"unix","path":"/var/run/tdx-qgs/qgs.socket"}}'
# tdx_quote_generation_service_socket_port = @QEMUTDXQUOTEGENERATIONSERVICESOCKETPORT@
#
# Use rx Rate Limiter to control network I/O inbound bandwidth(size in bits/sec for SB/VM).
# In Qemu, we use classful qdiscs HTB(Hierarchy Token Bucket) to discipline traffic.
# Default 0-sized value means unlimited rate.
#rx_rate_limiter_max_rate = 0
# Use tx Rate Limiter to control network I/O outbound bandwidth(size in bits/sec for SB/VM).
# In Qemu, we use classful qdiscs HTB(Hierarchy Token Bucket) and ifb(Intermediate Functional Block)
# to discipline traffic.
# Default 0-sized value means unlimited rate.
#tx_rate_limiter_max_rate = 0
# Set where to save the guest memory dump file.
# If set, when a GUEST_PANICKED event occurs,
# guest memory will be dumped to the host filesystem under guest_memory_dump_path.
# This directory will be created automatically if it does not exist.
#
# The dumped file(also called vmcore) can be processed with crash or gdb.
#
# WARNING:
# Dumping guest memory can take a long time depending on the amount of guest memory
# and can use a lot of disk space.
#guest_memory_dump_path="/var/crash/kata"
# Whether to enable paging.
# Basically, if you want to use "gdb" rather than "crash",
# or need the guest-virtual addresses in the ELF vmcore,
# then you should enable paging.
#
# See: https://www.qemu.org/docs/master/qemu-qmp-ref.html#Dump-guest-memory for details
#guest_memory_dump_paging=false
# Use legacy serial for the guest console if available and implemented for the architecture. Default false
#use_legacy_serial = true
# disable applying SELinux on the VMM process (default false)
disable_selinux=@DEFDISABLESELINUX@
# disable applying SELinux on the container process
# If set to false, the type `container_t` is applied to the container process by default.
# Note: To enable guest SELinux, the guest rootfs must be CentOS that is created and built
# with `SELINUX=yes`.
# (default: true)
disable_guest_selinux=@DEFDISABLEGUESTSELINUX@
[factory]
# VM templating support. Once enabled, new VMs are created from template
# using vm cloning. They will share the same initial kernel, initramfs and
# agent memory by mapping it readonly. It helps speed up new container
# creation and saves a lot of memory if there are many kata containers running
# on the same host.
#
# When disabled, new VMs are created from scratch.
#
# Note: Requires "initrd=" to be set ("image=" is not supported).
#
# Default false
#enable_template = true
# Specifies the path of template.
#
# Default "/run/vc/vm/template"
#template_path = "/run/vc/vm/template"
# The number of caches of VMCache:
# unspecified or == 0 --> VMCache is disabled
# > 0 --> will be set to the specified number
#
# VMCache is a function that creates VMs as caches before using it.
# It helps speed up new container creation.
# The function consists of a server and some clients communicating
# through Unix socket. The protocol is gRPC in protocols/cache/cache.proto.
# The VMCache server will create some VMs and cache them by factory cache.
# It will convert the VM to gRPC format and transport it when it gets
# a request from a client.
# Factory grpccache is the VMCache client. It will request gRPC format
# VM and convert it back to a VM. If VMCache function is enabled,
# kata-runtime will request VM from factory grpccache when it creates
# a new sandbox.
#
# Default 0
#vm_cache_number = 0
# Specify the address of the Unix socket that is used by VMCache.
#
# Default /var/run/kata-containers/cache.sock
#vm_cache_endpoint = "/var/run/kata-containers/cache.sock"
[agent.@PROJECT_TYPE@]
# If enabled, make the agent display debug-level messages.
# (default: disabled)
#enable_debug = true
# Enable agent tracing.
#
# If enabled, the agent will generate OpenTelemetry trace spans.
#
# Notes:
#
# - If the runtime also has tracing enabled, the agent spans will be
# associated with the appropriate runtime parent span.
# - If enabled, the runtime will wait for the container to shutdown,
# increasing the container shutdown time slightly.
#
# (default: disabled)
#enable_tracing = true
# Comma separated list of kernel modules and their parameters.
# These modules will be loaded in the guest kernel using modprobe(8).
# The following example can be used to load two kernel modules with parameters
# - kernel_modules=["e1000e InterruptThrottleRate=3000,3000,3000 EEE=1", "i915 enable_ppgtt=0"]
# The first word is considered as the module name and the rest as its parameters.
# The container will not be started when:
# * A kernel module is specified and the modprobe command is not installed in the guest
# or it fails loading the module.
# * The module is not available in the guest or it doesn't meet the guest kernel
# requirements, like architecture and version.
#
kernel_modules=[]
# Enable debug console.
# If enabled, user can connect guest OS running inside hypervisor
# through "kata-runtime exec <sandbox-id>" command
#debug_console_enabled = true
# Agent dial timeout in millisecond.
# (default: 10)
#dial_timeout_ms = 10
# Agent reconnect timeout in millisecond.
# Retry times = reconnect_timeout_ms / dial_timeout_ms (default: 300)
# If you find the pod cannot connect to the agent when starting, please
# consider increasing this value to increase the number of retries.
# Avoid changing the value of dial_timeout_ms unless you know what
# you are doing.
# (default: 3000)
#reconnect_timeout_ms = 3000
# Create Container Request Timeout
# This timeout value is used to set the maximum duration for the agent to process a CreateContainerRequest.
# It's also used to ensure that workloads, especially those involving large image pulls within the guest,
# have sufficient time to become ready.
#
# Effective Timeout Determination:
# The effective timeout for a CreateContainerRequest is determined by taking the minimum of the following two values:
# - create_container_timeout: The timeout value configured for creating containers (default: 30 seconds).
# - runtime-request-timeout: The timeout value specified in the Kubelet configuration described as the link below:
# (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout)
# Defaults to @DEFCREATECONTAINERTIMEOUT@ second(s)
# create_container_timeout = @DEFCREATECONTAINERTIMEOUT@
[agent.@PROJECT_TYPE@.mem_agent]
# Control the mem-agent function enable or disable.
# Default to false
#mem_agent_enable = true
# Control the mem-agent memcg function disable or enable
# Default to false
#memcg_disable = false
# Control the mem-agent function swap enable or disable.
# Default to false
#memcg_swap = false
# Control the mem-agent function swappiness max number.
# Default to 50
#memcg_swappiness_max = 50
# Control the mem-agent memcg function wait period seconds
# Default to 600
#memcg_period_secs = 600
# Control the mem-agent memcg wait period PSI percent limit.
# If the percentage of memory and IO PSI stall time within
# the memcg waiting period for a cgroup exceeds this value,
# then the aging and eviction for this cgroup will not be
# executed after this waiting period.
# Default to 1
#memcg_period_psi_percent_limit = 1
# Control the mem-agent memcg eviction PSI percent limit.
# If the percentage of memory and IO PSI stall time for a cgroup
# exceeds this value during an eviction cycle, the eviction for
# this cgroup will immediately stop and will not resume until
# the next memcg waiting period.
# Default to 1
#memcg_eviction_psi_percent_limit = 1
# Control the mem-agent memcg eviction run aging count min.
# A cgroup will only perform eviction when the number of aging cycles
# in memcg is greater than or equal to memcg_eviction_run_aging_count_min.
# Default to 3
#memcg_eviction_run_aging_count_min = 3
# Control the mem-agent compact function disable or enable
# Default to false
#compact_disable = false
# Control the mem-agent compaction function wait period seconds
# Default to 600
#compact_period_secs = 600
# Control the mem-agent compaction function wait period PSI percent limit.
# If the percentage of memory and IO PSI stall time within
# the compaction waiting period exceeds this value,
# then the compaction will not be executed after this waiting period.
# Default to 1
#compact_period_psi_percent_limit = 1
# Control the mem-agent compaction function compact PSI percent limit.
# During compaction, the percentage of memory and IO PSI stall time
# is checked every second. If this percentage exceeds
# compact_psi_percent_limit, the compaction process will stop.
# Default to 5
#compact_psi_percent_limit = 5
# Control the maximum number of seconds for each compaction of mem-agent compact function.
# Default to 180
#compact_sec_max = 180
# Control the mem-agent compaction function compact order.
# compact_order is used with compact_threshold.
# Default to 9
#compact_order = 9
# Control the mem-agent compaction function compact threshold.
# compact_threshold is a number of pages.
# When examining /proc/pagetypeinfo, if the number of movable pages of
# orders smaller than compact_order has increased by more than
# 'compact_threshold' pages since the previous compaction,
# or the number of free pages has decreased by 'compact_threshold'
# since the previous compaction,
# then the system should initiate another round of memory compaction.
# Default to 1024
#compact_threshold = 1024
# Control the mem-agent compaction function force compact times.
# After one compaction, if there has not been a compaction within
# the next compact_force_times times, a compaction will be forced
# regardless of the system's memory situation.
# If compact_force_times is set to 0, a compaction will be forced each time.
# If compact_force_times is set to 18446744073709551615, a compaction will never be forced.
# Default to 18446744073709551615
#compact_force_times = 18446744073709551615
# Create Container Request Timeout
# This timeout value is used to set the maximum duration for the agent to process a CreateContainerRequest.
# It's also used to ensure that workloads, especially those involving large image pulls within the guest,
# have sufficient time to complete.
#
# Effective Timeout Determination:
# The effective timeout for a CreateContainerRequest is determined by taking the minimum of the following two values:
# - create_container_timeout: The timeout value configured for creating containers (default: @DEFCREATECONTAINERTIMEOUT_COCO@ seconds).
# - runtime-request-timeout: The timeout value specified in the Kubelet configuration described as the link below:
# (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=runtime%2Drequest%2Dtimeout)
# Defaults to @DEFCREATECONTAINERTIMEOUT_COCO@ second(s)
# create_container_timeout = @DEFCREATECONTAINERTIMEOUT_COCO@
[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
#enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to the
# the container network interface
# Options:
#
# - macvtap
# Used when the Container network interface can be bridged using
# macvtap.
#
# - none
# Used for customized networking. Only creates a tap device. No veth pair.
#
# - tcfilter
# Uses tc filter rules to redirect traffic from the network interface
# provided by plugin to a tap interface connected to the VM.
#
internetworking_model="@DEFNETWORKMODEL_QEMU@"
name="@RUNTIMENAME@"
hypervisor_name="@HYPERVISOR_QEMU@"
agent_name="@PROJECT_TYPE@"
# disable guest seccomp
# Determines whether container seccomp profiles are passed to the virtual
# machine and applied by the kata agent. If set to true, seccomp is not applied
# within the guest
# (default: true)
disable_guest_seccomp=@DEFDISABLEGUESTSECCOMP@
# vCPUs pinning settings
# if enabled, each vCPU thread will be scheduled to a fixed CPU
# qualified condition: num(vCPU threads) == num(CPUs in sandbox's CPUSet)
# enable_vcpus_pinning = false
# Apply a custom SELinux security policy to the container process inside the VM.
# This is used when you want to apply a type other than the default `container_t`,
# so general users should not uncomment and apply it.
# (format: "user:role:type")
# Note: You cannot specify MCS policy with the label because the sensitivity levels and
# categories are determined automatically by high-level container runtimes such as containerd.
#guest_selinux_label="@DEFGUESTSELINUXLABEL@"
# If enabled, the runtime will create opentracing.io traces and spans.
# (See https://www.jaegertracing.io/docs/getting-started).
# (default: disabled)
#enable_tracing = true
# Set the full url to the Jaeger HTTP Thrift collector.
# The default if not set will be "http://localhost:14268/api/traces"
#jaeger_endpoint = ""
# Sets the username to be used if basic auth is required for Jaeger.
#jaeger_user = ""
# Sets the password to be used if basic auth is required for Jaeger.
#jaeger_password = ""
# If enabled, the runtime will not create a network namespace for shim and hypervisor processes.
# This option may have some potential impact on your host. It should only be used when you know what you're doing.
# `disable_new_netns` conflicts with `internetworking_model=tcfilter` and `internetworking_model=macvtap`. It works only
# with `internetworking_model=none`. The tap device will be in the host network namespace and can connect to a bridge
# (like OVS) directly.
# (default: false)
#disable_new_netns = true
# if enabled, the runtime will add all the kata processes inside one dedicated cgroup.
# The container cgroups in the host are not created, just one single cgroup per sandbox.
# The runtime caller is free to restrict or collect cgroup stats of the overall Kata sandbox.
# The sandbox cgroup path is the parent cgroup of a container with the PodSandbox annotation.
# The sandbox cgroup is constrained if there is no container type annotation.
# See: https://pkg.go.dev/github.com/kata-containers/kata-containers/src/runtime/virtcontainers#ContainerType
sandbox_cgroup_only=@DEFSANDBOXCGROUPONLY_QEMU@
# If enabled, the runtime will attempt to determine appropriate sandbox size (memory, CPU) before booting the virtual machine. In
# this case, the runtime will not dynamically update the amount of memory and CPU in the virtual machine. This is generally helpful
# when a hardware architecture or hypervisor solution is utilized which does not support CPU and/or memory hotplug.
# Compatibility for determining appropriate sandbox (VM) size:
# - When running with pods, sandbox sizing information will only be available if using Kubernetes >= 1.23 and containerd >= 1.6. CRI-O
# does not yet support sandbox sizing annotations.
# - When running single containers using a tool like ctr, container sizing information will be available.
static_sandbox_resource_mgmt=@DEFSTATICRESOURCEMGMT_COCO@
# If specified, sandbox_bind_mounts identifies host paths to be mounted (ro) into the sandbox's shared path.
# This is only valid if filesystem sharing is utilized. The provided path(s) will be bindmounted into the shared fs directory.
# If defaults are utilized, these mounts should be available in the guest at `/run/kata-containers/shared/containers/sandbox-mounts`
# These will not be exposed to the container workloads, and are only provided for potential guest services.
sandbox_bind_mounts=@DEFBINDMOUNTS@
# VFIO Mode
# Determines how VFIO devices should be presented to the container.
# Options:
#
# - vfio
# Matches behaviour of OCI runtimes (e.g. runc) as much as
# possible. VFIO devices will appear in the container as VFIO
# character devices under /dev/vfio. The exact names may differ
# from the host (they need to match the VM's IOMMU group numbers
# rather than the host's)
#
# - guest-kernel
# This is a Kata-specific behaviour that's useful in certain cases.
# The VFIO device is managed by whatever driver in the VM kernel
# claims it. This means it will appear as one or more device nodes
# or network interfaces depending on the nature of the device.
# Using this mode requires specially built workloads that know how
# to locate the relevant device interfaces within the VM.
#
vfio_mode="@DEFVFIOMODE@"
# If enabled, the runtime will not create Kubernetes emptyDir mounts on the guest filesystem. Instead, emptyDir mounts will
# be created on the host and shared via virtio-fs. This is potentially slower, but allows sharing of files from host to guest.
disable_guest_empty_dir=@DEFDISABLEGUESTEMPTYDIR@
# Enabled experimental feature list, format: ["a", "b"].
# Experimental features are features not stable enough for production,
# they may break compatibility, and are prepared for a big version bump.
# Supported experimental features:
# (default: [])
experimental=@DEFAULTEXPFEATURES@
# If enabled, user can run pprof tools with shim v2 process through kata-monitor.
# (default: false)
# enable_pprof = true

View File
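As the comments in the template spell out, the effective CreateContainerRequest timeout is the minimum of create_container_timeout and the kubelet's runtime-request-timeout. A tiny illustrative helper under that assumption (the function name and values are hypothetical):

use std::time::Duration;

// Hypothetical helper mirroring the "effective timeout" rule described in the
// template: the smaller of create_container_timeout and the kubelet's
// runtime-request-timeout wins.
fn effective_create_container_timeout(
    create_container_timeout: Duration,
    kubelet_runtime_request_timeout: Duration,
) -> Duration {
    create_container_timeout.min(kubelet_runtime_request_timeout)
}

fn main() {
    let t = effective_create_container_timeout(
        Duration::from_secs(60), // e.g. a CoCo-style create_container_timeout
        Duration::from_secs(30), // a shorter kubelet runtime-request-timeout
    );
    assert_eq!(t, Duration::from_secs(30));
}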

@@ -129,5 +129,6 @@ impl_agent!(
get_metrics | crate::Empty | crate::MetricsResponse | None,
get_guest_details | crate::GetGuestDetailsRequest | crate::GuestDetailsResponse | None,
add_swap | crate::AddSwapRequest | crate::Empty | None,
add_swap_path | crate::AddSwapPathRequest | crate::Empty | None
add_swap_path | crate::AddSwapPathRequest | crate::Empty | None,
set_policy | crate::SetPolicyRequest | crate::Empty | None
);

View File

@@ -28,7 +28,8 @@ use crate::{
VersionCheckResponse, VolumeStatsRequest, VolumeStatsResponse, WaitProcessRequest,
WriteStreamRequest,
},
GetGuestDetailsRequest, OomEventResponse, WaitProcessResponse, WriteStreamResponse,
GetGuestDetailsRequest, OomEventResponse, SetPolicyRequest, WaitProcessResponse,
WriteStreamResponse,
};
fn trans_vec<F: Sized + Clone, T: From<F>>(from: Vec<F>) -> Vec<T> {
@@ -744,6 +745,15 @@ impl From<GetGuestDetailsRequest> for agent::GuestDetailsRequest {
}
}
impl From<SetPolicyRequest> for agent::SetPolicyRequest {
fn from(from: SetPolicyRequest) -> Self {
Self {
policy: from.policy,
..Default::default()
}
}
}
impl From<agent::AgentDetails> for AgentDetails {
fn from(src: agent::AgentDetails) -> Self {
Self {

View File

@@ -33,6 +33,8 @@ use async_trait::async_trait;
use kata_types::config::Agent as AgentConfig;
use crate::types::SetPolicyRequest;
pub const AGENT_KATA: &str = "kata";
#[async_trait]
@@ -97,4 +99,5 @@ pub trait Agent: AgentManager + HealthService + Send + Sync {
async fn get_guest_details(&self, req: GetGuestDetailsRequest) -> Result<GuestDetailsResponse>;
async fn add_swap(&self, req: AddSwapRequest) -> Result<Empty>;
async fn add_swap_path(&self, req: AddSwapPathRequest) -> Result<Empty>;
async fn set_policy(&self, req: SetPolicyRequest) -> Result<Empty>;
}

View File

@@ -615,6 +615,11 @@ pub struct AddSwapPathRequest {
pub path: String,
}
#[derive(PartialEq, Clone, Default, Debug)]
pub struct SetPolicyRequest {
pub policy: String,
}
#[cfg(test)]
mod test {
use std::convert::TryFrom;

View File

@@ -2144,7 +2144,10 @@ mod tests {
#[test]
fn test_check_tdx_rootfs_settings() {
let sev_snp_details = SevSnpDetails { cbitpos: 42 };
let sev_snp_details = SevSnpDetails {
cbitpos: 42,
phys_addr_reduction: 42,
};
#[derive(Debug)]
struct TestData<'a> {

View File

@@ -538,7 +538,10 @@ mod tests {
#[test]
fn test_guest_protection_is_tdx() {
let sev_snp_details = SevSnpDetails { cbitpos: 42 };
let sev_snp_details = SevSnpDetails {
cbitpos: 42,
phys_addr_reduction: 42,
};
#[derive(Debug)]
struct TestData {

View File

@@ -1196,7 +1196,10 @@ mod tests {
// available_guest_protection() requires super user privs.
skip_if_not_root!();
let sev_snp_details = SevSnpDetails { cbitpos: 42 };
let sev_snp_details = SevSnpDetails {
cbitpos: 42,
phys_addr_reduction: 42,
};
#[derive(Debug)]
struct TestData {

View File

@@ -21,6 +21,7 @@ pub enum ProtectionDeviceConfig {
pub struct SevSnpConfig {
pub is_snp: bool,
pub cbitpos: u32,
pub phys_addr_reduction: u32,
pub firmware: String,
pub host_data: Option<String>,
}

View File

@@ -233,7 +233,7 @@ impl DragonballInner {
let vm_config = VmConfigInfo {
serial_path: Some(serial_path),
mem_size_mib: self.config.memory_info.default_memory as usize,
vcpu_count: self.config.cpu_info.default_vcpus as u8,
vcpu_count: self.config.cpu_info.default_vcpus.ceil() as u8,
max_vcpu_count: self.config.cpu_info.default_maxvcpus as u8,
mem_type,
mem_file_path,

View File
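The .ceil() added here matters because an `as` cast from f32 to an integer truncates toward zero, so a fractional default_vcpus below 1.0 would otherwise become 0. A quick standalone illustration:

fn main() {
    let default_vcpus: f32 = 0.5;
    // A plain `as` cast truncates toward zero: 0.5 becomes 0 vCPUs.
    assert_eq!(default_vcpus as u8, 0);
    // Rounding up first yields a usable vCPU count.
    assert_eq!(default_vcpus.ceil() as u8, 1);
}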

@@ -335,7 +335,7 @@ struct Smp {
impl Smp {
fn new(config: &HypervisorConfig) -> Smp {
Smp {
num_vcpus: config.cpu_info.default_vcpus as u32,
num_vcpus: config.cpu_info.default_vcpus.ceil() as u32,
max_num_vcpus: config.cpu_info.default_maxvcpus,
}
}
@@ -1875,11 +1875,11 @@ struct ObjectSevSnpGuest {
}
impl ObjectSevSnpGuest {
fn new(is_snp: bool, cbitpos: u32, host_data: Option<String>) -> Self {
fn new(is_snp: bool, cbitpos: u32, reduced_phys_bits: u32, host_data: Option<String>) -> Self {
ObjectSevSnpGuest {
id: (if is_snp { "snp" } else { "sev" }).to_owned(),
cbitpos,
reduced_phys_bits: 1,
reduced_phys_bits,
kernel_hashes: true,
host_data,
is_snp,
@@ -2538,8 +2538,13 @@ impl<'a> QemuCmdLine<'a> {
.remove_all_by_key("rootfstype".to_string());
}
pub fn add_sev_protection_device(&mut self, cbitpos: u32, firmware: &str) {
let sev_object = ObjectSevSnpGuest::new(true, cbitpos, None);
pub fn add_sev_protection_device(
&mut self,
cbitpos: u32,
phys_addr_reduction: u32,
firmware: &str,
) {
let sev_object = ObjectSevSnpGuest::new(false, cbitpos, phys_addr_reduction, None);
self.devices.push(Box::new(sev_object));
self.devices.push(Box::new(Bios::new(firmware.to_owned())));
@@ -2552,10 +2557,12 @@ impl<'a> QemuCmdLine<'a> {
pub fn add_sev_snp_protection_device(
&mut self,
cbitpos: u32,
phys_addr_reduction: u32,
firmware: &str,
host_data: &Option<String>,
) {
let sev_snp_object = ObjectSevSnpGuest::new(true, cbitpos, host_data.clone());
let sev_snp_object =
ObjectSevSnpGuest::new(true, cbitpos, phys_addr_reduction, host_data.clone());
self.devices.push(Box::new(sev_snp_object));
self.devices.push(Box::new(Bios::new(firmware.to_owned())));

View File
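For orientation, the fields carried by ObjectSevSnpGuest map onto QEMU's sev-guest/sev-snp-guest object properties. A hypothetical sketch of roughly how such an argument could be rendered; the runtime's own serialization code is authoritative and may differ:

// Illustrative only: approximate the -object argument a SEV-SNP guest needs.
// Property names follow QEMU's sev-common/sev-snp-guest objects.
fn sev_snp_object_arg(cbitpos: u32, reduced_phys_bits: u32, host_data: Option<&str>) -> String {
    let mut arg = format!(
        "sev-snp-guest,id=snp,cbitpos={},reduced-phys-bits={},kernel-hashes=on",
        cbitpos, reduced_phys_bits
    );
    if let Some(data) = host_data {
        arg.push_str(&format!(",host-data={}", data));
    }
    arg
}

fn main() {
    println!("-object {}", sev_snp_object_arg(51, 1, None));
}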

@@ -163,12 +163,14 @@ impl QemuInner {
if sev_snp_cfg.is_snp {
cmdline.add_sev_snp_protection_device(
sev_snp_cfg.cbitpos,
sev_snp_cfg.phys_addr_reduction,
&sev_snp_cfg.firmware,
&sev_snp_cfg.host_data,
)
} else {
cmdline.add_sev_protection_device(
sev_snp_cfg.cbitpos,
sev_snp_cfg.phys_addr_reduction,
&sev_snp_cfg.firmware,
)
}
@@ -603,15 +605,15 @@ impl QemuInner {
}
};
let coldplugged_mem = megs_to_bytes(self.config.memory_info.default_memory);
let coldplugged_mem_mb = self.config.memory_info.default_memory;
let coldplugged_mem = megs_to_bytes(coldplugged_mem_mb);
let new_total_mem = megs_to_bytes(new_total_mem_mb);
if new_total_mem < coldplugged_mem {
return Err(anyhow!(
"asked to resize to {} M but that is less than cold-plugged memory size ({})",
new_total_mem_mb,
bytes_to_megs(coldplugged_mem)
));
warn!(sl!(), "asked to resize to {} M but that is less than cold-plugged memory size ({}), nothing to do",new_total_mem_mb,
bytes_to_megs(coldplugged_mem));
return Ok((coldplugged_mem_mb, MemoryConfig::default()));
}
let guest_mem_block_size = qmp.guest_memory_block_size();

View File

@@ -173,7 +173,10 @@ impl RemoteInner {
..Default::default()
};
info!(sl!(), "Preparing REMOTE VM req: {:?}", req.clone());
let resp = client.create_vm(ctx, &req).await?;
let resp = client
.create_vm(ctx, &req)
.await
.map_err(|e| anyhow::anyhow!("error creating VM: {e}"))?;
info!(sl!(), "Preparing REMOTE VM resp: {:?}", resp.clone());
self.agent_socket_path = resp.agentSocketPath;
self.netns = netns;

View File

@@ -359,6 +359,16 @@ pub fn megs_to_bytes(bytes: u32) -> u64 {
bytes as u64 * (1 << 20)
}
#[allow(dead_code)]
pub fn get_cmd_output(cmd: &str, args: &[&str]) -> Result<String> {
let mut cmd = std::process::Command::new(cmd);
if !args.is_empty() {
cmd.args(args);
}
let result = cmd.output()?;
Ok(String::from_utf8(result.stdout)?)
}
#[cfg(test)]
mod tests {
use std::fs;

View File
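The new get_cmd_output helper is a thin wrapper around std::process::Command; a minimal usage sketch follows (the helper is inlined so the example stands alone, and the command choice is only illustrative):

use anyhow::Result;

// Copy of the helper above, inlined so the example is self-contained.
fn get_cmd_output(cmd: &str, args: &[&str]) -> Result<String> {
    let mut cmd = std::process::Command::new(cmd);
    if !args.is_empty() {
        cmd.args(args);
    }
    let result = cmd.output()?;
    Ok(String::from_utf8(result.stdout)?)
}

fn main() -> Result<()> {
    // Illustrative call: capture the host kernel release.
    let kernel_release = get_cmd_output("uname", &["-r"])?;
    println!("host kernel: {}", kernel_release.trim());
    Ok(())
}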

@@ -35,7 +35,7 @@ uuid = { version = "0.4", features = ["v4"] }
oci-spec = { workspace = true }
inotify = "0.11.0"
walkdir = "2.5.0"
flate2 = { version = "1.0", features = ["zlib"] }
flate2 = "1.1"
tempfile = "3.19.1"
hex = "0.4"

View File

@@ -73,7 +73,7 @@ impl EphemeralVolume {
mount.set_destination(m.destination().clone());
mount.set_typ(Some("bind".to_string()));
mount.set_source(Some(PathBuf::from(&source)));
mount.set_options(Some(vec!["rbind".to_string()]));
mount.set_options(m.options().clone());
Ok(Self {
mount,

View File

@@ -0,0 +1,126 @@
// Copyright (c) 2025 Ant Group
//
// SPDX-License-Identifier: Apache-2.0
//
use std::path::{Path, PathBuf};
use crate::share_fs::{kata_guest_share_dir, PASSTHROUGH_FS_DIR};
use super::Volume;
use anyhow::{anyhow, Context, Result};
use async_trait::async_trait;
use hypervisor::device::device_manager::DeviceManager;
use kata_sys_util::mount::{get_mount_path, get_mount_type};
use kata_types::mount::KATA_K8S_LOCAL_STORAGE_TYPE;
use nix::sys::stat::stat;
use oci_spec::runtime as oci;
use tokio::sync::RwLock;
/// Allocating an FSGroup that owns the pod's volumes
const FS_GID: &str = "fsgid";
#[derive(Debug)]
pub(crate) struct LocalStorage {
mounts: Vec<oci::Mount>,
storage: Option<agent::Storage>,
}
impl LocalStorage {
pub(crate) fn new(m: &oci::Mount, sid: &str, cid: &str) -> Result<Self> {
if m.source().is_none() {
return Err(anyhow!(format!(
"got a wrong volume without source: {:?}",
m
)));
}
let source = &get_mount_path(m.source());
let file_stat = stat(Path::new(source))?;
let mut dir_options = vec!["mode=0777".to_string()];
// If the volume's gid isn't the root group (the default group), a specific
// fsGroup has been set on this local volume, so it should be passed
// to the guest.
if file_stat.st_gid != 0 {
dir_options.push(format!("{}={}", FS_GID, file_stat.st_gid));
}
let file_name = Path::new(source)
.file_name()
.context(format!("get file name from {:?}", &m.source()))?;
// Set the mount source path to the desired directory in the VM.
// In this case it is located in the sandbox directory.
// We rely on the fact that the first container in the VM has the same ID as the sandbox ID.
// In Kubernetes, this is usually the pause container and we depend on it existing for
// local directories to work.
let source = Path::new(&kata_guest_share_dir())
.join(PASSTHROUGH_FS_DIR)
.join(sid)
.join("rootfs")
.join(KATA_K8S_LOCAL_STORAGE_TYPE)
.join(file_name)
.into_os_string()
.into_string()
.map_err(|e| anyhow!("failed to get local path {:?}", e))?;
// Create a storage struct so that the kata agent is able to create
// the local directory backing this volume inside the VM
let local_storage = agent::Storage {
driver: String::from(KATA_K8S_LOCAL_STORAGE_TYPE),
driver_options: Vec::new(),
source: String::from(KATA_K8S_LOCAL_STORAGE_TYPE),
fs_type: String::from(KATA_K8S_LOCAL_STORAGE_TYPE),
fs_group: None,
options: dir_options,
mount_point: source.clone(),
};
let mounts: Vec<oci::Mount> = if sid != cid {
let mut mount = oci::Mount::default();
mount.set_destination(m.destination().clone());
mount.set_typ(Some(KATA_K8S_LOCAL_STORAGE_TYPE.to_string()));
mount.set_source(Some(PathBuf::from(&source)));
mount.set_options(m.options().clone());
vec![mount]
} else {
vec![]
};
Ok(Self {
mounts,
storage: Some(local_storage),
})
}
}
#[async_trait]
impl Volume for LocalStorage {
fn get_volume_mount(&self) -> anyhow::Result<Vec<oci::Mount>> {
Ok(self.mounts.clone())
}
fn get_storage(&self) -> Result<Vec<agent::Storage>> {
let s = if let Some(s) = self.storage.as_ref() {
vec![s.clone()]
} else {
vec![]
};
Ok(s)
}
async fn cleanup(&self, _device_manager: &RwLock<DeviceManager>) -> Result<()> {
// TODO: Clean up LocalStorage
warn!(sl!(), "Cleaning up LocalStorage is no need.");
Ok(())
}
fn get_device_id(&self) -> Result<Option<String>> {
Ok(None)
}
}
pub(crate) fn is_local_volume(m: &oci::Mount) -> bool {
get_mount_type(m).as_str() == KATA_K8S_LOCAL_STORAGE_TYPE
}

View File
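A small sketch of the fsGroup handling described in the comments above: a non-root group owner on the host source directory is forwarded to the guest as an extra directory option keyed by the FS_GID constant (the gid values below are made up):

// Illustrative only: mirrors how LocalStorage builds the directory options
// passed to the agent. A non-root gid on the host source directory signals an
// explicit fsGroup and is forwarded as "fsgid=<gid>".
fn local_dir_options(host_dir_gid: u32) -> Vec<String> {
    let mut opts = vec!["mode=0777".to_string()];
    if host_dir_gid != 0 {
        opts.push(format!("fsgid={}", host_dir_gid));
    }
    opts
}

fn main() {
    assert_eq!(local_dir_options(0), vec!["mode=0777"]);
    assert_eq!(local_dir_options(1000), vec!["mode=0777", "fsgid=1000"]);
}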

@@ -8,6 +8,7 @@ mod block_volume;
mod default_volume;
mod ephemeral_volume;
pub mod hugepage;
mod local_volume;
mod share_fs_volume;
mod shm_volume;
pub mod utils;
@@ -81,6 +82,11 @@ impl VolumeResource {
shm_volume::ShmVolume::new(m)
.with_context(|| format!("new shm volume {:?}", m))?,
)
} else if local_volume::is_local_volume(m) {
Arc::new(
local_volume::LocalStorage::new(m, sid, cid)
.with_context(|| format!("new local volume {:?}", m))?,
)
} else if ephemeral_volume::is_ephemeral_volume(m) {
Arc::new(
ephemeral_volume::EphemeralVolume::new(m)


@@ -54,4 +54,7 @@ pub trait Sandbox: Send + Sync {
// metrics function
async fn agent_metrics(&self) -> Result<String>;
async fn hypervisor_metrics(&self) -> Result<String>;
// set agent policy
async fn set_policy(&self, policy: &str) -> Result<()>;
}


@@ -12,12 +12,12 @@ use agent::ResizeVolumeRequest;
use anyhow::{anyhow, Context, Result};
use common::Sandbox;
use hyper::{Body, Method, Request, Response, StatusCode};
use std::sync::Arc;
use std::{str, sync::Arc};
use url::Url;
use shim_interface::shim_mgmt::{
AGENT_URL, DIRECT_VOLUME_PATH_KEY, DIRECT_VOLUME_RESIZE_URL, DIRECT_VOLUME_STATS_URL,
IP6_TABLE_URL, IP_TABLE_URL, METRICS_URL,
AGENT_POLICY_URL, AGENT_URL, DIRECT_VOLUME_PATH_KEY, DIRECT_VOLUME_RESIZE_URL,
DIRECT_VOLUME_STATS_URL, IP6_TABLE_URL, IP_TABLE_URL, METRICS_URL,
};
// main router for response, this works as a multiplexer on
@@ -45,6 +45,7 @@ pub(crate) async fn handler_mux(
direct_volume_resize_handler(sandbox, req).await
}
(&Method::GET, METRICS_URL) => metrics_url_handler(sandbox, req).await,
(&Method::PUT, AGENT_POLICY_URL) => set_agent_policy_handler(sandbox, req).await,
_ => Ok(not_found(req).await),
}
}
@@ -164,3 +165,22 @@ async fn metrics_url_handler(
agent_metrics, hypervisor_metrics, shim_metrics
))))
}
/// Handler for setting the agent policy via the shim management API.
async fn set_agent_policy_handler(
sandbox: Arc<dyn Sandbox>,
req: Request<Body>,
) -> Result<Response<Body>> {
match *req.method() {
Method::PUT => {
let data = hyper::body::to_bytes(req.into_body()).await?;
let policy: &str = str::from_utf8(&data)?;
sandbox
.set_policy(policy)
.await
.context("set agent policy handler failed")?;
Ok(Response::new(Body::from("")))
}
_ => Err(anyhow!("Set agent policy only accepts the PUT method")),
}
}
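
As a rough sketch of what this new endpoint expects from callers, the snippet below builds the kind of request the handler above accepts: a PUT whose raw body is the UTF-8 policy document. Assumptions: hyper 0.14-style types (matching the Request<Body> signature above) and "/policy" as a stand-in URI, not the actual value of AGENT_POLICY_URL; delivering the request over the shim's management socket is out of scope here.

use hyper::{Body, Method, Request};

fn build_set_policy_request(policy: &str) -> Request<Body> {
    Request::builder()
        .method(Method::PUT)
        .uri("/policy") // placeholder, not the real AGENT_POLICY_URL constant
        .body(Body::from(policy.to_string()))
        .expect("constructing a static request should not fail")
}

Any non-PUT method falls through to the error arm above, and as the VirtSandbox::set_policy change further down shows, an empty policy body is treated as a no-op.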


@@ -134,6 +134,7 @@ impl Container {
&mut spec,
toml_config.runtime.disable_guest_seccomp,
disable_guest_selinux,
toml_config.runtime.disable_guest_empty_dir,
)
.context("amend spec")?;
@@ -622,6 +623,7 @@ fn amend_spec(
spec: &mut oci::Spec,
disable_guest_seccomp: bool,
disable_guest_selinux: bool,
disable_guest_empty_dir: bool,
) -> Result<()> {
// Only the StartContainer hook needs to be reserved for execution in the guest
if let Some(hooks) = spec.hooks().as_ref() {
@@ -631,7 +633,7 @@ fn amend_spec(
}
// Special processing for K8s ephemeral volumes.
update_ephemeral_storage_type(spec);
update_ephemeral_storage_type(spec, disable_guest_empty_dir);
if let Some(linux) = &mut spec.linux_mut() {
if disable_guest_seccomp {
@@ -728,11 +730,11 @@ mod tests {
assert!(spec.linux().as_ref().unwrap().seccomp().is_some());
// disable_guest_seccomp = false
amend_spec(&mut spec, false, false).unwrap();
amend_spec(&mut spec, false, false, false).unwrap();
assert!(spec.linux().as_ref().unwrap().seccomp().is_some());
// disable_guest_seccomp = true
amend_spec(&mut spec, true, false).unwrap();
amend_spec(&mut spec, true, false, false).unwrap();
assert!(spec.linux().as_ref().unwrap().seccomp().is_none());
}
@@ -755,12 +757,12 @@ mod tests {
.unwrap();
// disable_guest_selinux = false, selinux labels are left alone
amend_spec(&mut spec, false, false).unwrap();
amend_spec(&mut spec, false, false, false).unwrap();
assert!(spec.process().as_ref().unwrap().selinux_label() == &Some("xxx".to_owned()));
assert!(spec.linux().as_ref().unwrap().mount_label() == &Some("yyy".to_owned()));
// disable_guest_selinux = true, selinux labels are reset
amend_spec(&mut spec, false, true).unwrap();
amend_spec(&mut spec, false, true, false).unwrap();
assert!(spec.process().as_ref().unwrap().selinux_label().is_none());
assert!(spec.linux().as_ref().unwrap().mount_label().is_none());
}


@@ -6,7 +6,7 @@
use crate::health_check::HealthCheck;
use agent::kata::KataAgent;
use agent::types::KernelModule;
use agent::types::{KernelModule, SetPolicyRequest};
use agent::{
self, Agent, GetGuestDetailsRequest, GetIPTablesRequest, SetIPTablesRequest, VolumeStatsRequest,
};
@@ -358,7 +358,13 @@ impl VirtSandbox {
}
if boot_info.image.is_empty() {
if boot_info.vm_rootfs_driver.ends_with("ccw") && security_info.confidential_guest {
let is_remote_hypervisor = Arc::clone(&self.resource_manager.config().await)
.runtime
.hypervisor_name
== "remote";
if (boot_info.vm_rootfs_driver.ends_with("ccw") && security_info.confidential_guest)
|| is_remote_hypervisor
{
return Ok(None);
} else {
return Err(anyhow!("neither the image nor the initrd is set"));
@@ -373,6 +379,28 @@ impl VirtSandbox {
}))
}
async fn set_agent_policy(&self) -> Result<()> {
// TODO: Exclude policy-related items from the annotations.
let toml_config = self.resource_manager.config().await;
if let Some(agent_config) = toml_config.agent.get(&toml_config.runtime.agent_name) {
// If a Policy has been specified, send it to the agent.
if !agent_config.policy.is_empty() {
info!(
sl!(),
"Setting Agent Policy with {:?}.", &agent_config.policy
);
self.agent
.set_policy(SetPolicyRequest {
policy: agent_config.policy.clone(),
})
.await
.context("sandbox: set policy failed")?;
}
}
Ok(())
}
async fn prepare_vm_socket_config(&self) -> Result<ResourceConfig> {
// It will check the hypervisor's capabilities to see if it supports hybrid-vsock.
// If it does not, it'll assume that it only supports legacy vsock.
@@ -417,6 +445,7 @@ impl VirtSandbox {
Ok(Some(ProtectionDeviceConfig::SevSnp(SevSnpConfig {
is_snp: false,
cbitpos: details.cbitpos,
phys_addr_reduction: details.phys_addr_reduction,
firmware: hypervisor_config.boot_info.firmware.clone(),
host_data: None,
})))
@@ -437,6 +466,7 @@ impl VirtSandbox {
Ok(Some(ProtectionDeviceConfig::SevSnp(SevSnpConfig {
is_snp,
cbitpos: details.cbitpos,
phys_addr_reduction: details.phys_addr_reduction,
firmware: hypervisor_config.boot_info.firmware.clone(),
host_data: init_data,
})))
@@ -628,6 +658,7 @@ impl Sandbox for VirtSandbox {
.start(&address)
.await
.context(format!("connect to address {:?}", &address))?;
self.set_agent_policy().await.context("set agent policy")?;
self.resource_manager
.setup_after_start_vm()
@@ -927,6 +958,24 @@ impl Sandbox for VirtSandbox {
async fn hypervisor_metrics(&self) -> Result<String> {
self.hypervisor.get_hypervisor_metrics().await
}
async fn set_policy(&self, policy: &str) -> Result<()> {
if policy.is_empty() {
debug!(sl!(), "sb: set_policy skipped without policy");
return Ok(());
}
info!(sl!(), "sb: set_policy invoked");
let policy_req = SetPolicyRequest {
policy: policy.to_string(),
};
self.agent
.set_policy(policy_req)
.await
.context("sandbox: failed to set policy")?;
Ok(())
}
}
#[async_trait]

View File

@@ -52,7 +52,7 @@ sev_snp_guest = true
# List of valid annotation names for the hypervisor
# Each member of the list is a regular expression, which is the base name
# of the annotation, e.g. "path" for "io.katacontainers.config.hypervisor.path"
enable_annotations = @DEFENABLEANNOTATIONS@
enable_annotations = @DEFENABLEANNOTATIONS_COCO@
# List of valid annotation values for the hypervisor
# Each member of the list is a path pattern as described by glob(3).


@@ -49,7 +49,7 @@ confidential_guest = true
# List of valid annotation names for the hypervisor
# Each member of the list is a regular expression, which is the base name
# of the annotation, e.g. "path" for "io.katacontainers.config.hypervisor.path"
enable_annotations = @DEFENABLEANNOTATIONS@
enable_annotations = @DEFENABLEANNOTATIONS_COCO@
# List of valid annotation values for the hypervisor
# Each member of the list is a path pattern as described by glob(3).


@@ -13,8 +13,8 @@ require (
github.com/blang/semver/v4 v4.0.0
github.com/container-orchestrated-devices/container-device-interface v0.6.0
github.com/containerd/cgroups v1.1.0
github.com/containerd/console v1.0.4
github.com/containerd/containerd v1.7.27
github.com/containerd/console v1.0.5
github.com/containerd/containerd v1.7.29
github.com/containerd/containerd/api v1.9.0
github.com/containerd/cri-containerd v1.19.0
github.com/containerd/fifo v1.1.0
@@ -35,10 +35,11 @@ require (
github.com/hashicorp/go-multierror v1.1.1
github.com/intel-go/cpuid v0.0.0-20210602155658-5747e5cec0d9
github.com/mdlayher/vsock v1.2.1
github.com/moby/sys/mountinfo v0.7.2
github.com/moby/sys/userns v0.1.0
github.com/opencontainers/runc v1.2.6
github.com/opencontainers/runc v1.2.8
github.com/opencontainers/runtime-spec v1.2.1
github.com/opencontainers/selinux v1.12.0
github.com/opencontainers/selinux v1.13.0
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58
github.com/pkg/errors v0.9.1
github.com/prometheus/client_golang v1.22.0
@@ -47,7 +48,7 @@ require (
github.com/prometheus/procfs v0.15.1
github.com/safchain/ethtool v0.5.10
github.com/sirupsen/logrus v1.9.3
github.com/stretchr/testify v1.10.0
github.com/stretchr/testify v1.11.1
github.com/urfave/cli v1.22.15
github.com/vishvananda/netlink v1.3.1-0.20250303224720-0e7078ed04c8
github.com/vishvananda/netns v0.0.5
@@ -56,8 +57,8 @@ require (
go.opentelemetry.io/otel/exporters/jaeger v1.0.0
go.opentelemetry.io/otel/sdk v1.35.0
go.opentelemetry.io/otel/trace v1.35.0
golang.org/x/oauth2 v0.29.0
golang.org/x/sys v0.33.0
golang.org/x/oauth2 v0.30.0
golang.org/x/sys v0.34.0
google.golang.org/grpc v1.72.0
google.golang.org/protobuf v1.36.6
k8s.io/apimachinery v0.33.0
@@ -66,6 +67,7 @@ require (
)
require (
cyphar.com/go-pathrs v0.2.1 // indirect
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 // indirect
github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20230306123547-8075edf89bb0 // indirect
github.com/Microsoft/go-winio v0.6.2 // indirect
@@ -83,7 +85,7 @@ require (
github.com/containerd/platforms v0.2.1 // indirect
github.com/containernetworking/cni v1.3.0 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.6 // indirect
github.com/cyphar/filepath-securejoin v0.4.1 // indirect
github.com/cyphar/filepath-securejoin v0.6.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/distribution/reference v0.6.0 // indirect
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c // indirect
@@ -108,7 +110,6 @@ require (
github.com/mdlayher/socket v0.5.1 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/moby/locker v1.0.1 // indirect
github.com/moby/sys/mountinfo v0.7.2 // indirect
github.com/moby/sys/sequential v0.5.0 // indirect
github.com/moby/sys/signal v0.7.0 // indirect
github.com/moby/sys/symlink v0.2.0 // indirect
@@ -129,10 +130,10 @@ require (
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.59.0 // indirect
go.opentelemetry.io/otel/metric v1.35.0 // indirect
golang.org/x/exp v0.0.0-20241108190413-2d47ceb2692f // indirect
golang.org/x/mod v0.24.0 // indirect
golang.org/x/net v0.40.0 // indirect
golang.org/x/sync v0.14.0 // indirect
golang.org/x/text v0.25.0 // indirect
golang.org/x/mod v0.26.0 // indirect
golang.org/x/net v0.42.0 // indirect
golang.org/x/sync v0.16.0 // indirect
golang.org/x/text v0.27.0 // indirect
google.golang.org/genproto v0.0.0-20250303144028-a0af3efb3deb // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250313205543-e70fdf4c4cb4 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect


@@ -1,6 +1,8 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
code.cloudfoundry.org/bytefmt v0.0.0-20211005130812-5bb3c17173e5 h1:tM5+dn2C9xZw1RzgI6WTQW1rGqdUimKB3RFbyu4h6Hc=
code.cloudfoundry.org/bytefmt v0.0.0-20211005130812-5bb3c17173e5/go.mod h1:v4VVB6oBMz/c9fRY6vZrwr5xKRWOH5NPDjQZlPk0Gbs=
cyphar.com/go-pathrs v0.2.1 h1:9nx1vOgwVvX1mNBWDu93+vaceedpbsDqo+XuBGL40b8=
cyphar.com/go-pathrs v0.2.1/go.mod h1:y8f1EMG7r+hCuFf/rXsKqMJrJAUoADZGNh5/vZPKcGc=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 h1:bvDV9vkmnHYOMsOr4WLk+Vo07yKIzd94sVoIqshQ4bU=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24/go.mod h1:8o94RPi1/7XTJvwPpRSzSUedZrtlirdB3r9Z20bi2f8=
github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20230306123547-8075edf89bb0 h1:59MxjQVfjXsBpLy+dbd2/ELV5ofnUkUZBvWSC85sheA=
@@ -34,10 +36,10 @@ github.com/containerd/cgroups v1.1.0 h1:v8rEWFl6EoqHB+swVNjVoCJE8o3jX7e8nqBGPLaD
github.com/containerd/cgroups v1.1.0/go.mod h1:6ppBcbh/NOOUU+dMKrykgaBnK9lCIBxHqJDGwsa1mIw=
github.com/containerd/cgroups/v3 v3.0.5 h1:44na7Ud+VwyE7LIoJ8JTNQOa549a8543BmzaJHo6Bzo=
github.com/containerd/cgroups/v3 v3.0.5/go.mod h1:SA5DLYnXO8pTGYiAHXz94qvLQTKfVM5GEVisn4jpins=
github.com/containerd/console v1.0.4 h1:F2g4+oChYvBTsASRTz8NP6iIAi97J3TtSAsLbIFn4ro=
github.com/containerd/console v1.0.4/go.mod h1:YynlIjWYF8myEu6sdkwKIvGQq+cOckRm6So2avqoYAk=
github.com/containerd/containerd v1.7.27 h1:yFyEyojddO3MIGVER2xJLWoCIn+Up4GaHFquP7hsFII=
github.com/containerd/containerd v1.7.27/go.mod h1:xZmPnl75Vc+BLGt4MIfu6bp+fy03gdHAn9bz+FreFR0=
github.com/containerd/console v1.0.5 h1:R0ymNeydRqH2DmakFNdmjR2k0t7UPuiOV/N/27/qqsc=
github.com/containerd/console v1.0.5/go.mod h1:YynlIjWYF8myEu6sdkwKIvGQq+cOckRm6So2avqoYAk=
github.com/containerd/containerd v1.7.29 h1:90fWABQsaN9mJhGkoVnuzEY+o1XDPbg9BTC9QTAHnuE=
github.com/containerd/containerd v1.7.29/go.mod h1:azUkWcOvHrWvaiUjSQH0fjzuHIwSPg1WL5PshGP4Szs=
github.com/containerd/containerd/api v1.9.0 h1:HZ/licowTRazus+wt9fM6r/9BQO7S0vD5lMcWspGIg0=
github.com/containerd/containerd/api v1.9.0/go.mod h1:GhghKFmTR3hNtyznBoQ0EMWr9ju5AqHjcZPsSpTKutI=
github.com/containerd/continuity v0.4.4 h1:/fNVfTJ7wIl/YPMHjf+5H32uFhl63JucB34PlCpMKII=
@@ -71,8 +73,8 @@ github.com/cpuguy83/go-md2man/v2 v2.0.6 h1:XJtiaUW6dEEqVuZiMTn1ldk455QWwEIsMIJlo
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/cri-o/cri-o v1.34.0 h1:ux2URwAyENy5e5hD9Z95tshdfy98eqatZk0fxx3rhuk=
github.com/cri-o/cri-o v1.34.0/go.mod h1:kP40HG+1EW5CDNHjqQBFhb6dehT5dCBKcmtO5RZAm6k=
github.com/cyphar/filepath-securejoin v0.4.1 h1:JyxxyPEaktOD+GAnqIqTf9A8tHyAG22rowi7HkoSU1s=
github.com/cyphar/filepath-securejoin v0.4.1/go.mod h1:Sdj7gXlvMcPZsbhwhQ33GguGLDGQL7h7bg04C/+u9jI=
github.com/cyphar/filepath-securejoin v0.6.0 h1:BtGB77njd6SVO6VztOHfPxKitJvd/VPT+OFBFMOi1Is=
github.com/cyphar/filepath-securejoin v0.6.0/go.mod h1:A8hd4EnAeyujCJRrICiOWqjS1AX0a9kM5XL+NwKoYSc=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
@@ -238,14 +240,14 @@ github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040=
github.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M=
github.com/opencontainers/runc v1.2.6 h1:P7Hqg40bsMvQGCS4S7DJYhUZOISMLJOB2iGX5COWiPk=
github.com/opencontainers/runc v1.2.6/go.mod h1:dOQeFo29xZKBNeRBI0B19mJtfHv68YgCTh1X+YphA+4=
github.com/opencontainers/runc v1.2.8 h1:RnEICeDReapbZ5lZEgHvj7E9Q3Eex9toYmaGBsbvU5Q=
github.com/opencontainers/runc v1.2.8/go.mod h1:cC0YkmZcuvr+rtBZ6T7NBoVbMGNAdLa/21vIElJDOzI=
github.com/opencontainers/runtime-spec v1.2.1 h1:S4k4ryNgEpxW1dzyqffOmhI1BHYcjzU8lpJfSlR0xww=
github.com/opencontainers/runtime-spec v1.2.1/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runtime-tools v0.9.1-0.20250303011046-260e151b8552 h1:CkXngT0nixZqQUPDVfwVs3GiuhfTqCMk0V+OoHpxIvA=
github.com/opencontainers/runtime-tools v0.9.1-0.20250303011046-260e151b8552/go.mod h1:T487Kf80NeF2i0OyVXHiylg217e0buz8pQsa0T791RA=
github.com/opencontainers/selinux v1.12.0 h1:6n5JV4Cf+4y0KNXW48TLj5DwfXpvWlxXplUkdTrmPb8=
github.com/opencontainers/selinux v1.12.0/go.mod h1:BTPX+bjVbWGXw7ZZWUbdENt8w0htPSrlgOOysQaU62U=
github.com/opencontainers/selinux v1.13.0 h1:Zza88GWezyT7RLql12URvoxsbLfjFx988+LGaWfbL84=
github.com/opencontainers/selinux v1.13.0/go.mod h1:XxWTed+A/s5NNq4GmYScVy+9jzXhGBVEOAyucdRUY8s=
github.com/opentracing/opentracing-go v1.2.0 h1:uEJPy/1a5RIPAJ0Ov+OIO8OxWu77jEv+1B0VhjKrZUs=
github.com/opentracing/opentracing-go v1.2.0/go.mod h1:GxEUsuufX4nBwe+T+Wl9TAgYrxe9dPLANfrWvHYVTgc=
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58 h1:onHthvaw9LFnH4t2DcNVpwGmV9E1BkGknEliJkfwQj0=
@@ -287,8 +289,8 @@ github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635 h1:kdXcSzyDtseVEc4yCz2qF8ZrQvIDBJLl4S1c3GCXmoI=
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/urfave/cli v1.22.15 h1:nuqt+pdC/KqswQKhETJjo7pvn/k4xMUxgW6liI7XpnM=
@@ -348,8 +350,8 @@ golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvx
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.24.0 h1:ZfthKaKaT4NrhGVZHO1/WDTwGES4De8KtWO0SIbNJMU=
golang.org/x/mod v0.24.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww=
golang.org/x/mod v0.26.0 h1:EGMPT//Ezu+ylkCijjPc+f4Aih7sZvaAr+O3EHBxvZg=
golang.org/x/mod v0.26.0/go.mod h1:/j6NAhSk8iQ723BGAUyoAcn7SlD7s15Dp9Nd/SfeaFQ=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -362,18 +364,18 @@ golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk=
golang.org/x/net v0.40.0 h1:79Xs7wF06Gbdcg4kdCCIQArK11Z1hr5POQ6+fIYHNuY=
golang.org/x/net v0.40.0/go.mod h1:y0hY0exeL2Pku80/zKK7tpntoX23cqL3Oa6njdgRtds=
golang.org/x/net v0.42.0 h1:jzkYrhi3YQWD6MLBJcsklgQsoAcw89EcZbJw8Z614hs=
golang.org/x/net v0.42.0/go.mod h1:FF1RA5d3u7nAYA4z2TkclSCKh68eSXtiFwcWQpPXdt8=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.29.0 h1:WdYw2tdTK1S8olAzWHdgeqfy+Mtm9XNhv/xJsY65d98=
golang.org/x/oauth2 v0.29.0/go.mod h1:onh5ek6nERTohokkhCD/y2cV4Do3fxFHFuAejCkRWT8=
golang.org/x/oauth2 v0.30.0 h1:dnDm7JmhM45NNpd8FDDeLhK6FwqbOf4MLCM9zb1BOHI=
golang.org/x/oauth2 v0.30.0/go.mod h1:B++QgG3ZKulg6sRPGD/mqlHQs5rB3Ml9erfeDY7xKlU=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.14.0 h1:woo0S4Yywslg6hp4eUFjTVOyKt0RookbpAHG4c1HmhQ=
golang.org/x/sync v0.14.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -394,14 +396,14 @@ golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.10.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=
golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/sys v0.34.0 h1:H5Y5sJ2L2JRdyv7ROF1he/lPdvFsd0mJHFw2ThKHxLA=
golang.org/x/sys v0.34.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.25.0 h1:qVyWApTSYLk/drJRO5mDlNYskwQznZmkpV2c8q9zls4=
golang.org/x/text v0.25.0/go.mod h1:WEdwpYrmk1qmdHvhkSTNPm3app7v4rsT8F2UD6+VHIA=
golang.org/x/text v0.27.0 h1:4fGWRpyh641NLlecmyl4LOe6yDdfaYNrGb2zdfo4JV4=
golang.org/x/text v0.27.0/go.mod h1:1D28KMCvyooCX9hBiosv5Tz/+YLxj0j7XhWjpSUF7CU=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
@@ -411,8 +413,8 @@ golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtn
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.31.0 h1:0EedkvKDbh+qistFTd0Bcwe/YLh4vHwWEkiI0toFIBU=
golang.org/x/tools v0.31.0/go.mod h1:naFTU+Cev749tSJRXJlna0T3WxKvb1kWEx15xA4SdmQ=
golang.org/x/tools v0.34.0 h1:qIpSLOxeCYGg9TrcJokLBG4KFA6d795g0xkBkiESGlo=
golang.org/x/tools v0.34.0/go.mod h1:pAP9OwEaY1CAW3HOmg3hLZC5Z0CCmzjAF2UQMSqNARg=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=

src/runtime/vendor/cyphar.com/go-pathrs/.golangci.yml generated vendored Normal file

@@ -0,0 +1,43 @@
# SPDX-License-Identifier: MPL-2.0
#
# libpathrs: safe path resolution on Linux
# Copyright (C) 2019-2025 Aleksa Sarai <cyphar@cyphar.com>
# Copyright (C) 2019-2025 SUSE LLC
#
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at https://mozilla.org/MPL/2.0/.
version: "2"
linters:
enable:
- bidichk
- cyclop
- errname
- errorlint
- exhaustive
- goconst
- godot
- gomoddirectives
- gosec
- mirror
- misspell
- mnd
- nilerr
- nilnil
- perfsprint
- prealloc
- reassign
- revive
- unconvert
- unparam
- usestdlibvars
- wastedassign
formatters:
enable:
- gofumpt
- goimports
settings:
goimports:
local-prefixes:
- cyphar.com/go-pathrs

src/runtime/vendor/cyphar.com/go-pathrs/COPYING generated vendored Normal file

@@ -0,0 +1,373 @@
Mozilla Public License Version 2.0
==================================
1. Definitions
--------------
1.1. "Contributor"
means each individual or legal entity that creates, contributes to
the creation of, or owns Covered Software.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used
by a Contributor and that particular Contributor's Contribution.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached
the notice in Exhibit A, the Executable Form of such Source Code
Form, and Modifications of such Source Code Form, in each case
including portions thereof.
1.5. "Incompatible With Secondary Licenses"
means
(a) that the initial Contributor has attached the notice described
in Exhibit B to the Covered Software; or
(b) that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the
terms of a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in
a separate file or files, that is not Covered Software.
1.8. "License"
means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible,
whether at the time of the initial grant or subsequently, any and
all of the rights conveyed by this License.
1.10. "Modifications"
means any of the following:
(a) any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered
Software; or
(b) any new file in Source Code Form that contains any Covered
Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the
License, by the making, using, selling, offering for sale, having
made, import, or transfer of either its Contributions or its
Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU
Lesser General Public License, Version 2.1, the GNU Affero General
Public License, Version 3.0, or any later versions of those
licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that
controls, is controlled by, or is under common control with You. For
purposes of this definition, "control" means (a) the power, direct
or indirect, to cause the direction or management of such entity,
whether by contract or otherwise, or (b) ownership of more than
fifty percent (50%) of the outstanding shares or beneficial
ownership of such entity.
2. License Grants and Conditions
--------------------------------
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
(a) under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
(b) under Patent Claims of such Contributor to make, use, sell, offer
for sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
(a) for any code that a Contributor has removed from Covered Software;
or
(b) for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
(c) under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights
to grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted
in Section 2.1.
3. Responsibilities
-------------------
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
(a) such Covered Software must also be made available in Source Code
Form, as described in Section 3.1, and You must inform recipients of
the Executable Form how they can obtain a copy of such Source Code
Form by reasonable means in a timely manner, at a charge no more
than the cost of distribution to the recipient; and
(b) You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter
the recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty,
or limitations of liability) contained within the Source Code Form of
the Covered Software, except that You may alter any license notices to
the extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
---------------------------------------------------
If it is impossible for You to comply with any of the terms of this
License with respect to some or all of the Covered Software due to
statute, judicial order, or regulation then You must: (a) comply with
the terms of this License to the maximum extent possible; and (b)
describe the limitations and the code they affect. Such description must
be placed in a text file included with all distributions of the Covered
Software under this License. Except to the extent prohibited by statute
or regulation, such description must be sufficiently detailed for a
recipient of ordinary skill to be able to understand it.
5. Termination
--------------
5.1. The rights granted under this License will terminate automatically
if You fail to comply with any of its terms. However, if You become
compliant, then the rights granted under this License from a particular
Contributor are reinstated (a) provisionally, unless and until such
Contributor explicitly and finally terminates Your grants, and (b) on an
ongoing basis, if such Contributor fails to notify You of the
non-compliance by some reasonable means prior to 60 days after You have
come back into compliance. Moreover, Your grants from a particular
Contributor are reinstated on an ongoing basis if such Contributor
notifies You of the non-compliance by some reasonable means, this is the
first time You have received notice of non-compliance with this License
from such Contributor, and You become compliant prior to 30 days after
Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all
end user license agreements (excluding distributors and resellers) which
have been validly granted by You or Your distributors under this License
prior to termination shall survive termination.
************************************************************************
* *
* 6. Disclaimer of Warranty *
* ------------------------- *
* *
* Covered Software is provided under this License on an "as is" *
* basis, without warranty of any kind, either expressed, implied, or *
* statutory, including, without limitation, warranties that the *
* Covered Software is free of defects, merchantable, fit for a *
* particular purpose or non-infringing. The entire risk as to the *
* quality and performance of the Covered Software is with You. *
* Should any Covered Software prove defective in any respect, You *
* (not any Contributor) assume the cost of any necessary servicing, *
* repair, or correction. This disclaimer of warranty constitutes an *
* essential part of this License. No use of any Covered Software is *
* authorized under this License except under this disclaimer. *
* *
************************************************************************
************************************************************************
* *
* 7. Limitation of Liability *
* -------------------------- *
* *
* Under no circumstances and under no legal theory, whether tort *
* (including negligence), contract, or otherwise, shall any *
* Contributor, or anyone who distributes Covered Software as *
* permitted above, be liable to You for any direct, indirect, *
* special, incidental, or consequential damages of any character *
* including, without limitation, damages for lost profits, loss of *
* goodwill, work stoppage, computer failure or malfunction, or any *
* and all other commercial damages or losses, even if such party *
* shall have been informed of the possibility of such damages. This *
* limitation of liability shall not apply to liability for death or *
* personal injury resulting from such party's negligence to the *
* extent applicable law prohibits such limitation. Some *
* jurisdictions do not allow the exclusion or limitation of *
* incidental or consequential damages, so this exclusion and *
* limitation may not apply to You. *
* *
************************************************************************
8. Litigation
-------------
Any litigation relating to this License may be brought only in the
courts of a jurisdiction where the defendant maintains its principal
place of business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions.
Nothing in this Section shall prevent a party's ability to bring
cross-claims or counter-claims.
9. Miscellaneous
----------------
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides
that the language of a contract shall be construed against the drafter
shall not be used to construe this License against a Contributor.
10. Versions of the License
---------------------------
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
-------------------------------------------
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at https://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular
file, then You may include the notice in a location (such as a LICENSE
file in a relevant directory) where a recipient would be likely to look
for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
---------------------------------------------------------
This Source Code Form is "Incompatible With Secondary Licenses", as
defined by the Mozilla Public License, v. 2.0.

src/runtime/vendor/cyphar.com/go-pathrs/doc.go generated vendored Normal file

@@ -0,0 +1,14 @@
// SPDX-License-Identifier: MPL-2.0
/*
* libpathrs: safe path resolution on Linux
* Copyright (C) 2019-2025 Aleksa Sarai <cyphar@cyphar.com>
* Copyright (C) 2019-2025 SUSE LLC
*
* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at https://mozilla.org/MPL/2.0/.
*/
// Package pathrs provides bindings for libpathrs, a library for safe path
// resolution on Linux.
package pathrs

src/runtime/vendor/cyphar.com/go-pathrs/handle_linux.go generated vendored Normal file

@@ -0,0 +1,114 @@
//go:build linux
// SPDX-License-Identifier: MPL-2.0
/*
* libpathrs: safe path resolution on Linux
* Copyright (C) 2019-2025 Aleksa Sarai <cyphar@cyphar.com>
* Copyright (C) 2019-2025 SUSE LLC
*
* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at https://mozilla.org/MPL/2.0/.
*/
package pathrs
import (
"fmt"
"os"
"cyphar.com/go-pathrs/internal/fdutils"
"cyphar.com/go-pathrs/internal/libpathrs"
)
// Handle is a handle for a path within a given [Root]. This handle references
// an already-resolved path which can be used for only one purpose -- to
// "re-open" the handle and get an actual [os.File] which can be used for
// ordinary operations.
//
// If you wish to open a file without having an intermediate [Handle] object,
// you can try to use [Root.Open] or [Root.OpenFile].
//
// It is critical that you perform all relevant operations through this [Handle]
// (rather than fetching the file descriptor yourself with [Handle.IntoRaw]),
// because the security properties of libpathrs depend on users doing all
// relevant filesystem operations through libpathrs.
//
// [os.File]: https://pkg.go.dev/os#File
type Handle struct {
inner *os.File
}
// HandleFromFile creates a new [Handle] from an existing file handle. The
// handle will be copied by this method, so the original handle should still be
// freed by the caller.
//
// This is effectively the inverse operation of [Handle.IntoRaw], and is used
// for "deserialising" pathrs root handles.
func HandleFromFile(file *os.File) (*Handle, error) {
newFile, err := fdutils.DupFile(file)
if err != nil {
return nil, fmt.Errorf("duplicate handle fd: %w", err)
}
return &Handle{inner: newFile}, nil
}
// Open creates an "upgraded" file handle to the file referenced by the
// [Handle]. Note that the original [Handle] is not consumed by this operation,
// and can be opened multiple times.
//
// The handle returned is only usable for reading, and this method is
// shorthand for [Handle.OpenFile] with os.O_RDONLY.
//
// TODO: Rename these to "Reopen" or something.
func (h *Handle) Open() (*os.File, error) {
return h.OpenFile(os.O_RDONLY)
}
// OpenFile creates an "upgraded" file handle to the file referenced by the
// [Handle]. Note that the original [Handle] is not consumed by this operation,
// and can be opened multiple times.
//
// The provided flags indicate which open(2) flags are used to create the new
// handle.
//
// TODO: Rename these to "Reopen" or something.
func (h *Handle) OpenFile(flags int) (*os.File, error) {
return fdutils.WithFileFd(h.inner, func(fd uintptr) (*os.File, error) {
newFd, err := libpathrs.Reopen(fd, flags)
if err != nil {
return nil, err
}
return os.NewFile(newFd, h.inner.Name()), nil
})
}
// IntoFile unwraps the [Handle] into its underlying [os.File].
//
// You almost certainly want to use [Handle.OpenFile] to get a non-O_PATH
// version of this [Handle].
//
// This operation returns the internal [os.File] of the [Handle] directly, so
// calling [Handle.Close] will also close any copies of the returned [os.File].
// If you want to get an independent copy, use [Handle.Clone] followed by
// [Handle.IntoFile] on the cloned [Handle].
//
// [os.File]: https://pkg.go.dev/os#File
func (h *Handle) IntoFile() *os.File {
// TODO: Figure out if we really don't want to make a copy.
// TODO: We almost certainly want to clear r.inner here, but we can't do
// that easily atomically (we could use atomic.Value but that'll make
// things quite a bit uglier).
return h.inner
}
// Clone creates a copy of a [Handle], such that it has a separate lifetime to
// the original (while referring to the same underlying file).
func (h *Handle) Clone() (*Handle, error) {
return HandleFromFile(h.inner)
}
// Close frees all of the resources used by the [Handle].
func (h *Handle) Close() error {
return h.inner.Close()
}


@@ -0,0 +1,75 @@
//go:build linux
// SPDX-License-Identifier: MPL-2.0
/*
* libpathrs: safe path resolution on Linux
* Copyright (C) 2019-2025 Aleksa Sarai <cyphar@cyphar.com>
* Copyright (C) 2019-2025 SUSE LLC
*
* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at https://mozilla.org/MPL/2.0/.
*/
// Package fdutils contains a few helper methods when dealing with *os.File and
// file descriptors.
package fdutils
import (
"fmt"
"os"
"golang.org/x/sys/unix"
"cyphar.com/go-pathrs/internal/libpathrs"
)
// DupFd makes a duplicate of the given fd.
func DupFd(fd uintptr, name string) (*os.File, error) {
newFd, err := unix.FcntlInt(fd, unix.F_DUPFD_CLOEXEC, 0)
if err != nil {
return nil, fmt.Errorf("fcntl(F_DUPFD_CLOEXEC): %w", err)
}
return os.NewFile(uintptr(newFd), name), nil
}
// WithFileFd is a more ergonomic wrapper around file.SyscallConn().Control().
func WithFileFd[T any](file *os.File, fn func(fd uintptr) (T, error)) (T, error) {
conn, err := file.SyscallConn()
if err != nil {
return *new(T), err
}
var (
ret T
innerErr error
)
if err := conn.Control(func(fd uintptr) {
ret, innerErr = fn(fd)
}); err != nil {
return *new(T), err
}
return ret, innerErr
}
// DupFile makes a duplicate of the given file.
func DupFile(file *os.File) (*os.File, error) {
return WithFileFd(file, func(fd uintptr) (*os.File, error) {
return DupFd(fd, file.Name())
})
}
// MkFile creates a new *os.File from the provided file descriptor. However,
// unlike os.NewFile, the file's Name is based on the real path (provided by
// /proc/self/fd/$n).
func MkFile(fd uintptr) (*os.File, error) {
fdPath := fmt.Sprintf("fd/%d", fd)
fdName, err := libpathrs.ProcReadlinkat(libpathrs.ProcDefaultRootFd, libpathrs.ProcThreadSelf, fdPath)
if err != nil {
_ = unix.Close(int(fd))
return nil, fmt.Errorf("failed to fetch real name of fd %d: %w", fd, err)
}
// TODO: Maybe we should prefix this name with something to indicate to
// users that they must not use this path as a "safe" path. Something like
// "//pathrs-handle:/foo/bar"?
return os.NewFile(fd, fdName), nil
}


@@ -0,0 +1,40 @@
//go:build linux
// TODO: Use "go:build unix" once we bump the minimum Go version to 1.19.
// SPDX-License-Identifier: MPL-2.0
/*
* libpathrs: safe path resolution on Linux
* Copyright (C) 2019-2025 Aleksa Sarai <cyphar@cyphar.com>
* Copyright (C) 2019-2025 SUSE LLC
*
* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at https://mozilla.org/MPL/2.0/.
*/
package libpathrs
import (
"syscall"
)
// Error represents an underlying libpathrs error.
type Error struct {
description string
errno syscall.Errno
}
// Error returns a textual description of the error.
func (err *Error) Error() string {
return err.description
}
// Unwrap returns the underlying error which was wrapped by this error (if
// applicable).
func (err *Error) Unwrap() error {
if err.errno != 0 {
return err.errno
}
return nil
}


@@ -0,0 +1,337 @@
//go:build linux
// SPDX-License-Identifier: MPL-2.0
/*
* libpathrs: safe path resolution on Linux
* Copyright (C) 2019-2025 Aleksa Sarai <cyphar@cyphar.com>
* Copyright (C) 2019-2025 SUSE LLC
*
* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at https://mozilla.org/MPL/2.0/.
*/
// Package libpathrs is an internal thin wrapper around the libpathrs C API.
package libpathrs
import (
"fmt"
"syscall"
"unsafe"
)
/*
// TODO: Figure out if we need to add support for linking against libpathrs
// statically even if in dynamically linked builds in order to make
// packaging a bit easier (using "-Wl,-Bstatic -lpathrs -Wl,-Bdynamic" or
// "-l:pathrs.a").
#cgo pkg-config: pathrs
#include <pathrs.h>
// This is a workaround for unsafe.Pointer() not working for non-void pointers.
char *cast_ptr(void *ptr) { return ptr; }
*/
import "C"
func fetchError(errID C.int) error {
if errID >= C.__PATHRS_MAX_ERR_VALUE {
return nil
}
cErr := C.pathrs_errorinfo(errID)
defer C.pathrs_errorinfo_free(cErr)
var err error
if cErr != nil {
err = &Error{
errno: syscall.Errno(cErr.saved_errno),
description: C.GoString(cErr.description),
}
}
return err
}
// OpenRoot wraps pathrs_open_root.
func OpenRoot(path string) (uintptr, error) {
cPath := C.CString(path)
defer C.free(unsafe.Pointer(cPath))
fd := C.pathrs_open_root(cPath)
return uintptr(fd), fetchError(fd)
}
// Reopen wraps pathrs_reopen.
func Reopen(fd uintptr, flags int) (uintptr, error) {
newFd := C.pathrs_reopen(C.int(fd), C.int(flags))
return uintptr(newFd), fetchError(newFd)
}
// InRootResolve wraps pathrs_inroot_resolve.
func InRootResolve(rootFd uintptr, path string) (uintptr, error) {
cPath := C.CString(path)
defer C.free(unsafe.Pointer(cPath))
fd := C.pathrs_inroot_resolve(C.int(rootFd), cPath)
return uintptr(fd), fetchError(fd)
}
// InRootResolveNoFollow wraps pathrs_inroot_resolve_nofollow.
func InRootResolveNoFollow(rootFd uintptr, path string) (uintptr, error) {
cPath := C.CString(path)
defer C.free(unsafe.Pointer(cPath))
fd := C.pathrs_inroot_resolve_nofollow(C.int(rootFd), cPath)
return uintptr(fd), fetchError(fd)
}
// InRootOpen wraps pathrs_inroot_open.
func InRootOpen(rootFd uintptr, path string, flags int) (uintptr, error) {
cPath := C.CString(path)
defer C.free(unsafe.Pointer(cPath))
fd := C.pathrs_inroot_open(C.int(rootFd), cPath, C.int(flags))
return uintptr(fd), fetchError(fd)
}
// InRootReadlink wraps pathrs_inroot_readlink.
func InRootReadlink(rootFd uintptr, path string) (string, error) {
cPath := C.CString(path)
defer C.free(unsafe.Pointer(cPath))
size := 128
for {
linkBuf := make([]byte, size)
n := C.pathrs_inroot_readlink(C.int(rootFd), cPath, C.cast_ptr(unsafe.Pointer(&linkBuf[0])), C.ulong(len(linkBuf)))
switch {
case int(n) < C.__PATHRS_MAX_ERR_VALUE:
return "", fetchError(n)
case int(n) <= len(linkBuf):
return string(linkBuf[:int(n)]), nil
default:
// The contents were truncated. Unlike readlinkat, pathrs returns
// the size of the link when it checked. So use the returned size
// as a basis for the reallocated size (but in order to avoid a DoS
// where a magic-link is growing by a single byte each iteration,
// make sure we are a fair bit larger).
size += int(n)
}
}
}
// InRootRmdir wraps pathrs_inroot_rmdir.
func InRootRmdir(rootFd uintptr, path string) error {
cPath := C.CString(path)
defer C.free(unsafe.Pointer(cPath))
err := C.pathrs_inroot_rmdir(C.int(rootFd), cPath)
return fetchError(err)
}
// InRootUnlink wraps pathrs_inroot_unlink.
func InRootUnlink(rootFd uintptr, path string) error {
cPath := C.CString(path)
defer C.free(unsafe.Pointer(cPath))
err := C.pathrs_inroot_unlink(C.int(rootFd), cPath)
return fetchError(err)
}
// InRootRemoveAll wraps pathrs_inroot_remove_all.
func InRootRemoveAll(rootFd uintptr, path string) error {
cPath := C.CString(path)
defer C.free(unsafe.Pointer(cPath))
err := C.pathrs_inroot_remove_all(C.int(rootFd), cPath)
return fetchError(err)
}
// InRootCreat wraps pathrs_inroot_creat.
func InRootCreat(rootFd uintptr, path string, flags int, mode uint32) (uintptr, error) {
cPath := C.CString(path)
defer C.free(unsafe.Pointer(cPath))
fd := C.pathrs_inroot_creat(C.int(rootFd), cPath, C.int(flags), C.uint(mode))
return uintptr(fd), fetchError(fd)
}
// InRootRename wraps pathrs_inroot_rename.
func InRootRename(rootFd uintptr, src, dst string, flags uint) error {
cSrc := C.CString(src)
defer C.free(unsafe.Pointer(cSrc))
cDst := C.CString(dst)
defer C.free(unsafe.Pointer(cDst))
err := C.pathrs_inroot_rename(C.int(rootFd), cSrc, cDst, C.uint(flags))
return fetchError(err)
}
// InRootMkdir wraps pathrs_inroot_mkdir.
func InRootMkdir(rootFd uintptr, path string, mode uint32) error {
cPath := C.CString(path)
defer C.free(unsafe.Pointer(cPath))
err := C.pathrs_inroot_mkdir(C.int(rootFd), cPath, C.uint(mode))
return fetchError(err)
}
// InRootMkdirAll wraps pathrs_inroot_mkdir_all.
func InRootMkdirAll(rootFd uintptr, path string, mode uint32) (uintptr, error) {
cPath := C.CString(path)
defer C.free(unsafe.Pointer(cPath))
fd := C.pathrs_inroot_mkdir_all(C.int(rootFd), cPath, C.uint(mode))
return uintptr(fd), fetchError(fd)
}
// InRootMknod wraps pathrs_inroot_mknod.
func InRootMknod(rootFd uintptr, path string, mode uint32, dev uint64) error {
cPath := C.CString(path)
defer C.free(unsafe.Pointer(cPath))
err := C.pathrs_inroot_mknod(C.int(rootFd), cPath, C.uint(mode), C.dev_t(dev))
return fetchError(err)
}
// InRootSymlink wraps pathrs_inroot_symlink.
func InRootSymlink(rootFd uintptr, path, target string) error {
cPath := C.CString(path)
defer C.free(unsafe.Pointer(cPath))
cTarget := C.CString(target)
defer C.free(unsafe.Pointer(cTarget))
err := C.pathrs_inroot_symlink(C.int(rootFd), cPath, cTarget)
return fetchError(err)
}
// InRootHardlink wraps pathrs_inroot_hardlink.
func InRootHardlink(rootFd uintptr, path, target string) error {
cPath := C.CString(path)
defer C.free(unsafe.Pointer(cPath))
cTarget := C.CString(target)
defer C.free(unsafe.Pointer(cTarget))
err := C.pathrs_inroot_hardlink(C.int(rootFd), cPath, cTarget)
return fetchError(err)
}
// ProcBase is pathrs_proc_base_t (uint64_t).
type ProcBase C.pathrs_proc_base_t
// FIXME: We need to open-code the constants because CGo unfortunately will
// implicitly convert any non-literal constants (i.e. those resolved using gcc)
// to signed integers. See <https://github.com/golang/go/issues/39136> for some
// more information on the underlying issue.
const (
// ProcRoot is PATHRS_PROC_ROOT.
ProcRoot ProcBase = 0xFFFF_FFFE_7072_6F63 // C.PATHRS_PROC_ROOT
// ProcSelf is PATHRS_PROC_SELF.
ProcSelf ProcBase = 0xFFFF_FFFE_091D_5E1F // C.PATHRS_PROC_SELF
// ProcThreadSelf is PATHRS_PROC_THREAD_SELF.
ProcThreadSelf ProcBase = 0xFFFF_FFFE_3EAD_5E1F // C.PATHRS_PROC_THREAD_SELF
// ProcBaseTypeMask is __PATHRS_PROC_TYPE_MASK.
ProcBaseTypeMask ProcBase = 0xFFFF_FFFF_0000_0000 // C.__PATHRS_PROC_TYPE_MASK
// ProcBaseTypePid is __PATHRS_PROC_TYPE_PID.
ProcBaseTypePid ProcBase = 0x8000_0000_0000_0000 // C.__PATHRS_PROC_TYPE_PID
// ProcDefaultRootFd is PATHRS_PROC_DEFAULT_ROOTFD.
ProcDefaultRootFd = -int(syscall.EBADF) // C.PATHRS_PROC_DEFAULT_ROOTFD
)
func assertEqual[T comparable](a, b T, msg string) {
if a != b {
panic(fmt.Sprintf("%s ((%T) %#v != (%T) %#v)", msg, a, a, b, b))
}
}
// Verify that the values above match the actual C values. Unfortunately, Go
// only allows us to forcefully cast int64 to uint64 if you use a temporary
// variable, which means we cannot do it in a const context and thus need to do
// it at runtime (even though it is a check that fundamentally could be done at
// compile-time)...
func init() {
var (
actualProcRoot int64 = C.PATHRS_PROC_ROOT
actualProcSelf int64 = C.PATHRS_PROC_SELF
actualProcThreadSelf int64 = C.PATHRS_PROC_THREAD_SELF
)
assertEqual(ProcRoot, ProcBase(actualProcRoot), "PATHRS_PROC_ROOT")
assertEqual(ProcSelf, ProcBase(actualProcSelf), "PATHRS_PROC_SELF")
assertEqual(ProcThreadSelf, ProcBase(actualProcThreadSelf), "PATHRS_PROC_THREAD_SELF")
var (
actualProcBaseTypeMask uint64 = C.__PATHRS_PROC_TYPE_MASK
actualProcBaseTypePid uint64 = C.__PATHRS_PROC_TYPE_PID
)
assertEqual(ProcBaseTypeMask, ProcBase(actualProcBaseTypeMask), "__PATHRS_PROC_TYPE_MASK")
assertEqual(ProcBaseTypePid, ProcBase(actualProcBaseTypePid), "__PATHRS_PROC_TYPE_PID")
assertEqual(ProcDefaultRootFd, int(C.PATHRS_PROC_DEFAULT_ROOTFD), "PATHRS_PROC_DEFAULT_ROOTFD")
}
// ProcPid reimplements the PROC_PID(x) conversion.
func ProcPid(pid uint32) ProcBase { return ProcBaseTypePid | ProcBase(pid) }
// ProcOpenat wraps pathrs_proc_openat.
func ProcOpenat(procRootFd int, base ProcBase, path string, flags int) (uintptr, error) {
cBase := C.pathrs_proc_base_t(base)
cPath := C.CString(path)
defer C.free(unsafe.Pointer(cPath))
fd := C.pathrs_proc_openat(C.int(procRootFd), cBase, cPath, C.int(flags))
return uintptr(fd), fetchError(fd)
}
// ProcReadlinkat wraps pathrs_proc_readlinkat.
func ProcReadlinkat(procRootFd int, base ProcBase, path string) (string, error) {
// TODO: See if we can unify this code with InRootReadlink.
cBase := C.pathrs_proc_base_t(base)
cPath := C.CString(path)
defer C.free(unsafe.Pointer(cPath))
size := 128
for {
linkBuf := make([]byte, size)
n := C.pathrs_proc_readlinkat(
C.int(procRootFd), cBase, cPath,
C.cast_ptr(unsafe.Pointer(&linkBuf[0])), C.ulong(len(linkBuf)))
switch {
case int(n) < C.__PATHRS_MAX_ERR_VALUE:
return "", fetchError(n)
case int(n) <= len(linkBuf):
return string(linkBuf[:int(n)]), nil
default:
// The contents were truncated. Unlike readlinkat, pathrs returns
// the size of the link at the time of the call. So use the returned size
// as a basis for the reallocated size (but in order to avoid a DoS
// where a magic-link is growing by a single byte each iteration,
// make sure we are a fair bit larger).
size += int(n)
}
}
}
// ProcfsOpenHow is pathrs_procfs_open_how (struct).
type ProcfsOpenHow C.pathrs_procfs_open_how
const (
// ProcfsNewUnmasked is PATHRS_PROCFS_NEW_UNMASKED.
ProcfsNewUnmasked = C.PATHRS_PROCFS_NEW_UNMASKED
)
// Flags returns a pointer to the internal flags field to allow other packages
// to modify structure fields that are internal due to Go's visibility model.
func (how *ProcfsOpenHow) Flags() *C.uint64_t { return &how.flags }
// ProcfsOpen is pathrs_procfs_open (sizeof(*how) is passed automatically).
func ProcfsOpen(how *ProcfsOpenHow) (uintptr, error) {
fd := C.pathrs_procfs_open((*C.pathrs_procfs_open_how)(how), C.size_t(unsafe.Sizeof(*how)))
return uintptr(fd), fetchError(fd)
}

View File

@@ -0,0 +1,246 @@
//go:build linux
// SPDX-License-Identifier: MPL-2.0
/*
* libpathrs: safe path resolution on Linux
* Copyright (C) 2019-2025 Aleksa Sarai <cyphar@cyphar.com>
* Copyright (C) 2019-2025 SUSE LLC
*
* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at https://mozilla.org/MPL/2.0/.
*/
// Package procfs provides a safe API for operating on /proc on Linux.
package procfs
import (
"os"
"runtime"
"cyphar.com/go-pathrs/internal/fdutils"
"cyphar.com/go-pathrs/internal/libpathrs"
)
// ProcBase is used with [ProcReadlink] and related functions to indicate what
// /proc subpath path operations should be done relative to.
type ProcBase struct {
inner libpathrs.ProcBase
}
var (
// ProcRoot indicates to use /proc. Note that this mode may be more
// expensive because we have to take steps to try to avoid leaking unmasked
// procfs handles, so you should use [ProcSelf] if you can.
ProcRoot = ProcBase{inner: libpathrs.ProcRoot}
// ProcSelf indicates to use /proc/self. For most programs, this is the
// standard choice.
ProcSelf = ProcBase{inner: libpathrs.ProcSelf}
// ProcThreadSelf indicates to use /proc/thread-self. In multi-threaded
// programs where one thread has a different CLONE_FS, it is possible for
// /proc/self to point to the wrong thread and so /proc/thread-self may be
// necessary.
ProcThreadSelf = ProcBase{inner: libpathrs.ProcThreadSelf}
)
// ProcPid returns a ProcBase which indicates to use /proc/$pid for the given
// PID (or TID). Be aware that due to PID recycling, using this is generally
// not safe except in certain circumstances. Namely:
//
// - PID 1 (the init process), as that PID cannot ever get recycled.
// - Your current PID (though you should just use [ProcSelf]).
// - Your current TID if you have used [runtime.LockOSThread] (though you
// should just use [ProcThreadSelf]).
// - PIDs of child processes (as long as you are sure that no other part of
// your program incorrectly catches or ignores SIGCHLD, and that you do it
// *before* you call wait(2) or any equivalent method that could reap
// zombies).
func ProcPid(pid int) ProcBase {
if pid < 0 || pid >= 1<<31 {
panic("invalid ProcBasePid value") // TODO: should this be an error?
}
return ProcBase{inner: libpathrs.ProcPid(uint32(pid))}
}
// ThreadCloser is a callback that needs to be called when you are done
// operating on an [os.File] fetched using [Handle.OpenThreadSelf].
//
// [os.File]: https://pkg.go.dev/os#File
type ThreadCloser func()
// Handle is a wrapper around an *os.File handle to "/proc", which can be
// used to do further procfs-related operations in a safe way.
type Handle struct {
inner *os.File
}
// Close releases all internal resources for this [Handle].
//
// Note that if the handle is actually the global cached handle, this operation
// is a no-op.
func (proc *Handle) Close() error {
var err error
if proc.inner != nil {
err = proc.inner.Close()
}
return err
}
// OpenOption is a configuration function passed as an argument to [Open].
type OpenOption func(*libpathrs.ProcfsOpenHow) error
// UnmaskedProcRoot can be passed to [Open] to request an unmasked procfs
// handle be created.
//
// proc, err := procfs.Open(procfs.UnmaskedProcRoot)
func UnmaskedProcRoot(how *libpathrs.ProcfsOpenHow) error {
*how.Flags() |= libpathrs.ProcfsNewUnmasked
return nil
}
// Open creates a new [Handle] to a safe "/proc", based on the passed
// configuration options (in the form of a series of [OpenOption]s).
func Open(opts ...OpenOption) (*Handle, error) {
var how libpathrs.ProcfsOpenHow
for _, opt := range opts {
if err := opt(&how); err != nil {
return nil, err
}
}
fd, err := libpathrs.ProcfsOpen(&how)
if err != nil {
return nil, err
}
var procFile *os.File
if int(fd) >= 0 {
procFile = os.NewFile(fd, "/proc")
}
// TODO: Check that fd == PATHRS_PROC_DEFAULT_ROOTFD in the <0 case?
return &Handle{inner: procFile}, nil
}
// TODO: Switch to something fdutils.WithFileFd-like.
func (proc *Handle) fd() int {
if proc.inner != nil {
return int(proc.inner.Fd())
}
return libpathrs.ProcDefaultRootFd
}
// TODO: Should we expose open?
func (proc *Handle) open(base ProcBase, path string, flags int) (_ *os.File, Closer ThreadCloser, Err error) {
var closer ThreadCloser
if base == ProcThreadSelf {
runtime.LockOSThread()
closer = runtime.UnlockOSThread
}
defer func() {
if closer != nil && Err != nil {
closer()
Closer = nil
}
}()
fd, err := libpathrs.ProcOpenat(proc.fd(), base.inner, path, flags)
if err != nil {
return nil, nil, err
}
file, err := fdutils.MkFile(fd)
return file, closer, err
}
// OpenRoot safely opens a given path from inside /proc/.
//
// This function must only be used for accessing global information from procfs
// (such as /proc/cpuinfo) or information about other processes (such as
// /proc/1). Accessing your own process information should be done using
// [Handle.OpenSelf] or [Handle.OpenThreadSelf].
func (proc *Handle) OpenRoot(path string, flags int) (*os.File, error) {
file, closer, err := proc.open(ProcRoot, path, flags)
if closer != nil {
// should not happen
panic("non-zero closer returned from procOpen(ProcRoot)")
}
return file, err
}
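// exampleCPUInfo is an illustrative sketch, not part of the upstream API:
// create a procfs handle and read a global file such as /proc/cpuinfo
// through it. UnmaskedProcRoot is only needed if the default (possibly
// masked) handle hides the paths you want.
func exampleCPUInfo() (*os.File, error) {
	proc, err := Open(UnmaskedProcRoot)
	if err != nil {
		return nil, err
	}
	defer proc.Close()
	return proc.OpenRoot("cpuinfo", os.O_RDONLY)
}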
// OpenSelf safely opens a given path from inside /proc/self/.
//
// This method is recommended for getting process information about the current
// process for almost all Go processes *except* for cases where there are
// [runtime.LockOSThread] threads that have changed some aspect of their state
// (such as through unshare(CLONE_FS) or changing namespaces).
//
// For such heterogeneous processes, /proc/self may refer to a task
// that has different state from the current goroutine and so it may be
// preferable to use [Handle.OpenThreadSelf]. The same is true if a user
// really wants to inspect the current OS thread's information (such as
// /proc/thread-self/stack or /proc/thread-self/status, which are always
// per-thread).
//
// Unlike [Handle.OpenThreadSelf], this method does not involve locking
// the goroutine to the current OS thread and so is simpler to use and
// theoretically has slightly less overhead.
//
// [runtime.LockOSThread]: https://pkg.go.dev/runtime#LockOSThread
func (proc *Handle) OpenSelf(path string, flags int) (*os.File, error) {
file, closer, err := proc.open(ProcSelf, path, flags)
if closer != nil {
// should not happen
panic("non-zero closer returned from procOpen(ProcSelf)")
}
return file, err
}
// OpenPid safely opens a given path from inside /proc/$pid/, where pid can be
// either a PID or TID.
//
// This is effectively equivalent to calling [Handle.OpenRoot] with the
// pid prefixed to the subpath.
//
// Be aware that due to PID recycling, using this is generally not safe except
// in certain circumstances. See the documentation of [ProcPid] for more
// details.
func (proc *Handle) OpenPid(pid int, path string, flags int) (*os.File, error) {
file, closer, err := proc.open(ProcPid(pid), path, flags)
if closer != nil {
// should not happen
panic("non-zero closer returned from procOpen(ProcPidOpen)")
}
return file, err
}
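// examplePidOneStatus is an illustrative sketch, not part of the upstream
// API: PID 1 is one of the few PIDs that is always safe to pass to OpenPid,
// since the init process can never be recycled.
func examplePidOneStatus(proc *Handle) (*os.File, error) {
	return proc.OpenPid(1, "status", os.O_RDONLY)
}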
// OpenThreadSelf safely opens a given path from inside /proc/thread-self/.
//
// Most Go processes have homogeneous threads (all threads have most of the
// same kernel state such as CLONE_FS) and so [Handle.OpenSelf] is
// preferable for most users.
//
// For heterogeneous threads, or users that actually want thread-specific
// information (such as /proc/thread-self/stack or /proc/thread-self/status),
// this method is necessary.
//
// Because Go can change the running OS thread of your goroutine without notice
// (and then subsequently kill the old thread), this method will lock the
// current goroutine to the OS thread (with [runtime.LockOSThread]) and the
// caller is responsible for unlocking the OS thread with the
// [ThreadCloser] callback once they are done using the returned file. This
// callback MUST be called AFTER you have finished using the returned
// [os.File]. This callback is completely separate to [os.File.Close], so it
// must be called regardless of how you close the handle.
//
// [runtime.LockOSThread]: https://pkg.go.dev/runtime#LockOSThread
// [os.File]: https://pkg.go.dev/os#File
// [os.File.Close]: https://pkg.go.dev/os#File.Close
func (proc *Handle) OpenThreadSelf(path string, flags int) (*os.File, ThreadCloser, error) {
return proc.open(ProcThreadSelf, path, flags)
}
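// exampleThreadSelf is an illustrative sketch, not part of the upstream API,
// showing the OpenThreadSelf contract: the ThreadCloser keeps the goroutine
// pinned to its OS thread and must only be called once the returned file is
// no longer in use.
func exampleThreadSelf(proc *Handle) error {
	f, closer, err := proc.OpenThreadSelf("status", os.O_RDONLY)
	if err != nil {
		return err
	}
	defer closer() // runs last: unlock the OS thread only after f is closed
	defer f.Close()
	// ... use f while the goroutine is still locked to this thread ...
	return nil
}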
// Readlink safely reads the contents of a symlink from the given procfs base.
//
// This is effectively equivalent to doing an Open*(O_PATH|O_NOFOLLOW) of the
// path and then doing unix.Readlinkat(fd, ""), but with the benefit that
// thread locking is not necessary for [ProcThreadSelf].
func (proc *Handle) Readlink(base ProcBase, path string) (string, error) {
return libpathrs.ProcReadlinkat(proc.fd(), base.inner, path)
}
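// exampleSelfExe is an illustrative sketch, not part of the upstream API:
// read the /proc/self/exe magic-link. Readlink never requires thread
// locking, even when used with ProcThreadSelf.
func exampleSelfExe(proc *Handle) (string, error) {
	return proc.Readlink(ProcSelf, "exe")
}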

367
src/runtime/vendor/cyphar.com/go-pathrs/root_linux.go generated vendored Normal file
View File

@@ -0,0 +1,367 @@
//go:build linux
// SPDX-License-Identifier: MPL-2.0
/*
* libpathrs: safe path resolution on Linux
* Copyright (C) 2019-2025 Aleksa Sarai <cyphar@cyphar.com>
* Copyright (C) 2019-2025 SUSE LLC
*
* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at https://mozilla.org/MPL/2.0/.
*/
package pathrs
import (
"errors"
"fmt"
"os"
"syscall"
"cyphar.com/go-pathrs/internal/fdutils"
"cyphar.com/go-pathrs/internal/libpathrs"
)
// Root is a handle to the root of a directory tree to resolve within. The only
// purpose of this "root handle" is to perform operations within the directory
// tree, or to get a [Handle] to inodes within the directory tree.
//
// At time of writing, it is considered a *VERY BAD IDEA* to open a [Root]
// inside a possibly-attacker-controlled directory tree. While we do have
// protections that should defend against it, it's far more dangerous than just
// opening a directory tree which is not inside a potentially-untrusted
// directory.
type Root struct {
inner *os.File
}
// OpenRoot creates a new [Root] handle to the directory at the given path.
func OpenRoot(path string) (*Root, error) {
fd, err := libpathrs.OpenRoot(path)
if err != nil {
return nil, err
}
file, err := fdutils.MkFile(fd)
if err != nil {
return nil, err
}
return &Root{inner: file}, nil
}
// RootFromFile creates a new [Root] handle from an [os.File] referencing a
// directory. The provided file will be duplicated, so the original file should
// still be closed by the caller.
//
// This is effectively the inverse operation of [Root.IntoFile].
//
// [os.File]: https://pkg.go.dev/os#File
func RootFromFile(file *os.File) (*Root, error) {
newFile, err := fdutils.DupFile(file)
if err != nil {
return nil, fmt.Errorf("duplicate root fd: %w", err)
}
return &Root{inner: newFile}, nil
}
// Resolve resolves the given path within the [Root]'s directory tree, and
// returns a [Handle] to the resolved path. The path must already exist,
// otherwise an error will occur.
//
// All symlinks (including trailing symlinks) are followed, but they are
// resolved within the rootfs. If you wish to open a handle to the symlink
// itself, use [ResolveNoFollow].
func (r *Root) Resolve(path string) (*Handle, error) {
return fdutils.WithFileFd(r.inner, func(rootFd uintptr) (*Handle, error) {
handleFd, err := libpathrs.InRootResolve(rootFd, path)
if err != nil {
return nil, err
}
handleFile, err := fdutils.MkFile(handleFd)
if err != nil {
return nil, err
}
return &Handle{inner: handleFile}, nil
})
}
// ResolveNoFollow is effectively an O_NOFOLLOW version of [Resolve]. Their
// behaviour is identical, except that *trailing* symlinks will not be
// followed. If the final component is a trailing symlink, an O_PATH|O_NOFOLLOW
// handle to the symlink itself is returned.
func (r *Root) ResolveNoFollow(path string) (*Handle, error) {
return fdutils.WithFileFd(r.inner, func(rootFd uintptr) (*Handle, error) {
handleFd, err := libpathrs.InRootResolveNoFollow(rootFd, path)
if err != nil {
return nil, err
}
handleFile, err := fdutils.MkFile(handleFd)
if err != nil {
return nil, err
}
return &Handle{inner: handleFile}, nil
})
}
// Open is effectively shorthand for [Resolve] followed by [Handle.Open], but
// can be slightly more efficient (it reduces CGo overhead and the number of
// syscalls used when using the openat2-based resolver) and is arguably more
// ergonomic to use.
//
// This is effectively equivalent to [os.Open].
//
// [os.Open]: https://pkg.go.dev/os#Open
func (r *Root) Open(path string) (*os.File, error) {
return r.OpenFile(path, os.O_RDONLY)
}
// OpenFile is effectively shorthand for [Resolve] followed by
// [Handle.OpenFile], but can be slightly more efficient (it reduces CGo
// overhead and the number of syscalls used when using the openat2-based
// resolver) and is arguably more ergonomic to use.
//
// However, if flags contains os.O_NOFOLLOW and the path is a symlink, then
// OpenFile's behaviour will match that of openat2. In most cases an error will
// be returned, but if os.O_PATH is provided along with os.O_NOFOLLOW then a
// file equivalent to [ResolveNoFollow] will be returned instead.
//
// This is effectively equivalent to [os.OpenFile], except that os.O_CREAT is
// not supported.
//
// [os.OpenFile]: https://pkg.go.dev/os#OpenFile
func (r *Root) OpenFile(path string, flags int) (*os.File, error) {
return fdutils.WithFileFd(r.inner, func(rootFd uintptr) (*os.File, error) {
fd, err := libpathrs.InRootOpen(rootFd, path, flags)
if err != nil {
return nil, err
}
return fdutils.MkFile(fd)
})
}
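// exampleOpenInRoot is an illustrative sketch, not part of the upstream API;
// the root and file paths are hypothetical. It opens a file strictly inside
// an untrusted directory tree.
func exampleOpenInRoot() (*os.File, error) {
	root, err := OpenRoot("/srv/rootfs")
	if err != nil {
		return nil, err
	}
	defer root.Close()
	// Any symlinks in the path are resolved, but only within /srv/rootfs.
	return root.OpenFile("etc/hostname", os.O_RDONLY)
}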
// Create creates a file within the [Root]'s directory tree at the given path,
// and returns a handle to the file. The provided mode is used for the new file
// (the process's umask applies).
//
// Unlike [os.Create], if the file already exists an error is returned rather
// than the file being opened and truncated.
//
// [os.Create]: https://pkg.go.dev/os#Create
func (r *Root) Create(path string, flags int, mode os.FileMode) (*os.File, error) {
unixMode, err := toUnixMode(mode, false)
if err != nil {
return nil, err
}
return fdutils.WithFileFd(r.inner, func(rootFd uintptr) (*os.File, error) {
handleFd, err := libpathrs.InRootCreat(rootFd, path, flags, unixMode)
if err != nil {
return nil, err
}
return fdutils.MkFile(handleFd)
})
}
// Rename renames (moves) src to dst within a [Root]'s directory tree. The
// flags argument is identical to the RENAME_* flags to the renameat2(2)
// system call.
func (r *Root) Rename(src, dst string, flags uint) error {
_, err := fdutils.WithFileFd(r.inner, func(rootFd uintptr) (struct{}, error) {
err := libpathrs.InRootRename(rootFd, src, dst, flags)
return struct{}{}, err
})
return err
}
// RemoveDir removes the named empty directory within a [Root]'s directory
// tree.
func (r *Root) RemoveDir(path string) error {
_, err := fdutils.WithFileFd(r.inner, func(rootFd uintptr) (struct{}, error) {
err := libpathrs.InRootRmdir(rootFd, path)
return struct{}{}, err
})
return err
}
// RemoveFile removes the named file within a [Root]'s directory tree.
func (r *Root) RemoveFile(path string) error {
_, err := fdutils.WithFileFd(r.inner, func(rootFd uintptr) (struct{}, error) {
err := libpathrs.InRootUnlink(rootFd, path)
return struct{}{}, err
})
return err
}
// Remove removes the named file or (empty) directory within a [Root]'s
// directory tree.
//
// This is effectively equivalent to [os.Remove].
//
// [os.Remove]: https://pkg.go.dev/os#Remove
func (r *Root) Remove(path string) error {
// In order to match os.Remove's implementation we need to also do both
// syscalls unconditionally and adjust the error based on whether
// pathrs_inroot_rmdir() returned ENOTDIR.
unlinkErr := r.RemoveFile(path)
if unlinkErr == nil {
return nil
}
rmdirErr := r.RemoveDir(path)
if rmdirErr == nil {
return nil
}
// Both failed, adjust the error in the same way that os.Remove does.
err := rmdirErr
if errors.Is(err, syscall.ENOTDIR) {
err = unlinkErr
}
return err
}
// RemoveAll recursively deletes a path and all of its children.
//
// This is effectively equivalent to [os.RemoveAll].
//
// [os.RemoveAll]: https://pkg.go.dev/os#RemoveAll
func (r *Root) RemoveAll(path string) error {
_, err := fdutils.WithFileFd(r.inner, func(rootFd uintptr) (struct{}, error) {
err := libpathrs.InRootRemoveAll(rootFd, path)
return struct{}{}, err
})
return err
}
// Mkdir creates a directory within a [Root]'s directory tree. The provided
// mode is used for the new directory (the process's umask applies).
//
// This is effectively equivalent to [os.Mkdir].
//
// [os.Mkdir]: https://pkg.go.dev/os#Mkdir
func (r *Root) Mkdir(path string, mode os.FileMode) error {
unixMode, err := toUnixMode(mode, false)
if err != nil {
return err
}
_, err = fdutils.WithFileFd(r.inner, func(rootFd uintptr) (struct{}, error) {
err := libpathrs.InRootMkdir(rootFd, path, unixMode)
return struct{}{}, err
})
return err
}
// MkdirAll creates a directory (and any parent path components if they don't
// exist) within a [Root]'s directory tree. The provided mode is used for any
// directories created by this function (the process's umask applies).
//
// This is effectively equivalent to [os.MkdirAll].
//
// [os.MkdirAll]: https://pkg.go.dev/os#MkdirAll
func (r *Root) MkdirAll(path string, mode os.FileMode) (*Handle, error) {
unixMode, err := toUnixMode(mode, false)
if err != nil {
return nil, err
}
return fdutils.WithFileFd(r.inner, func(rootFd uintptr) (*Handle, error) {
handleFd, err := libpathrs.InRootMkdirAll(rootFd, path, unixMode)
if err != nil {
return nil, err
}
handleFile, err := fdutils.MkFile(handleFd)
if err != nil {
return nil, err
}
return &Handle{inner: handleFile}, err
})
}
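// exampleMkdirAll is an illustrative sketch, not part of the upstream API;
// the paths are hypothetical and it assumes the Handle type (defined in
// another file of this package) exposes Close like Root does. It creates
// nested directories and then a file inside them.
func exampleMkdirAll(root *Root) (*os.File, error) {
	dir, err := root.MkdirAll("var/lib/app", 0o755)
	if err != nil {
		return nil, err
	}
	defer dir.Close() // assumed Handle.Close, mirroring Root.Close
	return root.Create("var/lib/app/state.json", os.O_WRONLY, 0o644)
}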
// Mknod creates a new device inode of the given type within a [Root]'s
// directory tree. The provided mode is used for the new inode (the
// process's umask applies).
//
// This is effectively equivalent to [unix.Mknod].
//
// [unix.Mknod]: https://pkg.go.dev/golang.org/x/sys/unix#Mknod
func (r *Root) Mknod(path string, mode os.FileMode, dev uint64) error {
unixMode, err := toUnixMode(mode, true)
if err != nil {
return err
}
_, err = fdutils.WithFileFd(r.inner, func(rootFd uintptr) (struct{}, error) {
err := libpathrs.InRootMknod(rootFd, path, unixMode, dev)
return struct{}{}, err
})
return err
}
// Symlink creates a symlink within a [Root]'s directory tree. The symlink is
// created at path and is a link to target.
//
// This is effectively equivalent to [os.Symlink].
//
// [os.Symlink]: https://pkg.go.dev/os#Symlink
func (r *Root) Symlink(path, target string) error {
_, err := fdutils.WithFileFd(r.inner, func(rootFd uintptr) (struct{}, error) {
err := libpathrs.InRootSymlink(rootFd, path, target)
return struct{}{}, err
})
return err
}
// Hardlink creates a hardlink within a [Root]'s directory tree. The hardlink
// is created at path and is a link to target. Both paths are within the
// [Root]'s directory tree (you cannot hardlink to a different [Root] or the
// host).
//
// This is effectively equivalent to [os.Link].
//
// [os.Link]: https://pkg.go.dev/os#Link
func (r *Root) Hardlink(path, target string) error {
_, err := fdutils.WithFileFd(r.inner, func(rootFd uintptr) (struct{}, error) {
err := libpathrs.InRootHardlink(rootFd, path, target)
return struct{}{}, err
})
return err
}
// Readlink returns the target of a symlink within a [Root]'s directory tree.
//
// This is effectively equivalent to [os.Readlink].
//
// [os.Readlink]: https://pkg.go.dev/os#Readlink
func (r *Root) Readlink(path string) (string, error) {
return fdutils.WithFileFd(r.inner, func(rootFd uintptr) (string, error) {
return libpathrs.InRootReadlink(rootFd, path)
})
}
// IntoFile unwraps the [Root] into its underlying [os.File].
//
// It is critical that you do not operate on this file descriptor yourself,
// because the security properties of libpathrs depend on users doing all
// relevant filesystem operations through libpathrs.
//
// This operation returns the internal [os.File] of the [Root] directly, so
// calling [Root.Close] will also close any copies of the returned [os.File].
// If you want to get an independent copy, use [Root.Clone] followed by
// [Root.IntoFile] on the cloned [Root].
//
// [os.File]: https://pkg.go.dev/os#File
func (r *Root) IntoFile() *os.File {
// TODO: Figure out if we really don't want to make a copy.
// TODO: We almost certainly want to clear r.inner here, but we can't do
// that easily atomically (we could use atomic.Value but that'll make
// things quite a bit uglier).
return r.inner
}
// Clone creates a copy of a [Root] handle, such that it has a separate
// lifetime to the original (while referring to the same underlying directory).
func (r *Root) Clone() (*Root, error) {
return RootFromFile(r.inner)
}
// Close frees all of the resources used by the [Root] handle.
func (r *Root) Close() error {
return r.inner.Close()
}

56
src/runtime/vendor/cyphar.com/go-pathrs/utils_linux.go generated vendored Normal file
View File

@@ -0,0 +1,56 @@
//go:build linux
// SPDX-License-Identifier: MPL-2.0
/*
* libpathrs: safe path resolution on Linux
* Copyright (C) 2019-2025 Aleksa Sarai <cyphar@cyphar.com>
* Copyright (C) 2019-2025 SUSE LLC
*
* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at https://mozilla.org/MPL/2.0/.
*/
package pathrs
import (
"fmt"
"os"
"golang.org/x/sys/unix"
)
//nolint:cyclop // this function needs to handle a lot of cases
func toUnixMode(mode os.FileMode, needsType bool) (uint32, error) {
sysMode := uint32(mode.Perm())
switch mode & os.ModeType { //nolint:exhaustive // we only care about ModeType bits
case 0:
if needsType {
sysMode |= unix.S_IFREG
}
case os.ModeDir:
sysMode |= unix.S_IFDIR
case os.ModeSymlink:
sysMode |= unix.S_IFLNK
case os.ModeCharDevice | os.ModeDevice:
sysMode |= unix.S_IFCHR
case os.ModeDevice:
sysMode |= unix.S_IFBLK
case os.ModeNamedPipe:
sysMode |= unix.S_IFIFO
case os.ModeSocket:
sysMode |= unix.S_IFSOCK
default:
return 0, fmt.Errorf("invalid mode filetype %+o", mode)
}
if mode&os.ModeSetuid != 0 {
sysMode |= unix.S_ISUID
}
if mode&os.ModeSetgid != 0 {
sysMode |= unix.S_ISGID
}
if mode&os.ModeSticky != 0 {
sysMode |= unix.S_ISVTX
}
return sysMode, nil
}
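// exampleMknodChar is an illustrative sketch, not part of the upstream API:
// create a character-device node inside a Root. toUnixMode maps
// os.ModeDevice|os.ModeCharDevice|0o666 to unix.S_IFCHR|0666 before the
// request reaches the libpathrs mknod wrapper; the 1:3 device numbers
// (/dev/null) are just an example.
func exampleMknodChar(root *Root) error {
	mode := os.ModeDevice | os.ModeCharDevice | 0o666
	return root.Mknod("dev/null", mode, unix.Mkdev(1, 3))
}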

View File

@@ -1,5 +1,5 @@
//go:build !darwin && !freebsd && !linux && !netbsd && !openbsd && !solaris && !windows && !zos
// +build !darwin,!freebsd,!linux,!netbsd,!openbsd,!solaris,!windows,!zos
//go:build !darwin && !freebsd && !linux && !netbsd && !openbsd && !windows && !zos
// +build !darwin,!freebsd,!linux,!netbsd,!openbsd,!windows,!zos
/*
Copyright The containerd Authors.

View File

@@ -31,6 +31,15 @@ func NewPty() (Console, string, error) {
if err != nil {
return nil, "", err
}
return NewPtyFromFile(f)
}
// NewPtyFromFile creates a new pty pair, just like [NewPty] except that the
// provided [os.File] is used as the master rather than automatically creating
// a new master from /dev/ptmx. The ownership of [os.File] is passed to the
// returned [Console], so the caller must be careful to not call Close on the
// underlying file.
func NewPtyFromFile(f File) (Console, string, error) {
slave, err := ptsname(f)
if err != nil {
return nil, "", err

View File

@@ -18,7 +18,6 @@ package console
import (
"fmt"
"os"
"golang.org/x/sys/unix"
)
@@ -30,12 +29,12 @@ const (
// unlockpt unlocks the slave pseudoterminal device corresponding to the master pseudoterminal referred to by f.
// unlockpt should be called before opening the slave side of a pty.
func unlockpt(f *os.File) error {
func unlockpt(f File) error {
return unix.IoctlSetPointerInt(int(f.Fd()), unix.TIOCPTYUNLK, 0)
}
// ptsname retrieves the name of the first available pts for the given master.
func ptsname(f *os.File) (string, error) {
func ptsname(f File) (string, error) {
n, err := unix.IoctlGetInt(int(f.Fd()), unix.TIOCPTYGNAME)
if err != nil {
return "", err

View File

@@ -21,7 +21,6 @@ package console
import (
"fmt"
"os"
"golang.org/x/sys/unix"
)
@@ -39,7 +38,7 @@ const (
// unlockpt unlocks the slave pseudoterminal device corresponding to the master pseudoterminal referred to by f.
// unlockpt should be called before opening the slave side of a pty.
func unlockpt(f *os.File) error {
func unlockpt(f File) error {
fd := C.int(f.Fd())
if _, err := C.unlockpt(fd); err != nil {
C.close(fd)
@@ -49,7 +48,7 @@ func unlockpt(f *os.File) error {
}
// ptsname retrieves the name of the first available pts for the given master.
func ptsname(f *os.File) (string, error) {
func ptsname(f File) (string, error) {
n, err := unix.IoctlGetInt(int(f.Fd()), unix.TIOCGPTN)
if err != nil {
return "", err

View File

@@ -21,7 +21,6 @@ package console
import (
"fmt"
"os"
"golang.org/x/sys/unix"
)
@@ -42,12 +41,12 @@ const (
// unlockpt unlocks the slave pseudoterminal device corresponding to the master pseudoterminal referred to by f.
// unlockpt should be called before opening the slave side of a pty.
func unlockpt(f *os.File) error {
func unlockpt(f File) error {
panic("unlockpt() support requires cgo.")
}
// ptsname retrieves the name of the first available pts for the given master.
func ptsname(f *os.File) (string, error) {
func ptsname(f File) (string, error) {
n, err := unix.IoctlGetInt(int(f.Fd()), unix.TIOCGPTN)
if err != nil {
return "", err

View File

@@ -18,7 +18,6 @@ package console
import (
"fmt"
"os"
"unsafe"
"golang.org/x/sys/unix"
@@ -31,7 +30,7 @@ const (
// unlockpt unlocks the slave pseudoterminal device corresponding to the master pseudoterminal referred to by f.
// unlockpt should be called before opening the slave side of a pty.
func unlockpt(f *os.File) error {
func unlockpt(f File) error {
var u int32
// XXX do not use unix.IoctlSetPointerInt here, see commit dbd69c59b81.
if _, _, err := unix.Syscall(unix.SYS_IOCTL, f.Fd(), unix.TIOCSPTLCK, uintptr(unsafe.Pointer(&u))); err != 0 {
@@ -41,7 +40,7 @@ func unlockpt(f *os.File) error {
}
// ptsname retrieves the name of the first available pts for the given master.
func ptsname(f *os.File) (string, error) {
func ptsname(f File) (string, error) {
var u uint32
// XXX do not use unix.IoctlGetInt here, see commit dbd69c59b81.
if _, _, err := unix.Syscall(unix.SYS_IOCTL, f.Fd(), unix.TIOCGPTN, uintptr(unsafe.Pointer(&u))); err != 0 {

View File

@@ -18,7 +18,6 @@ package console
import (
"bytes"
"os"
"golang.org/x/sys/unix"
)
@@ -31,12 +30,12 @@ const (
// unlockpt unlocks the slave pseudoterminal device corresponding to the master pseudoterminal referred to by f.
// unlockpt should be called before opening the slave side of a pty.
// This does not exist on NetBSD, it does not allocate controlling terminals on open
func unlockpt(f *os.File) error {
func unlockpt(f File) error {
return nil
}
// ptsname retrieves the name of the first available pts for the given master.
func ptsname(f *os.File) (string, error) {
func ptsname(f File) (string, error) {
ptm, err := unix.IoctlGetPtmget(int(f.Fd()), unix.TIOCPTSNAME)
if err != nil {
return "", err

View File

@@ -20,8 +20,6 @@
package console
import (
"os"
"golang.org/x/sys/unix"
)
@@ -34,7 +32,7 @@ const (
)
// ptsname retrieves the name of the first available pts for the given master.
func ptsname(f *os.File) (string, error) {
func ptsname(f File) (string, error) {
ptspath, err := C.ptsname(C.int(f.Fd()))
if err != nil {
return "", err
@@ -44,7 +42,7 @@ func ptsname(f *os.File) (string, error) {
// unlockpt unlocks the slave pseudoterminal device corresponding to the master pseudoterminal referred to by f.
// unlockpt should be called before opening the slave side of a pty.
func unlockpt(f *os.File) error {
func unlockpt(f File) error {
if _, err := C.grantpt(C.int(f.Fd())); err != nil {
return err
}

View File

@@ -29,8 +29,6 @@
package console
import (
"os"
"golang.org/x/sys/unix"
)
@@ -39,10 +37,10 @@ const (
cmdTcSet = unix.TIOCSETA
)
func ptsname(f *os.File) (string, error) {
func ptsname(f File) (string, error) {
panic("ptsname() support requires cgo.")
}
func unlockpt(f *os.File) error {
func unlockpt(f File) error {
panic("unlockpt() support requires cgo.")
}

View File

@@ -17,7 +17,6 @@
package console
import (
"os"
"strings"
"golang.org/x/sys/unix"
@@ -29,11 +28,11 @@ const (
)
// unlockpt is a no-op on zos.
func unlockpt(_ *os.File) error {
func unlockpt(File) error {
return nil
}
// ptsname retrieves the name of the first available pts for the given master.
func ptsname(f *os.File) (string, error) {
func ptsname(f File) (string, error) {
return "/dev/ttyp" + strings.TrimPrefix(f.Name(), "/dev/ptyp"), nil
}

View File

@@ -1,7 +1,7 @@
linters:
enable:
- depguard # Checks for imports that shouldn't be used.
- exportloopref # Checks for pointers to enclosing loop variables
- depguard # Checks for dependencies that should not be (re)introduced. See "linter-settings" for further details.
# - copyloopvar # Checks for loop variable copies in Go 1.22+
- gofmt
- goimports
- gosec

View File

@@ -15,7 +15,7 @@ This doc includes:
To build the `containerd` daemon, and the `ctr` simple test client, the following build system dependencies are required:
* Go 1.22.x or above
* Go 1.23.x or above
* Protoc 3.x compiler and headers (download at the [Google protobuf releases page](https://github.com/protocolbuffers/protobuf/releases))
* Btrfs headers and libraries for your distribution. Note that building the btrfs driver can be disabled via the build tag `no_btrfs`, removing this dependency.

View File

@@ -17,7 +17,7 @@
# Vagrantfile for Fedora and EL
Vagrant.configure("2") do |config|
config.vm.box = ENV["BOX"] ? ENV["BOX"].split("@")[0] : "fedora/39-cloud-base"
config.vm.box = ENV["BOX"] ? ENV["BOX"].split("@")[0] : "fedora/43-cloud-base"
# BOX_VERSION is deprecated. Use "BOX=<BOX>@<BOX_VERSION>".
config.vm.box_version = ENV["BOX_VERSION"] || (ENV["BOX"].split("@")[1] if ENV["BOX"])
@@ -35,7 +35,10 @@ Vagrant.configure("2") do |config|
v.memory = memory
v.cpus = cpus
v.machine_virtual_size = disk_size
v.loader = "/usr/share/OVMF/OVMF_CODE.fd"
# https://github.com/vagrant-libvirt/vagrant-libvirt/issues/1725#issuecomment-1454058646
# Needs `sudo cp /usr/share/OVMF/OVMF_VARS_4M.fd /var/lib/libvirt/qemu/nvram/`
v.loader = '/usr/share/OVMF/OVMF_CODE_4M.fd'
v.nvram = '/var/lib/libvirt/qemu/nvram/OVMF_VARS_4M.fd'
end
config.vm.synced_folder ".", "/vagrant", type: "rsync"
@@ -104,7 +107,7 @@ EOF
config.vm.provision "install-golang", type: "shell", run: "once" do |sh|
sh.upload_path = "/tmp/vagrant-install-golang"
sh.env = {
'GO_VERSION': ENV['GO_VERSION'] || "1.23.7",
'GO_VERSION': ENV['GO_VERSION'] || "1.24.9",
}
sh.inline = <<~SHELL
#!/usr/bin/env bash
@@ -246,6 +249,7 @@ EOF
sh.upload_path = "/tmp/test-integration"
sh.env = {
'RUNC_FLAVOR': ENV['RUNC_FLAVOR'] || "runc",
'RUNC_RUNTIME': ENV['RUNC_RUNTIME'] || "io.containerd.runc.v2",
'GOTEST': ENV['GOTEST'] || "go test",
'GOTESTSUM_JUNITFILE': ENV['GOTESTSUM_JUNITFILE'],
'GOTESTSUM_JSONFILE': ENV['GOTESTSUM_JSONFILE'],
@@ -258,7 +262,7 @@ EOF
rm -rf /var/lib/containerd-test /run/containerd-test
cd ${GOPATH}/src/github.com/containerd/containerd
go test -v -count=1 -race ./metrics/cgroups
make integration EXTRA_TESTFLAGS="-timeout 15m -no-criu -test.v" TEST_RUNTIME=io.containerd.runc.v2 RUNC_FLAVOR=$RUNC_FLAVOR
make integration EXTRA_TESTFLAGS="-timeout 15m -no-criu -test.v" TEST_RUNTIME=$RUNC_RUNTIME RUNC_FLAVOR=$RUNC_FLAVOR
SHELL
end
@@ -269,6 +273,7 @@ EOF
sh.upload_path = "/tmp/test-cri-integration"
sh.env = {
'GOTEST': ENV['GOTEST'] || "go test",
'RUNC_RUNTIME': ENV['RUNC_RUNTIME'] || "io.containerd.runc.v2",
'GOTESTSUM_JUNITFILE': ENV['GOTESTSUM_JUNITFILE'],
'GOTESTSUM_JSONFILE': ENV['GOTESTSUM_JSONFILE'],
'GITHUB_WORKSPACE': '',
@@ -287,7 +292,7 @@ EOF
# cri-integration.sh executes containerd from ./bin, not from $PATH .
make BUILDTAGS="seccomp selinux no_aufs no_btrfs no_devmapper no_zfs" binaries bin/cri-integration.test
chcon -v -t container_runtime_exec_t ./bin/{containerd,containerd-shim*}
CONTAINERD_RUNTIME=io.containerd.runc.v2 ./script/test/cri-integration.sh
CONTAINERD_RUNTIME=$RUNC_RUNTIME ./script/test/cri-integration.sh
cleanup
SHELL
end

View File

@@ -45,6 +45,8 @@ const (
Gzip
// Zstd is zstd compression algorithm.
Zstd
// Unknown is used when a plugin handles the algorithm.
Unknown
)
const disablePigzEnv = "CONTAINERD_DISABLE_PIGZ"
@@ -254,6 +256,8 @@ func (compression *Compression) Extension() string {
return "gz"
case Zstd:
return "zst"
case Unknown:
return "unknown"
}
return ""
}

View File

@@ -37,12 +37,11 @@ func SourceDateEpoch() (*time.Time, error) {
if !ok || v == "" {
return nil, nil // not an error
}
i64, err := strconv.ParseInt(v, 10, 64)
t, err := ParseSourceDateEpoch(v)
if err != nil {
return nil, fmt.Errorf("invalid %s value %q: %w", SourceDateEpochEnv, v, err)
return nil, fmt.Errorf("invalid %s value: %w", SourceDateEpochEnv, err)
}
unix := time.Unix(i64, 0).UTC()
return &unix, nil
return t, nil
}
// SourceDateEpochOrNow returns the SOURCE_DATE_EPOCH time if available,
@@ -58,12 +57,26 @@ func SourceDateEpochOrNow() time.Time {
return time.Now().UTC()
}
// ParseSourceDateEpoch parses the given source date epoch, as *time.Time.
// It returns an error if sourceDateEpoch is empty or not well-formatted.
func ParseSourceDateEpoch(sourceDateEpoch string) (*time.Time, error) {
if sourceDateEpoch == "" {
return nil, fmt.Errorf("value is empty")
}
i64, err := strconv.ParseInt(sourceDateEpoch, 10, 64)
if err != nil {
return nil, fmt.Errorf("invalid value: %w", err)
}
unix := time.Unix(i64, 0).UTC()
return &unix, nil
}
// SetSourceDateEpoch sets the SOURCE_DATE_EPOCH env var.
func SetSourceDateEpoch(tm time.Time) {
os.Setenv(SourceDateEpochEnv, fmt.Sprintf("%d", tm.Unix()))
_ = os.Setenv(SourceDateEpochEnv, strconv.Itoa(int(tm.Unix())))
}
// UnsetSourceDateEpoch unsets the SOURCE_DATE_EPOCH env var.
func UnsetSourceDateEpoch() {
os.Unsetenv(SourceDateEpochEnv)
_ = os.Unsetenv(SourceDateEpochEnv)
}

View File

@@ -86,11 +86,11 @@ type TokenOptions struct {
// OAuthTokenResponse is response from fetching token with a OAuth POST request
type OAuthTokenResponse struct {
AccessToken string `json:"access_token"`
RefreshToken string `json:"refresh_token"`
ExpiresIn int `json:"expires_in"`
IssuedAt time.Time `json:"issued_at"`
Scope string `json:"scope"`
AccessToken string `json:"access_token"`
RefreshToken string `json:"refresh_token"`
ExpiresInSeconds int `json:"expires_in"`
IssuedAt time.Time `json:"issued_at"`
Scope string `json:"scope"`
}
// FetchTokenWithOAuth fetches a token using a POST request
@@ -152,11 +152,11 @@ func FetchTokenWithOAuth(ctx context.Context, client *http.Client, headers http.
// FetchTokenResponse is response from fetching token with GET request
type FetchTokenResponse struct {
Token string `json:"token"`
AccessToken string `json:"access_token"`
ExpiresIn int `json:"expires_in"`
IssuedAt time.Time `json:"issued_at"`
RefreshToken string `json:"refresh_token"`
Token string `json:"token"`
AccessToken string `json:"access_token"`
ExpiresInSeconds int `json:"expires_in"`
IssuedAt time.Time `json:"issued_at"`
RefreshToken string `json:"refresh_token"`
}
// FetchToken fetches a token using a GET request

View File

@@ -24,6 +24,7 @@ import (
"net/http"
"strings"
"sync"
"time"
"github.com/containerd/log"
@@ -206,9 +207,10 @@ func (a *dockerAuthorizer) AddResponses(ctx context.Context, responses []*http.R
// authResult is used to control limit rate.
type authResult struct {
sync.WaitGroup
token string
refreshToken string
err error
token string
refreshToken string
expirationTime *time.Time
err error
}
// authHandler is used to handle auth request per registry server.
@@ -271,8 +273,12 @@ func (ah *authHandler) doBearerAuth(ctx context.Context) (token, refreshToken st
// Docs: https://docs.docker.com/registry/spec/auth/scope
scoped := strings.Join(to.Scopes, " ")
// Keep track of the expiration time of cached bearer tokens so they can be
// refreshed when they expire without a server roundtrip.
var expirationTime *time.Time
ah.Lock()
if r, exist := ah.scopedTokens[scoped]; exist {
if r, exist := ah.scopedTokens[scoped]; exist && (r.expirationTime == nil || r.expirationTime.After(time.Now())) {
ah.Unlock()
r.Wait()
return r.token, r.refreshToken, r.err
@@ -286,7 +292,7 @@ func (ah *authHandler) doBearerAuth(ctx context.Context) (token, refreshToken st
defer func() {
token = fmt.Sprintf("Bearer %s", token)
r.token, r.refreshToken, r.err = token, refreshToken, err
r.token, r.refreshToken, r.err, r.expirationTime = token, refreshToken, err, expirationTime
r.Done()
}()
@@ -312,6 +318,7 @@ func (ah *authHandler) doBearerAuth(ctx context.Context) (token, refreshToken st
if err != nil {
return "", "", err
}
expirationTime = getExpirationTime(resp.ExpiresInSeconds)
return resp.Token, resp.RefreshToken, nil
}
log.G(ctx).WithFields(log.Fields{
@@ -321,6 +328,7 @@ func (ah *authHandler) doBearerAuth(ctx context.Context) (token, refreshToken st
}
return "", "", err
}
expirationTime = getExpirationTime(resp.ExpiresInSeconds)
return resp.AccessToken, resp.RefreshToken, nil
}
// do request anonymously
@@ -328,9 +336,18 @@ func (ah *authHandler) doBearerAuth(ctx context.Context) (token, refreshToken st
if err != nil {
return "", "", fmt.Errorf("failed to fetch anonymous token: %w", err)
}
expirationTime = getExpirationTime(resp.ExpiresInSeconds)
return resp.Token, resp.RefreshToken, nil
}
func getExpirationTime(expiresInSeconds int) *time.Time {
if expiresInSeconds <= 0 {
return nil
}
expirationTime := time.Now().Add(time.Duration(expiresInSeconds) * time.Second)
return &expirationTime
}
func invalidAuthorization(ctx context.Context, c auth.Challenge, responses []*http.Response) (retry bool, _ error) {
errStr := c.Parameters["error"]
if errStr == "" {

View File

@@ -31,7 +31,7 @@ type object interface {
// NSMap extends Map type with a notion of namespaces passed via Context.
type NSMap[T object] struct {
mu sync.Mutex
mu sync.RWMutex
objects map[string]map[string]T
}
@@ -44,13 +44,14 @@ func NewNSMap[T object]() *NSMap[T] {
// Get a task
func (m *NSMap[T]) Get(ctx context.Context, id string) (T, error) {
m.mu.Lock()
defer m.mu.Unlock()
namespace, err := namespaces.NamespaceRequired(ctx)
var t T
if err != nil {
return t, err
}
m.mu.RLock()
defer m.mu.RUnlock()
tasks, ok := m.objects[namespace]
if !ok {
return t, errdefs.ErrNotFound
@@ -64,8 +65,8 @@ func (m *NSMap[T]) Get(ctx context.Context, id string) (T, error) {
// GetAll objects under a namespace
func (m *NSMap[T]) GetAll(ctx context.Context, noNS bool) ([]T, error) {
m.mu.Lock()
defer m.mu.Unlock()
m.mu.RLock()
defer m.mu.RUnlock()
var o []T
if noNS {
for ns := range m.objects {
@@ -100,10 +101,10 @@ func (m *NSMap[T]) Add(ctx context.Context, t T) error {
// AddWithNamespace adds a task with the provided namespace
func (m *NSMap[T]) AddWithNamespace(namespace string, t T) error {
id := t.ID()
m.mu.Lock()
defer m.mu.Unlock()
id := t.ID()
if _, ok := m.objects[namespace]; !ok {
m.objects[namespace] = make(map[string]T)
}
@@ -116,12 +117,13 @@ func (m *NSMap[T]) AddWithNamespace(namespace string, t T) error {
// Delete a task
func (m *NSMap[T]) Delete(ctx context.Context, id string) {
m.mu.Lock()
defer m.mu.Unlock()
namespace, err := namespaces.NamespaceRequired(ctx)
if err != nil {
return
}
m.mu.Lock()
defer m.mu.Unlock()
tasks, ok := m.objects[namespace]
if ok {
delete(tasks, id)
@@ -129,8 +131,8 @@ func (m *NSMap[T]) Delete(ctx context.Context, id string) {
}
func (m *NSMap[T]) IsEmpty() bool {
m.mu.Lock()
defer m.mu.Unlock()
m.mu.RLock()
defer m.mu.RUnlock()
for ns := range m.objects {
if len(m.objects[ns]) > 0 {

View File

@@ -21,4 +21,6 @@ const (
// This will be based on the client compilation target, so take that into
// account when choosing this value.
DefaultSnapshotter = "overlayfs"
// DefaultDiffer will set the default differ for the platform.
DefaultDiffer = "walking"
)

View File

@@ -23,4 +23,6 @@ const (
// This will be based on the client compilation target, so take that into
// account when choosing this value.
DefaultSnapshotter = "native"
// DefaultDiffer will set the default differ for the platform.
DefaultDiffer = "walking"
)

View File

@@ -21,4 +21,6 @@ const (
// This will be based on the client compilation target, so take that into
// account when choosing this value.
DefaultSnapshotter = "windows"
// DefaultDiffer will set the default differ for the platform.
DefaultDiffer = "walking"
)

View File

@@ -23,7 +23,7 @@ var (
Package = "github.com/containerd/containerd"
// Version holds the complete version number. Filled in at linking time.
Version = "1.7.27+unknown"
Version = "1.7.29+unknown"
// Revision is filled with the VCS (e.g. git) revision being used to build
// the program at linking time.

View File

@@ -0,0 +1,60 @@
# SPDX-License-Identifier: MPL-2.0
# Copyright (C) 2025 Aleksa Sarai <cyphar@cyphar.com>
# Copyright (C) 2025 SUSE LLC
#
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at https://mozilla.org/MPL/2.0/.
version: "2"
run:
build-tags:
- libpathrs
linters:
enable:
- asasalint
- asciicheck
- containedctx
- contextcheck
- errcheck
- errorlint
- exhaustive
- forcetypeassert
- godot
- goprintffuncname
- govet
- importas
- ineffassign
- makezero
- misspell
- musttag
- nilerr
- nilnesserr
- nilnil
- noctx
- prealloc
- revive
- staticcheck
- testifylint
- unconvert
- unparam
- unused
- usetesting
settings:
govet:
enable:
- nilness
testifylint:
enable-all: true
formatters:
enable:
- gofumpt
- goimports
settings:
goimports:
local-prefixes:
- github.com/cyphar/filepath-securejoin

View File

@@ -6,6 +6,208 @@ and this project adheres to [Semantic Versioning](http://semver.org/).
## [Unreleased] ##
## [0.6.0] - 2025-11-03 ##
> By the Power of Greyskull!
While quite small code-wise, this release marks a very key point in the
development of filepath-securejoin.
filepath-securejoin was originally intended (back in 2017) to simply be a
single-purpose library that would take some common code used in container
runtimes (specifically, Docker's `FollowSymlinksInScope`) and make it more
general-purpose (with the eventual goals of it ending up in the Go stdlib).
Of course, I quickly discovered that this problem was actually far more
complicated to solve when dealing with racing attackers, which led to me
developing `openat2(2)` and [libpathrs][]. I had originally planned for
libpathrs to completely replace filepath-securejoin "once it was ready" but in
the interim we needed to fix several race attacks in runc as part of security
advisories. Obviously we couldn't require the usage of a pre-0.1 Rust library
in runc so it was necessary to port bits of libpathrs into filepath-securejoin.
(Ironically the first prototypes of libpathrs were originally written in Go and
then rewritten to Rust, so the code in filepath-securejoin is actually Go code
that was rewritten to Rust then re-rewritten to Go.)
It then became clear that pure-Go libraries will likely not be willing to
require CGo for all of their builds, so it was necessary to accept that
filepath-securejoin will need to stay. As such, in v0.5.0 we provided more
pure-Go implementations of features from libpathrs but moved them into the
`pathrs-lite` subpackage to clarify what purpose these helpers serve.
This release finally closes the loop and makes it so that pathrs-lite can
transparently use libpathrs (via a `libpathrs` build-tag). This means that
upstream libraries can use the pure Go version if they prefer, but downstreams
(either downstream library users or even downstream distributions) are able to
migrate to libpathrs for all usages of pathrs-lite in an entire Go binary.
I should make it clear that I do not plan to port the rest of libpathrs to Go,
as I do not wish to maintain two copies of the same codebase. pathrs-lite
already provides the core essentials necessary to operate on paths safely for
most modern systems. Users who want additional hardening or more ergonomic APIs
are free to use [`cyphar.com/go-pathrs`][go-pathrs] (libpathrs's Go bindings).
[libpathrs]: https://github.com/cyphar/libpathrs
[go-pathrs]: https://cyphar.com/go-pathrs
### Breaking ###
- The deprecated `MkdirAll`, `MkdirAllHandle`, `OpenInRoot`, `OpenatInRoot` and
`Reopen` wrappers have been removed. Please switch to using `pathrs-lite`
directly.
### Added ###
- `pathrs-lite` now has support for using [libpathrs][libpathrs] as a backend.
This is opt-in and can be enabled at build time with the `libpathrs` build
tag. The intention is to allow for downstream libraries and other projects to
make use of the pure-Go `github.com/cyphar/filepath-securejoin/pathrs-lite`
package and distributors can then opt-in to using `libpathrs` for the entire
binary if they wish.
## [0.5.1] - 2025-10-31 ##
> Spooky scary skeletons send shivers down your spine!
### Changed ###
- `openat2` can return `-EAGAIN` if it detects a possible attack in certain
scenarios (namely if there was a rename or mount while walking a path with a
`..` component). While this is necessary to avoid a denial-of-service in the
kernel, it does require retry loops in userspace.
In previous versions, `pathrs-lite` would retry `openat2` 32 times before
returning an error, but we've received user reports that this limit can be
hit on systems with very heavy load. In some synthetic benchmarks (testing
the worst-case of an attacker doing renames in a tight loop on every core of
a 16-core machine) we managed to get a ~3% failure rate in runc. We have
improved this situation in two ways:
* We have now increased this limit to 128, which should be good enough for
most use-cases without becoming a denial-of-service vector (the number of
syscalls called by the `O_PATH` resolver in a typical case is within the
same ballpark). The same benchmarks show a failure rate of ~0.12% which
(while not zero) is probably sufficient for most users.
* In addition, we now return a `unix.EAGAIN` error that is bubbled up and can
be detected by callers. This means that callers with stricter requirements
to avoid spurious errors can choose to do their own infinite `EAGAIN` retry
loop (though we would strongly recommend users use time-based deadlines in
such retry loops to avoid potentially unbounded denials-of-service).
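As a rough, non-authoritative sketch of such a time-bounded retry loop (the
wrapped operation is a hypothetical stand-in for whichever pathrs-lite call is
being retried, not an API provided by this library):
```go
package retryexample

import (
	"errors"
	"time"

	"golang.org/x/sys/unix"
)

// retryEAGAIN retries op until it stops failing with unix.EAGAIN or the
// deadline passes. op is a hypothetical stand-in for the pathrs-lite call
// being wrapped.
func retryEAGAIN[T any](deadline time.Time, op func() (T, error)) (T, error) {
	for {
		v, err := op()
		if !errors.Is(err, unix.EAGAIN) || time.Now().After(deadline) {
			return v, err
		}
	}
}
```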
## [0.5.0] - 2025-09-26 ##
> Let the past die. Kill it if you have to.
> **NOTE**: With this release, some parts of
> `github.com/cyphar/filepath-securejoin` are now licensed under the Mozilla
> Public License (version 2). Please see [COPYING.md][] as well as the
> license header in each file for more details.
[COPYING.md]: ./COPYING.md
### Breaking ###
- The new API introduced in the [0.3.0][] release has been moved to a new
subpackage called `pathrs-lite`. This was primarily done to better indicate
the split between the new and old APIs, as well as indicate to users the
purpose of this subpackage (it is a less complete version of [libpathrs][]).
We have added some wrappers to the top-level package to ease the transition,
but those are deprecated and will be removed in the next minor release of
filepath-securejoin. Users should update their import paths.
This new subpackage has also been relicensed under the Mozilla Public License
(version 2), please see [COPYING.md][] for more details.
### Added ###
- Most of the key bits of the safe `procfs` API have now been exported and are
available in `github.com/cyphar/filepath-securejoin/pathrs-lite/procfs`. At
the moment this primarily consists of a new `procfs.Handle` API:
* `OpenProcRoot` returns a new handle to `/proc`, endeavouring to make it
safe if possible (`subset=pid` to protect against mistaken write attacks
and leaks, as well as using `fsopen(2)` to avoid racing mount attacks).
`OpenUnsafeProcRoot` returns a handle without attempting to create one
with `subset=pid`, which makes it more dangerous to leak. Most users
should use `OpenProcRoot` (even if you need to use `ProcRoot` as the base
of an operation, as filepath-securejoin will internally open a handle when
necessary).
* The `(*procfs.Handle).Open*` family of methods lets you get a safe
`O_PATH` handle to subpaths within `/proc` for certain subpaths.
For `OpenThreadSelf`, the returned `ProcThreadSelfCloser` needs to be
called after you completely finish using the handle (this is necessary
because Go is multi-threaded and `ProcThreadSelf` references
`/proc/thread-self` which may disappear if we do not
`runtime.LockOSThread` -- `ProcThreadSelfCloser` is currently equivalent
to `runtime.UnlockOSThread`).
Note that you cannot open any `procfs` symlinks (most notably magic-links)
using this API. At the moment, filepath-securejoin does not support this
feature (but [libpathrs][] does).
* `ProcSelfFdReadlink` lets you get the in-kernel path representation of a
file descriptor (think `readlink("/proc/self/fd/...")`), except that we
verify that there aren't any tricky overmounts that could fool the
process.
Please be aware that the returned string is simply a snapshot at that
particular moment, and an attacker could move the file being pointed to.
In addition, complex namespace configurations could result in non-sensical
or confusing paths to be returned. The value received from this function
should only be used as secondary verification of some security property,
not as proof that a particular handle has a particular path.
The procfs handle used internally by the API is the same as the rest of
`filepath-securejoin` (for privileged programs this is usually a private
in-process `procfs` instance created with `fsopen(2)`).
As before, this is intended as a stop-gap before users migrate to
[libpathrs][], which provides a far more extensive safe `procfs` API and is
generally more robust.
- Previously, the hardened procfs implementation (used internally within
`Reopen` and `Open(at)InRoot`) only protected against overmount attacks on
systems with `openat2(2)` (Linux 5.6) or systems with `fsopen(2)` or
`open_tree(2)` (Linux 5.2) and programs with privileges to use them (with
some caveats about locked mounts that probably affect very few users). For
other users, an attacker with the ability to create malicious mounts (on most
systems, a sysadmin) could trick you into operating on files you didn't
expect. This attack only really makes sense in the context of container
runtime implementations.
This was considered a reasonable trade-off, as the long-term intention was to
get all users to just switch to [libpathrs][] if they wanted to use the safe
`procfs` API (which had more extensive protections, and is what these new
protections in `filepath-securejoin` are based on). However, as the API
is now being exported it seems unwise to advertise the API as "safe" if we do
not protect against known attacks.
The procfs API is now more protected against attackers on systems lacking the
aforementioned protections. However, the most comprehensive of these
protections effectively rely on [`statx(STATX_MNT_ID)`][statx.2] (Linux 5.8).
On older kernel versions, there is no effective protection (there is some
minimal protection against non-`procfs` filesystem components but a
sufficiently clever attacker can work around those). In addition,
`STATX_MNT_ID` is vulnerable to mount ID reuse attacks by sufficiently
motivated and privileged attackers -- this problem is mitigated with
`STATX_MNT_ID_UNIQUE` (Linux 6.8) but that raises the minimum kernel version
for more protection.
The fact that these protections are quite limited despite needing a fair bit
of extra code to handle was one of the primary reasons we did not initially
implement this in `filepath-securejoin` ([libpathrs][] supports all of this,
of course).
### Fixed ###
- RHEL 8 kernels have backports of `fsopen(2)` but in some testing we've found
that it has very bad (and very difficult to debug) performance issues, and so
we will explicitly refuse to use `fsopen(2)` if the running kernel version is
pre-5.2 and will instead fallback to `open("/proc")`.
[CVE-2024-21626]: https://github.com/opencontainers/runc/security/advisories/GHSA-xr7r-f8xq-vfvv
[libpathrs]: https://github.com/cyphar/libpathrs
[statx.2]: https://www.man7.org/linux/man-pages/man2/statx.2.html
## [0.4.1] - 2025-01-28 ##
### Fixed ###
@@ -173,7 +375,7 @@ and this project adheres to [Semantic Versioning](http://semver.org/).
safe to start migrating to as we have extensive tests ensuring they behave
correctly and are safe against various races and other attacks.
[libpathrs]: https://github.com/openSUSE/libpathrs
[libpathrs]: https://github.com/cyphar/libpathrs
[open.2]: https://www.man7.org/linux/man-pages/man2/open.2.html
## [0.2.5] - 2024-05-03 ##
@@ -238,7 +440,10 @@ This is our first release of `github.com/cyphar/filepath-securejoin`,
containing a full implementation with a coverage of 93.5% (the only missing
cases are the error cases, which are hard to mocktest at the moment).
[Unreleased]: https://github.com/cyphar/filepath-securejoin/compare/v0.4.1...HEAD
[Unreleased]: https://github.com/cyphar/filepath-securejoin/compare/v0.6.0...HEAD
[0.6.0]: https://github.com/cyphar/filepath-securejoin/compare/v0.5.1...v0.6.0
[0.5.1]: https://github.com/cyphar/filepath-securejoin/compare/v0.5.0...v0.5.1
[0.5.0]: https://github.com/cyphar/filepath-securejoin/compare/v0.4.1...v0.5.0
[0.4.1]: https://github.com/cyphar/filepath-securejoin/compare/v0.4.0...v0.4.1
[0.4.0]: https://github.com/cyphar/filepath-securejoin/compare/v0.3.6...v0.4.0
[0.3.6]: https://github.com/cyphar/filepath-securejoin/compare/v0.3.5...v0.3.6

View File

@@ -0,0 +1,447 @@
## COPYING ##
`SPDX-License-Identifier: BSD-3-Clause AND MPL-2.0`
This project is made up of code licensed under different licenses. Which code
you use will have an impact on whether only one or both licenses apply to your
usage of this library.
Note that **each file** in this project individually has a code comment at the
start describing the license of that particular file -- this is the most
accurate license information of this project; in case there is any conflict
between this document and the comment at the start of a file, the comment shall
take precedence. The only purpose of this document is to work around [a known
technical limitation of pkg.go.dev's license checking tool when dealing with
non-trivial project licenses][go75067].
[go75067]: https://go.dev/issue/75067
### `BSD-3-Clause` ###
At time of writing, the following files and directories are licensed under the
BSD-3-Clause license:
* `doc.go`
* `join*.go`
* `vfs.go`
* `internal/consts/*.go`
* `pathrs-lite/internal/gocompat/*.go`
* `pathrs-lite/internal/kernelversion/*.go`
The text of the BSD-3-Clause license used by this project is the following (the
text is also available from the [`LICENSE.BSD`](./LICENSE.BSD) file):
```
Copyright (C) 2014-2015 Docker Inc & Go Authors. All rights reserved.
Copyright (C) 2017-2024 SUSE LLC. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
```
### `MPL-2.0` ###
All other files (unless otherwise marked) are licensed under the Mozilla Public
License (version 2.0).
The text of the Mozilla Public License (version 2.0) is the following (the text
is also available from the [`LICENSE.MPL-2.0`](./LICENSE.MPL-2.0) file):
```
Mozilla Public License Version 2.0
==================================
1. Definitions
--------------
1.1. "Contributor"
means each individual or legal entity that creates, contributes to
the creation of, or owns Covered Software.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used
by a Contributor and that particular Contributor's Contribution.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached
the notice in Exhibit A, the Executable Form of such Source Code
Form, and Modifications of such Source Code Form, in each case
including portions thereof.
1.5. "Incompatible With Secondary Licenses"
means
(a) that the initial Contributor has attached the notice described
in Exhibit B to the Covered Software; or
(b) that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the
terms of a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in
a separate file or files, that is not Covered Software.
1.8. "License"
means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible,
whether at the time of the initial grant or subsequently, any and
all of the rights conveyed by this License.
1.10. "Modifications"
means any of the following:
(a) any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered
Software; or
(b) any new file in Source Code Form that contains any Covered
Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the
License, by the making, using, selling, offering for sale, having
made, import, or transfer of either its Contributions or its
Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU
Lesser General Public License, Version 2.1, the GNU Affero General
Public License, Version 3.0, or any later versions of those
licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that
controls, is controlled by, or is under common control with You. For
purposes of this definition, "control" means (a) the power, direct
or indirect, to cause the direction or management of such entity,
whether by contract or otherwise, or (b) ownership of more than
fifty percent (50%) of the outstanding shares or beneficial
ownership of such entity.
2. License Grants and Conditions
--------------------------------
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
(a) under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
(b) under Patent Claims of such Contributor to make, use, sell, offer
for sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
(a) for any code that a Contributor has removed from Covered Software;
or
(b) for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
(c) under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights
to grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted
in Section 2.1.
3. Responsibilities
-------------------
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
(a) such Covered Software must also be made available in Source Code
Form, as described in Section 3.1, and You must inform recipients of
the Executable Form how they can obtain a copy of such Source Code
Form by reasonable means in a timely manner, at a charge no more
than the cost of distribution to the recipient; and
(b) You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter
the recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty,
or limitations of liability) contained within the Source Code Form of
the Covered Software, except that You may alter any license notices to
the extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
---------------------------------------------------
If it is impossible for You to comply with any of the terms of this
License with respect to some or all of the Covered Software due to
statute, judicial order, or regulation then You must: (a) comply with
the terms of this License to the maximum extent possible; and (b)
describe the limitations and the code they affect. Such description must
be placed in a text file included with all distributions of the Covered
Software under this License. Except to the extent prohibited by statute
or regulation, such description must be sufficiently detailed for a
recipient of ordinary skill to be able to understand it.
5. Termination
--------------
5.1. The rights granted under this License will terminate automatically
if You fail to comply with any of its terms. However, if You become
compliant, then the rights granted under this License from a particular
Contributor are reinstated (a) provisionally, unless and until such
Contributor explicitly and finally terminates Your grants, and (b) on an
ongoing basis, if such Contributor fails to notify You of the
non-compliance by some reasonable means prior to 60 days after You have
come back into compliance. Moreover, Your grants from a particular
Contributor are reinstated on an ongoing basis if such Contributor
notifies You of the non-compliance by some reasonable means, this is the
first time You have received notice of non-compliance with this License
from such Contributor, and You become compliant prior to 30 days after
Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all
end user license agreements (excluding distributors and resellers) which
have been validly granted by You or Your distributors under this License
prior to termination shall survive termination.
************************************************************************
* *
* 6. Disclaimer of Warranty *
* ------------------------- *
* *
* Covered Software is provided under this License on an "as is" *
* basis, without warranty of any kind, either expressed, implied, or *
* statutory, including, without limitation, warranties that the *
* Covered Software is free of defects, merchantable, fit for a *
* particular purpose or non-infringing. The entire risk as to the *
* quality and performance of the Covered Software is with You. *
* Should any Covered Software prove defective in any respect, You *
* (not any Contributor) assume the cost of any necessary servicing, *
* repair, or correction. This disclaimer of warranty constitutes an *
* essential part of this License. No use of any Covered Software is *
* authorized under this License except under this disclaimer. *
* *
************************************************************************
************************************************************************
* *
* 7. Limitation of Liability *
* -------------------------- *
* *
* Under no circumstances and under no legal theory, whether tort *
* (including negligence), contract, or otherwise, shall any *
* Contributor, or anyone who distributes Covered Software as *
* permitted above, be liable to You for any direct, indirect, *
* special, incidental, or consequential damages of any character *
* including, without limitation, damages for lost profits, loss of *
* goodwill, work stoppage, computer failure or malfunction, or any *
* and all other commercial damages or losses, even if such party *
* shall have been informed of the possibility of such damages. This *
* limitation of liability shall not apply to liability for death or *
* personal injury resulting from such party's negligence to the *
* extent applicable law prohibits such limitation. Some *
* jurisdictions do not allow the exclusion or limitation of *
* incidental or consequential damages, so this exclusion and *
* limitation may not apply to You. *
* *
************************************************************************
8. Litigation
-------------
Any litigation relating to this License may be brought only in the
courts of a jurisdiction where the defendant maintains its principal
place of business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions.
Nothing in this Section shall prevent a party's ability to bring
cross-claims or counter-claims.
9. Miscellaneous
----------------
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides
that the language of a contract shall be construed against the drafter
shall not be used to construe this License against a Contributor.
10. Versions of the License
---------------------------
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
-------------------------------------------
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at https://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular
file, then You may include the notice in a location (such as a LICENSE
file in a relevant directory) where a recipient would be likely to look
for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
---------------------------------------------------------
This Source Code Form is "Incompatible With Secondary Licenses", as
defined by the Mozilla Public License, v. 2.0.
```

View File

@@ -0,0 +1,373 @@
Mozilla Public License Version 2.0
==================================
1. Definitions
--------------
1.1. "Contributor"
means each individual or legal entity that creates, contributes to
the creation of, or owns Covered Software.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used
by a Contributor and that particular Contributor's Contribution.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached
the notice in Exhibit A, the Executable Form of such Source Code
Form, and Modifications of such Source Code Form, in each case
including portions thereof.
1.5. "Incompatible With Secondary Licenses"
means
(a) that the initial Contributor has attached the notice described
in Exhibit B to the Covered Software; or
(b) that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the
terms of a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in
a separate file or files, that is not Covered Software.
1.8. "License"
means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible,
whether at the time of the initial grant or subsequently, any and
all of the rights conveyed by this License.
1.10. "Modifications"
means any of the following:
(a) any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered
Software; or
(b) any new file in Source Code Form that contains any Covered
Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the
License, by the making, using, selling, offering for sale, having
made, import, or transfer of either its Contributions or its
Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU
Lesser General Public License, Version 2.1, the GNU Affero General
Public License, Version 3.0, or any later versions of those
licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that
controls, is controlled by, or is under common control with You. For
purposes of this definition, "control" means (a) the power, direct
or indirect, to cause the direction or management of such entity,
whether by contract or otherwise, or (b) ownership of more than
fifty percent (50%) of the outstanding shares or beneficial
ownership of such entity.
2. License Grants and Conditions
--------------------------------
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
(a) under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
(b) under Patent Claims of such Contributor to make, use, sell, offer
for sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
(a) for any code that a Contributor has removed from Covered Software;
or
(b) for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
(c) under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights
to grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted
in Section 2.1.
3. Responsibilities
-------------------
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
(a) such Covered Software must also be made available in Source Code
Form, as described in Section 3.1, and You must inform recipients of
the Executable Form how they can obtain a copy of such Source Code
Form by reasonable means in a timely manner, at a charge no more
than the cost of distribution to the recipient; and
(b) You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter
the recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty,
or limitations of liability) contained within the Source Code Form of
the Covered Software, except that You may alter any license notices to
the extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
---------------------------------------------------
If it is impossible for You to comply with any of the terms of this
License with respect to some or all of the Covered Software due to
statute, judicial order, or regulation then You must: (a) comply with
the terms of this License to the maximum extent possible; and (b)
describe the limitations and the code they affect. Such description must
be placed in a text file included with all distributions of the Covered
Software under this License. Except to the extent prohibited by statute
or regulation, such description must be sufficiently detailed for a
recipient of ordinary skill to be able to understand it.
5. Termination
--------------
5.1. The rights granted under this License will terminate automatically
if You fail to comply with any of its terms. However, if You become
compliant, then the rights granted under this License from a particular
Contributor are reinstated (a) provisionally, unless and until such
Contributor explicitly and finally terminates Your grants, and (b) on an
ongoing basis, if such Contributor fails to notify You of the
non-compliance by some reasonable means prior to 60 days after You have
come back into compliance. Moreover, Your grants from a particular
Contributor are reinstated on an ongoing basis if such Contributor
notifies You of the non-compliance by some reasonable means, this is the
first time You have received notice of non-compliance with this License
from such Contributor, and You become compliant prior to 30 days after
Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all
end user license agreements (excluding distributors and resellers) which
have been validly granted by You or Your distributors under this License
prior to termination shall survive termination.
************************************************************************
* *
* 6. Disclaimer of Warranty *
* ------------------------- *
* *
* Covered Software is provided under this License on an "as is" *
* basis, without warranty of any kind, either expressed, implied, or *
* statutory, including, without limitation, warranties that the *
* Covered Software is free of defects, merchantable, fit for a *
* particular purpose or non-infringing. The entire risk as to the *
* quality and performance of the Covered Software is with You. *
* Should any Covered Software prove defective in any respect, You *
* (not any Contributor) assume the cost of any necessary servicing, *
* repair, or correction. This disclaimer of warranty constitutes an *
* essential part of this License. No use of any Covered Software is *
* authorized under this License except under this disclaimer. *
* *
************************************************************************
************************************************************************
* *
* 7. Limitation of Liability *
* -------------------------- *
* *
* Under no circumstances and under no legal theory, whether tort *
* (including negligence), contract, or otherwise, shall any *
* Contributor, or anyone who distributes Covered Software as *
* permitted above, be liable to You for any direct, indirect, *
* special, incidental, or consequential damages of any character *
* including, without limitation, damages for lost profits, loss of *
* goodwill, work stoppage, computer failure or malfunction, or any *
* and all other commercial damages or losses, even if such party *
* shall have been informed of the possibility of such damages. This *
* limitation of liability shall not apply to liability for death or *
* personal injury resulting from such party's negligence to the *
* extent applicable law prohibits such limitation. Some *
* jurisdictions do not allow the exclusion or limitation of *
* incidental or consequential damages, so this exclusion and *
* limitation may not apply to You. *
* *
************************************************************************
8. Litigation
-------------
Any litigation relating to this License may be brought only in the
courts of a jurisdiction where the defendant maintains its principal
place of business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions.
Nothing in this Section shall prevent a party's ability to bring
cross-claims or counter-claims.
9. Miscellaneous
----------------
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides
that the language of a contract shall be construed against the drafter
shall not be used to construe this License against a Contributor.
10. Versions of the License
---------------------------
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
-------------------------------------------
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at https://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular
file, then You may include the notice in a location (such as a LICENSE
file in a relevant directory) where a recipient would be likely to look
for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
---------------------------------------------------------
This Source Code Form is "Incompatible With Secondary Licenses", as
defined by the Mozilla Public License, v. 2.0.

View File

@@ -67,7 +67,8 @@ func SecureJoin(root, unsafePath string) (string, error) {
[libpathrs]: https://github.com/openSUSE/libpathrs
[go#20126]: https://github.com/golang/go/issues/20126
### New API ###
### <a name="new-api" /> New API ###
[#new-api]: #new-api
While we recommend users switch to [libpathrs][libpathrs] as soon as it has a
stable release, some methods implemented by libpathrs have been ported to this
@@ -165,5 +166,19 @@ after `MkdirAll`).
### License ###
The license of this project is the same as Go, which is a BSD 3-clause license
available in the `LICENSE` file.
`SPDX-License-Identifier: BSD-3-Clause AND MPL-2.0`
Some of the code in this project is derived from Go, and is licensed under a
BSD 3-clause license (available in `LICENSE.BSD`). Other files (many of which
are derived from [libpathrs][libpathrs]) are licensed under the Mozilla Public
License version 2.0 (available in `LICENSE.MPL-2.0`). If you are using the
["New API" described above][#new-api], you are probably using code from files
released under this license.
Every source file in this project has a copyright header describing its
license. Please check the license headers of each file to see what license
applies to it.
See [COPYING.md](./COPYING.md) for some more details.
[umoci]: https://github.com/opencontainers/umoci

View File

@@ -1 +1 @@
0.4.1
0.6.0

View File

@@ -0,0 +1,29 @@
# SPDX-License-Identifier: MPL-2.0
# Copyright (C) 2025 Aleksa Sarai <cyphar@cyphar.com>
# Copyright (C) 2025 SUSE LLC
#
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at https://mozilla.org/MPL/2.0/.
comment:
layout: "condensed_header, reach, diff, components, condensed_files, condensed_footer"
require_changes: true
branches:
- main
coverage:
range: 60..100
status:
project:
default:
target: 85%
threshold: 0%
patch:
default:
target: auto
informational: true
github_checks:
annotations: false

View File

@@ -1,3 +1,5 @@
// SPDX-License-Identifier: BSD-3-Clause
// Copyright (C) 2014-2015 Docker Inc & Go Authors. All rights reserved.
// Copyright (C) 2017-2024 SUSE LLC. All rights reserved.
// Use of this source code is governed by a BSD-style
@@ -14,14 +16,13 @@
// **not** safe against race conditions where an attacker changes the
// filesystem after (or during) the [SecureJoin] operation.
//
// The new API is made up of [OpenInRoot] and [MkdirAll] (and derived
// functions). These are safe against racing attackers and have several other
// protections that are not provided by the legacy API. There are many more
// operations that most programs expect to be able to do safely, but we do not
// provide explicit support for them because we want to encourage users to
// switch to [libpathrs](https://github.com/openSUSE/libpathrs) which is a
// cross-language next-generation library that is entirely designed around
// operating on paths safely.
// The new API is available in the [pathrs-lite] subpackage, and provides
// protections against racing attackers as well as several other key
// protections against attacks often seen by container runtimes. As the name
// suggests, [pathrs-lite] is a stripped down (pure Go) reimplementation of
// [libpathrs]. The main APIs provided are [OpenInRoot], [MkdirAll], and
// [procfs.Handle] -- other APIs are not planned to be ported. The long-term
// goal is for users to migrate to [libpathrs] which is more fully-featured.
//
// securejoin has been used by several container runtimes (Docker, runc,
// Kubernetes, etc) for quite a few years as a de-facto standard for operating
@@ -31,9 +32,16 @@
// API as soon as possible (or even better, switch to libpathrs).
//
// This project was initially intended to be included in the Go standard
// library, but [it was rejected](https://go.dev/issue/20126). There is now a
// [new Go proposal](https://go.dev/issue/67002) for a safe path resolution API
// that shares some of the goals of filepath-securejoin. However, that design
// is intended to work like `openat2(RESOLVE_BENEATH)` which does not fit the
// usecase of container runtimes and most system tools.
// library, but it was rejected (see https://go.dev/issue/20126). Much later,
// [os.Root] was added to the Go stdlib that shares some of the goals of
// filepath-securejoin. However, its design is intended to work like
// openat2(RESOLVE_BENEATH) which does not fit the usecase of container
// runtimes and most system tools.
//
// [pathrs-lite]: https://pkg.go.dev/github.com/cyphar/filepath-securejoin/pathrs-lite
// [libpathrs]: https://github.com/openSUSE/libpathrs
// [OpenInRoot]: https://pkg.go.dev/github.com/cyphar/filepath-securejoin/pathrs-lite#OpenInRoot
// [MkdirAll]: https://pkg.go.dev/github.com/cyphar/filepath-securejoin/pathrs-lite#MkdirAll
// [procfs.Handle]: https://pkg.go.dev/github.com/cyphar/filepath-securejoin/pathrs-lite/procfs#Handle
// [os.Root]: https://pkg.go.dev/os#Root
package securejoin
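
A short usage sketch of the relocated API described in the doc comment above. `OpenInRoot` and the `pathrs-lite` import path are confirmed by the pkg.go.dev links in the diff; the `pathrs.Reopen` call is an assumption, taken to keep the signature of the top-level `Reopen` removed elsewhere in this diff, and the root path is illustrative.

```go
// Sketch, under the assumptions stated above.
package main

import (
	"fmt"
	"os"

	pathrs "github.com/cyphar/filepath-securejoin/pathrs-lite"
	"golang.org/x/sys/unix"
)

func main() {
	// Resolve an untrusted path strictly inside the root, even against an
	// attacker racing with the lookup. The result is an O_PATH-only handle.
	handle, err := pathrs.OpenInRoot("/srv/ctr/rootfs", "etc/passwd")
	if err != nil {
		fmt.Fprintln(os.Stderr, "lookup failed:", err)
		os.Exit(1)
	}
	defer handle.Close()

	// "Upgrade" the O_PATH handle into a readable file via /proc
	// (assumed pathrs-lite equivalent of the removed top-level Reopen).
	f, err := pathrs.Reopen(handle, unix.O_RDONLY)
	if err != nil {
		fmt.Fprintln(os.Stderr, "reopen failed:", err)
		os.Exit(1)
	}
	defer f.Close()
	fmt.Println("safely opened:", f.Name())
}
```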

View File

@@ -1,32 +0,0 @@
//go:build linux && go1.21
// Copyright (C) 2024 SUSE LLC. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package securejoin
import (
"slices"
"sync"
)
func slices_DeleteFunc[S ~[]E, E any](slice S, delFn func(E) bool) S {
return slices.DeleteFunc(slice, delFn)
}
func slices_Contains[S ~[]E, E comparable](slice S, val E) bool {
return slices.Contains(slice, val)
}
func slices_Clone[S ~[]E, E any](slice S) S {
return slices.Clone(slice)
}
func sync_OnceValue[T any](f func() T) func() T {
return sync.OnceValue(f)
}
func sync_OnceValues[T1, T2 any](f func() (T1, T2)) func() (T1, T2) {
return sync.OnceValues(f)
}

View File

@@ -1,124 +0,0 @@
//go:build linux && !go1.21
// Copyright (C) 2024 SUSE LLC. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package securejoin
import (
"sync"
)
// These are very minimal implementations of functions that appear in Go 1.21's
// stdlib, included so that we can build on older Go versions. Most are
// borrowed directly from the stdlib, and a few are modified to be "obviously
// correct" without needing to copy too many other helpers.
// clearSlice is equivalent to the builtin clear from Go 1.21.
// Copied from the Go 1.24 stdlib implementation.
func clearSlice[S ~[]E, E any](slice S) {
var zero E
for i := range slice {
slice[i] = zero
}
}
// Copied from the Go 1.24 stdlib implementation.
func slices_IndexFunc[S ~[]E, E any](s S, f func(E) bool) int {
for i := range s {
if f(s[i]) {
return i
}
}
return -1
}
// Copied from the Go 1.24 stdlib implementation.
func slices_DeleteFunc[S ~[]E, E any](s S, del func(E) bool) S {
i := slices_IndexFunc(s, del)
if i == -1 {
return s
}
// Don't start copying elements until we find one to delete.
for j := i + 1; j < len(s); j++ {
if v := s[j]; !del(v) {
s[i] = v
i++
}
}
clearSlice(s[i:]) // zero/nil out the obsolete elements, for GC
return s[:i]
}
// Similar to the stdlib slices.Contains, except that we don't have
// slices.Index so we need to use slices.IndexFunc for this non-Func helper.
func slices_Contains[S ~[]E, E comparable](s S, v E) bool {
return slices_IndexFunc(s, func(e E) bool { return e == v }) >= 0
}
// Copied from the Go 1.24 stdlib implementation.
func slices_Clone[S ~[]E, E any](s S) S {
// Preserve nil in case it matters.
if s == nil {
return nil
}
return append(S([]E{}), s...)
}
// Copied from the Go 1.24 stdlib implementation.
func sync_OnceValue[T any](f func() T) func() T {
var (
once sync.Once
valid bool
p any
result T
)
g := func() {
defer func() {
p = recover()
if !valid {
panic(p)
}
}()
result = f()
f = nil
valid = true
}
return func() T {
once.Do(g)
if !valid {
panic(p)
}
return result
}
}
// Copied from the Go 1.24 stdlib implementation.
func sync_OnceValues[T1, T2 any](f func() (T1, T2)) func() (T1, T2) {
var (
once sync.Once
valid bool
p any
r1 T1
r2 T2
)
g := func() {
defer func() {
p = recover()
if !valid {
panic(p)
}
}()
r1, r2 = f()
f = nil
valid = true
}
return func() (T1, T2) {
once.Do(g)
if !valid {
panic(p)
}
return r1, r2
}
}

View File

@@ -0,0 +1,15 @@
// SPDX-License-Identifier: BSD-3-Clause
// Copyright (C) 2014-2015 Docker Inc & Go Authors. All rights reserved.
// Copyright (C) 2017-2025 SUSE LLC. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package consts contains the definitions of internal constants used
// throughout filepath-securejoin.
package consts
// MaxSymlinkLimit is the maximum number of symlinks that can be encountered
// during a single lookup before returning -ELOOP. At time of writing, Linux
// has an internal limit of 40.
const MaxSymlinkLimit = 255

View File

@@ -1,3 +1,5 @@
// SPDX-License-Identifier: BSD-3-Clause
// Copyright (C) 2014-2015 Docker Inc & Go Authors. All rights reserved.
// Copyright (C) 2017-2025 SUSE LLC. All rights reserved.
// Use of this source code is governed by a BSD-style
@@ -11,9 +13,9 @@ import (
"path/filepath"
"strings"
"syscall"
)
const maxSymlinkLimit = 255
"github.com/cyphar/filepath-securejoin/internal/consts"
)
// IsNotExist tells you if err is an error that implies that either the path
// accessed does not exist (or path components don't exist). This is
@@ -49,12 +51,13 @@ func hasDotDot(path string) bool {
return strings.Contains("/"+path+"/", "/../")
}
// SecureJoinVFS joins the two given path components (similar to [filepath.Join]) except
// that the returned path is guaranteed to be scoped inside the provided root
// path (when evaluated). Any symbolic links in the path are evaluated with the
// given root treated as the root of the filesystem, similar to a chroot. The
// filesystem state is evaluated through the given [VFS] interface (if nil, the
// standard [os].* family of functions are used).
// SecureJoinVFS joins the two given path components (similar to
// [filepath.Join]) except that the returned path is guaranteed to be scoped
// inside the provided root path (when evaluated). Any symbolic links in the
// path are evaluated with the given root treated as the root of the
// filesystem, similar to a chroot. The filesystem state is evaluated through
// the given [VFS] interface (if nil, the standard [os].* family of functions
// are used).
//
// Note that the guarantees provided by this function only apply if the path
// components in the returned string are not modified (in other words are not
@@ -78,7 +81,7 @@ func hasDotDot(path string) bool {
// fully resolved using [filepath.EvalSymlinks] or otherwise constructed to
// avoid containing symlink components. Of course, the root also *must not* be
// attacker-controlled.
func SecureJoinVFS(root, unsafePath string, vfs VFS) (string, error) {
func SecureJoinVFS(root, unsafePath string, vfs VFS) (string, error) { //nolint:revive // name is part of public API
// The root path must not contain ".." components, otherwise when we join
// the subpath we will end up with a weird path. We could work around this
// in other ways but users shouldn't be giving us non-lexical root paths in
@@ -138,7 +141,7 @@ func SecureJoinVFS(root, unsafePath string, vfs VFS) (string, error) {
// It's a symlink, so get its contents and expand it by prepending it
// to the yet-unparsed path.
linksWalked++
if linksWalked > maxSymlinkLimit {
if linksWalked > consts.MaxSymlinkLimit {
return "", &os.PathError{Op: "SecureJoin", Path: root + string(filepath.Separator) + unsafePath, Err: syscall.ELOOP}
}
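
For contrast with the hunk above, a minimal usage sketch of the legacy lexical API (the root path is illustrative): `SecureJoin` scopes `..` and symlinks to the root, but as the doc comment notes, the result is only trustworthy if the filesystem is not modified concurrently.

```go
// Sketch of the legacy SecureJoin API shown in this hunk.
package main

import (
	"fmt"

	securejoin "github.com/cyphar/filepath-securejoin"
)

func main() {
	// "../../etc/shadow" cannot escape the root: ".." components and any
	// symlinks are resolved with the root treated as "/".
	path, err := securejoin.SecureJoin("/srv/ctr/rootfs", "../../etc/shadow")
	if err != nil {
		fmt.Println("join failed:", err)
		return
	}
	fmt.Println(path) // e.g. /srv/ctr/rootfs/etc/shadow
}
```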

View File

@@ -1,103 +0,0 @@
//go:build linux
// Copyright (C) 2024 SUSE LLC. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package securejoin
import (
"fmt"
"os"
"strconv"
"golang.org/x/sys/unix"
)
// OpenatInRoot is equivalent to [OpenInRoot], except that the root is provided
// using an *[os.File] handle, to ensure that the correct root directory is used.
func OpenatInRoot(root *os.File, unsafePath string) (*os.File, error) {
handle, err := completeLookupInRoot(root, unsafePath)
if err != nil {
return nil, &os.PathError{Op: "securejoin.OpenInRoot", Path: unsafePath, Err: err}
}
return handle, nil
}
// OpenInRoot safely opens the provided unsafePath within the root.
// Effectively, OpenInRoot(root, unsafePath) is equivalent to
//
// path, _ := securejoin.SecureJoin(root, unsafePath)
// handle, err := os.OpenFile(path, unix.O_PATH|unix.O_CLOEXEC)
//
// But is much safer. The above implementation is unsafe because if an attacker
// can modify the filesystem tree between [SecureJoin] and [os.OpenFile], it is
// possible for the returned file to be outside of the root.
//
// Note that the returned handle is an O_PATH handle, meaning that only a very
// limited set of operations will work on the handle. This is done to avoid
// accidentally opening an untrusted file that could cause issues (such as a
// disconnected TTY that could cause a DoS, or some other issue). In order to
// use the returned handle, you can "upgrade" it to a proper handle using
// [Reopen].
func OpenInRoot(root, unsafePath string) (*os.File, error) {
rootDir, err := os.OpenFile(root, unix.O_PATH|unix.O_DIRECTORY|unix.O_CLOEXEC, 0)
if err != nil {
return nil, err
}
defer rootDir.Close()
return OpenatInRoot(rootDir, unsafePath)
}
// Reopen takes an *[os.File] handle and re-opens it through /proc/self/fd.
// Reopen(file, flags) is effectively equivalent to
//
// fdPath := fmt.Sprintf("/proc/self/fd/%d", file.Fd())
// os.OpenFile(fdPath, flags|unix.O_CLOEXEC)
//
// But with some extra hardenings to ensure that we are not tricked by a
// maliciously-configured /proc mount. While this attack scenario is not
// common, in container runtimes it is possible for higher-level runtimes to be
// tricked into configuring an unsafe /proc that can be used to attack file
// operations. See [CVE-2019-19921] for more details.
//
// [CVE-2019-19921]: https://github.com/advisories/GHSA-fh74-hm69-rqjw
func Reopen(handle *os.File, flags int) (*os.File, error) {
procRoot, err := getProcRoot()
if err != nil {
return nil, err
}
// We can't operate on /proc/thread-self/fd/$n directly when doing a
// re-open, so we need to open /proc/thread-self/fd and then open a single
// final component.
procFdDir, closer, err := procThreadSelf(procRoot, "fd/")
if err != nil {
return nil, fmt.Errorf("get safe /proc/thread-self/fd handle: %w", err)
}
defer procFdDir.Close()
defer closer()
// Try to detect if there is a mount on top of the magic-link we are about
// to open. If we are using unsafeHostProcRoot(), this could change after
// we check it (and there's nothing we can do about that) but for
// privateProcRoot() this should be guaranteed to be safe (at least since
// Linux 5.12[1], when anonymous mount namespaces were completely isolated
// from external mounts including mount propagation events).
//
// [1]: Linux commit ee2e3f50629f ("mount: fix mounting of detached mounts
// onto targets that reside on shared mounts").
fdStr := strconv.Itoa(int(handle.Fd()))
if err := checkSymlinkOvermount(procRoot, procFdDir, fdStr); err != nil {
return nil, fmt.Errorf("check safety of /proc/thread-self/fd/%s magiclink: %w", fdStr, err)
}
flags |= unix.O_CLOEXEC
// Rather than just wrapping openatFile, open-code it so we can copy
// handle.Name().
reopenFd, err := unix.Openat(int(procFdDir.Fd()), fdStr, flags, 0)
if err != nil {
return nil, fmt.Errorf("reopen fd %d: %w", handle.Fd(), err)
}
return os.NewFile(uintptr(reopenFd), handle.Name()), nil
}

View File

@@ -1,127 +0,0 @@
//go:build linux
// Copyright (C) 2024 SUSE LLC. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package securejoin
import (
"errors"
"fmt"
"os"
"path/filepath"
"strings"
"golang.org/x/sys/unix"
)
var hasOpenat2 = sync_OnceValue(func() bool {
fd, err := unix.Openat2(unix.AT_FDCWD, ".", &unix.OpenHow{
Flags: unix.O_PATH | unix.O_CLOEXEC,
Resolve: unix.RESOLVE_NO_SYMLINKS | unix.RESOLVE_IN_ROOT,
})
if err != nil {
return false
}
_ = unix.Close(fd)
return true
})
func scopedLookupShouldRetry(how *unix.OpenHow, err error) bool {
// RESOLVE_IN_ROOT (and RESOLVE_BENEATH) can return -EAGAIN if we resolve
// ".." while a mount or rename occurs anywhere on the system. This could
// happen spuriously, or as the result of an attacker trying to mess with
// us during lookup.
//
// In addition, scoped lookups have a "safety check" at the end of
// complete_walk which will return -EXDEV if the final path is not in the
// root.
return how.Resolve&(unix.RESOLVE_IN_ROOT|unix.RESOLVE_BENEATH) != 0 &&
(errors.Is(err, unix.EAGAIN) || errors.Is(err, unix.EXDEV))
}
const scopedLookupMaxRetries = 10
func openat2File(dir *os.File, path string, how *unix.OpenHow) (*os.File, error) {
fullPath := dir.Name() + "/" + path
// Make sure we always set O_CLOEXEC.
how.Flags |= unix.O_CLOEXEC
var tries int
for tries < scopedLookupMaxRetries {
fd, err := unix.Openat2(int(dir.Fd()), path, how)
if err != nil {
if scopedLookupShouldRetry(how, err) {
// We retry a couple of times to avoid the spurious errors, and
// if we are being attacked then returning -EAGAIN is the best
// we can do.
tries++
continue
}
return nil, &os.PathError{Op: "openat2", Path: fullPath, Err: err}
}
// If we are using RESOLVE_IN_ROOT, the name we generated may be wrong.
// NOTE: The procRoot code MUST NOT use RESOLVE_IN_ROOT, otherwise
// you'll get infinite recursion here.
if how.Resolve&unix.RESOLVE_IN_ROOT == unix.RESOLVE_IN_ROOT {
if actualPath, err := rawProcSelfFdReadlink(fd); err == nil {
fullPath = actualPath
}
}
return os.NewFile(uintptr(fd), fullPath), nil
}
return nil, &os.PathError{Op: "openat2", Path: fullPath, Err: errPossibleAttack}
}
func lookupOpenat2(root *os.File, unsafePath string, partial bool) (*os.File, string, error) {
if !partial {
file, err := openat2File(root, unsafePath, &unix.OpenHow{
Flags: unix.O_PATH | unix.O_CLOEXEC,
Resolve: unix.RESOLVE_IN_ROOT | unix.RESOLVE_NO_MAGICLINKS,
})
return file, "", err
}
return partialLookupOpenat2(root, unsafePath)
}
// partialLookupOpenat2 is an alternative implementation of
// partialLookupInRoot, using openat2(RESOLVE_IN_ROOT) to more safely get a
// handle to the deepest existing child of the requested path within the root.
func partialLookupOpenat2(root *os.File, unsafePath string) (*os.File, string, error) {
// TODO: Implement this as a git-bisect-like binary search.
unsafePath = filepath.ToSlash(unsafePath) // noop
endIdx := len(unsafePath)
var lastError error
for endIdx > 0 {
subpath := unsafePath[:endIdx]
handle, err := openat2File(root, subpath, &unix.OpenHow{
Flags: unix.O_PATH | unix.O_CLOEXEC,
Resolve: unix.RESOLVE_IN_ROOT | unix.RESOLVE_NO_MAGICLINKS,
})
if err == nil {
// Jump over the slash if we have a non-"" remainingPath.
if endIdx < len(unsafePath) {
endIdx += 1
}
// We found a subpath!
return handle, unsafePath[endIdx:], lastError
}
if errors.Is(err, unix.ENOENT) || errors.Is(err, unix.ENOTDIR) {
// That path doesn't exist, let's try the next directory up.
endIdx = strings.LastIndexByte(subpath, '/')
lastError = err
continue
}
return nil, "", fmt.Errorf("open subpath: %w", err)
}
// If we couldn't open anything, the whole subpath is missing. Return a
// copy of the root fd so that the caller doesn't close this one by
// accident.
rootClone, err := dupFile(root)
if err != nil {
return nil, "", err
}
return rootClone, unsafePath, lastError
}

View File

@@ -1,59 +0,0 @@
//go:build linux
// Copyright (C) 2024 SUSE LLC. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package securejoin
import (
"os"
"path/filepath"
"golang.org/x/sys/unix"
)
func dupFile(f *os.File) (*os.File, error) {
fd, err := unix.FcntlInt(f.Fd(), unix.F_DUPFD_CLOEXEC, 0)
if err != nil {
return nil, os.NewSyscallError("fcntl(F_DUPFD_CLOEXEC)", err)
}
return os.NewFile(uintptr(fd), f.Name()), nil
}
func openatFile(dir *os.File, path string, flags int, mode int) (*os.File, error) {
// Make sure we always set O_CLOEXEC.
flags |= unix.O_CLOEXEC
fd, err := unix.Openat(int(dir.Fd()), path, flags, uint32(mode))
if err != nil {
return nil, &os.PathError{Op: "openat", Path: dir.Name() + "/" + path, Err: err}
}
// All of the paths we use with openatFile(2) are guaranteed to be
// lexically safe, so we can use path.Join here.
fullPath := filepath.Join(dir.Name(), path)
return os.NewFile(uintptr(fd), fullPath), nil
}
func fstatatFile(dir *os.File, path string, flags int) (unix.Stat_t, error) {
var stat unix.Stat_t
if err := unix.Fstatat(int(dir.Fd()), path, &stat, flags); err != nil {
return stat, &os.PathError{Op: "fstatat", Path: dir.Name() + "/" + path, Err: err}
}
return stat, nil
}
func readlinkatFile(dir *os.File, path string) (string, error) {
size := 4096
for {
linkBuf := make([]byte, size)
n, err := unix.Readlinkat(int(dir.Fd()), path, linkBuf)
if err != nil {
return "", &os.PathError{Op: "readlinkat", Path: dir.Name() + "/" + path, Err: err}
}
if n != size {
return string(linkBuf[:n]), nil
}
// Possible truncation, resize the buffer.
size *= 2
}
}

Some files were not shown because too many files have changed in this diff.