Compare commits


28 Commits

Author SHA1 Message Date
Fabiano Fidêncio
446a083f3e genpolicy: Adapt to CRI-O
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-02-24 17:01:09 +01:00
Fabiano Fidêncio
e58f4bceb0 tests: Add CRI-O tests for qemu-coco-dev
We had zero tests with CRI-O for these setups. This adds CRI-O to the CoCo
non-TEE matrix (same scenarios as containerd, but without auto-generated
policy for now). Vanilla k8s can now be deployed with kubeadm using CRI-O;
the CRI-O version is derived from the current k8s stable release, and we
fall back to x.y-1 if that CRI-O release isn't out yet.
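The version-fallback logic described above can be sketched as follows (a hypothetical helper for illustration only; the actual test setup is shell-based and this function is not part of the change):

```rust
/// Given a Kubernetes stable version such as "1.32", return the CRI-O
/// versions to try in order: the matching x.y first, then x.y-1 as a
/// fallback in case the matching CRI-O release isn't out yet.
fn crio_version_candidates(k8s_stable: &str) -> Option<Vec<String>> {
    let (major, minor) = k8s_stable.split_once('.')?;
    let major: u32 = major.parse().ok()?;
    let minor: u32 = minor.parse().ok()?;
    let mut candidates = vec![format!("{major}.{minor}")];
    if minor > 0 {
        // Fall back to the previous minor release.
        candidates.push(format!("{major}.{}", minor - 1));
    }
    Some(candidates)
}

fn main() {
    assert_eq!(
        crio_version_candidates("1.32"),
        Some(vec!["1.32".to_string(), "1.31".to_string()])
    );
}
```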

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-02-24 17:01:04 +01:00
Manuel Huber
566bb306f1 tests: enable policy for openvpn on nydus
Specify the runAsUser, runAsGroup, and supplementalGroups values
embedded in the image's /etc/group file explicitly in the security
context. With this, genpolicy and containerd (which, when using nydus
guest-pull, lack image introspection capabilities) both use the same
user/group/additionalGids values at policy generation time and at
runtime when the OCI spec is passed.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-02-24 08:08:15 +01:00
Fupan Li
0bfb6b3c45 Merge pull request #12531 from BbolroC/blkdev-hotplug-s390x-runtime-rs
runtime-rs: Support for block device hotplug on s390x
2026-02-24 13:03:59 +08:00
Fabiano Fidêncio
a0d954cf7c tests: Enable auto-generated policies for experimental_force_guest_pull
We want to run with auto-generated policies when using experimental_force_guest_pull.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-02-23 22:15:18 +01:00
Hyounggyu Choi
4e533f82e7 tests: Remove skip condition for runtime-rs on s390x in k8s-block-volume
This commit removes the skip condition for qemu-runtime-rs on s390x
in k8s-block-volume.bats.

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2026-02-23 09:00:29 +01:00
Hyounggyu Choi
2961914f54 runtime-rs: Support for virtio-blk-ccw devices and hotplug
- Introduced `ccw_addr` field in `BlockConfig` for CCW device addresses
- Updated `CcwSubChannel` to handle CCW addresses and channel itself
- Enhanced `QemuInner` to handle CCW subchannel for hotplug operations
- Handled `virtio-blk-ccw` devices in hotplug_block_device()
- Modified resource management to accommodate `ccw_addr`

Fixes: #10373

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2026-02-23 09:00:29 +01:00
Hyounggyu Choi
e893526fad runtime-rs: Reuse constants from kata-types
Some constants are duplicated in runtime-rs even though they
are already defined in kata-types. Use the definitions from
kata-types as the single source of truth to avoid inconsistencies
between components (e.g. agent and runtime).

This change makes runtime-rs use the constants defined in
kata-types.

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2026-02-23 09:00:29 +01:00
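The single-source-of-truth pattern this commit applies is a plain re-export rather than a redefinition; a simplified, self-contained sketch (the module layout is illustrative, the names are taken from the commit's diff):

```rust
// Stand-in for the `kata-types` crate: the one place the value is defined.
mod kata_types {
    pub mod device {
        pub const DRIVER_BLK_CCW_TYPE: &str = "blk-ccw";
    }
}

// The consumer keeps its historical name but no longer duplicates the
// value, so agent and runtime can no longer drift apart.
pub use kata_types::device::DRIVER_BLK_CCW_TYPE as KATA_CCW_DEV_TYPE;

fn main() {
    // Both names resolve to the same definition.
    assert_eq!(KATA_CCW_DEV_TYPE, kata_types::device::DRIVER_BLK_CCW_TYPE);
}
```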
Hyounggyu Choi
606d193f65 runtime-rs: Set DRIVER_BLK_CCW_TYPE correctly
`DRIVER_BLK_CCW_TYPE` is defined as `blk-ccw`
in src/libs/kata-types/src/device.rs, so set
the variable in runtime-rs accordingly.

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2026-02-23 09:00:29 +01:00
Fabiano Fidêncio
96c20f8baa tests: k8s: set CreateContainerRequest (on free runners) timeout to 600s
Set KubeletConfiguration runtimeRequestTimeout to 600s mainly for CoCo
(Confidential Containers) tests, so container creation (attestation,
policy, image pull, VM start) does not hit the default CRI timeout.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-02-21 08:44:47 +01:00
Fabiano Fidêncio
9634dfa859 gatekeeper: Update test names
We need to do so after moving some of the tests to the free runners.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-02-21 08:44:47 +01:00
Fabiano Fidêncio
a6b7a2d8a4 tests: assert_pod_fail accept RunContainerError and StartError
Treat waiting.reason RunContainerError and terminated.reason StartError/Error
as container failure, so tests that expect guest image-pull failure (e.g.
wrong credentials) pass when the container fails with those states instead
of only BackOff.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-02-21 08:44:47 +01:00
Fabiano Fidêncio
42d980815a tests: skip k8s-policy-pvc on non-AKS
Otherwise it'll fail as we cannot bind the device.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-02-21 08:44:47 +01:00
Fabiano Fidêncio
1523c48a2b tests: k8s: Align coco / erofs job declaration
Later on we may even think about merging those, but for now let's at
least make sure the envs used are the same / declared in a similar place
for each job.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-02-21 08:44:47 +01:00
Fabiano Fidêncio
1b9b53248e tests: k8s: coco: rely more on free runners
Run all CoCo non-TEE variants in a single job on the free runner with an
explicit environment matrix (vmm, snapshotter, pull_type, kbs,
containerd_version).

Here we're testing CoCo only with the "active" version of containerd.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-21 08:44:47 +01:00
Fabiano Fidêncio
1fa3475e36 tests: k8s: rely more on free runners
We were running most of the k8s integration tests on AKS. The ones that
don't actually depend on AKS's environment now run on normal
ubuntu-24.04 GitHub runners instead: we bring up a kubeadm cluster
there, test with both containerd lts and active, and skip attestation
tests since those runtimes don't need them. AKS is left only for the
jobs that do depend on it.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-21 08:44:47 +01:00
Fabiano Fidêncio
2f056484f3 versions: Bump containerd active to 2.2
SSIA

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-02-21 08:44:47 +01:00
Zvonko Kaiser
6d1eaa1065 Merge pull request #12461 from manuelh-dev/mahuber/guest-pull-bats
tests: enable more scenarios for k8s-guest-pull-image.bats
2026-02-20 08:48:54 -05:00
Zvonko Kaiser
1de7dd58f5 gpu: Add NVLSM daemon
We need to chisel the NVLSM daemon for NVL5 systems.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2026-02-20 11:39:59 +01:00
Zvonko Kaiser
67d154fe47 gpu: Enable NVL5 based platform
NVL5-based HGX systems need ib_umad, fabricmanager,
and nvlsm installed.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2026-02-20 11:39:59 +01:00
Dan Mihai
ea53779b90 ci: k8s: temporarily disable mariner host
Disable mariner host testing in CI, and auto-generated policy testing
for the temporary replacements of these hosts (based on ubuntu), to work
around missing:

1. cloud-hypervisor/cloud-hypervisor@0a5e79a, that will allow Kata
   in the future to disable the nested property of guest VPs. Nested
   is enabled by default and doesn't work yet with mariner's MSHV.
2. cloud-hypervisor/cloud-hypervisor@bf6f0f8, exposed by the large
   ttrpc replies intentionally produced by the Kata CI Policy tests.

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2026-02-19 20:42:50 +01:00
Dan Mihai
3e2153bbae ci: k8s: easier to modify az aks create command
Make `az aks create` command easier to change when needed, by moving the
arguments specific to mariner nodes onto a separate line of this script.
This change also removes the need for `shellcheck disable=SC2046` here.

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2026-02-19 20:42:50 +01:00
Fabiano Fidêncio
cadbf51015 versions: Update Cloud Hypervisor to v50.0
```
This release has been tracked in v50.0 group of our roadmap project.

Configurable Nested Virtualization Option on x86_64
The nested=on|off option has been added to --cpu to allow users
to configure nested virtualization support in the guest on x86_64
hosts (for both KVM and MSHV). The default value is on to maintain
consistency with existing behavior. (#7408)

Compression Support for QCOW2
QCOW2 support has been extended to handle compression clusters based on
zlib and zstd. (#7462)

Notable Performance Improvements
Performance of live migration has been improved via an optimized
implementation of dirty bitmap maintenance. (#7468)

Live Disk Resizing Support for Raw Images
The /vm.resize-disk API has been introduced to allow users to resize block
devices backed by raw images while a guest is running. (#7476)

Developer Experience Improvements
Significant improvements have been made to developer experience and
productivity. These include a simplified root manifest, codified and
tightened Clippy lints, and streamlined workflows for cargo clippy and
cargo test. (#7489)

Improved File-level Locking Support
Block devices now use byte-range advisory locks instead of whole-file
locks. While both approaches prevent multiple Cloud Hypervisor instances
from simultaneously accessing the same disk image with write
permissions, byte-range locks provide better compatibility with network
storage backends. (#7494)

Logging Improvements
Logs now include event information generated by the event-monitor
module. (#7512)

Notable Bug Fixes
* Fix several issues around CPUID in the guest (#7485, #7495, #7508)
* Fix snapshot/restore for Windows Guest (#7492)
* Respect queue size in block performance tests (#7515)
* Fix several Serial Manager issues (#7502)
* Fix several seccomp violation issues (#7477, #7497, #7518)
* Fix various issues around block and qcow (#7526, #7528, #7537, #7546,
  #7549)
* Retrieve MSRs list correctly on MSHV (#7543)
* Fix live migration (and snapshot/restore) with AMX state (#7534)
```

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2026-02-19 20:42:50 +01:00
Dan Mihai
d8b403437f static-build: delete cloud-hypervisor directory
This cloud-hypervisor is a directory, so it needs "rm -rf" instead of
"rm -f".

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2026-02-19 20:42:50 +01:00
Manuel Huber
fd340ac91c tests: remove skips for some guest-pull scenarios
Issue 10838 is resolved by the prior commit, which enables the -m
option of the kernel build for confidential guests that are not
users of the measured rootfs, and by commit
976df22119, which ensures
relevant user space packages are present.
Not every confidential guest has the measured rootfs option
enabled; every confidential guest is, in contrast, assumed to
support CDH's secure storage features.

We also adjust test timeouts to account for occasional spikes on
our bare metal runners (e.g., SNP, TDX, s390x).

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-02-19 10:10:55 -08:00
Harshitha Gowda
728d8656ee tests: Set sev-snp, qemu-snp CIs as required
run-k8s-tests-on-tee (sev-snp, qemu-snp)

Signed-off-by: Harshitha Gowda <hgowda@amd.com>
2026-02-19 16:41:29 +01:00
Manuel Huber
4c760fd031 build: add CONFIDENTIAL_GUEST variable for kernel
This change adds the CONFIDENTIAL_GUEST variable to the kernel
build logic. Similar to commit
976df22119, we would like to enable
the cryptsetup functionality not only when building a measured
root file system, but also when building for a confidential guest.
Not all confidential guests currently use a measured root
filesystem, and the two aspects should indeed be decoupled.

With the current convention, a confidential guest is a user of CDH
and its storage features, so a name tied to CDH storage
functionality might have suited the CONFIDENTIAL_GUEST variable
better. The kernel build script's -m parameter could likewise be
improved: as this change indicates, measured rootfs builds are not
the only ones that need the cryptsetup.conf file.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-02-17 12:44:50 -08:00
Manuel Huber
d3742ca877 tests: enable guest pull bats for force guest pull
Similar to k8s-guest-pull-image-authenticated and
k8s-guest-pull-image-signature, enable k8s-guest-pull-image to
run against the experimental force guest pull method.
Only k8s-guest-pull-image-encrypted requires nydus.

Signed-off-by: Manuel Huber <manuelh@nvidia.com>
2026-02-17 12:44:50 -08:00
49 changed files with 1567 additions and 365 deletions

View File

@@ -297,6 +297,21 @@ jobs:
AZ_TENANT_ID: ${{ secrets.AZ_TENANT_ID }}
AZ_SUBSCRIPTION_ID: ${{ secrets.AZ_SUBSCRIPTION_ID }}
run-k8s-tests-on-free-runner:
if: ${{ inputs.skip-test != 'yes' }}
needs: publish-kata-deploy-payload-amd64
permissions:
contents: read
uses: ./.github/workflows/run-k8s-tests-on-free-runner.yaml
with:
tarball-suffix: -${{ inputs.tag }}
registry: ghcr.io
repo: ${{ github.repository_owner }}/kata-deploy-ci
tag: ${{ inputs.tag }}-amd64
commit-hash: ${{ inputs.commit-hash }}
pr-number: ${{ inputs.pr-number }}
target-branch: ${{ inputs.target-branch }}
run-k8s-tests-on-arm64:
if: ${{ inputs.skip-test != 'yes' }}
needs: publish-kata-deploy-payload-arm64

View File

@@ -42,17 +42,6 @@ jobs:
strategy:
fail-fast: false
matrix:
host_os:
- ubuntu
vmm:
- clh
- dragonball
- qemu
- qemu-runtime-rs
- cloud-hypervisor
instance-type:
- small
- normal
include:
- host_os: cbl-mariner
vmm: clh
@@ -80,6 +69,7 @@ jobs:
KUBERNETES: "vanilla"
K8S_TEST_HOST_TYPE: ${{ matrix.instance-type }}
GENPOLICY_PULL_METHOD: ${{ matrix.genpolicy-pull-method }}
RUNS_ON_AKS: "true"
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:

View File

@@ -0,0 +1,127 @@
# Run Kubernetes integration tests on free GitHub runners with a locally
# deployed cluster (kubeadm).
name: CI | Run kubernetes tests on free runner
on:
workflow_call:
inputs:
tarball-suffix:
required: false
type: string
registry:
required: true
type: string
repo:
required: true
type: string
tag:
required: true
type: string
pr-number:
required: true
type: string
commit-hash:
required: false
type: string
target-branch:
required: false
type: string
default: ""
permissions: {}
jobs:
run-k8s-tests:
name: run-k8s-tests
strategy:
fail-fast: false
matrix:
environment: [
{ vmm: clh, containerd_version: lts },
{ vmm: clh, containerd_version: active },
{ vmm: dragonball, containerd_version: lts },
{ vmm: dragonball, containerd_version: active },
{ vmm: qemu, containerd_version: lts },
{ vmm: qemu, containerd_version: active },
{ vmm: qemu-runtime-rs, containerd_version: lts },
{ vmm: qemu-runtime-rs, containerd_version: active },
{ vmm: cloud-hypervisor, containerd_version: lts },
{ vmm: cloud-hypervisor, containerd_version: active },
]
runs-on: ubuntu-24.04
permissions:
contents: read
env:
DOCKER_REGISTRY: ${{ inputs.registry }}
DOCKER_REPO: ${{ inputs.repo }}
DOCKER_TAG: ${{ inputs.tag }}
GH_PR_NUMBER: ${{ inputs.pr-number }}
KATA_HOST_OS: ubuntu
KATA_HYPERVISOR: ${{ matrix.environment.vmm }}
KUBERNETES: vanilla
K8S_TEST_HOST_TYPE: baremetal-no-attestation
CONTAINER_ENGINE: containerd
CONTAINER_ENGINE_VERSION: ${{ matrix.environment.containerd_version }}
GH_TOKEN: ${{ github.token }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-kata-tools-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-tools-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-tools-artifacts
- name: Install kata-tools
run: bash tests/integration/kubernetes/gha-run.sh install-kata-tools kata-tools-artifacts
- name: Remove unnecessary directories to free up space
run: |
sudo rm -rf /usr/local/.ghcup
sudo rm -rf /opt/hostedtoolcache/CodeQL
sudo rm -rf /usr/local/lib/android
sudo rm -rf /usr/share/dotnet
sudo rm -rf /opt/ghc
sudo rm -rf /usr/local/share/boost
sudo rm -rf /usr/lib/jvm
sudo rm -rf /usr/share/swift
sudo rm -rf /usr/local/share/powershell
sudo rm -rf /usr/local/julia*
sudo rm -rf /opt/az
sudo rm -rf /usr/local/share/chromium
sudo rm -rf /opt/microsoft
sudo rm -rf /opt/google
sudo rm -rf /usr/lib/firefox
- name: Deploy k8s (kubeadm)
run: bash tests/integration/kubernetes/gha-run.sh deploy-k8s
- name: Install `bats`
run: bash tests/integration/kubernetes/gha-run.sh install-bats
- name: Deploy Kata
timeout-minutes: 20
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata
- name: Run tests
timeout-minutes: 60
run: bash tests/integration/kubernetes/gha-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests
- name: Delete kata-deploy
if: always()
timeout-minutes: 15
run: bash tests/integration/kubernetes/gha-run.sh cleanup

View File

@@ -140,165 +140,36 @@ jobs:
strategy:
fail-fast: false
matrix:
vmm:
- qemu-coco-dev
- qemu-coco-dev-runtime-rs
snapshotter:
- nydus
pull-type:
- guest-pull
include:
- pull-type: experimental-force-guest-pull
vmm: qemu-coco-dev
snapshotter: ""
runs-on: ubuntu-22.04
environment: [
{ vmm: qemu-coco-dev, snapshotter: nydus, pull_type: guest-pull },
{ vmm: qemu-coco-dev-runtime-rs, snapshotter: nydus, pull_type: guest-pull },
{ vmm: qemu-coco-dev, snapshotter: "", pull_type: experimental-force-guest-pull },
]
runs-on: ubuntu-24.04
permissions:
id-token: write # Used for OIDC access to log into Azure
contents: read
environment: ci
env:
DOCKER_REGISTRY: ${{ inputs.registry }}
DOCKER_REPO: ${{ inputs.repo }}
DOCKER_TAG: ${{ inputs.tag }}
GH_PR_NUMBER: ${{ inputs.pr-number }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
KATA_HYPERVISOR: ${{ matrix.environment.vmm }}
# Some tests rely on that variable to run (or not)
KBS: "true"
# Set the KBS ingress handler (empty string disables handling)
KBS_INGRESS: "aks"
KBS_INGRESS: "nodeport"
KUBERNETES: "vanilla"
PULL_TYPE: ${{ matrix.pull-type }}
PULL_TYPE: ${{ matrix.environment.pull_type }}
AUTHENTICATED_IMAGE_USER: ${{ vars.AUTHENTICATED_IMAGE_USER }}
AUTHENTICATED_IMAGE_PASSWORD: ${{ secrets.AUTHENTICATED_IMAGE_PASSWORD }}
SNAPSHOTTER: ${{ matrix.snapshotter }}
EXPERIMENTAL_FORCE_GUEST_PULL: ${{ matrix.pull-type == 'experimental-force-guest-pull' && matrix.vmm || '' }}
# Caution: current ingress controller used to expose the KBS service
# requires much vCPUs, lefting only a few for the tests. Depending on the
# host type chose it will result on the creation of a cluster with
# insufficient resources.
SNAPSHOTTER: ${{ matrix.environment.snapshotter }}
EXPERIMENTAL_FORCE_GUEST_PULL: ${{ matrix.environment.pull_type == 'experimental-force-guest-pull' && matrix.environment.vmm || '' }}
AUTO_GENERATE_POLICY: "yes"
K8S_TEST_HOST_TYPE: "all"
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-kata-tools-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-tools-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-tools-artifacts
- name: Install kata-tools
run: bash tests/integration/kubernetes/gha-run.sh install-kata-tools kata-tools-artifacts
- name: Log into the Azure account
uses: azure/login@a457da9ea143d694b1b9c7c869ebb04ebe844ef5 # v2.3.0
with:
client-id: ${{ secrets.AZ_APPID }}
tenant-id: ${{ secrets.AZ_TENANT_ID }}
subscription-id: ${{ secrets.AZ_SUBSCRIPTION_ID }}
- name: Create AKS cluster
uses: nick-fields/retry@ce71cc2ab81d554ebbe88c79ab5975992d79ba08 # v3.0.2
with:
timeout_minutes: 15
max_attempts: 20
retry_on: error
retry_wait_seconds: 10
command: bash tests/integration/kubernetes/gha-run.sh create-cluster
- name: Install `bats`
run: bash tests/integration/kubernetes/gha-run.sh install-bats
- name: Install `kubectl`
uses: azure/setup-kubectl@776406bce94f63e41d621b960d78ee25c8b76ede # v4.0.1
with:
version: 'latest'
- name: Download credentials for the Kubernetes CLI to use them
run: bash tests/integration/kubernetes/gha-run.sh get-cluster-credentials
- name: Deploy Kata
timeout-minutes: 20
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata-aks
env:
USE_EXPERIMENTAL_SETUP_SNAPSHOTTER: ${{ env.SNAPSHOTTER == 'nydus' }}
AUTO_GENERATE_POLICY: ${{ env.PULL_TYPE == 'experimental-force-guest-pull' && 'no' || 'yes' }}
- name: Deploy CoCo KBS
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh deploy-coco-kbs
- name: Install `kbs-client`
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh install-kbs-client
- name: Deploy CSI driver
timeout-minutes: 5
run: bash tests/integration/kubernetes/gha-run.sh deploy-csi-driver
- name: Run tests
timeout-minutes: 80
run: bash tests/integration/kubernetes/gha-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests
- name: Refresh OIDC token in case access token expired
if: always()
uses: azure/login@a457da9ea143d694b1b9c7c869ebb04ebe844ef5 # v2.3.0
with:
client-id: ${{ secrets.AZ_APPID }}
tenant-id: ${{ secrets.AZ_TENANT_ID }}
subscription-id: ${{ secrets.AZ_SUBSCRIPTION_ID }}
- name: Delete AKS cluster
if: always()
timeout-minutes: 15
run: bash tests/integration/kubernetes/gha-run.sh delete-cluster
# Generate jobs for testing CoCo on non-TEE environments with erofs-snapshotter
run-k8s-tests-coco-nontee-with-erofs-snapshotter:
name: run-k8s-tests-coco-nontee-with-erofs-snapshotter
strategy:
fail-fast: false
matrix:
vmm:
- qemu-coco-dev
snapshotter:
- erofs
pull-type:
- default
runs-on: ubuntu-24.04
environment: ci
env:
DOCKER_REGISTRY: ${{ inputs.registry }}
DOCKER_REPO: ${{ inputs.repo }}
DOCKER_TAG: ${{ inputs.tag }}
GH_PR_NUMBER: ${{ inputs.pr-number }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
# Some tests rely on that variable to run (or not)
KBS: "false"
# Set the KBS ingress handler (empty string disables handling)
KBS_INGRESS: ""
KUBERNETES: "vanilla"
CONTAINER_ENGINE: "containerd"
CONTAINER_ENGINE_VERSION: "v2.2"
PULL_TYPE: ${{ matrix.pull-type }}
SNAPSHOTTER: ${{ matrix.snapshotter }}
USE_EXPERIMENTAL_SETUP_SNAPSHOTTER: "true"
K8S_TEST_HOST_TYPE: "all"
# We are skipping the auto generated policy tests for now,
# but those should be enabled as soon as we work on that.
AUTO_GENERATE_POLICY: "no"
CONTAINER_ENGINE_VERSION: "active"
GH_TOKEN: ${{ github.token }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
@@ -342,8 +213,221 @@ jobs:
- name: Deploy kubernetes
timeout-minutes: 15
run: bash tests/integration/kubernetes/gha-run.sh deploy-k8s
- name: Install `bats`
run: bash tests/integration/kubernetes/gha-run.sh install-bats
- name: Deploy Kata
timeout-minutes: 20
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata
env:
GH_TOKEN: ${{ github.token }}
USE_EXPERIMENTAL_SETUP_SNAPSHOTTER: ${{ matrix.environment.snapshotter == 'nydus' }}
- name: Deploy CoCo KBS
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh deploy-coco-kbs
- name: Install `kbs-client`
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh install-kbs-client
- name: Deploy CSI driver
timeout-minutes: 5
run: bash tests/integration/kubernetes/gha-run.sh deploy-csi-driver
- name: Run tests
timeout-minutes: 80
run: bash tests/integration/kubernetes/gha-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests
- name: Delete kata-deploy
if: always()
timeout-minutes: 15
run: bash tests/integration/kubernetes/gha-run.sh cleanup
- name: Delete CoCo KBS
if: always()
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh delete-coco-kbs
- name: Delete CSI driver
if: always()
timeout-minutes: 5
run: bash tests/integration/kubernetes/gha-run.sh delete-csi-driver
run-k8s-tests-coco-nontee-crio:
name: run-k8s-tests-coco-nontee-crio
strategy:
fail-fast: false
matrix:
vmm:
- qemu-coco-dev
runs-on: fidencio-crio
permissions:
contents: read
environment: ci
env:
DOCKER_REGISTRY: ${{ inputs.registry }}
DOCKER_REPO: ${{ inputs.repo }}
DOCKER_TAG: ${{ inputs.tag }}
GH_PR_NUMBER: ${{ inputs.pr-number }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
KBS: "true"
KBS_INGRESS: "nodeport"
KUBERNETES: "vanilla"
PULL_TYPE: "guest-pull"
AUTHENTICATED_IMAGE_USER: ${{ vars.AUTHENTICATED_IMAGE_USER }}
AUTHENTICATED_IMAGE_PASSWORD: ${{ secrets.AUTHENTICATED_IMAGE_PASSWORD }}
SNAPSHOTTER: ""
EXPERIMENTAL_FORCE_GUEST_PULL: ""
AUTO_GENERATE_POLICY: "yes"
K8S_TEST_HOST_TYPE: "all"
CONTAINER_ENGINE: "crio"
CONTAINER_RUNTIME: "crio"
CONTAINER_ENGINE_VERSION: "active"
GH_TOKEN: ${{ github.token }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-kata-tools-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-tools-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-tools-artifacts
- name: Install kata-tools
run: bash tests/integration/kubernetes/gha-run.sh install-kata-tools kata-tools-artifacts
- name: Deploy Kata
timeout-minutes: 20
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata
- name: Deploy CoCo KBS
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh deploy-coco-kbs
- name: Install `kbs-client`
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh install-kbs-client
- name: Deploy CSI driver
timeout-minutes: 5
run: bash tests/integration/kubernetes/gha-run.sh deploy-csi-driver
- name: Run tests
timeout-minutes: 80
run: bash tests/integration/kubernetes/gha-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests
- name: Delete kata-deploy
if: always()
timeout-minutes: 15
run: bash tests/integration/kubernetes/gha-run.sh cleanup
- name: Delete CoCo KBS
if: always()
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh delete-coco-kbs
- name: Delete CSI driver
if: always()
timeout-minutes: 5
run: bash tests/integration/kubernetes/gha-run.sh delete-csi-driver
# Generate jobs for testing CoCo on non-TEE environments with erofs-snapshotter
run-k8s-tests-coco-nontee-with-erofs-snapshotter:
name: run-k8s-tests-coco-nontee-with-erofs-snapshotter
strategy:
fail-fast: false
matrix:
vmm:
- qemu-coco-dev
snapshotter:
- erofs
pull-type:
- default
runs-on: ubuntu-24.04
environment: ci
env:
DOCKER_REGISTRY: ${{ inputs.registry }}
DOCKER_REPO: ${{ inputs.repo }}
DOCKER_TAG: ${{ inputs.tag }}
GH_PR_NUMBER: ${{ inputs.pr-number }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
# Some tests rely on that variable to run (or not)
KBS: "false"
# Set the KBS ingress handler (empty string disables handling)
KBS_INGRESS: ""
KUBERNETES: "vanilla"
CONTAINER_ENGINE: "containerd"
CONTAINER_ENGINE_VERSION: "active"
PULL_TYPE: ${{ matrix.pull-type }}
SNAPSHOTTER: ${{ matrix.snapshotter }}
USE_EXPERIMENTAL_SETUP_SNAPSHOTTER: "true"
K8S_TEST_HOST_TYPE: "all"
# We are skipping the auto generated policy tests for now,
# but those should be enabled as soon as we work on that.
AUTO_GENERATE_POLICY: "no"
GH_TOKEN: ${{ github.token }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-kata-tools-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-tools-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-tools-artifacts
- name: Install kata-tools
run: bash tests/integration/kubernetes/gha-run.sh install-kata-tools kata-tools-artifacts
- name: Remove unnecessary directories to free up space
run: |
sudo rm -rf /usr/local/.ghcup
sudo rm -rf /opt/hostedtoolcache/CodeQL
sudo rm -rf /usr/local/lib/android
sudo rm -rf /usr/share/dotnet
sudo rm -rf /opt/ghc
sudo rm -rf /usr/local/share/boost
sudo rm -rf /usr/lib/jvm
sudo rm -rf /usr/share/swift
sudo rm -rf /usr/local/share/powershell
sudo rm -rf /usr/local/julia*
sudo rm -rf /opt/az
sudo rm -rf /usr/local/share/chromium
sudo rm -rf /opt/microsoft
sudo rm -rf /opt/google
sudo rm -rf /usr/lib/firefox
- name: Deploy kubernetes
timeout-minutes: 15
run: bash tests/integration/kubernetes/gha-run.sh deploy-k8s
- name: Install `bats`
run: bash tests/integration/kubernetes/gha-run.sh install-bats
@@ -363,3 +447,13 @@ jobs:
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests
- name: Delete kata-deploy
if: always()
timeout-minutes: 15
run: bash tests/integration/kubernetes/gha-run.sh cleanup
- name: Delete CSI driver
if: always()
timeout-minutes: 5
run: bash tests/integration/kubernetes/gha-run.sh delete-csi-driver

View File

@@ -49,6 +49,8 @@ In order to allow Kubelet to use containerd (using the CRI interface), configure
EOF
```
For Kata Containers (and especially CoCo / Confidential Containers tests), use at least `--runtime-request-timeout=600s` (10m) so CRI CreateContainerRequest does not time out.
- Inform systemd about the new configuration
```bash

View File

@@ -13,17 +13,17 @@ use crate::device::DeviceType;
use crate::Hypervisor as hypervisor;
use anyhow::{Context, Result};
use async_trait::async_trait;
pub use kata_types::device::{
DRIVER_BLK_CCW_TYPE as KATA_CCW_DEV_TYPE, DRIVER_BLK_MMIO_TYPE as KATA_MMIO_BLK_DEV_TYPE,
DRIVER_BLK_PCI_TYPE as KATA_BLK_DEV_TYPE, DRIVER_NVDIMM_TYPE as KATA_NVDIMM_DEV_TYPE,
DRIVER_SCSI_TYPE as KATA_SCSI_DEV_TYPE,
};
/// VIRTIO_BLOCK_PCI indicates block driver is virtio-pci based
pub const VIRTIO_BLOCK_PCI: &str = "virtio-blk-pci";
pub const VIRTIO_BLOCK_MMIO: &str = "virtio-blk-mmio";
pub const VIRTIO_BLOCK_CCW: &str = "virtio-blk-ccw";
pub const VIRTIO_PMEM: &str = "virtio-pmem";
pub const KATA_MMIO_BLK_DEV_TYPE: &str = "mmioblk";
pub const KATA_BLK_DEV_TYPE: &str = "blk";
pub const KATA_CCW_DEV_TYPE: &str = "ccw";
pub const KATA_NVDIMM_DEV_TYPE: &str = "nvdimm";
pub const KATA_SCSI_DEV_TYPE: &str = "scsi";
#[derive(Clone, Copy, Debug, Default)]
pub enum BlockDeviceAio {
@@ -95,6 +95,9 @@ pub struct BlockConfig {
/// scsi_addr is of the format SCSI-Id:LUN
pub scsi_addr: Option<String>,
/// CCW device address for virtio-blk-ccw on s390x (e.g., "0.0.0005")
pub ccw_addr: Option<String>,
/// device attach count
pub attach_count: u64,

View File

@@ -392,7 +392,7 @@ impl ToQemuParams for Cpu {
/// Error type for CCW Subchannel operations
#[derive(Debug)]
#[allow(dead_code)]
enum CcwError {
pub enum CcwError {
DeviceAlreadyExists(String), // Error when trying to add an existing device
#[allow(dead_code)]
DeviceNotFound(String), // Error when trying to remove a nonexistent device
@@ -423,7 +423,7 @@ impl CcwSubChannel {
/// # Returns
/// - `Result<u32, CcwError>`: slot index of the added device
/// or an error if the device already exists
fn add_device(&mut self, dev_id: &str) -> Result<u32, CcwError> {
pub fn add_device(&mut self, dev_id: &str) -> Result<u32, CcwError> {
if self.devices.contains_key(dev_id) {
Err(CcwError::DeviceAlreadyExists(dev_id.to_owned()))
} else {
@@ -442,8 +442,7 @@ impl CcwSubChannel {
/// # Returns
/// - `Result<(), CcwError>`: Ok(()) if the device was removed
/// or an error if the device was not found
#[allow(dead_code)]
fn remove_device(&mut self, dev_id: &str) -> Result<(), CcwError> {
pub fn remove_device(&mut self, dev_id: &str) -> Result<(), CcwError> {
if self.devices.remove(dev_id).is_some() {
Ok(())
} else {
@@ -451,17 +450,30 @@ impl CcwSubChannel {
}
}
/// Formats the CCW address for a given slot
/// Formats the CCW address for a given slot.
/// Uses the 0xfe channel subsystem ID used by QEMU.
///
/// # Arguments
/// - `slot`: slot index
///
/// # Returns
/// - `String`: formatted CCW address (e.g. `fe.0.0000`)
fn address_format_ccw(&self, slot: u32) -> String {
pub fn address_format_ccw(&self, slot: u32) -> String {
format!("fe.{:x}.{:04x}", self.addr, slot)
}
/// Formats the guest-visible CCW address for a given slot.
/// Uses channel subsystem ID 0 (guest perspective).
///
/// # Arguments
/// - `slot`: slot index
///
/// # Returns
/// - `String`: formatted guest-visible CCW address (e.g. `0.0.0000`)
pub fn address_format_ccw_for_virt_server(&self, slot: u32) -> String {
format!("0.{:x}.{:04x}", self.addr, slot)
}
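The two formatters above differ only in the channel subsystem ID: QEMU expects `fe` on the host side, while the guest sees channel subsystem 0. A minimal free-function sketch of the same formatting (the real methods read `addr` from the subchannel struct):

```rust
/// Host-side (QEMU) CCW address: channel subsystem ID 0xfe, which QEMU
/// expects for the devno property on the command line and over QMP.
fn address_format_ccw(addr: u32, slot: u32) -> String {
    format!("fe.{:x}.{:04x}", addr, slot)
}

/// Guest-visible CCW address: channel subsystem ID 0, which is what the
/// guest kernel (and hence kata-agent) sees.
fn address_format_ccw_for_virt_server(addr: u32, slot: u32) -> String {
    format!("0.{:x}.{:04x}", addr, slot)
}

fn main() {
    // Slot 5 on subchannel set 0: "fe.0.0005" for QEMU, "0.0.0005" in the guest.
    assert_eq!(address_format_ccw(0, 5), "fe.0.0005");
    assert_eq!(address_format_ccw_for_virt_server(0, 5), "0.0.0005");
}
```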
/// Sets the address of the subchannel.
/// # Arguments
/// - `addr`: subchannel address to set
@@ -2274,6 +2286,12 @@ impl<'a> QemuCmdLine<'a> {
Ok(qemu_cmd_line)
}
/// Takes ownership of the CCW subchannel, leaving `None` in its place.
/// Used to transfer boot-time CCW state to Qmp for hotplug allocation.
pub fn take_ccw_subchannel(&mut self) -> Option<CcwSubChannel> {
self.ccw_subchannel.take()
}
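`take_ccw_subchannel` relies on `Option::take` so the subchannel state always has exactly one owner: the command-line generator during boot, then Qmp for hotplug. A simplified sketch of that handoff, with stand-in structs (the real ones carry far more state):

```rust
/// Stand-in for the real CcwSubChannel; only the slot counter matters here.
struct CcwSubChannel {
    next_slot: u32,
}

struct QemuCmdLine {
    ccw_subchannel: Option<CcwSubChannel>,
}

struct Qmp {
    ccw_subchannel: Option<CcwSubChannel>,
}

impl QemuCmdLine {
    /// Option::take leaves None behind, so boot-time state cannot be
    /// used (or mutated) from two places at once.
    fn take_ccw_subchannel(&mut self) -> Option<CcwSubChannel> {
        self.ccw_subchannel.take()
    }
}

impl Qmp {
    fn set_ccw_subchannel(&mut self, subchannel: CcwSubChannel) {
        self.ccw_subchannel = Some(subchannel);
    }
}

fn main() {
    let mut cmdline = QemuCmdLine {
        ccw_subchannel: Some(CcwSubChannel { next_slot: 3 }),
    };
    let mut qmp = Qmp { ccw_subchannel: None };
    // Hand boot-time state over to QMP; hotplug continues at slot 3.
    if let Some(sub) = cmdline.take_ccw_subchannel() {
        qmp.set_ccw_subchannel(sub);
    }
    assert!(cmdline.ccw_subchannel.is_none());
    assert_eq!(qmp.ccw_subchannel.as_ref().map(|s| s.next_slot), Some(3));
}
```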
fn add_monitor(&mut self, proto: &str) -> Result<()> {
let monitor = QmpSocket::new(self.id.as_str(), MonitorProtocol::new(proto))?;
self.devices.push(Box::new(monitor));


@@ -10,6 +10,7 @@ use crate::qemu::qmp::get_qmp_socket_path;
use crate::{
device::driver::ProtectionDeviceConfig, hypervisor_persist::HypervisorState, selinux,
HypervisorConfig, MemoryConfig, VcpuThreadIds, VsockDevice, HYPERVISOR_QEMU,
KATA_BLK_DEV_TYPE, KATA_CCW_DEV_TYPE, KATA_NVDIMM_DEV_TYPE, KATA_SCSI_DEV_TYPE,
};
use crate::utils::{
@@ -21,7 +22,7 @@ use anyhow::{anyhow, Context, Result};
use async_trait::async_trait;
use kata_sys_util::netns::NetnsGuard;
use kata_types::build_path;
use kata_types::config::hypervisor::RootlessUser;
use kata_types::config::hypervisor::{RootlessUser, VIRTIO_BLK_CCW};
use kata_types::rootless::is_rootless;
use kata_types::{
capabilities::{Capabilities, CapabilityBits},
@@ -133,18 +134,18 @@ impl QemuInner {
continue;
}
match block_dev.config.driver_option.as_str() {
"nvdimm" => cmdline.add_nvdimm(
KATA_NVDIMM_DEV_TYPE => cmdline.add_nvdimm(
&block_dev.config.path_on_host,
block_dev.config.is_readonly,
)?,
"ccw" | "blk" | "scsi" => cmdline.add_block_device(
KATA_CCW_DEV_TYPE | KATA_BLK_DEV_TYPE | KATA_SCSI_DEV_TYPE => cmdline.add_block_device(
block_dev.device_id.as_str(),
&block_dev.config.path_on_host,
block_dev
.config
.is_direct
.unwrap_or(self.config.blockdev_info.block_device_cache_direct),
block_dev.config.driver_option.as_str() == "scsi",
block_dev.config.driver_option.as_str() == KATA_SCSI_DEV_TYPE,
)?,
unsupported => {
info!(sl!(), "unsupported block device driver: {}", unsupported)
@@ -285,7 +286,12 @@ impl QemuInner {
let qmp_socket_path = get_qmp_socket_path(self.id.as_str());
match Qmp::new(&qmp_socket_path) {
Ok(qmp) => self.qmp = Some(qmp),
Ok(mut qmp) => {
if let Some(subchannel) = cmdline.take_ccw_subchannel() {
qmp.set_ccw_subchannel(subchannel);
}
self.qmp = Some(qmp);
}
Err(e) => {
error!(sl!(), "couldn't initialise QMP: {:?}", e);
return Err(e);
@@ -842,9 +848,10 @@ impl QemuInner {
qmp.hotplug_network_device(&netdev, &virtio_net_device)?
}
DeviceType::Block(mut block_device) => {
let (pci_path, scsi_addr) = qmp
let block_driver = &self.config.blockdev_info.block_device_driver;
let (pci_path, addr_str) = qmp
.hotplug_block_device(
&self.config.blockdev_info.block_device_driver,
block_driver,
block_device.config.index,
&block_device.config.path_on_host,
&block_device.config.blkdev_aio.to_string(),
@@ -857,8 +864,12 @@ impl QemuInner {
if pci_path.is_some() {
block_device.config.pci_path = pci_path;
}
if scsi_addr.is_some() {
block_device.config.scsi_addr = scsi_addr;
if let Some(addr) = addr_str {
if block_driver == VIRTIO_BLK_CCW {
block_device.config.ccw_addr = Some(addr);
} else {
block_device.config.scsi_addr = Some(addr);
}
}
return Ok(DeviceType::Block(block_device));
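The branch above routes the single address string returned by `hotplug_block_device` into the driver-appropriate field. A self-contained sketch of just that routing, with a trimmed-down config struct; the `VIRTIO_BLK_CCW` value shown is an assumption about the kata-types constant:

```rust
/// Trimmed-down stand-in for BlockConfig; the real struct has many more fields.
#[derive(Default)]
struct BlockConfig {
    scsi_addr: Option<String>,
    ccw_addr: Option<String>,
}

// Assumed value of the kata-types VIRTIO_BLK_CCW constant.
const VIRTIO_BLK_CCW: &str = "virtio-blk-ccw";

/// Store the hotplug-returned address in ccw_addr for virtio-blk-ccw,
/// in scsi_addr otherwise, mirroring the match above.
fn store_hotplug_addr(cfg: &mut BlockConfig, block_driver: &str, addr_str: Option<String>) {
    if let Some(addr) = addr_str {
        if block_driver == VIRTIO_BLK_CCW {
            cfg.ccw_addr = Some(addr);
        } else {
            cfg.scsi_addr = Some(addr);
        }
    }
}

fn main() {
    let mut ccw_cfg = BlockConfig::default();
    store_hotplug_addr(&mut ccw_cfg, VIRTIO_BLK_CCW, Some("0.0.0005".into()));
    assert_eq!(ccw_cfg.ccw_addr.as_deref(), Some("0.0.0005"));
    assert!(ccw_cfg.scsi_addr.is_none());

    let mut scsi_cfg = BlockConfig::default();
    store_hotplug_addr(&mut scsi_cfg, "virtio-scsi", Some("0:0".into()));
    assert_eq!(scsi_cfg.scsi_addr.as_deref(), Some("0:0"));
}
```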


@@ -4,12 +4,12 @@
//
use crate::device::pci_path::PciPath;
use crate::qemu::cmdline_generator::{DeviceVirtioNet, Netdev, QMP_SOCKET_FILE};
use crate::qemu::cmdline_generator::{CcwSubChannel, DeviceVirtioNet, Netdev, QMP_SOCKET_FILE};
use crate::utils::get_jailer_root;
use crate::VcpuThreadIds;
use anyhow::{anyhow, Context, Result};
use kata_types::config::hypervisor::VIRTIO_SCSI;
use kata_types::config::hypervisor::{VIRTIO_BLK_CCW, VIRTIO_SCSI};
use kata_types::rootless::is_rootless;
use nix::sys::socket::{sendmsg, ControlMessage, MsgFlags};
use qapi_qmp::{
@@ -50,6 +50,11 @@ pub struct Qmp {
// blocks seem ever to be onlined in the guest by kata-agent.
// Store as u64 to keep up the convention of bytes being represented as u64.
guest_memory_block_size: u64,
// CCW subchannel for s390x device address management.
// Transferred from QemuCmdLine after boot so that hotplug allocations
// continue from where boot-time allocations left off.
ccw_subchannel: Option<CcwSubChannel>,
}
// We have to implement Debug since the Hypervisor trait requires it and Qmp
@@ -76,6 +81,7 @@ impl Qmp {
stream,
)),
guest_memory_block_size: 0,
ccw_subchannel: None,
};
let info = qmp.qmp.handshake().context("qmp handshake failed")?;
@@ -102,6 +108,10 @@ impl Qmp {
.with_context(|| format!("timed out waiting for QMP ready: {}", qmp_sock_path))
}
pub fn set_ccw_subchannel(&mut self, subchannel: CcwSubChannel) {
self.ccw_subchannel = Some(subchannel);
}
pub fn set_ignore_shared_memory_capability(&mut self) -> Result<()> {
self.qmp
.execute(&migrate_set_capabilities {
@@ -605,6 +615,13 @@ impl Qmp {
/// {"execute":"device_add","arguments":{"driver":"scsi-hd","drive":"virtio-scsi0","id":"scsi_device_0","bus":"virtio-scsi1.0"}}
/// {"return": {}}
///
/// Hotplug virtio-blk-ccw block device on s390x
/// # virtio-blk-ccw0
/// {"execute":"blockdev_add", "arguments": {"file":"/path/to/block.image","format":"qcow2","id":"virtio-blk-ccw0"}}
/// {"return": {}}
/// {"execute":"device_add","arguments":{"driver":"virtio-blk-ccw","id":"virtio-blk-ccw0","drive":"virtio-blk-ccw0","devno":"fe.0.0005","share-rw":true}}
/// {"return": {}}
///
#[allow(clippy::too_many_arguments)]
pub fn hotplug_block_device(
&mut self,
@@ -711,6 +728,14 @@ impl Qmp {
blkdev_add_args.insert("lun".to_string(), lun.into());
blkdev_add_args.insert("share-rw".to_string(), true.into());
info!(
sl!(),
"hotplug_block_device(): device_add arguments: bus: {}, id: {}, driver: {}, blkdev_add_args: {:#?}",
"scsi0.0",
node_name,
"scsi-hd",
blkdev_add_args
);
self.qmp
.execute(&qmp::device_add {
bus: Some("scsi0.0".to_string()),
@@ -727,11 +752,60 @@ impl Qmp {
);
Ok((None, Some(scsi_addr)))
} else if block_driver == VIRTIO_BLK_CCW {
let subchannel = self
.ccw_subchannel
.as_mut()
.ok_or_else(|| anyhow!("CCW subchannel not available for virtio-blk-ccw hotplug"))?;
let slot = subchannel
.add_device(&node_name)
.map_err(|e| anyhow!("CCW subchannel add_device failed: {:?}", e))?;
let devno = subchannel.address_format_ccw(slot);
let ccw_addr = subchannel.address_format_ccw_for_virt_server(slot);
blkdev_add_args.insert("devno".to_owned(), devno.clone().into());
blkdev_add_args.insert("share-rw".to_string(), true.into());
info!(
sl!(),
"hotplug_block_device(): CCW device_add: id: {}, driver: {}, blkdev_add_args: {:#?}, ccw_addr: {}",
node_name,
block_driver,
blkdev_add_args,
ccw_addr
);
let device_add_result = self.qmp.execute(&qmp::device_add {
bus: None,
id: Some(node_name.clone()),
driver: block_driver.to_string(),
arguments: blkdev_add_args,
});
if let Err(e) = device_add_result {
// Roll back CCW subchannel state if QMP device_add fails
let _ = subchannel.remove_device(&node_name);
return Err(anyhow!("device_add {:?}", e));
}
info!(
sl!(),
"hotplug CCW block device return ccw address: {:?}", &ccw_addr
);
Ok((None, Some(ccw_addr)))
} else {
let (bus, slot) = self.find_free_slot()?;
blkdev_add_args.insert("addr".to_owned(), format!("{slot:02}").into());
blkdev_add_args.insert("share-rw".to_string(), true.into());
info!(
sl!(),
"hotplug_block_device(): device_add arguments: bus: {}, id: {}, driver: {}, blkdev_add_args: {:#?}",
bus,
node_name,
block_driver,
blkdev_add_args
);
self.qmp
.execute(&qmp::device_add {
bus: Some(bus),


@@ -429,14 +429,16 @@ impl ResourceManagerInner {
.await
.context("do handle device")?;
// create block device for kata agent,
// if driver is virtio-blk-pci, the id will be pci address.
// create block device for kata agent.
// The device ID is derived from the available address: PCI, SCSI,
// CCW, or virtual path, depending on the driver and configuration.
if let DeviceType::Block(device) = device_info {
// The following would work for the virtio-blk-pci, virtio-mmio, virtio-scsi, and virtio-blk-ccw drivers.
let id = if let Some(pci_path) = device.config.pci_path {
pci_path.to_string()
} else if let Some(scsi_address) = device.config.scsi_addr {
scsi_address
} else if let Some(ccw_addr) = device.config.ccw_addr {
ccw_addr
} else {
device.config.virt_path.clone()
};
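The fallback chain above can be expressed compactly with `Option` combinators. A sketch under the simplifying assumption that all fields are plain `String`s (the real `pci_path` is a `PciPath` that is converted with `to_string()`):

```rust
/// Trimmed-down stand-in for the block device config consulted above.
struct BlockDeviceConfig {
    pci_path: Option<String>,
    scsi_addr: Option<String>,
    ccw_addr: Option<String>,
    virt_path: String,
}

/// Precedence for the agent-facing device ID: PCI path, then SCSI
/// address, then CCW address, then the virtual path as a last resort
/// (e.g. for virtio-mmio).
fn agent_device_id(cfg: &BlockDeviceConfig) -> String {
    cfg.pci_path
        .clone()
        .or_else(|| cfg.scsi_addr.clone())
        .or_else(|| cfg.ccw_addr.clone())
        .unwrap_or_else(|| cfg.virt_path.clone())
}

fn main() {
    let cfg = BlockDeviceConfig {
        pci_path: None,
        scsi_addr: None,
        ccw_addr: Some("0.0.0005".to_string()),
        virt_path: "/dev/vda".to_string(),
    };
    // With no PCI or SCSI address present, the CCW address wins over virt_path.
    assert_eq!(agent_device_id(&cfg), "0.0.0005");
}
```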


@@ -100,7 +100,13 @@ impl BlockRootfs {
VIRTIO_BLK_MMIO => {
storage.source = device.config.virt_path;
}
VIRTIO_SCSI | VIRTIO_BLK_CCW | VIRTIO_PMEM => {
VIRTIO_BLK_CCW => {
storage.source = device
.config
.ccw_addr
.ok_or_else(|| anyhow!("CCW address missing for ccw block device"))?;
}
VIRTIO_SCSI | VIRTIO_PMEM => {
return Err(anyhow!(
"Complete support for block driver {} has not been implemented yet",
block_driver


@@ -15,6 +15,10 @@ use crate::{
};
use anyhow::{anyhow, Context, Result};
use kata_sys_util::mount::{get_mount_options, get_mount_path};
use kata_types::device::{
DRIVER_BLK_CCW_TYPE as KATA_CCW_DEV_TYPE, DRIVER_BLK_PCI_TYPE as KATA_BLK_DEV_TYPE,
DRIVER_SCSI_TYPE as KATA_SCSI_DEV_TYPE,
};
use oci_spec::runtime as oci;
use hypervisor::device::DeviceType;
@@ -22,9 +26,6 @@ use hypervisor::device::DeviceType;
pub const DEFAULT_VOLUME_FS_TYPE: &str = "ext4";
pub const KATA_MOUNT_BIND_TYPE: &str = "bind";
pub const KATA_BLK_DEV_TYPE: &str = "blk";
pub const KATA_SCSI_DEV_TYPE: &str = "scsi";
pub fn get_file_name<P: AsRef<Path>>(src: P) -> Result<String> {
let file_name = src
.as_ref()
@@ -104,6 +105,13 @@ pub async fn handle_block_volume(
return Err(anyhow!("block driver is scsi but no scsi address exists"));
}
}
KATA_CCW_DEV_TYPE => {
if let Some(ccw_addr) = device.config.ccw_addr {
ccw_addr.to_string()
} else {
return Err(anyhow!("block driver is ccw but no ccw address exists"));
}
}
_ => device.config.virt_path,
};
device_id = device.device_id;


@@ -45,6 +45,7 @@ docs/VmCoredumpData.md
docs/VmInfo.md
docs/VmRemoveDevice.md
docs/VmResize.md
docs/VmResizeDisk.md
docs/VmResizeZone.md
docs/VmSnapshotConfig.md
docs/VmmPingResponse.md
@@ -90,6 +91,7 @@ model_vm_coredump_data.go
model_vm_info.go
model_vm_remove_device.go
model_vm_resize.go
model_vm_resize_disk.go
model_vm_resize_zone.go
model_vm_snapshot_config.go
model_vmm_ping_response.go


@@ -99,6 +99,7 @@ Class | Method | HTTP request | Description
*DefaultApi* | [**VmInfoGet**](docs/DefaultApi.md#vminfoget) | **Get** /vm.info | Returns general information about the cloud-hypervisor Virtual Machine (VM) instance.
*DefaultApi* | [**VmReceiveMigrationPut**](docs/DefaultApi.md#vmreceivemigrationput) | **Put** /vm.receive-migration | Receive a VM migration from URL
*DefaultApi* | [**VmRemoveDevicePut**](docs/DefaultApi.md#vmremovedeviceput) | **Put** /vm.remove-device | Remove a device from the VM
*DefaultApi* | [**VmResizeDiskPut**](docs/DefaultApi.md#vmresizediskput) | **Put** /vm.resize-disk | Resize a disk
*DefaultApi* | [**VmResizePut**](docs/DefaultApi.md#vmresizeput) | **Put** /vm.resize | Resize the VM
*DefaultApi* | [**VmResizeZonePut**](docs/DefaultApi.md#vmresizezoneput) | **Put** /vm.resize-zone | Resize a memory zone
*DefaultApi* | [**VmRestorePut**](docs/DefaultApi.md#vmrestoreput) | **Put** /vm.restore | Restore a VM from a snapshot.
@@ -148,6 +149,7 @@ Class | Method | HTTP request | Description
- [VmInfo](docs/VmInfo.md)
- [VmRemoveDevice](docs/VmRemoveDevice.md)
- [VmResize](docs/VmResize.md)
- [VmResizeDisk](docs/VmResizeDisk.md)
- [VmResizeZone](docs/VmResizeZone.md)
- [VmSnapshotConfig](docs/VmSnapshotConfig.md)
- [VmmPingResponse](docs/VmmPingResponse.md)


@@ -153,6 +153,21 @@ paths:
description: The VM instance could not be resized because a cpu removal
is still pending.
summary: Resize the VM
/vm.resize-disk:
put:
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/VmResizeDisk'
description: Resizes a disk attached to the VM
required: true
responses:
"204":
description: The disk was successfully resized.
"500":
description: The disk could not be resized.
summary: Resize a disk
/vm.resize-zone:
put:
requestBody:
@@ -649,7 +664,9 @@ components:
- tap: tap
host_mac: host_mac
num_queues: 6
offload_ufo: true
queue_size: 1
offload_csum: true
ip: 192.168.249.1
rate_limiter_config:
ops:
@@ -663,6 +680,7 @@ components:
mac: mac
mtu: 3
pci_segment: 2
offload_tso: true
vhost_mode: Client
iommu: false
vhost_socket: vhost_socket
@@ -672,7 +690,9 @@ components:
- tap: tap
host_mac: host_mac
num_queues: 6
offload_ufo: true
queue_size: 1
offload_csum: true
ip: 192.168.249.1
rate_limiter_config:
ops:
@@ -686,6 +706,7 @@ components:
mac: mac
mtu: 3
pci_segment: 2
offload_tso: true
vhost_mode: Client
iommu: false
vhost_socket: vhost_socket
@@ -1079,7 +1100,9 @@ components:
- tap: tap
host_mac: host_mac
num_queues: 6
offload_ufo: true
queue_size: 1
offload_csum: true
ip: 192.168.249.1
rate_limiter_config:
ops:
@@ -1093,6 +1116,7 @@ components:
mac: mac
mtu: 3
pci_segment: 2
offload_tso: true
vhost_mode: Client
iommu: false
vhost_socket: vhost_socket
@@ -1102,7 +1126,9 @@ components:
- tap: tap
host_mac: host_mac
num_queues: 6
offload_ufo: true
queue_size: 1
offload_csum: true
ip: 192.168.249.1
rate_limiter_config:
ops:
@@ -1116,6 +1142,7 @@ components:
mac: mac
mtu: 3
pci_segment: 2
offload_tso: true
vhost_mode: Client
iommu: false
vhost_socket: vhost_socket
@@ -1741,7 +1768,9 @@ components:
tap: tap
host_mac: host_mac
num_queues: 6
offload_ufo: true
queue_size: 1
offload_csum: true
ip: 192.168.249.1
rate_limiter_config:
ops:
@@ -1755,6 +1784,7 @@ components:
mac: mac
mtu: 3
pci_segment: 2
offload_tso: true
vhost_mode: Client
iommu: false
vhost_socket: vhost_socket
@@ -1803,6 +1833,15 @@ components:
type: integer
rate_limiter_config:
$ref: '#/components/schemas/RateLimiterConfig'
offload_tso:
default: true
type: boolean
offload_ufo:
default: true
type: boolean
offload_csum:
default: true
type: boolean
type: object
RngConfig:
example:
@@ -2103,6 +2142,19 @@ components:
format: int64
type: integer
type: object
VmResizeDisk:
example:
desired_size: 0
id: id
properties:
id:
description: disk identifier
type: string
desired_size:
description: desired disk size in bytes
format: int64
type: integer
type: object
VmResizeZone:
example:
id: id


@@ -2226,6 +2226,106 @@ func (a *DefaultApiService) VmRemoveDevicePutExecute(r ApiVmRemoveDevicePutReque
return localVarHTTPResponse, nil
}
type ApiVmResizeDiskPutRequest struct {
ctx _context.Context
ApiService *DefaultApiService
vmResizeDisk *VmResizeDisk
}
// Resizes a disk attached to the VM
func (r ApiVmResizeDiskPutRequest) VmResizeDisk(vmResizeDisk VmResizeDisk) ApiVmResizeDiskPutRequest {
r.vmResizeDisk = &vmResizeDisk
return r
}
func (r ApiVmResizeDiskPutRequest) Execute() (*_nethttp.Response, error) {
return r.ApiService.VmResizeDiskPutExecute(r)
}
/*
VmResizeDiskPut Resize a disk
@param ctx _context.Context - for authentication, logging, cancellation, deadlines, tracing, etc. Passed from http.Request or context.Background().
@return ApiVmResizeDiskPutRequest
*/
func (a *DefaultApiService) VmResizeDiskPut(ctx _context.Context) ApiVmResizeDiskPutRequest {
return ApiVmResizeDiskPutRequest{
ApiService: a,
ctx: ctx,
}
}
// Execute executes the request
func (a *DefaultApiService) VmResizeDiskPutExecute(r ApiVmResizeDiskPutRequest) (*_nethttp.Response, error) {
var (
localVarHTTPMethod = _nethttp.MethodPut
localVarPostBody interface{}
localVarFormFileName string
localVarFileName string
localVarFileBytes []byte
)
localBasePath, err := a.client.cfg.ServerURLWithContext(r.ctx, "DefaultApiService.VmResizeDiskPut")
if err != nil {
return nil, GenericOpenAPIError{error: err.Error()}
}
localVarPath := localBasePath + "/vm.resize-disk"
localVarHeaderParams := make(map[string]string)
localVarQueryParams := _neturl.Values{}
localVarFormParams := _neturl.Values{}
if r.vmResizeDisk == nil {
return nil, reportError("vmResizeDisk is required and must be specified")
}
// to determine the Content-Type header
localVarHTTPContentTypes := []string{"application/json"}
// set Content-Type header
localVarHTTPContentType := selectHeaderContentType(localVarHTTPContentTypes)
if localVarHTTPContentType != "" {
localVarHeaderParams["Content-Type"] = localVarHTTPContentType
}
// to determine the Accept header
localVarHTTPHeaderAccepts := []string{}
// set Accept header
localVarHTTPHeaderAccept := selectHeaderAccept(localVarHTTPHeaderAccepts)
if localVarHTTPHeaderAccept != "" {
localVarHeaderParams["Accept"] = localVarHTTPHeaderAccept
}
// body params
localVarPostBody = r.vmResizeDisk
req, err := a.client.prepareRequest(r.ctx, localVarPath, localVarHTTPMethod, localVarPostBody, localVarHeaderParams, localVarQueryParams, localVarFormParams, localVarFormFileName, localVarFileName, localVarFileBytes)
if err != nil {
return nil, err
}
localVarHTTPResponse, err := a.client.callAPI(req)
if err != nil || localVarHTTPResponse == nil {
return localVarHTTPResponse, err
}
localVarBody, err := _ioutil.ReadAll(localVarHTTPResponse.Body)
localVarHTTPResponse.Body.Close()
localVarHTTPResponse.Body = _ioutil.NopCloser(bytes.NewBuffer(localVarBody))
if err != nil {
return localVarHTTPResponse, err
}
if localVarHTTPResponse.StatusCode >= 300 {
newErr := GenericOpenAPIError{
body: localVarBody,
error: localVarHTTPResponse.Status,
}
return localVarHTTPResponse, newErr
}
return localVarHTTPResponse, nil
}
type ApiVmResizePutRequest struct {
ctx _context.Context
ApiService *DefaultApiService


@@ -26,6 +26,7 @@ Method | HTTP request | Description
[**VmInfoGet**](DefaultApi.md#VmInfoGet) | **Get** /vm.info | Returns general information about the cloud-hypervisor Virtual Machine (VM) instance.
[**VmReceiveMigrationPut**](DefaultApi.md#VmReceiveMigrationPut) | **Put** /vm.receive-migration | Receive a VM migration from URL
[**VmRemoveDevicePut**](DefaultApi.md#VmRemoveDevicePut) | **Put** /vm.remove-device | Remove a device from the VM
[**VmResizeDiskPut**](DefaultApi.md#VmResizeDiskPut) | **Put** /vm.resize-disk | Resize a disk
[**VmResizePut**](DefaultApi.md#VmResizePut) | **Put** /vm.resize | Resize the VM
[**VmResizeZonePut**](DefaultApi.md#VmResizeZonePut) | **Put** /vm.resize-zone | Resize a memory zone
[**VmRestorePut**](DefaultApi.md#VmRestorePut) | **Put** /vm.restore | Restore a VM from a snapshot.
@@ -1370,6 +1371,68 @@ No authorization required
[[Back to README]](../README.md)
## VmResizeDiskPut
> VmResizeDiskPut(ctx).VmResizeDisk(vmResizeDisk).Execute()
Resize a disk
### Example
```go
package main
import (
"context"
"fmt"
"os"
openapiclient "./openapi"
)
func main() {
vmResizeDisk := *openapiclient.NewVmResizeDisk() // VmResizeDisk | Resizes a disk attached to the VM
configuration := openapiclient.NewConfiguration()
api_client := openapiclient.NewAPIClient(configuration)
resp, r, err := api_client.DefaultApi.VmResizeDiskPut(context.Background()).VmResizeDisk(vmResizeDisk).Execute()
if err != nil {
fmt.Fprintf(os.Stderr, "Error when calling `DefaultApi.VmResizeDiskPut`: %v\n", err)
fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
}
}
```
### Path Parameters
### Other Parameters
Other parameters are passed through a pointer to an apiVmResizeDiskPutRequest struct via the builder pattern
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**vmResizeDisk** | [**VmResizeDisk**](VmResizeDisk.md) | Resizes a disk attached to the VM |
### Return type
(empty response body)
### Authorization
No authorization required
### HTTP request headers
- **Content-Type**: application/json
- **Accept**: Not defined
[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints)
[[Back to Model list]](../README.md#documentation-for-models)
[[Back to README]](../README.md)
## VmResizePut
> VmResizePut(ctx).VmResize(vmResize).Execute()


@@ -19,6 +19,9 @@ Name | Type | Description | Notes
**Id** | Pointer to **string** | | [optional]
**PciSegment** | Pointer to **int32** | | [optional]
**RateLimiterConfig** | Pointer to [**RateLimiterConfig**](RateLimiterConfig.md) | | [optional]
**OffloadTso** | Pointer to **bool** | | [optional] [default to true]
**OffloadUfo** | Pointer to **bool** | | [optional] [default to true]
**OffloadCsum** | Pointer to **bool** | | [optional] [default to true]
## Methods
@@ -414,6 +417,81 @@ SetRateLimiterConfig sets RateLimiterConfig field to given value.
HasRateLimiterConfig returns a boolean if a field has been set.
### GetOffloadTso
`func (o *NetConfig) GetOffloadTso() bool`
GetOffloadTso returns the OffloadTso field if non-nil, zero value otherwise.
### GetOffloadTsoOk
`func (o *NetConfig) GetOffloadTsoOk() (*bool, bool)`
GetOffloadTsoOk returns a tuple with the OffloadTso field if it's non-nil, zero value otherwise
and a boolean to check if the value has been set.
### SetOffloadTso
`func (o *NetConfig) SetOffloadTso(v bool)`
SetOffloadTso sets OffloadTso field to given value.
### HasOffloadTso
`func (o *NetConfig) HasOffloadTso() bool`
HasOffloadTso returns a boolean if a field has been set.
### GetOffloadUfo
`func (o *NetConfig) GetOffloadUfo() bool`
GetOffloadUfo returns the OffloadUfo field if non-nil, zero value otherwise.
### GetOffloadUfoOk
`func (o *NetConfig) GetOffloadUfoOk() (*bool, bool)`
GetOffloadUfoOk returns a tuple with the OffloadUfo field if it's non-nil, zero value otherwise
and a boolean to check if the value has been set.
### SetOffloadUfo
`func (o *NetConfig) SetOffloadUfo(v bool)`
SetOffloadUfo sets OffloadUfo field to given value.
### HasOffloadUfo
`func (o *NetConfig) HasOffloadUfo() bool`
HasOffloadUfo returns a boolean if a field has been set.
### GetOffloadCsum
`func (o *NetConfig) GetOffloadCsum() bool`
GetOffloadCsum returns the OffloadCsum field if non-nil, zero value otherwise.
### GetOffloadCsumOk
`func (o *NetConfig) GetOffloadCsumOk() (*bool, bool)`
GetOffloadCsumOk returns a tuple with the OffloadCsum field if it's non-nil, zero value otherwise
and a boolean to check if the value has been set.
### SetOffloadCsum
`func (o *NetConfig) SetOffloadCsum(v bool)`
SetOffloadCsum sets OffloadCsum field to given value.
### HasOffloadCsum
`func (o *NetConfig) HasOffloadCsum() bool`
HasOffloadCsum returns a boolean if a field has been set.
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)


@@ -0,0 +1,82 @@
# VmResizeDisk
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**Id** | Pointer to **string** | disk identifier | [optional]
**DesiredSize** | Pointer to **int64** | desired disk size in bytes | [optional]
## Methods
### NewVmResizeDisk
`func NewVmResizeDisk() *VmResizeDisk`
NewVmResizeDisk instantiates a new VmResizeDisk object
This constructor will assign default values to properties that have it defined,
and makes sure properties required by API are set, but the set of arguments
will change when the set of required properties is changed
### NewVmResizeDiskWithDefaults
`func NewVmResizeDiskWithDefaults() *VmResizeDisk`
NewVmResizeDiskWithDefaults instantiates a new VmResizeDisk object
This constructor will only assign default values to properties that have it defined,
but it doesn't guarantee that properties required by API are set
### GetId
`func (o *VmResizeDisk) GetId() string`
GetId returns the Id field if non-nil, zero value otherwise.
### GetIdOk
`func (o *VmResizeDisk) GetIdOk() (*string, bool)`
GetIdOk returns a tuple with the Id field if it's non-nil, zero value otherwise
and a boolean to check if the value has been set.
### SetId
`func (o *VmResizeDisk) SetId(v string)`
SetId sets Id field to given value.
### HasId
`func (o *VmResizeDisk) HasId() bool`
HasId returns a boolean if a field has been set.
### GetDesiredSize
`func (o *VmResizeDisk) GetDesiredSize() int64`
GetDesiredSize returns the DesiredSize field if non-nil, zero value otherwise.
### GetDesiredSizeOk
`func (o *VmResizeDisk) GetDesiredSizeOk() (*int64, bool)`
GetDesiredSizeOk returns a tuple with the DesiredSize field if it's non-nil, zero value otherwise
and a boolean to check if the value has been set.
### SetDesiredSize
`func (o *VmResizeDisk) SetDesiredSize(v int64)`
SetDesiredSize sets DesiredSize field to given value.
### HasDesiredSize
`func (o *VmResizeDisk) HasDesiredSize() bool`
HasDesiredSize returns a boolean if a field has been set.
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
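On the wire, the model above serializes to a flat JSON object with optional `id` and `desired_size` fields. A minimal sketch of the `PUT /vm.resize-disk` request body (in Rust, matching the rest of this change set; the `disk0` identifier is hypothetical, and a real client should use a proper JSON serializer rather than string formatting):

```rust
/// Build the JSON body for PUT /vm.resize-disk. Field names follow the
/// VmResizeDisk schema; no escaping is performed on `id`, so this is an
/// illustration only.
fn vm_resize_disk_body(id: &str, desired_size: i64) -> String {
    format!("{{\"id\":\"{id}\",\"desired_size\":{desired_size}}}")
}

fn main() {
    assert_eq!(
        vm_resize_disk_body("disk0", 10_737_418_240),
        r#"{"id":"disk0","desired_size":10737418240}"#
    );
}
```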


@@ -33,6 +33,9 @@ type NetConfig struct {
Id *string `json:"id,omitempty"`
PciSegment *int32 `json:"pci_segment,omitempty"`
RateLimiterConfig *RateLimiterConfig `json:"rate_limiter_config,omitempty"`
OffloadTso *bool `json:"offload_tso,omitempty"`
OffloadUfo *bool `json:"offload_ufo,omitempty"`
OffloadCsum *bool `json:"offload_csum,omitempty"`
}
// NewNetConfig instantiates a new NetConfig object
@@ -55,6 +58,12 @@ func NewNetConfig() *NetConfig {
this.VhostUser = &vhostUser
var vhostMode string = "Client"
this.VhostMode = &vhostMode
var offloadTso bool = true
this.OffloadTso = &offloadTso
var offloadUfo bool = true
this.OffloadUfo = &offloadUfo
var offloadCsum bool = true
this.OffloadCsum = &offloadCsum
return &this
}
@@ -77,6 +86,12 @@ func NewNetConfigWithDefaults() *NetConfig {
this.VhostUser = &vhostUser
var vhostMode string = "Client"
this.VhostMode = &vhostMode
var offloadTso bool = true
this.OffloadTso = &offloadTso
var offloadUfo bool = true
this.OffloadUfo = &offloadUfo
var offloadCsum bool = true
this.OffloadCsum = &offloadCsum
return &this
}
@@ -560,6 +575,102 @@ func (o *NetConfig) SetRateLimiterConfig(v RateLimiterConfig) {
o.RateLimiterConfig = &v
}
// GetOffloadTso returns the OffloadTso field value if set, zero value otherwise.
func (o *NetConfig) GetOffloadTso() bool {
if o == nil || o.OffloadTso == nil {
var ret bool
return ret
}
return *o.OffloadTso
}
// GetOffloadTsoOk returns a tuple with the OffloadTso field value if set, nil otherwise
// and a boolean to check if the value has been set.
func (o *NetConfig) GetOffloadTsoOk() (*bool, bool) {
if o == nil || o.OffloadTso == nil {
return nil, false
}
return o.OffloadTso, true
}
// HasOffloadTso returns a boolean if a field has been set.
func (o *NetConfig) HasOffloadTso() bool {
if o != nil && o.OffloadTso != nil {
return true
}
return false
}
// SetOffloadTso gets a reference to the given bool and assigns it to the OffloadTso field.
func (o *NetConfig) SetOffloadTso(v bool) {
o.OffloadTso = &v
}
// GetOffloadUfo returns the OffloadUfo field value if set, zero value otherwise.
func (o *NetConfig) GetOffloadUfo() bool {
if o == nil || o.OffloadUfo == nil {
var ret bool
return ret
}
return *o.OffloadUfo
}
// GetOffloadUfoOk returns a tuple with the OffloadUfo field value if set, nil otherwise
// and a boolean to check if the value has been set.
func (o *NetConfig) GetOffloadUfoOk() (*bool, bool) {
if o == nil || o.OffloadUfo == nil {
return nil, false
}
return o.OffloadUfo, true
}
// HasOffloadUfo returns a boolean if a field has been set.
func (o *NetConfig) HasOffloadUfo() bool {
if o != nil && o.OffloadUfo != nil {
return true
}
return false
}
// SetOffloadUfo gets a reference to the given bool and assigns it to the OffloadUfo field.
func (o *NetConfig) SetOffloadUfo(v bool) {
o.OffloadUfo = &v
}
// GetOffloadCsum returns the OffloadCsum field value if set, zero value otherwise.
func (o *NetConfig) GetOffloadCsum() bool {
if o == nil || o.OffloadCsum == nil {
var ret bool
return ret
}
return *o.OffloadCsum
}
// GetOffloadCsumOk returns a tuple with the OffloadCsum field value if set, nil otherwise
// and a boolean to check if the value has been set.
func (o *NetConfig) GetOffloadCsumOk() (*bool, bool) {
if o == nil || o.OffloadCsum == nil {
return nil, false
}
return o.OffloadCsum, true
}
// HasOffloadCsum returns a boolean if a field has been set.
func (o *NetConfig) HasOffloadCsum() bool {
if o != nil && o.OffloadCsum != nil {
return true
}
return false
}
// SetOffloadCsum gets a reference to the given bool and assigns it to the OffloadCsum field.
func (o *NetConfig) SetOffloadCsum(v bool) {
o.OffloadCsum = &v
}
func (o NetConfig) MarshalJSON() ([]byte, error) {
toSerialize := map[string]interface{}{}
if o.Tap != nil {
@@ -607,6 +718,15 @@ func (o NetConfig) MarshalJSON() ([]byte, error) {
if o.RateLimiterConfig != nil {
toSerialize["rate_limiter_config"] = o.RateLimiterConfig
}
if o.OffloadTso != nil {
toSerialize["offload_tso"] = o.OffloadTso
}
if o.OffloadUfo != nil {
toSerialize["offload_ufo"] = o.OffloadUfo
}
if o.OffloadCsum != nil {
toSerialize["offload_csum"] = o.OffloadCsum
}
return json.Marshal(toSerialize)
}


@@ -0,0 +1,151 @@
/*
Cloud Hypervisor API
Local HTTP based API for managing and inspecting a cloud-hypervisor virtual machine.
API version: 0.3.0
*/
// Code generated by OpenAPI Generator (https://openapi-generator.tech); DO NOT EDIT.
package openapi
import (
"encoding/json"
)
// VmResizeDisk struct for VmResizeDisk
type VmResizeDisk struct {
// disk identifier
Id *string `json:"id,omitempty"`
// desired disk size in bytes
DesiredSize *int64 `json:"desired_size,omitempty"`
}
// NewVmResizeDisk instantiates a new VmResizeDisk object
// This constructor will assign default values to properties that have it defined,
// and makes sure properties required by API are set, but the set of arguments
// will change when the set of required properties is changed
func NewVmResizeDisk() *VmResizeDisk {
this := VmResizeDisk{}
return &this
}
// NewVmResizeDiskWithDefaults instantiates a new VmResizeDisk object
// This constructor will only assign default values to properties that have it defined,
// but it doesn't guarantee that properties required by API are set
func NewVmResizeDiskWithDefaults() *VmResizeDisk {
this := VmResizeDisk{}
return &this
}
// GetId returns the Id field value if set, zero value otherwise.
func (o *VmResizeDisk) GetId() string {
if o == nil || o.Id == nil {
var ret string
return ret
}
return *o.Id
}
// GetIdOk returns a tuple with the Id field value if set, nil otherwise
// and a boolean to check if the value has been set.
func (o *VmResizeDisk) GetIdOk() (*string, bool) {
if o == nil || o.Id == nil {
return nil, false
}
return o.Id, true
}
// HasId returns a boolean if a field has been set.
func (o *VmResizeDisk) HasId() bool {
if o != nil && o.Id != nil {
return true
}
return false
}
// SetId gets a reference to the given string and assigns it to the Id field.
func (o *VmResizeDisk) SetId(v string) {
o.Id = &v
}
// GetDesiredSize returns the DesiredSize field value if set, zero value otherwise.
func (o *VmResizeDisk) GetDesiredSize() int64 {
if o == nil || o.DesiredSize == nil {
var ret int64
return ret
}
return *o.DesiredSize
}
// GetDesiredSizeOk returns a tuple with the DesiredSize field value if set, nil otherwise
// and a boolean to check if the value has been set.
func (o *VmResizeDisk) GetDesiredSizeOk() (*int64, bool) {
if o == nil || o.DesiredSize == nil {
return nil, false
}
return o.DesiredSize, true
}
// HasDesiredSize returns a boolean if a field has been set.
func (o *VmResizeDisk) HasDesiredSize() bool {
if o != nil && o.DesiredSize != nil {
return true
}
return false
}
// SetDesiredSize gets a reference to the given int64 and assigns it to the DesiredSize field.
func (o *VmResizeDisk) SetDesiredSize(v int64) {
o.DesiredSize = &v
}
func (o VmResizeDisk) MarshalJSON() ([]byte, error) {
toSerialize := map[string]interface{}{}
if o.Id != nil {
toSerialize["id"] = o.Id
}
if o.DesiredSize != nil {
toSerialize["desired_size"] = o.DesiredSize
}
return json.Marshal(toSerialize)
}
type NullableVmResizeDisk struct {
value *VmResizeDisk
isSet bool
}
func (v NullableVmResizeDisk) Get() *VmResizeDisk {
return v.value
}
func (v *NullableVmResizeDisk) Set(val *VmResizeDisk) {
v.value = val
v.isSet = true
}
func (v NullableVmResizeDisk) IsSet() bool {
return v.isSet
}
func (v *NullableVmResizeDisk) Unset() {
v.value = nil
v.isSet = false
}
func NewNullableVmResizeDisk(val *VmResizeDisk) *NullableVmResizeDisk {
return &NullableVmResizeDisk{value: val, isSet: true}
}
func (v NullableVmResizeDisk) MarshalJSON() ([]byte, error) {
return json.Marshal(v.value)
}
func (v *NullableVmResizeDisk) UnmarshalJSON(src []byte) error {
v.isSet = true
return json.Unmarshal(src, &v.value)
}
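The generated client above mirrors the new `/vm.resize-disk` endpoint. A minimal shell sketch of the request body follows; the socket path, disk id, and size are illustrative assumptions, not values from this PR:

```shell
# JSON payload for the new /vm.resize-disk endpoint; only fields that are
# set appear, mirroring MarshalJSON's omit-if-nil behavior above.
payload=$(printf '{"id":"%s","desired_size":%d}' "_disk0" 21474836480)
echo "${payload}"

# Hypothetical invocation (socket path is an assumption):
# curl --unix-socket /run/cloud-hypervisor.sock -X PUT \
#   -H 'Content-Type: application/json' -d "${payload}" \
#   http://localhost/api/v1/vm.resize-disk
```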


@@ -163,6 +163,22 @@ paths:
429:
description: The VM instance could not be resized because a cpu removal is still pending.
/vm.resize-disk:
put:
summary: Resize a disk
requestBody:
description: Resizes a disk attached to the VM
content:
application/json:
schema:
$ref: "#/components/schemas/VmResizeDisk"
required: true
responses:
204:
description: The disk was successfully resized.
500:
description: The disk could not be resized.
/vm.resize-zone:
put:
summary: Resize a memory zone
@@ -966,6 +982,15 @@ components:
format: int16
rate_limiter_config:
$ref: "#/components/schemas/RateLimiterConfig"
offload_tso:
type: boolean
default: true
offload_ufo:
type: boolean
default: true
offload_csum:
type: boolean
default: true
RngConfig:
required:
@@ -1194,6 +1219,17 @@ components:
type: integer
format: int64
VmResizeDisk:
type: object
properties:
id:
description: disk identifier
type: string
desired_size:
description: desired disk size in bytes
type: integer
format: int64
VmResizeZone:
type: object
properties:


@@ -51,12 +51,33 @@ default WriteStreamRequest := false
# them and inspect OPA logs for the root cause of a failure.
default AllowRequestsFailingPolicy := false
# Constants
# Constants (containerd keys; CRI-O uses different keys, see *_CRIO below)
S_NAME_KEY = "io.kubernetes.cri.sandbox-name"
S_NAMESPACE_KEY = "io.kubernetes.cri.sandbox-namespace"
S_NAME_KEY_CRIO = "io.kubernetes.cri-o.SandboxName"
S_NAMESPACE_KEY_CRIO = "io.kubernetes.cri-o.Namespace"
SANDBOX_ID_KEY = "io.kubernetes.cri.sandbox-id"
SANDBOX_ID_KEY_CRIO = "io.kubernetes.cri-o.SandboxID"
C_TYPE_KEY = "io.kubernetes.cri.container-type"
C_TYPE_KEY_CRIO = "io.kubernetes.cri-o.ContainerType"
CONTAINER_NAME_KEY = "io.kubernetes.cri.container-name"
CONTAINER_NAME_KEY_CRIO = "io.kubernetes.cri-o.ContainerName"
IMAGE_NAME_KEY = "io.kubernetes.cri.image-name"
IMAGE_NAME_KEY_CRIO = "io.kubernetes.cri-o.ImageName"
SANDBOX_LOG_DIR_KEY = "io.kubernetes.cri.sandbox-log-directory"
SANDBOX_LOG_DIR_KEY_CRIO = "io.kubernetes.cri-o.LogPath"
CDI_VFIO_ANNOTATION_PREFIX = "cdi.k8s.io/vfio"
VFIO_PCI_ADDRESS_REGEX = "^[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[01][0-9a-fA-F]\\.[0-7]=[0-9a-fA-F]{2}/[0-9a-fA-F]{2}$"
# Get annotation value from input OCI: accept either CRI (containerd) or CRI-O key.
get_input_anno(i_oci, cri_key, crio_key) := v if {
v := i_oci.Annotations[cri_key]
}
get_input_anno(i_oci, cri_key, crio_key) := v if {
not i_oci.Annotations[cri_key]
v := i_oci.Annotations[crio_key]
}
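The two `get_input_anno` rules implement a containerd-first, CRI-O-fallback lookup. A rough shell analogue (the annotation map and values below are made up for illustration):

```shell
# Input annotations as an associative array; here only the CRI-O key is set.
declare -A annotations=(
    ["io.kubernetes.cri-o.SandboxName"]="nginx-pod"
)

# Return the containerd key's value if present, else the CRI-O key's value,
# matching the two get_input_anno rules above.
get_input_anno() {
    local cri_key="$1" crio_key="$2"
    if [[ -n "${annotations[${cri_key}]+x}" ]]; then
        echo "${annotations[${cri_key}]}"
    else
        echo "${annotations[${crio_key}]:-}"
    fi
}

get_input_anno "io.kubernetes.cri.sandbox-name" "io.kubernetes.cri-o.SandboxName"
```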
CreateContainerRequest := {"ops": ops, "allowed": true} if {
# Check if the input request should be rejected even before checking the
# policy_data.containers information.
@@ -69,8 +90,8 @@ CreateContainerRequest := {"ops": ops, "allowed": true} if {
# array of possible state operations
ops_builder := []
# check sandbox name
sandbox_name = i_oci.Annotations[S_NAME_KEY]
# check sandbox name (containerd or CRI-O)
sandbox_name := get_input_anno(i_oci, S_NAME_KEY, S_NAME_KEY_CRIO)
add_sandbox_name_to_state := state_allows("sandbox_name", sandbox_name)
ops_builder1 := concat_op_if_not_null(ops_builder, add_sandbox_name_to_state)
@@ -85,9 +106,9 @@ CreateContainerRequest := {"ops": ops, "allowed": true} if {
p_oci := p_container.OCI
# check namespace
# check namespace (containerd or CRI-O)
p_namespace := p_oci.Annotations[S_NAMESPACE_KEY]
i_namespace := i_oci.Annotations[S_NAMESPACE_KEY]
i_namespace := get_input_anno(i_oci, S_NAMESPACE_KEY, S_NAMESPACE_KEY_CRIO)
print("CreateContainerRequest: p_namespace =", p_namespace, "i_namespace =", i_namespace)
add_namespace_to_state := allow_namespace(p_namespace, i_namespace)
ops_builder2 := concat_op_if_not_null(ops_builder1, add_namespace_to_state)
@@ -249,9 +270,13 @@ allow_anno_key_value(i_key, i_value, p_container) if {
print("allow_anno_key_value 1: i key =", i_key)
startswith(i_key, "io.kubernetes.cri.")
print("allow_anno_key_value 1: true")
}
allow_anno_key_value(i_key, i_value, p_container) if {
print("allow_anno_key_value 1b: i key =", i_key)
startswith(i_key, "io.kubernetes.cri-o.")
print("allow_anno_key_value 1b: true")
}
allow_anno_key_value(i_key, i_value, p_container) if {
print("allow_anno_key_value 2: i key =", i_key)
@@ -272,17 +297,17 @@ allow_anno_key_value(i_key, i_value, p_container) if {
print("allow_anno_key_value 3: true")
}
# Get the value of the S_NAME_KEY annotation and
# correlate it with other annotations and process fields.
# Get the value of the sandbox name/namespace annotations (containerd or CRI-O) and
# correlate with other annotations and process fields.
allow_by_anno(p_oci, i_oci, p_storages, i_storages) if {
print("allow_by_anno 1: start")
not p_oci.Annotations[S_NAME_KEY]
i_s_name := i_oci.Annotations[S_NAME_KEY]
i_s_name := get_input_anno(i_oci, S_NAME_KEY, S_NAME_KEY_CRIO)
print("allow_by_anno 1: i_s_name =", i_s_name)
i_s_namespace := i_oci.Annotations[S_NAMESPACE_KEY]
i_s_namespace := get_input_anno(i_oci, S_NAMESPACE_KEY, S_NAMESPACE_KEY_CRIO)
print("allow_by_anno 1: i_s_namespace =", i_s_namespace)
allow_by_sandbox_name(p_oci, i_oci, p_storages, i_storages, i_s_name, i_s_namespace)
@@ -293,12 +318,12 @@ allow_by_anno(p_oci, i_oci, p_storages, i_storages) if {
print("allow_by_anno 2: start")
p_s_name := p_oci.Annotations[S_NAME_KEY]
i_s_name := i_oci.Annotations[S_NAME_KEY]
i_s_name := get_input_anno(i_oci, S_NAME_KEY, S_NAME_KEY_CRIO)
print("allow_by_anno 2: i_s_name =", i_s_name, "p_s_name =", p_s_name)
allow_sandbox_name(p_s_name, i_s_name)
i_s_namespace := i_oci.Annotations[S_NAMESPACE_KEY]
i_s_namespace := get_input_anno(i_oci, S_NAMESPACE_KEY, S_NAMESPACE_KEY_CRIO)
print("allow_by_anno 2: i_s_namespace =", i_s_namespace)
allow_by_sandbox_name(p_oci, i_oci, p_storages, i_storages, i_s_name, i_s_namespace)
@@ -309,7 +334,7 @@ allow_by_anno(p_oci, i_oci, p_storages, i_storages) if {
allow_by_sandbox_name(p_oci, i_oci, p_storages, i_storages, s_name, s_namespace) if {
print("allow_by_sandbox_name: start")
i_namespace := i_oci.Annotations[S_NAMESPACE_KEY]
i_namespace := get_input_anno(i_oci, S_NAMESPACE_KEY, S_NAMESPACE_KEY_CRIO)
allow_by_container_types(p_oci, i_oci, s_name, i_namespace)
allow_by_bundle_or_sandbox_id(p_oci, i_oci, p_storages, i_storages)
@@ -325,18 +350,14 @@ allow_sandbox_name(p_s_name, i_s_name) if {
print("allow_sandbox_name: true")
}
# Check that the "io.kubernetes.cri.container-type" and
# "io.katacontainers.pkg.oci.container_type" annotations designate the
# expected type - either a "sandbox" or a "container". Then, validate
# other annotations based on the actual "sandbox" or "container" value
# from the input container.
# Check that the container-type annotation (containerd or CRI-O) and
# "io.katacontainers.pkg.oci.container_type" designate the expected type -
# either "sandbox" or "container". Then validate other annotations accordingly.
allow_by_container_types(p_oci, i_oci, s_name, s_namespace) if {
print("allow_by_container_types: checking io.kubernetes.cri.container-type")
print("allow_by_container_types: checking container-type")
c_type := "io.kubernetes.cri.container-type"
p_cri_type := p_oci.Annotations[c_type]
i_cri_type := i_oci.Annotations[c_type]
p_cri_type := p_oci.Annotations[C_TYPE_KEY]
i_cri_type := get_input_anno(i_oci, C_TYPE_KEY, C_TYPE_KEY_CRIO)
print("allow_by_container_types: p_cri_type =", p_cri_type, "i_cri_type =", i_cri_type)
p_cri_type == i_cri_type
@@ -375,44 +396,54 @@ allow_by_container_type(i_cri_type, p_oci, i_oci, s_name, s_namespace) if {
print("allow_by_container_type 2: true")
}
# "io.kubernetes.cri.container-name" annotation
# Container name: sandbox has none; container must match (containerd or CRI-O key).
allow_sandbox_container_name(p_oci, i_oci) if {
print("allow_sandbox_container_name: start")
container_annotation_missing(p_oci, i_oci, "io.kubernetes.cri.container-name")
container_annotation_missing_cri_crio(p_oci, i_oci, CONTAINER_NAME_KEY, CONTAINER_NAME_KEY_CRIO)
print("allow_sandbox_container_name: true")
}
allow_container_name(p_oci, i_oci) if {
print("allow_container_name: start")
allow_container_annotation(p_oci, i_oci, "io.kubernetes.cri.container-name")
allow_container_annotation_cri_crio(p_oci, i_oci, CONTAINER_NAME_KEY, CONTAINER_NAME_KEY_CRIO)
print("allow_container_name: true")
}
container_annotation_missing(p_oci, i_oci, key) if {
print("container_annotation_missing:", key)
not p_oci.Annotations[key]
not i_oci.Annotations[key]
print("container_annotation_missing: true")
}
# Both policy and input lack the annotation (input checked for both CRI and CRI-O keys).
container_annotation_missing_cri_crio(p_oci, i_oci, cri_key, crio_key) if {
print("container_annotation_missing_cri_crio:", cri_key)
not p_oci.Annotations[cri_key]
not i_oci.Annotations[cri_key]
not i_oci.Annotations[crio_key]
print("container_annotation_missing_cri_crio: true")
}
allow_container_annotation(p_oci, i_oci, key) if {
print("allow_container_annotation: key =", key)
p_value := p_oci.Annotations[key]
i_value := i_oci.Annotations[key]
print("allow_container_annotation: p_value =", p_value, "i_value =", i_value)
p_value == i_value
print("allow_container_annotation: true")
}
# Policy uses CRI key; input may have CRI or CRI-O key.
allow_container_annotation_cri_crio(p_oci, i_oci, cri_key, crio_key) if {
print("allow_container_annotation_cri_crio: cri_key =", cri_key)
p_value := p_oci.Annotations[cri_key]
i_value := get_input_anno(i_oci, cri_key, crio_key)
print("allow_container_annotation_cri_crio: p_value =", p_value, "i_value =", i_value)
p_value == i_value
print("allow_container_annotation_cri_crio: true")
}
# "nerdctl/network-namespace" annotation
allow_sandbox_net_namespace(p_oci, i_oci) if {
print("allow_sandbox_net_namespace: start")
@@ -439,18 +470,16 @@ allow_net_namespace(p_oci, i_oci) if {
print("allow_net_namespace: true")
}
# "io.kubernetes.cri.sandbox-log-directory" annotation
# Sandbox log directory (containerd or CRI-O; CRI-O uses LogPath)
allow_sandbox_log_directory(p_oci, i_oci, s_name, s_namespace) if {
print("allow_sandbox_log_directory: start")
key := "io.kubernetes.cri.sandbox-log-directory"
p_dir := p_oci.Annotations[key]
p_dir := p_oci.Annotations[SANDBOX_LOG_DIR_KEY]
regex1 := replace(p_dir, "$(sandbox-name)", s_name)
regex2 := replace(regex1, "$(sandbox-namespace)", s_namespace)
print("allow_sandbox_log_directory: regex2 =", regex2)
i_dir := i_oci.Annotations[key]
i_dir := get_input_anno(i_oci, SANDBOX_LOG_DIR_KEY, SANDBOX_LOG_DIR_KEY_CRIO)
print("allow_sandbox_log_directory: i_dir =", i_dir)
regex.match(regex2, i_dir)
@@ -460,12 +489,9 @@ allow_sandbox_log_directory(p_oci, i_oci, s_name, s_namespace) if {
allow_log_directory(p_oci, i_oci) if {
print("allow_log_directory: start")
key := "io.kubernetes.cri.sandbox-log-directory"
not p_oci.Annotations[key]
not i_oci.Annotations[key]
not p_oci.Annotations[SANDBOX_LOG_DIR_KEY]
not i_oci.Annotations[SANDBOX_LOG_DIR_KEY]
not i_oci.Annotations[SANDBOX_LOG_DIR_KEY_CRIO]
print("allow_log_directory: true")
}
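The substitute-then-match flow of `allow_sandbox_log_directory` can be sketched in shell. The policy value, pod name, and namespace below are illustrative; in practice the value comes from genpolicy:

```shell
# Policy value with placeholders (illustrative shape).
p_dir='/var/log/pods/$(sandbox-namespace)_$(sandbox-name)_[a-f0-9-]{36}'
s_name="nginx"
s_namespace="default"

# Substitute placeholders, as the Rego rule does with replace().
regex1="${p_dir//'$(sandbox-name)'/${s_name}}"
regex2="${regex1//'$(sandbox-namespace)'/${s_namespace}}"

# Input value, with a 36-character pod UID segment.
i_dir="/var/log/pods/default_nginx_11111111-2222-3333-4444-555555555555"
matched=no
if [[ "${i_dir}" =~ ^${regex2}$ ]]; then
    matched=yes
fi
echo "${matched}"
```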
@@ -776,22 +802,25 @@ allow_linux_sysctl(p_linux, i_linux) if {
print("allow_linux_sysctl 2: true")
}
# Check the consistency of the input "io.katacontainers.pkg.oci.bundle_path"
# and io.kubernetes.cri.sandbox-id" values with other fields.
# Check sandbox_id and derive bundle_id from guest root path (CRI-agnostic: works for containerd and CRI-O).
# Bundle path on the host is runtime-specific; root path in the guest is stable, so we extract bundle_id from it.
allow_by_bundle_or_sandbox_id(p_oci, i_oci, p_storages, i_storages) if {
print("allow_by_bundle_or_sandbox_id: start")
bundle_path := i_oci.Annotations["io.katacontainers.pkg.oci.bundle_path"]
bundle_id := replace(bundle_path, "/run/containerd/io.containerd.runtime.v2.task/k8s.io/", "")
key := "io.kubernetes.cri.sandbox-id"
p_regex := p_oci.Annotations[key]
sandbox_id := i_oci.Annotations[key]
p_regex := p_oci.Annotations[SANDBOX_ID_KEY]
sandbox_id := get_input_anno(i_oci, SANDBOX_ID_KEY, SANDBOX_ID_KEY_CRIO)
print("allow_by_bundle_or_sandbox_id: sandbox_id =", sandbox_id, "regex =", p_regex)
regex.match(p_regex, sandbox_id)
# Derive bundle_id from guest root path (e.g. /run/kata-containers/<bundle_id>/rootfs).
# Match 64-char hex (real runtimes) or any single path segment (e.g. test data: bundle-id, gpu-container, dummy).
i_root := i_oci.Root.Path
p_root_pattern1 := p_oci.Root.Path
p_root_pattern2 := replace(p_root_pattern1, "$(root_path)", policy_data.common.root_path)
p_root_pattern3 := replace(p_root_pattern2, "$(bundle-id)", "([0-9a-f]{64}|[^/]+)")
print("allow_by_bundle_or_sandbox_id: i_root =", i_root, "regex =", p_root_pattern3)
bundle_id := regex.find_all_string_submatch_n(p_root_pattern3, i_root, 1)[0][1]
allow_root_path(p_oci, i_oci, bundle_id)
# Match each input mount with a Policy mount.
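The bundle_id derivation described above can be reproduced in shell with the same alternation; the path uses the test-data style id mentioned in the comment:

```shell
# Pattern after substituting $(root_path) and $(bundle-id), per the policy above:
# a 64-char hex id (real runtimes) or any single path segment (test data).
p_root_pattern='^/run/kata-containers/([0-9a-f]{64}|[^/]+)/rootfs$'

i_root="/run/kata-containers/bundle-id/rootfs"
if [[ "${i_root}" =~ ${p_root_pattern} ]]; then
    bundle_id="${BASH_REMATCH[1]}"
fi
echo "${bundle_id}"
```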


@@ -810,16 +810,17 @@ function install_nydus_snapshotter() {
rm -f "${tarball_name}"
}
# version: the CRI-O version to be installed
# version: the CRI-O version to be installed (major.minor, e.g. 1.35)
# Repo: https://github.com/cri-o/packaging (OpenSUSE Build Service, not pkgs.k8s.io)
function install_crio() {
local version=${1}
sudo mkdir -p /etc/apt/keyrings
sudo mkdir -p /etc/apt/sources.list.d
curl -fsSL https://pkgs.k8s.io/addons:/cri-o:/stable:/v${version}/deb/Release.key | \
curl -fsSL https://download.opensuse.org/repositories/isv:/cri-o:/stable:/v${version}/deb/Release.key | \
sudo gpg --dearmor -o /etc/apt/keyrings/cri-o-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/cri-o-apt-keyring.gpg] https://pkgs.k8s.io/addons:/cri-o:/stable:/v${version}/deb/ /" | \
echo "deb [signed-by=/etc/apt/keyrings/cri-o-apt-keyring.gpg] https://download.opensuse.org/repositories/isv:/cri-o:/stable:/v${version}/deb/ /" | \
sudo tee /etc/apt/sources.list.d/cri-o.list
sudo apt update


@@ -95,6 +95,7 @@ function create_cluster() {
local short_sha
local tags
local rg
local aks_create
# First ensure it didn't fail to get cleaned up from a previous run.
delete_cluster "${test_type}" || true
@@ -117,19 +118,16 @@ function create_cluster() {
# Required by e.g. AKS App Routing for KBS installation.
az extension add --name aks-preview
# Adding a double quote on the last line ends up causing issues
# in the cbl-mariner installation. Because of that, let's just
# disable the warning for this specific case.
# shellcheck disable=SC2046
az aks create \
-g "${rg}" \
--node-resource-group "node-${rg}" \
-n "$(_print_cluster_name "${test_type}")" \
-s "$(_print_instance_type)" \
--node-count 1 \
--generate-ssh-keys \
--tags "${tags[@]}" \
$([[ "${KATA_HOST_OS}" = "cbl-mariner" ]] && echo "--os-sku AzureLinux --workload-runtime KataVmIsolation")
# Create the cluster.
aks_create=(az aks create
-g "${rg}"
--node-resource-group "node-${rg}"
-n "$(_print_cluster_name "${test_type}")"
-s "$(_print_instance_type)"
--node-count 1
--generate-ssh-keys
--tags "${tags[@]}")
"${aks_create[@]}"
}
function install_bats() {
@@ -397,8 +395,33 @@ EOF
sudo apt-get -y install kubeadm kubelet kubectl --allow-downgrades
sudo apt-mark hold kubeadm kubelet kubectl
# Deploy k8s using kubeadm
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# Deploy k8s using kubeadm, raising the kubelet runtimeRequestTimeout (which bounds CRI calls
# such as CreateContainerRequest) to 600s, mainly for CoCo (Confidential Containers) tests
# (attestation, policy, image pull, VM start).
local cri_socket
case "${CONTAINER_ENGINE:-containerd}" in
crio) cri_socket="/var/run/crio/crio.sock" ;;
containerd) cri_socket="/run/containerd/containerd.sock" ;;
*) cri_socket="/run/containerd/containerd.sock" ;;
esac
local kubeadm_config
kubeadm_config="$(mktemp --tmpdir kubeadm-config.XXXXXX.yaml)"
cat <<EOF | tee "${kubeadm_config}"
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
criSocket: "${cri_socket}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
podSubnet: "10.244.0.0/16"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
runtimeRequestTimeout: "600s"
EOF
sudo kubeadm init --config "${kubeadm_config}"
rm -f "${kubeadm_config}"
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
@@ -410,8 +433,29 @@ EOF
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
}
# container_engine: containerd (only containerd is supported for now, support for crio is welcome)
# container_engine_version: major.minor (and then we'll install the latest patch release matching that major.minor)
# Try to install CRI-O for the given k8s-matching version (major.minor); if the repo/package
# is not available yet (k8s released before CRI-O), try previous minor (x.y-1).
function try_install_crio_for_k8s() {
local version="${1}"
local major minor
major="${version%%.*}"
minor="${version##*.}"
if install_crio "${version}"; then
return 0
fi
if [[ "${minor}" -gt 0 ]]; then
minor=$((minor - 1))
echo "CRI-O v${version} not available yet, trying v${major}.${minor}"
install_crio "${major}.${minor}"
else
echo "CRI-O v${version} failed and no fallback (minor would be < 0)" >&2
return 1
fi
}
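The x.y-1 fallback in `try_install_crio_for_k8s` can be exercised in isolation by stubbing `install_crio`; the stub below pretends only v1.30 has been published yet:

```shell
# Stub standing in for install_crio: only v1.30 "exists".
install_crio() { [[ "$1" == "1.30" ]]; }

try_install_crio_for_k8s() {
    local version="$1" major minor
    major="${version%%.*}"
    minor="${version##*.}"
    # Try the requested version first.
    install_crio "${version}" && { echo "${version}"; return 0; }
    # Fall back to the previous minor, as the real function does.
    if [[ "${minor}" -gt 0 ]]; then
        minor=$((minor - 1))
        install_crio "${major}.${minor}" && echo "${major}.${minor}"
    else
        return 1
    fi
}

try_install_crio_for_k8s "1.31"   # falls back to 1.30
```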
# container_engine: containerd or crio
# container_engine_version: for containerd: major.minor or lts/active; for crio: major.minor (e.g. 1.31) or active
function deploy_vanilla_k8s() {
container_engine="${1}"
container_engine_version="${2}"
@@ -419,6 +463,22 @@ function deploy_vanilla_k8s() {
[[ -z "${container_engine}" ]] && die "container_engine is required"
[[ -z "${container_engine_version}" ]] && die "container_engine_version is required"
# Export so do_deploy_k8s can pick the right CRI socket
export CONTAINER_ENGINE="${container_engine}"
# Resolve lts/active to the actual version from versions.yaml (e.g. v1.7, v2.1)
case "${container_engine_version}" in
lts|active)
if [[ "${container_engine}" == "containerd" ]]; then
container_engine_version=$(get_from_kata_deps ".externals.containerd.${container_engine_version}")
else
# CRI-O version matches k8s: use latest k8s stable major.minor (e.g. 1.31)
container_engine_version=$(curl -Ls https://dl.k8s.io/release/stable.txt | sed -e 's/^v//' | cut -d. -f-2)
fi
;;
*) ;;
esac
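The pipeline on the CRI-O branch reduces a full k8s release tag to major.minor; with a fixed tag instead of fetching https://dl.k8s.io/release/stable.txt:

```shell
# stable.txt returns a tag like "v1.31.4"; strip the leading "v" and
# keep only the first two dot-separated fields.
tag="v1.31.4"
version=$(echo "${tag}" | sed -e 's/^v//' | cut -d. -f-2)
echo "${version}"   # 1.31
```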
install_system_dependencies "runc"
load_k8s_needed_modules
set_k8s_network_parameters
@@ -429,6 +489,11 @@ function deploy_vanilla_k8s() {
sudo mkdir -p /etc/containerd
containerd config default | sed -e 's/SystemdCgroup = false/SystemdCgroup = true/' | sudo tee /etc/containerd/config.toml
;;
crio)
# CRI-O version is major.minor (e.g. 1.31) for download.opensuse.org/isv:cri-o:stable
# If k8s was released before CRI-O, try previous minor (x.y-1)
try_install_crio_for_k8s "${container_engine_version}"
;;
*) die "${container_engine} is not a container engine supported by this script" ;;
esac
sudo systemctl daemon-reload && sudo systemctl restart "${container_engine}"


@@ -36,6 +36,7 @@ export PULL_TYPE="${PULL_TYPE:-default}"
export TEST_CLUSTER_NAMESPACE="${TEST_CLUSTER_NAMESPACE:-kata-containers-k8s-tests}"
export GENPOLICY_PULL_METHOD="${GENPOLICY_PULL_METHOD:-oci-distribution}"
export TARGET_ARCH="${TARGET_ARCH:-x86_64}"
export RUNS_ON_AKS="${RUNS_ON_AKS:-false}"
function configure_devmapper() {
sudo mkdir -p /var/lib/containerd/devmapper
@@ -555,18 +556,22 @@ function main() {
export KATA_HOST_OS="${KATA_HOST_OS:-}"
export K8S_TEST_HOST_TYPE="${K8S_TEST_HOST_TYPE:-}"
AUTO_GENERATE_POLICY="${AUTO_GENERATE_POLICY:-}"
if [[ "${KATA_HOST_OS}" = "cbl-mariner" ]]; then
# Temporary workaround for missing cloud-hypervisor/cloud-hypervisor@bf6f0f8, the fix for a bug
# exposed by the large ttrpc replies intentionally produced by the Kata CI Policy tests.
AUTO_GENERATE_POLICY="no"
else
AUTO_GENERATE_POLICY="${AUTO_GENERATE_POLICY:-}"
# Auto-generate policy on some Host types, if the caller didn't specify an AUTO_GENERATE_POLICY value.
if [[ -z "${AUTO_GENERATE_POLICY}" ]]; then
if [[ "${KATA_HOST_OS}" = "cbl-mariner" ]]; then
AUTO_GENERATE_POLICY="yes"
elif [[ "${KATA_HYPERVISOR}" = qemu-coco-dev* && \
"${TARGET_ARCH}" = "x86_64" && \
"${PULL_TYPE}" != "experimental-force-guest-pull" ]]; then
AUTO_GENERATE_POLICY="yes"
elif [[ "${KATA_HYPERVISOR}" = qemu-nvidia-gpu-* ]]; then
AUTO_GENERATE_POLICY="yes"
# Auto-generate policy on some Host types, if the caller didn't specify an AUTO_GENERATE_POLICY value.
if [[ -z "${AUTO_GENERATE_POLICY}" ]]; then
if [[ "${KATA_HYPERVISOR}" = qemu-coco-dev* && \
"${TARGET_ARCH}" = "x86_64" && \
"${PULL_TYPE}" != "experimental-force-guest-pull" ]]; then
AUTO_GENERATE_POLICY="yes"
elif [[ "${KATA_HYPERVISOR}" = qemu-nvidia-gpu-* ]]; then
AUTO_GENERATE_POLICY="yes"
fi
fi
fi


@@ -10,8 +10,6 @@ load "${BATS_TEST_DIRNAME}/../../common.bash"
load "${BATS_TEST_DIRNAME}/tests_common.sh"
setup() {
[ "$(uname -m)" == "s390x" ] && [ "${KATA_HYPERVISOR}" == "qemu-runtime-rs" ] && skip "See: https://github.com/kata-containers/kata-containers/pull/12105#issuecomment-3551916090"
[ "${KATA_HYPERVISOR}" == "qemu-se-runtime-rs" ] && skip "See: https://github.com/kata-containers/kata-containers/pull/12105#issuecomment-3551916090"
( [ "${KATA_HYPERVISOR}" == "fc" ] || [ "${KATA_HYPERVISOR}" == "stratovirt" ] ) && skip "See: https://github.com/kata-containers/kata-containers/issues/10873"
setup_common || die "setup_common failed"
@@ -93,8 +91,6 @@ setup() {
}
teardown() {
[ "$(uname -m)" == "s390x" ] && [ "${KATA_HYPERVISOR}" == "qemu-runtime-rs" ] && skip "See: https://github.com/kata-containers/kata-containers/pull/12105#issuecomment-3551916090"
[ "${KATA_HYPERVISOR}" == "qemu-se-runtime-rs" ] && skip "See: https://github.com/kata-containers/kata-containers/pull/12105#issuecomment-3551916090"
( [ "${KATA_HYPERVISOR}" == "fc" ] || [ "${KATA_HYPERVISOR}" == "stratovirt" ] ) && skip "See: https://github.com/kata-containers/kata-containers/issues/10873"
# Debugging information


@@ -10,14 +10,15 @@ load "${BATS_TEST_DIRNAME}/confidential_common.sh"
export KBS="${KBS:-false}"
export SNAPSHOTTER="${SNAPSHOTTER:-}"
export EXPERIMENTAL_FORCE_GUEST_PULL="${EXPERIMENTAL_FORCE_GUEST_PULL:-}"
export PULL_TYPE="${PULL_TYPE:-}"
setup() {
if ! is_confidential_runtime_class; then
skip "Test not supported for ${KATA_HYPERVISOR}."
fi
if [ "${SNAPSHOTTER}" != "nydus" ] && [ -z "${EXPERIMENTAL_FORCE_GUEST_PULL}" ]; then
skip "Either SNAPSHOTTER=nydus or EXPERIMENTAL_FORCE_GUEST_PULL must be set for this test"
if [ "${SNAPSHOTTER}" != "nydus" ] && [ -z "${EXPERIMENTAL_FORCE_GUEST_PULL}" ] && [ "${PULL_TYPE}" != "guest-pull" ]; then
skip "Either SNAPSHOTTER=nydus, EXPERIMENTAL_FORCE_GUEST_PULL, or PULL_TYPE=guest-pull must be set for this test"
fi
setup_common || die "setup_common failed"
@@ -174,8 +175,8 @@ teardown() {
skip "Test not supported for ${KATA_HYPERVISOR}."
fi
if [ "${SNAPSHOTTER}" != "nydus" ] && [ -z "${EXPERIMENTAL_FORCE_GUEST_PULL}" ]; then
skip "Either SNAPSHOTTER=nydus or EXPERIMENTAL_FORCE_GUEST_PULL must be set for this test"
if [ "${SNAPSHOTTER}" != "nydus" ] && [ -z "${EXPERIMENTAL_FORCE_GUEST_PULL}" ] && [ "${PULL_TYPE}" != "guest-pull" ]; then
skip "Either SNAPSHOTTER=nydus, EXPERIMENTAL_FORCE_GUEST_PULL, or PULL_TYPE=guest-pull must be set for this test"
fi
confidential_teardown_common "${node}" "${node_start_time:-}"


@@ -11,14 +11,15 @@ load "${BATS_TEST_DIRNAME}/confidential_common.sh"
export KBS="${KBS:-false}"
export SNAPSHOTTER="${SNAPSHOTTER:-}"
export EXPERIMENTAL_FORCE_GUEST_PULL="${EXPERIMENTAL_FORCE_GUEST_PULL:-}"
export PULL_TYPE="${PULL_TYPE:-}"
setup() {
if ! is_confidential_runtime_class; then
skip "Test not supported for ${KATA_HYPERVISOR}."
fi
if [ "${SNAPSHOTTER}" != "nydus" ] && [ -z "${EXPERIMENTAL_FORCE_GUEST_PULL}" ]; then
skip "Either SNAPSHOTTER=nydus or EXPERIMENTAL_FORCE_GUEST_PULL must be set for this test"
if [ "${SNAPSHOTTER}" != "nydus" ] && [ -z "${EXPERIMENTAL_FORCE_GUEST_PULL}" ] && [ "${PULL_TYPE}" != "guest-pull" ]; then
skip "Either SNAPSHOTTER=nydus, EXPERIMENTAL_FORCE_GUEST_PULL, or PULL_TYPE=guest-pull must be set for this test"
fi
tag_suffix=""
@@ -243,8 +244,8 @@ teardown() {
skip "Test not supported for ${KATA_HYPERVISOR}."
fi
if [ "${SNAPSHOTTER}" != "nydus" ] && [ -z "${EXPERIMENTAL_FORCE_GUEST_PULL}" ]; then
skip "Either SNAPSHOTTER=nydus or EXPERIMENTAL_FORCE_GUEST_PULL must be set for this test"
if [ "${SNAPSHOTTER}" != "nydus" ] && [ -z "${EXPERIMENTAL_FORCE_GUEST_PULL}" ] && [ "${PULL_TYPE}" != "guest-pull" ]; then
skip "Either SNAPSHOTTER=nydus, EXPERIMENTAL_FORCE_GUEST_PULL, or PULL_TYPE=guest-pull must be set for this test"
fi
teardown_common "${node}" "${node_start_time:-}"


@@ -8,12 +8,18 @@
load "${BATS_TEST_DIRNAME}/lib.sh"
load "${BATS_TEST_DIRNAME}/confidential_common.sh"
export SNAPSHOTTER="${SNAPSHOTTER:-}"
export EXPERIMENTAL_FORCE_GUEST_PULL="${EXPERIMENTAL_FORCE_GUEST_PULL:-}"
export PULL_TYPE="${PULL_TYPE:-}"
setup() {
if ! is_confidential_runtime_class; then
skip "Test not supported for ${KATA_HYPERVISOR}."
fi
[ "${SNAPSHOTTER:-}" = "nydus" ] || skip "None snapshotter was found but this test requires one"
if [ "${SNAPSHOTTER}" != "nydus" ] && [ -z "${EXPERIMENTAL_FORCE_GUEST_PULL}" ] && [ "${PULL_TYPE}" != "guest-pull" ]; then
skip "Either SNAPSHOTTER=nydus, EXPERIMENTAL_FORCE_GUEST_PULL, or PULL_TYPE=guest-pull must be set for this test"
fi
setup_common || die "setup_common failed"
unencrypted_image="quay.io/prometheus/busybox:latest"
@@ -87,9 +93,6 @@ setup() {
}
@test "Test we can pull an image inside the guest using trusted storage" {
[ "$(uname -m)" == "s390x" ] && skip "See: https://github.com/kata-containers/kata-containers/issues/10838"
[ "${KATA_HYPERVISOR}" == "qemu-snp" ] && skip "See: https://github.com/kata-containers/kata-containers/issues/10838"
[ "${KATA_HYPERVISOR}" == "qemu-tdx" ] && skip "See: https://github.com/kata-containers/kata-containers/issues/10838"
# The image pulled in the guest will be downloaded and unpacked in the `/run/kata-containers/image` directory.
# The tests will use `cryptsetup` to encrypt a block device and mount it at `/run/kata-containers/image`.
@@ -107,14 +110,18 @@ setup() {
pod_config=$(mktemp "${BATS_FILE_TMPDIR}/$(basename "${pod_config_template}").XXX")
IMAGE="$image_pulled_time_less_than_default_time" NODE_NAME="$node" envsubst < "$pod_config_template" > "$pod_config"
# Set CreateContainerRequest timeout for qemu-coco-dev
if [[ "${KATA_HYPERVISOR}" == qemu-coco-dev* ]]; then
create_container_timeout=300
set_metadata_annotation "$pod_config" \
"io.katacontainers.config.runtime.create_container_timeout" \
"${create_container_timeout}"
# Set CreateContainerRequest timeout in the annotation to allow enough time for guest pull, where
# the container remains in the 'creating' state until the pull completes. Pulling this image and the
# large image in the test below usually takes 30-60 seconds, but we occasionally observe spikes on all our bare-metal runners.
create_container_timeout=300
# On AKS, so far, these spikes have not been observed. Issue 10299, as referenced in other parts of this test, tells us
# that we cannot modify the runtimeRequestTimeout on AKS. We hence set the timeout to the 120s default value.
if [[ "${KATA_HYPERVISOR}" == qemu-coco-dev* ]] && [ "${KBS_INGRESS}" = "aks" ]; then
create_container_timeout=120
fi
set_metadata_annotation "$pod_config" \
"io.katacontainers.config.runtime.create_container_timeout" \
"${create_container_timeout}"
# Set annotation to pull image in guest
set_metadata_annotation "${pod_config}" \
@@ -126,16 +133,14 @@ setup() {
cat $pod_config
add_allow_all_policy_to_yaml "$pod_config"
local wait_time=120
[[ "${KATA_HYPERVISOR}" == qemu-coco-dev* ]] && wait_time=300
local wait_time=300
if [[ "${KATA_HYPERVISOR}" == qemu-coco-dev* ]] && [ "${KBS_INGRESS}" = "aks" ]; then
wait_time=120
fi
k8s_create_pod "$pod_config" "$wait_time"
}
@test "Test we cannot pull a large image that pull time exceeds createcontainer timeout inside the guest" {
[ "$(uname -m)" == "s390x" ] && skip "See: https://github.com/kata-containers/kata-containers/issues/10838"
[ "${KATA_HYPERVISOR}" == "qemu-snp" ] && skip "See: https://github.com/kata-containers/kata-containers/issues/10838"
[ "${KATA_HYPERVISOR}" == "qemu-tdx" ] && skip "See: https://github.com/kata-containers/kata-containers/issues/10838"
storage_config=$(mktemp "${BATS_FILE_TMPDIR}/$(basename "${storage_config_template}").XXX")
local_device=$(create_loop_device)
LOCAL_DEVICE="$local_device" NODE_NAME="$node" envsubst < "$storage_config_template" > "$storage_config"
@@ -181,10 +186,6 @@ setup() {
}
@test "Test we can pull a large image inside the guest with large createcontainer timeout" {
[ "$(uname -m)" == "s390x" ] && skip "See: https://github.com/kata-containers/kata-containers/issues/10838"
[ "${KATA_HYPERVISOR}" == "qemu-snp" ] && skip "See: https://github.com/kata-containers/kata-containers/issues/10838"
[ "${KATA_HYPERVISOR}" == "qemu-tdx" ] && skip "See: https://github.com/kata-containers/kata-containers/issues/10838"
if [[ "${KATA_HYPERVISOR}" == qemu-coco-dev* ]] && [ "${KBS_INGRESS}" = "aks" ]; then
skip "skip this specific one due to issue https://github.com/kata-containers/kata-containers/issues/10299"
fi
@@ -203,8 +204,8 @@ setup() {
IMAGE="$large_image" NODE_NAME="$node" envsubst < "$pod_config_template" > "$pod_config"
# Set CreateContainerRequest timeout in the annotation to pull large image in guest
create_container_timeout=120
[[ "${KATA_HYPERVISOR}" == qemu-coco-dev* ]] && create_container_timeout=600
# Bare-metal CI runners' kubelets are configured with an equivalent runtimeRequestTimeout of 600s
create_container_timeout=600
set_metadata_annotation "$pod_config" \
"io.katacontainers.config.runtime.create_container_timeout" \
"${create_container_timeout}"
@@ -219,8 +220,7 @@ setup() {
cat $pod_config
add_allow_all_policy_to_yaml "$pod_config"
local wait_time=120
[[ "${KATA_HYPERVISOR}" == qemu-coco-dev* ]] && wait_time=600
local wait_time=600
k8s_create_pod "$pod_config" "$wait_time"
}
@@ -229,7 +229,9 @@ teardown() {
skip "Test not supported for ${KATA_HYPERVISOR}."
fi
[ "${SNAPSHOTTER:-}" = "nydus" ] || skip "None snapshotter was found but this test requires one"
if [ "${SNAPSHOTTER}" != "nydus" ] && [ -z "${EXPERIMENTAL_FORCE_GUEST_PULL}" ] && [ "${PULL_TYPE}" != "guest-pull" ]; then
skip "Either SNAPSHOTTER=nydus, EXPERIMENTAL_FORCE_GUEST_PULL, or PULL_TYPE=guest-pull must be set for this test"
fi
teardown_common "${node}" "${node_start_time:-}"
kubectl delete --ignore-not-found pvc trusted-pvc


@@ -34,17 +34,10 @@ setup() {
client_secret_template_yaml="${pod_config_dir}/openvpn/openvpn-client-secret.yaml.in"
client_secret_instance_yaml="${pod_config_dir}/openvpn/openvpn-client-secret-instance.yaml"
# See issue https://github.com/kata-containers/kata-containers/issues/11162 and
# other references to this issue in the genpolicy source folder.
if [[ "${SNAPSHOTTER:-}" == "nydus" ]]; then
add_allow_all_policy_to_yaml "$server_pod_yaml"
add_allow_all_policy_to_yaml "$client_pod_yaml"
else
policy_settings_dir="$(create_tmp_policy_settings_dir "${pod_config_dir}")"
add_requests_to_policy_settings "${policy_settings_dir}" "ReadStreamRequest"
auto_generate_policy "${policy_settings_dir}" "$server_pod_yaml" "$server_configmap_yaml" "--config-file $server_secret_template_yaml"
auto_generate_policy "${policy_settings_dir}" "$client_pod_yaml" "$client_configmap_yaml" "--config-file $client_secret_template_yaml"
fi
policy_settings_dir="$(create_tmp_policy_settings_dir "${pod_config_dir}")"
add_requests_to_policy_settings "${policy_settings_dir}" "ReadStreamRequest"
auto_generate_policy "${policy_settings_dir}" "$server_pod_yaml" "$server_configmap_yaml" "--config-file $server_secret_template_yaml"
auto_generate_policy "${policy_settings_dir}" "$client_pod_yaml" "$client_configmap_yaml" "--config-file $client_secret_template_yaml"
}
@test "Pods establishing a VPN connection using openvpn" {
@@ -92,9 +85,7 @@ teardown() {
echo "=== OpenVPN Client Pod Logs ==="
kubectl logs "$client_pod_name" || true
if [[ "${SNAPSHOTTER:-}" != "nydus" ]]; then
delete_tmp_policy_settings_dir "${policy_settings_dir}"
fi
delete_tmp_policy_settings_dir "${policy_settings_dir}"
teardown_common "${node}" "${node_start_time:-}"
# teardown cleans up pods, but not other resources

View File

@@ -11,7 +11,7 @@ load "${BATS_TEST_DIRNAME}/tests_common.sh"
setup() {
auto_generate_policy_enabled || skip "Auto-generated policy tests are disabled."
( [ "${KATA_HYPERVISOR}" == "qemu-tdx" ] || [ "${KATA_HYPERVISOR}" == "qemu-snp" ] ) && skip "https://github.com/kata-containers/kata-containers/issues/9846"
[[ "${RUNS_ON_AKS}" == "true" ]] || skip "https://github.com/kata-containers/kata-containers/issues/9846"
setup_common || die "setup_common failed"
pod_name="policy-pod-pvc"
pvc_name="policy-dev"
@@ -58,7 +58,7 @@ test_pod_policy_error() {
teardown() {
auto_generate_policy_enabled || skip "Auto-generated policy tests are disabled."
( [ "${KATA_HYPERVISOR}" == "qemu-tdx" ] || [ "${KATA_HYPERVISOR}" == "qemu-snp" ] ) && skip "https://github.com/kata-containers/kata-containers/issues/9846"
[[ "${RUNS_ON_AKS}" == "true" ]] || skip "https://github.com/kata-containers/kata-containers/issues/9846"
# Debugging information. Don't print the "Message:" line because it contains a truncated policy log.
kubectl describe pod "${pod_name}" | grep -v "Message:"

View File

@@ -194,8 +194,15 @@ assert_pod_fail() {
echo "Waiting for a container to fail"
sleep "${sleep_time}"
elapsed_time=$((elapsed_time+sleep_time))
if [[ $(kubectl get pod "${pod_name}" \
-o jsonpath='{.status.containerStatuses[0].state.waiting.reason}') = *BackOff* ]]; then
waiting_reason=$(kubectl get pod "${pod_name}" \
-o jsonpath='{.status.containerStatuses[0].state.waiting.reason}' 2>/dev/null || true)
terminated_reason=$(kubectl get pod "${pod_name}" \
-o jsonpath='{.status.containerStatuses[0].state.terminated.reason}' 2>/dev/null || true)
# BackOff/CrashLoopBackOff = container repeatedly failed; RunContainerError = e.g. image pull in guest failed
if [[ "${waiting_reason}" == *BackOff* ]] || [[ "${waiting_reason}" == *RunContainerError* ]]; then
return 0
fi
if [[ "${terminated_reason}" == "StartError" ]] || [[ "${terminated_reason}" == "Error" ]]; then
return 0
fi
if [[ "${elapsed_time}" -gt "${duration}" ]]; then
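The reworked `assert_pod_fail` loop above now polls both `state.waiting.reason` and `state.terminated.reason`. A sketch of just the classification step, with kubectl factored out; the function name is hypothetical, for illustration only:

```shell
#!/usr/bin/env bash
# Hypothetical classifier for the failure reasons assert_pod_fail accepts.
# Returns 0 when the pair of reasons indicates the container has failed.
pod_failed_reason() {
	local waiting="$1" terminated="$2"
	# BackOff/CrashLoopBackOff: container repeatedly failed;
	# RunContainerError: e.g. image pull in the guest failed.
	case "${waiting}" in
		*BackOff*|*RunContainerError*) return 0 ;;
	esac
	# StartError/Error: container terminated abnormally.
	case "${terminated}" in
		StartError|Error) return 0 ;;
	esac
	return 1
}
```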

View File

@@ -11,6 +11,13 @@ metadata:
labels:
app: openvpn-client
spec:
# Explicit user/group/supplementary groups to support nydus guest-pull.
# See issue https://github.com/kata-containers/kata-containers/issues/11162 and
# other references to this issue in the genpolicy source folder.
securityContext:
runAsUser: 0
runAsGroup: 0
supplementalGroups: [1, 2, 3, 4, 6, 10, 11, 20, 26, 27]
containers:
- name: openvpn-client
image: quay.io/kata-containers/alpine:3.22.1-openvpn

View File

@@ -11,6 +11,13 @@ metadata:
labels:
app: openvpn-server
spec:
# Explicit user/group/supplementary groups to support nydus guest-pull.
# See issue https://github.com/kata-containers/kata-containers/issues/11162 and
# other references to this issue in the genpolicy source folder.
securityContext:
runAsUser: 0
runAsGroup: 0
supplementalGroups: [1, 2, 3, 4, 6, 10, 11, 20, 26, 27]
containers:
- name: openvpn-server
image: quay.io/kata-containers/alpine:3.22.1-openvpn

View File

@@ -14,6 +14,7 @@ export AUTO_GENERATE_POLICY="${AUTO_GENERATE_POLICY:-no}"
export KATA_HOST_OS="${KATA_HOST_OS:-}"
export KATA_HYPERVISOR="${KATA_HYPERVISOR:-}"
export PULL_TYPE="${PULL_TYPE:-default}"
export RUNS_ON_AKS="${RUNS_ON_AKS:-false}"
declare -r kubernetes_dir=$(dirname "$(readlink -f "$0")")
declare -r runtimeclass_workloads_work_dir="${kubernetes_dir}/runtimeclass_workloads_work"
@@ -102,13 +103,8 @@ add_annotations_to_yaml() {
add_cbl_mariner_annotation_to_yaml() {
local -r yaml_file="$1"
local -r mariner_annotation_kernel="io.katacontainers.config.hypervisor.kernel"
local -r mariner_kernel_path="/usr/share/cloud-hypervisor/vmlinux.bin"
local -r mariner_annotation_image="io.katacontainers.config.hypervisor.image"
local -r mariner_image_path="/opt/kata/share/kata-containers/kata-containers-mariner.img"
add_annotations_to_yaml "${yaml_file}" "${mariner_annotation_kernel}" "${mariner_kernel_path}"
add_annotations_to_yaml "${yaml_file}" "${mariner_annotation_image}" "${mariner_image_path}"
}

View File

@@ -39,6 +39,7 @@ AUTO_GENERATE_POLICY="${AUTO_GENERATE_POLICY:-}"
GENPOLICY_PULL_METHOD="${GENPOLICY_PULL_METHOD:-}"
KATA_HYPERVISOR="${KATA_HYPERVISOR:-}"
KATA_HOST_OS="${KATA_HOST_OS:-}"
RUNS_ON_AKS="${RUNS_ON_AKS:-false}"
# Common setup for tests.
#
@@ -98,13 +99,11 @@ is_nvidia_gpu_platform() {
}
is_aks_cluster() {
case "${KATA_HYPERVISOR}" in
"qemu-tdx"|"qemu-snp"|qemu-nvidia-gpu*)
return 1
;;
*)
return 0
esac
if [[ "${RUNS_ON_AKS}" = "true" ]]; then
return 0
fi
return 1
}
adapt_common_policy_settings_for_non_coco() {
@@ -172,6 +171,24 @@ adapt_common_policy_settings_for_nvidia_gpu() {
jq '.kata_config.oci_version = "1.2.1"' "${settings_dir}/genpolicy-settings.json" > temp.json && mv temp.json "${settings_dir}/genpolicy-settings.json"
}
# Adapt OCI version in policy settings to match containerd version.
# containerd 2.2.x (active) vendors v1.3.0.
adapt_common_policy_settings_for_containerd_version() {
local settings_dir=${1}
info "Adapting common policy settings for containerd's latest release"
jq '.kata_config.oci_version = "1.3.0"' "${settings_dir}/genpolicy-settings.json" > temp.json && mv temp.json "${settings_dir}/genpolicy-settings.json"
}
# When using experimental-force-guest-pull, genpolicy must not use guest_pull (we pull via oci-distribution for policy generation).
adapt_common_policy_settings_for_experimental_force_guest_pull() {
local settings_dir=$1
info "Adapting common policy settings for experimental-force-guest-pull: disable guest_pull"
jq '.cluster_config.guest_pull = false' "${settings_dir}/genpolicy-settings.json" > temp.json
mv temp.json "${settings_dir}/genpolicy-settings.json"
}
# adapt common policy settings for various platforms
adapt_common_policy_settings() {
local settings_dir=$1
@@ -179,6 +196,8 @@ adapt_common_policy_settings() {
is_coco_platform || adapt_common_policy_settings_for_non_coco "${settings_dir}"
is_aks_cluster && adapt_common_policy_settings_for_aks "${settings_dir}"
is_nvidia_gpu_platform && adapt_common_policy_settings_for_nvidia_gpu "${settings_dir}"
[[ -n "${CONTAINER_ENGINE_VERSION:-}" ]] && adapt_common_policy_settings_for_containerd_version "${settings_dir}"
[[ "${PULL_TYPE:-}" == "experimental-force-guest-pull" ]] && adapt_common_policy_settings_for_experimental_force_guest_pull "${settings_dir}"
case "${KATA_HOST_OS}" in
"cbl-mariner")

View File

@@ -45,8 +45,8 @@ install_nvidia_fabricmanager() {
return
}
echo "chroot: Install NVIDIA fabricmanager"
eval "${APT_INSTALL}" nvidia-fabricmanager libnvidia-nscq
apt-mark hold nvidia-fabricmanager libnvidia-nscq
eval "${APT_INSTALL}" nvidia-fabricmanager libnvidia-nscq nvlsm
apt-mark hold nvidia-fabricmanager libnvidia-nscq nvlsm
}
install_userspace_components() {

View File

@@ -145,8 +145,8 @@ chisseled_nvswitch() {
mkdir -p usr/share/nvidia/nvswitch
cp -a "${stage_one}"/usr/bin/nv-fabricmanager bin/.
cp -a "${stage_one}"/usr/share/nvidia/nvswitch usr/share/nvidia/.
cp -a "${stage_one}"/usr/bin/nv-fabricmanager bin/.
cp -a "${stage_one}"/usr/share/nvidia/nvswitch usr/share/nvidia/.
libdir=usr/lib/"${machine_arch}"-linux-gnu
@@ -156,6 +156,14 @@ chisseled_nvswitch() {
# if the specified log file can't be opened or the path is empty.
# LOG_FILE_NAME=/var/log/fabricmanager.log -> setting to empty for stderr -> kmsg
sed -i 's|^LOG_FILE_NAME=.*|LOG_FILE_NAME=|' usr/share/nvidia/nvswitch/fabricmanager.cfg
# NVLINK SubnetManager dependencies
local nvlsm=usr/share/nvidia/nvlsm
mkdir -p "${nvlsm}"
cp -a "${stage_one}"/opt/nvidia/nvlsm/lib/libgrpc_mgr.so lib/.
cp -a "${stage_one}"/opt/nvidia/nvlsm/sbin/nvlsm sbin/.
cp -a "${stage_one}/${nvlsm}"/*.conf "${nvlsm}"/.
}
chisseled_dcgm() {

View File

@@ -100,6 +100,7 @@ TOOLS_CONTAINER_BUILDER="${TOOLS_CONTAINER_BUILDER:-}"
VIRTIOFSD_CONTAINER_BUILDER="${VIRTIOFSD_CONTAINER_BUILDER:-}"
AGENT_INIT="${AGENT_INIT:-no}"
MEASURED_ROOTFS="${MEASURED_ROOTFS:-no}"
CONFIDENTIAL_GUEST="${CONFIDENTIAL_GUEST:-no}"
USE_CACHE="${USE_CACHE:-}"
BUSYBOX_CONF_FILE=${BUSYBOX_CONF_FILE:-}
NVIDIA_GPU_STACK="${NVIDIA_GPU_STACK:-}"
@@ -141,6 +142,7 @@ docker run \
--env VIRTIOFSD_CONTAINER_BUILDER="${VIRTIOFSD_CONTAINER_BUILDER}" \
--env AGENT_INIT="${AGENT_INIT}" \
--env MEASURED_ROOTFS="${MEASURED_ROOTFS}" \
--env CONFIDENTIAL_GUEST="${CONFIDENTIAL_GUEST}" \
--env USE_CACHE="${USE_CACHE}" \
--env BUSYBOX_CONF_FILE="${BUSYBOX_CONF_FILE}" \
--env NVIDIA_GPU_STACK="${NVIDIA_GPU_STACK}" \

View File

@@ -43,6 +43,7 @@ readonly se_image_builder="${repo_root_dir}/tools/packaging/guest-image/build_se
ARCH=${ARCH:-$(uname -m)}
BUSYBOX_CONF_FILE="${BUSYBOX_CONF_FILE:-}"
MEASURED_ROOTFS=${MEASURED_ROOTFS:-no}
CONFIDENTIAL_GUEST=${CONFIDENTIAL_GUEST:-no}
USE_CACHE="${USE_CACHE:-"yes"}"
ARTEFACT_REGISTRY="${ARTEFACT_REGISTRY:-ghcr.io}"
ARTEFACT_REPOSITORY="${ARTEFACT_REPOSITORY:-kata-containers}"
@@ -452,6 +453,7 @@ install_image() {
#Install guest image for confidential guests
install_image_confidential() {
export CONFIDENTIAL_GUEST="yes"
if [ "${ARCH}" == "s390x" ]; then
export MEASURED_ROOTFS="no"
else
@@ -563,6 +565,7 @@ install_initrd() {
#Install guest initrd for confidential guests
install_initrd_confidential() {
export CONFIDENTIAL_GUEST="yes"
export MEASURED_ROOTFS="no"
install_initrd "confidential"
}
@@ -593,7 +596,7 @@ install_image_nvidia_gpu() {
export MEASURED_ROOTFS="yes"
local version=$(get_from_kata_deps .externals.nvidia.driver.version)
EXTRA_PKGS="apt curl ${EXTRA_PKGS}"
NVIDIA_GPU_STACK=${NVIDIA_GPU_STACK:-"driver=${version},compute,dcgm"}
NVIDIA_GPU_STACK=${NVIDIA_GPU_STACK:-"driver=${version},compute,dcgm,nvswitch"}
install_image "nvidia-gpu"
}
@@ -603,27 +606,29 @@ install_initrd_nvidia_gpu() {
export MEASURED_ROOTFS="no"
local version=$(get_from_kata_deps .externals.nvidia.driver.version)
EXTRA_PKGS="apt curl ${EXTRA_PKGS}"
NVIDIA_GPU_STACK=${NVIDIA_GPU_STACK:-"driver=${version},compute,dcgm"}
NVIDIA_GPU_STACK=${NVIDIA_GPU_STACK:-"driver=${version},compute,dcgm,nvswitch"}
install_initrd "nvidia-gpu"
}
# Install NVIDIA GPU confidential image
install_image_nvidia_gpu_confidential() {
export CONFIDENTIAL_GUEST="yes"
export AGENT_POLICY
export MEASURED_ROOTFS="yes"
local version=$(get_from_kata_deps .externals.nvidia.driver.version)
EXTRA_PKGS="apt curl ${EXTRA_PKGS}"
NVIDIA_GPU_STACK=${NVIDIA_GPU_STACK:-"driver=${version},compute,dcgm"}
NVIDIA_GPU_STACK=${NVIDIA_GPU_STACK:-"driver=${version},compute,dcgm,nvswitch"}
install_image "nvidia-gpu-confidential"
}
# Install NVIDIA GPU confidential initrd
install_initrd_nvidia_gpu_confidential() {
export CONFIDENTIAL_GUEST="yes"
export AGENT_POLICY
export MEASURED_ROOTFS="no"
local version=$(get_from_kata_deps .externals.nvidia.driver.version)
EXTRA_PKGS="apt curl ${EXTRA_PKGS}"
NVIDIA_GPU_STACK=${NVIDIA_GPU_STACK:-"driver=${version},compute,dcgm"}
NVIDIA_GPU_STACK=${NVIDIA_GPU_STACK:-"driver=${version},compute,dcgm,nvswitch"}
install_initrd "nvidia-gpu-confidential"
}
@@ -726,10 +731,12 @@ install_kernel() {
local extra_cmd=""
case "${ARCH}" in
s390x)
export CONFIDENTIAL_GUEST="yes"
export MEASURED_ROOTFS="no"
extra_cmd="-x"
;;
x86_64)
export CONFIDENTIAL_GUEST="yes"
export MEASURED_ROOTFS="yes"
extra_cmd="-x"
;;
@@ -741,6 +748,7 @@ install_kernel() {
}
install_kernel_cca_confidential() {
export CONFIDENTIAL_GUEST="yes"
export MEASURED_ROOTFS="yes"
install_kernel_helper \
@@ -765,6 +773,7 @@ install_kernel_nvidia_gpu_dragonball_experimental() {
#Install GPU enabled kernel asset
install_kernel_nvidia_gpu() {
export CONFIDENTIAL_GUEST="yes"
export MEASURED_ROOTFS="yes"
install_kernel_helper \
"assets.kernel.nvidia" \

View File

@@ -520,9 +520,12 @@ build_kernel() {
popd >>/dev/null
if [[ "${gpu_vendor}" == "${VENDOR_NVIDIA}" ]]; then
# We need in-tree modules as well as out-of-tree ones for NVIDIA GPU
make -C "${kernel_path}" -j "$(nproc)" INSTALL_MOD_STRIP=1 INSTALL_MOD_PATH="${kernel_path}" modules_install
pushd open-gpu-kernel-modules
make -j "$(nproc)" CC=gcc SYSSRC="${kernel_path}" > /dev/null
make INSTALL_MOD_STRIP=1 INSTALL_MOD_PATH=${kernel_path} -j "$(nproc)" CC=gcc SYSSRC="${kernel_path}" modules_install
make INSTALL_MOD_STRIP=1 INSTALL_MOD_PATH="${kernel_path}" -j "$(nproc)" CC=gcc SYSSRC="${kernel_path}" modules_install
make -j "$(nproc)" CC=gcc SYSSRC="${kernel_path}" clean > /dev/null
fi
}

View File

@@ -27,3 +27,11 @@ CONFIG_ARM_SMMU_V3_SVA=y
CONFIG_CRYPTO_ECC=y
CONFIG_CRYPTO_ECDH=y
CONFIG_CRYPTO_ECDSA=y
# HGX/DGX platform
CONFIG_INFINIBAND=m
CONFIG_INFINIBAND_USER_MAD=m
CONFIG_NET_VENDOR_MELLANOX=y
CONFIG_MLX5_CORE=m
CONFIG_MLX5_INFINIBAND=m

View File

@@ -26,3 +26,11 @@ CONFIG_CRYPTO_ECDSA=y
# Dependency of _CRYPTO_
CONFIG_MODULE_SIG=y
# HGX/DGX platform
CONFIG_INFINIBAND=m
CONFIG_INFINIBAND_USER_MAD=m
CONFIG_NET_VENDOR_MELLANOX=y
CONFIG_MLX5_CORE=m
CONFIG_MLX5_INFINIBAND=m

View File

@@ -1 +1 @@
181
182

View File

@@ -79,7 +79,7 @@ build_clh_from_source() {
else
./scripts/dev_cli.sh build --release --libc "${libc}"
fi
rm -f cloud-hypervisor
rm -rf cloud-hypervisor
cp build/cargo_target/$(uname -m)-unknown-linux-${libc}/release/cloud-hypervisor .
popd
}

View File

@@ -26,11 +26,12 @@ DESTDIR=${DESTDIR:-${PWD}}
PREFIX=${PREFIX:-/opt/kata}
container_image="${KERNEL_CONTAINER_BUILDER:-$(get_kernel_image_name)}"
MEASURED_ROOTFS=${MEASURED_ROOTFS:-no}
CONFIDENTIAL_GUEST=${CONFIDENTIAL_GUEST:-no}
KBUILD_SIGN_PIN="${KBUILD_SIGN_PIN:-}"
kernel_builder_args="-a ${ARCH:-} $*"
KERNEL_DEBUG_ENABLED=${KERNEL_DEBUG_ENABLED:-"no"}
if [[ "${MEASURED_ROOTFS}" == "yes" ]]; then
if [[ "${MEASURED_ROOTFS}" == "yes" ]] || [[ "${CONFIDENTIAL_GUEST}" == "yes" ]]; then
kernel_builder_args+=" -m"
fi

View File

@@ -78,19 +78,19 @@ mapping:
- Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-aks / run-k8s-tests (cbl-mariner, clh, normal)
- Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-aks / run-k8s-tests (cbl-mariner, clh, small, containerd)
- Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-aks / run-k8s-tests (cbl-mariner, clh, small, oci-distribution)
- Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-aks / run-k8s-tests (ubuntu, clh, normal)
- Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-aks / run-k8s-tests (ubuntu, clh, small)
- Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-aks / run-k8s-tests (ubuntu, cloud-hypervisor, normal)
- Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-aks / run-k8s-tests (ubuntu, cloud-hypervisor, small)
- Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-aks / run-k8s-tests (ubuntu, dragonball, normal)
- Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-aks / run-k8s-tests (ubuntu, dragonball, small)
- Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-aks / run-k8s-tests (ubuntu, qemu, normal)
- Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-aks / run-k8s-tests (ubuntu, qemu, small)
- Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-aks / run-k8s-tests (ubuntu, qemu-runtime-rs, small)
- Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-aks / run-k8s-tests (ubuntu, qemu-runtime-rs, normal)
- Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-free-runner / run-k8s-tests (clh, lts)
- Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-free-runner / run-k8s-tests (clh, active)
- Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-free-runner / run-k8s-tests (dragonball, lts)
- Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-free-runner / run-k8s-tests (dragonball, active)
- Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-free-runner / run-k8s-tests (qemu, lts)
- Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-free-runner / run-k8s-tests (qemu, active)
- Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-free-runner / run-k8s-tests (qemu-runtime-rs, lts)
- Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-free-runner / run-k8s-tests (qemu-runtime-rs, active)
- Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-free-runner / run-k8s-tests (cloud-hypervisor, lts)
- Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-free-runner / run-k8s-tests (cloud-hypervisor, active)
- Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-zvsi / run-k8s-tests (devmapper, qemu, kubeadm)
- Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-zvsi / run-k8s-tests (nydus, qemu-coco-dev, kubeadm)
# - Kata Containers CI / kata-containers-ci-on-push / run-kata-coco-tests / run-k8s-tests-on-tee (sev-snp, qemu-snp)
- Kata Containers CI / kata-containers-ci-on-push / run-kata-coco-tests / run-k8s-tests-on-tee (sev-snp, qemu-snp)
- Kata Containers CI / kata-containers-ci-on-push / run-kata-coco-tests / run-k8s-tests-coco-nontee (qemu-coco-dev, nydus, guest-pull)
- Kata Containers CI / kata-containers-ci-on-push / run-kata-coco-tests / run-k8s-tests-coco-nontee (qemu-coco-dev-runtime-rs, nydus, guest-pull)
- Kata Containers CI / kata-containers-ci-on-push / run-kata-deploy-tests / run-kata-deploy-tests (qemu, k0s)

View File

@@ -75,7 +75,7 @@ assets:
url: "https://github.com/cloud-hypervisor/cloud-hypervisor"
uscan-url: >-
https://github.com/cloud-hypervisor/cloud-hypervisor/tags.*/v?(\d\S+)\.tar\.gz
version: "v48.0"
version: "v50.0"
firecracker:
description: "Firecracker micro-VMM"
@@ -309,7 +309,7 @@ externals:
# version older than them.
version: "v1.7.25"
lts: "v1.7"
active: "v2.1"
active: "v2.2"
critools:
description: "CLI tool for Container Runtime Interface (CRI)"