Compare commits

..

48 Commits

Author SHA1 Message Date
Fabiano Fidêncio
86ef395595 fixup! runtime: add hypervisor options for NUMA topology 2025-11-28 17:20:21 +00:00
Fabiano Fidêncio
01fd3cd8bc fixup! runtime: add annotation default_maxmemory 2025-11-28 17:20:21 +00:00
Fabiano Fidêncio
7c8de25398 fixup! govmm: setup qemu VM NUMA topology for initial CPUs and memory 2025-11-28 17:20:21 +00:00
Fabiano Fidêncio
a92cc4ede1 fixup! govmm: setup qemu VM NUMA topology for initial CPUs and memory 2025-11-28 17:20:21 +00:00
Fabiano Fidêncio
08a0b71a57 fixup! kernel: enable x86-64 ACPI NUMA detection 2025-11-28 17:20:18 +00:00
Fabiano Fidêncio
2e9bd5441d fixup! runtime: add hypervisor options for NUMA topology 2025-11-28 17:19:28 +00:00
Zvonko Kaiser
4983bd9e1e arm64: Add Memory Topology
Add proper NUMA memory topology support to arm64 qemu.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-11-28 17:19:28 +00:00
Zvonko Kaiser
3e4c875add runtime: Check NUMA config
If NUMA is enabled, we also need to set static
resource management, since hot-plug of CPU and memory
does not work with NUMA.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-11-28 17:19:28 +00:00
Konstantin Khlebnikov
a5c39aae12 runtime: add annotation default_maxmemory
It seems there is no annotation for disabling VM memory hotplug.
This should be useful at least until hotplug becomes fully NUMA-aware.

Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com>
2025-11-28 17:19:28 +00:00
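The annotation name matches the docs table updated later in this diff (`default_maxmemory`, a uint32 in MiB). A hedged sketch of using it from a pod spec — the pod name, runtime class, and value are illustrative, not from the commit:

```shell
# Sketch: cap VM memory via pod annotation so it cannot grow through hotplug.
# All names/values below are illustrative.
cat <<'EOF' > /tmp/pod-maxmem-example.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kata-no-hotplug
  annotations:
    # Maximum VM memory in MiB; setting it equal to the boot memory
    # effectively disables memory hotplug for this sandbox.
    io.katacontainers.config.hypervisor.default_maxmemory: "2048"
spec:
  runtimeClassName: kata-qemu
  containers:
  - name: c
    image: busybox
EOF
```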
Konstantin Khlebnikov
4c7f1ac1e5 govmm: setup qemu VM NUMA topology for initial CPUs and memory
If NUMA topology is enabled:
- group CPUs into per-NUMA-node sockets
- split memory into per-NUMA-node modules
- report the NUMA node for vCPU threads

Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com>
2025-11-28 17:19:28 +00:00
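Per the govmm diff further down (socket interleaving over nodes and one memory backend per module), the generated QEMU command line looks roughly like the following for 2 sockets and 2 NUMA nodes. The concrete values are illustrative; the exact flags are emitted by govmm, not typed by hand:

```shell
qemu-system-x86_64 \
    -smp 4,sockets=2,cores=1,threads=2,maxcpus=4 \
    -numa cpu,node-id=0,socket-id=0 \
    -numa cpu,node-id=1,socket-id=1 \
    -object memory-backend-ram,id=dimm0,size=1G \
    -numa node,nodeid=0,memdev=dimm0 \
    -object memory-backend-ram,id=dimm1,size=1G \
    -numa node,nodeid=1,memdev=dimm1
```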
Konstantin Khlebnikov
e7fd1518ee runtime: enforce NUMA topology by VCPU threads affinity
For optimal performance, vCPU threads must utilize only NUMA-local CPUs.

Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com>
2025-11-28 17:19:27 +00:00
Konstantin Khlebnikov
8381ee44a1 runtime: add hypervisor options for NUMA topology
With enable_numa=true the hypervisor will expose the host NUMA topology
as is: map VM NUMA nodes to host nodes 1:1 and bind vCPUs to the related CPUs.

The "numa_mapping" option allows redefining the NUMA node mapping:
- map each VM node to a particular host node, or to several NUMA nodes
- emulate NUMA on a host without NUMA (useful for tests)

Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com>
2025-11-28 17:19:27 +00:00
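A minimal sketch of the resulting configuration, combining this commit's options with the static-resource-management requirement from the "runtime: Check NUMA config" commit. The section placement and the `numa_mapping` value format are assumptions here; only the key names come from the commit messages:

```shell
# Hypothetical configuration.toml fragment, written via heredoc for illustration.
cat <<'EOF' > /tmp/kata-numa-example.toml
[hypervisor.qemu]
# Expose the host NUMA topology 1:1 and bind vCPUs to node-local host CPUs.
enable_numa = true
# Assumed value format: which host node backs each VM node.
numa_mapping = ["0", "1"]

[runtime]
# Required with NUMA: CPU/memory hot-plug does not work, so size statically.
static_sandbox_resource_mgmt = true
EOF
```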
Konstantin Khlebnikov
5fab5f9e5e kernel: enable x86-64 ACPI NUMA detection
CONFIG_NUMA is already on,
but without CONFIG_X86_64_ACPI_NUMA the kernel cannot detect NUMA.

Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com>
2025-11-28 17:19:27 +00:00
Konstantin Khlebnikov
2ab5be14c1 qemu: enable NUMA support
Link qemu with libnuma and enable NUMA feature flag.

Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com>
2025-11-28 17:19:27 +00:00
Fabiano Fidêncio
e3646adedf gatekeeper: Drop SEV-SNP from required
SEV-SNP machine is failing due to nydus not being deployed in the
machine.

We cannot easily contact the maintainers due to the US holidays, and I
think this should become a criterion for a machine not to be added as
required again (coverage across different regions).

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-28 12:46:07 +01:00
Steve Horsman
8534afb9e8 Merge pull request #12150 from stevenhorsman/add-gatekeeper-triggers
ci: Add two extra gatekeeper triggers
2025-11-28 09:34:41 +00:00
Zvonko Kaiser
9dfa6df2cb agent: Bump CDI-rs to latest
Latest version of container-device-interface is v0.1.1

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-11-27 22:57:50 +01:00
Fabiano Fidêncio
776e08dbba build: Add nvidia image rootfs builds
So far we've only been building the initrd for the nvidia rootfs.
However, we're also interested in having the image being used for a few
use-cases.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-27 22:46:07 +01:00
stevenhorsman
531311090c ci: Add two extra gatekeeper triggers
We hit a case where gatekeeper was failing because it thought the WIP check
had failed, even though the PR had since been edited to remove WIP from
the title. We should listen to edits and unlabels of the PR to ensure that
gatekeeper doesn't get outdated in situations like this.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-27 16:45:04 +00:00
Zvonko Kaiser
bfc9e446e1 kernel: Add NUMA config
Add per arch specific NUMA enablement kernel settings

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-11-27 12:45:27 +01:00
Steve Horsman
c5ae8c4ba0 Merge pull request #12144 from BbolroC/use-runs-on-to-choose-runners
GHA: Use `runs-on` only for choosing proper runners
2025-11-27 09:54:39 +00:00
Fabiano Fidêncio
2e1ca580a6 runtime-rs: Only QEMU supports templating
We can remove the checks and default values attribution from all other
shims.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-27 10:31:28 +01:00
Alex Lyn
df8315c865 Merge pull request #12130 from Apokleos/stability-rs
tests: Enable stability tests for runtime-rs
2025-11-27 14:27:58 +08:00
Fupan Li
50dce0cc89 Merge pull request #12141 from Apokleos/fix-nydus-sn
tests: Properly handle containerd config based on version
2025-11-27 11:59:59 +08:00
Fabiano Fidêncio
fa42641692 kata-deploy: Cover all flavours of QEMU shims with multiInstallSuffix
We were missing all the runtime-rs variants.

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-26 17:44:16 +01:00
Fabiano Fidêncio
96d1e0fe97 kata-deploy: Fix multiInstallSuffix for NV shims
When using the multiInstallSuffix we must be cautious when using the shim
name, as qemu-nvidia-gpu* doesn't actually have a matching QEMU itself,
but should rather be mapped to:
qemu-nvidia-gpu -> qemu
qemu-nvidia-gpu-snp -> qemu-snp-experimental
qemu-nvidia-gpu-tdx -> qemu-tdx-experimental

Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
2025-11-26 17:44:16 +01:00
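The mapping above can be sketched as a small shell helper — the function name is hypothetical, only the shim-to-QEMU pairs come from the commit message:

```shell
# Hypothetical helper: map an NVIDIA GPU shim name to its actual QEMU flavour.
qemu_for_shim() {
    case "$1" in
        qemu-nvidia-gpu)     echo "qemu" ;;
        qemu-nvidia-gpu-snp) echo "qemu-snp-experimental" ;;
        qemu-nvidia-gpu-tdx) echo "qemu-tdx-experimental" ;;
        *)                   echo "$1" ;;   # other shims match their QEMU 1:1
    esac
}

qemu_for_shim qemu-nvidia-gpu-snp   # prints: qemu-snp-experimental
```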
Markus Rudy
d8f347d397 Merge pull request #12112 from shwetha-s-poojary/fix_list_routes
agent: fix the list_routes failure
2025-11-26 17:32:10 +01:00
Steve Horsman
3573408f6b Merge pull request #11586 from zvonkok/numa-qemu
qemu: Enable NUMA
2025-11-26 16:28:16 +00:00
Steve Horsman
aae483bf1d Merge pull request #12096 from Amulyam24/enable-ibm-runners
ci: re-enable IBM runners for ppc64le and s390x
2025-11-26 13:51:21 +00:00
Steve Horsman
5c09849fe6 Merge pull request #12143 from kata-containers/topic/add-report-tests-to-workflows
workflows: Add Report tests to all workflows
2025-11-26 13:18:21 +00:00
Steve Horsman
ed7108e61a Merge pull request #12138 from arvindskumar99/SNPrequired
CI: readding SNP as required
2025-11-26 11:33:07 +00:00
Amulyam24
43a004444a ci: re-enable IBM runners for ppc64le and s390x
This PR re-enables the IBM runners for ppc64le/s390x build jobs and s390x static checks.

Signed-off-by: Amulyam24 <amulmek1@in.ibm.com>
2025-11-26 16:20:01 +05:30
Hyounggyu Choi
6f761149a7 GHA: Use runs-on only for choosing proper runners
Fixes: #12123

`include` in #12069, introduced to choose a different runner
based on component, leads to another set of redundant jobs
where `matrix.command` is empty.
This commit goes back to the `runs-on` solution, but makes
the condition human-readable.

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2025-11-26 11:35:30 +01:00
Alex Lyn
4e450691f4 tests: Unify nydus configuration to containerd v3 schema
Containerd configuration syntax (`config.toml`) varies across versions,
requiring per-version logic for fields like `runtime`.

However, testing confirms that containerd LTS (1.7.x) and newer
versions fully support the v3 schema for the nydus remote snapshotter.

This commit changes the previous containerd v1 settings in `config.toml`.
Instead, it introduces a unified v3-style configuration for nydus, which
is valid for both LTS and active containerd releases.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-26 17:58:16 +08:00
stevenhorsman
4c59cf1a5d workflows: Add Report tests to all workflows
In the CoCo test jobs @wainersm created a report-tests step
that summarises the jobs, so they are easier to understand and
get results for. This is very useful, so let's roll it out to all the bats
tests.

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2025-11-26 09:28:36 +00:00
shwetha-s-poojary
4510e6b49e agent: fix the list_routes failure
relax list_routes tests so not every route requires a device

Signed-off-by: shwetha-s-poojary <shwetha.s-poojary@ibm.com>
2025-11-25 20:25:46 -08:00
Xuewei Niu
04e1cf06ed Merge pull request #12137 from Apokleos/fix-netdev-mq
runtime-rs: fix QMP 'mq' parameter type in netdev_add to boolean
2025-11-26 11:49:33 +08:00
Arvind Kumar
c085011a0a CI: readding SNP as required
Reenabling the SNP CI node as a required test.

Signed-off-by: Arvind Kumar <arvinkum@amd.com>
2025-11-25 17:05:01 +00:00
Zvonko Kaiser
45cce49b72 shellcheck: Fix [] [[]] SC2166
This file is a beast so doing one shellcheck fix after the other.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-11-25 15:46:16 +01:00
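SC2166 warns that `-a`/`-o` inside `[ ]` are not well defined by POSIX; the usual fix is `[[ ... && ... ]]` (bash) or separate `[ ]` tests. A minimal before/after sketch with made-up variables:

```shell
arch="x86_64"; debug="true"

# Before: SC2166 — "-a" inside [ ] is not well defined.
if [ "${arch}" = "x86_64" -a "${debug}" = "true" ]; then
    echo "legacy test"
fi

# After: use [[ ]] with && in bash (or two [ ] tests joined by && in POSIX sh).
if [[ "${arch}" == "x86_64" && "${debug}" == "true" ]]; then
    echo "fixed test"
fi
```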
Zvonko Kaiser
b2c9439314 qemu: Update tools/packaging/static-build/qemu/build-qemu.sh
This nit was introduced by 227e717 during the v3.1.0 era. The + sign from the bash substitution ${CI:+...} was copied by mistake.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-25 15:46:09 +01:00
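For context on the nit being fixed: `${VAR:+word}` expands to `word` only when `VAR` is set and non-empty, so a `+` copied out of that form becomes a stray literal argument. A quick demonstration (flag name is illustrative):

```shell
# ${VAR:+word} expands to "word" only when VAR is set and non-empty.
CI="true"
echo "with CI:    ${CI:+--enable-debug}"   # prints: with CI:    --enable-debug

unset CI
echo "without CI: ${CI:+--enable-debug}"   # expands to nothing
```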
Zvonko Kaiser
2f3d42c0e4 shellcheck: build-qemu.sh is clean
Make shellcheck happy

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-11-25 15:46:07 +01:00
Zvonko Kaiser
f55de74ac5 shellcheck: build-base-qemu.sh is clean
Make shellcheck happy

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-11-25 15:45:49 +01:00
Zvonko Kaiser
040f920de1 qemu: Enable NUMA support
Enable NUMA support with QEMU.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2025-11-25 15:45:00 +01:00
Alex Lyn
7f4d856e38 tests: Enable nydus tests for qemu-runtime-rs
We need to enable nydus tests for qemu-runtime-rs, and this commit
aims to do it.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-25 17:45:57 +08:00
Alex Lyn
98df3e760c runtime-rs: fix QMP 'mq' parameter type in netdev_add to boolean
QEMU netdev_add QMP command requires the 'mq' (multi-queue) argument
to be of boolean type (`true` / `false`). In runtime-rs the virtio-net
device hotplug logic currently passes a string value (e.g. "on"/"off"),
which causes QEMU to reject the command:
```
    Invalid parameter type for 'mq', expected: boolean
```
This patch modifies `hotplug_network_device` to insert 'mq' as a proper
boolean value of `true`. This fixes sandbox startup failures when
multi-queue is enabled.

Fixes #12136

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-25 17:34:36 +08:00
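The corrected QMP payload then has the shape below — `mq` as a JSON boolean rather than the string `"on"`. Only the `mq` argument comes from the commit message; the other netdev arguments are illustrative:

```shell
# Correct shape: 'mq' is a JSON boolean, not a string.
cat <<'EOF' > /tmp/netdev_add-example.json
{ "execute": "netdev_add",
  "arguments": { "type": "tap", "id": "net0", "mq": true } }
EOF
grep -q '"mq": true' /tmp/netdev_add-example.json && echo "boolean mq"
```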
Alex Lyn
23393d47f6 tests: Enable stability tests for qemu-runtime-rs on nontee
Enable the stability tests for qemu-runtime-rs CoCo on non-TEE
environments

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-25 16:18:37 +08:00
Alex Lyn
f1d971040d tests: Enable run-nerdctl-tests for qemu-runtime-rs
Enable nerdctl tests for qemu-runtime-rs

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-25 16:14:50 +08:00
Alex Lyn
c7842aed16 tests: Enable stability tests for runtime-rs
The previous change set left out qemu-runtime-rs; we enable it in this commit.

Signed-off-by: Alex Lyn <alex.lyn@antgroup.com>
2025-11-25 16:12:12 +08:00
56 changed files with 610 additions and 233 deletions

View File

@@ -25,6 +25,7 @@ self-hosted-runner:
- ppc64le-k8s
- ppc64le-small
- ubuntu-24.04-ppc64le
- ubuntu-24.04-s390x
- metrics
- riscv-builder
- sev-snp

View File

@@ -12,7 +12,12 @@ name: Build checks
jobs:
check:
name: check
runs-on: ${{ matrix.runner || inputs.instance }}
runs-on: >-
${{
( contains(inputs.instance, 's390x') && matrix.component.name == 'runtime' ) && 's390x' ||
( contains(inputs.instance, 'ppc64le') && (matrix.component.name == 'runtime' || matrix.component.name == 'agent') ) && 'ppc64le' ||
inputs.instance
}}
strategy:
fail-fast: false
matrix:
@@ -70,36 +75,6 @@ jobs:
- protobuf-compiler
instance:
- ${{ inputs.instance }}
include:
- component:
name: runtime
path: src/runtime
needs:
- golang
- XDG_RUNTIME_DIR
instance: ubuntu-24.04-s390x
runner: s390x
- component:
name: runtime
path: src/runtime
needs:
- golang
- XDG_RUNTIME_DIR
instance: ubuntu-24.04-ppc64le
runner: ppc64le
- component:
name: agent
path: src/agent
needs:
- rust
- libdevmapper
- libseccomp
- protobuf-compiler
- clang
instance: ubuntu-24.04-ppc64le
runner: ppc64le
steps:
- name: Adjust a permission for repo

View File

@@ -171,6 +171,8 @@ jobs:
- rootfs-image
- rootfs-image-confidential
- rootfs-image-mariner
- rootfs-image-nvidia-gpu
- rootfs-image-nvidia-gpu-confidential
- rootfs-initrd
- rootfs-initrd-confidential
- rootfs-initrd-nvidia-gpu

View File

@@ -150,6 +150,7 @@ jobs:
matrix:
asset:
- rootfs-image
- rootfs-image-nvidia-gpu
- rootfs-initrd
- rootfs-initrd-nvidia-gpu
steps:

View File

@@ -32,7 +32,7 @@ jobs:
permissions:
contents: read
packages: write
runs-on: ppc64le-small
runs-on: ubuntu-24.04-ppc64le
strategy:
matrix:
asset:
@@ -89,7 +89,7 @@ jobs:
build-asset-rootfs:
name: build-asset-rootfs
runs-on: ppc64le-small
runs-on: ubuntu-24.04-ppc64le
needs: build-asset
permissions:
contents: read
@@ -170,7 +170,7 @@ jobs:
build-asset-shim-v2:
name: build-asset-shim-v2
runs-on: ppc64le-small
runs-on: ubuntu-24.04-ppc64le
needs: [build-asset, build-asset-rootfs, remove-rootfs-binary-artifacts]
permissions:
contents: read
@@ -230,7 +230,7 @@ jobs:
create-kata-tarball:
name: create-kata-tarball
runs-on: ppc64le-small
runs-on: ubuntu-24.04-ppc64le
needs: [build-asset, build-asset-rootfs, build-asset-shim-v2]
permissions:
contents: read

View File

@@ -32,7 +32,7 @@ permissions: {}
jobs:
build-asset:
name: build-asset
runs-on: s390x
runs-on: ubuntu-24.04-s390x
permissions:
contents: read
packages: write
@@ -257,7 +257,7 @@ jobs:
build-asset-shim-v2:
name: build-asset-shim-v2
runs-on: s390x
runs-on: ubuntu-24.04-s390x
needs: [build-asset, build-asset-rootfs, remove-rootfs-binary-artifacts]
permissions:
contents: read
@@ -319,7 +319,7 @@ jobs:
create-kata-tarball:
name: create-kata-tarball
runs-on: s390x
runs-on: ubuntu-24.04-s390x
needs:
- build-asset
- build-asset-rootfs

View File

@@ -147,7 +147,7 @@ jobs:
tag: ${{ inputs.tag }}-s390x
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
runner: s390x
runner: ubuntu-24.04-s390x
arch: s390x
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
@@ -165,7 +165,7 @@ jobs:
tag: ${{ inputs.tag }}-ppc64le
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
runner: ppc64le-small
runner: ubuntu-24.04-ppc64le
arch: ppc64le
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}

View File

@@ -10,7 +10,9 @@ on:
- opened
- synchronize
- reopened
- edited
- labeled
- unlabeled
permissions: {}

View File

@@ -31,7 +31,7 @@ jobs:
permissions:
contents: read
packages: write
runs-on: ppc64le-small
runs-on: ubuntu-24.04-ppc64le
steps:
- name: Login to Kata Containers ghcr.io
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0

View File

@@ -35,7 +35,7 @@ jobs:
permissions:
contents: read
packages: write
runs-on: s390x
runs-on: ubuntu-24.04-s390x
steps:
- name: Login to Kata Containers ghcr.io
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0

View File

@@ -142,6 +142,10 @@ jobs:
timeout-minutes: 60
run: bash tests/integration/kubernetes/gha-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests
- name: Refresh OIDC token in case access token expired
if: always()
uses: azure/login@a457da9ea143d694b1b9c7c869ebb04ebe844ef5 # v2.3.0

View File

@@ -68,6 +68,10 @@ jobs:
timeout-minutes: 30
run: bash tests/integration/kubernetes/gha-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests
- name: Collect artifacts ${{ matrix.vmm }}
if: always()
run: bash tests/integration/kubernetes/gha-run.sh collect-artifacts

View File

@@ -89,6 +89,11 @@ jobs:
run: bash tests/integration/kubernetes/gha-run.sh run-nv-tests
env:
NGC_API_KEY: ${{ secrets.NGC_API_KEY }}
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests
- name: Collect artifacts ${{ matrix.environment.vmm }}
if: always()
run: bash tests/integration/kubernetes/gha-run.sh collect-artifacts

View File

@@ -75,3 +75,7 @@ jobs:
- name: Run tests
timeout-minutes: 30
run: bash tests/integration/kubernetes/gha-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests

View File

@@ -131,6 +131,10 @@ jobs:
timeout-minutes: 60
run: bash tests/integration/kubernetes/gha-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests
- name: Delete kata-deploy
if: always()
run: bash tests/integration/kubernetes/gha-run.sh cleanup-zvsi

View File

@@ -140,6 +140,10 @@ jobs:
timeout-minutes: 300
run: bash tests/stability/gha-stability-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests
- name: Refresh OIDC token in case access token expired
if: always()
uses: azure/login@a457da9ea143d694b1b9c7c869ebb04ebe844ef5 # v2.3.0

View File

@@ -102,6 +102,10 @@ jobs:
- name: Run tests
run: bash tests/functional/kata-deploy/gha-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests
- name: Refresh OIDC token in case access token expired
if: always()
uses: azure/login@a457da9ea143d694b1b9c7c869ebb04ebe844ef5 # v2.3.0

View File

@@ -85,3 +85,7 @@ jobs:
- name: Run tests
run: bash tests/functional/kata-deploy/gha-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests

View File

@@ -29,7 +29,7 @@ jobs:
matrix:
instance:
- "ubuntu-24.04-arm"
- "s390x"
- "ubuntu-24.04-s390x"
- "ubuntu-24.04-ppc64le"
uses: ./.github/workflows/build-checks.yaml
with:

View File

@@ -48,6 +48,7 @@ There are several kinds of Kata configurations and they are listed below.
| `io.katacontainers.config.hypervisor.block_device_driver` | string | the driver to be used for block device, valid values are `virtio-blk`, `virtio-scsi`, `nvdimm`|
| `io.katacontainers.config.hypervisor.cpu_features` | `string` | Comma-separated list of CPU features to pass to the CPU (QEMU) |
| `io.katacontainers.config.hypervisor.default_max_vcpus` | uint32| the maximum number of vCPUs allocated for the VM by the hypervisor |
| `io.katacontainers.config.hypervisor.default_maxmemory` | uint32| the maximum memory assigned for a VM by the hypervisor in `MiB` |
| `io.katacontainers.config.hypervisor.default_memory` | uint32| the memory assigned for a VM by the hypervisor in `MiB` |
| `io.katacontainers.config.hypervisor.default_vcpus` | float32| the default vCPUs assigned for a VM by the hypervisor |
| `io.katacontainers.config.hypervisor.disable_block_device_use` | `boolean` | disallow a block device from being used |

src/agent/Cargo.lock (generated)
View File

@@ -780,9 +780,9 @@ dependencies = [
[[package]]
name = "container-device-interface"
version = "0.1.0"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "653849f0c250f73d9afab4b2a9a6b07adaee1f34c44ffa6f2d2c3f9392002c1a"
checksum = "62aabe8ef7f15f505201aa88a97f4856fd572cb869b73232db95ade2366090cd"
dependencies = [
"anyhow",
"clap",
@@ -1207,9 +1207,9 @@ dependencies = [
[[package]]
name = "fancy-regex"
version = "0.14.0"
version = "0.16.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6e24cb5a94bcae1e5408b0effca5cd7172ea3c5755049c5f3af4cd283a165298"
checksum = "998b056554fbe42e03ae0e152895cd1a7e1002aec800fdc6635d20270260c46f"
dependencies = [
"bit-set",
"regex-automata 0.4.9",
@@ -2007,9 +2007,9 @@ dependencies = [
[[package]]
name = "jsonschema"
version = "0.30.0"
version = "0.33.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f1b46a0365a611fbf1d2143104dcf910aada96fafd295bab16c60b802bf6fa1d"
checksum = "d46662859bc5f60a145b75f4632fbadc84e829e45df6c5de74cfc8e05acb96b5"
dependencies = [
"ahash 0.8.12",
"base64 0.22.1",
@@ -3405,9 +3405,9 @@ dependencies = [
[[package]]
name = "referencing"
version = "0.30.0"
version = "0.33.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c8eff4fa778b5c2a57e85c5f2fe3a709c52f0e60d23146e2151cbef5893f420e"
checksum = "9e9c261f7ce75418b3beadfb3f0eb1299fe8eb9640deba45ffa2cb783098697d"
dependencies = [
"ahash 0.8.12",
"fluent-uri 0.3.2",

View File

@@ -186,7 +186,7 @@ base64 = "0.22"
sha2 = "0.10.8"
async-compression = { version = "0.4.22", features = ["tokio", "gzip"] }
container-device-interface = "0.1.0"
container-device-interface = "0.1.1"
[target.'cfg(target_arch = "s390x")'.dependencies]
pv_core = { git = "https://github.com/ibm-s390-linux/s390-tools", rev = "4942504a9a2977d49989a5e5b7c1c8e07dc0fa41", package = "s390_pv_core" }

View File

@@ -401,11 +401,10 @@ impl Handle {
}
if let RouteAttribute::Oif(index) = attribute {
route.device = self
.find_link(LinkFilter::Index(*index))
.await
.context(format!("error looking up device {index}"))?
.name();
route.device = match self.find_link(LinkFilter::Index(*index)).await {
Ok(link) => link.name(),
Err(_) => String::new(),
};
}
}
@@ -1005,10 +1004,6 @@ mod tests {
.expect("Failed to list routes");
assert_ne!(all.len(), 0);
for r in &all {
assert_ne!(r.device.len(), 0);
}
}
#[tokio::test]

View File

@@ -85,11 +85,6 @@ impl ConfigPlugin for CloudHypervisorConfig {
if ch.memory_info.memory_slots == 0 {
ch.memory_info.memory_slots = default::DEFAULT_CH_MEMORY_SLOTS;
}
// Apply factory defaults
if ch.factory.template_path.is_empty() {
ch.factory.template_path = default::DEFAULT_TEMPLATE_PATH.to_string();
}
}
Ok(())

View File

@@ -79,11 +79,6 @@ impl ConfigPlugin for DragonballConfig {
if db.memory_info.memory_slots == 0 {
db.memory_info.memory_slots = default::DEFAULT_DRAGONBALL_MEMORY_SLOTS;
}
// Apply factory defaults
if db.factory.template_path.is_empty() {
db.factory.template_path = default::DEFAULT_TEMPLATE_PATH.to_string();
}
}
Ok(())
}

View File

@@ -69,11 +69,6 @@ impl ConfigPlugin for FirecrackerConfig {
firecracker.memory_info.default_memory =
default::DEFAULT_FIRECRACKER_MEMORY_SIZE_MB;
}
// Apply factory defaults
if firecracker.factory.template_path.is_empty() {
firecracker.factory.template_path = default::DEFAULT_TEMPLATE_PATH.to_string();
}
}
Ok(())

View File

@@ -92,7 +92,6 @@ impl ConfigPlugin for QemuConfig {
qemu.memory_info.memory_slots = default::DEFAULT_QEMU_MEMORY_SLOTS;
}
// Apply factory defaults
if qemu.factory.template_path.is_empty() {
qemu.factory.template_path = default::DEFAULT_TEMPLATE_PATH.to_string();
}

View File

@@ -263,20 +263,6 @@ tx_rate_limiter_max_rate = 0
# disable applying SELinux on the VMM process (default false)
disable_selinux = @DEFDISABLESELINUX@
[factory]
# VM templating support. Once enabled, new VMs are created from template
# using vm cloning. They will share the same initial kernel, initramfs and
# agent memory by mapping it readonly. It helps speeding up new container
# creation and saves a lot of memory if there are many kata containers running
# on the same host.
#
# When disabled, new VMs are created from scratch.
#
# Note: Requires "initrd=" to be set ("image=" is not supported).
#
# Default false
enable_template = false
[agent.@PROJECT_TYPE@]
# If enabled, make the agent display debug-level messages.
# (default: disabled)

View File

@@ -2689,8 +2689,13 @@ type SMP struct {
Sockets uint32
// MaxCPUs is the maximum number of VCPUs that a VM can have.
// This value, if non-zero, MUST BE equal to or greater than CPUs
// This value, if non-zero, MUST BE equal to or greater than CPUs,
// and must be equal to Sockets * Cores * Threads if all are non-zero.
MaxCPUs uint32
// NumNUMA is the number of NUMA nodes that the VM has.
// The value MUST NOT be greater than Sockets.
NumNUMA uint32
}
// Memory is the guest memory configuration structure.
@@ -2711,6 +2716,26 @@ type Memory struct {
// Path is the file path of the memory device. It points to a local
// file path used by FileBackedMem.
Path string
// MemoryModules describes memory topology and allocation policy.
MemoryModules []MemoryModule
}
// MemoryModule represents single module of guest memory.
type MemoryModule struct {
// Size of memory module.
// It should be suffixed with M or G for sizes in megabytes or
// gigabytes respectively.
Size string
// NodeId is the guest NUMA node this module belongs to.
NodeId uint32
// HostNodes defines host NUMA nodes mask for binding memory allocation.
HostNodes string
// MemoryPolicy defines host NUMA memory allocation policy.
MemoryPolicy string
}
// Kernel is the guest kernel configuration structure.
@@ -3096,11 +3121,25 @@ func (config *Config) appendCPUs() error {
return fmt.Errorf("MaxCPUs %d must be equal to or greater than CPUs %d",
config.SMP.MaxCPUs, config.SMP.CPUs)
}
topologyCPUs := config.SMP.Sockets * config.SMP.Cores * config.SMP.Threads
if topologyCPUs != 0 && config.SMP.MaxCPUs != topologyCPUs {
return fmt.Errorf("MaxCPUs %d must match CPU topology: sockets %d * cores %d * threads %d",
config.SMP.MaxCPUs, config.SMP.Sockets, config.SMP.Cores, config.SMP.Threads)
}
SMPParams = append(SMPParams, fmt.Sprintf("maxcpus=%d", config.SMP.MaxCPUs))
}
config.qemuParams = append(config.qemuParams, "-smp")
config.qemuParams = append(config.qemuParams, strings.Join(SMPParams, ","))
if config.SMP.NumNUMA > 1 {
// Interleave CPU sockets over NUMA nodes.
for socketId := uint32(0); socketId < config.SMP.Sockets; socketId++ {
nodeId := socketId % config.SMP.NumNUMA
config.qemuParams = append(config.qemuParams, "-numa",
fmt.Sprintf("cpu,node-id=%d,socket-id=%d", nodeId, socketId))
}
}
}
return nil
@@ -3169,34 +3208,49 @@ func (config *Config) appendMemoryKnobs() {
if config.Memory.Size == "" {
return
}
var objMemParam, numaMemParam string
dimmName := "dimm1"
if len(config.Memory.MemoryModules) == 0 {
config.appendMemoryModule("dimm1", MemoryModule{Size: config.Memory.Size})
}
for i, memModule := range config.Memory.MemoryModules {
config.appendMemoryModule(fmt.Sprintf("dimm%d", i), memModule)
}
}
func (config *Config) appendMemoryModule(memoryId string, memoryModule MemoryModule) {
var objMemParams []string
if config.Knobs.HugePages {
objMemParam = "memory-backend-file,id=" + dimmName + ",size=" + config.Memory.Size + ",mem-path=/dev/hugepages"
numaMemParam = "node,memdev=" + dimmName
objMemParams = append(objMemParams, "memory-backend-file", "mem-path=/dev/hugepages")
} else if config.Knobs.FileBackedMem && config.Memory.Path != "" {
objMemParam = "memory-backend-file,id=" + dimmName + ",size=" + config.Memory.Size + ",mem-path=" + config.Memory.Path
numaMemParam = "node,memdev=" + dimmName
objMemParams = append(objMemParams, "memory-backend-file", "mem-path="+config.Memory.Path)
} else {
objMemParam = "memory-backend-ram,id=" + dimmName + ",size=" + config.Memory.Size
numaMemParam = "node,memdev=" + dimmName
objMemParams = append(objMemParams, "memory-backend-ram")
}
objMemParams = append(objMemParams, "id="+memoryId, "size="+memoryModule.Size)
if memoryModule.MemoryPolicy != "" {
objMemParams = append(objMemParams, "policy="+memoryModule.MemoryPolicy)
}
if memoryModule.HostNodes != "" {
objMemParams = append(objMemParams, "host-nodes="+memoryModule.HostNodes)
}
if config.Knobs.MemShared {
objMemParam += ",share=on"
objMemParams = append(objMemParams, "share=on")
}
if config.Knobs.MemPrealloc {
objMemParam += ",prealloc=on"
objMemParams = append(objMemParams, "prealloc=on")
}
config.qemuParams = append(config.qemuParams, "-object")
config.qemuParams = append(config.qemuParams, objMemParam)
config.qemuParams = append(config.qemuParams, "-object", strings.Join(objMemParams, ","))
if isDimmSupported(config) {
config.qemuParams = append(config.qemuParams, "-numa")
config.qemuParams = append(config.qemuParams, numaMemParam)
config.qemuParams = append(config.qemuParams, "-numa",
fmt.Sprintf("node,nodeid=%d,memdev=%s", memoryModule.NodeId, memoryId))
} else {
config.qemuParams = append(config.qemuParams, "-machine")
config.qemuParams = append(config.qemuParams, "memory-backend="+dimmName)
config.qemuParams = append(config.qemuParams, "-machine", "memory-backend="+memoryId)
}
}

View File

@@ -516,8 +516,8 @@ func TestAppendMemoryHugePages(t *testing.T) {
FileBackedMem: true,
MemShared: true,
}
objMemString := "-object memory-backend-file,id=dimm1,size=1G,mem-path=/dev/hugepages,share=on,prealloc=on"
numaMemString := "-numa node,memdev=dimm1"
objMemString := "-object memory-backend-file,mem-path=/dev/hugepages,id=dimm1,size=1G,share=on,prealloc=on"
numaMemString := "-numa node,nodeid=0,memdev=dimm1"
memBackendString := "-machine memory-backend=dimm1"
knobsString := objMemString + " "
@@ -547,7 +547,7 @@ func TestAppendMemoryMemPrealloc(t *testing.T) {
MemShared: true,
}
objMemString := "-object memory-backend-ram,id=dimm1,size=1G,share=on,prealloc=on"
numaMemString := "-numa node,memdev=dimm1"
numaMemString := "-numa node,nodeid=0,memdev=dimm1"
memBackendString := "-machine memory-backend=dimm1"
knobsString := objMemString + " "
@@ -576,8 +576,8 @@ func TestAppendMemoryMemShared(t *testing.T) {
FileBackedMem: true,
MemShared: true,
}
objMemString := "-object memory-backend-file,id=dimm1,size=1G,mem-path=foobar,share=on"
numaMemString := "-numa node,memdev=dimm1"
objMemString := "-object memory-backend-file,mem-path=foobar,id=dimm1,size=1G,share=on"
numaMemString := "-numa node,nodeid=0,memdev=dimm1"
memBackendString := "-machine memory-backend=dimm1"
knobsString := objMemString + " "
@@ -606,8 +606,8 @@ func TestAppendMemoryFileBackedMem(t *testing.T) {
FileBackedMem: true,
MemShared: false,
}
objMemString := "-object memory-backend-file,id=dimm1,size=1G,mem-path=foobar"
numaMemString := "-numa node,memdev=dimm1"
objMemString := "-object memory-backend-file,mem-path=foobar,id=dimm1,size=1G"
numaMemString := "-numa node,nodeid=0,memdev=dimm1"
memBackendString := "-machine memory-backend=dimm1"
knobsString := objMemString + " "
@@ -637,8 +637,8 @@ func TestAppendMemoryFileBackedMemPrealloc(t *testing.T) {
MemShared: true,
MemPrealloc: true,
}
objMemString := "-object memory-backend-file,id=dimm1,size=1G,mem-path=foobar,share=on,prealloc=on"
numaMemString := "-numa node,memdev=dimm1"
objMemString := "-object memory-backend-file,mem-path=foobar,id=dimm1,size=1G,share=on,prealloc=on"
numaMemString := "-numa node,nodeid=0,memdev=dimm1"
memBackendString := "-machine memory-backend=dimm1"
knobsString := objMemString + " "
@@ -687,7 +687,7 @@ func TestAppendMemory(t *testing.T) {
testAppend(memory, memoryString, t)
}
var cpusString = "-smp 2,cores=1,threads=2,sockets=2,maxcpus=6"
var cpusString = "-smp 2,cores=1,threads=2,sockets=2,maxcpus=4"
func TestAppendCPUs(t *testing.T) {
smp := SMP{
@@ -695,7 +695,7 @@ func TestAppendCPUs(t *testing.T) {
Sockets: 2,
Cores: 1,
Threads: 2,
MaxCPUs: 6,
MaxCPUs: 4,
}
testAppend(smp, cpusString, t)
@@ -717,6 +717,22 @@ func TestFailToAppendCPUs(t *testing.T) {
}
}
func TestFailToAppendCPUsWrongTopology(t *testing.T) {
config := Config{
SMP: SMP{
CPUs: 2,
Sockets: 2,
Cores: 1,
Threads: 2,
MaxCPUs: 6,
},
}
if err := config.appendCPUs(); err == nil {
t.Fatalf("Expected appendCPUs to fail")
}
}
var qmpSingleSocketServerString = "-qmp unix:path=cc-qmp,server=on,wait=off"
var qmpSingleSocketString = "-qmp unix:path=cc-qmp"

View File

@@ -25,6 +25,7 @@ import (
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers"
vc "github.com/kata-containers/kata-containers/src/runtime/virtcontainers"
exp "github.com/kata-containers/kata-containers/src/runtime/virtcontainers/experimental"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/types"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/utils"
"github.com/pbnjay/memory"
"github.com/sirupsen/logrus"
@@ -154,6 +155,8 @@ type hypervisor struct {
VirtioMem bool `toml:"enable_virtio_mem"`
IOMMU bool `toml:"enable_iommu"`
IOMMUPlatform bool `toml:"enable_iommu_platform"`
NUMA bool `toml:"enable_numa"`
NUMAMapping []string `toml:"numa_mapping"`
Debug bool `toml:"enable_debug"`
DisableNestingChecks bool `toml:"disable_nesting_checks"`
EnableIOThreads bool `toml:"enable_iothreads"`
@@ -719,6 +722,18 @@ func (h hypervisor) getIOMMUPlatform() bool {
return h.IOMMUPlatform
}
func (h hypervisor) defaultNUMANodes() []types.NUMANode {
if !h.NUMA {
return nil
}
numaNodes, err := utils.GetNUMANodes(h.NUMAMapping)
if err != nil {
kataUtilsLogger.WithError(err).Warn("Cannot construct NUMA nodes.")
return nil
}
return numaNodes
}
func (h hypervisor) getRemoteHypervisorSocket() string {
if h.RemoteHypervisorSocket == "" {
return defaultRemoteHypervisorSocket
@@ -974,6 +989,8 @@ func newQemuHypervisorConfig(h hypervisor) (vc.HypervisorConfig, error) {
HugePages: h.HugePages,
IOMMU: h.IOMMU,
IOMMUPlatform: h.getIOMMUPlatform(),
NUMA: h.NUMA,
NUMANodes: h.defaultNUMANodes(),
FileBackedMemRootDir: h.FileBackedMemRootDir,
FileBackedMemRootList: h.FileBackedMemRootList,
Debug: h.Debug,
@@ -1868,6 +1885,20 @@ func checkConfig(config oci.RuntimeConfig) error {
return err
}
if err := checkNumaConfig(config); err != nil {
return err
}
return nil
}
// checkNumaConfig ensures that static resource management is enabled when
// NUMA is: NUMA does not support hot-plug of memory or CPU, though VFIO
// devices can still be hot-plugged.
func checkNumaConfig(config oci.RuntimeConfig) error {
if !config.StaticSandboxResourceMgmt && config.HypervisorConfig.NUMA {
return errors.New("NUMA is enabled but static sandbox resource management is disabled; NUMA does not support hot-plugging CPUs or memory (VFIO hot-plug still works), set static_sandbox_resource_mgmt=true")
}
return nil
}
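
The check above couples two settings; as an illustrative configuration fragment (section names assumed, keys as introduced in this diff):

```toml
[hypervisor.qemu]
# Enable a VM NUMA topology (this diff's enable_numa option).
enable_numa = true
# One VM NUMA node per entry; each entry is a set of host NUMA nodes.
numa_mapping = ["0", "1"]

[runtime]
# Required when NUMA is enabled: CPUs and memory cannot be hot-plugged.
static_sandbox_resource_mgmt = true
```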

View File

@@ -705,6 +705,16 @@ func addHypervisorMemoryOverrides(ocispec specs.Spec, sbConfig *vc.SandboxConfig
return err
}
if err := newAnnotationConfiguration(ocispec, vcAnnotations.DefaultMaxMemory).setUintWithCheck(func(memorySz uint64) error {
if memorySz < vc.MinHypervisorMemory && sbConfig.HypervisorType != vc.RemoteHypervisor {
return fmt.Errorf("Memory specified in annotation %s is less than minimum required %d, please specify a larger value", vcAnnotations.DefaultMaxMemory, vc.MinHypervisorMemory)
}
sbConfig.HypervisorConfig.DefaultMaxMemorySize = memorySz
return nil
}); err != nil {
return err
}
if err := newAnnotationConfiguration(ocispec, vcAnnotations.MemSlots).setUint(func(mslots uint64) {
if mslots > 0 {
sbConfig.HypervisorConfig.MemSlots = uint32(mslots)
@@ -776,6 +786,14 @@ func addHypervisorMemoryOverrides(ocispec specs.Spec, sbConfig *vc.SandboxConfig
return err
}
if annotation, ok := ocispec.Annotations[vcAnnotations.NUMAMapping]; ok {
numaNodes, err := vcutils.GetNUMANodes(strings.Fields(annotation))
if err != nil {
return err
}
sbConfig.HypervisorConfig.NUMANodes = numaNodes
}
return nil
}

View File

@@ -641,6 +641,12 @@ type HypervisorConfig struct {
// IOMMUPlatform is used to indicate if IOMMU_PLATFORM is enabled for supported devices
IOMMUPlatform bool
// NUMA indicates whether NUMA is enabled for the VM
NUMA bool
// NUMANodes defines VM NUMA topology and mapping to host NUMA nodes and CPUs.
NUMANodes []types.NUMANode
// DisableNestingChecks is used to override customizations performed
// when running on top of another VMM.
DisableNestingChecks bool
@@ -720,7 +726,8 @@ type HypervisorConfig struct {
// vcpu mapping from vcpu number to thread number
type VcpuThreadIDs struct {
vcpus map[int]int
vcpus map[int]int
vcpuToNodeId map[int]uint32
}
func (conf *HypervisorConfig) CheckTemplateConfig() error {
@@ -916,6 +923,10 @@ func (conf HypervisorConfig) NumVCPUs() uint32 {
return RoundUpNumVCPUs(conf.NumVCPUsF)
}
func (conf HypervisorConfig) NumNUMA() uint32 {
return uint32(len(conf.NUMANodes))
}
func appendParam(params []Param, parameter string, value string) []Param {
return append(params, Param{parameter, value})
}

View File

@@ -44,6 +44,10 @@ func validateHypervisorConfig(conf *HypervisorConfig) error {
conf.MemorySize = defaultMemSzMiB
}
if conf.DefaultMaxMemorySize != 0 && uint64(conf.MemorySize) > conf.DefaultMaxMemorySize {
conf.MemorySize = uint32(conf.DefaultMaxMemorySize)
}
if conf.DefaultBridges == 0 {
conf.DefaultBridges = defaultBridges
}
@@ -58,6 +62,10 @@ func validateHypervisorConfig(conf *HypervisorConfig) error {
conf.DefaultMaxVCPUs = defaultMaxVCPUs
}
if numNUMA := conf.NumNUMA(); numNUMA > 1 {
conf.DefaultMaxVCPUs -= conf.DefaultMaxVCPUs % numNUMA
}
if conf.Msize9p == 0 && conf.SharedFS != config.VirtioFS {
conf.Msize9p = defaultMsize9p
}
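
The adjustment above rounds `DefaultMaxVCPUs` down to a multiple of the NUMA node count so each node gets an equal share; a small sketch of the arithmetic:

```go
package main

import "fmt"

// roundDownToMultiple mimics conf.DefaultMaxVCPUs -= conf.DefaultMaxVCPUs % numNUMA.
func roundDownToMultiple(maxVCPUs, numNUMA uint32) uint32 {
	if numNUMA > 1 {
		maxVCPUs -= maxVCPUs % numNUMA
	}
	return maxVCPUs
}

func main() {
	fmt.Println(roundDownToMultiple(10, 4)) // 10 % 4 == 2, so rounds down to 8
	fmt.Println(roundDownToMultiple(12, 3)) // already a multiple: 12
}
```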

View File

@@ -113,7 +113,7 @@ func (m *mockHypervisor) Disconnect(ctx context.Context) {
func (m *mockHypervisor) GetThreadIDs(ctx context.Context) (VcpuThreadIDs, error) {
vcpus := map[int]int{0: os.Getpid()}
return VcpuThreadIDs{vcpus}, nil
return VcpuThreadIDs{vcpus, nil}, nil
}
func (m *mockHypervisor) Cleanup(ctx context.Context) error {

View File

@@ -155,6 +155,9 @@ const (
// DefaultMemory is a sandbox annotation for the memory assigned for a VM by the hypervisor.
DefaultMemory = kataAnnotHypervisorPrefix + "default_memory"
// DefaultMaxMemory is a sandbox annotation for the maximum memory assigned to a VM by the hypervisor.
DefaultMaxMemory = kataAnnotHypervisorPrefix + "default_maxmemory"
// MemSlots is a sandbox annotation to specify the memory slots assigned to the VM by the hypervisor.
MemSlots = kataAnnotHypervisorPrefix + "memory_slots"
@@ -182,6 +185,9 @@ const (
// FileBackedMemRootDir is a sandbox annotation to specify the file-based memory backend root directory
FileBackedMemRootDir = kataAnnotHypervisorPrefix + "file_mem_backend"
// NUMAMapping is a sandbox annotation that specifies mapping VM NUMA nodes to host NUMA nodes.
NUMAMapping = kataAnnotHypervisorPrefix + "numa_mapping"
//
// Shared File System related annotations
//

View File

@@ -2612,6 +2612,36 @@ func genericMemoryTopology(memoryMb, hostMemoryMb uint64, slots uint8, memoryOff
return memory
}
func genericNUMAMemoryModules(memoryMb, memoryAlign uint64, numaNodes []types.NUMANode) []govmmQemu.MemoryModule {
if len(numaNodes) == 0 {
return nil
}
memoryModules := make([]govmmQemu.MemoryModule, 0, len(numaNodes))
// Divide memory among NUMA nodes.
memoryPerNode := memoryMb / uint64(len(numaNodes))
memoryPerNode -= memoryPerNode % memoryAlign
// The first NUMA node gets more if memory does not divide evenly.
moduleSize := memoryMb - memoryPerNode*uint64(len(numaNodes)-1)
for nodeId, numaNode := range numaNodes {
memoryModules = append(memoryModules, govmmQemu.MemoryModule{
Size: fmt.Sprintf("%dM", moduleSize),
NodeId: uint32(nodeId),
HostNodes: numaNode.HostNodes,
MemoryPolicy: "interleave",
})
moduleSize = memoryPerNode
if moduleSize == 0 {
break
}
}
return memoryModules
}
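
A standalone sketch of the split performed by `genericNUMAMemoryModules` (sizes in MiB; alignment value assumed):

```go
package main

import "fmt"

// splitNUMAMemory divides memoryMb across numNodes modules, aligning the
// per-node share down to memoryAlign; the first node absorbs the remainder.
func splitNUMAMemory(memoryMb, memoryAlign uint64, numNodes int) []uint64 {
	if numNodes == 0 {
		return nil
	}
	perNode := memoryMb / uint64(numNodes)
	perNode -= perNode % memoryAlign
	sizes := make([]uint64, 0, numNodes)
	moduleSize := memoryMb - perNode*uint64(numNodes-1)
	for i := 0; i < numNodes; i++ {
		sizes = append(sizes, moduleSize)
		moduleSize = perNode
		if moduleSize == 0 {
			break
		}
	}
	return sizes
}

func main() {
	fmt.Println(splitNUMAMemory(2049, 4, 2)) // [1025 1024]: remainder goes to node 0
}
```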
// genericAppendPCIeRootPort appends to devices the given pcie-root-port
func genericAppendPCIeRootPort(devices []govmmQemu.Device, number uint32, machineType string, memSize32bit uint64, memSize64bit uint64) []govmmQemu.Device {
var (
@@ -2736,9 +2766,11 @@ func (q *qemu) GetThreadIDs(ctx context.Context) (VcpuThreadIDs, error) {
}
tid.vcpus = make(map[int]int, len(cpuInfos))
tid.vcpuToNodeId = make(map[int]uint32, len(cpuInfos))
for _, i := range cpuInfos {
if i.ThreadID > 0 {
tid.vcpus[i.CPUIndex] = i.ThreadID
tid.vcpuToNodeId[i.CPUIndex] = uint32(i.Props.Node)
}
}
return tid, nil

View File

@@ -119,6 +119,7 @@ func newQemuArch(config HypervisorConfig) (qemuArch, error) {
qemuArchBase: qemuArchBase{
qemuMachine: *mp,
qemuExePath: defaultQemuPath,
numaNodes: config.NUMANodes,
memoryOffset: config.MemOffset,
kernelParamsNonDebug: kernelParamsNonDebug,
kernelParamsDebug: kernelParamsDebug,
@@ -201,7 +202,9 @@ func (q *qemuAmd64) cpuModel() string {
}
func (q *qemuAmd64) memoryTopology(memoryMb, hostMemoryMb uint64, slots uint8) govmmQemu.Memory {
return genericMemoryTopology(memoryMb, hostMemoryMb, slots, q.memoryOffset)
memory := genericMemoryTopology(memoryMb, hostMemoryMb, slots, q.memoryOffset)
memory.MemoryModules = genericNUMAMemoryModules(memoryMb, 4, q.numaNodes)
return memory
}
// Is Memory Hotplug supported by this architecture/machine type combination?

View File

@@ -192,6 +192,7 @@ type qemuArchBase struct {
kernelParamsDebug []Param
kernelParams []Param
Bridges []types.Bridge
numaNodes []types.NUMANode
memoryOffset uint64
networkIndex int
// Exclude from lint checking for it is ultimately only used in architecture-specific code
@@ -330,12 +331,20 @@ func (q *qemuArchBase) bridges(number uint32) {
}
func (q *qemuArchBase) cpuTopology(vcpus, maxvcpus uint32) govmmQemu.SMP {
numNUMA := uint32(len(q.numaNodes))
numSockets := numNUMA
if numSockets == 0 {
numSockets = maxvcpus
}
smp := govmmQemu.SMP{
CPUs: vcpus,
Sockets: maxvcpus,
Cores: defaultCores,
Sockets: numSockets,
Cores: maxvcpus / numSockets / defaultThreads,
Threads: defaultThreads,
MaxCPUs: maxvcpus,
NumNUMA: numNUMA,
}
return smp
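
The topology above turns each VM NUMA node into one socket; a sketch of the resulting SMP numbers (a thread count of 1 is assumed for `defaultThreads`):

```go
package main

import "fmt"

// smpForNUMA mirrors cpuTopology: one socket per NUMA node, cores derived
// from the vCPU ceiling. With no NUMA nodes, one socket per vCPU.
func smpForNUMA(maxVCPUs, numNUMA, threads uint32) (sockets, cores uint32) {
	sockets = numNUMA
	if sockets == 0 {
		sockets = maxVCPUs
	}
	cores = maxVCPUs / sockets / threads
	return sockets, cores
}

func main() {
	s, c := smpForNUMA(8, 2, 1)
	fmt.Println(s, c) // 2 sockets, 4 cores each
}
```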

View File

@@ -220,5 +220,7 @@ func (q *qemuArm64) appendProtectionDevice(devices []govmmQemu.Device, firmware,
}
func (q *qemuArm64) memoryTopology(memoryMb, hostMemoryMb uint64, slots uint8) govmmQemu.Memory {
return genericMemoryTopology(memoryMb, hostMemoryMb, slots, q.memoryOffset)
memory := genericMemoryTopology(memoryMb, hostMemoryMb, slots, q.memoryOffset)
memory.MemoryModules = genericNUMAMemoryModules(memoryMb, 4, q.numaNodes)
return memory
}

View File

@@ -2791,11 +2791,12 @@ func (s *Sandbox) fetchContainers(ctx context.Context) error {
// is set to true. Then it fetches sandbox's number of vCPU threads
// and number of CPUs in CPUSet. If the two are equal, each vCPU thread
// is then pinned to one fixed CPU in CPUSet.
// To enforce NUMA topology, vCPU threads are pinned to their node's host CPUs.
func (s *Sandbox) checkVCPUsPinning(ctx context.Context) error {
if s.config == nil {
return fmt.Errorf("no sandbox config found")
}
if !s.config.EnableVCPUsPinning {
if !s.config.EnableVCPUsPinning && s.config.HypervisorConfig.NumNUMA() == 0 {
return nil
}
@@ -2814,23 +2815,67 @@ func (s *Sandbox) checkVCPUsPinning(ctx context.Context) error {
}
cpuSetSlice := cpuSet.ToSlice()
// check if vCPU thread numbers and CPU numbers are equal
numVCPUs, numCPUs := len(vCPUThreadsMap.vcpus), len(cpuSetSlice)
// if not equal, we should reset threads scheduling to random pattern
if numVCPUs != numCPUs {
if s.isVCPUsPinningOn {
s.isVCPUsPinningOn = false
return s.resetVCPUsPinning(ctx, vCPUThreadsMap, cpuSetSlice)
// build NUMA topology mapping, or fake single node if NUMA is not enabled.
numNodes := max(s.config.HypervisorConfig.NumNUMA(), 1)
numaNodeVCPUs := make([][]int, numNodes)
for vcpuId := range vCPUThreadsMap.vcpus {
nodeId, ok := vCPUThreadsMap.vcpuToNodeId[vcpuId]
if !ok || nodeId >= numNodes {
nodeId = 0
}
return nil
numaNodeVCPUs[nodeId] = append(numaNodeVCPUs[nodeId], vcpuId)
}
// if equal, we can use vCPU thread pinning
for i, tid := range vCPUThreadsMap.vcpus {
if err := resCtrl.SetThreadAffinity(tid, cpuSetSlice[i:i+1]); err != nil {
if err := s.resetVCPUsPinning(ctx, vCPUThreadsMap, cpuSetSlice); err != nil {
return err
numaNodeCPUs := make([][]int, numNodes)
numaNodeCPUs[0] = cpuSetSlice
for i, numaNode := range s.config.HypervisorConfig.NUMANodes {
nodeHostCPUs, err := cpuset.Parse(numaNode.HostCPUs)
if err != nil {
return fmt.Errorf("failed to parse NUMA CPUSet string: %v", err)
}
if !cpuSet.IsEmpty() {
nodeHostCPUs = cpuSet.Intersection(nodeHostCPUs)
}
numaNodeCPUs[i] = nodeHostCPUs.ToSlice()
}
// check if vCPU threads have enough host CPUs in each NUMA node
// if not enough, we should reset thread affinity.
for nodeId := range numaNodeVCPUs {
numVCPUs := len(numaNodeVCPUs[nodeId])
numCPUs := len(numaNodeCPUs[nodeId])
// Not enough host CPUs for the number of vCPUs in this NUMA node.
// Two cases trigger a reset:
// 1) Pinning is enabled in config but the counts differ.
// 2) Regardless of config, vCPUs exceed available CPUs.
insufficientCPUs := numVCPUs > numCPUs
pinningMismatch := s.config.EnableVCPUsPinning && (numVCPUs != numCPUs)
if pinningMismatch || insufficientCPUs {
if s.isVCPUsPinningOn {
s.isVCPUsPinningOn = false
return s.resetVCPUsPinning(ctx, vCPUThreadsMap, cpuSetSlice)
}
virtLog.Warningf("cannot pin vcpus in vm numa node %d", nodeId)
return nil
}
}
for nodeId := range numaNodeVCPUs {
nodeCpuSetSlice := numaNodeCPUs[nodeId]
for i, vcpuId := range numaNodeVCPUs[nodeId] {
tid := vCPUThreadsMap.vcpus[vcpuId]
affinity := nodeCpuSetSlice
if s.config.EnableVCPUsPinning {
affinity = affinity[i : i+1]
}
if err := resCtrl.SetThreadAffinity(tid, affinity); err != nil {
if err := s.resetVCPUsPinning(ctx, vCPUThreadsMap, cpuSetSlice); err != nil {
return err
}
return fmt.Errorf("failed to set vcpu thread %d cpu affinity to %v: %v", tid, affinity, err)
}
return fmt.Errorf("failed to set vcpu thread %d affinity to cpu %d: %v", tid, cpuSetSlice[i], err)
}
}
s.isVCPUsPinningOn = true
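
A simplified sketch of the grouping step above: vCPU ids are bucketed by their VM NUMA node, and each thread's affinity is either the whole node CPU set or, when pinning is on, one fixed CPU from it (all values hypothetical):

```go
package main

import "fmt"

// bucketByNode groups vCPU ids by their VM NUMA node, mirroring the
// grouping step in checkVCPUsPinning; an unknown node falls back to 0.
func bucketByNode(numNodes int, vcpuToNode map[int]uint32, numVCPUs int) [][]int {
	buckets := make([][]int, numNodes)
	for vcpu := 0; vcpu < numVCPUs; vcpu++ {
		node := int(vcpuToNode[vcpu])
		if node >= numNodes {
			node = 0
		}
		buckets[node] = append(buckets[node], vcpu)
	}
	return buckets
}

func main() {
	// Hypothetical mapping reported by the hypervisor: vCPUs 0-1 on
	// node 0, vCPUs 2-3 on node 1; host CPUs per node are assumed.
	nodeVCPUs := bucketByNode(2, map[int]uint32{0: 0, 1: 0, 2: 1, 3: 1}, 4)
	nodeCPUs := [][]int{{0, 1}, {8, 9}}
	pinning := true
	for node, vcpus := range nodeVCPUs {
		for i, vcpu := range vcpus {
			affinity := nodeCPUs[node]
			if pinning {
				// One fixed CPU per vCPU thread when pinning is on.
				affinity = affinity[i : i+1]
			}
			fmt.Printf("vcpu %d -> cpus %v\n", vcpu, affinity)
		}
	}
}
```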

View File

@@ -342,3 +342,9 @@ type Resources struct {
Memory uint
MemorySlots uint8
}
// NUMANode defines VM NUMA node mapping to host NUMA nodes and CPUs.
type NUMANode struct {
HostNodes string
HostCPUs string
}

View File

@@ -19,6 +19,9 @@ import (
"github.com/sirupsen/logrus"
"golang.org/x/sys/unix"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/cpuset"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/types"
)
var ioctlFunc = Ioctl
@@ -197,3 +200,71 @@ func waitForProcessCompletion(pid int, timeoutSecs uint, logger *logrus.Entry) b
return pidRunning
}
func getHostNUMANodes() ([]int, error) {
data, err := os.ReadFile("/sys/devices/system/node/online")
if err != nil {
return nil, err
}
nodes, err := cpuset.Parse(strings.TrimSuffix(string(data), "\n"))
if err != nil {
return nil, err
}
return nodes.ToSlice(), nil
}
func getHostNUMANodeCPUs(nodeId int) (string, error) {
fileName := fmt.Sprintf("/sys/devices/system/node/node%v/cpulist", nodeId)
data, err := os.ReadFile(fileName)
if err != nil {
return "", err
}
return strings.TrimSuffix(string(data), "\n"), nil
}
// GetNUMANodes constructs VM NUMA nodes mapping to host NUMA nodes and host CPUs.
func GetNUMANodes(numaMapping []string) ([]types.NUMANode, error) {
// Add a VM NUMA node for each specified subset of host NUMA nodes.
if numNUMA := len(numaMapping); numNUMA > 0 {
numaNodes := make([]types.NUMANode, numNUMA)
for i, hostNodes := range numaMapping {
hostNodeIds, err := cpuset.Parse(hostNodes)
if err != nil {
return nil, err
}
numaNodes[i].HostNodes = hostNodes
for _, nodeId := range hostNodeIds.ToSlice() {
cpus, err := getHostNUMANodeCPUs(nodeId)
if err != nil {
return nil, err
}
if numaNodes[i].HostCPUs != "" {
numaNodes[i].HostCPUs += ","
}
numaNodes[i].HostCPUs += cpus
}
}
return numaNodes, nil
}
// Add VM NUMA node for each host NUMA node.
nodeIds, err := getHostNUMANodes()
if err != nil {
return nil, err
}
if len(nodeIds) == 0 {
return nil, nil
}
numaNodes := make([]types.NUMANode, len(nodeIds))
for i, nodeId := range nodeIds {
cpus, err := getHostNUMANodeCPUs(nodeId)
if err != nil {
return nil, err
}
numaNodes[i].HostNodes = fmt.Sprintf("%d", nodeId)
numaNodes[i].HostCPUs = cpus
}
return numaNodes, nil
}
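
The sysfs files consumed above use the kernel's cpulist format (e.g. "0-3,8"); a minimal parser sketch, standing in for the `cpuset.Parse` helper used in the diff:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseList expands a kernel cpulist string such as "0-2,4" into ids.
func parseList(s string) ([]int, error) {
	var ids []int
	for _, part := range strings.Split(strings.TrimSpace(s), ",") {
		if part == "" {
			continue
		}
		lo, hi, found := strings.Cut(part, "-")
		start, err := strconv.Atoi(lo)
		if err != nil {
			return nil, err
		}
		end := start
		if found {
			if end, err = strconv.Atoi(hi); err != nil {
				return nil, err
			}
		}
		for i := start; i <= end; i++ {
			ids = append(ids, i)
		}
	}
	return ids, nil
}

func main() {
	ids, _ := parseList("0-2,4")
	fmt.Println(ids) // [0 1 2 4]
}
```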

View File

@@ -22,7 +22,7 @@ export sleep_time=3
# Timeout for use with `kubectl wait`, unless it needs to wait longer.
# Note: try to keep timeout and wait_time equal.
export timeout=300s
export timeout=90s
# issues that can't test yet.
export fc_limitations="https://github.com/kata-containers/documentation/issues/351"

View File

@@ -79,6 +79,52 @@ function config_kata() {
}
function config_containerd() {
# store pure version number extracted from config
local version_num=""
# store the raw line containing "version = ..."
local version_line=""
# 1) Check if containerd command is available in PATH
if ! command -v containerd >/dev/null 2>&1; then
echo "[ERROR] containerd command not found"
return
fi
# 2) Dump containerd configuration and look for the "version = ..."
# We use awk to match lines starting with "version = X", allowing leading spaces
# The 'exit' ensures we stop at the first match
version_line=$(containerd config dump 2>/dev/null | \
awk '/^[[:space:]]*version[[:space:]]*=/ {print; exit}')
# 3) If no "version = X" line is found, return
if [ -z "$version_line" ]; then
echo "[ERROR] Cannot find version key in containerd config"
return
fi
# 4) Extract the numeric version from the matched line
# - Remove leading/trailing spaces around the value
# - Remove surrounding double quotes if any
version_num=$(echo "$version_line" | awk -F'=' '
{
gsub(/^[[:space:]]+|[[:space:]]+$/, "", $2) # trim spaces
gsub(/"/, "", $2) # remove double quotes
print $2
}')
# 5) Validate that the extracted value is strictly numeric
#    If not numeric, report the error and return
if ! echo "$version_num" | grep -Eq '^[0-9]+$'; then
echo "[ERROR] Invalid version format: \"$version_num\""
return
fi
# 6) Based on version number, run the appropriate configuration function
echo "[INFO] Running config for containerd version $version_num"
config_containerd_core
}
function config_containerd_core() {
readonly runc_path=$(command -v runc)
sudo mkdir -p /etc/containerd/
if [ -f "$containerd_config" ]; then
@@ -89,27 +135,26 @@ function config_containerd() {
fi
cat <<EOF | sudo tee $containerd_config
[debug]
level = "debug"
[proxy_plugins]
[proxy_plugins.nydus]
type = "snapshot"
address = "/run/containerd-nydus/containerd-nydus-grpc.sock"
[plugins]
[plugins.cri]
disable_hugetlb_controller = false
[plugins.cri.containerd]
snapshotter = "nydus"
disable_snapshot_annotations = false
[plugins.cri.containerd.runtimes]
[plugins.cri.containerd.runtimes.runc]
runtime_type = "io.containerd.runc.v2"
[plugins.cri.containerd.runtimes.runc.options]
BinaryName = "${runc_path}"
Root = ""
[plugins.cri.containerd.runtimes.kata-${KATA_HYPERVISOR}]
runtime_type = "io.containerd.kata-${KATA_HYPERVISOR}.v2"
privileged_without_host_devices = true
[plugins.'io.containerd.cri.v1.images']
snapshotter = 'nydus'
disable_snapshot_annotations = false
discard_unpacked_layers = false
[plugins.'io.containerd.cri.v1.runtime']
[plugins.'io.containerd.cri.v1.runtime'.containerd]
[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes]
[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.kata-${KATA_HYPERVISOR}]
runtime_type = "io.containerd.kata-${KATA_HYPERVISOR}.v2"
sandboxer = 'podsandbox'
[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc]
runtime_type = 'io.containerd.runc.v2'
sandboxer = 'podsandbox'
[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
BinaryName = "${runc_path}"
EOF
}

View File

@@ -208,6 +208,13 @@ check_rootfs() {
# check agent or systemd
case "${AGENT_INIT}" in
"no")
# Check if we have alternative init systems installed
# For now, check if we have NVRC
if readlink "${rootfs}/sbin/init" | grep -q "NVRC"; then
OK "init is NVRC"
return 0
fi
for systemd_path in $candidate_systemd_paths; do
systemd="${rootfs}${systemd_path}"
if [ -x "${systemd}" ] || [ -L "${systemd}" ]; then

View File

@@ -529,11 +529,13 @@ function adjust_qemu_cmdline() {
# ${dest_dir}/opt/kata/share/kata-qemu/qemu
# ${dest_dir}/opt/kata/share/kata-qemu-snp-experimental/qemu
# ${dest_dir}/opt/kata/share/kata-qemu-cca-experimental/qemu
[[ "${shim}" =~ ^(qemu-nvidia-gpu-snp|qemu-nvidia-gpu-tdx|qemu-cca)$ ]] && qemu_share=${shim}-experimental
[[ "${shim}" == "qemu-nvidia-gpu-snp" ]] && qemu_share=qemu-snp-experimental
[[ "${shim}" == "qemu-nvidia-gpu-tdx" ]] && qemu_share=qemu-tdx-experimental
[[ "${shim}" == "qemu-cca" ]] && qemu_share=qemu-cca-experimental
# Both qemu and qemu-coco-dev use exactly the same QEMU, so we can adjust
# the shim on the qemu-coco-dev case to qemu
[[ "${shim}" =~ ^(qemu|qemu-coco-dev)$ ]] && qemu_share="qemu"
[[ "${shim}" =~ ^(qemu|qemu-runtime-rs|qemu-snp|qemu-se|qemu-se-runtime-rs|qemu-coco-dev|qemu-coco-dev-runtime-rs|qemu-nvidia-gpu)$ ]] && qemu_share="qemu"
qemu_binary=$(tomlq '.hypervisor.qemu.path' ${config_path} | tr -d \")
qemu_binary_script="${qemu_binary}-installation-prefix"
@@ -852,7 +854,7 @@ function install_artifacts() {
sed -i -e "s|${default_dest_dir}|${dest_dir}|g" "${kata_config_file}"
# Let's only adjust qemu_cmdline for the QEMUs that we build and ship ourselves
[[ "${shim}" =~ ^(qemu|qemu-snp|qemu-nvidia-gpu|qemu-nvidia-gpu-snp|qemu-nvidia-gpu-tdx|qemu-se|qemu-coco-dev|qemu-cca)$ ]] && \
[[ "${shim}" =~ ^(qemu|qemu-runtime-rs|qemu-snp|qemu-nvidia-gpu|qemu-nvidia-gpu-snp|qemu-nvidia-gpu-tdx|qemu-se|qemu-se-runtime-rs|qemu-coco-dev|qemu-coco-dev-runtime-rs|qemu-cca)$ ]] && \
adjust_qemu_cmdline "${shim}" "${kata_config_file}"
fi
fi

View File

@@ -0,0 +1,2 @@
# NUMA settings
CONFIG_NUMA=y

View File

@@ -4,6 +4,8 @@ CONFIG_X86_INTEL_PSTATE=y
# Firecracker needs this to support `vcpu_count`
CONFIG_X86_MPPARSE=y
CONFIG_X86_64_ACPI_NUMA=y
CONFIG_ACPI_CPU_FREQ_PSS=y
CONFIG_ACPI_HOTPLUG_IOAPIC=y
CONFIG_ACPI_LEGACY_TABLES_LOOKUP=y

View File

@@ -0,0 +1,4 @@
# NUMA settings
CONFIG_NUMA=y
CONFIG_X86_64_ACPI_NUMA=y

View File

@@ -1 +1,2 @@
172
173

View File

@@ -123,13 +123,13 @@ check_tag() {
local tag="$1"
local entry="$2"
[ -z "$tag" ] && die "no tag for entry '$entry'"
[ -z "$entry" ] && die "no entry for tag '$tag'"
[[ -z "$tag" ]] && die "no tag for entry '$entry'"
[[ -z "$entry" ]] && die "no entry for tag '$tag'"
value="${recognised_tags[$tag]}"
# each tag MUST have a description
[ -n "$value" ] && return
[[ -n "$value" ]] && return
die "invalid tag '$tag' found for entry '$entry'"
}
@@ -138,8 +138,8 @@ check_tags() {
local tags="$1"
local entry="$2"
[ -z "$tags" ] && die "entry '$entry' doesn't have any tags"
[ -z "$entry" ] && die "no entry for tags '$tags'"
[[ -z "$tags" ]] && die "entry '$entry' doesn't have any tags"
[[ -z "$entry" ]] && die "no entry for tags '$tags'"
tags=$(echo "$tags" | tr ',' '\n')
@@ -173,22 +173,22 @@ show_array() {
local suffix
local one_line="no"
[ "$action" = "dump" ] && show_tags_header
[[ "$action" = "dump" ]] && show_tags_header
for entry in "${_array[@]}"; do
[ -z "$entry" ] && die "found empty entry"
[[ -z "$entry" ]] && die "found empty entry"
tags=$(echo "$entry" | cut -s -d: -f1)
elem=$(echo "$entry" | cut -s -d: -f2-)
[ -z "$elem" ] && die "no option for entry '$entry'"
[[ -z "$elem" ]] && die "no option for entry '$entry'"
check_tags "$tags" "$entry"
if [ "$action" = "dump" ]; then
if [[ "$action" = "dump" ]]; then
printf "%s\t\t%s\n" "$tags" "$elem"
elif [ "$action" = "multi" ]; then
if [ $i -eq $size ]; then
elif [[ "$action" = "multi" ]]; then
if [[ $i -eq $size ]]; then
suffix=""
else
suffix=' \'
@@ -203,14 +203,14 @@ show_array() {
i+=1
done
[ "$one_line" = yes ] && echo
[[ "$one_line" = yes ]] && echo
}
generate_qemu_options() {
#---------------------------------------------------------------------
#check if cross-compile is needed
host=$(uname -m)
if [ $arch != $host ];then
if [[ "$arch" != "$host" ]]; then
case $arch in
aarch64) qemu_options+=(size:--cross-prefix=aarch64-linux-gnu-);;
ppc64le) qemu_options+=(size:--cross-prefix=powerpc64le-linux-gnu-);;
@@ -279,7 +279,7 @@ generate_qemu_options() {
s390x) qemu_options+=(size:--disable-tcg) ;;
esac
if [ "${static}" == "true" ]; then
if [[ "${static}" == "true" ]]; then
qemu_options+=(misc:--static)
fi
@@ -416,7 +416,7 @@ generate_qemu_options() {
# Building static binaries for aarch64 requires disabling PIE
# We get a GOT overflow and the OS libraries are only built with fpic
# and not with fPIC, which enables unlimited-sized GOT tables.
if [ "${static}" == "true" ] && [ "${arch}" == "aarch64" ]; then
if [[ "${static}" == "true" ]] && [[ "${arch}" == "aarch64" ]]; then
qemu_options+=(arch:"--disable-pie")
fi
@@ -435,7 +435,10 @@ generate_qemu_options() {
qemu_options+=(size:--enable-linux-io-uring)
# Support Ceph RADOS Block Device (RBD)
[ -z "${static}" ] && qemu_options+=(functionality:--enable-rbd)
[[ -z "${static}" ]] && qemu_options+=(functionality:--enable-rbd)
# Support NUMA topology
qemu_options+=(functionality:--enable-numa)
# In "passthrough" security mode
# (-fsdev "...,security_model=passthrough,..."), qemu uses a helper
@@ -447,6 +450,9 @@ generate_qemu_options() {
qemu_options+=(functionality:--enable-cap-ng)
qemu_options+=(functionality:--enable-seccomp)
# Support NUMA topology
qemu_options+=(functionality:--enable-numa)
# AVX2 is enabled by default by x86_64, make sure it's enabled only
# for that architecture
if ! gt_eq "${qemu_version}" "10.1.0" ; then
@@ -475,7 +481,7 @@ generate_qemu_options() {
# Other options
# 64-bit only
if [ "${arch}" = "ppc64le" ]; then
if [[ "${arch}" = "ppc64le" ]]; then
qemu_options+=(arch:"--target-list=ppc64-softmmu")
else
qemu_options+=(arch:"--target-list=${arch}-softmmu")
@@ -484,7 +490,7 @@ generate_qemu_options() {
# SECURITY: Create binary as a Position Independant Executable,
# and take advantage of ASLR, making ROP attacks much harder to perform.
# (https://wiki.debian.org/Hardening)
[ -z "${static}" ] && qemu_options+=(arch:"--enable-pie")
[[ -z "${static}" ]] && qemu_options+=(arch:"--enable-pie")
_qemu_cflags=""
@@ -568,17 +574,17 @@ main() {
shift $((OPTIND - 1))
[ -z "$1" ] && die "need hypervisor name"
[[ -z "$1" ]] && die "need hypervisor name"
hypervisor="$1"
local qemu_version_file="VERSION"
[ -f ${qemu_version_file} ] || die "QEMU version file '$qemu_version_file' not found"
[[ -f ${qemu_version_file} ]] || die "QEMU version file '$qemu_version_file' not found"
# Remove any pre-release identifier so that it returns the version on
# major.minor.patch format (e.g 5.2.0-rc4 becomes 5.2.0)
qemu_version="$(awk 'BEGIN {FS = "-"} {print $1}' ${qemu_version_file})"
[ -n "${qemu_version}" ] ||
[[ -n "${qemu_version}" ]] ||
die "cannot determine qemu version from file $qemu_version_file"
if ! gt_eq "${qemu_version}" "6.1.0" ; then
@@ -586,7 +592,7 @@ main() {
fi
local gcc_version_major=$(gcc -dumpversion | cut -f1 -d.)
[ -n "${gcc_version_major}" ] ||
[[ -n "${gcc_version_major}" ]] ||
die "cannot determine gcc major version, please ensure it is installed"
# -dumpversion only returns the major version since GCC 7.0
if gt_eq "${gcc_version_major}" "7.0.0" ; then
@@ -594,7 +600,7 @@ main() {
else
local gcc_version_minor=$(gcc -dumpversion | cut -f2 -d.)
fi
[ -n "${gcc_version_minor}" ] ||
[[ -n "${gcc_version_minor}" ]] ||
die "cannot determine gcc minor version, please ensure it is installed"
local gcc_version="${gcc_version_major}.${gcc_version_minor}"

View File

@@ -50,6 +50,7 @@ RUN apt-get update && apt-get upgrade -y && \
libglib2.0-dev${DPKG_ARCH} git \
libltdl-dev${DPKG_ARCH} \
libmount-dev${DPKG_ARCH} \
libnuma-dev${DPKG_ARCH} \
libpixman-1-dev${DPKG_ARCH} \
libselinux1-dev${DPKG_ARCH} \
libtool${DPKG_ARCH} \

View File

@@ -8,30 +8,37 @@ set -o errexit
set -o nounset
set -o pipefail
CROSS_BUILD="${CROSS_BUILD:-false}"
script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly qemu_builder="${script_dir}/build-qemu.sh"
# shellcheck source=/dev/null
source "${script_dir}/../../scripts/lib.sh"
# shellcheck source=/dev/null
source "${script_dir}/../qemu.blacklist"
# Ensure repo_root_dir is available
repo_root_dir="${repo_root_dir:-$(git rev-parse --show-toplevel 2>/dev/null || echo "${script_dir}/../../../..")}"
ARCH=${ARCH:-$(uname -m)}
dpkg_arch=":${ARCH}"
[ ${dpkg_arch} == ":aarch64" ] && dpkg_arch=":arm64"
[ ${dpkg_arch} == ":x86_64" ] && dpkg_arch=""
[ "${dpkg_arch}" == ":ppc64le" ] && dpkg_arch=":ppc64el"
[[ "${dpkg_arch}" == ":aarch64" ]] && dpkg_arch=":arm64"
[[ "${dpkg_arch}" == ":x86_64" ]] && dpkg_arch=""
[[ "${dpkg_arch}" == ":ppc64le" ]] && dpkg_arch=":ppc64el"
packaging_dir="${script_dir}/../.."
qemu_destdir="/tmp/qemu-static/"
container_engine="${USE_PODMAN:+podman}"
container_engine="${container_engine:-docker}"
qemu_repo="${qemu_repo:-$1}"
qemu_version="${qemu_version:-$2}"
qemu_repo="${qemu_repo:-${1:-}}"
qemu_version="${qemu_version:-${2:-}}"
build_suffix="${3:-}"
qemu_tar="${4:-}"
[ -n "$qemu_repo" ] || die "qemu repo not provided"
[ -n "$qemu_version" ] || die "qemu version not provided"
[[ -n "${qemu_repo}" ]] || die "qemu repo not provided"
[[ -n "${qemu_version}" ]] || die "qemu version not provided"
info "Build ${qemu_repo} version: ${qemu_version}"
@@ -41,13 +48,13 @@ prefix="${prefix:-"/opt/kata"}"
CACHE_TIMEOUT=$(date +"%Y-%m-%d")
[ -n "${build_suffix}" ] && HYPERVISOR_NAME="kata-qemu-${build_suffix}" || HYPERVISOR_NAME="kata-qemu"
[ -n "${build_suffix}" ] && PKGVERSION="kata-static-${build_suffix}" || PKGVERSION="kata-static"
[[ -n "${build_suffix}" ]] && HYPERVISOR_NAME="kata-qemu-${build_suffix}" || HYPERVISOR_NAME="kata-qemu"
[[ -n "${build_suffix}" ]] && PKGVERSION="kata-static-${build_suffix}" || PKGVERSION="kata-static"
container_image="${QEMU_CONTAINER_BUILDER:-$(get_qemu_image_name)}"
[ "${CROSS_BUILD}" == "true" ] && container_image="${container_image}-cross-build"
[[ "${CROSS_BUILD}" == "true" ]] && container_image="${container_image}-cross-build"
${container_engine} pull ${container_image} || ("${container_engine}" build \
"${container_engine}" pull "${container_image}" || ("${container_engine}" build \
--build-arg CACHE_TIMEOUT="${CACHE_TIMEOUT}" \
--build-arg http_proxy="${http_proxy}" \
--build-arg https_proxy="${https_proxy}" \

View File

@@ -8,6 +8,15 @@ set -o errexit
set -o nounset
set -o pipefail
# Environment variables passed from container
QEMU_REPO="${QEMU_REPO:-}"
QEMU_VERSION_NUM="${QEMU_VERSION_NUM:-}"
HYPERVISOR_NAME="${HYPERVISOR_NAME:-}"
PKGVERSION="${PKGVERSION:-}"
PREFIX="${PREFIX:-}"
QEMU_DESTDIR="${QEMU_DESTDIR:-}"
QEMU_TARBALL="${QEMU_TARBALL:-}"
script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
kata_packaging_dir="${script_dir}/../.."
@@ -22,15 +31,15 @@ git clone --depth=1 "${QEMU_REPO}" qemu
pushd qemu
git fetch --depth=1 origin "${QEMU_VERSION_NUM}"
git checkout FETCH_HEAD
${kata_packaging_scripts}/patch_qemu.sh "${QEMU_VERSION_NUM}" "${kata_packaging_dir}/qemu/patches"
scripts/git-submodule.sh update meson capstone
if [ "$(uname -m)" != "${ARCH}" ] && [ "${ARCH}" == "s390x" ]; then
PREFIX="${PREFIX}" ${kata_packaging_scripts}/configure-hypervisor.sh -s "${HYPERVISOR_NAME}" "${ARCH}" | xargs ./configure --with-pkgversion="${PKGVERSION}" --cc=s390x-linux-gnu-gcc --cross-prefix=s390x-linux-gnu- --prefix="${PREFIX}" --target-list=s390x-softmmu
"${kata_packaging_scripts}/patch_qemu.sh" "${QEMU_VERSION_NUM}" "${kata_packaging_dir}/qemu/patches"
if [[ "$(uname -m)" != "${ARCH}" ]] && [[ "${ARCH}" == "s390x" ]]; then
PREFIX="${PREFIX}" "${kata_packaging_scripts}/configure-hypervisor.sh" -s "${HYPERVISOR_NAME}" "${ARCH}" | xargs ./configure --with-pkgversion="${PKGVERSION}" --cc=s390x-linux-gnu-gcc --cross-prefix=s390x-linux-gnu- --prefix="${PREFIX}" --target-list=s390x-softmmu
else
PREFIX="${PREFIX}" ${kata_packaging_scripts}/configure-hypervisor.sh -s "${HYPERVISOR_NAME}" "${ARCH}" | xargs ./configure --with-pkgversion="${PKGVERSION}"
PREFIX="${PREFIX}" "${kata_packaging_scripts}/configure-hypervisor.sh" -s "${HYPERVISOR_NAME}" "${ARCH}" | xargs ./configure --with-pkgversion="${PKGVERSION}"
fi
make -j"$(nproc +--ignore 1)"
make -j"$(nproc --ignore=1)"
make install DESTDIR="${QEMU_DESTDIR}"
popd
${kata_static_build_scripts}/qemu-build-post.sh
"${kata_static_build_scripts}/qemu-build-post.sh"
mv "${QEMU_DESTDIR}/${QEMU_TARBALL}" /share/

View File

@@ -85,7 +85,7 @@ mapping:
 - Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-aks / run-k8s-tests (ubuntu, qemu, small)
 - Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-zvsi / run-k8s-tests (devmapper, qemu, kubeadm)
 - Kata Containers CI / kata-containers-ci-on-push / run-k8s-tests-on-zvsi / run-k8s-tests (nydus, qemu-coco-dev, kubeadm)
-- Kata Containers CI / kata-containers-ci-on-push / run-kata-coco-tests / run-k8s-tests-on-tee (sev-snp, qemu-snp)
+# - Kata Containers CI / kata-containers-ci-on-push / run-kata-coco-tests / run-k8s-tests-on-tee (sev-snp, qemu-snp)
 - Kata Containers CI / kata-containers-ci-on-push / run-kata-coco-tests / run-k8s-tests-coco-nontee (qemu-coco-dev, nydus, guest-pull)
 - Kata Containers CI / kata-containers-ci-on-push / run-kata-deploy-tests / run-kata-deploy-tests (qemu, k0s)
 - Kata Containers CI / kata-containers-ci-on-push / run-kata-deploy-tests / run-kata-deploy-tests (qemu, k3s)
@@ -161,36 +161,35 @@ mapping:
 - Static checks / check-kernel-config-version
 - Static checks / static-checks (make static-checks)
 # static-checks-self-hosted.yaml
-- Static checks self-hosted / build-checks (s390x) / check (make check, agent-ctl, src/tools/agent-ctl, rust, protobuf-compiler, clang, s390x)
-- Static checks self-hosted / build-checks (s390x) / check (make check, agent, src/agent, rust, libdevmapper, libseccomp, protobuf-compiler, clang, s3...
-- Static checks self-hosted / build-checks (s390x) / check (make check, dragonball, src/dragonball, rust, s390x)
-- Static checks self-hosted / build-checks (s390x) / check (make check, genpolicy, src/tools/genpolicy, rust, protobuf-compiler, s390x)
-- Static checks self-hosted / build-checks (s390x) / check (make check, kata-ctl, src/tools/kata-ctl, rust, protobuf-compiler, s390x)
-- Static checks self-hosted / build-checks (s390x) / check (make check, runtime-rs, src/runtime-rs, rust, s390x)
-- Static checks self-hosted / build-checks (s390x) / check (make check, runtime, src/runtime, golang, XDG_RUNTIME_DIR, s390x)
-- Static checks self-hosted / build-checks (s390x) / check (make check, trace-forwarder, src/tools/trace-forwarder, rust, s390x)
-- Static checks self-hosted / build-checks (s390x) / check (make test, agent-ctl, src/tools/agent-ctl, rust, protobuf-compiler, clang, s390x)
-- Static checks self-hosted / build-checks (s390x) / check (make test, agent, src/agent, rust, libdevmapper, libseccomp, protobuf-compiler, clang, s390x)
-- Static checks self-hosted / build-checks (s390x) / check (make test, dragonball, src/dragonball, rust, s390x)
-- Static checks self-hosted / build-checks (s390x) / check (make test, genpolicy, src/tools/genpolicy, rust, protobuf-compiler, s390x)
-- Static checks self-hosted / build-checks (s390x) / check (make test, kata-ctl, src/tools/kata-ctl, rust, protobuf-compiler, s390x)
-- Static checks self-hosted / build-checks (s390x) / check (make test, runtime-rs, src/runtime-rs, rust, s390x)
-- Static checks self-hosted / build-checks (s390x) / check (make test, runtime, src/runtime, golang, XDG_RUNTIME_DIR, s390x)
-- Static checks self-hosted / build-checks (s390x) / check (make test, trace-forwarder, src/tools/trace-forwarder, rust, s390x)
-- Static checks self-hosted / build-checks (s390x) / check (make vendor, agent-ctl, src/tools/agent-ctl, rust, protobuf-compiler, clang, s390x)
-- Static checks self-hosted / build-checks (s390x) / check (make vendor, agent, src/agent, rust, libdevmapper, libseccomp, protobuf-compiler, clang, s...
-- Static checks self-hosted / build-checks (s390x) / check (make vendor, dragonball, src/dragonball, rust, s390x)
-- Static checks self-hosted / build-checks (s390x) / check (make vendor, genpolicy, src/tools/genpolicy, rust, protobuf-compiler, s390x)
-- Static checks self-hosted / build-checks (s390x) / check (make vendor, kata-ctl, src/tools/kata-ctl, rust, protobuf-compiler, s390x)
-- Static checks self-hosted / build-checks (s390x) / check (make vendor, runtime-rs, src/runtime-rs, rust, s390x)
-- Static checks self-hosted / build-checks (s390x) / check (make vendor, runtime, src/runtime, golang, XDG_RUNTIME_DIR, s390x)
-- Static checks self-hosted / build-checks (s390x) / check (make vendor, trace-forwarder, src/tools/trace-forwarder, rust, s390x)
-- Static checks self-hosted / build-checks (s390x) / check (sudo -E PATH="$PATH" make test, agent-ctl, src/tools/agent-ctl, rust, protobuf-compiler, c...
-- Static checks self-hosted / build-checks (s390x) / check (sudo -E PATH="$PATH" make test, dragonball, src/dragonball, rust, s390x)
-- Static checks self-hosted / build-checks (s390x) / check (sudo -E PATH="$PATH" make test, genpolicy, src/tools/genpolicy, rust, protobuf-compiler, s...
-- Static checks self-hosted / build-checks (s390x) / check (sudo -E PATH="$PATH" make test, kata-ctl, src/tools/kata-ctl, rust, protobuf-compiler, s390x)
-- Static checks self-hosted / build-checks (s390x) / check (sudo -E PATH="$PATH" make test, runtime-rs, src/runtime-rs, rust, s390x)
-- Static checks self-hosted / build-checks (s390x) / check (sudo -E PATH="$PATH" make test, runtime, src/runtime, golang, XDG_RUNTIME_DIR, s390x)
-- Static checks self-hosted / build-checks (s390x) / check (sudo -E PATH="$PATH" make test, trace-forwarder, src/tools/trace-forwarder, rust, s390x)
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (make check, agent, src/agent, rust, libdevmapper, libseccomp, protobuf-compiler, clang, ub...
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (make check, dragonball, src/dragonball, rust, ubuntu-24.04-s390x)
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (make check, genpolicy, src/tools/genpolicy, rust, protobuf-compiler, ubuntu-24.04-s390x)
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (make check, kata-ctl, src/tools/kata-ctl, rust, protobuf-compiler, ubuntu-24.04-s390x)
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (make check, runtime-rs, src/runtime-rs, rust, ubuntu-24.04-s390x)
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (make check, runtime, src/runtime, golang, XDG_RUNTIME_DIR, ubuntu-24.04-s390x)
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (make check, trace-forwarder, src/tools/trace-forwarder, rust, ubuntu-24.04-s390x)
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (make check, agent-ctl, src/tools/agent-ctl, rust, protobuf-compiler, clang, ubuntu-24.04-s...
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (make test, agent, src/agent, rust, libdevmapper, libseccomp, protobuf-compiler, clang, ubu...
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (make test, dragonball, src/dragonball, rust, ubuntu-24.04-s390x)
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (make test, genpolicy, src/tools/genpolicy, rust, protobuf-compiler, ubuntu-24.04-s390x)
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (make test, kata-ctl, src/tools/kata-ctl, rust, protobuf-compiler, ubuntu-24.04-s390x)
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (make test, runtime-rs, src/runtime-rs, rust, ubuntu-24.04-s390x)
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (make test, runtime, src/runtime, golang, XDG_RUNTIME_DIR, ubuntu-24.04-s390x)
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (make test, trace-forwarder, src/tools/trace-forwarder, rust, ubuntu-24.04-s390x)
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (make vendor, agent-ctl, src/tools/agent-ctl, rust, protobuf-compiler, clang, ubuntu-24.04-...
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (make vendor, agent, src/agent, rust, libdevmapper, libseccomp, protobuf-compiler, clang, u...
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (make vendor, dragonball, src/dragonball, rust, ubuntu-24.04-s390x)
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (make vendor, genpolicy, src/tools/genpolicy, rust, protobuf-compiler, ubuntu-24.04-s390x)
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (make vendor, kata-ctl, src/tools/kata-ctl, rust, protobuf-compiler, ubuntu-24.04-s390x)
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (make vendor, runtime-rs, src/runtime-rs, rust, ubuntu-24.04-s390x)
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (make vendor, runtime, src/runtime, golang, XDG_RUNTIME_DIR, ubuntu-24.04-s390x)
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (make vendor, trace-forwarder, src/tools/trace-forwarder, rust, ubuntu-24.04-s390x)
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (sudo -E PATH="$PATH" make test, agent-ctl, src/tools/agent-ctl, rust, protobuf-compiler, c...
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (sudo -E PATH="$PATH" make test, dragonball, src/dragonball, rust, ubuntu-24.04-s390x)
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (sudo -E PATH="$PATH" make test, genpolicy, src/tools/genpolicy, rust, protobuf-compiler, u...
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (sudo -E PATH="$PATH" make test, kata-ctl, src/tools/kata-ctl, rust, protobuf-compiler, ubu...
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (sudo -E PATH="$PATH" make test, runtime-rs, src/runtime-rs, rust, ubuntu-24.04-s390x)
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (sudo -E PATH="$PATH" make test, runtime, src/runtime, golang, XDG_RUNTIME_DIR, ubuntu-24.0...
+- Static checks self-hosted / build-checks (ubuntu-24.04-s390x) / check (sudo -E PATH="$PATH" make test, trace-forwarder, src/tools/trace-forwarder, rust, ubuntu-2...
 required-labels:
 - ok-to-test