Compare commits

..

335 Commits

Author SHA1 Message Date
Fabiano Fidêncio
d424f3c595 Merge pull request #7523 from fidencio/3.2.0-rc0-branch-bump
# Kata Containers 3.2.0-rc0
2023-08-02 20:04:37 +02:00
Zvonko Kaiser
cf8899f260 Merge pull request #7494 from zvonkok/vfio-mode
vfio: Fix vfio device ordering
2023-08-02 19:45:22 +02:00
David Esparza
542012c8be Merge pull request #7503 from GabyCT/topic/ghafio
metrics: Add FIO test to gha for kata metrics CI
2023-08-02 10:05:09 -06:00
David Esparza
5979f3790b Merge pull request #7516 from GabyCT/topic/addiperf
metrics: Add iperf3 network test
2023-08-02 10:04:51 -06:00
Fabiano Fidêncio
006ecce49a release: Kata Containers 3.2.0-rc0
- ci-on-push: Make the CI also run for the stable-* branches
- ci: k8s: Do not fail when gathering info on AKS nodes
- kata-deploy: enable cross build for non-x86
- runtime-rs: add support for gather metrics in runtime-rs
- kata-ctl: add monitor subcommand for runtime-rs
- release: release-note.sh: Fix typos and reference to images
- metrics: Add sysbench performance test
- Simplify implementation of runtime-rs/service

6ad16d497 release: Adapt kata-deploy for 3.2.0-rc0
025596b28 ci-on-push: Make the CI also run for the stable-* branches
7ffc0c122 static-build: enable cross build for qemu
35d6d86ab static-build: enable cross-build for image build
2205fb9d0 static-build: enable cross build for virtiofsd
11631c681 static-build: enable cross build for shim-v2
7923de899 static-build: cross build kernel
e2c31fce2 kata-deploy: enable cross build for kata deploy script
2fc5f0e2e kata-deploy: prepare env for cross build in lib.sh
f5e9985af release: release-note.sh: Fix typos and reference to images
f910c66d6 ci: k8s: Do not fail when gathering info on AKS nodes
632818176 metrics: Add k8s sysbench documentation
b3901c46d runtime-rs: ignore errors during clean up sandbox resources
5a1b5d367 metrics: Add sysbench pod yaml
ad413d164 metrics: Add sysbench dockerfile
151256011 metrics: Add sysbench performance test
62e328ca5 runtime-rs: refine implementation of TaskService
458e1bc71 runtime-rs: make send_message() a method of ServiceManager
1cc1c81c9 runtime-rs: fix possible bug in ServiceManager::run()
1a5f90dc3 runtime-rs: simplify implementation of service crate
731e7c763 kata-ctl: add monitor subcommand for runtime-rs. The previous kata-monitor, written in Go, could not communicate with runtime-rs to gather metrics due to different sandbox addresses; this PR adds the monitor subcommand to kata-ctl to gather metrics from runtime-rs and from the monitor itself.
d74639d8c kata-ctl: provide the global TIMEOUT for creating MgmtClient
02cc4fe9d runtime-rs: add support for gather metrics in runtime-rs

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-08-02 16:59:41 +02:00
Fabiano Fidêncio
6ad16d4977 release: Adapt kata-deploy for 3.2.0-rc0
kata-deploy files must be adapted to a new release. The cases where this
happens are when the release goes from -> to:
* main -> stable:
  * kata-deploy-stable / kata-cleanup-stable: are removed

* stable -> stable:
  * kata-deploy / kata-cleanup: bump the release to the new one.

There are no changes when doing an alpha release, as the files on the
"main" branch always point to the "latest" and "stable" tags.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-08-02 16:59:41 +02:00
Fabiano Fidêncio
4e812009f5 Merge pull request #7519 from fidencio/topic/gha-ci-run-on-stable-branches
ci-on-push: Make the CI also run for the stable-* branches
2023-08-02 16:13:06 +02:00
Fabiano Fidêncio
29855ed0c6 Merge pull request #7510 from fidencio/topic/ci-k8s-aks-do-not-fail-gathering-info
ci: k8s: Do not fail when gathering info on AKS nodes
2023-08-02 09:44:19 +02:00
Fabiano Fidêncio
025596b289 ci-on-push: Make the CI also run for the stable-* branches
As we only support one stable branch at a time, it'll be used from
stable-3.2 onwards.

Fixes: #7518

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-08-02 09:26:24 +02:00
Fabiano Fidêncio
e1a69c0c92 Merge pull request #6586 from jongwu/cross_build
kata-deploy: enable cross build for non-x86
2023-08-02 09:11:56 +02:00
Fupan Li
1a6b27bf6a Merge pull request #5797 from Yuan-Zhuo/add-metrics-for-runtime-rs
runtime-rs: add support for gather metrics in runtime-rs
2023-08-02 13:40:22 +08:00
Fupan Li
a536d4a7bf Merge pull request #6672 from Yuan-Zhuo/add-monitor-in-kata-ctl
kata-ctl: add monitor subcommand for runtime-rs
2023-08-02 13:39:02 +08:00
Gabriela Cervantes
ad6e53c399 metrics: Modify boot time values
This PR modifies the boot time value limits.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-08-01 23:34:15 +00:00
Jianyong Wu
7ffc0c1225 static-build: enable cross build for qemu
Relying on the multiarch feature of Ubuntu, we can set up a cross-build
environment easily and achieve build performance as good as a native
build.

Fixes: #6557
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
2023-08-01 23:28:52 +02:00
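The multiarch-based setup above boils down to installing a cross gcc and pointing the QEMU build at it. A minimal sketch, assuming an illustrative prefix mapping (variable names and cases are not the actual build script):

```shell
# Illustrative sketch: pick a cross-compile prefix for the target
# architecture, as a cross-build setup script might do.
target_arch="aarch64"
case "${target_arch}" in
    aarch64) cross_prefix="aarch64-linux-gnu-" ;;
    s390x)   cross_prefix="s390x-linux-gnu-" ;;
    *)       cross_prefix="" ;;     # native build, no prefix needed
esac
echo "cross prefix: ${cross_prefix:-<none>}"
```

On Ubuntu the matching toolchain would typically come from a package such as gcc-aarch64-linux-gnu, installed after enabling the foreign architecture with `dpkg --add-architecture`.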
Jianyong Wu
35d6d86ab5 static-build: enable cross-build for image build
Cross building the agent with docker buildx takes too long, so we cross
build the rootfs in a container that ships a cross-compile toolchain of
gcc and Rust with musl libc. This way we get builds as fast as native
ones.

The rootfs initrd cross build is disabled, as no cross-compile toolchain
for Rust with musl libc is available for Alpine, and building via docker
buildx takes too long.

Fixes: #6557
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
2023-08-01 23:28:52 +02:00
Gabriela Cervantes
f764248095 gha: Add FIO test to run metrics yaml
This PR adds FIO test to run metrics yaml.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-08-01 20:29:16 +00:00
Jianyong Wu
2205fb9d05 static-build: enable cross build for virtiofsd
Based on messense/rust-musl-cross which offer cross build musl lib
environment to cross compile virtiofsd.

Fixes: #6557
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
2023-08-01 22:10:46 +02:00
Jianyong Wu
11631c681a static-build: enable cross build for shim-v2
shim-v2 contains both Go and Rust code. For the Rust code, we use
messense/rust-musl-cross to speed the build up, as it doesn't depend on
QEMU emulation. The Go code is built with docker buildx, as it doesn't
support cross building for now.

Fixes: #6557
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
2023-08-01 22:10:46 +02:00
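The split described above (native cross compilation for Go, a musl toolchain image for Rust) can be sketched as follows; the ARCH-to-GOARCH mapping is illustrative, not the actual script:

```shell
# Go cross-compiles natively via GOOS/GOARCH, no emulation required;
# Rust code would instead go through a toolchain image such as
# messense/rust-musl-cross.
arch="aarch64"
case "${arch}" in
    aarch64) goarch="arm64" ;;
    x86_64)  goarch="amd64" ;;
    *)       goarch="${arch}" ;;
esac
echo "GOOS=linux GOARCH=${goarch} go build ./..."
```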
Jianyong Wu
7923de8999 static-build: cross build kernel
Prepare the cross-build environment based on the current Dockerfile.

Fixes: #6557
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
2023-08-01 22:10:46 +02:00
Jianyong Wu
e2c31fce23 kata-deploy: enable cross build for kata deploy script
kata-deploy-binaries-in-docker.sh is the entry point for building the
kata components. Set some environment variables to facilitate the
cross-build work that follows.

Fixes: #6557
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
2023-08-01 22:10:46 +02:00
Jianyong Wu
2fc5f0e2e0 kata-deploy: prepare env for cross build in lib.sh
We leverage three environment variables: TARGET_ARCH is the build target
tuple; ARCH has nearly the same meaning as TARGET_ARCH but has been
widely used in kata; CROSS_BUILD indicates whether you want to cross
compile.

Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
2023-08-01 22:10:46 +02:00
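A minimal sketch of how the three variables could interact, assuming defaults derived from the host (this is illustrative, not the exact lib.sh code):

```shell
# ARCH defaults to the host architecture, TARGET_ARCH defaults to ARCH,
# and CROSS_BUILD is derived by comparing the target against the host.
ARCH="${ARCH:-$(uname -m)}"
TARGET_ARCH="${TARGET_ARCH:-${ARCH}}"
if [ "${TARGET_ARCH}" = "$(uname -m)" ]; then
    CROSS_BUILD=false
else
    CROSS_BUILD=true
fi
echo "target: ${TARGET_ARCH}, cross build: ${CROSS_BUILD}"
```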
Fabiano Fidêncio
c0171ea0a7 Merge pull request #7508 from fidencio/topic/fix-release-notes-typos-and-references
release: release-note.sh: Fix typos and reference to images
2023-08-01 22:05:32 +02:00
Gabriela Cervantes
58f9a57c20 metrics: Add network reference to general README metrics
This PR adds network reference to the general metrics README.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-08-01 16:54:00 +00:00
Gabriela Cervantes
07694ef3ae metrics: Add Kata Containers network metrics README
This PR adds the Kata Containers network metrics README.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-08-01 16:49:09 +00:00
Gabriela Cervantes
d8439dba89 metrics: Add iperf3 deployment yaml
This PR adds the iperf3 deployment yaml.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-08-01 16:45:01 +00:00
Gabriela Cervantes
bda83cee5d metrics: Add iperf3 daemonset for k8s
This PR adds the iperf3 daemonset for k8s.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-08-01 16:42:15 +00:00
Gabriela Cervantes
badff23c71 metrics: Add iperf3 service yaml for k8s
This PR adds the iperf3 service yaml for k8s.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-08-01 16:37:19 +00:00
Gabriela Cervantes
27c02367f9 metrics: Add iperf3 network test
This PR adds the iperf3 benchmark test for kata metrics.

Fixes #7515

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-08-01 16:30:46 +00:00
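A benchmark like the one above typically runs iperf3 against a server pod and parses the JSON report. A sketch of the result-extraction step, using a synthetic document in place of a real `iperf3 -c <server> -t 10 -J` run (the field path follows iperf3's JSON output; requires jq):

```shell
# Stand-in for a real iperf3 JSON report.
cat > results.json <<'EOF'
{"end": {"sum_received": {"bits_per_second": 9415926535}}}
EOF
# Extract the received throughput, as a metrics script might.
bps=$(jq '.end.sum_received.bits_per_second' results.json)
echo "throughput: ${bps} bits/s"
```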
GabyCT
a0a524efc2 Merge pull request #7486 from kata-containers/topic/addsysbench
metrics: Add sysbench performance test
2023-08-01 10:17:48 -06:00
Fabiano Fidêncio
f5e9985afe release: release-note.sh: Fix typos and reference to images
diferent -> different

And also let's make sure we escape the backticks around the kata-deploy
environment variables, otherwise bash will try to interpret those.

Fixes: #7497

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-08-01 12:42:03 +02:00
Fabiano Fidêncio
f910c66d6f ci: k8s: Do not fail when gathering info on AKS nodes
Otherwise the VM deletion may not happen, leaving several machines
behind.

Fixes: #7509

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-08-01 12:36:33 +02:00
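The fix boils down to a common pattern: under `set -e`, diagnostic commands must not abort the teardown, so each one tolerates failure. A sketch (the kubectl commands are illustrative):

```shell
set -e
# Info gathering is best-effort: `|| true` keeps a failing command from
# aborting the rest of the cleanup.
kubectl describe nodes || true
kubectl get pods -A -o wide || true
echo "cleanup continues regardless"
```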
Chao Wu
1a94aad44f Merge pull request #7480 from jiangliu/rt-service
Simplify implementation of runtime-rs/service
2023-08-01 16:05:33 +08:00
Chao Wu
2d13e2d71c Merge pull request #7504 from fidencio/topic/gha-release-fix-upload-versions-yaml
release: Fix upload-versions-yaml
2023-08-01 13:58:07 +08:00
GabyCT
b77d69aeee Merge pull request #7396 from GabyCT/topic/addghatensorflow
metrics: Enable Tensorflow metrics for kata CI
2023-07-31 17:13:24 -06:00
Fabiano Fidêncio
743291c6c4 release: Fix upload-versions-yaml
This requires the GITHUB_UPLOAD_TOKEN. While we're here, let's also fix
the name of the action and remove the "-tarball" suffix, as it's not
really a tarball.

Fixes: #7497

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-31 23:57:33 +02:00
Fabiano Fidêncio
a71d35c764 Merge pull request #7499 from fidencio/topic/gha-release-ensure-stage-is-defined-for-amr64-s300x
gha: release: `stage` must be defined for arm64 / s390x yamls
2023-07-31 22:55:54 +02:00
Gabriela Cervantes
6328181762 metrics: Add k8s sysbench documentation
This PR adds k8s sysbench documentation at general density documentation.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-31 20:28:37 +00:00
Chelsea Mafrica
f74b7aba18 Merge pull request #7488 from cmaf/docs-k8s-links
docs: Update links for pods and kubelet
2023-07-31 12:44:24 -07:00
Gabriela Cervantes
8933d54428 metrics: Add FIO to gha run script
This PR adds FIO to gha run script.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-31 17:51:11 +00:00
Gabriela Cervantes
8a584589ff metrics: Add DAX FIO README
This PR adds DAX FIO README information.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-31 17:42:44 +00:00
Gabriela Cervantes
21f5b65233 metrics: Add FIO information in storage general README
This PR adds FIO information in storage general README.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-31 17:33:39 +00:00
Gabriela Cervantes
69f05cf9e6 metrics: Add FIO general README
This PR adds FIO general README information.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-31 17:30:05 +00:00
Gabriela Cervantes
87d41b3dfa metrics: Add FIO test to gha for kata metrics CI
This PR adds FIO test to gha for kata metrics CI.

Fixes #7502

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-31 16:50:16 +00:00
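A FIO metrics run typically produces JSON (a real invocation would look like `fio --name=randread --rw=randread --bs=4k --output-format=json`) and the test extracts figures from it. A sketch of the extraction step over a synthetic report (field names follow fio's JSON layout; requires jq):

```shell
# Stand-in for a real `fio --output-format=json` report.
cat > fio.json <<'EOF'
{"jobs": [{"jobname": "randread", "read": {"iops": 12345}}]}
EOF
# Pull the random-read IOPS out of the first job, as a metrics
# script might.
iops=$(jq '.jobs[0].read.iops' fio.json)
echo "random read IOPS: ${iops}"
```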
Fabiano Fidêncio
ff8d7e7e41 Merge pull request #7496 from fidencio/topic/topic/kata-deploy-take-nfd-into-consideration-pre-work
k8s: Rely on the USING_NFD environment variable passed by the jobs
2023-07-31 14:56:15 +02:00
Fabiano Fidêncio
1b111a9aab gha: release: stage must be defined for arm64 / s390x yamls
`stage` has been added, but only hooked up to the amd64 logic, leaving
arm64 and s390x behind.

Let's fix this right now, and make sure no error occurs when passing
this down to the yaml files.

Fixes: #7497

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-31 14:41:35 +02:00
Fabiano Fidêncio
684a6e1a55 Revert "gha: release: stage must be a string"
This reverts commit 7c857d38c1.

I've misunderstood the error given by github action, let's fix this in
the next commit.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-31 14:37:52 +02:00
Fabiano Fidêncio
99711f107f Merge pull request #7498 from fidencio/topic/gha-release-stage-must-be-a-string
gha: release: `stage` must be a string
2023-07-31 14:32:47 +02:00
Fabiano Fidêncio
7c857d38c1 gha: release: stage must be a string
Otherwise we'll face the following error as part of our GHA:
```
The workflow is not valid.
kata-containers/kata-containers/.github/workflows/release-$foo.yaml
(Line: 13, Col: 14): Invalid input, stage is not defined in the
referenced workflow.
```

Fixes: #7497

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-31 13:39:13 +02:00
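The error above comes from a reusable workflow whose caller passes an input the callee never declared. A hedged sketch of the declaration side (names and defaults are illustrative, not the actual kata-containers workflow):

```yaml
# Reusable workflow: declare `stage` so callers may pass it as a string.
on:
  workflow_call:
    inputs:
      stage:
        required: false
        type: string
        default: test
```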
Fabiano Fidêncio
28e171bf73 Merge pull request #7490 from fidencio/3.2.0-alpha4-branch-bump
# Kata Containers 3.2.0-alpha4
2023-07-31 13:34:15 +02:00
Fabiano Fidêncio
91e1e612c3 k8s: Rely on the USING_NFD environment variable passed by the jobs
Let's make sure we can rely on the tests passing down whether they want
to be tested using Node Feature Discovery or not.

Right now, only the TDX job has this option set to "true", all the other
jobs have this option set to "false".

We can and have to merge this one before merging the NFD related patches
as:
1) It causes no harm to export this environment variable without having
   it used
2) It will allow us to test NFD after this one is merged, as changes in
   the yaml file, in the case of the pull_request_target event, are not
   taken into consideration before they're merged

Fixes: #7495

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-31 13:30:18 +02:00
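A sketch of a consuming test script: the USING_NFD variable name comes from the commit above, but the handling shown here is hypothetical:

```shell
# Default to "false" when the job doesn't pass the variable down.
export USING_NFD="${USING_NFD:-false}"
if [ "${USING_NFD}" = "true" ]; then
    echo "running tests with Node Feature Discovery"
else
    echo "running tests without Node Feature Discovery"
fi
```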
Zvonko Kaiser
cddcde1d40 vfio: Fix vfio device ordering
If modeVFIO is enabled we need to attach the VFIO control group device
/dev/vfio/vfio first and the actual device(s) afterwards. Sort the
devices so that device #1 is the VFIO control group device, followed by
the actual device(s)
/dev/vfio/<group>

Fixes: #7493

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2023-07-31 11:26:27 +00:00
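The ordering constraint can be sketched as: whatever order the devices arrive in, emit /dev/vfio/vfio first, then the group devices (the device list below is illustrative):

```shell
# Reorder so the VFIO control device leads, followed by the
# /dev/vfio/<group> devices in their original relative order.
devices="/dev/vfio/12 /dev/vfio/vfio /dev/vfio/7"
ordered="/dev/vfio/vfio"
for d in ${devices}; do
    [ "${d}" = "/dev/vfio/vfio" ] || ordered="${ordered} ${d}"
done
echo "${ordered}"
```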
Fabiano Fidêncio
7edc7172c0 release: Kata Containers 3.2.0-alpha4
- tests: Add `k8s-volume` and `k8s-file-volume` tests to GHA CI
- metrics: Update boot time for kata metrics
- metrics: Add FIO report files for kata metrics
- kata-deploy: Allow runtimeclasses to be created by the daemonset
- runtime-rs: change block index to 0
- agent: fix typo in constant
- metrics: Add FIO benchmark for metrics tests
- gha: dragonball: Run only on the dragonball labeled machine
- tests: Fix `k8s-job` test
- agent,libs: Remove unused 'mut' keywords
- runtime-rs: remove unneeded 'mut' keywords
- tests: QoL improvements for running tests locally
- agent: exclude symlinks from recursive ownership change
- cache: kernel: Fix kernel caching
- runk: Add Docker guide to README
- metrics: General improvements to json.bash script
- kata-deploy: Allow shim creation based on what's passed to the daemonset
- gha: ci: Add skeleton of vfio job
- s390x: Fixing device.Bus assignment
- release: Mention the container images used to build the project
- kata-deploy-binaries: kernel_cache: Take module_dir into account
- ci: nydus: Fix typo in "source"
- gha: ci: Add no-op nydus tests to our CI
- Dragonball: migrate dragonball-sandbox crates to Kata
- ci: gha: Add cri-containerd tests (but still do not enable them)
- packaging/tools: Add kata-debug and use it as part of our CI
- cache: kernel: Consider changes in tools/packaging/kernel
- kata-deploy: Properly get the path of the versions.yaml file
- kata-deploy: Add VERSION and versions.yaml to the final tarball
- metrics: Add C-Ray performance test
- metrics: enable TensorFlow benchmark to be run on gha
- metrics: Add function to memory inside container script
- Revert "metrics: Replace backslashes used to escape double quoted key in jq expr"
- versions: Bump virtiofsd to v1.7.0
- metrics: stop hypervisor and shim at init_env stage
- ci: k8s: Adapt "source ..." to the new location of gha-run.sh
- ci: Move `tests/integration/gha-run.sh`  to `tests/integration/kuberentes/` ... and also remove KUBECONFIG from the tdx envs
- versions: Update kernel to version v6.1.x
- agent: Fix exec hang issues with a background process
- agent: Ignore already mounted dev/fs/pseudo-fs
- ci: k8s: Bring TDX tests back
- metrics: Update machine learning documentation
- gha: ci: cri-containerd: Fix KATA_HYPERVSIOR typo
- tests: Add MobileNet Tensorflow performance benchmark
- metrics: replace backslashes used to escape double quoted jq key expr.
- runtime-rs: enhancement of Device Manager for network endpoints.
- feat(Tracing): tracing in Rust runtime
- runtime-rs: ignore unconfigured network interfaces
- metrics: Stop running kata-env before kata is properly installed.
- metrics: use rm -f to remove the oldest containerd config file.
- kernel: Update kernel config name
- kata-deploy: Add a debug option to kata-deploy (and also use it as part of our CI)
- runtime-rs: add parameter for propagation of (u)mount events
- kata-ctl: Move GuestProtection code to kata-sys-util
- tests: Add function before function name in common.bash for metrics
- tests: Add metrics storage documentation
- metrics: Fix metrics ts generator to treat numbers as decimals
- gha: ci: Add cri-containerd tests skeleton -- follow up 1
- dragonball/agent: Add some optimization for Makefile and bugfixes of unit tests on aarch64
- metrics: Enable blogbench test
- tests: Add machine learning performance tests
- tests: gha: ci: Add cri-containerd tests skeleton
- metrics: Enable memory inside container metrics
- tools: Use a consistent target name when building mariner initrd
- gha: ci: Gather info about the node / pods
- runtime-rs: Do not scan network if network model is "none"
- gha: k8s: tdx: Temporarily disable TDX tests
- metrics: Update memory usage script
- gha: Cancel previous jobs if a PR is updated
- gha: nightly: Fix long name of AKS clusters issue and make the CI easier to test
- README: Add badge for our Nightly CI
- gha: Do not run all the tests if only docs are updated
- bugfix: plus default_memory when calculating mem size
- gha: ci: Use github.sha to get the last commit reference
- dragonball: Don't fail if a request asks for more CPUs than allowed
- gha: ci: Fix reference passed to checkout@v3
- gha: ci: Avoid using env also in the ci-nightly and payload-after-push
- gha: k8s: Ensure cluster doesn't exist before creating it
- gha: ci: More follow up fixes after adding a nightly CI
- tests: Enable running k8s tests on Mariner
- gha: ci: Avoid using env unless it's really needed
- gha: ci: Follow up fixes for the nightly jobs
- tests: Enable memory usage metrics tests
- gha: Add nightly jobs
- metrics: storing metrics workflow artifacts
- gha: k8s: Ensure tests are running on a specific namespace
- metrics: Adds blogbench and webtool metrics tests
- gha: dragonball: Correctly propagate PATH update
- versions: Upgrade to Cloud Hypervisor v33.0
- Convert `is_allowed`, `ttrpc_error` and `sl` to functions
- gha: release: Use a specific release of hub
- metrics: Add checkmetrics to gha-run.sh for metrics CI
- packaging: Fix indentation of build.sh script at ovmf
- doc: Add documentation for the virtualization reference architecture
- gpu: Update kernel building to the latest changes
- runtime: fix PCIe topology for GPUDirect use-case
- metrics: Add memory footprint tests
- runtime: Add "none" as a shared_fs option
- metrics: Uniformity across function names in gha-run.sh
- runtime-rs:  support physical endpoint using device manager
- runtime-rs: bugfix for direct volume path's validation.
- metrics: Fix retrieving hypervisor version on metrics
- runtime-rs: fix build error on AArch64
- checkmetrics: Add checkmetrics makefile and documentation
- docs: Add boot time metrics documentation
- runtime-rs: add support spdk/vhost-user based volume.
- static-build: Remove kata-version parameter
- dragonball: avoid obtaining lock twice in create_stdio_console
- metrics: Add checkmetrics for kata metrics CI
- metrics: enable launch-times test on gha-run metrics script
- docs: Add general metrics documentation
- add support vfio device manager
- gha: Don't automatically trigger CI
- kata-ctl: Check for vm capability
- docs: fix spelling of "crate"
- packaging: Fix indentation in init.sh script
- gha: Fix gha actions
- metrics: install kata and launch-times test
- tests: Move tests helper script to this repo
- tests: Add json script for metrics tests
- Cherry pick initramfs caching updates from CCv0
- gha: Fix format for run launchtimes metrics yaml
- tests: Add tests lib common script
- Fix deprecated virtiofsd args (go shim only)
- gha: Add base branch on SHA on pull request
- gha: ci-on-push: Run metrics tests
- docs: Update Developer Guide
- runtime-rs: Enhance flexibility of virtio-fs config
- versions: Update firecracker version to 1.3.3
- tools: Fix no-op builds
- runtime-rs: update Cargo.lock
- gha: Fix `stage` definition in matrix
- feat(runtime): vcpu resize capability
- packaging: Remove snap package
- gha: Add new build targets for Mariner
- Dragonball: support resize memory
- Port Measured rootfs feature from CCv0 branch to main
- add support direct volume and refactor device manager
- gha: Fix gha-run.sh and unbreak CI
- kata-ctl: Switch to slog logging; add --log-level and --json-logging arguments
- log-parser: Update log parser link at README
- gha: aks: Extract `run` commands to a script
- runtime-rs: handle copy files when share_fs is not available
- agent-ctl: fix the compile error
- agent: fix the issue of exec hang with a backgroud process
- runtime-rs: bugfix: update Cargo.lock
- gha: aks: Use short SHA in cluster name
- README: Display badge for the "Publish Artefacts" job and update the Kata Containers logo
- kata-deploy: Change how we get the Ubuntu k8s key
- gha: aks: Ensure host_os is used everywhere needed
- kubernetes: add agnhost command in pod yaml
- main | release: Standardize kata static file name
- packaging: make BUILDER_REGISTRY configurable
- gha: aks: Add the host_os as part of the aks cluster's name
- kernel: Modify build-kernel.sh to accommodate changes in version.yaml
- gha: Fix Mariner cluster creation
- gha: Unbreak CI and fix cluster creation step
- Dragonball: support vcpu hotplug on aarch64
- runtime-rs/sandbox_bindmounts: add support for sandbox bindmounts
- runtime-rs/kata-ctl: Enhancement of DirectVolumeMount.
- gha: Create Mariner host as part of k8s tests
- netlink: Fix the issue of update_interface
- gha: Increase timeout for AKS jobs and give more time to start running the tests
- runtime: sending SIGKILL to qemu
- dragonball: convert BlockDeviceMgr and VirtioNetDeviceMgr functions to methods
- dragonball: Remove virtio-net and vsock devices gracefully
- kata-deploy: Improve shim backup / restore
- doc: Update git commands
- kata-deploy: Fix indentation on kata deploy merge script

8353aae41 ci: k8s: Rework get_nodes_and_pods_info()
6ad5d7112 ci: k8s: Do not gather node info before running the tests
5261e3a60 ci: k8s: Group messages to improve readability
9cc6b5f46 ci: k8s: Get logs from kata-deploy
9d285c622 ci: k8s: Let kata-deploy take care of the runtimeclasses
87568ed98 gha: Test split out runtimeclasses are in sync with all-in-one file
39192c608 kata-deploy: Print variables passed to the script
0e157be6f kata-deploy: Allow runtimeclasses to be created by the daemonset
a27433324 kata-deploy: Change default values of DEBUG
69535b808 kata-deploy: runtimeclass: Split out entries
9e1710674 kata-runtimeClasses: Alphabetically sort the entries
6222bd910 tests: Add k8s-file-volume test
187a72d38 tests: Add k8s-volume test
0c8427035 metrics: Add boot time value for qemu
6520dfee3 metrics: Update boot time for kata metrics
ff2279061 metrics: Update runtime and configuration paths
a5d4e3388 metrics: Add compare virtiofsd dax script
5e937fa62 metrics: Update general FIO tests
b0bea47c5 metrics: Add makefile to report generator
73c57b9a1 metrics: Add FIO report files for kata metrics
c8fcd29d9 runtime-rs: use device manager to handle virtio-pmem
901c19225 runtime-rs: support configure vm_rootfs_driver
5d6199f9b runtime-rs: use device manager to handle vm rootfs
20f1f62a2 runtime-rs: change block index to 0
662f87539 metrics: Add general FIO makefile
c5a87eed2 tests: gha: Add timeout to cluster creation
6daeb08e6 tests: k8s: Clean up node debuggers after running
3aa6c77a0 gha: dragonball: Run only on the dragonball labeled machine
37641a543 metrics: Add example config for fio jobs
314aec73d agent: fix typo in constant
4703434b1 tests: k8s: Allow using custom resource group
350f3f70b tests: Import `common.bash` in `run_kubernetes_tests.sh`
d7f04a64a tests: k8s: Leave `runtimeclass_workloads/` alone
bdde6aa94 tests: k8s: Split deployment and testing commands
91a0b3b40 tests: aks: Simply delete cluster when cleaning up
3c1044d9d metrics: Update FIO paths for k8s runner
6177a0db3 metrics: Add env files for FIO
a45900324 metrics: Add fio exec
ea198fddc metrics: Add FIO runner k8s
8f7ef41c1 metrics: Add FIO vendor code
6293c17bd metrics: Add FIO benchmark for metrics tests
ff4cfcd8a runk: Add Docker guide to README
c8ac56569 cache: kernel: Harmonize commit with fetching side
81775ab1b cache: kernel: Fix SEV kernel caching
717f775f3 gha: ci: Add skeleton of vfio job
b9f100b39 agent,libs: Remove unused 'mut' keywords
a56f96bb2 kata-deploy: Allow shim creation based on what's passed to the daemonset
4a5ab38f1 metrics: General improvements to json.bash script
d4eba3698 kata-deploy-binaries: kernel_cache: Take module_dir into account
b7c9867d6 release: Mention the container images used to build the project
7c4b59781 ci: nydus: Fix typo in "source"
6a680e241 gha: ci: Add placeholder for the nydus tests as part of the CI
fb4f7a002 gha: nydus: Add a no-op GHA for nydus
4a207a16f gha: nydus: Bring tests as they are from the tests repo
2c8f83424 runtime-rs: remove unneeded 'mut' keywords
1fc715bc6 s390x: Add AP Attach/Detach test
e91f5edba ci: cri-containerd: Fix default typo for testContainerStart()
8b8aef09a ci: cri-containerd: Temporarily disable TestContainerSwap
56767001c ci: cri-containerd: Add namespace / uid to the pods
a84773652 ci: cri-containerd: Always use sudo to call crictl
99ba86a1b ci: cri-containerd: Add /usr/local/go/bin to the PATH
7f3b30999 ci: cri-containerd: Add `function` before each function
fde22d6bc ci: cri-containerd: Assume podman is always used
9465a0496 ci: cri-containerd: Adapt "source ..." to this repo
df8d14411 ci: cri-containerd: Remove CI variable
f90570aef ci: cri-containerd: Remove unused runc_runtime_bin
c3637039f ci: cri-containerd: Remove KILL_VMM_TEST env var
bc4919f9b ci: cri-containerd: Always run shim-v2 tests
f9e332c6d ci: cri-containerd: Stop cloning containerd
cfd662fee ci: cri-containerd: Remove unused SNAP_CI var
d36c3395c ci: cri-containerd: Update copyright
b5be8a4a8 ci: cri-containerd: Move integration-tests.sh as it was
f2e00c95c ci: cri-containerd: Populate install_dependencies()
897955252 versions: Add "latest" field for cri-tools
1bbcbafa6 ci: Add clone_cri_container()
f66c68a2b ci: Add install_cri_tools()
4dd828414 ci: Add install_cri_containerd()
ad47d1b9f ci: Add download_github_project_tarball()
788c562a9 ci: Add get_latest_patch_release_from_a_github_project()
6742f3a89 ci: Use `function` before each install_go.sh function
5eacecffc ci: Adjust paths for install_go.sh
8ed1595f9 ci: Update copyright for install_go.sh
6123d0db2 ci: Move install_go.sh as it was
8653be71b ci: Do not take cross-build into consideration for kata-arch.sh
6a76bf92c ci: Fix style / indentation in kata-arch.sh
72743851c ci: Add `function` before each kata-arch.sh function
9f6d4892c ci: Update copyright for kata-arch.sh
6f73a7283 ci: Move kata-arch.sh as it was
3615d7343 ci: Add get_from_kata_deps()
34779491e gha: kubernetes: Avoid declaring repo_root_dir
f3738beac tests: Use $HOME/go as fallback for $GOPATH
b87ed2741 tests: Move `ensure_yq` to common.bash
124e39033 tests: common: Fix quoting when globbing
db77c9a43 tests: Make install_kata take care of the links
13715db1f tests: Do not call `install_check_metrics` when installing kata
630634c5d ci: k8s: Group logs to make them easier to read
228b30f31 ci: k8s: Gather node info during the cleanup
81f99543e ci: k8s: Cleanup cluster before deleting it
38a7b5325 packaging/tools: Add kata-debug
ae6e8d2b3 kata-deploy: Properly get the path of the versions.yaml file
309e23255 cache: kernel: Consider changes in tools/packaging/kernel
59fdd69b8 kata-deploy: Add VERSION and versions.yaml to the final tarball
5dddd7c5d release: Upload versions.yaml as part of the release
bad3ac84b metrics: Rename C-Ray to cpu performance tests
87d99a71e versions: Remove "kernel-experimental"
545de5042 vfio: Fix tests
62aa6750e vfio: Added better handling of VFIO Control Devices
dd422ccb6 vfio: Remove obsolete HotplugVFIOonRootBus
114542e2b s390x: Fixing device.Bus assignment
371a118ad agent: exclude symlinks from recursive ownership change
e64edf41e metrics: Add tensorflow function in gha-run script
67a6fff4f metrics: Enable tensorflow benchmark on gha
01450deb6 Revert "metrics: Replace backslashes used to escape double quoted key in jq expr."
843006805 metrics: Add function to memory inside container script
bbd3c1b6a Dragonball: migrate dragonball-sandbox crates to Kata
fad801d0f ci: k8s: Adapt "source ..." to the new location of gha-run.sh
55e2f0955 metrics: stop hypervisor and shim at init_env stage
556e663fc metrics: Add disk link to general metrics README
98c121709 metrics: Add C-Ray README
8e7d9926e metrics: Add C-Ray Dockerfile
e2ee76978 metrics: Add C-Ray performance test
2ee2cd307 ci: k8s: Move gha-run.sh to the kubernetes dir
88eaff533 ci: tdx: Adjust KUBECONFIG
c09e268a1 versions: Downgrade SEV(-SNP) kernel back to v5.19.x
6a7a32365 versions: Bump virtiofsd to v1.7.0
ac5f5353b ci: k8s: Bring TDX tests back
950b89ffa versions: Update kernel to version v6.1.38
8ccc1e5c9 metrics: Update machine learning documentation
f50d2b066 gha: ci: cri-containerd: Fix KATA_HYPERVSIOR typo
620b94597 metrics: Add Tensorflow Mobilenet documentation
6c91af0a2 agent: Fix exec hang issues with a background process
59f4731bb metrics: Stop running kata-env before kata is properly installed.
468f017e2 metrics: Replace backslashes used to escape double quoted key in jq expr.
64f013f3b ci: k8s: Enable debug when running the tests
8f4b1df9c kata-deploy: Give users the ability to run it on DEBUG mode
2c8dfde16 kernel: Update kernel config name
150e54d02 runtime-rs: ignore unconfigured network interfaces
3ae02f920 metrics: use rm -f to remove older containerd config file.
a864d0e34 tests: Add tensorflow mobilenet dockerfile
788d2a254 tests: Add tensorflow mobilenet performance test
3fed61e7a tests: Add storage link to general metrics documentation
b34dda4ca tests: Add storage blogbench metrics documentation
6787c6390 runtime-rs: add parameter for propagation of (u)mount events
6e5679bc4 tests: Add function before function name in common.bash for metrics
62080f83c kata-sys-util: Fix compilation errors
02d99caf6 static-checks: Make cargo clippy pass.
982420682 agent: Make the static checks pass for agent
61e4032b0 kata-ctl: Remove all utility functions to get platform protection
a24dbdc78 kata-sys-util: Move utilities to get platform protection
dacdf7c28 kata-ctl: Remove cpu related functions from kata-ctl
f5d195717 kata-sys-util: Move additional functionality to cpu.rs
304b9d914 kata-sys-util: Move CPU info functions
7319cff77 ci: cri-containerd: Add LTS / Active versions for containerd
2a957d41c ci: cri-containerd: Export GOPATH
75a294b74 ci: cri-containerd: Ensure deps are installed
6924d14df metrics: Fix metrics ts generator to treat numbers as decimals
9e048c8ee checkmetrics: Add blogbench read value for qemu
2935aeb7d checkmetrics: Add blogbench write value for qemu
02031e29a checkmetrics: Add blogbench read value for clh
107fae033 checkmetrics: Add blogbench write value for clh
8c75c2f4b metrics: Update blogbench Dockerfile
49723a9ec metrics: Add double quotes to variables
dc67d902e metrics: Enable blogbench test
438fe3b82 gha: ci: Add cri-containerd tests skeleton
bd08d745f tests: metrics: Move metrics specific function to metrics gha-run.sh
3ffd48bc1 tests: common: Move a few utility functions to common.bash
7f961461b tests: Add machine learning README
bb2ef4ca3 tests: Add `function` before each function
063f7aa7c tests: Add Pytorch Dockerfile
1af03b9b3 tests: Add Pytorch performance test
4cecd6237 tests: Add tensorflow Dockerfile
c4094f62c tests: Add metrics machine learning performance tests
89b622dcb gha: k8s: tdx: Temporarily disable TDX tests
8c9d08e87 gha: ci: Gather info about the node / pods
283f809dd runtime-rs: Enhancing Device Manager for network endpoints.
a65291ad7 agent: rustjail: update test_mknod_dev
46b81dd7d agent: clippy: fix cargo clippy warnings
c4771d9e8 agent: Makefile: enable set SECCOMP dynamically
a88212e2c utils.mk: update BUILD_TYPE argument
883b4db38 dragonball: fix cargo test on aarch64
6822029c8 runtime-rs: Do not scan network if network model is "none"
ce54e43eb metrics: Update memory usage script
fbc2a91ab gha: Cancel previous jobs if a PR is updated
307cfc8f7 tools: Use a consistent target name when building mariner initrd
d780cc08f gha: nightly: Also use `workflow_dispatch` to trigger it
b99ff3026 gha: nightly: Fix name size limit for AKS
aedc586e1 dragonball: Makefile: add coverage target
310e069f7 checkmetrics: Enable checkmetrics for memory inside test
1363fbbf1 README: Add badge for our Nightly CI
1776b18fa gha: Do not run all the tests if only docs are updated
28c29b248 bugfix: plus default_memory when calculating mem size
0c1cbd01d gha: ci: after-push: Use github.sha to get the last commit reference
37a955678 gha: ci: nightly: Use github.sha to get the last commit reference
ed23b47c7 tracing: Add tracing to runtime-rs
96e9374d4 dragonball: Don't fail if a request asks for more CPUs than allowed
38f0aaa51 Revert "gha: k8s: dragonball: Skip k8s-number-cpus"
828a72183 gha: k8s: dragonball: Skip k8s-oom
a79505b66 gha: k8s: dragonball: Skip k8s-number-cpus
275c84e7b Revert "agent: fix the issue of exec hang with a backgroud process"
2be342023 checkmetrics: Add memory usage inside container value for qemu
6ca34f949 checkmetrics: Add memory inside container value for clh
6c6892423 metrics: Enable memory inside container metrics
0ad298895 gha: ci: Fix refernce passed to checkout@v3
86904909a gha: ci: Avoid using env also in the ci-nightly and payload-after-push
f72cb2fc1 agent: Remove shadowed function, add slog-term
1d05b9cc7 gha: ci: Pass down secrets to ci-on-push / ci-nightly
c5b4164cb gha: ci: Fix tarball-suffix passed to the metrics tests
07810bf71 agent: Ignore already mounted dev/fs/pseudo-fs
11e3ccfa4 gha: ci: Avoid using env unless it's really needed
c45f646b9 gha: k8s: Ensure cluster doesn't exist before creating it
1a7bbcd39 gha: ci: Fix typo pull_requesst -> pull_request
ddf4afb96 gha: ci: Fix set-fake-pr-number job
8a0a66655 gha: ci: schedule expects a list, not a map
5c0269dc5 gha: ci: Add pr-number input to the correct job
de83cd9de gha: ci: Use $VAR instead of ${{ env.VAR }}
6acce83e1 metrics: Fix the call to check_metrics function
e067d1833 gha: Add a nightly CI job
7c0de8703 gha: k8s: Ensure tests are running on a specific namespace
106e30571 gha: Create a re-usable `ci.yaml` file
cc3993d86 gha: Pass event specific info from the caller workflow
4e396e728 metrics: Add function keyword to to helper metrics functions
1ca17c2f7 metrics: storing metrics workflow artifacts
5a61065ab checkmetrics: Add checkmetrics value for memory usage in qemu
78086ed1f checkmetrics: Add memory usage value for clh
1c3dbafbf metrics: Fix function of how to retrieve multiple values
18968f428 metrics: Add function to have uniformity
35d096b60 metrics: Adds blogbench and webtool metrics tests
d8f90e89d metrics: Rename function at memory usage script
b9d66e0d5 metrics: Fix double quotes variables in memory usage script
476a11194 tests: Enable memory usage metrics tests
b568c7f7d tests/integration: Provide default value for KATA_HOST_OS
d6e96ea06 tests/integration: Use AzureLinux instead of Mariner
40c46c75e tests/integration: Perform yq install in run_tests()
d8b8f7e94 metrics: Enable launch tests time metrics
72fd562bd gha: release: Use a specific release of hub
0502354b4 checkmetrics: Add checkmetrics json for qemu
b481ef188 makefile: Add -buildvcs=false flag to go build
e94aaed3c ci_worker: Add checkmetrics ci worker for cloud hypervisor
917576e6f metrics: Add double quotes in all variables
cc8f0a24e metrics: Add checkmetrics to gha-run.sh for metrics CI
477856c1e gha: dragonball: Correctly propagate PATH update
1c211cd73 gha: Swap asset/release in build matrix
0152c9aba tools: Introduce `USE_CACHE` environment variable
2b5975689 tests: Build CLH with glibc for Mariner
80c78eadc tests: Use baked-in kernel with Mariner
532755ce3 tests: Build Mariner rootfs initrd
6a21e20c6 runtime: Add "none" as a shared_fs option
5681caad5 versions: Upgrade to Cloud Hypervisor v33.0
b2ce8b4d6 metrics: Add memory footprint tests to the CI
d035955ef doc: Add documentation for the virtualization reference architecture
0f454d0c0 gpu: Fixing typos for PCIe topology changes
6bb2ea819 packaging: Fix indentation of build.sh script at ovmf
0504bd725 agent: convert the `sl` macros to functions
0860fbd41 agent: convert the `ttrpc_error` macro to a function
0e5d6ce6d agent: convert the `is_allowed` macro to a function
f680fc52b agent: change `AGENT_CONFIG`'s lazy type to just `AgentConfig`
beb706368 metrics: Uniformity across function names
1f3e837e4 runtime-rs: fix build error on AArch64
6fd25968c runtime-rs: bugfix for direct volume path's validation.
415578cf3 docs: Add general README
bff4672f7 runtime-rs: support physical endpoint using device manager
32cba7e44 metrics: Fix retrieving hypervisor version on metrics
aa7946de4 checkmetrics: Add general checkmetrics documentation
2fac2b72f checkmetrics: Add checkmetrics makefile
e45899ae0 docs: Add time tests documentation reference
28130d3ce docs: Add boot time metrics documentation
0df2fc270 runtime-rs: add support spdk/vhost-user based volume.
17198089e vendor: Add vendor checkmetrics dependencies
f1dfea6e8 docs: Add metrics documentation reference
8330fb8ee gpu: Update unit tests
859359424 metrics: enable launch-times test on gha-run metrics script
c4ee601bf metrics: Add checkmetrics for kata metrics CI
e0d6475b4 gha: Don't automatically trigger CI
b535c7cbd tests: Enable running k8s tests on Mariner
71071bdb6 docs: Add general metrics documentation
610f7986e check: Relax the unrestricted_guest check when running in a VM
1b406b9d0 kata-ctl:Implement functionality to check host is capable of running VM
adf88eaa8 static-build: Remove kata-version parameter
09720babc docs: fix spelling of "crate"
7185afc50 gha: Fix gha actions
21294b868 packaging: Fix indentation in init.sh script
fad3ac9f5 metrics: install kata and launch-times test
4bbfcfaf1 tests: Move tests helper script to this repo
f152f0e8c metrics: Add launch-times to metrics tests
59510cfee runtime-rs: add support vfio device based volume
1e3b372bb runtime-rs: add support vfio device manager
6b0848930 gha: Fix format for run launchtimes metrics yaml
3cefa43e7 tests: Add json script for metrics tests
6a3710055 initramfs: Build dependencies as part of the Dockerfile
aa2380fdd packaging: Add infra to push the initramfs builder image
1c7fcc6cb packaging: Use existing image to build the initramfs
a43ea24df virtiofsd: Convert legacy `-o` sub-options to their `--` replacement
8e00dc694 virtiofsd: Drop `-o no_posix_lock`
2a15ad978 virtiofsd: Stop using deprecated `-f` option
c3043a6c6 tests: Add tests lib common script
b16e0de73 gha: Add base branch on SHA on pull requst
72f2cb84e gpu: Reset cold or hot plug after overriding
fbacc0964 gpu: PCIe topology, consider vhost-user-block in Virt
bc152b114 gha: ci-on-push: Run metrics tests
dad731d5c docs: Update Developer Guide
b11246c3a gpu: Various fixes for virt machine type
40101ea7d vfio: Added annotation for hot(cold) plug
8f0d4e261 vfio: Cleanup of Cold and Hot Plug
b5c4677e0 vfio: Rearrange the bus assignemnt
b1aa8c8a2 gpu: Moved the PCIe configs to drivers
55a66eb7f gpu: Add config to TOML
da42801c3 gpu: Add config settings tests for hot-plug
de39fb7d3 runtime: Add support for GPUDirect and GPUDirect RDMA PCIe topology
9318e022a gpu: Add CC relates configs
b7932be4b gpu: Add Arm64 Kernel Settings
211b0ab26 gpu: Update Kernel Config
5f103003d gpu: Update kernel building to the latest changes
35e4938e8 tools: Fix no-op builds
347385b4e runtime-rs: Enhance flexibility of virtio-fs config
21d227853 versions: Update firecracker version to 1.3.3
0e2379909 gha: Fix `stage` definition in matrix
ae2cfa826 doc: add vcpu handlint doc for runtime-rs
7b1e67819 fix(clippy): fix clippy error
67972ec48 feat(runtime-rs): calculate initial size
aaa96c749 feat(runtime-rs): modify onlineCpuMemRequest
d66f7572d feat(runtime-rs): clear cpuset in runtime side
a0385e138 feat(runtime-rs): update linux resource when stop_process
a39e1e6cd feat(runtime-rs): merge the update_cgroups in update_linux_resources
fa6dff9f7 feat(runtime-rs): support vcpu resizing on runtime side
8cb4238b4 packaging: Remove snap package
213773998 runtime-rs: update Cargo.lock
56d2ea9b7 kata-ctl: Refactor kernel module check
9f7a45996 gha: Add `rootfs-initrd-mariner` build target
f28a62164 gha: Add `cloud-hypervisor-glibc` build target
8fb7ab751 dragonball: introduce virtio-balloon device
7ed949497 dragonball: introduce virtio-mem device
776a15e09 runtime-rs: add support direct volume.
a8e0f51c5 dragonball: extend DeviceOpContext
abae11404 runtime-rs: refactor device manager implementation
210a15794 dragonball: avoid obtaining lock twice in create_stdio_console
69668ce87 tests: gha-run: Use correct env variable for repo
f487199ed gha: aks: Fix argument in call to gha-run.sh
f6afae9c7 packaging: Add rootfs-image-tdx-tarball target
f62b2670c config: Add root hash value and measure config to kernel params
008058807 kernel: Integrate initramfs into Guest kernel
28b264562 initramfs: Add build script to generate initramfs
5cb02a806 image-build: generate root hash as an separate partition for rootfs
31c0ad207 packaging: Add cryptsetup support in Guest kernel and rootfs
980d084f4 log-parser: Update log parser link at README
410bc1814 agent-ctl: fix the compile error
77519fd12 kata-ctl: Switch to slog logging; add --log-level, --json-logging args
aab603096 gha: aks: Extract `run` commands to a script
e4eb664d2 runtime-rs: update rust to 1.69.0
ed37715e0 runtime-rs: handle copy files when share_fs is not available
5f6fc3ed7 runtime-rs: bugfix: update Cargo.lock
1c6d22c80 gha: aks: Use short SHA in cluster name
3c1f6d36d readme: Update Kata Containers logo
388684113 readme: Add status badge for the "Publish Artefacts" job
26f752038 kata-deploy: Change how we get the Ubuntu k8s key
aebd3b47d gha: aks: Ensure host_os is used everywhere needed
0c8282c22 gha: aks: Add the host_os as part of the aks cluster's name
4b89a6bda release: Standardize kata static file name
9228815ad  kernel: Modify build-kernel.sh to accomodate for changes in version.yaml
03027a739 gha: Fix Mariner cluster creation
43e73bdef packaging: make BUILDER_REGISTRY configurable
ffe3157a4 dragonball: add arm64 patches for upcall
560442e6e dragonball: add vcpu_boot_onlined vector
e31772cfe dragonball: add support resize_vcpu on aarch64
64c764c14 dragonball: update dbs-boot to v0.4.0
fd9b41464 dragonball: update comment for init_microvm
af16d3fca gha: Unbreak CI and fix cluster creation step
5ddc4f94c runtime-rs/kata-ctl: Enhancement of DirectVolumeMount.
25d2fb0fd agent: fix the issue of exec hang with a backgroud process
4af4ced1a gha: Create Mariner host as part of k8s tests
eee7aae71 runtime-rs/sandbox_bindmounts: add support for sandbox bindmounts
557b84081 gha: aks: Wait longer to start running the tests
c04c872c4 gha: aks: Increase the timeout time
428041624 kata-deploy: Improve shim backup / restore
14c3f1e9f kata-deploy: Fix indentation on kata deploy merge script
0e47cfc4c runtime: sending SIGKILL to qemu
6a0035e41 doc: Update git commands
433b5add4 kubernetes: add agnhost command in pod yaml
c477ac551 dragonball: Convert VirtioNetDeviceMgr function to method
4659facb7 dragonball: Convert BlockDeviceMgr function to method
ee6deef09 dragonball: Remove virtio-net and vsock devices gracefully
2bda92fac netlink: Fix the issue of update_interface

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-31 09:02:07 +02:00
Jiang Liu
b3901c46d6 runtime-rs: ignore errors during clean up sandbox resources
Ignore errors during clean up sandbox resources as much as we can.

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-07-31 13:07:43 +08:00
Chelsea Mafrica
8a2c201719 docs: Update links for pods and kubelet
The links for pods and kubelets no longer work so update to new links
with relevant info.

Fixes #7487

Signed-off-by: Chelsea Mafrica <chelsea.e.mafrica@intel.com>
2023-07-29 00:38:35 +00:00
Gabriela Cervantes
5a1b5d3672 metrics: Add sysbench pod yaml
This PR adds the sysbench pod yaml for the sysbench performance test.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-28 20:03:15 +00:00
Gabriela Cervantes
ad413d1646 metrics: Add sysbench dockerfile
This PR adds sysbench dockerfile.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-28 19:58:10 +00:00
Gabriela Cervantes
1512560111 metrics: Add sysbench performance test
This PR adds the sysbench performance test for kata CI.

Fixes #7485

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-28 19:54:12 +00:00
Gabriela Cervantes
bee1a628bd metrics: Fix json result for tensorflow
This PR fixes the json result for tensorflow.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-28 17:02:16 +00:00
Jiang Liu
62e328ca5c runtime-rs: refine implementation of TaskService
Refine the implementation of TaskService by making handler_message() a
method.

Fixes: #7479

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-07-29 00:47:33 +08:00
Jiang Liu
458e1bc712 runtime-rs: make send_message() as an method of ServiceManager
Simplify the implementation by making send_message() a method of
ServiceManager.

Fixes: #7479

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-07-29 00:47:31 +08:00
Jiang Liu
1cc1c81c9a runtime-rs: fix possibe bug in ServiceManager::run()
Multiple instances of the task service may get registered by
ServiceManager::run(); fix it by making the operations symmetric.

Fixes: #7479

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-07-29 00:47:30 +08:00
Jiang Liu
1a5f90dc3f runtime-rs: simplify implementation of service crate
Simplify implementation of service crate.

Fixes: #7479

Signed-off-by: Jiang Liu <gerry@linux.alibaba.com>
2023-07-29 00:47:28 +08:00
Gabriela Cervantes
51cd99c927 metrics: Round axelnet and resnet results
This PR rounds the alexnet and resnet results in order to extract the
results properly.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-28 16:15:22 +00:00
Gabriela Cervantes
3b883bf5a7 metrics: Fix atoi invalid syntax
This PR avoids the strconv.Atoi parsing error that occurs when
retrieving the results from the json.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-28 16:15:22 +00:00
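For context, the strconv.Atoi failure mode described above typically comes from handing a float-formatted string to a consumer that expects an integer. A minimal shell sketch of that kind of rounding fix (the variable names are illustrative, not taken from the actual test script):

```shell
# A raw benchmark result may come back as a float string, e.g. "1234.7".
raw_result="1234.7"

# Round it to the nearest integer with printf so that downstream
# consumers which parse integers (e.g. Go's strconv.Atoi) do not choke
# on a decimal point.
rounded=$(printf "%.0f" "$raw_result")

echo "$rounded"
```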
Gabriela Cervantes
f9dec11a8f checkmetrics: Move checkmetrics to gha-run script
This PR moves checkmetrics to the gha-run script to gather
tensorflow information.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-28 16:15:22 +00:00
Gabriela Cervantes
53af71cfd0 checkmetrics: Add AlexNet value for qemu
This PR adds AlexNet value for qemu for checkmetrics.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-28 16:15:22 +00:00
Gabriela Cervantes
a435d36fe1 checkmetrics: Add Resnet value for qemu
This PR adds the Resnet value for qemu for checkmetrics.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-28 16:15:22 +00:00
Gabriela Cervantes
a79a3a8e1d checkmetrics: Add alexnet value for clh
This PR adds the AlexNet value for clh for checkmetrics.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-28 16:15:22 +00:00
Gabriela Cervantes
3c32875046 checkmetrics: Add Resnet value for clh
This PR adds the checkmetrics Resnet value for clh.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-28 16:15:22 +00:00
Gabriela Cervantes
08dfaa97aa metrics: General improvements to the tensorflow script
This PR adds general improvements to the tensorflow script.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-28 16:15:22 +00:00
Gabriela Cervantes
63b8534b41 metrics: Enable Tensorflow metrics for kata CI
This PR enables the Tensorflow benchmark metrics for kata CI.

Fixes #7395

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-28 16:15:22 +00:00
Aurélien
e8f8641988 Merge pull request #7132 from sprt/aks-volume-tests
tests: Add `k8s-volume` and `k8s-file-volume` tests to GHA CI
2023-07-28 08:58:03 -07:00
Fabiano Fidêncio
68b9acfd02 Merge pull request #7474 from GabyCT/topic/upboo
metrics: Update boot time for kata metrics
2023-07-28 17:55:43 +02:00
David Esparza
f89abcbad8 Merge pull request #7473 from GabyCT/topic/addfioreport
metrics: Add FIO report files for kata metrics
2023-07-28 09:37:21 -06:00
Fabiano Fidêncio
c9742d6fa9 Merge pull request #7411 from fidencio/topic/kata-deploy-create-runtime-classes
kata-deploy: Allow runtimeclasses to be created by the daemonset
2023-07-28 16:05:49 +02:00
Yuan-Zhuo
731e7c763f kata-ctl: add monitor subcommand for runtime-rs
The previous kata-monitor in golang could not communicate with runtime-rs
to gather metrics due to different sandbox addresses.
This PR adds a monitor subcommand to kata-ctl to gather metrics from
runtime-rs and from the monitor itself.

Fixes: #5017

Signed-off-by: Yuan-Zhuo <yuanzhuo0118@outlook.com>
2023-07-28 17:30:08 +08:00
Yuan-Zhuo
d74639d8c6 kata-ctl: provide the global TIMEOUT for creating MgmtClient
Several functions in kata-ctl need to establish a connection with runtime-rs through MgmtClient.
This PR provides a global TIMEOUT to avoid multiple definitions.

Fixes: #5017

Signed-off-by: Yuan-Zhuo <yuanzhuo0118@outlook.com>
2023-07-28 17:23:37 +08:00
Yuan-Zhuo
02cc4fe9db runtime-rs: add support for gather metrics in runtime-rs
1. Implemented metrics collection for the runtime-rs shim and the dragonball hypervisor.
2. Described the currently supported metrics in runtime-rs (docs/design/kata-metrics-in-runtime-rs.md).

Fixes: #5017

Signed-off-by: Yuan-Zhuo <yuanzhuo0118@outlook.com>
2023-07-28 17:16:51 +08:00
Fabiano Fidêncio
8353aae41a ci: k8s: Rework get_nodes_and_pods_info()
The amount of info we've added seems unnecessary, and ends up making
our lives even harder when trying to find errors.

Let's just rely on the kata-debug container to collect the needed info
for us.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-28 10:04:33 +02:00
Fabiano Fidêncio
6ad5d7112e ci: k8s: Do not gather node info before running the tests
It's been proven to not be useful, and ends up making things more
confusing due to the amount of logs printed.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-28 10:04:33 +02:00
Fabiano Fidêncio
5261e3a60c ci: k8s: Group messages to improve readability
Right now it's getting way too easy to get lost in the logs.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-28 10:04:33 +02:00
Fabiano Fidêncio
9cc6b5f461 ci: k8s: Get logs from kata-deploy
Let's make sure we can debug kata-deploy in case something goes wrong
during its execution.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-28 10:04:33 +02:00
Fabiano Fidêncio
9d285c6226 ci: k8s: Let kata-deploy take care of the runtimeclasses
By doing this we can test the change done for the daemonset. :-)

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-28 10:04:33 +02:00
Fabiano Fidêncio
87568ed985 gha: Test split out runtimeclasses are in sync with all-in-one file
This is needed in order to not lose track of what's been created and
what's been added here and there.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-28 10:04:33 +02:00
Fabiano Fidêncio
39192c6084 kata-deploy: Print variables passed to the script
This will help folks to debug / understand what's been passed to the
kata-deploy.sh script.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-28 10:04:33 +02:00
Fabiano Fidêncio
0e157be6f2 kata-deploy: Allow runtimeclasses to be created by the daemonset
Let's allow the daemonset to create the runtimeclasses, which removes
one manual step a kata-deploy user would otherwise have to take, and also
helps us in the Confidential Containers land, as the Operator can just
delegate it to this script.

Fixes: #7409

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-28 10:04:33 +02:00
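For reference, the runtimeclasses the daemonset can now create are standard Kubernetes RuntimeClass objects along these lines (the handler name and overhead values below are illustrative of the kata-deploy naming scheme, not an exhaustive or authoritative list):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-qemu
# The handler must match a runtime configured in the node's
# containerd/CRI-O configuration.
handler: kata-qemu
# Per-pod resource overhead accounted for the sandbox VM
# (values illustrative).
overhead:
  podFixed:
    memory: "160Mi"
    cpu: "250m"
```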
Fabiano Fidêncio
a274333248 kata-deploy: Change default values of DEBUG
This can be easily done as there was no official release with the
previous values.

The reason we're doing so is because when using `yq` to replace the
value, even when forcing `--tag '!!str' "yes"`, the content is placed
without quotes, causing errors in our CI.

While here, we're also removing the fallback value for DEBUG, as it is
**always** set in the kata-deploy.yaml file.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-28 09:50:39 +02:00
Fabiano Fidêncio
69535b8089 kata-deploy: runtimeclass: Split out entries
This will make things simpler to only create the handlers defined by the
kata-deploy user.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-28 09:43:45 +02:00
Fabiano Fidêncio
9e1710674a kata-runtimeClasses: Alphabetically sort the enrties
This will come in handy in the near future, as we want to have separate
entries for each file, while still keeping this one.

Having the entries sorted will make our lives easier to test those are
always in sync.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-28 09:43:45 +02:00
Zhongtao Hu
61a8eabf8e Merge pull request #7139 from openanolis/fix/devmanager
runtime-rs: change block index to 0
2023-07-28 14:04:19 +08:00
Aurélien Bombo
6222bd9103 tests: Add k8s-file-volume test
This imports the k8s-file-volume test from the tests repo and modifies
it slightly to set up the host volume on the AKS host.

Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
2023-07-27 14:07:55 -07:00
Aurélien Bombo
187a72d381 tests: Add k8s-volume test
This imports the k8s-volume test from the tests repo and modifies it
slightly to set up the host volume on the AKS host.

Fixes: #6566

Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
2023-07-27 14:06:43 -07:00
Gabriela Cervantes
0c84270357 metrics: Add boot time value for qemu
This PR adds the boot time value and limit for qemu.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-27 20:06:24 +00:00
Gabriela Cervantes
6520dfee37 metrics: Update boot time for kata metrics
This PR updates the boot time limit for kata metrics.

Fixes #7475

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-27 19:14:19 +00:00
Gabriela Cervantes
ff22790617 metrics: Update runtime and configuration paths
This PR updates the runtime and configuration paths for kata containers.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-27 17:14:03 +00:00
Gabriela Cervantes
a5d4e33880 metrics: Add compare virtiofsd dax script
This PR adds the compare virtiofsd dax script for kata metrics.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-27 16:53:50 +00:00
Gabriela Cervantes
5e937fa622 metrics: Update general FIO tests
This PR updates general FIO tests by adding the recent date of a change.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-27 16:47:17 +00:00
Gabriela Cervantes
b0bea47c53 metrics: Add makefile to report generator
This PR adds the makefile to report generator for the FIO test.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-27 16:42:11 +00:00
Gabriela Cervantes
73c57b9a19 metrics: Add FIO report files for kata metrics
This PR adds FIO report files for kata metrics.

Fixes #7472

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-27 16:39:35 +00:00
Chelsea Mafrica
e941b3a094 Merge pull request #7456 from alakesh/agent-fix-typo
agent: fix typo in constant
2023-07-27 09:31:24 -07:00
David Esparza
ba8a8fcbf2 Merge pull request #7442 from GabyCT/topic/addgofilesfio
metrics: Add FIO benchmark for metrics tests
2023-07-27 10:20:43 -06:00
Zhongtao Hu
c8fcd29d9b runtime-rs: use device manager to handle virtio-pmem
Use the device manager to handle the virtio-pmem device.

Fixes: #7119
Signed-off-by: Zhongtao Hu <zhongtaohu.tim@linux.alibaba.com>
2023-07-27 20:18:49 +08:00
Zhongtao Hu
901c192251 runtime-rs: support configure vm_rootfs_driver
Support configuring vm_rootfs_driver in the toml config.

Fixes: #7119
Signed-off-by: Zhongtao Hu <zhongtaohu.tim@linux.alibaba.com>
2023-07-27 20:12:53 +08:00
Zhongtao Hu
5d6199f9bc runtime-rs: use device manager to handle vm rootfs
Use the device manager to handle the vm rootfs; after attaching the
block device of the vm rootfs, we need to increase the index number.

Fixes: #7119
Signed-off-by: Zhongtao Hu <zhongtaohu.tim@linux.alibaba.com>
Signed-off-by: Chelsea Mafrica <chelsea.e.mafrica@intel.com>
Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
2023-07-27 20:12:45 +08:00
James O. D. Hunt
20f1f62a2a runtime-rs: change block index to 0
Change block index in SharedInfo to 0 for vda.

Fixes #7119

Signed-off-by: Chelsea Mafrica <chelsea.e.mafrica@intel.com>
Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
2023-07-27 20:11:44 +08:00
Chao Wu
ede1dae65d Merge pull request #7465 from fidencio/topic/fix-dragonball-static-check-runner-selector
gha: dragonball: Run only on the dragonball labeled machine
2023-07-27 10:19:26 +08:00
Gabriela Cervantes
662f87539e metrics: Add general FIO makefile
This PR adds a general FIO makefile for kata metrics.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-26 20:46:02 +00:00
Fabiano Fidêncio
f28af98ac6 Merge pull request #7453 from sprt/fix-ci-node-debugger
tests: Fix `k8s-job` test
2023-07-26 22:27:21 +02:00
Fabiano Fidêncio
8a22b5f075 Merge pull request #7439 from ManaSugi/fix/remove-unused-mut
agent,libs: Remove unused 'mut' keywords
2023-07-26 21:25:41 +02:00
Fabiano Fidêncio
9792ac49fe Merge pull request #7425 from jongwu/remove_mut
runtime-rs: remove unneeded 'mut' keywords
2023-07-26 21:24:40 +02:00
Fabiano Fidêncio
24564a8499 Merge pull request #7455 from sprt/local-tests
tests: QoL improvements for running tests locally
2023-07-26 21:23:43 +02:00
Aurélien Bombo
c5a87eed29 tests: gha: Add timeout to cluster creation
This has been intermittently taking a while lately so let's add a
timeout.

Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
2023-07-26 10:19:07 -07:00
Aurélien Bombo
6daeb08e69 tests: k8s: Clean up node debuggers after running
This deletes node debugger pods after execution since their presence may
affect tests that assume only test workloads pods are present.

For example, in `k8s-job` we wait for *any* pod to be in the `Succeeded`
state before proceeding, which causes failures.

Fixes: #7452

Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
2023-07-26 10:19:07 -07:00
Fabiano Fidêncio
3aa6c77a01 gha: dragonball: Run only on the dragonball labeled machine
Static checks for dragonball are landing on any of the self-hosted
runners, and the reason for that is because "self-hosted" was the label
selector used.

Let's use "dragonball" instead, as the machine has that label as well.

Fixes: #7464

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-26 18:15:04 +02:00
Gabriela Cervantes
37641a5430 metrics: Add example config for fio jobs
This PR adds example config for fio jobs.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-26 16:03:12 +00:00
Alakesh Haloi
314aec73d4 agent: fix typo in constant
It fixes a constant name to have the right spelling

Fixes: #7457
Signed-off-by: Alakesh Haloi <a_haloi@apple.com>
2023-07-26 00:06:34 -05:00
Aurélien Bombo
4703434b12 tests: k8s: Allow using custom resource group
This simply allows setting a custom resource group when debugging
locally, so as to prevent name collisions and not pollute the namespace.

Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
2023-07-25 15:45:44 -07:00
Aurélien Bombo
350f3f70b7 tests: Import common.bash in run_kubernetes_tests.sh
Not sure why this works in GHA, but the `info` call on line 65 would
fail locally.

Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
2023-07-25 15:45:44 -07:00
Aurélien Bombo
d7f04a64a0 tests: k8s: Leave runtimeclass_workloads/ alone
Makes it so that `setup.sh` doesn't make changes in
`runtimeclass_workloads/` directly. Instead we treat that as a template
directory and we use the new directory `runtimeclass_workloads_work/` as
a work dir.

This has two advantages:

 * Allows rerunning tests without the assumption that `setup.sh` must be
   idempotent. E.g. the `set_runtime_class()` step would break.
 * Doesn't pollute your git environment with a bunch of changes when
   developing.

Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
2023-07-25 15:45:44 -07:00
Aurélien Bombo
bdde6aa948 tests: k8s: Split deployment and testing commands
This splits deploying Kata and running the tests into separate commands
to make it possible to rerun tests locally without having to redeploy
Kata each time.

Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
2023-07-25 15:44:46 -07:00
Aurélien Bombo
91a0b3b406 tests: aks: Simply delete cluster when cleaning up
If we're going to delete the cluster anyway, no need to call
kata-cleanup.

Fixes: #7454

Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
2023-07-25 15:44:46 -07:00
Gabriela Cervantes
3c1044d9d5 metrics: Update FIO paths for k8s runner
This PR updates the FIO paths for k8s runner.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-25 20:50:03 +00:00
Eric Ernst
5385ddc560 Merge pull request #7365 from alakesh/symlink-fix
agent: exclude symlinks from recursive ownership change
2023-07-25 11:27:48 -07:00
Gabriela Cervantes
6177a0db3e metrics: Add env files for FIO
This PR adds the env files for FIO for kata metrics.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-25 17:48:45 +00:00
Gabriela Cervantes
a45900324d metrics: Add fio exec
This PR adds fio exec for the FIO benchmark.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-25 17:36:08 +00:00
Gabriela Cervantes
ea198fddcc metrics: Add FIO runner k8s
Add program to execute FIO workloads using k8s.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-25 17:34:29 +00:00
Gabriela Cervantes
8f7ef41c14 metrics: Add FIO vendor code
This PR adds the FIO vendor code.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-25 17:24:29 +00:00
Gabriela Cervantes
6293c17bde metrics: Add FIO benchmark for metrics tests
This PR adds the FIO benchmark scripts and resources for the metrics
tests section.

Fixes #7441

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-25 16:36:33 +00:00
Fabiano Fidêncio
cdf04e5018 Merge pull request #7437 from jepio/fix-sev-kernel-cache
cache: kernel: Fix kernel caching
2023-07-25 18:10:03 +02:00
GabyCT
7a3b55ce67 Merge pull request #7432 from ManaSugi/runk/doc-docker
runk: Add Docker guide to README
2023-07-25 09:56:02 -06:00
GabyCT
c1bd527163 Merge pull request #7430 from GabyCT/topic/fixjson
metrics: General improvements to json.bash script
2023-07-25 09:45:53 -06:00
Fabiano Fidêncio
6efd684a46 Merge pull request #7408 from fidencio/topic/kata-deploy-add-SHIMS-and-SHIM_DEFAULT-as-env
kata-deploy: Allow shim creation based on what's passed to the daemonset
2023-07-25 16:56:46 +02:00
Fabiano Fidêncio
5b82268d2c Merge pull request #7436 from jepio/vfio-gha
gha: ci: Add skeleton of vfio job
2023-07-25 14:44:04 +02:00
Manabu Sugimoto
ff4cfcd8a2 runk: Add Docker guide to README
`runk` can launch containers using Docker, so add the guide
to its README.

```sh
$ sudo dockerd --experimental --add-runtime="runk=/usr/local/bin/runk"
$ sudo docker run -it --rm --runtime runk busybox echo hello runk
hello runk
```

Fixes: #7431

Signed-off-by: Manabu Sugimoto <Manabu.Sugimoto@sony.com>
2023-07-25 20:10:49 +09:00
Jeremi Piotrowski
c8ac56569a cache: kernel: Harmonize commit with fetching side
kata-deploy-binaries.sh uses the last commit in
tools/packaging/static-build/kernel for its version check, while the cache
generation uses tools/packaging/kernel. Use tools/packaging/static-build/kernel
as $kata_config_version is already part of the version string and covers any
changes to tools/packaging/kernel.

Fixes: #7403
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
2023-07-25 12:23:05 +02:00
Jeremi Piotrowski
81775ab1b3 cache: kernel: Fix SEV kernel caching
The SEV kernel cache calls create_cache_asset() twice, once for the kernel and
once for modules. Both calls need to use the same version string, otherwise the
second call overwrites the "latest" file of the first one and the cache is not
used.

Fixes: #7403
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
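The clobbering of the "latest" file can be sketched like this (the file name comes from the commit; the contents are illustrative):

```sh
cd "$(mktemp -d)"
echo "kernel-6.1-sev-abc123"  > latest   # first create_cache_asset() call (kernel)
echo "modules-6.1-sev-xyz789" > latest   # second call, different version string
cat latest   # only the second value survives, so the kernel lookup misses the cache
```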
2023-07-25 11:58:19 +02:00
Jeremi Piotrowski
717f775f30 gha: ci: Add skeleton of vfio job
This job will run on a nested virt capable Azure VM (improving test
concurrency). This is just a placeholder while we adapt the test to GHA.

Fixes: #6555
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
2023-07-25 11:13:04 +02:00
Manabu Sugimoto
b9f100b391 agent,libs: Remove unused 'mut' keywords
Remove unused `mut` because the agent compilation fails
when the Rust compiler is >= 1.71. This is related to #7425

Fixes: #7438

Signed-off-by: Manabu Sugimoto <Manabu.Sugimoto@sony.com>
2023-07-25 17:41:08 +09:00
Fabiano Fidêncio
a56f96bb2b kata-deploy: Allow shim creation based on what's passed to the daemonset
Instead of hardcoding shims as part of the script, let's ensure we can
allow them to be created based on environment variables passed to the
daemonset.

This change brings no functionality change as the default values in the
daemonset are exactly what has been used as part of the scripts.

Fixes: #7407

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
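A minimal sketch of the idea — the SHIMS variable name comes from the PR title, while the default list and loop body here are illustrative:

```sh
# Derive the shims to configure from an env var set on the daemonset,
# falling back to a default list when the var is not provided.
SHIMS="${SHIMS:-clh fc qemu}"
for shim in ${SHIMS}; do
    echo "configuring runtime class: kata-${shim}"
done
```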
2023-07-25 08:30:00 +02:00
Fabiano Fidêncio
5ce0b4743f Merge pull request #7382 from zvonkok/vfio-ap-debug
s390x: Fixing device.Bus assignment
2023-07-25 08:26:25 +02:00
David Esparza
b11d618a3f Merge pull request #7413 from fidencio/topic/release-publish-builder-images
release: Mention the container images used to build the project
2023-07-24 15:46:31 -06:00
Fabiano Fidêncio
56fdeb1247 Merge pull request #7417 from fidencio/topic/kata-deploy-binaries-cached-kernel-fix
kata-deploy-binaries: kernel_cache: Take module_dir into account
2023-07-24 22:26:09 +02:00
Gabriela Cervantes
4a5ab38f16 metrics: General improvements to json.bash script
This PR adds general improvements, like putting the `function` keyword
before each function name and being consistent in how we declare
variables, in order to have uniformity across the metrics scripts.

Fixes #7429

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-24 16:51:38 +00:00
Fabiano Fidêncio
d4eba36980 kata-deploy-binaries: kernel_cache: Take module_dir into account
`module_dir` has been passed to the function but was never assigned to a
var, leading to errors when trying to use it.

Fixes: #7416

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-24 18:19:13 +02:00
Fabiano Fidêncio
b7c9867d60 release: Mention the container images used to build the project
This is a small step towards build reproducibility.

Fixes: #7412

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-24 18:01:57 +02:00
Wainer Moschetta
2e9853c761 Merge pull request #7427 from fidencio/topic/gha-port-nydus-tests-follow-up-1
ci: nydus: Fix typo in "source"
2023-07-24 11:20:05 -03:00
Fabiano Fidêncio
7c4b597816 ci: nydus: Fix typo in "source"
We should source from `nydus_dir`, instead of `cri_containerd_dir`, and
that was a leftover from fb4f7a002c.

Fixes: #6543

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-24 14:55:09 +02:00
Fabiano Fidêncio
589672d510 Merge pull request #7426 from fidencio/topic/gha-port-nydus-tests
gha: ci: Add no-op nydus tests to our CI
2023-07-24 13:56:57 +02:00
Fabiano Fidêncio
6a680e241b gha: ci: Add placeholder for the nydus tests as part of the CI
This will triger the nydus tests, but as they currently are they'll just
return "okay" without actually executing.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-24 13:37:36 +02:00
Fabiano Fidêncio
fb4f7a002c gha: nydus: Add a no-op GHA for nydus
This newly added GHA does nothing, is not even triggered, and it's just
a placeholder that we'll grow in the next commits / PRs, so we can
actually start running the nydus tests as part of our CI.

Fixes: #6543

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-24 13:37:33 +02:00
Fupan Li
0ae987973b Merge pull request #7367 from openanolis/chao/migrate_dragonball_sandbox
Dragonball: migrate dragonball-sandbox crates to Kata
2023-07-24 17:52:11 +08:00
Fabiano Fidêncio
4a207a16f9 gha: nydus: Bring tests as they are from the tests repo
Let's bring the nydus tests, without any kind of modification, from the
tests repo.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-24 10:56:41 +02:00
Jianyong Wu
2c8f83424d runtime-rs: remove unneeded 'mut' keywords
These unneeded 'mut' keywords break builds with Rust 1.71.0. Remove them.

Fixes: #7424
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
2023-07-24 08:47:15 +00:00
Zvonko Kaiser
1fc715bc65 s390x: Add AP Attach/Detach test
Now that we have proper AP device support, add a
unit test verifying the correct Attach/Detach of AP devices.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2023-07-23 13:44:19 +00:00
Fabiano Fidêncio
e1a4040a6c Merge pull request #7326 from fidencio/topic/gha-ci-add-cri-containerd-tests
ci: gha: Add cri-containerd tests (but still do not enable them)
2023-07-21 19:29:38 +02:00
Fabiano Fidêncio
6a59e227b6 Merge pull request #7399 from fidencio/topic/add-kata-debug
packaging/tools: Add kata-debug and use it as part of our CI
2023-07-21 17:05:27 +02:00
Fabiano Fidêncio
e91f5edba0 ci: cri-containerd: Fix default typo for testContainerStart()
It must be `${1:-0}`, instead of `${1-0}`.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
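The difference matters because `${1-0}` only substitutes the default when `$1` is unset, while `${1:-0}` also substitutes when it is set but empty — a quick sketch, not the actual test code:

```sh
set -- ""                      # simulate a function called with an empty first argument
echo "dash only:  <${1-0}>"    # prints <>  : $1 is set (to empty), no substitution
echo "colon dash: <${1:-0}>"   # prints <0> : empty also triggers the default
```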
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
8b8aef09af ci: cri-containerd: Temporarily disable TestContainerSwap
The test is currently failing with GHA, and I don't think it makes sense
to block all the other tests from getting merged while that's the case.

For now, let's disable it and re-enable it as soon as we have it
passing.

Reference: https://github.com/kata-containers/kata-containers/issues/7410

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
56767001cb ci: cri-containerd: Add namespace / uid to the pods
Otherwise crictl will fail to remove them with:
```
getting sandbox status of pod "$pod": metadata.Name, metadata.Namespace
or metadata.Uid is not in metadata "..."
```

A huge shout out to Steven Horsman for helping to debug this one.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
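A sketch of the sandbox config with the metadata fields crictl requires; the field names match CRI sandbox metadata, while the values here are illustrative:

```sh
cat > /tmp/pod-config.json <<'EOF'
{
  "metadata": {
    "name": "test-pod",
    "namespace": "default",
    "uid": "test-pod-uid-0",
    "attempt": 1
  }
}
EOF
# sudo crictl runp /tmp/pod-config.json   # later removal needs namespace/uid above
```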
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
a84773652c ci: cri-containerd: Always use sudo to call crictl
Otherwise we may get the following error:
```
time="2023-07-15T21:12:13Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: permission denied\""
```

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
99ba86a1b2 ci: cri-containerd: Add /usr/local/go/bin to the PATH
Otherwise go is not picked up.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
7f3b309997 ci: cri-containerd: Add function before each function
We've been doing this for all files moved to this repo.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
fde22d6bce ci: cri-containerd: Assume podman is always used
For this set of tests, we'll always be using podman in order to avoid
having containerd pulled in by docker.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
9465a04963 ci: cri-containerd: Adapt "source ..." to this repo
Let's adapt what we "source" to the kata-containers repo.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
df8d144119 ci: cri-containerd: Remove CI variable
We always want to run the tests using as much debug as possible.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
f90570aef0 ci: cri-containerd: Remove unused runc_runtime_bin
The variable is not used anywhere in our tests.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
c3637039f4 ci: cri-containerd: Remove KILL_VMM_TEST env var
We don't need the env var; we just need to restrict the test according
to the KATA_HYPERVISOR used, as right now it's very specific to QEMU.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
bc4919f9b2 ci: cri-containerd: Always run shim-v2 tests
We only have shim-v2 as the runtime type, so we always need to run tests
using it. :-)

We had to adjust the script in order to properly run the tests with the
current logic.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
f9e332c6db ci: cri-containerd: Stop cloning containerd
It's already done as part of the install_dependencies()

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
cfd662fee9 ci: cri-containerd: Remove unused SNAP_CI var
We don't support SNAP anymore, thus we can remove the var.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
d36c3395c0 ci: cri-containerd: Update copyright
As we're touching the file already, let's update its Copyright info.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
b5be8a4a8f ci: cri-containerd: Move integration-tests.sh as it was
Let's move the `integration/containerd/cri/integration-tests.sh` file
from the tests repo to this one.

The file has been moved as it is, it's not used, and in the following
commits we'll clean it up before actually using it.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
f2e00c95c0 ci: cri-containerd: Populate install_dependencies()
Let's install all the dependencies needed for running the
`cri-containerd` tests.

The list of dependencies we have are:
* From the system
  - build-essential
  - jq
  - podman-docker
* From our own repo
  - yq
  - go
* From GitHub projects
  - containerd
  - cri-tools

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
8979552527 versions: Add "latest" field for cri-tools
As we don't want to disrupt what we have on the `tests` repo, let's
create a "latest" entry and use that for the GitHub actions tests.

Once we deprecate the `tests` repo we can decide whether we want to
stick to using "latest" or switch back to "version".

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
1bbcbafa67 ci: Add clone_cri_container()
This function will simply clone containerd repo, specifically on a tag
we want to use to test.

This can be expanded for different projects, and it will be the case as
soon as we grow the tests.  But, for now, let's keep it simple.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
f66c68a2bf ci: Add install_cri_tools()
This function will install cri-tools in the host, and soon enough (as
part of this PR) we'll be using it to install cri-tools as part of the
cri-containerd tests.

I've decided to have this as part of the `common.bash` as other tests
that will be added in the future will require cri-tools to be installed
as well.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
4dd828414f ci: Add install_cri_containerd()
This function will install cri-containerd in the host, and soon enough
(as part of this PR) we'll be using it to install cri-containerd as part
of the cri-containerd tests.

I've decided to have this as part of the `common.bash` as other tests
that will be added in the future will require cri-containerd to be
installed as well.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
ad47d1b9f8 ci: Add download_github_project_tarball()
This function will help us get the tarball, from a GitHub project,
that we're going to use as part of our tests.

Right now this is not used anywhere, but it'll soon enough (as part of
this series) be used to download the cri-containerd / cri-tools / cni
tarballs.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
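The shape of such a download can be sketched as follows — the GitHub release URL pattern is standard, but the project, version, and asset names here are purely illustrative:

```sh
version="v1.27.0"
project="kubernetes-sigs/cri-tools"
asset="crictl-${version}-linux-amd64.tar.gz"
url="https://github.com/${project}/releases/download/${version}/${asset}"
echo "${url}"
# curl -fsSL -o "${asset}" "${url}"   # the actual download step
```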
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
788c562a95 ci: Add get_latest_patch_release_from_a_github_project()
This function will help us to get the latest patch release from a
GitHub project.

The idea behind this function is that we don't have to keep updating
versions.yaml that frequently (or worse, have it outdated as it
currently is), and always test against the latest patch release of a
given project's version that we care about.

Although right now this is not used anywhere, this will be used with the
coming cri-containerd tests, which will be part of this series.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
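Picking the newest patch release out of a list of tags boils down to a version sort; `sort -V` is one way to sketch it (the tags below are illustrative, and the real helper would first query the GitHub API for the release list):

```sh
# Version-aware sort puts the highest patch release last.
printf '%s\n' v1.7.0 v1.7.2 v1.7.1 | sort -V | tail -n 1   # -> v1.7.2
```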
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
6742f3a898 ci: Use function before each install_go.sh function
We've been doing this for all files moved to this repo.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
5eacecffc3 ci: Adjust paths for install_go.sh
Let's adjust paths for what we source and the scripts we call, after
moving from the tests repo to this one.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
8ed1595f96 ci: Update copyright for install_go.sh
As we're touching the file already, let's update its Copyright info.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
6123d0db2c ci: Move install_go.sh as it was
Let's move `.ci/install_go.sh` file from the tests repo to this one.

The file has been moved as it is, it's not used, and in the following
commits we'll clean it up before actually using it.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
8653be71b2 ci: Do not take cross-build into consideration for kata-arch.sh
Right now we'd need to import lib.sh just in order to get cross-build
information for rust, and it seems a little bit premature to do so at
this stage and only for rust.

Let's skip it and keep this transition simple.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
6a76bf92cb ci: Fix style / indentation of kata-arch.sh
We've been using:
```
function foo() {
}
```

instead of
```
function foo()
{
}
```

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
72743851c1 ci: Add function before each kata-arch.sh function
We've been doing this for all files moved to this repo.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
9f6d4892c8 ci: Update copyright for kata-arch.sh
As we're touching the file already, let's update its Copyright info.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
6f73a72839 ci: Move kata-arch.sh as it was
Let's move `.ci/kata-arch.sh` file from the tests repo to this one.

The file has been moved as it is, it's not used, and in the following
commits we'll clean it up before actually using it.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
3615d73433 ci: Add get_from_kata_deps()
First of all, I'm 100% aware that I'm duplicating this function here as
I've copied it from the packaging stuff, and I'm not exactly proud of
that.

However, right now it seems a little bit premature to combine that set
of scripts with this set of scripts in a single one and make them used
by both pieces of our project.

Anyway, this function helps get information from the
`versions.yaml` file, and it'll be used as part of the cri-containerd
tests and a few others in the future.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
34779491e0 gha: kubernetes: Avoid declaring repo_root_dir
This is already declared as part of the `common.bash` file, so let's
just make sure we use it from there.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
f3738beaca tests: Use $HOME/go as fallback for $GOPATH
Considering that someone may want to run the tests locally, we shouldn't
rely on having GITHUB_WORKSPACE exported, and fallback to $HOME/go if
needed.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
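The fallback itself is a one-liner with bash default expansion — a minimal sketch; the real script may differ:

```sh
unset GITHUB_WORKSPACE                     # simulate a local run outside GHA
GOPATH="${GITHUB_WORKSPACE:-${HOME}/go}"   # :- also covers the set-but-empty case
echo "GOPATH=${GOPATH}"
```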
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
b87ed27416 tests: Move ensure_yq to common.bash
As this function will be used by different scripts, let's move it to a
common place.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Jeremi Piotrowski
124e390333 tests: common: Fix quoting when globbing
When the glob star is inside quotes, there is only one iteration of the loop
and b holds all matches at once. Move the glob out of the quotes so that we
actually iterate over matched paths.

Fixes: #6543
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
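A minimal reproduction of the bug (the paths are illustrative):

```sh
mkdir -p /tmp/globdemo && touch /tmp/globdemo/a /tmp/globdemo/b

for b in "/tmp/globdemo/*"; do   # quoted: ONE iteration, b is the literal pattern
    echo "quoted: ${b}"
done

for b in /tmp/globdemo/*; do     # unquoted: one iteration per matched path
    echo "unquoted: ${b}"
done
```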
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
db77c9a438 tests: Make install_kata take care of the links
It makes the kata-containers installation more complete.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
13715db1f8 tests: Do not call install_check_metrics when installing kata
The `install_kata` function was moved from the metrics' `gha-run.sh`
file to the `common.bash` in the commit 3ffd48bc16, but I didn't notice
that it brought with it a call to `install_check_metrics`, which is
totally unrelated to installing Kata Containers.

Let's remove the call so the function is a little bit less specific, and
move the call to install_check_metrics to the metrics `gha-run.sh` file.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 16:54:27 +02:00
Fabiano Fidêncio
e149a3c783 Merge pull request #7404 from fidencio/topic/cache-consider-changes-in-the-scripts-used-to-build-the-kernel
cache: kernel: Consider changes in tools/packaging/kernel
2023-07-21 15:05:01 +02:00
Fabiano Fidêncio
630634c5df ci: k8s: Group logs to make them easier to read
Otherwise it becomes really hard to find the info you're looking for.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
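GitHub Actions collapses everything between `::group::` and `::endgroup::` markers into a foldable log section; a sketch (the command inside the group is illustrative):

```sh
echo "::group::Node information"
uname -a
echo "::endgroup::"
```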
2023-07-21 14:05:30 +02:00
Fabiano Fidêncio
228b30f31c ci: k8s: Gather node info during the cleanup
This will make our lives easier to debug issues with the CI.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 14:05:30 +02:00
Fabiano Fidêncio
81f99543ec ci: k8s: Cleanup cluster before deleting it
This will help us on two fronts:
* catching possible issues related to kata-deploy cleanup
* do more (like, in the future, collect logs) after the tests run

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 14:05:30 +02:00
Fabiano Fidêncio
38a7b5325f packaging/tools: Add kata-debug
kata-debug is a tool used as part of the Kata Containers CI to gather
information from the node, in order to help debug issues with Kata
Containers.

As one can imagine, this can be expanded and used outside of the CI context,
and any contribution back to the script is very much welcome.

The resulting container is stored at the [Kata Containers quay.io
space](https://quay.io/repository/kata-containers/kata-debug) and can
be used as shown below:
```sh
kubectl debug $NODE_NAME -it --image=quay.io/kata-containers/kata-debug:latest
```

Fixes: #7397

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 14:05:30 +02:00
Fabiano Fidêncio
a0fd41fd37 Merge pull request #7406 from fidencio/topic/merge-tarball-fix-version-yaml-not-found
kata-deploy: Properly get the path of the versions.yaml file
2023-07-21 14:04:18 +02:00
Fabiano Fidêncio
ae6e8d2b38 kata-deploy: Properly get the path of the versions.yaml file
We need to correctly get the full path of the versions.yaml file as part
of the merge-builds.sh script, as we do a `pushd` there and that leads
to a failure merging the artefacts, because the `versions.yaml` file does
not exist in that path.

Fixes: #7405

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
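The fix boils down to resolving the path to an absolute one before any `pushd` changes the working directory — a sketch with illustrative paths:

```sh
cd "$(mktemp -d)"
touch versions.yaml
versions_yaml="$(realpath versions.yaml)"   # resolve BEFORE changing directory
pushd /tmp >/dev/null
ls "${versions_yaml}"                       # absolute path still resolves here
popd >/dev/null
```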
2023-07-21 12:02:11 +02:00
Fabiano Fidêncio
309e232553 cache: kernel: Consider changes in tools/packaging/kernel
Any change in the script used to build the kernel should invalidate the
cache.

Fixes: #7403

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-21 11:48:29 +02:00
GabyCT
f95a7896b1 Merge pull request #7394 from fidencio/topic/ship-VERSIOB-and-versions.yaml-as-part-of-release-tarball
kata-deploy: Add VERSION and versions.yaml to the final tarball
2023-07-20 14:38:21 -06:00
GabyCT
14025baafe Merge pull request #7376 from GabyCT/topic/addcray
metrics: Add C-Ray performance test
2023-07-20 14:37:53 -06:00
GabyCT
b629f6a822 Merge pull request #7363 from GabyCT/topic/enabletensorflow
metrics: enable TensorFlow benchmark to be run on gha
2023-07-20 13:36:55 -06:00
Fabiano Fidêncio
59fdd69b85 kata-deploy: Add VERSION and versions.yaml to the final tarball
Let's make things simpler to figure out which version of Kata
Containers has been deployed, and also which artefacts come with it.

This will help us immensely in the future, for the TEEs use case, so we
can easily know whether we can deploy a specific guest kernel for a
specific host kernel.

Fixes: #7394

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-20 18:33:14 +02:00
Fabiano Fidêncio
5dddd7c5d1 release: Upload versions.yaml as part of the release
Although this file is far from being an SBOM, it'll help folks
easily visualise which components are part of a release, and even have
SBOMs generated from it.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-20 18:31:21 +02:00
Gabriela Cervantes
bad3ac84b0 metrics: Rename C-Ray to cpu performance tests
This PR renames C-Ray tests to cpu category.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-20 15:56:02 +00:00
Fabiano Fidêncio
87d99a71ec versions: Remove "kernel-experimental"
We've not been using nor shipping this kernel for a very long time.

Regardless, we're leaving behind the logic in the kernel scripts to
build it, in case it becomes necessary in the future.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-20 17:14:22 +02:00
Zvonko Kaiser
545de5042a vfio: Fix tests
Now, with more elaborate checking of cold/hot plug ports,
we needed to update some of the tests.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2023-07-20 13:42:44 +00:00
Zvonko Kaiser
62aa6750ec vfio: Added better handling of VFIO Control Devices
Depending on the vfio_mode we need to mount the
VFIO control device additionally into the container.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2023-07-20 13:42:42 +00:00
Fabiano Fidêncio
fe07ac662d Merge pull request #7387 from GabyCT/topic/fixmemoryinsidec
metrics: Add function to memory inside container script
2023-07-20 10:06:15 +02:00
Zvonko Kaiser
dd422ccb69 vfio: Remove obsolete HotplugVFIOonRootBus
Removing HotplugVFIOonRootBus which is obsolete with the latest PCI
topology changes, users can set cold_plug_vfio or hot_plug_vfio either
in the configuration.toml or via annotations.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2023-07-20 07:25:40 +00:00
Zvonko Kaiser
114542e2ba s390x: Fixing device.Bus assignment
The device.Bus was reset if a specific combination of
configuration parameters was not met. With the new
PCIe topology this should not happen anymore.

Fixes: #7381

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2023-07-20 07:24:26 +00:00
Alakesh Haloi
371a118ad0 agent: exclude symlinks from recursive ownership change
currently when fsGroup is used with direct-assign, kata agent
recursively changes ownership and permission for each file including
symlinks. However the problem with symlinks is, the permission of
the symlink itself may not be same as the underlying file. So while
doing recursive ownership and permission changes we should skip
symlinks.

Fixes: #7364
Signed-off-by: Alakesh Haloi <a_haloi@apple.com>
2023-07-19 20:42:55 -07:00
Gabriela Cervantes
e64edf41e5 metrics: Add tensorflow function in gha-run script
This PR adds the tensorflow function in gha-run script in order to
be triggered in the gha.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-19 21:31:51 +00:00
Gabriela Cervantes
67a6fff4f7 metrics: Enable tensorflow benchmark on gha
This PR enables the TensorFlow benchmark on gha for the kata metrics CI.

Fixes #7362

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-19 21:31:51 +00:00
GabyCT
c3f21c36f3 Merge pull request #7388 from dborquez/revert-commit-broke-checkmetrics-baseline-values
Revert "metrics: Replace backslashes used to escape double quoted key in jq expr"
2023-07-19 14:36:16 -06:00
David Esparza
01450deb6a Revert "metrics: Replace backslashes used to escape double quoted key in jq expr."
This reverts commit 468f017e21.

Fixes: #7385

Signed-off-by: David Esparza <david.esparza.borquez@intel.com>
2023-07-19 10:07:11 -06:00
Gabriela Cervantes
8430068058 metrics: Add function to memory inside container script
This PR adds the `function` keyword before each function in the memory
inside container script in order to have uniformity across the script.

Fixes #7386

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-19 16:00:53 +00:00
Chao Wu
bbd3c1b6ab Dragonball: migrate dragonball-sandbox crates to Kata
In order to make it easier for developers to contribute to Dragonball,
we decided to migrate all dragonball-sandbox crates to Kata.

fixes: #7262

Signed-off-by: Chao Wu <chaowu@linux.alibaba.com>
2023-07-19 19:41:57 +08:00
Chao Wu
7153b51578 Merge pull request #7372 from fidencio/topic/bump-virtiofsd-to-v1.7.0
versions: Bump virtiofsd to v1.7.0
2023-07-19 10:51:49 +08:00
GabyCT
8c662916ab Merge pull request #7377 from dborquez/add_verbosity_to_blogbench
metrics: stop hypervisor and shim at init_env stage
2023-07-18 15:57:54 -06:00
Fabiano Fidêncio
5f7da301fd Merge pull request #7378 from fidencio/topic/ci-k8s-fix-source-path
ci: k8s: Adapt "source ..." to the new location of gha-run.sh
2023-07-18 22:30:55 +02:00
Fabiano Fidêncio
fad801d0fb ci: k8s: Adapt "source ..." to the new location of gha-run.sh
This is a follow-up to 2ee2cd307b, which
changed the location of gha-run.sh

Fixes: #7373

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-18 21:26:41 +02:00
David Esparza
55e2f0955b metrics: stop hypervisor and shim at init_env stage
This PR kills the hypervisor and the kata shim in the
init_env stage prior to launching any metric test.
Additionally this PR adds info messages in the main blocks
of the blogbench test to help in debugging.

Fixes: #7366

Signed-off-by: David Esparza <david.esparza.borquez@intel.com>
2023-07-18 12:05:29 -06:00
Gabriela Cervantes
556e663fce metrics: Add disk link to general metrics README
This PR adds the disk link information to the general metrics README.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-18 16:42:35 +00:00
Gabriela Cervantes
98c1217093 metrics: Add C-Ray README
This PR adds the C-Ray documentation at the README file.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-18 16:35:54 +00:00
Gabriela Cervantes
8e7d9926e4 metrics: Add C-Ray Dockerfile
This PR adds the C-Ray Dockerfile for kata metrics.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-18 16:33:55 +00:00
Gabriela Cervantes
e2ee769783 metrics: Add C-Ray performance test
This PR adds C-Ray performance test in order to be part of the kata
metrics CI.

Fixes #7375

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-18 16:32:23 +00:00
Fabiano Fidêncio
2011e3d72a Merge pull request #7374 from fidencio/topic/ci-tdx-adjust-kubeconfig-path
ci: Move `tests/integration/gha-run.sh` to `tests/integration/kubernetes/` ... and also remove KUBECONFIG from the tdx envs
2023-07-18 17:32:57 +02:00
Fabiano Fidêncio
8e09e04f48 Merge pull request #6788 from jepio/kernel-update-6.1-lts
versions: Update kernel to version v6.1.x
2023-07-18 17:29:21 +02:00
Chao Wu
935432c36d Merge pull request #7352 from justxuewei/exec-hang
agent: Fix exec hang issues with a background process
2023-07-18 23:02:18 +08:00
Fabiano Fidêncio
2ee2cd307b ci: k8s: Move gha-run.sh to the kubernetes dir
The file belongs there, as it's only used for k8s related tests.

Fixes: #7373

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-18 15:45:06 +02:00
Fabiano Fidêncio
88eaff5330 ci: tdx: Adjust KUBECONFIG
We don't need to export KUBECONFIG there. Let's just make sure we have
the server correctly set up and avoid doing that.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-18 15:39:52 +02:00
Jeremi Piotrowski
c09e268a1b versions: Downgrade SEV(-SNP) kernel back to v5.19.x
CC-GPU seems to have issues with v6.1, so downgrade the kernels used for
SEV-SNP to a known-working version. It is worth mentioning that TDX is also
still on 5.19.

Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
2023-07-18 15:29:46 +02:00
Fabiano Fidêncio
25d80fcec2 Merge pull request #6993 from zvonkok/kata-agent-init-mount
agent: Ignore already mounted dev/fs/pseudo-fs
2023-07-18 14:11:44 +02:00
Fabiano Fidêncio
4687f2bf9d Merge pull request #7369 from fidencio/topic/gha-ci-bring-tdx-back
ci: k8s: Bring TDX tests back
2023-07-18 13:28:33 +02:00
Fabiano Fidêncio
6a7a323656 versions: Bump virtiofsd to v1.7.0
https://gitlab.com/virtio-fs/virtiofsd/-/releases/v1.7.0 was released
today.

Fixes: #7371

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-18 12:33:13 +02:00
Fabiano Fidêncio
ac5f5353ba ci: k8s: Bring TDX tests back
Now that we have a new TDX machine plugged into our CI, let's re-enable
the TDX tests.

Fixes: #7368

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-18 10:33:43 +02:00
Jeremi Piotrowski
950b89ffac versions: Update kernel to version v6.1.38
Kernel v6.1.38 is the current latest LTS version, switch to it.  No
patches should be necessary. Some CONFIG options have been removed:

- CONFIG_MEMCG_SWAP is covered by CONFIG_SWAP and CONFIG_MEMCG
- CONFIG_ARCH_RANDOM is unconditionally compiled in
- CONFIG_ARM64_CRYPTO is covered by CONFIG_CRYPTO and ARCH=arm64

Fixes: #6086
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
2023-07-18 10:04:21 +02:00
GabyCT
7729d82e6e Merge pull request #7360 from GabyCT/topic/updategraldoc
metrics: Update machine learning documentation
2023-07-17 15:30:13 -06:00
Fabiano Fidêncio
26d525fcf3 Merge pull request #7361 from fidencio/topic/gha-ci-add-cri-containerd-tests-skeleton-follow-up-2
gha: ci: cri-containerd: Fix KATA_HYPERVSIOR typo
2023-07-17 22:38:50 +02:00
GabyCT
b4852c8544 Merge pull request #7335 from kata-containers/topic/addmobilenet
tests: Add MobileNet Tensorflow performance benchmark
2023-07-17 14:36:59 -06:00
Gabriela Cervantes
8ccc1e5c93 metrics: Update machine learning documentation
This PR updates the machine learning documentation related with
Tensorflow and Pytorch benchmarks.

Fixes #7359

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-17 20:32:49 +00:00
Fabiano Fidêncio
f50d2b0664 gha: ci: cri-containerd: Fix KATA_HYPERVSIOR typo
KATA_HYPERVSIOR should be KATA_HYPERVISOR

Fixes: #6543

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-17 21:56:51 +02:00
David Esparza
687596ae41 Merge pull request #7320 from dborquez/fix_jq_checkmetrics_checkvar_expression
metrics: replace backslashes used to escape double quoted jq key expr.
2023-07-17 13:50:18 -06:00
Gabriela Cervantes
620b945975 metrics: Add Tensorflow Mobilenet documentation
This PR adds the Tensorflow MobileNet documentation for the machine
learning README.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-17 17:39:05 +00:00
Zhongtao Hu
d50f3888af Merge pull request #7219 from Apokleos/network-refactor
runtime-rs: enhancement of Device Manager for network endpoints.
2023-07-17 14:13:51 +08:00
QuanweiZhou
ce14f26d82 Merge pull request #5450 from openanolis/trace_rs
feat(Tracing): tracing in Rust runtime
2023-07-17 09:27:13 +08:00
Zhongtao Hu
419f8a5db7 Merge pull request #7021 from cheriL/7020/ignore-unconfigured-netinterface
runtime-rs: ignore unconfigured network interfaces
2023-07-16 10:11:15 +08:00
Xuewei Niu
6c91af0a26 agent: Fix exec hang issues with a background process
Issue #4747 and pull request #4748 fix exec hang issues where the exec
command hangs when a process's stdout is not closed. However, the PR might
cause the exec command not to work as expected, leading to CI failure. The
PR was reverted in #7042. This PR resolves the exec hang issues and has
undergone 1000 rounds of testing to verify that it would not cause any CI
failures.

Fixes: #4747

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
Signed-off-by: Xuewei Niu <niuxuewei.nxw@antgroup.com>
2023-07-16 08:32:45 +08:00
David Esparza
5a9829996c Merge pull request #7349 from dborquez/fix_extract_kata_env_for_metrics
metrics: Stop running kata-env before kata is properly installed.
2023-07-14 15:20:52 -06:00
David Esparza
59f4731bb2 metrics: Stop running kata-env before kata is properly installed.
This PR ensures kata-env is called only after some metrics have
completed their workloads. This fixes a bug that occurred when
kata-env was called before kata was installed on the
testing platform.

Fixes: #7348

Signed-off-by: David Esparza <david.esparza.borquez@intel.com>
2023-07-14 13:40:48 -06:00
David Esparza
468f017e21 metrics: Replace backslashes used to escape double quoted key in jq expr.
This PR uses square brackets in a jq expression to access
key values corresponding to metric results in json format.

The values are the data inputs into the checkmetrics tool.

Fixes: #7319

Signed-off-by: David Esparza <david.esparza.borquez@intel.com>
2023-07-14 18:41:41 +00:00
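The before/after difference in the jq expression can be sketched as follows (a minimal sketch; the metric name and JSON shape below are hypothetical, for illustration only):

```shell
# Hypothetical JSON in the shape of a metrics result file.
json='{"boot-time":{"Result":1.25}}'

# Before: a quoted key embedded in a double-quoted string needs
# backslash-escaped inner quotes.
echo "$json" | jq ".\"boot-time\".Result"

# After: square-bracket access inside single quotes, no escaping needed.
echo "$json" | jq '.["boot-time"].Result'
```

Both commands print the same value; the bracket form just avoids the backslash noise in the checkmetrics expressions.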
GabyCT
b9535fb187 Merge pull request #7337 from dborquez/fix_remove_old_metrics_config
metrics: use rm -f to remove the oldest containerd config file.
2023-07-14 09:19:41 -06:00
Fabiano Fidêncio
7a854507cc Merge pull request #7333 from zvonkok/main
kernel: Update kernel config name
2023-07-14 13:49:27 +02:00
Fabiano Fidêncio
cfc90fad84 Merge pull request #7344 from fidencio/topic/kata-deploy-add-a-debug-option
kata-deploy: Add a debug option to kata-deploy (and also use it as part of our CI)
2023-07-14 13:16:55 +02:00
Fabiano Fidêncio
64f013f3bf ci: k8s: Enable debug when running the tests
This will help us to gather more information about Kata Containers in
case of failure.

Fixes: #7343

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-14 12:18:11 +02:00
Fabiano Fidêncio
8f4b1df9cf kata-deploy: Give users the ability to run it on DEBUG mode
The DEBUG env var introduced to the kata-deploy / kata-cleanup yaml file
will be responsible for:
* Setting up the CRI Engine to run with the debug log level set to debug
  * The default is usually info
* Setting up Kata Containers to enable:
  * debug logs
  * debug console
  * agent logs

This will help a lot of folks trying to debug Kata Containers while using
kata-deploy, and also help us to always run with DEBUG=yes as part of
our CI.

Fixes: #7342

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-14 12:18:08 +02:00
Chao Wu
9b3dc572ae Merge pull request #7018 from nubificus/feat_bindmount_propagation
runtime-rs: add parameter for propagation of (u)mount events
2023-07-14 15:21:41 +08:00
Zvonko Kaiser
2c8dfde168 kernel: Update kernel config name
Fixes: #7294

When installing the kernel config, adjust the name like
the vmlinuz and vmlinux files, so that any added suffixes
are also reflected in the kernel config name.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2023-07-14 06:50:35 +00:00
Archana Shinde
b9b8ccca0c Merge pull request #7236 from amshinde/move-guestprotection
kata-ctl: Move GuestProtection code to kata-sys-util
2023-07-13 23:50:17 -07:00
soup
150e54d02b runtime-rs: ignore unconfigured network interfaces
Fixes: #7020

Signed-off-by: soup <lqh348659137@outlook.com>
2023-07-14 14:16:03 +08:00
David Esparza
3ae02f9202 metrics: use rm -f to remove older containerd config file.
In order to run kata metrics we need to check that the containerd
config file is properly set. When this is not the case, we
need to remove that file, and generate a valid one.

This PR runs rm -f in order to ignore errors in case the
file to delete does not exist.

Fixes: #7336

Signed-off-by: David Esparza <david.esparza.borquez@intel.com>
2023-07-13 16:20:03 -06:00
David Esparza
22d4e4c5a6 Merge pull request #7328 from GabyCT/topic/updatecommon
tests: Add function before function name in common.bash for metrics
2023-07-13 16:11:30 -06:00
Gabriela Cervantes
a864d0e349 tests: Add tensorflow mobilenet dockerfile
This PR adds the tensorflow mobilenet dockerfile.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-13 21:24:40 +00:00
Gabriela Cervantes
788d2a254e tests: Add tensorflow mobilenet performance test
This PR adds tensorflow mobilenet performance test for
kata metrics.

Fixes #7334

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-13 21:18:25 +00:00
David Esparza
e8917d7321 Merge pull request #7330 from GabyCT/topic/storagedoc
tests: Add metrics storage documentation
2023-07-13 15:10:53 -06:00
GabyCT
8db43eae44 Merge pull request #7318 from dborquez/fix_timestamp_generator_on_metrics
metrics: Fix metrics ts generator to treat numbers as decimals
2023-07-13 11:21:09 -06:00
Gabriela Cervantes
3fed61e7a4 tests: Add storage link to general metrics documentation
This PR adds storage link to general metrics README.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-13 16:03:49 +00:00
Gabriela Cervantes
b34dda4ca6 tests: Add storage blogbench metrics documentation
This PR adds the storage metrics documentation for blogbench for kata
metrics.

Fixes #7329

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-13 16:00:14 +00:00
Anastassios Nanos
6787c63900 runtime-rs: add parameter for propagation of (u)mount events
Add an extra parameter in `bind_mount_unchecked` to specify
the propagation type: "shared" or "slave".

Fixes: #7017

Signed-off-by: Anastassios Nanos <ananos@nubificus.co.uk>
2023-07-13 15:58:22 +00:00
Gabriela Cervantes
6e5679bc46 tests: Add function before function name in common.bash for metrics
This PR adds the `function` keyword before the function name in the common.bash
script in order to have uniformity across the whole script.

Fixes #7327

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-13 15:48:47 +00:00
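As a minimal sketch, the standardised declaration style looks like this (the function name below is hypothetical, not taken from common.bash):

```shell
# The `function` keyword form now used consistently in common.bash.
function info() {
    echo "INFO: $*"
}

info "metrics environment ready"
```

Both `name() { ... }` and `function name() { ... }` are valid bash; the change is purely about using one style everywhere.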
Archana Shinde
62080f83cb kata-sys-util: Fix compilation errors
Fix compilation errors for aarch64 and s390x

Signed-off-by: Archana Shinde <archana.m.shinde@intel.com>
2023-07-13 20:09:43 +05:30
Archana Shinde
02d99caf6d static-checks: Make cargo clippy pass.
Get rid of cargo clippy warnings.

Signed-off-by: Archana Shinde <archana.m.shinde@intel.com>
2023-07-13 20:08:13 +05:30
Archana Shinde
9824206820 agent: Make the static checks pass for agent
The static checks for the agent require Cargo.lock to be updated.

Signed-off-by: Archana Shinde <archana.m.shinde@intel.com>
2023-07-13 20:08:13 +05:30
Archana Shinde
61e4032b08 kata-ctl: Remove all utility functions to get platform protection
Since these have been added to kata-sys-util, remove these from
kata-ctl. Change all invocations to get platform protection to make use
of kata-sys-util.

Fixes: #7144

Signed-off-by: Archana Shinde <archana.m.shinde@intel.com>
2023-07-13 20:08:13 +05:30
Archana Shinde
a24dbdc781 kata-sys-util: Move utilities to get platform protection
Add utilities to get platform protection to kata-sys-util

Fixes: #7144

Signed-off-by: Archana Shinde <archana.m.shinde@intel.com>
2023-07-13 20:08:13 +05:30
Archana Shinde
dacdf7c282 kata-ctl: Remove cpu related functions from kata-ctl
Remove cpu related functions which have been moved to kata-sys-util.
Change invocations in kata-ctl to make use of functions now moved to
kata-sys-util.

Signed-off-by: Nathan Whyte <nathanwhyte35@gmail.com>
Signed-off-by: Archana Shinde <archana.m.shinde@intel.com>
2023-07-13 20:08:13 +05:30
Archana Shinde
f5d1957174 kata-sys-util: Move additional functionality to cpu.rs
Make certain imports architecture specific as these are not used on all
architectures.
Move additional constants and functionality to cpu.rs.

Signed-off-by: Archana Shinde <archana.m.shinde@intel.com>
2023-07-13 20:08:13 +05:30
Nathan Whyte
304b9d9146 kata-sys-util: Move CPU info functions
Move get_single_cpu_info and get_cpu_flags into kata-sys-util.
Add new functions that get a list of flags and check if a flag
exists in that list.

Fixes #6383

Signed-off-by: Nathan Whyte <nathanwhyte35@gmail.com>
Signed-off-by: Archana Shinde <archana.m.shinde@intel.com>
2023-07-13 20:08:13 +05:30
Fabiano Fidêncio
eed3c7c046 Merge pull request #7322 from fidencio/topic/gha-ci-add-cri-containerd-tests-skeleton-follow-up
gha: ci: Add cri-containerd tests skeleton -- follow up 1
2023-07-13 13:53:48 +02:00
Fabiano Fidêncio
7319cff77a ci: cri-containerd: Add LTS / Active versions for containerd
As we'll be testing against the LTS and the Active versions of
containerd, let's add those entries to the versions.yaml file and make
sure we export what we want to use for the tests as an env var.

The approach taken should not break the current way of getting the
containerd version.

LTS and Active versions of containerd can be found at:
https://containerd.io/releases/#support-horizon

Fixes: #6543

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-13 12:05:47 +02:00
Fabiano Fidêncio
2a957d41c8 ci: cri-containerd: Export GOPATH
Let's make sure this is exported, as it'll be needed in order to install
`yq`, which will be used to get the versions of the dependencies to be
installed.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-13 12:05:47 +02:00
Fabiano Fidêncio
75a294b74b ci: cri-containerd: Ensure deps are installed
Let's make sure we install the needed dependencies for running the
`cri-containerd` tests.

Right now this commit is basically adding a placeholder; later on,
when we're actually able to test the job, we'll add the logic to
install the needed dependencies.

The obvious dependencies we've spotted so far are:
* From the OS
  * jq
  * curl (already present)
* From our repo
  * yq (using the install_yq script)
* From GitHub
  * cri-containerd
  * cri-tools
  * cni plugins

We may need a few more packages, but we will only figure this out as
part of the actual work.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-13 12:04:22 +02:00
Zhongtao Hu
b69cdb5c21 Merge pull request #7286 from xuejun-xj/xuejun/up-fix
dragonball/agent: Add some optimization for Makefile and bugfixes of unit tests on aarch64
2023-07-13 09:39:23 +08:00
GabyCT
ee17097e88 Merge pull request #7282 from GabyCT/topic/enableblogbench
metrics: Enable blogbench test
2023-07-12 16:35:52 -06:00
David Esparza
f63673838b Merge pull request #7315 from GabyCT/topic/machinelearning
tests: Add machine learning performance tests
2023-07-12 15:57:11 -06:00
David Esparza
6924d14df5 metrics: Fix metrics ts generator to treat numbers as decimals
Use the bc tool to perform math operations even when variables contain
values with a leading zero.

Fixes: #7317

Signed-off-by: David Esparza <david.esparza.borquez@intel.com>
2023-07-12 20:57:33 +00:00
Gabriela Cervantes
9e048c8ee0 checkmetrics: Add blogbench read value for qemu
This PR adds the blogbench read value for qemu.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-12 20:38:27 +00:00
Gabriela Cervantes
2935aeb7d7 checkmetrics: Add blogbench write value for qemu
This PR adds the blogbench write value for qemu limit.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-12 20:37:27 +00:00
Gabriela Cervantes
02031e29aa checkmetrics: Add blogbench read value for clh
This PR adds the blogbench read value for clh limit.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-12 20:37:27 +00:00
Gabriela Cervantes
107fae033b checkmetrics: Add blogbench write value for clh
This PR adds the blogbench write value limit for clh.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-12 20:37:27 +00:00
Gabriela Cervantes
8c75c2f4bd metrics: Update blogbench Dockerfile
This PR updates the blogbench dockerfile to use non-interactive mode.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-12 20:37:27 +00:00
Gabriela Cervantes
49723a9ecf metrics: Add double quotes to variables
This PR adds double quotes to variables in the blogbench script to
have uniformity across all the tests.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-12 20:37:27 +00:00
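A minimal sketch of why the quoting matters (the helper and variable below are hypothetical): an unquoted expansion is split on whitespace, while a quoted one is passed as a single word.

```shell
# Helper that reports how many arguments it received.
count_args() {
    echo "$#"
}

value="one two three"
count_args $value      # unquoted: word-splits into 3 arguments
count_args "$value"    # quoted: passed as a single argument
```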
Gabriela Cervantes
dc67d902eb metrics: Enable blogbench test
This PR enables the blogbench performance test for the kata metrics CI.

Fixes #7281

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-12 20:37:24 +00:00
Fabiano Fidêncio
3f38f75918 Merge pull request #7314 from fidencio/topic/gha-ci-add-cri-containerd-tests-skeleton
tests: gha: ci: Add cri-containerd tests skeleton
2023-07-12 22:21:47 +02:00
Fabiano Fidêncio
438fe3b829 gha: ci: Add cri-containerd tests skeleton
This PR builds the foundation for us to start migrating the
cri-containerd tests from Jenkins to GitHub Actions.

Right now the test does nothing and should always finish successfully.
The coming PRs will actually introduce logic to the `gha-run.sh` script
where we'll be able to run the tests and make sure those pass before
having them actually merged.

Fixes: #6543

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-12 20:57:39 +02:00
Fabiano Fidêncio
bd08d745f4 tests: metrics: Move metrics specific function to metrics gha-run.sh
`compress_metrics_results_dir()` is only used by the metrics GHA.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-12 20:56:55 +02:00
Fabiano Fidêncio
3ffd48bc16 tests: common: Move a few utility functions to common.bash
Those functions were originally introduced as part of the
`metrics/gha-run.sh` file, but they will be very handy when we
start adding more tests.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-12 20:55:05 +02:00
Gabriela Cervantes
7f961461bd tests: Add machine learning README
This PR adds machine learning README.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-12 16:37:15 +00:00
Fabiano Fidêncio
bb2ef4ca34 tests: Add function before each function
Let's just keep this standardised.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-12 18:36:09 +02:00
Gabriela Cervantes
063f7aa7cb tests: Add Pytorch Dockerfile
This PR adds Pytorch Dockerfile for kata metrics.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-12 16:34:17 +00:00
Fabiano Fidêncio
b6282f7053 Merge pull request #7255 from GabyCT/topic/memoryinsideenabled
metrics: Enable memory inside container metrics
2023-07-12 18:33:36 +02:00
Gabriela Cervantes
1af03b9b32 tests: Add Pytorch performance test
This PR adds Pytorch performance test for kata metrics.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-12 16:33:02 +00:00
Gabriela Cervantes
4cecd62370 tests: Add tensorflow Dockerfile
This PR adds the tensorflow Dockerfile.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-12 16:31:32 +00:00
Gabriela Cervantes
c4094f62c9 tests: Add metrics machine learning performance tests
This PR adds metrics machine learning performance tests like
Tensorflow and Pytorch.

Fixes #7313

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-12 16:28:25 +00:00
Jeremi Piotrowski
b9a63d66a4 Merge pull request #7297 from jepio/fix-mariner-cache
tools: Use a consistent target name when building mariner initrd
2023-07-12 13:43:47 +02:00
Fabiano Fidêncio
1ab99bd6bb Merge pull request #7276 from fidencio/topic/gha-debug-gha-tests-start
gha: ci: Gather info about the node / pods
2023-07-12 12:35:10 +02:00
Chao Wu
f6a51a8a78 Merge pull request #7306 from justxuewei/none-network-model
runtime-rs: Do not scan network if network model is "none"
2023-07-12 14:53:52 +08:00
Zvonko Kaiser
4e352a73ee Merge pull request #7308 from fidencio/topic/gha-temporarily-disable-tdx-runs
gha: k8s: tdx: Temporarily disable TDX tests
2023-07-12 08:39:02 +02:00
Fabiano Fidêncio
89b622dcb8 gha: k8s: tdx: Temporarily disable TDX tests
TDX tests need to be temporarily disabled as the current machine
allocated for this will be off for some time, and a new machine will
only be added next week.

Fixes: #7307

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-12 08:26:10 +02:00
Fabiano Fidêncio
8c9d08e872 gha: ci: Gather info about the node / pods
This is a very simple addition that should be expanded by
https://github.com/kata-containers/kata-containers/pull/7185, and it
targets gathering more info that will help us debug CI failures.

Fixes: #7296

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-12 08:04:37 +02:00
alex.lyn
283f809dda runtime-rs: Enhancing Device Manager for network endpoints.
Currently, network endpoints are separate from the device manager
and need to be included for proper management. In order to do so,
we need to refactor the implementation of the network endpoints.

The first step is to restructure the NetworkConfig and NetworkDevice
structures.
Next, we will implement the virtio-net driver and add the Network
device to the Device Manager.
Finally, we'll unify entries with do_handle_device for each endpoint.

Fixes: #7215

Signed-off-by: alex.lyn <alex.lyn@antgroup.com>
2023-07-12 11:27:12 +08:00
xuejun-xj
a65291ad72 agent: rustjail: update test_mknod_dev
When running cargo test in a container, test_mknod_dev may sometimes fail
with "Operation not permitted". Change the device path to
"/dev/fifo-test" to avoid this case.

Fixes: #7284

Signed-off-by: xuejun-xj <jiyunxue@linux.alibaba.com>
2023-07-12 11:22:32 +08:00
xuejun-xj
46b81dd7d2 agent: clippy: fix cargo clippy warnings
Replace "if let Ok(_) = ..." with the ".is_ok()" method.

Fixes: #7284

Signed-off-by: xuejun-xj <jiyunxue@linux.alibaba.com>
2023-07-12 11:22:32 +08:00
xuejun-xj
c4771d9e89 agent: Makefile: enable setting SECCOMP dynamically
Change ":=" to "?=".

Fixes: #7284

Signed-off-by: xuejun-xj <jiyunxue@linux.alibaba.com>
2023-07-12 11:22:32 +08:00
xuejun-xj
a88212e2c5 utils.mk: update BUILD_TYPE argument
Enable dynamically setting the BUILD_TYPE argument.

Fixes: #7284

Signed-off-by: xuejun-xj <jiyunxue@linux.alibaba.com>
2023-07-12 11:22:32 +08:00
xuejun-xj
883b4db380 dragonball: fix cargo test on aarch64
1. Update memory end assert because address space layout differs between
x86 and arm.
2. Set guest_addr for aarch64 in test_handler_insert_region case.

Fixes: #7284
TODO: #7290

Signed-off-by: xuejun-xj <jiyunxue@linux.alibaba.com>
2023-07-12 11:22:31 +08:00
Xuewei Niu
6822029c81 runtime-rs: Do not scan network if network model is "none"
Skip scanning the network from the netns if the network model is set to
"none".

Fixes: #7305

Signed-off-by: Xuewei Niu <niuxuewei.nxw@antgroup.com>
2023-07-12 10:00:50 +08:00
Fabiano Fidêncio
ae55893deb Merge pull request #7303 from GabyCT/topic/cleanupmemoryusage
metrics: Update memory usage script
2023-07-11 23:52:05 +02:00
Gabriela Cervantes
ce54e43ebe metrics: Update memory usage script
This PR updates the memory usage script by applying clean_env_ctr in main,
in order to avoid failures caused by processes left behind.

Fixes #7302

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-11 17:03:25 +00:00
Fabiano Fidêncio
ceb5c69ee8 Merge pull request #7299 from fidencio/topic/gha-stop-previous-workflows-if-a-pr-is-updated
gha: Cancel previous jobs if a PR is updated
2023-07-11 16:22:47 +02:00
Fabiano Fidêncio
fbc2a91ab5 gha: Cancel previous jobs if a PR is updated
Let's make sure we cancel previous runs whenever the PR is updated,
mainly as some of those take a lot of time to run.

This is based on the following stack overflow suggestion:
https://stackoverflow.com/questions/66335225/how-to-cancel-previous-runs-in-the-pr-when-you-push-new-commitsupdate-the-curre

This is very much needed, as we don't want to wait a long time for
access to a runner because other runners are still busy performing a
task that's meaningless due to the PR update.

Fixes: #7298

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-11 14:37:10 +02:00
Jeremi Piotrowski
307cfc8f7a tools: Use a consistent target name when building mariner initrd
Currently a mixture of cbl-mariner and mariner is used when creating the
mariner initrd. The kata-static tarball has mariner in the name, but the
Jenkins URL uses cbl-mariner. This breaks cache usage.

Use mariner as the target name throughout the build, so that caching works.

Fixes: #7292
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
2023-07-11 14:17:14 +02:00
xuejun-xj
aedc586e14 dragonball: Makefile: add coverage target
Add "coverage" target to compute code coverage for dragonball.

Fixes: #7284

Signed-off-by: xuejun-xj <jiyunxue@linux.alibaba.com>
2023-07-11 14:36:25 +08:00
Gabriela Cervantes
310e069f73 checkmetrics: Enable checkmetrics for memory inside test
This PR enables the checkmetrics to include the memory inside
container test.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-10 17:05:13 +00:00
Ji-Xinyou
ed23b47c71 tracing: Add tracing to runtime-rs
Introduce tracing into runtime-rs; only some functions are instrumented.

Fixes: #5239

Signed-off-by: Ji-Xinyou <jerryji0414@outlook.com>
Signed-off-by: Yushuo <y-shuo@linux.alibaba.com>
2023-07-09 22:09:43 +08:00
Gabriela Cervantes
2be342023b checkmetrics: Add memory usage inside container value for qemu
This PR adds the memory usage inside container value for qemu.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-07 16:28:28 +00:00
Gabriela Cervantes
6ca34f949e checkmetrics: Add memory inside container value for clh
Add memory inside container value for clh.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-07 16:28:28 +00:00
Gabriela Cervantes
6c68924230 metrics: Enable memory inside container metrics
This PR will enable the memory inside container metrics for the Kata CI.

Fixes #7254

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2023-07-07 16:28:28 +00:00
Zvonko Kaiser
f72cb2fc12 agent: Remove shadowed function, add slog-term
Remove the shadowed get_mounts() and add slog-term as a new crate;
slog can log directly to stdout, and we can capture output
in the test cases created for the function under test.

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2023-07-07 11:28:14 +00:00
Zvonko Kaiser
07810bf71f agent: Ignore already mounted dev/fs/pseudo-fs
When using an initrd and setting KATA_INIT=yes, meaning we're using the
kata-agent as the init process, we need to make sure that the agent does
not segfault if mounts have already happened. Some workloads need to
configure several things in the initrd before the kata-agent starts,
which involves having /proc or /sys already mounted.

Fixes: #6992

Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
2023-07-07 07:36:04 +00:00
818 changed files with 69833 additions and 50181 deletions

View File

@@ -9,6 +9,10 @@ on:
- labeled
- unlabeled
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
pr_wip_check:
runs-on: ubuntu-latest

View File

@@ -10,6 +10,10 @@ on:
- labeled
- unlabeled
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
check-issues:
if: ${{ github.event.label.name != 'auto-backport' }}

View File

@@ -11,6 +11,10 @@ on:
- opened
- reopened
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
add-new-issues-to-backlog:
runs-on: ubuntu-latest

View File

@@ -12,6 +12,10 @@ on:
- reopened
- synchronize
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
add-pr-size-label:
runs-on: ubuntu-latest

View File

@@ -2,6 +2,10 @@ on:
pull_request_target:
types: ["labeled", "closed"]
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
backport:
name: Backport PR

View File

@@ -99,7 +99,7 @@ jobs:
path: kata-artifacts
- name: merge-artifacts
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts
./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts versions.yaml
- name: store-artifacts
uses: actions/upload-artifact@v3
with:

View File

@@ -2,6 +2,10 @@ name: CI | Build kata-static tarball for arm64
on:
workflow_call:
inputs:
stage:
required: false
type: string
default: test
tarball-suffix:
required: false
type: string
@@ -29,6 +33,8 @@ jobs:
- rootfs-initrd
- shim-v2
- virtiofsd
stage:
- ${{ inputs.stage }}
steps:
- name: Adjust a permission for repo
run: |
@@ -83,7 +89,7 @@ jobs:
path: kata-artifacts
- name: merge-artifacts
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts
./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts versions.yaml
- name: store-artifacts
uses: actions/upload-artifact@v3
with:

View File

@@ -2,6 +2,10 @@ name: CI | Build kata-static tarball for s390x
on:
workflow_call:
inputs:
stage:
required: false
type: string
default: test
tarball-suffix:
required: false
type: string
@@ -25,6 +29,8 @@ jobs:
- rootfs-initrd
- shim-v2
- virtiofsd
stage:
- ${{ inputs.stage }}
steps:
- name: Adjust a permission for repo
run: |
@@ -80,7 +86,7 @@ jobs:
path: kata-artifacts
- name: merge-artifacts
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts
./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts versions.yaml
- name: store-artifacts
uses: actions/upload-artifact@v3
with:

View File

@@ -7,6 +7,11 @@ on:
- reopened
- synchronize
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.jpeg', '**.svg', '/docs/**' ]
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
cargo-deny-runner:
runs-on: ubuntu-latest

View File

@@ -1,170 +0,0 @@
name: CI | Publish CC runtime payload for amd64
on:
workflow_call:
inputs:
target-arch:
required: true
type: string
jobs:
build-asset:
runs-on: ubuntu-latest
strategy:
matrix:
measured_rootfs:
- no
asset:
- cc-cloud-hypervisor
- cc-qemu
- cc-virtiofsd
- cc-sev-kernel
- cc-sev-ovmf
- cc-x86_64-ovmf
- cc-snp-qemu
- cc-sev-rootfs-initrd
- cc-tdx-qemu
- cc-tdx-td-shim
- cc-tdx-tdvf
include:
- measured_rootfs: yes
asset: cc-kernel
- measured_rootfs: yes
asset: cc-tdx-kernel
- measured_rootfs: yes
asset: cc-rootfs-image
- measured_rootfs: yes
asset: cc-tdx-rootfs-image
steps:
- name: Login to Kata Containers quay.io
uses: docker/login-action@v2
with:
registry: quay.io
username: ${{ secrets.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- uses: actions/checkout@v3
with:
fetch-depth: 0 # This is needed in order to keep the commit ids history
- name: Build ${{ matrix.asset }}
run: |
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
sudo cp -r "${build_dir}" "kata-build"
env:
KATA_ASSET: ${{ matrix.asset }}
TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
PUSH_TO_REGISTRY: yes
MEASURED_ROOTFS: ${{ matrix.measured_rootfs }}
- name: store-artifact ${{ matrix.asset }}
uses: actions/upload-artifact@v3
with:
name: kata-artifacts
path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
retention-days: 1
if-no-files-found: error
- name: store-artifact root_hash_tdx.txt
uses: actions/upload-artifact@v3
with:
name: root_hash_tdx.txt
path: tools/osbuilder/root_hash_tdx.txt
retention-days: 1
if-no-files-found: ignore
- name: store-artifact root_hash_vanilla.txt
uses: actions/upload-artifact@v3
with:
name: root_hash_vanilla.txt
path: tools/osbuilder/root_hash_vanilla.txt
retention-days: 1
if-no-files-found: ignore
build-asset-cc-shim-v2:
runs-on: ubuntu-latest
needs: build-asset
steps:
- name: Login to Kata Containers quay.io
uses: docker/login-action@v2
with:
registry: quay.io
username: ${{ secrets.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- uses: actions/checkout@v3
- name: Get root_hash_tdx.txt
uses: actions/download-artifact@v3
with:
name: root_hash_tdx.txt
path: tools/osbuilder/
- name: Get root_hash_vanilla.txt
uses: actions/download-artifact@v3
with:
name: root_hash_vanilla.txt
path: tools/osbuilder/
- name: Build cc-shim-v2
run: |
make cc-shim-v2-tarball
build_dir=$(readlink -f build)
# store-artifact does not work with symlinks
sudo cp -r "${build_dir}" "kata-build"
env:
PUSH_TO_REGISTRY: yes
MEASURED_ROOTFS: yes
- name: store-artifact cc-shim-v2
uses: actions/upload-artifact@v3
with:
name: kata-artifacts
path: kata-build/kata-static-cc-shim-v2.tar.xz
retention-days: 1
if-no-files-found: error
create-kata-tarball:
runs-on: ubuntu-latest
needs: [build-asset, build-asset-cc-shim-v2]
steps:
- uses: actions/checkout@v3
- name: get-artifacts
uses: actions/download-artifact@v3
with:
name: kata-artifacts
path: kata-artifacts
- name: merge-artifacts
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts
- name: store-artifacts
uses: actions/upload-artifact@v3
with:
name: kata-static-tarball
path: kata-static.tar.xz
retention-days: 1
if-no-files-found: error
kata-payload:
needs: create-kata-tarball
runs-on: ubuntu-latest
steps:
- name: Login to Confidential Containers quay.io
uses: docker/login-action@v2
with:
registry: quay.io
username: ${{ secrets.COCO_QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.COCO_QUAY_DEPLOYER_PASSWORD }}
- uses: actions/checkout@v3
- name: get-kata-tarball
uses: actions/download-artifact@v3
with:
name: kata-static-tarball
- name: build-and-push-kata-payload
id: build-and-push-kata-payload
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
$(pwd)/kata-static.tar.xz "quay.io/confidential-containers/runtime-payload-ci" \
"kata-containers-${{ inputs.target-arch }}"


@@ -1,171 +0,0 @@
name: CI | Publish CC runtime payload for s390x
on:
workflow_call:
inputs:
target-arch:
required: true
type: string
jobs:
build-asset:
runs-on: s390x
strategy:
matrix:
measured_rootfs:
- no
asset:
- cc-qemu
- cc-rootfs-initrd
- cc-se-image
- cc-virtiofsd
include:
- measured_rootfs: yes
asset: cc-kernel
- measured_rootfs: yes
asset: cc-rootfs-image
steps:
- name: Login to Kata Containers quay.io
uses: docker/login-action@v2
with:
registry: quay.io
username: ${{ secrets.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- name: Adjust permissions for the repo
run: |
sudo chown -R $USER:$USER $GITHUB_WORKSPACE
- uses: actions/checkout@v3
with:
fetch-depth: 0 # This is needed in order to keep the commit history
- name: Place a host key document
run: |
mkdir -p "host-key-document"
cp "${CI_HKD_PATH}" "host-key-document"
env:
CI_HKD_PATH: ${{ secrets.CI_HKD_PATH }}
- name: Build ${{ matrix.asset }}
run: |
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlinks
sudo cp -r "${build_dir}" "kata-build"
sudo chown -R $(id -u):$(id -g) "kata-build"
env:
KATA_ASSET: ${{ matrix.asset }}
TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
PUSH_TO_REGISTRY: yes
MEASURED_ROOTFS: ${{ matrix.measured_rootfs }}
HKD_PATH: "host-key-document"
- name: store-artifact ${{ matrix.asset }}
uses: actions/upload-artifact@v3
with:
name: kata-artifacts-s390x
path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
retention-days: 1
if-no-files-found: error
- name: store-artifact root_hash_vanilla.txt
uses: actions/upload-artifact@v3
with:
name: root_hash_vanilla.txt-s390x
path: tools/osbuilder/root_hash_vanilla.txt
retention-days: 1
if-no-files-found: ignore
build-asset-cc-shim-v2:
runs-on: s390x
needs: build-asset
steps:
- name: Login to Kata Containers quay.io
uses: docker/login-action@v2
with:
registry: quay.io
username: ${{ secrets.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- name: Adjust permissions for the repo
run: |
sudo chown -R $USER:$USER $GITHUB_WORKSPACE
- uses: actions/checkout@v3
- name: Get root_hash_vanilla.txt
uses: actions/download-artifact@v3
with:
name: root_hash_vanilla.txt-s390x
path: tools/osbuilder/
- name: Build cc-shim-v2
run: |
make cc-shim-v2-tarball
build_dir=$(readlink -f build)
# store-artifact does not work with symlinks
sudo cp -r "${build_dir}" "kata-build"
env:
PUSH_TO_REGISTRY: yes
MEASURED_ROOTFS: yes
- name: store-artifact cc-shim-v2
uses: actions/upload-artifact@v3
with:
name: kata-artifacts-s390x
path: kata-build/kata-static-cc-shim-v2.tar.xz
retention-days: 1
if-no-files-found: error
create-kata-tarball:
runs-on: s390x
needs: [build-asset, build-asset-cc-shim-v2]
steps:
- name: Adjust permissions for the repo
run: |
sudo chown -R $USER:$USER $GITHUB_WORKSPACE
- uses: actions/checkout@v3
- name: get-artifacts
uses: actions/download-artifact@v3
with:
name: kata-artifacts-s390x
path: kata-artifacts
- name: merge-artifacts
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts
- name: store-artifacts
uses: actions/upload-artifact@v3
with:
name: kata-static-tarball-s390x
path: kata-static.tar.xz
retention-days: 1
if-no-files-found: error
kata-payload:
needs: create-kata-tarball
runs-on: s390x
steps:
- name: Login to Confidential Containers quay.io
uses: docker/login-action@v2
with:
registry: quay.io
username: ${{ secrets.COCO_QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.COCO_QUAY_DEPLOYER_PASSWORD }}
- name: Adjust permissions for the repo
run: |
sudo chown -R $USER:$USER $GITHUB_WORKSPACE
- uses: actions/checkout@v3
- name: get-kata-tarball
uses: actions/download-artifact@v3
with:
name: kata-static-tarball-s390x
- name: build-and-push-kata-payload
id: build-and-push-kata-payload
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
$(pwd)/kata-static.tar.xz "quay.io/confidential-containers/runtime-payload-ci" \
"kata-containers-${{ inputs.target-arch }}"


@@ -1,47 +0,0 @@
name: CI | Publish Kata Containers payload for Confidential Containers
on:
push:
branches:
- CCv0
workflow_dispatch:
jobs:
build-assets-amd64:
uses: ./.github/workflows/cc-payload-after-push-amd64.yaml
with:
target-arch: amd64
secrets: inherit
build-assets-s390x:
uses: ./.github/workflows/cc-payload-after-push-s390x.yaml
with:
target-arch: s390x
secrets: inherit
publish:
runs-on: ubuntu-latest
needs: [build-assets-amd64, build-assets-s390x]
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Login to Confidential Containers quay.io
uses: docker/login-action@v2
with:
registry: quay.io
username: ${{ secrets.COCO_QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.COCO_QUAY_DEPLOYER_PASSWORD }}
- name: Push commit multi-arch manifest
run: |
docker manifest create quay.io/confidential-containers/runtime-payload-ci:kata-containers-${GITHUB_SHA} \
--amend quay.io/confidential-containers/runtime-payload-ci:kata-containers-${GITHUB_SHA}-amd64 \
--amend quay.io/confidential-containers/runtime-payload-ci:kata-containers-${GITHUB_SHA}-s390x
docker manifest push quay.io/confidential-containers/runtime-payload-ci:kata-containers-${GITHUB_SHA}
- name: Push latest multi-arch manifest
run: |
docker manifest create quay.io/confidential-containers/runtime-payload-ci:kata-containers-latest \
--amend quay.io/confidential-containers/runtime-payload-ci:kata-containers-amd64 \
--amend quay.io/confidential-containers/runtime-payload-ci:kata-containers-s390x
docker manifest push quay.io/confidential-containers/runtime-payload-ci:kata-containers-latest


@@ -1,154 +0,0 @@
name: Publish Kata Containers payload for Confidential Containers (amd64)
on:
workflow_call:
inputs:
target-arch:
required: true
type: string
jobs:
build-asset:
runs-on: ubuntu-latest
strategy:
matrix:
measured_rootfs:
- no
asset:
- cc-cloud-hypervisor
- cc-qemu
- cc-virtiofsd
- cc-sev-kernel
- cc-sev-ovmf
- cc-x86_64-ovmf
- cc-snp-qemu
- cc-sev-rootfs-initrd
- cc-tdx-qemu
- cc-tdx-td-shim
- cc-tdx-tdvf
include:
- measured_rootfs: yes
asset: cc-kernel
- measured_rootfs: yes
asset: cc-tdx-kernel
- measured_rootfs: yes
asset: cc-rootfs-image
- measured_rootfs: yes
asset: cc-tdx-rootfs-image
steps:
- uses: actions/checkout@v3
- name: Build ${{ matrix.asset }}
run: |
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlinks
sudo cp -r "${build_dir}" "kata-build"
env:
KATA_ASSET: ${{ matrix.asset }}
TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
MEASURED_ROOTFS: ${{ matrix.measured_rootfs }}
- name: store-artifact ${{ matrix.asset }}
uses: actions/upload-artifact@v3
with:
name: kata-artifacts
path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
retention-days: 1
if-no-files-found: error
- name: store-artifact root_hash_tdx.txt
uses: actions/upload-artifact@v3
with:
name: root_hash_tdx.txt
path: tools/osbuilder/root_hash_tdx.txt
retention-days: 1
if-no-files-found: ignore
- name: store-artifact root_hash_vanilla.txt
uses: actions/upload-artifact@v3
with:
name: root_hash_vanilla.txt
path: tools/osbuilder/root_hash_vanilla.txt
retention-days: 1
if-no-files-found: ignore
build-asset-cc-shim-v2:
runs-on: ubuntu-latest
needs: build-asset
steps:
- uses: actions/checkout@v3
- name: Get root_hash_tdx.txt
uses: actions/download-artifact@v3
with:
name: root_hash_tdx.txt
path: tools/osbuilder/
- name: Get root_hash_vanilla.txt
uses: actions/download-artifact@v3
with:
name: root_hash_vanilla.txt
path: tools/osbuilder/
- name: Build cc-shim-v2
run: |
make cc-shim-v2-tarball
build_dir=$(readlink -f build)
# store-artifact does not work with symlinks
sudo cp -r "${build_dir}" "kata-build"
env:
MEASURED_ROOTFS: yes
- name: store-artifact cc-shim-v2
uses: actions/upload-artifact@v3
with:
name: kata-artifacts
path: kata-build/kata-static-cc-shim-v2.tar.xz
retention-days: 1
if-no-files-found: error
create-kata-tarball:
runs-on: ubuntu-latest
needs: [build-asset, build-asset-cc-shim-v2]
steps:
- uses: actions/checkout@v3
- name: get-artifacts
uses: actions/download-artifact@v3
with:
name: kata-artifacts
path: kata-artifacts
- name: merge-artifacts
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts
- name: store-artifacts
uses: actions/upload-artifact@v3
with:
name: kata-static-tarball
path: kata-static.tar.xz
retention-days: 1
if-no-files-found: error
kata-payload:
needs: create-kata-tarball
runs-on: ubuntu-latest
steps:
- name: Login to quay.io
uses: docker/login-action@v2
with:
registry: quay.io
username: ${{ secrets.COCO_QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.COCO_QUAY_DEPLOYER_PASSWORD }}
- uses: actions/checkout@v3
- name: get-kata-tarball
uses: actions/download-artifact@v3
with:
name: kata-static-tarball
- name: build-and-push-kata-payload
id: build-and-push-kata-payload
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
$(pwd)/kata-static.tar.xz \
"quay.io/confidential-containers/runtime-payload" \
"kata-containers-${{ inputs.target-arch }}"


@@ -1,142 +0,0 @@
name: Publish Kata Containers payload for Confidential Containers (s390x)
on:
workflow_call:
inputs:
target-arch:
required: true
type: string
jobs:
build-asset:
runs-on: s390x
strategy:
matrix:
measured_rootfs:
- no
asset:
- cc-qemu
- cc-virtiofsd
include:
- measured_rootfs: yes
asset: cc-kernel
- measured_rootfs: yes
asset: cc-rootfs-image
steps:
- name: Adjust permissions for the repo
run: |
sudo chown -R $USER:$USER $GITHUB_WORKSPACE
- uses: actions/checkout@v3
- name: Build ${{ matrix.asset }}
run: |
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlinks
sudo cp -r "${build_dir}" "kata-build"
env:
KATA_ASSET: ${{ matrix.asset }}
TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
MEASURED_ROOTFS: ${{ matrix.measured_rootfs }}
- name: store-artifact ${{ matrix.asset }}
uses: actions/upload-artifact@v3
with:
name: kata-artifacts-s390x
path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
retention-days: 1
if-no-files-found: error
- name: store-artifact root_hash_vanilla.txt
uses: actions/upload-artifact@v3
with:
name: root_hash_vanilla.txt-s390x
path: tools/osbuilder/root_hash_vanilla.txt
retention-days: 1
if-no-files-found: ignore
build-asset-cc-shim-v2:
runs-on: s390x
needs: build-asset
steps:
- name: Adjust permissions for the repo
run: |
sudo chown -R $USER:$USER $GITHUB_WORKSPACE
- uses: actions/checkout@v3
- name: Get root_hash_vanilla.txt
uses: actions/download-artifact@v3
with:
name: root_hash_vanilla.txt-s390x
path: tools/osbuilder/
- name: Build cc-shim-v2
run: |
make cc-shim-v2-tarball
build_dir=$(readlink -f build)
# store-artifact does not work with symlinks
sudo cp -r "${build_dir}" "kata-build"
env:
MEASURED_ROOTFS: yes
- name: store-artifact cc-shim-v2
uses: actions/upload-artifact@v3
with:
name: kata-artifacts-s390x
path: kata-build/kata-static-cc-shim-v2.tar.xz
retention-days: 1
if-no-files-found: error
create-kata-tarball:
runs-on: s390x
needs: [build-asset, build-asset-cc-shim-v2]
steps:
- name: Adjust permissions for the repo
run: |
sudo chown -R $USER:$USER $GITHUB_WORKSPACE
- uses: actions/checkout@v3
- name: get-artifacts
uses: actions/download-artifact@v3
with:
name: kata-artifacts-s390x
path: kata-artifacts
- name: merge-artifacts
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts
- name: store-artifacts
uses: actions/upload-artifact@v3
with:
name: kata-static-tarball-s390x
path: kata-static.tar.xz
retention-days: 1
if-no-files-found: error
kata-payload:
needs: create-kata-tarball
runs-on: s390x
steps:
- name: Login to quay.io
uses: docker/login-action@v2
with:
registry: quay.io
username: ${{ secrets.COCO_QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.COCO_QUAY_DEPLOYER_PASSWORD }}
- name: Adjust permissions for the repo
run: |
sudo chown -R $USER:$USER $GITHUB_WORKSPACE
- uses: actions/checkout@v3
- name: get-kata-tarball
uses: actions/download-artifact@v3
with:
name: kata-static-tarball-s390x
- name: build-and-push-kata-payload
id: build-and-push-kata-payload
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
$(pwd)/kata-static.tar.xz \
"quay.io/confidential-containers/runtime-payload" \
"kata-containers-${{ inputs.target-arch }}"


@@ -1,46 +0,0 @@
name: Publish Kata Containers payload for Confidential Containers
on:
push:
tags:
- 'CC\-[0-9]+.[0-9]+.[0-9]+'
jobs:
build-assets-amd64:
uses: ./.github/workflows/cc-payload-amd64.yaml
with:
target-arch: amd64
secrets: inherit
build-assets-s390x:
uses: ./.github/workflows/cc-payload-s390x.yaml
with:
target-arch: s390x
secrets: inherit
publish:
runs-on: ubuntu-latest
needs: [build-assets-amd64, build-assets-s390x]
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Login to Confidential Containers quay.io
uses: docker/login-action@v2
with:
registry: quay.io
username: ${{ secrets.COCO_QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.COCO_QUAY_DEPLOYER_PASSWORD }}
- name: Push commit multi-arch manifest
run: |
docker manifest create quay.io/confidential-containers/runtime-payload:kata-containers-${GITHUB_SHA} \
--amend quay.io/confidential-containers/runtime-payload:kata-containers-${GITHUB_SHA}-amd64 \
--amend quay.io/confidential-containers/runtime-payload:kata-containers-${GITHUB_SHA}-s390x
docker manifest push quay.io/confidential-containers/runtime-payload:kata-containers-${GITHUB_SHA}
- name: Push latest multi-arch manifest
run: |
docker manifest create quay.io/confidential-containers/runtime-payload:kata-containers-latest \
--amend quay.io/confidential-containers/runtime-payload:kata-containers-amd64 \
--amend quay.io/confidential-containers/runtime-payload:kata-containers-s390x
docker manifest push quay.io/confidential-containers/runtime-payload:kata-containers-latest


@@ -4,6 +4,10 @@ on:
- cron: '0 0 * * *'
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
kata-containers-ci-on-push:
uses: ./.github/workflows/ci.yaml
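The `concurrency` blocks being added throughout these workflows key cancellation on the workflow name plus either the PR number or, failing that, the ref. A rough shell analogue of that `||` fallback, with hypothetical values:

```shell
# Emulates the GitHub Actions expression
#   ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
# using shell default-expansion: an empty PR number falls back to the ref.
workflow="CI"
pr_number=""              # empty on a plain branch push (no PR context)
ref="refs/heads/main"
group="${workflow}-${pr_number:-$ref}"
echo "$group"             # prints CI-refs/heads/main
```

With `cancel-in-progress: true`, a new run in the same group cancels the older in-flight run, so force-pushes to a PR don't pile up CI jobs.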


@@ -3,6 +3,7 @@ on:
pull_request_target:
branches:
- 'main'
- 'stable-*'
types:
# Adding 'labeled' to the list of activity types that trigger this event
# (default: opened, synchronize, reopened) so that we can run this
@@ -14,6 +15,11 @@ on:
- labeled
paths-ignore:
- 'docs/**'
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
kata-containers-ci-on-push:
if: ${{ contains(github.event.pull_request.labels.*.name, 'ok-to-test') }}


@@ -74,3 +74,24 @@ jobs:
with:
tarball-suffix: -${{ inputs.tag }}
commit-hash: ${{ inputs.commit-hash }}
run-cri-containerd-tests:
needs: build-kata-static-tarball-amd64
uses: ./.github/workflows/run-cri-containerd-tests.yaml
with:
tarball-suffix: -${{ inputs.tag }}
commit-hash: ${{ inputs.commit-hash }}
run-nydus-tests:
needs: build-kata-static-tarball-amd64
uses: ./.github/workflows/run-nydus-tests.yaml
with:
tarball-suffix: -${{ inputs.tag }}
commit-hash: ${{ inputs.commit-hash }}
run-vfio-tests:
needs: build-kata-static-tarball-amd64
uses: ./.github/workflows/run-vfio-tests.yaml
with:
tarball-suffix: -${{ inputs.tag }}
commit-hash: ${{ inputs.commit-hash }}


@@ -6,6 +6,10 @@ on:
- reopened
- synchronize
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
env:
error_msg: |+
See the document below for help on formatting commits for the project.
@@ -47,7 +51,7 @@ jobs:
uses: tim-actions/commit-message-checker-with-regex@v0.3.1
with:
commits: ${{ steps.get-pr-commits.outputs.commits }}
pattern: '^.{0,75}(\n.*)*$|^Merge pull request (?:kata-containers)?#[\d]+ from.*'
pattern: '^.{0,75}(\n.*)*$'
error: 'Subject too long (max 75)'
post_error: ${{ env.error_msg }}
@@ -98,6 +102,6 @@ jobs:
uses: tim-actions/commit-message-checker-with-regex@v0.3.1
with:
commits: ${{ steps.get-pr-commits.outputs.commits }}
pattern: '^[\s\t]*[^:\s\t]+[\s\t]*:|^Merge pull request (?:kata-containers)?#[\d]+ from.*'
pattern: '^[\s\t]*[^:\s\t]+[\s\t]*:'
error: 'Failed to find subsystem in subject'
post_error: ${{ env.error_msg }}
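The tightened pattern above now requires a `subsystem:` prefix on every subject line (the Merge-PR escape hatch is dropped). An ERE translation of that PCRE check for local experimentation — the sample subjects below are made up:

```shell
# ERE equivalent of the workflow's '^[\s\t]*[^:\s\t]+[\s\t]*:' (\s -> [[:space:]]).
pattern='^[[:space:]]*[^:[:space:]]+[[:space:]]*:'
check() {
  if echo "$1" | grep -Eq "$pattern"; then
    echo "ok: $1"
  else
    echo "missing subsystem: $1"
  fi
}
check "metrics: Add iperf3 network test"   # has a subsystem prefix, passes
check "Add iperf3 network test"            # no prefix, would be rejected
```

This is only a sketch of the check; the workflow itself runs the pattern through `tim-actions/commit-message-checker-with-regex`, not `grep`.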


@@ -6,6 +6,11 @@ on:
- reopened
- synchronize
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.jpeg', '**.svg', '/docs/**' ]
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
name: Darwin tests
jobs:
test:


@@ -1,124 +0,0 @@
on:
issue_comment:
types: [created, edited]
name: deploy-ccv0-demo
jobs:
check-comment-and-membership:
runs-on: ubuntu-latest
if: |
github.event.issue.pull_request
&& github.event_name == 'issue_comment'
&& github.event.action == 'created'
&& startsWith(github.event.comment.body, '/deploy-ccv0-demo')
steps:
- name: Check membership
uses: kata-containers/is-organization-member@1.0.1
id: is_organization_member
with:
organization: kata-containers
username: ${{ github.event.comment.user.login }}
token: ${{ secrets.GITHUB_TOKEN }}
- name: Fail if not member
run: |
result=${{ steps.is_organization_member.outputs.result }}
if [ $result == false ]; then
user=${{ github.event.comment.user.login }}
echo Either ${user} is not part of the kata-containers organization
echo or ${user} has their Organization Visibility set to Private at
echo https://github.com/orgs/kata-containers/people?query=${user}
echo
echo Ensure you change your Organization Visibility to Public and
echo trigger the test again.
exit 1
fi
build-asset:
runs-on: ubuntu-latest
needs: check-comment-and-membership
strategy:
matrix:
asset:
- cloud-hypervisor
- firecracker
- kernel
- qemu
- rootfs-image
- rootfs-initrd
- shim-v2
steps:
- uses: actions/checkout@v2
- name: Prepare confidential container rootfs
if: ${{ matrix.asset == 'rootfs-initrd' }}
run: |
pushd include_rootfs/etc
curl -LO https://raw.githubusercontent.com/confidential-containers/documentation/main/demos/ssh-demo/aa-offline_fs_kbc-keys.json
mkdir kata-containers
envsubst < docs/how-to/data/confidential-agent-config.toml.in > kata-containers/agent.toml
popd
env:
AA_KBC_PARAMS: offline_fs_kbc::null
- name: Build ${{ matrix.asset }}
run: |
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlinks
sudo cp -r "${build_dir}" "kata-build"
env:
AA_KBC: offline_fs_kbc
INCLUDE_ROOTFS: include_rootfs
KATA_ASSET: ${{ matrix.asset }}
TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
- name: store-artifact ${{ matrix.asset }}
uses: actions/upload-artifact@v2
with:
name: kata-artifacts
path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
if-no-files-found: error
create-kata-tarball:
runs-on: ubuntu-latest
needs: build-asset
steps:
- uses: actions/checkout@v2
- name: get-artifacts
uses: actions/download-artifact@v2
with:
name: kata-artifacts
path: kata-artifacts
- name: merge-artifacts
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts
- name: store-artifacts
uses: actions/upload-artifact@v2
with:
name: kata-static-tarball
path: kata-static.tar.xz
kata-deploy:
needs: create-kata-tarball
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: get-kata-tarball
uses: actions/download-artifact@v2
with:
name: kata-static-tarball
- name: build-and-push-kata-deploy-ci
id: build-and-push-kata-deploy-ci
run: |
tag=$(echo $GITHUB_REF | cut -d/ -f3-)
pushd $GITHUB_WORKSPACE
git checkout $tag
pkg_sha=$(git rev-parse HEAD)
popd
mv kata-static.tar.xz $GITHUB_WORKSPACE/tools/packaging/kata-deploy/kata-static.tar.xz
docker build --build-arg KATA_ARTIFACTS=kata-static.tar.xz -t quay.io/confidential-containers/runtime-payload:$pkg_sha $GITHUB_WORKSPACE/tools/packaging/kata-deploy
docker login -u ${{ secrets.QUAY_DEPLOYER_USERNAME }} -p ${{ secrets.QUAY_DEPLOYER_PASSWORD }} quay.io
docker push quay.io/confidential-containers/runtime-payload:$pkg_sha
mkdir -p packaging/kata-deploy
ln -s $GITHUB_WORKSPACE/tools/packaging/kata-deploy/action packaging/kata-deploy/action
echo "::set-output name=PKG_SHA::${pkg_sha}"


@@ -0,0 +1,36 @@
on:
pull_request:
types:
- opened
- edited
- reopened
- synchronize
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
kata-deploy-runtime-classes-check:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Ensure the split out runtime classes match the all-in-one file
run: |
pushd tools/packaging/kata-deploy/runtimeclasses/
echo "::group::Combine runtime classes"
for runtimeClass in `find . -type f \( -name "*.yaml" -and -not -name "kata-runtimeClasses.yaml" \) | sort`; do
echo "Adding ${runtimeClass} to the resultingRuntimeClasses.yaml"
cat ${runtimeClass} >> resultingRuntimeClasses.yaml;
done
echo "::endgroup::"
echo "::group::Displaying the content of resultingRuntimeClasses.yaml"
cat resultingRuntimeClasses.yaml
echo "::endgroup::"
echo ""
echo "::group::Displaying the content of kata-runtimeClasses.yaml"
cat kata-runtimeClasses.yaml
echo "::endgroup::"
echo ""
diff resultingRuntimeClasses.yaml kata-runtimeClasses.yaml
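The check above concatenates the split runtime-class files in sorted order and diffs the result against the all-in-one file. The same idea in miniature, with two hypothetical stub files in a temp directory standing in for the real `runtimeclasses/` contents:

```shell
# Miniature version of the runtime-classes consistency check.
tmp=$(mktemp -d)
cd "$tmp"
printf 'kind: RuntimeClass\nhandler: kata-clh\n'  > kata-clh.yaml
printf 'kind: RuntimeClass\nhandler: kata-qemu\n' > kata-qemu.yaml
# The all-in-one file is expected to equal the sorted concatenation of the split files.
cat kata-clh.yaml kata-qemu.yaml > kata-runtimeClasses.yaml
for f in $(find . -type f \( -name "*.yaml" -and -not -name "kata-runtimeClasses.yaml" \) | sort); do
  cat "$f" >> resultingRuntimeClasses.yaml
done
diff resultingRuntimeClasses.yaml kata-runtimeClasses.yaml && echo "runtime classes in sync"
```

If someone edits a split file without regenerating the all-in-one file, the `diff` exits non-zero and the job fails.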


@@ -5,6 +5,10 @@ on:
- main
- stable-*
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
build-assets-amd64:
uses: ./.github/workflows/build-kata-static-tarball-amd64.yaml


@@ -4,6 +4,10 @@ on:
tags:
- '[0-9]+.[0-9]+.[0-9]+*'
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
build-and-push-assets-amd64:
uses: ./.github/workflows/release-amd64.yaml
@@ -117,6 +121,21 @@ jobs:
GITHUB_TOKEN=${{ secrets.GIT_UPLOAD_TOKEN }} hub release edit -m "" -a "${tarball}" "${tag}"
popd
upload-versions-yaml:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: upload versions.yaml
env:
GITHUB_TOKEN: ${{ secrets.GIT_UPLOAD_TOKEN }}
run: |
tag=$(echo $GITHUB_REF | cut -d/ -f3-)
pushd $GITHUB_WORKSPACE
versions_file="kata-containers-$tag-versions.yaml"
cp versions.yaml ${versions_file}
hub release edit -m "" -a "${versions_file}" "${tag}"
popd
upload-cargo-vendored-tarball:
needs: upload-multi-arch-static-tarball
runs-on: ubuntu-latest
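Several release jobs here derive the tag name by stripping the `refs/tags/` prefix from `GITHUB_REF` with `cut`. Standalone, with a hypothetical ref value:

```shell
# How `tag=$(echo $GITHUB_REF | cut -d/ -f3-)` behaves on a tag push.
GITHUB_REF="refs/tags/3.2.0-rc0"         # hypothetical example ref
tag=$(echo "$GITHUB_REF" | cut -d/ -f3-)  # drop the "refs/tags/" prefix
echo "$tag"                               # prints 3.2.0-rc0
```

Taking fields 3 onward (`-f3-`) rather than just field 3 keeps tag names that themselves contain slashes intact.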


@@ -15,6 +15,10 @@ on:
branches:
- main
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
check-pr-porting-labels:
runs-on: ubuntu-latest


@@ -0,0 +1,42 @@
name: CI | Run cri-containerd tests
on:
workflow_call:
inputs:
tarball-suffix:
required: false
type: string
commit-hash:
required: false
type: string
jobs:
run-cri-containerd:
strategy:
fail-fast: true
matrix:
containerd_version: ['lts', 'active']
vmm: ['clh', 'qemu']
runs-on: garm-ubuntu-2204
env:
CONTAINERD_VERSION: ${{ matrix.containerd_version }}
GOPATH: ${{ github.workspace }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
steps:
- uses: actions/checkout@v3
with:
ref: ${{ inputs.commit-hash }}
- name: Install dependencies
run: bash tests/integration/cri-containerd/gha-run.sh install-dependencies
- name: get-kata-tarball
uses: actions/download-artifact@v3
with:
name: kata-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-artifacts
- name: Install kata
run: bash tests/integration/cri-containerd/gha-run.sh install-kata kata-artifacts
- name: Run cri-containerd tests
run: bash tests/integration/cri-containerd/gha-run.sh run


@@ -40,37 +40,43 @@ jobs:
GH_PR_NUMBER: ${{ inputs.pr-number }}
KATA_HOST_OS: ${{ matrix.host_os }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
USING_NFD: "false"
steps:
- uses: actions/checkout@v3
with:
ref: ${{ inputs.commit-hash }}
- name: Download Azure CLI
run: bash tests/integration/gha-run.sh install-azure-cli
run: bash tests/integration/kubernetes/gha-run.sh install-azure-cli
- name: Log into the Azure account
run: bash tests/integration/gha-run.sh login-azure
run: bash tests/integration/kubernetes/gha-run.sh login-azure
env:
AZ_APPID: ${{ secrets.AZ_APPID }}
AZ_PASSWORD: ${{ secrets.AZ_PASSWORD }}
AZ_TENANT_ID: ${{ secrets.AZ_TENANT_ID }}
- name: Create AKS cluster
run: bash tests/integration/gha-run.sh create-cluster
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh create-cluster
- name: Install `bats`
run: bash tests/integration/gha-run.sh install-bats
run: bash tests/integration/kubernetes/gha-run.sh install-bats
- name: Install `kubectl`
run: bash tests/integration/gha-run.sh install-kubectl
run: bash tests/integration/kubernetes/gha-run.sh install-kubectl
- name: Download credentials for the Kubernetes CLI to use them
run: bash tests/integration/gha-run.sh get-cluster-credentials
run: bash tests/integration/kubernetes/gha-run.sh get-cluster-credentials
- name: Deploy Kata
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata-aks
- name: Run tests
timeout-minutes: 60
run: bash tests/integration/gha-run.sh run-tests-aks
run: bash tests/integration/kubernetes/gha-run.sh run-tests
- name: Delete AKS cluster
if: always()
run: bash tests/integration/gha-run.sh delete-cluster
run: bash tests/integration/kubernetes/gha-run.sh delete-cluster


@@ -29,15 +29,20 @@ jobs:
DOCKER_TAG: ${{ inputs.tag }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
KUBECONFIG: /home/kata/.kube/config
USING_NFD: "false"
steps:
- uses: actions/checkout@v3
with:
ref: ${{ inputs.commit-hash }}
- name: Deploy Kata
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata-sev
- name: Run tests
timeout-minutes: 30
run: bash tests/integration/gha-run.sh run-tests-sev
run: bash tests/integration/kubernetes/gha-run.sh run-tests
- name: Delete kata-deploy
if: always()
run: bash tests/integration/gha-run.sh cleanup-sev
run: bash tests/integration/kubernetes/gha-run.sh cleanup-sev


@@ -29,15 +29,20 @@ jobs:
DOCKER_TAG: ${{ inputs.tag }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
KUBECONFIG: /home/kata/.kube/config
USING_NFD: "false"
steps:
- uses: actions/checkout@v3
with:
ref: ${{ inputs.commit-hash }}
- name: Deploy Kata
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata-snp
- name: Run tests
timeout-minutes: 30
run: bash tests/integration/gha-run.sh run-tests-snp
run: bash tests/integration/kubernetes/gha-run.sh run-tests
- name: Delete kata-deploy
if: always()
run: bash tests/integration/gha-run.sh cleanup-snp
run: bash tests/integration/kubernetes/gha-run.sh cleanup-snp


@@ -28,16 +28,20 @@ jobs:
DOCKER_REPO: ${{ inputs.repo }}
DOCKER_TAG: ${{ inputs.tag }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
KUBECONFIG: /etc/rancher/k3s/k3s.yaml
USING_NFD: "true"
steps:
- uses: actions/checkout@v3
with:
ref: ${{ inputs.commit-hash }}
- name: Deploy Kata
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata-tdx
- name: Run tests
timeout-minutes: 30
run: bash tests/integration/gha-run.sh run-tests-tdx
run: bash tests/integration/kubernetes/gha-run.sh run-tests
- name: Delete kata-deploy
if: always()
run: bash tests/integration/gha-run.sh cleanup-tdx
run: bash tests/integration/kubernetes/gha-run.sh cleanup-tdx


@@ -46,9 +46,15 @@ jobs:
- name: run blogbench test
run: bash tests/metrics/gha-run.sh run-test-blogbench
- name: run tensorflow test
run: bash tests/metrics/gha-run.sh run-test-tensorflow
- name: run fio test
run: bash tests/metrics/gha-run.sh run-test-fio
- name: make metrics tarball ${{ matrix.vmm }}
run: bash tests/metrics/gha-run.sh make-tarball-results
- name: archive metrics results ${{ matrix.vmm }}
uses: actions/upload-artifact@v3
with:

.github/workflows/run-nydus-tests.yaml

@@ -0,0 +1,42 @@
name: CI | Run nydus tests
on:
workflow_call:
inputs:
tarball-suffix:
required: false
type: string
commit-hash:
required: false
type: string
jobs:
run-nydus:
strategy:
fail-fast: true
matrix:
containerd_version: ['lts', 'active']
vmm: ['clh', 'qemu', 'dragonball']
runs-on: garm-ubuntu-2204
env:
CONTAINERD_VERSION: ${{ matrix.containerd_version }}
GOPATH: ${{ github.workspace }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
steps:
- uses: actions/checkout@v3
with:
ref: ${{ inputs.commit-hash }}
- name: Install dependencies
run: bash tests/integration/nydus/gha-run.sh install-dependencies
- name: get-kata-tarball
uses: actions/download-artifact@v3
with:
name: kata-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-artifacts
- name: Install kata
run: bash tests/integration/nydus/gha-run.sh install-kata kata-artifacts
- name: Run nydus tests
run: bash tests/integration/nydus/gha-run.sh run

.github/workflows/run-vfio-tests.yaml

@@ -0,0 +1,37 @@
name: CI | Run vfio tests
on:
workflow_call:
inputs:
tarball-suffix:
required: false
type: string
commit-hash:
required: false
type: string
jobs:
run-vfio:
strategy:
fail-fast: false
matrix:
vmm: ['clh', 'qemu']
runs-on: garm-ubuntu-2204
env:
GOPATH: ${{ github.workspace }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
steps:
- uses: actions/checkout@v3
with:
ref: ${{ inputs.commit-hash }}
- name: Install dependencies
run: bash tests/functional/vfio/gha-run.sh install-dependencies
- name: get-kata-tarball
uses: actions/download-artifact@v3
with:
name: kata-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-artifacts
- name: Run vfio tests
run: bash tests/functional/vfio/gha-run.sh run


@@ -7,10 +7,14 @@ on:
- synchronize
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.jpeg', '**.svg', '/docs/**' ]
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
name: Static checks dragonball
jobs:
test-dragonball:
runs-on: self-hosted
runs-on: dragonball
env:
RUST_BACKTRACE: "1"
steps:


@@ -6,6 +6,10 @@ on:
- reopened
- synchronize
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
name: Static checks
jobs:
static-checks:
@@ -41,8 +45,8 @@ jobs:
cd "${{ github.workspace }}/src/github.com/${{ github.repository }}"
kernel_dir="tools/packaging/kernel/"
kernel_version_file="${kernel_dir}kata_config_version"
modified_files=$(git diff --name-only origin/CCv0..HEAD)
if git diff --name-only origin/CCv0..HEAD "${kernel_dir}" | grep "${kernel_dir}"; then
modified_files=$(git diff --name-only origin/main..HEAD)
if git diff --name-only origin/main..HEAD "${kernel_dir}" | grep "${kernel_dir}"; then
echo "Kernel directory has changed, checking if $kernel_version_file has been updated"
if echo "$modified_files" | grep -v "README.md" | grep "${kernel_dir}" >>"/dev/null"; then
echo "$modified_files" | grep "$kernel_version_file" >>/dev/null || ( echo "Please bump version in $kernel_version_file" && exit 1)
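The version-bump gate above reduces to two greps over the changed-file list; a minimal standalone sketch of that logic, using a hypothetical file list in place of `git diff --name-only origin/main..HEAD`:

```shell
# Hypothetical changed-file list standing in for the git diff output.
kernel_dir="tools/packaging/kernel/"
kernel_version_file="${kernel_dir}kata_config_version"
modified_files="tools/packaging/kernel/configs/fragments/x86_64/tdx.conf
tools/packaging/kernel/kata_config_version"

result="no kernel changes"
if echo "$modified_files" | grep -v "README.md" | grep -q "${kernel_dir}"; then
    # Kernel directory changed: the version file must be part of the change.
    if echo "$modified_files" | grep -q "$kernel_version_file"; then
        result="version bumped"
    else
        result="please bump version in $kernel_version_file"
    fi
fi
echo "$result"
```

With the sample list above, the version file is included, so the gate passes.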

View File

@@ -24,6 +24,10 @@ TOOLS += trace-forwarder
STANDARD_TARGETS = build check clean install static-checks-build test vendor
# Variables for the build-and-publish-kata-debug target
KATA_DEBUG_REGISTRY ?= ""
KATA_DEBUG_TAG ?= ""
default: all
include utils.mk
@@ -44,6 +48,9 @@ static-checks: static-checks-build
docs-url-alive-check:
bash ci/docs-url-alive-check.sh
build-and-publish-kata-debug:
bash tools/packaging/kata-debug/kata-debug-build-and-upload-payload.sh ${KATA_DEBUG_REGISTRY} ${KATA_DEBUG_TAG}
.PHONY: \
all \
kata-tarball \

View File

@@ -134,6 +134,7 @@ The table below lists the remaining parts of the project:
| [packaging](tools/packaging) | infrastructure | Scripts and metadata for producing packaged binaries<br/>(components, hypervisors, kernel and rootfs). |
| [kernel](https://www.kernel.org) | kernel | Linux kernel used by the hypervisor to boot the guest image. Patches are stored [here](tools/packaging/kernel). |
| [osbuilder](tools/osbuilder) | infrastructure | Tool to create "mini O/S" rootfs and initrd images and kernel for the hypervisor. |
| [kata-debug](tools/packaging/kata-debug/README.md) | infrastructure | Utility tool to gather Kata Containers debug information from Kubernetes clusters. |
| [`agent-ctl`](src/tools/agent-ctl) | utility | Tool that provides low-level access for testing the agent. |
| [`kata-ctl`](src/tools/kata-ctl) | utility | Tool that provides advanced commands and debug facilities. |
| [`log-parser-rs`](src/tools/log-parser-rs) | utility | Tool that aids in analyzing logs from the Kata runtime. |

View File

@@ -1 +1 @@
3.2.0-alpha3
3.2.0-rc0

View File

@@ -72,8 +72,7 @@ build_and_install_gperf() {
curl -sLO "${gperf_tarball_url}"
tar -xf "${gperf_tarball}"
pushd "gperf-${gperf_version}"
# gperf is a build time dependency of libseccomp and not to be used in the target.
# Unset $CC since that might point to a cross compiler.
# Unset $CC for configure; we always use the native compiler for gperf
CC= ./configure --prefix="${gperf_install_dir}"
make
make install
@@ -88,7 +87,8 @@ build_and_install_libseccomp() {
curl -sLO "${libseccomp_tarball_url}"
tar -xf "${libseccomp_tarball}"
pushd "libseccomp-${libseccomp_version}"
./configure --prefix="${libseccomp_install_dir}" CFLAGS="${cflags}" --enable-static --host="${arch}"
[ "${arch}" == $(uname -m) ] && cc_name="" || cc_name="${arch}-linux-gnu-gcc"
CC=${cc_name} ./configure --prefix="${libseccomp_install_dir}" CFLAGS="${cflags}" --enable-static --host="${arch}"
make
make install
popd
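The compiler selection added above (native `cc` when the target matches the host, a cross toolchain otherwise) can be read in isolation; a sketch, with `pick_cc` as an illustrative helper name not taken from the build scripts:

```shell
# Pick the C compiler for a libseccomp-style cross build: empty (native)
# when the target arch equals the host arch, <arch>-linux-gnu-gcc otherwise.
pick_cc() {
    if [ "$1" = "$(uname -m)" ]; then
        echo ""
    else
        echo "$1-linux-gnu-gcc"
    fi
}

pick_cc aarch64    # on an x86_64 host: aarch64-linux-gnu-gcc
```

`CC=$(pick_cc "${arch}") ./configure --host="${arch}" ...` then behaves like the one-liner in the diff: an empty `CC` lets configure fall back to the native compiler.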

View File

@@ -64,86 +64,3 @@ run_get_pr_changed_file_details()
source "$tests_repo_dir/.ci/lib.sh"
get_pr_changed_file_details
}
# Check if the 1st argument version is greater than or equal to the 2nd one
# Version format: [0-9]+ separated by period (e.g. 2.4.6, 1.11.3 and etc.)
#
# Parameters:
# $1 - a version to be tested
# $2 - a target version
#
# Return:
# 0 if $1 is greater than or equal to $2
# 1 otherwise
version_greater_than_equal() {
local current_version=$1
local target_version=$2
smaller_version=$(echo -e "$current_version\n$target_version" | sort -V | head -1)
if [ "${smaller_version}" = "${target_version}" ]; then
return 0
else
return 1
fi
}
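The helper relies on `sort -V` returning the smaller of the two versions first; a self-contained example (the function is reproduced with `printf` in place of `echo -e`):

```shell
# Returns 0 if $1 >= $2, 1 otherwise, using version sort.
version_greater_than_equal() {
    smaller_version=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -1)
    [ "${smaller_version}" = "$2" ]
}

version_greater_than_equal "2.17.1" "2.17.0" && echo yes   # 2.17.1 >= 2.17.0
version_greater_than_equal "2.16.0" "2.17.0" || echo no    # 2.16.0 <  2.17.0
```

This is exactly the check `build_secure_image` below uses to decide whether `genprotimg` needs the `--x-pcf '0xe0'` workaround.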
# Build an IBM zSystem Secure Execution (SE) image
#
# Parameters:
# $1 - kernel_parameters
# $2 - a source directory where kernel and initrd are located
# $3 - a destination directory where a SE image is built
#
# Return:
# 0 if the image is successfully built
# 1 otherwise
build_secure_image() {
kernel_params="${1:-}"
install_src_dir="${2:-}"
install_dest_dir="${3:-}"
if [ ! -f "${install_src_dir}/vmlinuz.container" ] ||
[ ! -f "${install_src_dir}/kata-containers-initrd.img" ]; then
cat << EOF >&2
Either the kernel or the initrd does not exist or is misnamed
The kernel file must be named vmlinuz.container (raw binary)
The initrd file must be named kata-containers-initrd.img
EOF
return 1
fi
cmdline="${kernel_params} panic=1 scsi_mod.scan=none swiotlb=262144"
parmfile="$(mktemp --suffix=-cmdline)"
echo "${cmdline}" > "${parmfile}"
chmod 600 "${parmfile}"
[ -n "${HKD_PATH:-}" ] || (echo >&2 "No host key document specified." && return 1)
cert_list=($(ls -1 $HKD_PATH))
declare hkd_options
eval "for cert in ${cert_list[*]}; do
hkd_options+=\"--host-key-document=\\\"\$HKD_PATH/\$cert\\\" \"
done"
command -v genprotimg > /dev/null 2>&1 || { apt update; apt install -y s390-tools; }
extra_arguments=""
genprotimg_version=$(genprotimg --version | grep -Po '(?<=version )[^-]+')
if ! version_greater_than_equal "${genprotimg_version}" "2.17.0"; then
extra_arguments="--x-pcf '0xe0'"
fi
eval genprotimg \
"${extra_arguments}" \
"${hkd_options}" \
--output="${install_dest_dir}/kata-containers-secure.img" \
--image="${install_src_dir}/vmlinuz.container" \
--ramdisk="${install_src_dir}/kata-containers-initrd.img" \
--parmfile="${parmfile}" \
--no-verify # no verification for CI testing purposes
build_result=$?
rm -f "${parmfile}"
if [ $build_result -eq 0 ]; then
return 0
else
return 1
fi
}

View File

@@ -14,6 +14,7 @@ Kata Containers design documents:
- [`Inotify` support](inotify.md)
- [`Hooks` support](hooks-handling.md)
- [Metrics(Kata 2.0)](kata-2-0-metrics.md)
- [Metrics in Rust Runtime(runtime-rs)](kata-metrics-in-runtime-rs.md)
- [Design for Kata Containers `Lazyload` ability with `nydus`](kata-nydus-design.md)
- [Design for direct-assigned volume](direct-blk-device-assignment.md)
- [Design for core-scheduling](core-scheduling.md)

View File

@@ -3,11 +3,11 @@
[Kubernetes](https://github.com/kubernetes/kubernetes/), or K8s, is a popular open source
container orchestration engine. In Kubernetes, a set of containers sharing resources
such as networking, storage, mount, PID, etc. is called a
[pod](https://kubernetes.io/docs/user-guide/pods/).
[pod](https://kubernetes.io/docs/concepts/workloads/pods/).
A node can have multiple pods, but at a minimum, a node within a Kubernetes cluster
only needs to run a container runtime and a container agent (called a
[Kubelet](https://kubernetes.io/docs/admin/kubelet/)).
[Kubelet](https://kubernetes.io/docs/concepts/overview/components/#kubelet)).
Kata Containers represents a Kubelet pod as a VM.

View File

@@ -0,0 +1,50 @@
# Kata Metrics in Rust Runtime (runtime-rs)
The Rust runtime (runtime-rs) is responsible for:
- Gather metrics about `shim`.
- Gather metrics from `hypervisor` (through `channel`).
- Get metrics from `agent` (through `ttrpc`).
---
Here are listed all the metrics gathered by `runtime-rs`.
> * Current status of each entry is marked as:
> * ✅DONE
> * 🚧TODO
### Kata Shim
| STATUS | Metric name | Type | Units | Labels |
| ------ | ------------------------------------------------------------ | ----------- | -------------- | ------------------------------------------------------------ |
| 🚧 | `kata_shim_agent_rpc_durations_histogram_milliseconds`: <br> RPC latency distributions. | `HISTOGRAM` | `milliseconds` | <ul><li>`action` (RPC actions of Kata agent)<ul><li>`grpc.CheckRequest`</li><li>`grpc.CloseStdinRequest`</li><li>`grpc.CopyFileRequest`</li><li>`grpc.CreateContainerRequest`</li><li>`grpc.CreateSandboxRequest`</li><li>`grpc.DestroySandboxRequest`</li><li>`grpc.ExecProcessRequest`</li><li>`grpc.GetMetricsRequest`</li><li>`grpc.GuestDetailsRequest`</li><li>`grpc.ListInterfacesRequest`</li><li>`grpc.ListProcessesRequest`</li><li>`grpc.ListRoutesRequest`</li><li>`grpc.MemHotplugByProbeRequest`</li><li>`grpc.OnlineCPUMemRequest`</li><li>`grpc.PauseContainerRequest`</li><li>`grpc.RemoveContainerRequest`</li><li>`grpc.ReseedRandomDevRequest`</li><li>`grpc.ResumeContainerRequest`</li><li>`grpc.SetGuestDateTimeRequest`</li><li>`grpc.SignalProcessRequest`</li><li>`grpc.StartContainerRequest`</li><li>`grpc.StatsContainerRequest`</li><li>`grpc.TtyWinResizeRequest`</li><li>`grpc.UpdateContainerRequest`</li><li>`grpc.UpdateInterfaceRequest`</li><li>`grpc.UpdateRoutesRequest`</li><li>`grpc.WaitProcessRequest`</li><li>`grpc.WriteStreamRequest`</li></ul></li><li>`sandbox_id`</li></ul> |
| ✅ | `kata_shim_fds`: <br> Kata containerd shim v2 open FDs. | `GAUGE` | | <ul><li>`sandbox_id`</li></ul> |
| ✅ | `kata_shim_io_stat`: <br> Kata containerd shim v2 process IO statistics. | `GAUGE` | | <ul><li>`item` (see `/proc/<pid>/io`)<ul><li>`cancelledwritebytes`</li><li>`rchar`</li><li>`readbytes`</li><li>`syscr`</li><li>`syscw`</li><li>`wchar`</li><li>`writebytes`</li></ul></li><li>`sandbox_id`</li></ul> |
| ✅ | `kata_shim_netdev`: <br> Kata containerd shim v2 network devices statistics. | `GAUGE` | | <ul><li>`interface` (network device name)</li><li>`item` (see `/proc/net/dev`)<ul><li>`recv_bytes`</li><li>`recv_compressed`</li><li>`recv_drop`</li><li>`recv_errs`</li><li>`recv_fifo`</li><li>`recv_frame`</li><li>`recv_multicast`</li><li>`recv_packets`</li><li>`sent_bytes`</li><li>`sent_carrier`</li><li>`sent_colls`</li><li>`sent_compressed`</li><li>`sent_drop`</li><li>`sent_errs`</li><li>`sent_fifo`</li><li>`sent_packets`</li></ul></li><li>`sandbox_id`</li></ul> |
| 🚧 | `kata_shim_pod_overhead_cpu`: <br> Kata Pod overhead for CPU resources(percent). | `GAUGE` | percent | <ul><li>`sandbox_id`</li></ul> |
| 🚧 | `kata_shim_pod_overhead_memory_in_bytes`: <br> Kata Pod overhead for memory resources(bytes). | `GAUGE` | `bytes` | <ul><li>`sandbox_id`</li></ul> |
| ✅ | `kata_shim_proc_stat`: <br> Kata containerd shim v2 process statistics. | `GAUGE` | | <ul><li>`item` (see `/proc/<pid>/stat`)<ul><li>`cstime`</li><li>`cutime`</li><li>`stime`</li><li>`utime`</li></ul></li><li>`sandbox_id`</li></ul> |
| ✅ | `kata_shim_proc_status`: <br> Kata containerd shim v2 process status. | `GAUGE` | | <ul><li>`item` (see `/proc/<pid>/status`)<ul><li>`hugetlbpages`</li><li>`nonvoluntary_ctxt_switches`</li><li>`rssanon`</li><li>`rssfile`</li><li>`rssshmem`</li><li>`vmdata`</li><li>`vmexe`</li><li>`vmhwm`</li><li>`vmlck`</li><li>`vmlib`</li><li>`vmpeak`</li><li>`vmpin`</li><li>`vmpmd`</li><li>`vmpte`</li><li>`vmrss`</li><li>`vmsize`</li><li>`vmstk`</li><li>`vmswap`</li><li>`voluntary_ctxt_switches`</li></ul></li><li>`sandbox_id`</li></ul> |
| 🚧 | `kata_shim_process_cpu_seconds_total`: <br> Total user and system CPU time spent in seconds. | `COUNTER` | `seconds` | <ul><li>`sandbox_id`</li></ul> |
| 🚧 | `kata_shim_process_max_fds`: <br> Maximum number of open file descriptors. | `GAUGE` | | <ul><li>`sandbox_id`</li></ul> |
| 🚧 | `kata_shim_process_open_fds`: <br> Number of open file descriptors. | `GAUGE` | | <ul><li>`sandbox_id`</li></ul> |
| 🚧 | `kata_shim_process_resident_memory_bytes`: <br> Resident memory size in bytes. | `GAUGE` | `bytes` | <ul><li>`sandbox_id`</li></ul> |
| 🚧 | `kata_shim_process_start_time_seconds`: <br> Start time of the process since `unix` epoch in seconds. | `GAUGE` | `seconds` | <ul><li>`sandbox_id`</li></ul> |
| 🚧 | `kata_shim_process_virtual_memory_bytes`: <br> Virtual memory size in bytes. | `GAUGE` | `bytes` | <ul><li>`sandbox_id`</li></ul> |
| 🚧 | `kata_shim_process_virtual_memory_max_bytes`: <br> Maximum amount of virtual memory available in bytes. | `GAUGE` | `bytes` | <ul><li>`sandbox_id`</li></ul> |
| 🚧 | `kata_shim_rpc_durations_histogram_milliseconds`: <br> RPC latency distributions. | `HISTOGRAM` | `milliseconds` | <ul><li>`action` (Kata shim v2 actions)<ul><li>`checkpoint`</li><li>`close_io`</li><li>`connect`</li><li>`create`</li><li>`delete`</li><li>`exec`</li><li>`kill`</li><li>`pause`</li><li>`pids`</li><li>`resize_pty`</li><li>`resume`</li><li>`shutdown`</li><li>`start`</li><li>`state`</li><li>`stats`</li><li>`update`</li><li>`wait`</li></ul></li><li>`sandbox_id`</li></ul> |
| ✅ | `kata_shim_threads`: <br> Kata containerd shim v2 process threads. | `GAUGE` | | <ul><li>`sandbox_id`</li></ul> |
### Kata Hypervisor
Unlike the golang runtime, the hypervisor and the shim in runtime-rs belong to the **same process**, so the metrics previously gathered separately for the hypervisor and the shim only need to be gathered once. Thus, we currently collect them only in the kata shim.
At the same time, we added an interface (`VmmAction::GetHypervisorMetrics`) to gather hypervisor metrics, in case we design tailor-made metrics for the hypervisor in the future. Here are the metrics exposed from [src/dragonball/src/metric.rs](https://github.com/kata-containers/kata-containers/blob/main/src/dragonball/src/metric.rs).
| Metric name | Type | Units | Labels |
| ------------------------------------------------------------ | ---------- | ----- | ------------------------------------------------------------ |
| `kata_hypervisor_scrape_count`: <br> Metrics scrape count | `COUNTER` | | <ul><li>`sandbox_id`</li></ul> |
| `kata_hypervisor_vcpu`: <br>Hypervisor metrics specific to VCPUs' mode of functioning. | `IntGauge` | | <ul><li>`item`<ul><li>`exit_io_in`</li><li>`exit_io_out`</li><li>`exit_mmio_read`</li><li>`exit_mmio_write`</li><li>`failures`</li><li>`filter_cpuid`</li></ul></li><li>`sandbox_id`</li></ul> |
| `kata_hypervisor_seccomp`: <br> Hypervisor metrics for the seccomp filtering. | `IntGauge` | | <ul><li>`item`<ul><li>`num_faults`</li></ul></li><li>`sandbox_id`</li></ul> |
| `kata_hypervisor_signals`: <br> Hypervisor metrics related to signals. | `IntGauge` | | <ul><li>`item`<ul><li>`sigbus`</li><li>`sigsegv`</li></ul></li><li>`sandbox_id`</li></ul> |
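The metrics above are served in the standard Prometheus text exposition format; a hedged sketch of pulling one shim gauge out of a scrape (the sample lines are illustrative, not real output):

```shell
# Sample exposition text, as a scrape of the shim's metrics might look.
scrape='# HELP kata_shim_threads Kata containerd shim v2 process threads.
# TYPE kata_shim_threads gauge
kata_shim_threads{sandbox_id="0123456789"} 24'

# Extract the value of kata_shim_threads, skipping HELP/TYPE comment lines.
threads=$(printf '%s\n' "$scrape" | awk '!/^#/ && /^kata_shim_threads/ {print $2}')
echo "$threads"   # 24
```

The same pattern applies to any of the gauges and counters in the tables above; labels such as `sandbox_id` appear in the braces of each sample line.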

View File

@@ -45,8 +45,4 @@
- [How to run Kata Containers with `nydus`](how-to-use-virtio-fs-nydus-with-kata.md)
- [How to run Kata Containers with AMD SEV-SNP](how-to-run-kata-containers-with-SNP-VMs.md)
- [How to use EROFS to build rootfs in Kata Containers](how-to-use-erofs-build-rootfs.md)
- [How to run Kata Containers with kinds of Block Volumes](how-to-run-kata-containers-with-kinds-of-Block-Volumes.md)
## Confidential Containers
- [How to use build and test the Confidential Containers `CCv0` proof of concept](how-to-build-and-test-ccv0.md)
- [How to generate a Kata Containers payload for the Confidential Containers Operator](how-to-generate-a-kata-containers-payload-for-the-confidential-containers-operator.md)
- [How to run Kata Containers with kinds of Block Volumes](how-to-run-kata-containers-with-kinds-of-Block-Volumes.md)

View File

@@ -1,635 +0,0 @@
#!/bin/bash -e
#
# Copyright (c) 2021, 2023 IBM Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
# Disclaimer: This script is work in progress for supporting the CCv0 prototype
# It shouldn't be considered supported by the Kata Containers community, or anyone else
# Based on https://github.com/kata-containers/kata-containers/blob/main/docs/Developer-Guide.md,
# but with elements of the tests/.ci scripts used
readonly script_name="$(basename "${BASH_SOURCE[0]}")"
# By default in Golang >= 1.16 GO111MODULE is set to "on", but not all modules support it, so override it to "auto"
export GO111MODULE="auto"
# Setup kata containers environments if not set - we default to use containerd
export CRI_CONTAINERD=${CRI_CONTAINERD:-"yes"}
export CRI_RUNTIME=${CRI_RUNTIME:-"containerd"}
export CRIO=${CRIO:-"no"}
export KATA_HYPERVISOR="${KATA_HYPERVISOR:-qemu}"
export KUBERNETES=${KUBERNETES:-"no"}
export AGENT_INIT="${AGENT_INIT:-${TEST_INITRD:-no}}"
export AA_KBC="${AA_KBC:-offline_fs_kbc}"
export KATA_BUILD_CC=${KATA_BUILD_CC:-"yes"}
export TEE_TYPE=${TEE_TYPE:-}
export PREFIX="${PREFIX:-/opt/confidential-containers}"
export RUNTIME_CONFIG_PATH="${RUNTIME_CONFIG_PATH:-${PREFIX}/share/defaults/kata-containers/configuration.toml}"
# Allow the user to overwrite the default repo and branch names if they want to build from a fork
export katacontainers_repo="${katacontainers_repo:-github.com/kata-containers/kata-containers}"
export katacontainers_branch="${katacontainers_branch:-CCv0}"
export kata_default_branch=${katacontainers_branch}
export tests_repo="${tests_repo:-github.com/kata-containers/tests}"
export tests_branch="${tests_branch:-CCv0}"
export target_branch=${tests_branch} # kata-containers/ci/lib.sh uses target branch var to check out tests repo
# if .bash_profile exists then use it, otherwise fall back to .profile
export PROFILE="${HOME}/.profile"
if [ -r "${HOME}/.bash_profile" ]; then
export PROFILE="${HOME}/.bash_profile"
fi
# Stop PS1: unbound variable error happening
export PS1=${PS1:-}
# Create a bunch of common, derived values up front so we don't need to create them in all the different functions
. ${PROFILE}
if [ -z ${GOPATH} ]; then
export GOPATH=${HOME}/go
fi
export tests_repo_dir="${GOPATH}/src/${tests_repo}"
export katacontainers_repo_dir="${GOPATH}/src/${katacontainers_repo}"
export ROOTFS_DIR="${katacontainers_repo_dir}/tools/osbuilder/rootfs-builder/rootfs"
export PULL_IMAGE="${PULL_IMAGE:-quay.io/kata-containers/confidential-containers:signed}" # Doesn't need authentication
export CONTAINER_ID="${CONTAINER_ID:-0123456789}"
source /etc/os-release || source /usr/lib/os-release
grep -Eq "\<fedora\>" /etc/os-release 2> /dev/null && export USE_PODMAN=true
# If we've already checked out the test repo then source the confidential scripts
if [ "${KUBERNETES}" == "yes" ]; then
export BATS_TEST_DIRNAME="${tests_repo_dir}/integration/kubernetes/confidential"
[ -d "${BATS_TEST_DIRNAME}" ] && source "${BATS_TEST_DIRNAME}/lib.sh"
else
export BATS_TEST_DIRNAME="${tests_repo_dir}/integration/containerd/confidential"
[ -d "${BATS_TEST_DIRNAME}" ] && source "${BATS_TEST_DIRNAME}/lib.sh"
fi
[ -d "${BATS_TEST_DIRNAME}" ] && source "${BATS_TEST_DIRNAME}/../../confidential/lib.sh"
usage() {
exit_code="$1"
cat <<EOF
Overview:
Build and test kata containers from source
Optionally set kata-containers and tests repo and branch as exported variables before running
e.g. export katacontainers_repo=github.com/stevenhorsman/kata-containers && export katacontainers_branch=kata-ci-from-fork && export tests_repo=github.com/stevenhorsman/tests && export tests_branch=kata-ci-from-fork && ~/${script_name} build_and_install_all
Usage:
${script_name} [options] <command>
Commands:
- agent_create_container: Run CreateContainer command against the agent with agent-ctl
- agent_pull_image: Run PullImage command against the agent with agent-ctl
- all: Build and install everything, test kata with containerd and capture the logs
- build_and_add_agent_to_rootfs: Builds the kata-agent and adds it to the rootfs
- build_and_install_all: Build and install everything
- build_and_install_rootfs: Builds and installs the rootfs image
- build_kata_runtime: Build and install the kata runtime
- build_cloud_hypervisor: Checkout, patch, build and install Cloud Hypervisor
- build_qemu: Checkout, patch, build and install QEMU
- configure: Configure Kata to use rootfs and enable debug
- connect_to_ssh_demo_pod: Ssh into the ssh demo pod, showing that the decryption succeeded
- copy_signature_files_to_guest: Copies signature verification files to guest
- create_rootfs: Create a local rootfs
- crictl_create_cc_container: Use crictl to create a new busybox container in the kata cc pod
- crictl_create_cc_pod: Use crictl to create a new kata cc pod
- crictl_delete_cc: Use crictl to delete the kata cc pod sandbox and container in it
- help: Display this help
- init_kubernetes: initialize a Kubernetes cluster on this system
- initialize: Install dependencies and check out kata-containers source
- install_guest_kernel: Setup, build and install the guest kernel
- kubernetes_create_cc_pod: Create a Kata CC runtime busybox-based pod in Kubernetes
- kubernetes_create_ssh_demo_pod: Create a Kata CC runtime pod based on the ssh demo
- kubernetes_delete_cc_pod: Delete the Kata CC runtime busybox-based pod in Kubernetes
- kubernetes_delete_ssh_demo_pod: Delete the Kata CC runtime pod based on the ssh demo
- open_kata_shell: Open a shell into the kata runtime
- rebuild_and_install_kata: Rebuild the kata runtime and agent and build and install the image
- shim_pull_image: Run PullImage command against the shim with ctr
- test_capture_logs: Test using kata with containerd and capture the logs in the user's home directory
- test: Test using kata with containerd
Options:
-d: Enable debug
-h: Display this help
EOF
# If the script is sourced, don't exit as that would exit the main shell; just return instead
[[ $_ != $0 ]] && return "$exit_code" || exit "$exit_code"
}
build_and_install_all() {
initialize
build_and_install_kata_runtime
configure
create_a_local_rootfs
build_and_install_rootfs
install_guest_kernel_image
case "$KATA_HYPERVISOR" in
"qemu")
build_qemu
;;
"cloud-hypervisor")
build_cloud_hypervisor
;;
*)
echo "Invalid option: $KATA_HYPERVISOR is not supported." >&2
;;
esac
check_kata_runtime
if [ "${KUBERNETES}" == "yes" ]; then
init_kubernetes
fi
}
rebuild_and_install_kata() {
checkout_tests_repo
checkout_kata_containers_repo
build_and_install_kata_runtime
build_and_add_agent_to_rootfs
build_and_install_rootfs
check_kata_runtime
}
# Based on the jenkins_job_build.sh script in kata-containers/tests/.ci - checks out source code and installs dependencies
initialize() {
# We need git to checkout and bootstrap the ci scripts and some other packages used in testing
sudo apt-get update && sudo apt-get install -y curl git qemu-utils
grep -qxF "export GOPATH=\${HOME}/go" "${PROFILE}" || echo "export GOPATH=\${HOME}/go" >> "${PROFILE}"
grep -qxF "export GOROOT=/usr/local/go" "${PROFILE}" || echo "export GOROOT=/usr/local/go" >> "${PROFILE}"
grep -qxF "export PATH=\${GOPATH}/bin:/usr/local/go/bin:\${PATH}" "${PROFILE}" || echo "export PATH=\${GOPATH}/bin:/usr/local/go/bin:\${PATH}" >> "${PROFILE}"
# Load the new go and PATH parameters from the profile
. ${PROFILE}
mkdir -p "${GOPATH}"
checkout_tests_repo
pushd "${tests_repo_dir}"
local ci_dir_name=".ci"
sudo -E PATH=$PATH -s "${ci_dir_name}/install_go.sh" -p -f
sudo -E PATH=$PATH -s "${ci_dir_name}/install_rust.sh"
# Need to change ownership of rustup so later processes can create temp files there
sudo chown -R ${USER}:${USER} "${HOME}/.rustup"
checkout_kata_containers_repo
# Run setup, but don't install kata as we will build it ourselves in locations matching the developer guide
export INSTALL_KATA="no"
sudo -E PATH=$PATH -s ${ci_dir_name}/setup.sh
# Reload the profile to pick up installed dependencies
. ${PROFILE}
popd
}
checkout_tests_repo() {
echo "Creating repo: ${tests_repo} and branch ${tests_branch} into ${tests_repo_dir}..."
# Due to git https://github.blog/2022-04-12-git-security-vulnerability-announced/ the tests repo needs
# to be owned by root as it is re-checked out in rootfs.sh
mkdir -p $(dirname "${tests_repo_dir}")
[ -d "${tests_repo_dir}" ] || sudo -E git clone "https://${tests_repo}.git" "${tests_repo_dir}"
sudo -E chown -R root:root "${tests_repo_dir}"
pushd "${tests_repo_dir}"
sudo -E git fetch
if [ -n "${tests_branch}" ]; then
sudo -E git checkout ${tests_branch}
fi
sudo -E git reset --hard origin/${tests_branch}
popd
source "${BATS_TEST_DIRNAME}/lib.sh"
source "${BATS_TEST_DIRNAME}/../../confidential/lib.sh"
}
# Note: clone_katacontainers_repo uses go, so go needs to be installed first
checkout_kata_containers_repo() {
source "${tests_repo_dir}/.ci/lib.sh"
echo "Creating repo: ${katacontainers_repo} and branch ${kata_default_branch} into ${katacontainers_repo_dir}..."
clone_katacontainers_repo
sudo -E chown -R ${USER}:${USER} "${katacontainers_repo_dir}"
}
build_and_install_kata_runtime() {
export DEFAULT_HYPERVISOR=${KATA_HYPERVISOR}
${tests_repo_dir}/.ci/install_runtime.sh
}
configure() {
# configure kata to use rootfs, not initrd
sudo sed -i 's/^\(initrd =.*\)/# \1/g' ${RUNTIME_CONFIG_PATH}
enable_full_debug
enable_agent_console
# Switch image offload to true in kata config
switch_image_service_offload "on"
configure_cc_containerd
# From crictl v1.24.1 the default timeout leads to pod creation failing, so update it
sudo crictl config --set timeout=10
# Verity checks aren't working locally (possibly because we aren't regenerating the hash), so remove the check from the kernel parameters
remove_kernel_param "cc_rootfs_verity.scheme"
}
build_and_add_agent_to_rootfs() {
build_a_custom_kata_agent
add_custom_agent_to_rootfs
}
build_a_custom_kata_agent() {
# Install libseccomp for static linking
sudo -E PATH=$PATH GOPATH=$GOPATH ${katacontainers_repo_dir}/ci/install_libseccomp.sh /tmp/kata-libseccomp /tmp/kata-gperf
export LIBSECCOMP_LINK_TYPE=static
export LIBSECCOMP_LIB_PATH=/tmp/kata-libseccomp/lib
. "$HOME/.cargo/env"
pushd ${katacontainers_repo_dir}/src/agent
sudo -E PATH=$PATH make
ARCH=$(uname -m)
[ ${ARCH} == "ppc64le" ] || [ ${ARCH} == "s390x" ] && export LIBC=gnu || export LIBC=musl
[ ${ARCH} == "ppc64le" ] && export ARCH=powerpc64le
# Run a make install into the rootfs directory in order to create the kata-agent.service file, which is required when we add the agent to the rootfs
sudo -E PATH=$PATH make install DESTDIR="${ROOTFS_DIR}"
popd
}
create_a_local_rootfs() {
sudo rm -rf "${ROOTFS_DIR}"
pushd ${katacontainers_repo_dir}/tools/osbuilder/rootfs-builder
export distro="ubuntu"
[[ -z "${USE_PODMAN:-}" ]] && use_docker="${use_docker:-1}"
sudo -E OS_VERSION="${OS_VERSION:-}" GOPATH=$GOPATH EXTRA_PKGS="vim iputils-ping net-tools" DEBUG="${DEBUG:-}" USE_DOCKER="${use_docker:-}" SKOPEO=${SKOPEO:-} AA_KBC=${AA_KBC:-} UMOCI=yes SECCOMP=yes ./rootfs.sh -r ${ROOTFS_DIR} ${distro}
# install_rust.sh, run as part of rootfs.sh, switches us to the main branch of the tests repo, so switch back now
pushd "${tests_repo_dir}"
sudo -E git checkout ${tests_branch}
popd
# During the ./rootfs.sh call the kata agent is built as root, so we need to update the permissions so that we can rebuild it
sudo chown -R ${USER}:${USER} "${katacontainers_repo_dir}/src/agent/"
popd
}
add_custom_agent_to_rootfs() {
pushd ${katacontainers_repo_dir}/tools/osbuilder/rootfs-builder
ARCH=$(uname -m)
[ ${ARCH} == "ppc64le" ] || [ ${ARCH} == "s390x" ] && export LIBC=gnu || export LIBC=musl
[ ${ARCH} == "ppc64le" ] && export ARCH=powerpc64le
sudo install -o root -g root -m 0550 -t ${ROOTFS_DIR}/usr/bin ${katacontainers_repo_dir}/src/agent/target/${ARCH}-unknown-linux-${LIBC}/release/kata-agent
sudo install -o root -g root -m 0440 ../../../src/agent/kata-agent.service ${ROOTFS_DIR}/usr/lib/systemd/system/
sudo install -o root -g root -m 0440 ../../../src/agent/kata-containers.target ${ROOTFS_DIR}/usr/lib/systemd/system/
popd
}
build_and_install_rootfs() {
build_rootfs_image
install_rootfs_image
}
build_rootfs_image() {
pushd ${katacontainers_repo_dir}/tools/osbuilder/image-builder
# Logic from install_kata_image.sh - if we aren't using podman (i.e. not on a Fedora-like distro), then use docker
[[ -z "${USE_PODMAN:-}" ]] && use_docker="${use_docker:-1}"
sudo -E USE_DOCKER="${use_docker:-}" ./image_builder.sh ${ROOTFS_DIR}
popd
}
install_rootfs_image() {
pushd ${katacontainers_repo_dir}/tools/osbuilder/image-builder
local commit=$(git log --format=%h -1 HEAD)
local date=$(date +%Y-%m-%d-%T.%N%z)
local image="kata-containers-${date}-${commit}"
sudo install -o root -g root -m 0640 -D kata-containers.img "${PREFIX}/share/kata-containers/${image}"
(cd ${PREFIX}/share/kata-containers && sudo ln -sf "$image" kata-containers.img)
echo "Built Rootfs from ${ROOTFS_DIR} to ${PREFIX}/share/kata-containers/${image}"
ls -al ${PREFIX}/share/kata-containers
popd
}
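`install_rootfs_image` above installs a date-and-commit versioned file and points a stable `kata-containers.img` symlink at it, so the active image can be swapped without changing the configured path. The pattern in isolation (the directory and image name here are illustrative):

```shell
# Install a versioned artifact and point a stable symlink at it.
dest="$(mktemp -d)"
image="kata-containers-2023-08-02-abc1234"
touch "${dest}/${image}"

# -sf: create a symbolic link, replacing any previous one atomically enough
# that readers of kata-containers.img always see some valid image.
(cd "${dest}" && ln -sf "${image}" kata-containers.img)
readlink "${dest}/kata-containers.img"   # kata-containers-2023-08-02-abc1234
```

Repeating the `ln -sf` with a newer versioned file retargets the symlink while old images remain on disk for rollback.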
install_guest_kernel_image() {
${tests_repo_dir}/.ci/install_kata_kernel.sh
}
build_qemu() {
${tests_repo_dir}/.ci/install_virtiofsd.sh
${tests_repo_dir}/.ci/install_qemu.sh
}
build_cloud_hypervisor() {
${tests_repo_dir}/.ci/install_virtiofsd.sh
${tests_repo_dir}/.ci/install_cloud_hypervisor.sh
}
check_kata_runtime() {
sudo kata-runtime check
}
k8s_pod_file="${HOME}/busybox-cc.yaml"
init_kubernetes() {
# Check that kubeadm was installed and install it otherwise
if ! [ -x "$(command -v kubeadm)" ]; then
pushd "${tests_repo_dir}/.ci"
sudo -E PATH=$PATH -s install_kubernetes.sh
if [ "${CRI_CONTAINERD}" == "yes" ]; then
sudo -E PATH=$PATH -s "configure_containerd_for_kubernetes.sh"
fi
popd
fi
# If kubernetes init has previously run we need to clean up by removing the registry container and resetting k8s
local cid=$(sudo docker ps -a -q -f name=^/kata-registry$)
if [ -n "${cid}" ]; then
sudo docker stop ${cid} && sudo docker rm ${cid}
fi
local k8s_nodes=$(kubectl get nodes -o name 2>/dev/null || true)
if [ -n "${k8s_nodes}" ]; then
sudo kubeadm reset -f
fi
export CI="true" && sudo -E PATH=$PATH -s ${tests_repo_dir}/integration/kubernetes/init.sh
sudo chown ${USER}:$(id -g -n ${USER}) "$HOME/.kube/config"
cat << EOF > ${k8s_pod_file}
apiVersion: v1
kind: Pod
metadata:
name: busybox-cc
spec:
runtimeClassName: kata
containers:
- name: nginx
image: quay.io/kata-containers/confidential-containers:signed
imagePullPolicy: Always
EOF
}
call_kubernetes_create_cc_pod() {
kubernetes_create_cc_pod ${k8s_pod_file}
}
call_kubernetes_delete_cc_pod() {
pod_name=$(kubectl get pods -o jsonpath='{.items..metadata.name}')
kubernetes_delete_cc_pod $pod_name
}
call_kubernetes_create_ssh_demo_pod() {
setup_decryption_files_in_guest
kubernetes_create_ssh_demo_pod
}
call_connect_to_ssh_demo_pod() {
connect_to_ssh_demo_pod
}
call_kubernetes_delete_ssh_demo_pod() {
pod=$(kubectl get pods -o jsonpath='{.items..metadata.name}')
kubernetes_delete_ssh_demo_pod $pod
}
crictl_sandbox_name=kata-cc-busybox-sandbox
call_crictl_create_cc_pod() {
# Update iptables to allow forwarding to the cni0 bridge avoiding issues caused by the docker0 bridge
sudo iptables -P FORWARD ACCEPT
# get_pod_config in tests_common exports `pod_config` that points to the prepared pod config yaml
get_pod_config
crictl_delete_cc_pod_if_exists "${crictl_sandbox_name}"
crictl_create_cc_pod "${pod_config}"
sudo crictl pods
}
call_crictl_create_cc_container() {
# Create container configuration yaml based on our test copy of busybox
# get_pod_config in tests_common exports `pod_config` that points to the prepared pod config yaml
get_pod_config
local container_config="${FIXTURES_DIR}/${CONTAINER_CONFIG_FILE:-container-config.yaml}"
local pod_name=${crictl_sandbox_name}
crictl_create_cc_container ${pod_name} ${pod_config} ${container_config}
sudo crictl ps -a
}
crictl_delete_cc() {
crictl_delete_cc_pod ${crictl_sandbox_name}
}
test_kata_runtime() {
echo "Running ctr with the kata runtime..."
local test_image="quay.io/kata-containers/confidential-containers:signed"
if [ -z $(sudo ctr images ls -q name=="${test_image}") ]; then
sudo ctr image pull "${test_image}"
fi
sudo ctr run --runtime "io.containerd.kata.v2" --rm -t "${test_image}" test-kata uname -a
}
run_kata_and_capture_logs() {
echo "Clearing systemd journal..."
sudo systemctl stop systemd-journald
sudo rm -f /var/log/journal/*/* /run/log/journal/*/*
sudo systemctl start systemd-journald
test_kata_runtime
echo "Collecting logs..."
sudo journalctl -q -o cat -a -t kata-runtime > ${HOME}/kata-runtime.log
sudo journalctl -q -o cat -a -t kata > ${HOME}/shimv2.log
echo "Logs output to ${HOME}/kata-runtime.log and ${HOME}/shimv2.log"
}
get_ids() {
guest_cid=$(sudo ss -H --vsock | awk '{print $6}' | cut -d: -f1)
sandbox_id=$(ps -ef | grep containerd-shim-kata-v2 | egrep -o "id [^,][^,].* " | awk '{print $2}')
}
open_kata_shell() {
get_ids
sudo -E "PATH=$PATH" kata-runtime exec ${sandbox_id}
}
build_bundle_dir_if_necessary() {
bundle_dir="/tmp/bundle"
if [ ! -d "${bundle_dir}" ]; then
rootfs_dir="$bundle_dir/rootfs"
image="quay.io/kata-containers/confidential-containers:signed"
mkdir -p "$rootfs_dir" && (cd "$bundle_dir" && runc spec)
sudo docker export $(sudo docker create "$image") | tar -C "$rootfs_dir" -xvf -
fi
# There were errors in the create container agent-ctl command due to /bin/ seemingly not being on the path, so hardcode it
sudo sed -i -e 's%^\(\t*\)"sh"$%\1"/bin/sh"%g' "${bundle_dir}/config.json"
}
build_agent_ctl() {
cd ${GOPATH}/src/${katacontainers_repo}/src/tools/agent-ctl/
if [ -e "${HOME}/.cargo/registry" ]; then
sudo chown -R ${USER}:${USER} "${HOME}/.cargo/registry"
fi
sudo -E PATH=$PATH -s make
ARCH=$(uname -m)
[ "${ARCH}" == "ppc64le" ] || [ "${ARCH}" == "s390x" ] && export LIBC=gnu || export LIBC=musl
[ "${ARCH}" == "ppc64le" ] && export ARCH=powerpc64le
cd "./target/${ARCH}-unknown-linux-${LIBC}/release/"
}
run_agent_ctl_command() {
get_ids
build_bundle_dir_if_necessary
command=$1
# If kata-agent-ctl is pre-built in this directory, use it directly; otherwise build it first and switch to the release directory
if [ ! -x kata-agent-ctl ]; then
build_agent_ctl
fi
./kata-agent-ctl -l debug connect --bundle-dir "${bundle_dir}" --server-address "vsock://${guest_cid}:1024" -c "${command}"
}
agent_pull_image() {
run_agent_ctl_command "PullImage image=${PULL_IMAGE} cid=${CONTAINER_ID} source_creds=${SOURCE_CREDS}"
}
agent_create_container() {
run_agent_ctl_command "CreateContainer cid=${CONTAINER_ID}"
}
shim_pull_image() {
get_ids
local ctr_shim_command="sudo ctr --namespace k8s.io shim --id ${sandbox_id} pull-image ${PULL_IMAGE} ${CONTAINER_ID}"
echo "Issuing command '${ctr_shim_command}'"
${ctr_shim_command}
}
call_copy_signature_files_to_guest() {
# TODO #5173 - remove this once the kernel_params aren't ignored by the agent config
export DEBUG_CONSOLE="true"
if [ "${SKOPEO:-}" = "yes" ]; then
add_kernel_params "agent.container_policy_file=/etc/containers/quay_verification/quay_policy.json"
setup_skopeo_signature_files_in_guest
else
# TODO #4888 - set config to specifically enable signature verification to be on in ImageClient
setup_offline_fs_kbc_signature_files_in_guest
fi
}
main() {
while getopts "dh" opt; do
case "$opt" in
d)
export DEBUG="-d"
set -x
;;
h)
usage 0
;;
\?)
echo "Invalid option: -$OPTARG" >&2
usage 1
;;
esac
done
shift $((OPTIND - 1))
subcmd="${1:-}"
[ -z "${subcmd}" ] && usage 1
case "${subcmd}" in
all)
build_and_install_all
run_kata_and_capture_logs
;;
build_and_install_all)
build_and_install_all
;;
rebuild_and_install_kata)
rebuild_and_install_kata
;;
initialize)
initialize
;;
build_kata_runtime)
build_and_install_kata_runtime
;;
configure)
configure
;;
create_rootfs)
create_a_local_rootfs
;;
build_and_add_agent_to_rootfs)
build_and_add_agent_to_rootfs
;;
build_and_install_rootfs)
build_and_install_rootfs
;;
install_guest_kernel)
install_guest_kernel_image
;;
build_cloud_hypervisor)
build_cloud_hypervisor
;;
build_qemu)
build_qemu
;;
init_kubernetes)
init_kubernetes
;;
crictl_create_cc_pod)
call_crictl_create_cc_pod
;;
crictl_create_cc_container)
call_crictl_create_cc_container
;;
crictl_delete_cc)
crictl_delete_cc
;;
kubernetes_create_cc_pod)
call_kubernetes_create_cc_pod
;;
kubernetes_delete_cc_pod)
call_kubernetes_delete_cc_pod
;;
kubernetes_create_ssh_demo_pod)
call_kubernetes_create_ssh_demo_pod
;;
connect_to_ssh_demo_pod)
call_connect_to_ssh_demo_pod
;;
kubernetes_delete_ssh_demo_pod)
call_kubernetes_delete_ssh_demo_pod
;;
test)
test_kata_runtime
;;
test_capture_logs)
run_kata_and_capture_logs
;;
open_kata_console)
open_kata_console
;;
open_kata_shell)
open_kata_shell
;;
agent_pull_image)
agent_pull_image
;;
shim_pull_image)
shim_pull_image
;;
agent_create_container)
agent_create_container
;;
copy_signature_files_to_guest)
call_copy_signature_files_to_guest
;;
*)
usage 1
;;
esac
}
main "$@"


@@ -1,45 +0,0 @@
# Copyright (c) 2021 IBM Corp.
#
# SPDX-License-Identifier: Apache-2.0
#
aa_kbc_params = "$AA_KBC_PARAMS"
https_proxy = "$HTTPS_PROXY"
[endpoints]
allowed = [
"AddARPNeighborsRequest",
"AddSwapRequest",
"CloseStdinRequest",
"CopyFileRequest",
"CreateContainerRequest",
"CreateSandboxRequest",
"DestroySandboxRequest",
#"ExecProcessRequest",
"GetMetricsRequest",
"GetOOMEventRequest",
"GuestDetailsRequest",
"ListInterfacesRequest",
"ListRoutesRequest",
"MemHotplugByProbeRequest",
"OnlineCPUMemRequest",
"PauseContainerRequest",
"PullImageRequest",
"ReadStreamRequest",
"RemoveContainerRequest",
#"ReseedRandomDevRequest",
"ResizeVolumeRequest",
"ResumeContainerRequest",
"SetGuestDateTimeRequest",
"SignalProcessRequest",
"StartContainerRequest",
"StartTracingRequest",
"StatsContainerRequest",
"StopTracingRequest",
"TtyWinResizeRequest",
"UpdateContainerRequest",
"UpdateInterfaceRequest",
"UpdateRoutesRequest",
"VolumeStatsRequest",
"WaitProcessRequest",
"WriteStreamRequest"
]


@@ -1,475 +0,0 @@
# How to build, run and test Kata CCv0
## Introduction and Background
In order to make building (locally) and demoing the Kata Containers `CCv0` code base as simple as possible, I've
shared a script [`ccv0.sh`](./ccv0.sh). This script was originally my attempt to automate the steps of the
[Developer Guide](https://github.com/kata-containers/kata-containers/blob/main/docs/Developer-Guide.md) so that I could do
different sections of them repeatedly and reliably as I was playing around with make changes to different parts of the
Kata code base. I then tried to weave in some of the [`tests/.ci`](https://github.com/kata-containers/tests/tree/main/.ci)
scripts in order to have less duplicated code.
As we progress on the confidential containers journey, I hope to add more features to demonstrate the functionality
we have working.
*Disclaimer: This script has mostly just been used and tested by me ([@stevenhorsman](https://github.com/stevenhorsman)),*
*so there might be issues with it. I'm happy to try and help solve these if possible, but this shouldn't be considered a*
*fully supported process by the Kata Containers community.*
### Basic script set-up and optional environment variables
In order to build, configure and demo the CCv0 functionality, these are the set-up steps I take:
- Provision a new VM
- *I chose an Ubuntu 20.04 8GB VM for this as I had one available. There are some dependencies on apt-get installed*
*packages, so these will need re-working to be compatible with other platforms.*
- Copy the script over to your VM *(I put it in the home directory)* and ensure it has execute permission by running
```bash
$ chmod u+x ccv0.sh
```
- Optionally set up some environment variables
- By default the script checks out the `CCv0` branches of the `kata-containers/kata-containers` and
`kata-containers/tests` repositories, but it is designed to be used to test personal forks and branches as well.
If you want to build and run these you can export the `katacontainers_repo`, `katacontainers_branch`, `tests_repo`
and `tests_branch` variables e.g.
```bash
$ export katacontainers_repo=github.com/stevenhorsman/kata-containers
$ export katacontainers_branch=stevenh/agent-pull-image-endpoint
$ export tests_repo=github.com/stevenhorsman/tests
$ export tests_branch=stevenh/add-ccv0-changes-to-build
```
before running the script.
- By default the build and configuration are using `QEMU` as the hypervisor. In order to use `Cloud Hypervisor` instead
set:
```
$ export KATA_HYPERVISOR="cloud-hypervisor"
```
before running the build.
- At this point you can provision a Kata confidential containers pod and container with either
[`crictl`](#using-crictl-for-end-to-end-provisioning-of-a-kata-confidential-containers-pod-with-an-unencrypted-image),
or [Kubernetes](#using-kubernetes-for-end-to-end-provisioning-of-a-kata-confidential-containers-pod-with-an-unencrypted-image)
and then test and use it.
### Using crictl for end-to-end provisioning of a Kata confidential containers pod with an unencrypted image
- Run the full build process with Kubernetes turned off, so its configuration doesn't interfere with `crictl` using:
```bash
$ export KUBERNETES="no"
$ export KATA_HYPERVISOR="qemu"
$ ~/ccv0.sh -d build_and_install_all
```
> **Note**: Much of this script has to be run as `sudo`, so you are likely to get prompted for your password.
- *I run this script sourced just so that the required installed components are accessible on the `PATH` to the rest*
*of the process without having to reload the session.*
- The steps that `build_and_install_all` takes are:
- Checkout the git repos for the `tests` and `kata-containers` repos as specified by the environment variables
(default to `CCv0` branches if they are not supplied)
- Use the `tests/.ci` scripts to install the build dependencies
- Build and install the Kata runtime
- Configure Kata to use containerd and for debug and confidential containers features to be enabled (including
enabling console access to the Kata guest shell, which should only be done in development)
- Create, build and install a rootfs for the Kata hypervisor to use. For `CCv0` this is currently based on Ubuntu
20.04.
- Build the Kata guest kernel
- Install the hypervisor (in order to select which hypervisor will be used, the `KATA_HYPERVISOR` environment
variable can be used to select between `qemu` or `cloud-hypervisor`)
> **Note**: Depending on where your VMs are hosted and how IPs are shared, you might get an error from Docker
matching `ERROR: toomanyrequests: Too Many Requests`. To get past
this, log in to Docker Hub and pull the images used with:
> ```bash
> $ sudo docker login
> $ sudo docker pull ubuntu
> ```
> then re-run the command.
- The first time this runs it may take a while, but subsequent runs will be quicker as more things are already
installed, and they can be cut down further by not running all of the above steps
([see "Additional script usage" below](#additional-script-usage))
- Create a new Kata sandbox pod using `crictl` with:
```bash
$ ~/ccv0.sh crictl_create_cc_pod
```
- This creates a pod configuration file, creates the pod from this using
`sudo crictl runp -r kata ~/pod-config.yaml` and runs `sudo crictl pods` to show the pod
- Create a new Kata confidential container with:
```bash
$ ~/ccv0.sh crictl_create_cc_container
```
- This creates a container (based on `busybox:1.33.1`) in the Kata cc sandbox and prints a list of containers.
This will have been created based on an image pulled in the Kata pod sandbox/guest, not on the host machine.
At this point you should have a `crictl` pod and container that is using the Kata confidential containers runtime.
You can [validate that the container image was pulled on the guest](#validate-that-the-container-image-was-pulled-on-the-guest)
or [using the Kata pod sandbox for testing with `agent-ctl` or `ctr shim`](#using-a-kata-pod-sandbox-for-testing-with-agent-ctl-or-ctr-shim)
#### Clean up the `crictl` pod sandbox and container
- When the testing is complete you can delete the container and pod by running:
```bash
$ ~/ccv0.sh crictl_delete_cc
```
### Using Kubernetes for end-to-end provisioning of a Kata confidential containers pod with an unencrypted image
- Run the full build process with the Kubernetes environment variable set to `"yes"`, so the Kubernetes cluster is
configured and created using the VM as a single node cluster:
```bash
$ export KUBERNETES="yes"
$ ~/ccv0.sh build_and_install_all
```
> **Note**: Depending on where your VMs are hosted and how IPs are shared, you might get an error from Docker
matching `ERROR: toomanyrequests: Too Many Requests`. To get past
this, log in to Docker Hub and pull the images used with:
> ```bash
> $ sudo docker login
> $ sudo docker pull registry:2
> $ sudo docker pull ubuntu:20.04
> ```
> then re-run the command.
- Check that your Kubernetes cluster has been correctly set up by running:
```bash
$ kubectl get nodes
```
and checking that you see a single node e.g.
```text
NAME STATUS ROLES AGE VERSION
stevenh-ccv0-k8s1.fyre.ibm.com Ready control-plane,master 43s v1.22.0
```
- Create a Kata confidential containers pod by running:
```bash
$ ~/ccv0.sh kubernetes_create_cc_pod
```
- Wait a few seconds for the pod to start, then check that the pod's status is `Running` with
```bash
$ kubectl get pods
```
which should show something like:
```text
NAME READY STATUS RESTARTS AGE
busybox-cc 1/1 Running 0 54s
```
- At this point you should have a Kubernetes pod and container running that is using the Kata
confidential containers runtime.
You can [validate that the container image was pulled on the guest](#validate-that-the-container-image-was-pulled-on-the-guest)
or [using the Kata pod sandbox for testing with `agent-ctl` or `ctr shim`](#using-a-kata-pod-sandbox-for-testing-with-agent-ctl-or-ctr-shim)
#### Clean up the Kubernetes pod sandbox and container
- When the testing is complete you can delete the container and pod by running:
```bash
$ ~/ccv0.sh kubernetes_delete_cc_pod
```
### Validate that the container image was pulled on the guest
There are a couple of ways we can check that the container image pull was offloaded to the guest: checking
the guest's file system for the unpacked bundle, and checking the host's directories to ensure the image wasn't also
pulled there.
- To check the guest's file system:
- Open a shell into the Kata guest with:
```bash
$ ~/ccv0.sh open_kata_shell
```
- List the files in the directory that the container image bundle should have been unpacked to with:
```bash
$ ls -ltr /run/kata-containers/confidential-containers_signed/
```
- This should give something like
```
total 72
-rw-r--r-- 1 root root 2977 Jan 20 10:03 config.json
drwxr-xr-x 12 root root 240 Jan 20 10:03 rootfs
```
which shows how the image has been pulled and then unbundled on the guest.
- Leave the Kata guest shell by running:
```bash
$ exit
```
- To verify that the image wasn't pulled on the host system we can look at the shared sandbox on the host and we
should only see a single bundle for the pause container as the `busybox` based container image should have been
pulled on the guest:
- Find all the `rootfs` directories under the pod's shared directory with:
```bash
$ pod_id=$(ps -ef | grep containerd-shim-kata-v2 | egrep -o "id [^,][^,].* " | awk '{print $2}')
$ sudo find /run/kata-containers/shared/sandboxes/${pod_id}/shared -name rootfs
```
which should only show a single `rootfs` directory if the container image was pulled on the guest, not the host
- Looking at that `rootfs` directory with
```bash
$ sudo ls -ltr $(sudo find /run/kata-containers/shared/sandboxes/${pod_id}/shared -name rootfs)
```
shows something similar to
```
total 668
-rwxr-xr-x 1 root root 682696 Aug 25 13:58 pause
drwxr-xr-x 2 root root 6 Jan 20 02:01 proc
drwxr-xr-x 2 root root 6 Jan 20 02:01 dev
drwxr-xr-x 2 root root 6 Jan 20 02:01 sys
drwxr-xr-x 2 root root 25 Jan 20 02:01 etc
```
which is clearly the pause container, indicating that the `busybox` based container image is not exposed to the host.
### Using a Kata pod sandbox for testing with `agent-ctl` or `ctr shim`
Once you have a Kata pod sandbox created as described above, either using
[`crictl`](#using-crictl-for-end-to-end-provisioning-of-a-kata-confidential-containers-pod-with-an-unencrypted-image), or [Kubernetes](#using-kubernetes-for-end-to-end-provisioning-of-a-kata-confidential-containers-pod-with-an-unencrypted-image),
you can use it to test specific components of the Kata confidential
containers architecture. This can be useful for development and debugging to isolate and test features
that aren't broadly supported end-to-end. Here are some examples:
- Run the pull image on guest command against the Kata agent, via the shim (`containerd-shim-kata-v2`).
This can be achieved using the [containerd](https://github.com/containerd/containerd) CLI tool, `ctr`, which can be used to
interact with the shim directly. The command takes the form
`ctr --namespace k8s.io shim --id <sandbox-id> pull-image <image> <new-container-id>` and can be run directly, or through
the `ccv0.sh` script to automatically fill in the variables:
- Optionally, set up some environment variables to set the image and credentials used:
- By default the shim pull image test in `ccv0.sh` will use the `busybox:1.33.1` based test image
`quay.io/kata-containers/confidential-containers:signed` which requires no authentication. To use a different
image, set the `PULL_IMAGE` environment variable e.g.
```bash
$ export PULL_IMAGE="docker.io/library/busybox:latest"
```
Currently the containerd shim pull image
code doesn't support using a container registry that requires authentication, so if this is required, see the
below steps to run the pull image command against the agent directly.
- Run the pull image agent endpoint with:
```bash
$ ~/ccv0.sh shim_pull_image
```
which will print the `ctr shim` command for reference
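For illustration, the command that gets printed can be assembled by hand as sketched below. This is not the authoritative `ccv0.sh` code: the sandbox id lookup mirrors the script's `get_ids` helper, and the default image and container id values are assumptions for the example.

```bash
#!/bin/bash
# Sketch: assemble the ctr shim pull-image command by hand.
# The sandbox id is parsed from the running containerd-shim-kata-v2 process;
# the PULL_IMAGE and CONTAINER_ID defaults below are illustrative assumptions.
sandbox_id=$(ps -ef | grep containerd-shim-kata-v2 | grep -v grep \
    | egrep -o "id [^,]*" | awk '{print $2}' | head -n1)
pull_image="${PULL_IMAGE:-quay.io/kata-containers/confidential-containers:signed}"
container_id="${CONTAINER_ID:-test-container}"
# Print the command rather than running it, so it can be inspected first
echo "sudo ctr --namespace k8s.io shim --id ${sandbox_id} pull-image ${pull_image} ${container_id}"
```

Running the printed command requires a Kata sandbox to already exist, since the shim process is the source of the sandbox id.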
- Alternatively you can issue the command directly to the `kata-agent` pull image endpoint, which also supports
credentials in order to pull from an authenticated registry:
- Optionally set up some environment variables to set the image and credentials used:
- Set the `PULL_IMAGE` environment variable e.g. `export PULL_IMAGE="docker.io/library/busybox:latest"`
if a specific container image is required.
- If the container registry for the image requires authentication then this can be set with an environment
variable `SOURCE_CREDS`. For example to use Docker Hub (`docker.io`) as an authenticated user first run
`export SOURCE_CREDS="<dockerhub username>:<dockerhub api key>"`
> **Note**: the credentials support on the agent request is a tactical solution for the short-term
proof of concept to allow more images to be pulled and tested. Once we have support for getting
keys into the Kata guest image using the attestation-agent and/or KBS I'd expect container registry
credentials to be looked up using that mechanism.
- Run the pull image agent endpoint with
```bash
$ ~/ccv0.sh agent_pull_image
```
and you should see output which includes `Command PullImage (1 of 1) returned (Ok(()), false)` to indicate
that the `PullImage` request was successful e.g.
```
Finished release [optimized] target(s) in 0.21s
{"msg":"announce","level":"INFO","ts":"2021-09-15T08:40:14.189360410-07:00","subsystem":"rpc","name":"kata-agent-ctl","pid":"830920","version":"0.1.0","source":"kata-agent-ctl","config":"Config { server_address: \"vsock://1970354082:1024\", bundle_dir: \"/tmp/bundle\", timeout_nano: 0, interactive: false, ignore_errors: false }"}
{"msg":"client setup complete","level":"INFO","ts":"2021-09-15T08:40:14.193639057-07:00","pid":"830920","source":"kata-agent-ctl","name":"kata-agent-ctl","subsystem":"rpc","version":"0.1.0","server-address":"vsock://1970354082:1024"}
{"msg":"Run command PullImage (1 of 1)","level":"INFO","ts":"2021-09-15T08:40:14.196643765-07:00","pid":"830920","source":"kata-agent-ctl","subsystem":"rpc","name":"kata-agent-ctl","version":"0.1.0"}
{"msg":"response received","level":"INFO","ts":"2021-09-15T08:40:43.828200633-07:00","source":"kata-agent-ctl","name":"kata-agent-ctl","subsystem":"rpc","version":"0.1.0","pid":"830920","response":""}
{"msg":"Command PullImage (1 of 1) returned (Ok(()), false)","level":"INFO","ts":"2021-09-15T08:40:43.828261708-07:00","subsystem":"rpc","pid":"830920","source":"kata-agent-ctl","version":"0.1.0","name":"kata-agent-ctl"}
```
> **Note**: The first time that `~/ccv0.sh agent_pull_image` is run, the `agent-ctl` tool will be built
which may take a few minutes.
- To validate that the image pull was successful, you can open a shell into the Kata guest with:
```bash
$ ~/ccv0.sh open_kata_shell
```
- Check the `/run/kata-containers/` directory to verify that the container image bundle has been created in a directory
named either `01234556789` (for the container id), or the container image name, e.g.
```bash
$ ls -ltr /run/kata-containers/confidential-containers_signed/
```
which should show something like
```
total 72
drwxr-xr-x 10 root root 200 Jan 1 1970 rootfs
-rw-r--r-- 1 root root 2977 Jan 20 16:45 config.json
```
- Leave the Kata shell by running:
```bash
$ exit
```
## Verifying signed images
For this sample demo, we use local attestation to pass through the required
configuration to do container image signature verification. Due to this, the ability to verify images is limited
to a pre-created selection of test images in our test
repository [`quay.io/kata-containers/confidential-containers`](https://quay.io/repository/kata-containers/confidential-containers?tab=tags).
For pulling images not in this test repository (called an *unprotected* registry below), we fall back to the behaviour
of not enforcing signatures. More documentation on how to customise this to match your own containers through local,
or remote attestation will be available in future.
In our test repository there are three tagged images:
| Test Image | Base Image used | Signature status | GPG key status |
| --- | --- | --- | --- |
| `quay.io/kata-containers/confidential-containers:signed` | `busybox:1.33.1` | [signature](https://github.com/kata-containers/tests/tree/CCv0/integration/confidential/fixtures/quay_verification/x86_64/signatures.tar) embedded in kata rootfs | [public key](https://github.com/kata-containers/tests/tree/CCv0/integration/confidential/fixtures/quay_verification/x86_64/public.gpg) embedded in kata rootfs |
| `quay.io/kata-containers/confidential-containers:unsigned` | `busybox:1.33.1` | not signed | not signed |
| `quay.io/kata-containers/confidential-containers:other_signed` | `nginx:1.21.3` | [signature](https://github.com/kata-containers/tests/tree/CCv0/integration/confidential/fixtures/quay_verification/x86_64/signatures.tar) embedded in kata rootfs | GPG key not kept |
Using a standard unsigned `busybox` image that can be pulled from another, *unprotected*, `quay.io` repository we can
test a few scenarios.
In this sample, with local attestation, we pass in the public GPG key and signature files, and the [`offline_fs_kbc`
configuration](https://github.com/confidential-containers/attestation-agent/blob/main/src/kbc_modules/offline_fs_kbc/README.md)
into the guest image, which specifies that any container image from `quay.io/kata-containers`
must be signed with the embedded GPG key; the agent configuration also needs updating to enable this.
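As a concrete illustration of what such a policy can look like, the sketch below writes a file in the containers `policy.json` format. The file name and key path are assumptions for this demo, matching the `quay_policy.json` path used elsewhere in this guide, and not the exact files the script ships.

```bash
#!/bin/bash
# Illustrative sketch of a container image policy: require images from the
# protected quay.io/kata-containers repository to be signed by the embedded
# GPG key, and fall back to accepting anything from other registries.
# The paths here are assumptions for this demo.
policy_file="/tmp/quay_policy.json"
cat <<'EOF' > "${policy_file}"
{
    "default": [{"type": "insecureAcceptAnything"}],
    "transports": {
        "docker": {
            "quay.io/kata-containers": [
                {
                    "type": "signedBy",
                    "keyType": "GPGKeys",
                    "keyPath": "/etc/containers/quay_verification/public.gpg"
                }
            ]
        }
    }
}
EOF
echo "wrote ${policy_file}"
```

In the demo the equivalent file is placed in the guest and referenced via the `agent.container_policy_file` kernel parameter shown earlier.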
With this policy set, a few image verification scenarios can be tested by attempting
to create containers from these images using `crictl`:
- If you don't already have the Kata Containers CC code built and configured for `crictl`, then follow the
[instructions above](#using-crictl-for-end-to-end-provisioning-of-a-kata-confidential-containers-pod-with-an-unencrypted-image)
up to the `~/ccv0.sh crictl_create_cc_pod` command.
- In order to enable this in the guest image, you will need to set up the required configuration, policy and signature
files by running `~/ccv0.sh copy_signature_files_to_guest`, and then run `~/ccv0.sh crictl_create_cc_pod`, which will
delete and recreate your pod, adding in the new files.
- To test that the fallback behaviour works with an unsigned image from an *unprotected* registry, we can pull the `busybox`
image by running:
```bash
$ export CONTAINER_CONFIG_FILE=container-config_unsigned-unprotected.yaml
$ ~/ccv0.sh crictl_create_cc_container
```
- This finishes by showing the running container, e.g.
```text
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
98c70fefe997a quay.io/prometheus/busybox:latest Less than a second ago Running prometheus-busybox-signed 0 70119e0539238
```
- To test that an unsigned image from our *protected* test container registry is rejected we can run:
```bash
$ export CONTAINER_CONFIG_FILE=container-config_unsigned-protected.yaml
$ ~/ccv0.sh crictl_create_cc_container
```
- This correctly results in an error message from `crictl`:
`PullImage from image service failed" err="rpc error: code = Internal desc = Security validate failed: Validate image failed: The signatures do not satisfied! Reject reason: [Match reference failed.]" image="quay.io/kata-containers/confidential-containers:unsigned"`
- To test that the signed image from our *protected* test container registry is accepted we can run:
```bash
$ export CONTAINER_CONFIG_FILE=container-config.yaml
$ ~/ccv0.sh crictl_create_cc_container
```
- This finishes by showing a new `kata-cc-busybox-signed` running container e.g.
```text
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
b4d85c2132ed9 quay.io/kata-containers/confidential-containers:signed Less than a second ago Running kata-cc-busybox-signed 0 70119e0539238
...
```
- Finally, to check that the image with a valid signature but an invalid GPG key (the trusted piece of information we
really want to protect with the attestation agent in future) is rejected, we can run:
```bash
$ export CONTAINER_CONFIG_FILE=container-config_signed-protected-other.yaml
$ ~/ccv0.sh crictl_create_cc_container
```
- Again this results in an error message from `crictl`:
`"PullImage from image service failed" err="rpc error: code = Internal desc = Security validate failed: Validate image failed: The signatures do not satisfied! Reject reason: [signature verify failed! There is no pubkey can verify the signature!]" image="quay.io/kata-containers/confidential-containers:other_signed"`
### Using Kubernetes to create a Kata confidential containers pod from the encrypted ssh demo sample image
The [ssh-demo](https://github.com/confidential-containers/documentation/tree/main/demos/ssh-demo) explains how to
demonstrate creating a Kata confidential containers pod from an encrypted image with the runtime created by the
[confidential-containers operator](https://github.com/confidential-containers/documentation/blob/main/demos/operator-demo).
To be fully confidential, this should be run on a Trusted Execution Environment, but it can be tested on generic
hardware as well.
If you wish to build the Kata confidential containers runtime to do this yourself, then you can use the following
steps:
- Run the full build process with the Kubernetes environment variable set to `"yes"`, so the Kubernetes cluster is
configured and created using the VM as a single node cluster and with `AA_KBC` set to `offline_fs_kbc`.
```bash
$ export KUBERNETES="yes"
$ export AA_KBC=offline_fs_kbc
$ ~/ccv0.sh build_and_install_all
```
- The `AA_KBC=offline_fs_kbc` mode will ensure that, when creating the rootfs of the Kata guest, the
[attestation-agent](https://github.com/confidential-containers/attestation-agent) will be added along with the
[sample offline KBC](https://github.com/confidential-containers/documentation/blob/main/demos/ssh-demo/aa-offline_fs_kbc-keys.json)
and an agent configuration file.
> **Note**: Depending on where your VMs are hosted and how IPs are shared, you might get an error from Docker
matching `ERROR: toomanyrequests: Too Many Requests`. To get past
this, log in to Docker Hub and pull the images used with:
> ```bash
> $ sudo docker login
> $ sudo docker pull registry:2
> $ sudo docker pull ubuntu:20.04
> ```
> then re-run the command.
- Check that your Kubernetes cluster has been correctly set up by running:
```bash
$ kubectl get nodes
```
and checking that you see a single node e.g.
```text
NAME STATUS ROLES AGE VERSION
stevenh-ccv0-k8s1.fyre.ibm.com Ready control-plane,master 43s v1.22.0
```
- Create a sample Kata confidential containers ssh pod by running:
```bash
$ ~/ccv0.sh kubernetes_create_ssh_demo_pod
```
- At this point you should have a Kubernetes pod running the Kata confidential containers runtime that has pulled
the [sample image](https://hub.docker.com/r/katadocker/ccv0-ssh) which was encrypted by the key file that we included
in the rootfs.
During the pod deployment the image was pulled and then decrypted using the key file on the Kata guest, without
it ever being available to the host.
- To validate that the container is working, you can connect to it via SSH by running:
```bash
$ ~/ccv0.sh connect_to_ssh_demo_pod
```
- During this connection the host key fingerprint is shown and should match:
`ED25519 key fingerprint is SHA256:wK7uOpqpYQczcgV00fGCh+X97sJL3f6G1Ku4rvlwtR0.`
- After you are finished connecting, run:
```bash
$ exit
```
- To delete the sample SSH demo pod run:
```bash
$ ~/ccv0.sh kubernetes_delete_ssh_demo_pod
```
## Additional script usage
As well as being able to use the script as above to build all of `kata-containers` from scratch, it can be used to
re-build parts of it by running the script with different parameters. For example, after the first build you will often
not need to re-install the dependencies, the hypervisor or the Guest kernel, but just test code changes made to the
runtime and agent. This can be done by running `~/ccv0.sh rebuild_and_install_kata`. (*Note this does a hard checkout*
*from git, so if your changes are only made locally it is better to do the individual steps e.g.*
`~/ccv0.sh build_kata_runtime && ~/ccv0.sh build_and_add_agent_to_rootfs && ~/ccv0.sh build_and_install_rootfs`).
There are commands for a lot of steps in building, setting up and testing and the full list can be seen by running
`~/ccv0.sh help`:
```
$ ~/ccv0.sh help
Overview:
Build and test kata containers from source
Optionally set kata-containers and tests repo and branch as exported variables before running
e.g. export katacontainers_repo=github.com/stevenhorsman/kata-containers && export katacontainers_branch=kata-ci-from-fork && export tests_repo=github.com/stevenhorsman/tests && export tests_branch=kata-ci-from-fork && ~/ccv0.sh build_and_install_all
Usage:
ccv0.sh [options] <command>
Commands:
- help: Display this help
- all: Build and install everything, test kata with containerd and capture the logs
- build_and_install_all: Build and install everything
- initialize: Install dependencies and check out kata-containers source
- rebuild_and_install_kata: Rebuild the kata runtime and agent and build and install the image
- build_kata_runtime: Build and install the kata runtime
- configure: Configure Kata to use rootfs and enable debug
- create_rootfs: Create a local rootfs
- build_and_add_agent_to_rootfs:Builds the kata-agent and adds it to the rootfs
- build_and_install_rootfs: Builds and installs the rootfs image
- install_guest_kernel: Setup, build and install the guest kernel
- build_cloud_hypervisor Checkout, patch, build and install Cloud Hypervisor
- build_qemu: Checkout, patch, build and install QEMU
- init_kubernetes: initialize a Kubernetes cluster on this system
- crictl_create_cc_pod Use crictl to create a new kata cc pod
- crictl_create_cc_container Use crictl to create a new busybox container in the kata cc pod
- crictl_delete_cc Use crictl to delete the kata cc pod sandbox and container in it
- kubernetes_create_cc_pod: Create a Kata CC runtime busybox-based pod in Kubernetes
- kubernetes_delete_cc_pod: Delete the Kata CC runtime busybox-based pod in Kubernetes
- open_kata_shell: Open a shell into the kata runtime
- agent_pull_image: Run PullImage command against the agent with agent-ctl
- shim_pull_image: Run PullImage command against the shim with ctr
- agent_create_container: Run CreateContainer command against the agent with agent-ctl
- test: Test using kata with containerd
- test_capture_logs: Test using kata with containerd and capture the logs in the user's home directory
Options:
-d: Enable debug
-h: Display this help
```


@@ -1,44 +0,0 @@
# Generating a Kata Containers payload for the Confidential Containers Operator
The [Confidential Containers
Operator](https://github.com/confidential-containers/operator) consumes a Kata
Containers payload generated from the `CCv0` branch, and here one can find all
the necessary info on how to build such a payload.
## Requirements
* `make` installed in the machine
* Docker installed in the machine
* `sudo` access to the machine
## Process
* Clone [Kata Containers](https://github.com/kata-containers/kata-containers)
```sh
git clone --branch CCv0 https://github.com/kata-containers/kata-containers
```
* In case you've already cloned the repo, make sure to switch to the `CCv0` branch
```sh
git checkout CCv0
```
* Ensure your tree is clean and in sync with upstream `CCv0`
```sh
git clean -xfd
git reset --hard <upstream>/CCv0
```
* Make sure you're authenticated to `quay.io`
```sh
sudo docker login quay.io
```
* From the top repo directory, run:
```sh
sudo make cc-payload
```
* Make sure the image was uploaded to the [Confidential Containers
runtime-payload
registry](https://quay.io/repository/confidential-containers/runtime-payload?tab=tags)
## Notes
Make sure to run it on a machine that's not the one you're hacking on, prepare a
cup of tea, and get back to it an hour later (at least).


@@ -94,16 +94,6 @@ There are several kinds of Kata configurations and they are listed below.
| `io.katacontainers.config.hypervisor.enable_guest_swap` | `boolean` | enable swap in the guest |
| `io.katacontainers.config.hypervisor.use_legacy_serial` | `boolean` | uses legacy serial device for guest's console (QEMU) |
## Confidential Computing Options
| Key | Value Type | Comments |
|-------| ----- | ----- |
| `io.katacontainers.config.pre_attestation.enabled` | `bool` | determines if SEV(-ES) attestation is enabled |
| `io.katacontainers.config.pre_attestation.uri` | `string` | specifies the location of the attestation server |
| `io.katacontainers.config.sev.policy` | `uint32` | specifies the SEV guest policy |
## Container Options
| Key | Value Type | Comments |
|-------| ----- | ----- |


@@ -27,8 +27,6 @@ $ image="quay.io/prometheus/busybox:latest"
$ cat << EOF > "${pod_yaml}"
metadata:
name: busybox-sandbox1
uid: $(uuidgen)
namespace: default
EOF
$ cat << EOF > "${container_yaml}"
metadata:


@@ -32,7 +32,6 @@ The `nydus-sandbox.yaml` looks like below:
metadata:
attempt: 1
name: nydus-sandbox
uid: nydus-uid
namespace: default
log_directory: /tmp
linux:


@@ -42,8 +42,6 @@ $ image="quay.io/prometheus/busybox:latest"
$ cat << EOF > "${pod_yaml}"
metadata:
name: busybox-sandbox1
uid: $(uuidgen)
namespace: default
EOF
$ cat << EOF > "${container_yaml}"
metadata:

src/agent/Cargo.lock (generated): 3705 lines changed

File diff suppressed because it is too large


@@ -23,12 +23,11 @@ regex = "1.5.6"
serial_test = "0.5.1"
kata-sys-util = { path = "../libs/kata-sys-util" }
kata-types = { path = "../libs/kata-types" }
url = "2.2.2"
# Async helpers
async-trait = "0.1.42"
async-recursion = "0.3.2"
futures = "0.3.28"
futures = "0.3.17"
# Async runtime
tokio = { version = "1.28.1", features = ["full"] }
@@ -44,6 +43,7 @@ ipnetwork = "0.17.0"
logging = { path = "../libs/logging" }
slog = "2.5.2"
slog-scope = "4.1.2"
slog-term = "2.9.0"
# Redirect ttrpc log calls
slog-stdlog = "4.0.0"
@@ -67,22 +67,12 @@ serde = { version = "1.0.129", features = ["derive"] }
toml = "0.5.8"
clap = { version = "3.0.1", features = ["derive"] }
# "vendored" feature for openssl is required by musl build
openssl = { version = "0.10.38", features = ["vendored"] }
# Image pull/decrypt
image-rs = { git = "https://github.com/confidential-containers/guest-components", tag = "v0.7.0", default-features = false, features = ["kata-cc-native-tls"] }
[patch.crates-io]
oci-distribution = { git = "https://github.com/krustlet/oci-distribution.git", rev = "f44124c" }
[dev-dependencies]
tempfile = "3.1.0"
test-utils = { path = "../libs/test-utils" }
which = "4.3.0"
[workspace]
resolver = "2"
members = [
"rustjail",
]


@@ -26,7 +26,7 @@ export VERSION_COMMIT := $(if $(COMMIT),$(VERSION)-$(COMMIT),$(VERSION))
EXTRA_RUSTFEATURES :=
##VAR SECCOMP=yes|no define if agent enables seccomp feature
SECCOMP := yes
SECCOMP ?= yes
# Enable seccomp feature of rust build
ifeq ($(SECCOMP),yes)


@@ -541,11 +541,8 @@ fn linux_device_to_cgroup_device(d: &LinuxDevice) -> Option<DeviceResource> {
}
fn linux_device_group_to_cgroup_device(d: &LinuxDeviceCgroup) -> Option<DeviceResource> {
let dev_type = match &d.r#type {
Some(t_s) => match DeviceType::from_char(t_s.chars().next()) {
Some(t_c) => t_c,
None => return None,
},
let dev_type = match DeviceType::from_char(d.r#type.chars().next()) {
Some(t) => t,
None => return None,
};
@@ -602,7 +599,7 @@ lazy_static! {
// all mknod to all char devices
LinuxDeviceCgroup {
allow: true,
r#type: Some("c".to_string()),
r#type: "c".to_string(),
major: Some(WILDCARD),
minor: Some(WILDCARD),
access: "m".to_string(),
@@ -611,7 +608,7 @@ lazy_static! {
// all mknod to all block devices
LinuxDeviceCgroup {
allow: true,
r#type: Some("b".to_string()),
r#type: "b".to_string(),
major: Some(WILDCARD),
minor: Some(WILDCARD),
access: "m".to_string(),
@@ -620,7 +617,7 @@ lazy_static! {
// all read/write/mknod to char device /dev/console
LinuxDeviceCgroup {
allow: true,
r#type: Some("c".to_string()),
r#type: "c".to_string(),
major: Some(5),
minor: Some(1),
access: "rwm".to_string(),
@@ -629,7 +626,7 @@ lazy_static! {
// all read/write/mknod to char device /dev/pts/<N>
LinuxDeviceCgroup {
allow: true,
r#type: Some("c".to_string()),
r#type: "c".to_string(),
major: Some(136),
minor: Some(WILDCARD),
access: "rwm".to_string(),
@@ -638,7 +635,7 @@ lazy_static! {
// all read/write/mknod to char device /dev/ptmx
LinuxDeviceCgroup {
allow: true,
r#type: Some("c".to_string()),
r#type: "c".to_string(),
major: Some(5),
minor: Some(2),
access: "rwm".to_string(),
@@ -647,7 +644,7 @@ lazy_static! {
// all read/write/mknod to char device /dev/net/tun
LinuxDeviceCgroup {
allow: true,
r#type: Some("c".to_string()),
r#type: "c".to_string(),
major: Some(10),
minor: Some(200),
access: "rwm".to_string(),
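
The replacements above change `r#type` from `Option<String>` to a plain `String` and funnel its leading character through `DeviceType::from_char`. A minimal sketch of what such a helper could look like — only the method name appears in the diff; the enum variants here are assumptions:

```rust
#[derive(Debug, PartialEq)]
enum DeviceType {
    Char,
    Block,
    All,
}

impl DeviceType {
    // Map the leading character of a cgroup device type string
    // ("c", "b", "a") to a typed value; anything else is rejected.
    fn from_char(c: Option<char>) -> Option<DeviceType> {
        match c {
            Some('c') => Some(DeviceType::Char),
            Some('b') => Some(DeviceType::Block),
            Some('a') => Some(DeviceType::All),
            _ => None,
        }
    }
}
```

With this shape, the call site in the hunk — `DeviceType::from_char(d.r#type.chars().next())` — returns `None` for an empty or unrecognized type string, which the surrounding `match` turns into an early `return None`.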


@@ -241,12 +241,6 @@ pub fn resources_grpc_to_oci(res: &grpc::LinuxResources) -> oci::LinuxResources
let devices = {
let mut d = Vec::new();
for dev in res.Devices.iter() {
let dev_type = if dev.Type.is_empty() {
None
} else {
Some(dev.Type.clone())
};
let major = if dev.Major == -1 {
None
} else {
@@ -260,7 +254,7 @@ pub fn resources_grpc_to_oci(res: &grpc::LinuxResources) -> oci::LinuxResources
};
d.push(oci::LinuxDeviceCgroup {
allow: dev.Allow,
r#type: dev_type,
r#type: dev.Type.clone(),
major,
minor,
access: dev.Access.clone(),


@@ -1118,6 +1118,7 @@ mod tests {
use std::fs::create_dir;
use std::fs::create_dir_all;
use std::fs::remove_dir_all;
use std::fs::remove_file;
use std::io;
use std::os::unix::fs;
use std::os::unix::io::AsRawFd;
@@ -1333,14 +1334,9 @@ mod tests {
fn test_mknod_dev() {
skip_if_not_root!();
let tempdir = tempdir().unwrap();
let olddir = unistd::getcwd().unwrap();
defer!(let _ = unistd::chdir(&olddir););
let _ = unistd::chdir(tempdir.path());
let path = "/dev/fifo-test";
let dev = oci::LinuxDevice {
path: "/fifo".to_string(),
path: path.to_string(),
r#type: "c".to_string(),
major: 0,
minor: 0,
@@ -1348,13 +1344,16 @@ mod tests {
uid: Some(unistd::getuid().as_raw()),
gid: Some(unistd::getgid().as_raw()),
};
let path = Path::new("fifo");
let ret = mknod_dev(&dev, path);
let ret = mknod_dev(&dev, Path::new(path));
assert!(ret.is_ok(), "Should pass. Got: {:?}", ret);
let ret = stat::stat(path);
assert!(ret.is_ok(), "Should pass. Got: {:?}", ret);
// clear test device node
let ret = remove_file(path);
assert!(ret.is_ok(), "Should pass, Got: {:?}", ret);
}
#[test]


@@ -161,7 +161,7 @@ impl Process {
pub fn notify_term_close(&mut self) {
let notify = self.term_exit_notifier.clone();
notify.notify_one();
notify.notify_waiters();
}
pub fn close_stdin(&mut self) {


@@ -11,7 +11,6 @@ use std::fs;
use std::str::FromStr;
use std::time;
use tracing::instrument;
use url::Url;
use kata_types::config::default::DEFAULT_AGENT_VSOCK_PORT;
@@ -26,14 +25,6 @@ const LOG_VPORT_OPTION: &str = "agent.log_vport";
const CONTAINER_PIPE_SIZE_OPTION: &str = "agent.container_pipe_size";
const UNIFIED_CGROUP_HIERARCHY_OPTION: &str = "agent.unified_cgroup_hierarchy";
const CONFIG_FILE: &str = "agent.config_file";
const AA_KBC_PARAMS: &str = "agent.aa_kbc_params";
const HTTPS_PROXY: &str = "agent.https_proxy";
const NO_PROXY: &str = "agent.no_proxy";
const ENABLE_DATA_INTEGRITY: &str = "agent.data_integrity";
const ENABLE_SIGNATURE_VERIFICATION: &str = "agent.enable_signature_verification";
const IMAGE_POLICY_FILE: &str = "agent.image_policy";
const IMAGE_REGISTRY_AUTH_FILE: &str = "agent.image_registry_auth";
const SIMPLE_SIGNING_SIGSTORE_CONFIG: &str = "agent.simple_signing_sigstore_config";
const DEFAULT_LOG_LEVEL: slog::Level = slog::Level::Info;
const DEFAULT_HOTPLUG_TIMEOUT: time::Duration = time::Duration::from_secs(3);
@@ -86,15 +77,6 @@ pub struct AgentConfig {
pub tracing: bool,
pub endpoints: AgentEndpoints,
pub supports_seccomp: bool,
pub container_policy_path: String,
pub aa_kbc_params: String,
pub https_proxy: String,
pub no_proxy: String,
pub data_integrity: bool,
pub enable_signature_verification: bool,
pub image_policy_file: String,
pub image_registry_auth_file: String,
pub simple_signing_sigstore_config: String,
}
#[derive(Debug, Deserialize)]
@@ -110,15 +92,6 @@ pub struct AgentConfigBuilder {
pub unified_cgroup_hierarchy: Option<bool>,
pub tracing: Option<bool>,
pub endpoints: Option<EndpointsConfig>,
pub container_policy_path: Option<String>,
pub aa_kbc_params: Option<String>,
pub https_proxy: Option<String>,
pub no_proxy: Option<String>,
pub data_integrity: Option<bool>,
pub enable_signature_verification: Option<bool>,
pub image_policy_file: Option<String>,
pub image_registry_auth_file: Option<String>,
pub simple_signing_sigstore_config: Option<String>,
}
macro_rules! config_override {
@@ -180,15 +153,6 @@ impl Default for AgentConfig {
tracing: false,
endpoints: Default::default(),
supports_seccomp: rpc::have_seccomp(),
container_policy_path: String::from(""),
aa_kbc_params: String::from(""),
https_proxy: String::from(""),
no_proxy: String::from(""),
data_integrity: false,
enable_signature_verification: true,
image_policy_file: String::from(""),
image_registry_auth_file: String::from(""),
simple_signing_sigstore_config: String::from(""),
}
}
}
@@ -217,23 +181,6 @@ impl FromStr for AgentConfig {
config_override!(agent_config_builder, agent_config, server_addr);
config_override!(agent_config_builder, agent_config, unified_cgroup_hierarchy);
config_override!(agent_config_builder, agent_config, tracing);
config_override!(agent_config_builder, agent_config, container_policy_path);
config_override!(agent_config_builder, agent_config, aa_kbc_params);
config_override!(agent_config_builder, agent_config, https_proxy);
config_override!(agent_config_builder, agent_config, no_proxy);
config_override!(agent_config_builder, agent_config, data_integrity);
config_override!(
agent_config_builder,
agent_config,
enable_signature_verification
);
config_override!(agent_config_builder, agent_config, image_policy_file);
config_override!(agent_config_builder, agent_config, image_registry_auth_file);
config_override!(
agent_config_builder,
agent_config,
simple_signing_sigstore_config
);
// Populate the allowed endpoints hash set, if we got any from the config file.
if let Some(endpoints) = agent_config_builder.endpoints {
@@ -262,10 +209,6 @@ impl AgentConfig {
let mut config: AgentConfig = Default::default();
let cmdline = fs::read_to_string(file)?;
let params: Vec<&str> = cmdline.split_ascii_whitespace().collect();
let mut using_config_file = false;
// Check if there is config file before parsing params that might
// override values from the config file.
for param in params.iter() {
// If we get a configuration file path from the command line, we
// generate our config from it.
@@ -273,15 +216,10 @@ impl AgentConfig {
// or if it can't be parsed properly.
if param.starts_with(format!("{}=", CONFIG_FILE).as_str()) {
let config_file = get_string_value(param)?;
config = AgentConfig::from_config_file(&config_file)
.context("AgentConfig from kernel cmdline")
.unwrap();
using_config_file = true;
break;
return AgentConfig::from_config_file(&config_file)
.context("AgentConfig from kernel cmdline");
}
}
for param in params.iter() {
// parse cmdline flags
parse_cmdline_param!(param, DEBUG_CONSOLE_FLAG, config.debug_console);
parse_cmdline_param!(param, DEV_MODE_FLAG, config.dev_mode);
@@ -341,48 +279,6 @@ impl AgentConfig {
config.unified_cgroup_hierarchy,
get_bool_value
);
parse_cmdline_param!(param, AA_KBC_PARAMS, config.aa_kbc_params, get_string_value);
parse_cmdline_param!(param, HTTPS_PROXY, config.https_proxy, get_url_value);
parse_cmdline_param!(param, NO_PROXY, config.no_proxy, get_string_value);
parse_cmdline_param!(
param,
ENABLE_DATA_INTEGRITY,
config.data_integrity,
get_bool_value
);
parse_cmdline_param!(
param,
ENABLE_SIGNATURE_VERIFICATION,
config.enable_signature_verification,
get_bool_value
);
// URI of the image security file
parse_cmdline_param!(
param,
IMAGE_POLICY_FILE,
config.image_policy_file,
get_string_value
);
// URI of the registry auth file
parse_cmdline_param!(
param,
IMAGE_REGISTRY_AUTH_FILE,
config.image_registry_auth_file,
get_string_value
);
// URI of the simple signing sigstore file
// used when simple signing verification is used
parse_cmdline_param!(
param,
SIMPLE_SIGNING_SIGSTORE_CONFIG,
config.simple_signing_sigstore_config,
get_string_value
);
}
if let Ok(addr) = env::var(SERVER_ADDR_ENV_VAR) {
@@ -402,9 +298,7 @@ impl AgentConfig {
}
// We did not get a configuration file: allow all endpoints.
if !using_config_file {
config.endpoints.all_allowed = true;
}
config.endpoints.all_allowed = true;
Ok(config)
}
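
The `from_cmdline` path above splits the kernel command line on ASCII whitespace and matches each `key=value` token by prefix. A self-contained sketch of that parsing pattern — `get_string_value` is named in the diff, while `find_param` is an illustrative wrapper, not part of the real code:

```rust
// Extract the value part of a "key=value" kernel cmdline parameter.
fn get_string_value(param: &str) -> Option<String> {
    // splitn(2, '=') keeps any '=' characters inside the value intact.
    let mut parts = param.splitn(2, '=');
    let _key = parts.next()?;
    parts.next().map(str::to_string)
}

// Find the first "key=value" token for the given key and return its value.
fn find_param(cmdline: &str, key: &str) -> Option<String> {
    cmdline
        .split_ascii_whitespace()
        .find(|p| p.starts_with(&format!("{}=", key)))
        .and_then(get_string_value)
}
```

This also mirrors why `agent.config_file` can short-circuit the loop in the hunk: once the token is found and its value extracted, the whole config can be built from the referenced file instead of from the remaining parameters.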
@@ -539,12 +433,6 @@ fn get_container_pipe_size(param: &str) -> Result<i32> {
Ok(value)
}
#[instrument]
fn get_url_value(param: &str) -> Result<String> {
let value = get_string_value(param)?;
Ok(Url::parse(&value)?.to_string())
}
#[cfg(test)]
mod tests {
use test_utils::assert_result;
@@ -563,11 +451,6 @@ mod tests {
assert!(!config.dev_mode);
assert_eq!(config.log_level, DEFAULT_LOG_LEVEL);
assert_eq!(config.hotplug_timeout, DEFAULT_HOTPLUG_TIMEOUT);
assert_eq!(config.container_policy_path, "");
assert!(config.enable_signature_verification);
assert_eq!(config.image_policy_file, "");
assert_eq!(config.image_registry_auth_file, "");
assert_eq!(config.simple_signing_sigstore_config, "");
}
#[test]
@@ -586,15 +469,6 @@ mod tests {
server_addr: &'a str,
unified_cgroup_hierarchy: bool,
tracing: bool,
container_policy_path: &'a str,
aa_kbc_params: &'a str,
https_proxy: &'a str,
no_proxy: &'a str,
data_integrity: bool,
enable_signature_verification: bool,
image_policy_file: &'a str,
image_registry_auth_file: &'a str,
simple_signing_sigstore_config: &'a str,
}
impl Default for TestData<'_> {
@@ -610,15 +484,6 @@ mod tests {
server_addr: TEST_SERVER_ADDR,
unified_cgroup_hierarchy: false,
tracing: false,
container_policy_path: "",
aa_kbc_params: "",
https_proxy: "",
no_proxy: "",
data_integrity: false,
enable_signature_verification: true,
image_policy_file: "",
image_registry_auth_file: "",
simple_signing_sigstore_config: "",
}
}
}
@@ -988,126 +853,6 @@ mod tests {
tracing: true,
..Default::default()
},
TestData {
contents: "agent.aa_kbc_params=offline_fs_kbc::null",
aa_kbc_params: "offline_fs_kbc::null",
..Default::default()
},
TestData {
contents: "agent.aa_kbc_params=eaa_kbc::127.0.0.1:50000",
aa_kbc_params: "eaa_kbc::127.0.0.1:50000",
..Default::default()
},
TestData {
contents: "agent.https_proxy=http://proxy.url.com:81/",
https_proxy: "http://proxy.url.com:81/",
..Default::default()
},
TestData {
contents: "agent.https_proxy=http://192.168.1.100:81/",
https_proxy: "http://192.168.1.100:81/",
..Default::default()
},
TestData {
contents: "agent.no_proxy=*.internal.url.com",
no_proxy: "*.internal.url.com",
..Default::default()
},
TestData {
contents: "agent.no_proxy=192.168.1.0/24,172.16.0.0/12",
no_proxy: "192.168.1.0/24,172.16.0.0/12",
..Default::default()
},
TestData {
contents: "",
data_integrity: false,
..Default::default()
},
TestData {
contents: "agent.data_integrity=true",
data_integrity: true,
..Default::default()
},
TestData {
contents: "agent.data_integrity=false",
data_integrity: false,
..Default::default()
},
TestData {
contents: "agent.data_integrity=1",
data_integrity: true,
..Default::default()
},
TestData {
contents: "agent.data_integrity=0",
data_integrity: false,
..Default::default()
},
TestData {
contents: "agent.enable_signature_verification=false",
enable_signature_verification: false,
..Default::default()
},
TestData {
contents: "agent.enable_signature_verification=0",
enable_signature_verification: false,
..Default::default()
},
TestData {
contents: "agent.enable_signature_verification=1",
enable_signature_verification: true,
..Default::default()
},
TestData {
contents: "agent.enable_signature_verification=foo",
enable_signature_verification: false,
..Default::default()
},
TestData {
contents: "agent.image_policy=file:///etc/policy.json",
image_policy_file: "file:///etc/policy.json",
..Default::default()
},
TestData {
contents: "agent.image_policy=kbs:///default/security-policy/test",
image_policy_file: "kbs:///default/security-policy/test",
..Default::default()
},
TestData {
contents: "agent.image_policy=kbs://example.kbs.org/default/security-policy/test",
image_policy_file: "kbs://example.kbs.org/default/security-policy/test",
..Default::default()
},
TestData {
contents: "agent.image_registry_auth=file:///etc/auth.json",
image_registry_auth_file: "file:///etc/auth.json",
..Default::default()
},
TestData {
contents: "agent.image_registry_auth=kbs:///default/credential/test",
image_registry_auth_file: "kbs:///default/credential/test",
..Default::default()
},
TestData {
contents: "agent.image_registry_auth=kbs://example.kbs.org/default/credential/test",
image_registry_auth_file: "kbs://example.kbs.org/default/credential/test",
..Default::default()
},
TestData {
contents: "agent.simple_signing_sigstore_config=file:///etc/containers/signature/default.yml",
simple_signing_sigstore_config: "file:///etc/containers/signature/default.yml",
..Default::default()
},
TestData {
contents: "agent.simple_signing_sigstore_config=kbs:///default/sigstore-config/test",
simple_signing_sigstore_config: "kbs:///default/sigstore-config/test",
..Default::default()
},
TestData {
contents: "agent.simple_signing_sigstore_config=kbs://example.kbs.org/default/sigstore-config/test",
simple_signing_sigstore_config: "kbs://example.kbs.org/default/sigstore-config/test",
..Default::default()
},
];
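
The `aa_kbc_params` test values above follow a `<kbc-name>::<kbc-params>` convention (e.g. `offline_fs_kbc::null`, `eaa_kbc::127.0.0.1:50000`). A hypothetical helper — not part of the diff — that splits such a value on its first `::` separator:

```rust
// Split an aa_kbc_params value of the form "<name>::<params>" into its
// two halves; returns None if the "::" separator is missing entirely.
fn split_kbc_params(params: &str) -> Option<(&str, &str)> {
    let idx = params.find("::")?;
    Some((&params[..idx], &params[idx + 2..]))
}
```

Splitting on the *first* `::` matters because the parameter half may itself contain colons, as in the `eaa_kbc::127.0.0.1:50000` case exercised by the tests.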
let dir = tempdir().expect("failed to create tmpdir");
@@ -1155,31 +900,6 @@ mod tests {
assert_eq!(d.container_pipe_size, config.container_pipe_size, "{}", msg);
assert_eq!(d.server_addr, config.server_addr, "{}", msg);
assert_eq!(d.tracing, config.tracing, "{}", msg);
assert_eq!(
d.container_policy_path, config.container_policy_path,
"{}",
msg
);
assert_eq!(d.aa_kbc_params, config.aa_kbc_params, "{}", msg);
assert_eq!(d.https_proxy, config.https_proxy, "{}", msg);
assert_eq!(d.no_proxy, config.no_proxy, "{}", msg);
assert_eq!(d.data_integrity, config.data_integrity, "{}", msg);
assert_eq!(
d.enable_signature_verification, config.enable_signature_verification,
"{}",
msg
);
assert_eq!(d.image_policy_file, config.image_policy_file, "{}", msg);
assert_eq!(
d.image_registry_auth_file, config.image_registry_auth_file,
"{}",
msg
);
assert_eq!(
d.simple_signing_sigstore_config, config.simple_signing_sigstore_config,
"{}",
msg
);
for v in vars_to_unset {
env::remove_var(v);
@@ -1681,50 +1401,4 @@ Caused by:
// Verify that the default values are valid
assert_eq!(config.hotplug_timeout, DEFAULT_HOTPLUG_TIMEOUT);
}
#[test]
fn test_config_from_cmdline_and_config_file() {
let dir = tempdir().expect("failed to create tmpdir");
let agent_config = r#"
dev_mode = false
server_addr = 'vsock://8:2048'
[endpoints]
allowed = ["CreateContainer", "StartContainer"]
"#;
let config_path = dir.path().join("agent-config.toml");
let config_filename = config_path.to_str().expect("failed to get config filename");
fs::write(config_filename, agent_config).expect("failed to write agent config");
let cmdline = format!("agent.devmode agent.config_file={}", config_filename);
let cmdline_path = dir.path().join("cmdline");
let cmdline_filename = cmdline_path
.to_str()
.expect("failed to get cmdline filename");
fs::write(cmdline_filename, cmdline).expect("failed to write cmdline");
let config = AgentConfig::from_cmdline(cmdline_filename, vec![])
.expect("failed to parse command line");
// Should be overwritten by cmdline
assert!(config.dev_mode);
// Should be from agent config
assert_eq!(config.server_addr, "vsock://8:2048");
// Should be from agent config
assert_eq!(
config.endpoints.allowed,
vec!["CreateContainer".to_string(), "StartContainer".to_string()]
.iter()
.cloned()
.collect()
);
assert!(!config.endpoints.all_allowed);
}
}


@@ -651,15 +651,13 @@ fn update_spec_devices(spec: &mut Spec, mut updates: HashMap<&str, DevUpdate>) -
if let Some(resources) = linux.resources.as_mut() {
for r in &mut resources.devices {
if let (Some(host_type), Some(host_major), Some(host_minor)) =
(r.r#type.as_ref(), r.major, r.minor)
{
if let Some(update) = res_updates.get(&(host_type.as_str(), host_major, host_minor))
if let (Some(host_major), Some(host_minor)) = (r.major, r.minor) {
if let Some(update) = res_updates.get(&(r.r#type.as_str(), host_major, host_minor))
{
info!(
sl(),
"update_spec_devices() updating resource";
"type" => &host_type,
"type" => &r.r#type,
"host_major" => host_major,
"host_minor" => host_minor,
"guest_major" => update.guest_major,
@@ -971,7 +969,7 @@ pub fn update_device_cgroup(spec: &mut Spec) -> Result<()> {
allow: false,
major: Some(major),
minor: Some(minor),
r#type: Some(String::from("b")),
r#type: String::from("b"),
access: String::from("rw"),
});
@@ -1134,13 +1132,13 @@ mod tests {
resources: Some(LinuxResources {
devices: vec![
oci::LinuxDeviceCgroup {
r#type: Some("c".to_string()),
r#type: "c".to_string(),
major: Some(host_major_a),
minor: Some(host_minor_a),
..oci::LinuxDeviceCgroup::default()
},
oci::LinuxDeviceCgroup {
r#type: Some("c".to_string()),
r#type: "c".to_string(),
major: Some(host_major_b),
minor: Some(host_minor_b),
..oci::LinuxDeviceCgroup::default()
@@ -1233,13 +1231,13 @@ mod tests {
resources: Some(LinuxResources {
devices: vec![
LinuxDeviceCgroup {
r#type: Some("c".to_string()),
r#type: "c".to_string(),
major: Some(host_major),
minor: Some(host_minor),
..LinuxDeviceCgroup::default()
},
LinuxDeviceCgroup {
r#type: Some("b".to_string()),
r#type: "b".to_string(),
major: Some(host_major),
minor: Some(host_minor),
..LinuxDeviceCgroup::default()


@@ -1,366 +0,0 @@
// Copyright (c) 2021 Alibaba Cloud
// Copyright (c) 2021, 2023 IBM Corporation
// Copyright (c) 2022 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
//
use std::env;
use std::fs;
use std::path::Path;
use std::process::Command;
use std::sync::atomic::{AtomicBool, AtomicU16, Ordering};
use std::sync::Arc;
use anyhow::{anyhow, Result};
use async_trait::async_trait;
use protocols::image;
use tokio::sync::Mutex;
use ttrpc::{self, error::get_rpc_status as ttrpc_error};
use crate::rpc::{verify_cid, CONTAINER_BASE};
use crate::sandbox::Sandbox;
use crate::AGENT_CONFIG;
use image_rs::image::ImageClient;
use std::io::Write;
const AA_PATH: &str = "/usr/local/bin/attestation-agent";
const AA_KEYPROVIDER_URI: &str =
"unix:///run/confidential-containers/attestation-agent/keyprovider.sock";
const AA_GETRESOURCE_URI: &str =
"unix:///run/confidential-containers/attestation-agent/getresource.sock";
const OCICRYPT_CONFIG_PATH: &str = "/tmp/ocicrypt_config.json";
// kata rootfs is readonly, use tmpfs before CC storage is implemented.
const KATA_CC_IMAGE_WORK_DIR: &str = "/run/image/";
const KATA_CC_PAUSE_BUNDLE: &str = "/pause_bundle";
const CONFIG_JSON: &str = "config.json";
// Convenience function to obtain the scope logger.
fn sl() -> slog::Logger {
slog_scope::logger().new(o!("subsystem" => "cgroups"))
}
pub struct ImageService {
sandbox: Arc<Mutex<Sandbox>>,
attestation_agent_started: AtomicBool,
image_client: Arc<Mutex<ImageClient>>,
container_count: Arc<AtomicU16>,
}
impl ImageService {
pub async fn new(sandbox: Arc<Mutex<Sandbox>>) -> Self {
env::set_var("CC_IMAGE_WORK_DIR", KATA_CC_IMAGE_WORK_DIR);
let mut image_client = ImageClient::default();
let image_policy_file = &AGENT_CONFIG.image_policy_file;
if !image_policy_file.is_empty() {
image_client.config.file_paths.sigstore_config = image_policy_file.clone();
}
let simple_signing_sigstore_config = &AGENT_CONFIG.simple_signing_sigstore_config;
if !simple_signing_sigstore_config.is_empty() {
image_client.config.file_paths.sigstore_config = simple_signing_sigstore_config.clone();
}
let image_registry_auth_file = &AGENT_CONFIG.image_registry_auth_file;
if !image_registry_auth_file.is_empty() {
image_client.config.file_paths.auth_file = image_registry_auth_file.clone();
}
Self {
sandbox,
attestation_agent_started: AtomicBool::new(false),
image_client: Arc::new(Mutex::new(image_client)),
container_count: Arc::new(AtomicU16::new(0)),
}
}
// pause image is packaged in rootfs for CC
fn unpack_pause_image(cid: &str) -> Result<()> {
let cc_pause_bundle = Path::new(KATA_CC_PAUSE_BUNDLE);
if !cc_pause_bundle.exists() {
return Err(anyhow!("Pause image not present in rootfs"));
}
info!(sl(), "use guest pause image cid {:?}", cid);
let pause_bundle = Path::new(CONTAINER_BASE).join(cid);
let pause_rootfs = pause_bundle.join("rootfs");
let pause_config = pause_bundle.join(CONFIG_JSON);
let pause_binary = pause_rootfs.join("pause");
fs::create_dir_all(&pause_rootfs)?;
if !pause_config.exists() {
fs::copy(
cc_pause_bundle.join(CONFIG_JSON),
pause_bundle.join(CONFIG_JSON),
)?;
}
if !pause_binary.exists() {
fs::copy(cc_pause_bundle.join("rootfs").join("pause"), pause_binary)?;
}
Ok(())
}
// If we fail to start the AA, ocicrypt won't be able to unwrap keys
// and container decryption will fail.
fn init_attestation_agent() -> Result<()> {
let config_path = OCICRYPT_CONFIG_PATH;
// The image will need to be encrypted using a keyprovider
// that has the same name (at least according to the config).
let ocicrypt_config = serde_json::json!({
"key-providers": {
"attestation-agent":{
"ttrpc":AA_KEYPROVIDER_URI
}
}
});
let mut config_file = fs::File::create(config_path)?;
config_file.write_all(ocicrypt_config.to_string().as_bytes())?;
// The Attestation Agent will run for the duration of the guest.
Command::new(AA_PATH)
.arg("--keyprovider_sock")
.arg(AA_KEYPROVIDER_URI)
.arg("--getresource_sock")
.arg(AA_GETRESOURCE_URI)
.spawn()?;
Ok(())
}
/// Determines the container id (cid) to use for a given request.
///
/// If the request specifies a non-empty id, use it; otherwise derive it from the image path.
/// In either case, verify that the chosen id is valid.
fn cid_from_request(&self, req: &image::PullImageRequest) -> Result<String> {
let req_cid = req.container_id();
let cid = if !req_cid.is_empty() {
req_cid.to_string()
} else if let Some(last) = req.image().rsplit('/').next() {
// Support multiple containers with same image
let index = self.container_count.fetch_add(1, Ordering::Relaxed);
// ':' not valid for container id
format!("{}_{}", last.replace(':', "_"), index)
} else {
return Err(anyhow!("Invalid image name. {}", req.image()));
};
verify_cid(&cid)?;
Ok(cid)
}
async fn pull_image(&self, req: &image::PullImageRequest) -> Result<String> {
env::set_var("OCICRYPT_KEYPROVIDER_CONFIG", OCICRYPT_CONFIG_PATH);
let https_proxy = &AGENT_CONFIG.https_proxy;
if !https_proxy.is_empty() {
env::set_var("HTTPS_PROXY", https_proxy);
}
let no_proxy = &AGENT_CONFIG.no_proxy;
if !no_proxy.is_empty() {
env::set_var("NO_PROXY", no_proxy);
}
let cid = self.cid_from_request(req)?;
let image = req.image();
if cid.starts_with("pause") {
Self::unpack_pause_image(&cid)?;
let mut sandbox = self.sandbox.lock().await;
sandbox.images.insert(String::from(image), cid);
return Ok(image.to_owned());
}
let aa_kbc_params = &AGENT_CONFIG.aa_kbc_params;
if !aa_kbc_params.is_empty() {
match self.attestation_agent_started.compare_exchange_weak(
false,
true,
Ordering::SeqCst,
Ordering::SeqCst,
) {
Ok(_) => Self::init_attestation_agent()?,
Err(_) => info!(sl(), "Attestation Agent already running"),
}
}
// If the attestation-agent is being used, then enable the authenticated credentials support
info!(
sl(),
"image_client.config.auth set to: {}",
!aa_kbc_params.is_empty()
);
self.image_client.lock().await.config.auth = !aa_kbc_params.is_empty();
// Read enable signature verification from the agent config and set it in the image_client
let enable_signature_verification = &AGENT_CONFIG.enable_signature_verification;
info!(
sl(),
"enable_signature_verification set to: {}", enable_signature_verification
);
self.image_client.lock().await.config.security_validate = *enable_signature_verification;
let source_creds = (!req.source_creds().is_empty()).then(|| req.source_creds());
let bundle_path = Path::new(CONTAINER_BASE).join(&cid);
fs::create_dir_all(&bundle_path)?;
let decrypt_config = format!("provider:attestation-agent:{}", aa_kbc_params);
info!(sl(), "pull image {:?}, bundle path {:?}", cid, bundle_path);
// Image layers will store at KATA_CC_IMAGE_WORK_DIR, generated bundles
// with rootfs and config.json will store under CONTAINER_BASE/cid.
let res = self
.image_client
.lock()
.await
.pull_image(image, &bundle_path, &source_creds, &Some(&decrypt_config))
.await;
match res {
Ok(image) => {
info!(
sl(),
"pull and unpack image {:?}, cid: {:?}, with image-rs succeed. ", image, cid
);
}
Err(e) => {
error!(
sl(),
"pull and unpack image {:?}, cid: {:?}, with image-rs failed with {:?}. ",
image,
cid,
e.to_string()
);
return Err(e);
}
};
let mut sandbox = self.sandbox.lock().await;
sandbox.images.insert(String::from(image), cid);
Ok(image.to_owned())
}
}
#[async_trait]
impl protocols::image_ttrpc_async::Image for ImageService {
async fn pull_image(
&self,
_ctx: &ttrpc::r#async::TtrpcContext,
req: image::PullImageRequest,
) -> ttrpc::Result<image::PullImageResponse> {
match self.pull_image(&req).await {
Ok(r) => {
let mut resp = image::PullImageResponse::new();
resp.image_ref = r;
return Ok(resp);
}
Err(e) => {
return Err(ttrpc_error(ttrpc::Code::INTERNAL, e.to_string()));
}
}
}
}
#[cfg(test)]
mod tests {
use super::ImageService;
use crate::sandbox::Sandbox;
use protocols::image;
use std::sync::Arc;
use tokio::sync::Mutex;
#[tokio::test]
async fn test_cid_from_request() {
struct Case {
cid: &'static str,
image: &'static str,
result: Option<&'static str>,
}
let cases = [
Case {
cid: "",
image: "",
result: None,
},
Case {
cid: "..",
image: "",
result: None,
},
Case {
cid: "",
image: "..",
result: None,
},
Case {
cid: "",
image: "abc/..",
result: None,
},
Case {
cid: "",
image: "abc/",
result: None,
},
Case {
cid: "",
image: "../abc",
result: Some("abc_4"),
},
Case {
cid: "",
image: "../9abc",
result: Some("9abc_5"),
},
Case {
cid: "some-string.1_2",
image: "",
result: Some("some-string.1_2"),
},
Case {
cid: "0some-string.1_2",
image: "",
result: Some("0some-string.1_2"),
},
Case {
cid: "a:b",
image: "",
result: None,
},
Case {
cid: "",
image: "prefix/a:b",
result: Some("a_b_6"),
},
Case {
cid: "",
image: "/a/b/c/d:e",
result: Some("d_e_7"),
},
];
let logger = slog::Logger::root(slog::Discard, o!());
let s = Sandbox::new(&logger).unwrap();
let image_service = ImageService::new(Arc::new(Mutex::new(s))).await;
for case in &cases {
let mut req = image::PullImageRequest::new();
req.set_image(case.image.to_string());
req.set_container_id(case.cid.to_string());
let ret = image_service.cid_from_request(&req);
match (case.result, ret) {
(Some(expected), Ok(actual)) => assert_eq!(expected, actual),
(None, Err(_)) => (),
(None, Ok(r)) => panic!("Expected an error, got {}", r),
(Some(expected), Err(e)) => {
panic!("Expected {} but got an error ({})", expected, e)
}
}
}
}
}
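
The `cid_from_request` logic exercised by the tests above can be sketched as a standalone function: when the request carries no container id, the image path's last segment is sanitized (`:` is not valid in a container id) and suffixed with a running counter so repeated pulls of the same image get distinct ids. This is a simplified sketch of the diff's logic; the real method additionally runs `verify_cid` on whichever id it picks:

```rust
// Derive a container id from an image reference and a per-service counter.
fn derive_cid(image: &str, index: u16) -> Option<String> {
    // rsplit('/').next() yields the last path segment ("" for a
    // trailing '/' or an empty image name, which we reject).
    let last = image.rsplit('/').next()?;
    if last.is_empty() {
        return None;
    }
    // ':' is not valid in a container id, so "a:b" becomes "a_b".
    Some(format!("{}_{}", last.replace(':', "_"), index))
}
```

This reproduces the expected values in the test table, e.g. `prefix/a:b` with counter 6 yields `a_b_6` and `/a/b/c/d:e` with counter 7 yields `d_e_7`.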


@@ -33,7 +33,7 @@ pub fn create_pci_root_bus_path() -> String {
// check if there is pci bus path for acpi
acpi_sysfs_dir.push_str(&acpi_root_bus_path);
if let Ok(_) = fs::metadata(&acpi_sysfs_dir) {
if fs::metadata(&acpi_sysfs_dir).is_ok() {
return acpi_root_bus_path;
}


@@ -70,7 +70,6 @@ use tokio::{
task::JoinHandle,
};
mod image_rpc;
mod rpc;
mod tracer;
@@ -345,7 +344,7 @@ async fn start_sandbox(
sandbox.lock().await.sender = Some(tx);
// vsock:///dev/vsock, port
let mut server = rpc::start(sandbox.clone(), config.server_addr.as_str(), init_mode).await?;
let mut server = rpc::start(sandbox.clone(), config.server_addr.as_str(), init_mode)?;
server.start().await?;
rx.await?;


@@ -36,6 +36,7 @@ use crate::Sandbox;
use crate::{ccw, device::get_virtio_blk_ccw_device_name};
use anyhow::{anyhow, Context, Result};
use slog::Logger;
use tracing::instrument;
pub const TYPE_ROOTFS: &str = "rootfs";
@@ -145,6 +146,11 @@ pub const STORAGE_HANDLER_LIST: &[&str] = &[
DRIVER_WATCHABLE_BIND_TYPE,
];
#[instrument]
pub fn get_mounts() -> Result<String, std::io::Error> {
fs::read_to_string("/proc/mounts")
}
#[instrument]
pub fn baremount(
source: &Path,
@@ -168,6 +174,31 @@ pub fn baremount(
return Err(anyhow!("need mount FS type"));
}
let destination_str = destination.to_string_lossy();
let mounts = get_mounts().unwrap_or_else(|_| String::new());
let already_mounted = mounts
.lines()
.map(|line| line.split_whitespace().collect::<Vec<&str>>())
.filter(|parts| parts.len() >= 3) // ensure we have at least source, destination, and fs_type
.any(|parts| {
// Check if source, destination and fs_type match any entry in /proc/mounts
// minimal check is for destination and fs_type, since source can have different names like:
// udev /dev devtmpfs
// dev /dev devtmpfs
// depending on which entity is mounting the dev/fs/pseudo-fs
parts[1] == destination_str && parts[2] == fs_type
});
if already_mounted {
slog_info!(
logger,
"{:?} is already mounted at {:?}",
source,
destination
);
return Ok(());
}
info!(
logger,
"baremount source={:?}, dest={:?}, fs_type={:?}, options={:?}, flags={:?}",
@@ -725,6 +756,14 @@ pub fn recursive_ownership_change(
mask |= EXEC_MASK;
mask |= MODE_SETGID;
}
// We do not want to change the permission of the underlying file
// using symlink. Hence we skip symlinks from recursive ownership
// and permission changes.
if path.is_symlink() {
return Ok(());
}
nix::unistd::chown(path, uid, gid)?;
if gid.is_some() {
@@ -1102,6 +1141,7 @@ fn parse_options(option_list: Vec<String>) -> HashMap<String, String> {
mod tests {
use super::*;
use protocols::agent::FSGroup;
use slog::Drain;
use std::fs::File;
use std::fs::OpenOptions;
use std::io::Write;
@@ -1112,6 +1152,31 @@ mod tests {
skip_if_not_root, skip_loop_by_user, skip_loop_if_not_root, skip_loop_if_root,
};
#[test]
fn test_already_baremounted() {
let plain = slog_term::PlainSyncDecorator::new(std::io::stdout());
let logger = Logger::root(slog_term::FullFormat::new(plain).build().fuse(), o!());
let test_cases = [
("dev", "/dev", "devtmpfs"),
("udev", "/dev", "devtmpfs"),
("proc", "/proc", "proc"),
("sysfs", "/sys", "sysfs"),
];
for &(source, destination, fs_type) in &test_cases {
let source = Path::new(source);
let destination = Path::new(destination);
let flags = MsFlags::MS_RDONLY;
let options = "mode=755";
println!(
"testing if already mounted baremount({:?} {:?} {:?})",
source, destination, fs_type
);
assert!(baremount(source, destination, fs_type, flags, options, &logger).is_ok());
}
}
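The duplicate-mount detection this test exercises compares only the destination and filesystem-type columns of `/proc/mounts`; a minimal standalone sketch (the `is_already_mounted` helper name is hypothetical — the patch inlines this logic in `baremount`):

```rust
// Sketch: a mount counts as "already mounted" when any /proc/mounts entry
// matches on destination and fs_type. The source column is deliberately
// ignored, since it can differ ("udev" vs "dev") depending on who mounted it.
fn is_already_mounted(proc_mounts: &str, destination: &str, fs_type: &str) -> bool {
    proc_mounts
        .lines()
        .map(|line| line.split_whitespace().collect::<Vec<&str>>())
        .filter(|parts| parts.len() >= 3) // need source, destination and fs_type
        .any(|parts| parts[1] == destination && parts[2] == fs_type)
}

fn main() {
    let mounts = "udev /dev devtmpfs rw 0 0\nproc /proc proc rw 0 0\n";
    assert!(is_already_mounted(mounts, "/dev", "devtmpfs"));
    assert!(!is_already_mounted(mounts, "/dev", "tmpfs"));
    assert!(!is_already_mounted("garbage", "/dev", "devtmpfs"));
}
```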
#[test]
fn test_mount() {
#[derive(Debug)]


@@ -37,10 +37,7 @@ use protocols::health::{
VersionCheckResponse,
};
use protocols::types::Interface;
use protocols::{
agent_ttrpc_async as agent_ttrpc, health_ttrpc_async as health_ttrpc,
image_ttrpc_async as image_ttrpc,
};
use protocols::{agent_ttrpc_async as agent_ttrpc, health_ttrpc_async as health_ttrpc};
use rustjail::cgroups::notifier;
use rustjail::container::{BaseContainer, Container, LinuxContainer, SYSTEMD_CGROUP_PATH_FORMAT};
use rustjail::mount::parse_mount_table;
@@ -56,7 +53,6 @@ use rustjail::process::ProcessOperations;
use crate::device::{
add_devices, get_virtio_blk_pci_device_name, update_device_cgroup, update_env_pci,
};
use crate::image_rpc;
use crate::linux_abi::*;
use crate::metrics::get_metrics;
use crate::mount::{add_storages, baremount, update_ephemeral_mounts, STORAGE_HANDLER_LIST};
@@ -88,12 +84,8 @@ use std::io::{BufRead, BufReader, Write};
use std::os::unix::fs::FileExt;
use std::path::PathBuf;
pub const CONTAINER_BASE: &str = "/run/kata-containers";
const CONTAINER_BASE: &str = "/run/kata-containers";
const MODPROBE_PATH: &str = "/sbin/modprobe";
const ANNO_K8S_IMAGE_NAME: &str = "io.kubernetes.cri.image-name";
const CONFIG_JSON: &str = "config.json";
const INIT_TRUSTED_STORAGE: &str = "/usr/bin/kata-init-trusted-storage";
const TRUSTED_STORAGE_DEVICE: &str = "/dev/trusted_store";
/// the iptables series binaries could appear either in /sbin
/// or /usr/sbin, we need to check both of them
@@ -145,41 +137,6 @@ pub struct AgentService {
init_mode: bool,
}
// A container ID must match this regex:
//
// ^[a-zA-Z0-9][a-zA-Z0-9_.-]+$
//
pub fn verify_cid(id: &str) -> Result<()> {
let mut chars = id.chars();
let valid = matches!(chars.next(), Some(first) if first.is_alphanumeric()
&& id.len() > 1
&& chars.all(|c| c.is_alphanumeric() || ['.', '-', '_'].contains(&c)));
match valid {
true => Ok(()),
false => Err(anyhow!("invalid container ID: {:?}", id)),
}
}
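The check above is a regex-free equivalent of the stated pattern; a self-contained sketch of the same predicate:

```rust
// First character must be alphanumeric, the ID must be longer than one
// character, and the remainder may only contain alphanumerics, '.', '-', '_'.
fn verify_cid(id: &str) -> Result<(), String> {
    let mut chars = id.chars();
    let valid = matches!(chars.next(), Some(first) if first.is_alphanumeric()
        && id.len() > 1
        && chars.all(|c| c.is_alphanumeric() || ['.', '-', '_'].contains(&c)));
    if valid {
        Ok(())
    } else {
        Err(format!("invalid container ID: {:?}", id))
    }
}

fn main() {
    assert!(verify_cid("0some-string.1_2").is_ok());
    assert!(verify_cid("a").is_err()); // too short
    assert!(verify_cid("a:b").is_err()); // ':' not allowed
    assert!(verify_cid("").is_err());
}
```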
// Partially merge an OCI process specification into another one.
fn merge_oci_process(target: &mut oci::Process, source: &oci::Process) {
if target.args.is_empty() && !source.args.is_empty() {
target.args.append(&mut source.args.clone());
}
if target.cwd == "/" && source.cwd != "/" {
target.cwd = String::from(&source.cwd);
}
for source_env in &source.env {
let variable_name: Vec<&str> = source_env.split('=').collect();
if !target.env.iter().any(|i| i.contains(variable_name[0])) {
target.env.push(source_env.to_string());
}
}
}
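The merge semantics removed here (container spec wins; image spec only fills gaps) can be sketched with a simplified stand-in for `oci::Process`. Note the sketch compares the variable name exactly, where the original uses a looser substring `contains` check:

```rust
// Simplified stand-in for oci::Process (hypothetical, for illustration only).
#[derive(Default, Debug)]
struct Process {
    args: Vec<String>,
    cwd: String,
    env: Vec<String>,
}

// Container (target) values win; image (source) values only fill gaps.
fn merge_process(target: &mut Process, source: &Process) {
    if target.args.is_empty() && !source.args.is_empty() {
        target.args.extend(source.args.iter().cloned());
    }
    if target.cwd == "/" && source.cwd != "/" {
        target.cwd = source.cwd.clone();
    }
    for source_env in &source.env {
        let name = source_env.split('=').next().unwrap_or("");
        if !target.env.iter().any(|e| e.split('=').next() == Some(name)) {
            target.env.push(source_env.clone());
        }
    }
}

fn main() {
    let mut target = Process {
        cwd: "/".to_string(),
        env: vec!["ISPRODUCTION=true".to_string()],
        ..Default::default()
    };
    let source = Process {
        cwd: "/imageDir".to_string(),
        env: vec![
            "ISPRODUCTION=false".to_string(),
            "ISDEVELOPMENT=true".to_string(),
        ],
        ..Default::default()
    };
    merge_process(&mut target, &source);
    assert_eq!(target.cwd, "/imageDir");
    assert_eq!(
        target.env,
        vec!["ISPRODUCTION=true".to_string(), "ISDEVELOPMENT=true".to_string()]
    );
}
```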
impl AgentService {
#[instrument]
async fn do_create_container(
@@ -210,9 +167,6 @@ impl AgentService {
"receive createcontainer, storages: {:?}", &req.storages
);
// Merge the image bundle OCI spec into the container creation request OCI spec.
self.merge_bundle_oci(&mut oci).await?;
// Some devices need some extra processing (the ones invoked with
// --device for instance), and that's what this call is doing. It
// updates the devices listed in the OCI spec, so that they actually
@@ -220,30 +174,6 @@ impl AgentService {
// cannot predict everything from the caller.
add_devices(&req.devices.to_vec(), &mut oci, &self.sandbox).await?;
let linux = oci
.linux
.as_mut()
.ok_or_else(|| anyhow!("Spec didn't contain linux field"))?;
for specdev in &mut linux.devices {
let dev_major_minor = format!("{}:{}", specdev.major, specdev.minor);
if specdev.path == TRUSTED_STORAGE_DEVICE {
let data_integrity = AGENT_CONFIG.data_integrity;
info!(
sl(),
"trusted_store device major:min {}, enable data integrity {}",
dev_major_minor,
data_integrity.to_string()
);
Command::new(INIT_TRUSTED_STORAGE)
.args([&dev_major_minor, &data_integrity.to_string()])
.output()
.expect("Failed to initialize confidential storage");
}
}
// Both rootfs and volumes (invoked with --volume for instance) will
// be processed the same way. The idea is to always mount any provided
// storage to the specified MountPoint, so that it will match what's
@@ -665,15 +595,16 @@ impl AgentService {
let cid = req.container_id;
let eid = req.exec_id;
let mut term_exit_notifier = Arc::new(tokio::sync::Notify::new());
let term_exit_notifier;
let reader = {
let s = self.sandbox.clone();
let mut sandbox = s.lock().await;
let p = sandbox.find_container_process(cid.as_str(), eid.as_str())?;
term_exit_notifier = p.term_exit_notifier.clone();
if p.term_master.is_some() {
term_exit_notifier = p.term_exit_notifier.clone();
p.get_reader(StreamType::TermMaster)
} else if stdout {
if p.parent_stdout.is_some() {
@@ -693,9 +624,12 @@ impl AgentService {
let reader = reader.ok_or_else(|| anyhow!("cannot get stream reader"))?;
tokio::select! {
_ = term_exit_notifier.notified() => {
Err(anyhow!("eof"))
}
// Poll the futures in the order they appear from top to bottom.
// This is very important to avoid data loss: if there is still
// data in the buffer, the read_stream branch will return
// Poll::Ready, so the term_exit_notifier will never be polled
// before all data has been read.
biased;
v = read_stream(reader, req.len as usize) => {
let vector = v?;
let mut resp = ReadStreamResponse::new();
@@ -703,55 +637,10 @@ impl AgentService {
Ok(resp)
}
}
}
// When being passed an image name through a container annotation, merge its
// corresponding bundle OCI specification into the passed container creation one.
async fn merge_bundle_oci(&self, container_oci: &mut oci::Spec) -> Result<()> {
if let Some(image_name) = container_oci
.annotations
.get(&ANNO_K8S_IMAGE_NAME.to_string())
{
if let Some(container_id) = self.sandbox.clone().lock().await.images.get(image_name) {
let image_oci_config_path = Path::new(CONTAINER_BASE)
.join(container_id)
.join(CONFIG_JSON);
debug!(
sl(),
"Image bundle config path: {:?}", image_oci_config_path
);
let image_oci =
oci::Spec::load(image_oci_config_path.to_str().ok_or_else(|| {
anyhow!(
"Invalid container image OCI config path {:?}",
image_oci_config_path
)
})?)
.context("load image bundle")?;
if let Some(container_root) = container_oci.root.as_mut() {
if let Some(image_root) = image_oci.root.as_ref() {
let root_path = Path::new(CONTAINER_BASE)
.join(container_id)
.join(image_root.path.clone());
container_root.path =
String::from(root_path.to_str().ok_or_else(|| {
anyhow!("Invalid container image root path {:?}", root_path)
})?);
}
}
if let Some(container_process) = container_oci.process.as_mut() {
if let Some(image_process) = image_oci.process.as_ref() {
merge_oci_process(container_process, image_process);
}
}
_ = term_exit_notifier.notified() => {
Err(anyhow!("eof"))
}
}
Ok(())
}
}
@@ -1834,13 +1723,9 @@ async fn read_stream(reader: Arc<Mutex<ReadHalf<PipeStream>>>, l: usize) -> Resu
Ok(content)
}
pub async fn start(
s: Arc<Mutex<Sandbox>>,
server_address: &str,
init_mode: bool,
) -> Result<TtrpcServer> {
pub fn start(s: Arc<Mutex<Sandbox>>, server_address: &str, init_mode: bool) -> Result<TtrpcServer> {
let agent_service = Box::new(AgentService {
sandbox: s.clone(),
sandbox: s,
init_mode,
}) as Box<dyn agent_ttrpc::AgentService + Send + Sync>;
@@ -1849,20 +1734,14 @@ pub async fn start(
let health_service = Box::new(HealthService {}) as Box<dyn health_ttrpc::Health + Send + Sync>;
let health_worker = Arc::new(health_service);
let image_service = Box::new(image_rpc::ImageService::new(s).await)
as Box<dyn image_ttrpc::Image + Send + Sync>;
let aservice = agent_ttrpc::create_agent_service(agent_worker);
let hservice = health_ttrpc::create_health(health_worker);
let iservice = image_ttrpc::create_image(Arc::new(image_service));
let server = TtrpcServer::new()
.bind(server_address)?
.register_service(aservice)
.register_service(hservice)
.register_service(iservice);
.register_service(hservice);
info!(sl(), "ttRPC server started"; "address" => server_address);
@@ -2063,38 +1942,6 @@ fn do_copy_file(req: &CopyFileRequest) -> Result<()> {
}
}
let sflag = stat::SFlag::from_bits_truncate(req.file_mode);
if sflag.contains(stat::SFlag::S_IFDIR) {
fs::create_dir(path.clone()).or_else(|e| {
if e.kind() != std::io::ErrorKind::AlreadyExists {
return Err(e);
}
Ok(())
})?;
std::fs::set_permissions(path.clone(), std::fs::Permissions::from_mode(req.file_mode))?;
unistd::chown(
&path,
Some(Uid::from_raw(req.uid as u32)),
Some(Gid::from_raw(req.gid as u32)),
)?;
return Ok(());
}
if sflag.contains(stat::SFlag::S_IFLNK) {
let src = PathBuf::from(String::from_utf8(req.data.clone()).unwrap());
unistd::symlinkat(&src, None, &path)?;
let path_str = CString::new(path.to_str().unwrap())?;
let ret = unsafe { libc::lchown(path_str.as_ptr(), req.uid as u32, req.gid as u32) };
Errno::result(ret).map(drop)?;
return Ok(());
}
let mut tmpfile = path.clone();
tmpfile.set_extension("tmp");
@@ -2160,26 +2007,18 @@ pub fn setup_bundle(cid: &str, spec: &mut Spec) -> Result<PathBuf> {
let spec_root_path = Path::new(&spec_root.path);
let bundle_path = Path::new(CONTAINER_BASE).join(cid);
let config_path = bundle_path.join(CONFIG_JSON);
let config_path = bundle_path.join("config.json");
let rootfs_path = bundle_path.join("rootfs");
let rootfs_exists = Path::new(&rootfs_path).exists();
info!(
fs::create_dir_all(&rootfs_path)?;
baremount(
spec_root_path,
&rootfs_path,
"bind",
MsFlags::MS_BIND,
"",
&sl(),
"The rootfs_path is {:?} and exists: {}", rootfs_path, rootfs_exists
);
if !rootfs_exists {
fs::create_dir_all(&rootfs_path)?;
baremount(
spec_root_path,
&rootfs_path,
"bind",
MsFlags::MS_BIND,
"",
&sl(),
)?;
}
)?;
let rootfs_path_name = rootfs_path
.to_str()
@@ -3160,135 +2999,4 @@ COMMIT
"We should see the resulting rule"
);
}
#[tokio::test]
async fn test_merge_cwd() {
#[derive(Debug)]
struct TestData<'a> {
container_process_cwd: &'a str,
image_process_cwd: &'a str,
expected: &'a str,
}
let tests = &[
// Image cwd should override blank container cwd
// TODO - how can we tell the user didn't specifically set it to `/` vs not setting at all? Is that scenario valid?
TestData {
container_process_cwd: "/",
image_process_cwd: "/imageDir",
expected: "/imageDir",
},
// Container cwd should override image cwd
TestData {
container_process_cwd: "/containerDir",
image_process_cwd: "/imageDir",
expected: "/containerDir",
},
// Container cwd should override blank image cwd
TestData {
container_process_cwd: "/containerDir",
image_process_cwd: "/",
expected: "/containerDir",
},
];
for (i, d) in tests.iter().enumerate() {
let msg = format!("test[{}]: {:?}", i, d);
let mut container_process = oci::Process {
cwd: d.container_process_cwd.to_string(),
..Default::default()
};
let image_process = oci::Process {
cwd: d.image_process_cwd.to_string(),
..Default::default()
};
merge_oci_process(&mut container_process, &image_process);
assert_eq!(d.expected, container_process.cwd, "{}", msg);
}
}
#[tokio::test]
async fn test_merge_env() {
#[derive(Debug)]
struct TestData {
container_process_env: Vec<String>,
image_process_env: Vec<String>,
expected: Vec<String>,
}
let tests = &[
// Test that the pods environment overrides the images
TestData {
container_process_env: vec!["ISPRODUCTION=true".to_string()],
image_process_env: vec!["ISPRODUCTION=false".to_string()],
expected: vec!["ISPRODUCTION=true".to_string()],
},
// Test that multiple environment variables can be overridden
TestData {
container_process_env: vec![
"ISPRODUCTION=true".to_string(),
"ISDEVELOPMENT=false".to_string(),
],
image_process_env: vec![
"ISPRODUCTION=false".to_string(),
"ISDEVELOPMENT=true".to_string(),
],
expected: vec![
"ISPRODUCTION=true".to_string(),
"ISDEVELOPMENT=false".to_string(),
],
},
// Test that when none of the variables match, none are overridden
TestData {
container_process_env: vec!["ANOTHERENV=TEST".to_string()],
image_process_env: vec![
"ISPRODUCTION=false".to_string(),
"ISDEVELOPMENT=true".to_string(),
],
expected: vec![
"ANOTHERENV=TEST".to_string(),
"ISPRODUCTION=false".to_string(),
"ISDEVELOPMENT=true".to_string(),
],
},
// Test a mix of both overriding and not
TestData {
container_process_env: vec![
"ANOTHERENV=TEST".to_string(),
"ISPRODUCTION=true".to_string(),
],
image_process_env: vec![
"ISPRODUCTION=false".to_string(),
"ISDEVELOPMENT=true".to_string(),
],
expected: vec![
"ANOTHERENV=TEST".to_string(),
"ISPRODUCTION=true".to_string(),
"ISDEVELOPMENT=true".to_string(),
],
},
];
for (i, d) in tests.iter().enumerate() {
let msg = format!("test[{}]: {:?}", i, d);
let mut container_process = oci::Process {
env: d.container_process_env.clone(),
..Default::default()
};
let image_process = oci::Process {
env: d.image_process_env.clone(),
..Default::default()
};
merge_oci_process(&mut container_process, &image_process);
assert_eq!(d.expected, container_process.env, "{}", msg);
}
}
}


@@ -62,7 +62,6 @@ pub struct Sandbox {
pub event_tx: Option<Sender<String>>,
pub bind_watcher: BindWatcher,
pub pcimap: HashMap<pci::Address, pci::Address>,
pub images: HashMap<String, String>,
}
impl Sandbox {
@@ -96,7 +95,6 @@ impl Sandbox {
event_tx: Some(tx),
bind_watcher: BindWatcher::new(),
pcimap: HashMap::new(),
images: HashMap::new(),
})
}
@@ -435,7 +433,7 @@ fn online_resources(logger: &Logger, path: &str, pattern: &str, num: i32) -> Res
}
// Max wait for all CPUs to come online: 50 ms * 100 retries = 5 seconds.
const ONLINE_CPUMEM_WATI_MILLIS: u64 = 50;
const ONLINE_CPUMEM_WAIT_MILLIS: u64 = 50;
const ONLINE_CPUMEM_MAX_RETRIES: i32 = 100;
#[instrument]
@@ -465,7 +463,7 @@ fn online_cpus(logger: &Logger, num: i32) -> Result<i32> {
);
return Ok(num);
}
thread::sleep(time::Duration::from_millis(ONLINE_CPUMEM_WATI_MILLIS));
thread::sleep(time::Duration::from_millis(ONLINE_CPUMEM_WAIT_MILLIS));
}
Err(anyhow!(


@@ -57,7 +57,7 @@ async fn handle_sigchild(logger: Logger, sandbox: Arc<Mutex<Sandbox>>) -> Result
continue;
}
let mut p = process.unwrap();
let p = process.unwrap();
let ret: i32 = match wait_status {
WaitStatus::Exited(_, c) => c,

src/dragonball/Cargo.lock (generated)

File diff suppressed because it is too large.


@@ -10,18 +10,19 @@ license = "Apache-2.0"
edition = "2018"
[dependencies]
anyhow = "1.0.32"
arc-swap = "1.5.0"
bytes = "1.1.0"
dbs-address-space = "0.3.0"
dbs-allocator = "0.1.0"
dbs-arch = "0.2.0"
dbs-boot = "0.4.0"
dbs-device = "0.2.0"
dbs-interrupt = { version = "0.2.0", features = ["kvm-irq"] }
dbs-legacy-devices = "0.1.0"
dbs-upcall = { version = "0.3.0", optional = true }
dbs-utils = "0.2.0"
dbs-virtio-devices = { version = "0.3.1", optional = true, features = ["virtio-mmio"] }
dbs-address-space = { path = "./src/dbs_address_space" }
dbs-allocator = { path = "./src/dbs_allocator" }
dbs-arch = { path = "./src/dbs_arch" }
dbs-boot = { path = "./src/dbs_boot" }
dbs-device = { path = "./src/dbs_device" }
dbs-interrupt = { path = "./src/dbs_interrupt", features = ["kvm-irq"] }
dbs-legacy-devices = { path = "./src/dbs_legacy_devices" }
dbs-upcall = { path = "./src/dbs_upcall" , optional = true }
dbs-utils = { path = "./src/dbs_utils" }
dbs-virtio-devices = { path = "./src/dbs_virtio_devices", optional = true, features = ["virtio-mmio"] }
kvm-bindings = "0.6.0"
kvm-ioctls = "0.12.0"
lazy_static = "1.2"
@@ -29,6 +30,8 @@ libc = "0.2.39"
linux-loader = "0.6.0"
log = "0.4.14"
nix = "0.24.2"
procfs = "0.12.0"
prometheus = { version = "0.13.0", features = ["process"] }
seccompiler = "0.2.0"
serde = "1.0.27"
serde_derive = "1.0.27"
@@ -42,8 +45,8 @@ vm-memory = { version = "0.9.0", features = ["backend-mmap"] }
crossbeam-channel = "0.5.6"
[dev-dependencies]
slog-term = "2.9.0"
slog-async = "2.7.0"
slog-term = "2.9.0"
test-utils = { path = "../libs/test-utils" }
[features]


@@ -39,12 +39,15 @@ clean:
test:
ifdef SUPPORT_VIRTUALIZATION
cargo test --all-features --target $(TRIPLE) -- --nocapture
RUST_BACKTRACE=1 cargo test --all-features --target $(TRIPLE) -- --nocapture --test-threads=1
else
@echo "INFO: skip testing dragonball, it needs virtualization support."
exit 0
endif
coverage:
RUST_BACKTRACE=1 cargo llvm-cov --all-features --target $(TRIPLE) -- --nocapture --test-threads=1
endif # ifeq ($(ARCH), s390x)
.DEFAULT_GOAL := default


@@ -16,10 +16,22 @@ and configuration process.
# Documentation
Device: [Device Document](docs/device.md)
vCPU: [vCPU Document](docs/vcpu.md)
API: [API Document](docs/api.md)
`Upcall`: [`Upcall` Document](docs/upcall.md)
- Device: [Device Document](docs/device.md)
- vCPU: [vCPU Document](docs/vcpu.md)
- API: [API Document](docs/api.md)
- `Upcall`: [`Upcall` Document](docs/upcall.md)
- `dbs_acpi`: [`dbs_acpi` Document](src/dbs_acpi/README.md)
- `dbs_address_space`: [`dbs_address_space` Document](src/dbs_address_space/README.md)
- `dbs_allocator`: [`dbs_allocator` Document](src/dbs_allocator/README.md)
- `dbs_arch`: [`dbs_arch` Document](src/dbs_arch/README.md)
- `dbs_boot`: [`dbs_boot` Document](src/dbs_boot/README.md)
- `dbs_device`: [`dbs_device` Document](src/dbs_device/README.md)
- `dbs_interrupt`: [`dbs_interrupt` Document](src/dbs_interrupt/README.md)
- `dbs_legacy_devices`: [`dbs_legacy_devices` Document](src/dbs_legacy_devices/README.md)
- `dbs_tdx`: [`dbs_tdx` Document](src/dbs_tdx/README.md)
- `dbs_upcall`: [`dbs_upcall` Document](src/dbs_upcall/README.md)
- `dbs_utils`: [`dbs_utils` Document](src/dbs_utils/README.md)
- `dbs_virtio_devices`: [`dbs_virtio_devices` Document](src/dbs_virtio_devices/README.md)
Currently, documentation is still being actively added.
See the [official documentation](docs/) page for more details.


@@ -5,15 +5,6 @@
use serde_derive::{Deserialize, Serialize};
/// This struct represents the strongly typed equivalent of the json body
/// from confidential container related requests.
#[derive(Copy, Clone, Debug, Deserialize, PartialEq, Serialize)]
#[serde(deny_unknown_fields)]
pub enum ConfidentialVmType {
/// Intel Trusted Domain
TDX = 2,
}
/// The microvm state.
///
/// When Dragonball starts, the instance state is Uninitialized. Once start_microvm method is
@@ -67,12 +58,10 @@ pub struct InstanceInfo {
pub tids: Vec<(u8, u32)>,
/// Last instance downtime
pub last_instance_downtime: u64,
/// confidential vm type
pub confidential_vm_type: Option<ConfidentialVmType>,
}
impl InstanceInfo {
/// create instance info object with given id, version, platform type and confidential vm type.
/// create instance info object with given id, version, and platform type
pub fn new(id: String, vmm_version: String) -> Self {
InstanceInfo {
id,
@@ -83,14 +72,8 @@ impl InstanceInfo {
async_state: AsyncState::Uninitialized,
tids: Vec::new(),
last_instance_downtime: 0,
confidential_vm_type: None,
}
}
/// return true if VM confidential type is TDX
pub fn is_tdx_enabled(&self) -> bool {
matches!(self.confidential_vm_type, Some(ConfidentialVmType::TDX))
}
}
impl Default for InstanceInfo {
@@ -104,7 +87,6 @@ impl Default for InstanceInfo {
async_state: AsyncState::Uninitialized,
tids: Vec::new(),
last_instance_downtime: 0,
confidential_vm_type: None,
}
}
}


@@ -12,7 +12,7 @@ pub use self::boot_source::{BootSourceConfig, BootSourceConfigError, DEFAULT_KER
/// Wrapper over the microVM general information.
mod instance_info;
pub use self::instance_info::{ConfidentialVmType, InstanceInfo, InstanceState};
pub use self::instance_info::{InstanceInfo, InstanceState};
/// Wrapper for configuring the memory and CPU of the microVM.
mod machine_config;


@@ -16,6 +16,8 @@ use crate::event_manager::EventManager;
use crate::vm::{CpuTopology, KernelConfigInfo, VmConfigInfo};
use crate::vmm::Vmm;
use crate::hypervisor_metrics::get_hypervisor_metrics;
use self::VmConfigError::*;
use self::VmmActionError::MachineConfig;
@@ -58,6 +60,11 @@ pub enum VmmActionError {
#[error("Upcall not ready, can't hotplug device.")]
UpcallServerNotReady,
/// Error when get prometheus metrics.
/// Currently does not distinguish between error types for metrics.
#[error("failed to get hypervisor metrics")]
GetHypervisorMetrics,
/// The action `ConfigureBootSource` failed either because of bad user input or an internal
/// error.
#[error("failed to configure boot source for VM: {0}")]
@@ -135,6 +142,9 @@ pub enum VmmAction {
/// Get the configuration of the microVM.
GetVmConfiguration,
/// Get Prometheus Metrics.
GetHypervisorMetrics,
/// Set the microVM configuration (memory & vcpu) using `VmConfig` as input. This
/// action can only be called before the microVM has booted.
SetVmConfiguration(VmConfigInfo),
@@ -208,6 +218,8 @@ pub enum VmmData {
Empty,
/// The microVM configuration represented by `VmConfigInfo`.
MachineConfiguration(Box<VmConfigInfo>),
/// Prometheus Metrics represented by String.
HypervisorMetrics(String),
}
/// Request data type used to communicate between the API and the VMM.
@@ -262,6 +274,7 @@ impl VmmService {
VmmAction::GetVmConfiguration => Ok(VmmData::MachineConfiguration(Box::new(
self.machine_config.clone(),
))),
VmmAction::GetHypervisorMetrics => self.get_hypervisor_metrics(),
VmmAction::SetVmConfiguration(machine_config) => {
self.set_vm_configuration(vmm, machine_config)
}
@@ -381,6 +394,13 @@ impl VmmService {
Ok(VmmData::Empty)
}
/// Get prometheus metrics.
fn get_hypervisor_metrics(&self) -> VmmRequestResult {
get_hypervisor_metrics()
.map_err(|_| VmmActionError::GetHypervisorMetrics)
.map(VmmData::HypervisorMetrics)
}
/// Set virtual machine configuration.
pub fn set_vm_configuration(
&mut self,


@@ -0,0 +1,14 @@
[package]
name = "dbs-acpi"
version = "0.1.0"
authors = ["Alibaba Dragonball Team"]
description = "acpi definitions for virtual machines."
license = "Apache-2.0"
edition = "2018"
homepage = "https://github.com/openanolis/dragonball-sandbox"
repository = "https://github.com/openanolis/dragonball-sandbox"
keywords = ["dragonball", "acpi", "vmm", "secure-sandbox"]
readme = "README.md"
[dependencies]
vm-memory = "0.9.0"


@@ -0,0 +1,11 @@
# dbs-acpi
`dbs-acpi` provides ACPI data structures for VMM to emulate ACPI behavior.
## Acknowledgement
Part of the code is derived from the [Cloud Hypervisor](https://github.com/cloud-hypervisor/cloud-hypervisor) project.
## License
This project is licensed under [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0).


@@ -0,0 +1,29 @@
// Copyright (c) 2019 Intel Corporation
// Copyright (c) 2023 Alibaba Cloud
//
// SPDX-License-Identifier: Apache-2.0
pub mod rsdp;
pub mod sdt;
fn generate_checksum(data: &[u8]) -> u8 {
(255 - data.iter().fold(0u8, |acc, x| acc.wrapping_add(*x))).wrapping_add(1)
}
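`generate_checksum` returns the two's complement of the wrapping byte sum, so storing the result in the table makes the sum of all bytes wrap to zero — the property ACPI table consumers verify. A standalone sketch:

```rust
// ACPI-style checksum: two's complement of the wrapping sum of all bytes.
// Writing it into the table makes the total wrapping sum equal zero.
fn generate_checksum(data: &[u8]) -> u8 {
    (255 - data.iter().fold(0u8, |acc, x| acc.wrapping_add(*x))).wrapping_add(1)
}

fn main() {
    let mut table = [0x12u8, 0x34, 0x56, 0x00];
    let last = table.len() - 1;
    table[last] = generate_checksum(&table); // bytes sum to 0x9c -> checksum 0x64
    let total = table.iter().fold(0u8, |s, v| s.wrapping_add(*v));
    assert_eq!(total, 0);
}
```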
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_generate_checksum() {
let mut buf = [0x00; 8];
let sum = generate_checksum(&buf);
assert_eq!(sum, 0);
buf[0] = 0xff;
let sum = generate_checksum(&buf);
assert_eq!(sum, 1);
buf[0] = 0xaa;
buf[1] = 0xcc;
buf[4] = generate_checksum(&buf);
let sum = buf.iter().fold(0u8, |s, v| s.wrapping_add(*v));
assert_eq!(sum, 0);
}
}


@@ -0,0 +1,60 @@
// Copyright (c) 2019 Intel Corporation
// Copyright (c) 2023 Alibaba Cloud
//
// SPDX-License-Identifier: Apache-2.0
// RSDP (Root System Description Pointer) is a data structure used in the ACPI programming interface.
use vm_memory::ByteValued;
#[repr(packed)]
#[derive(Clone, Copy, Default)]
pub struct Rsdp {
pub signature: [u8; 8],
pub checksum: u8,
pub oem_id: [u8; 6],
pub revision: u8,
_rsdt_addr: u32,
pub length: u32,
pub xsdt_addr: u64,
pub extended_checksum: u8,
_reserved: [u8; 3],
}
// SAFETY: Rsdp only contains a series of integers
unsafe impl ByteValued for Rsdp {}
impl Rsdp {
pub fn new(xsdt_addr: u64) -> Self {
let mut rsdp = Rsdp {
signature: *b"RSD PTR ",
checksum: 0,
oem_id: *b"ALICLD",
revision: 1,
_rsdt_addr: 0,
length: std::mem::size_of::<Rsdp>() as u32,
xsdt_addr,
extended_checksum: 0,
_reserved: [0; 3],
};
rsdp.checksum = super::generate_checksum(&rsdp.as_slice()[0..19]);
rsdp.extended_checksum = super::generate_checksum(rsdp.as_slice());
rsdp
}
pub fn len() -> usize {
std::mem::size_of::<Rsdp>()
}
}
#[cfg(test)]
mod tests {
use super::Rsdp;
use vm_memory::bytes::ByteValued;
#[test]
fn test_rsdp() {
let rsdp = Rsdp::new(0xa0000);
let sum = rsdp
.as_slice()
.iter()
.fold(0u8, |acc, x| acc.wrapping_add(*x));
assert_eq!(sum, 0);
}
}


@@ -0,0 +1,137 @@
// Copyright (c) 2019 Intel Corporation
// Copyright (c) 2023 Alibaba Cloud
//
// SPDX-License-Identifier: Apache-2.0
#[repr(packed)]
pub struct GenericAddress {
pub address_space_id: u8,
pub register_bit_width: u8,
pub register_bit_offset: u8,
pub access_size: u8,
pub address: u64,
}
impl GenericAddress {
pub fn io_port_address<T>(address: u16) -> Self {
GenericAddress {
address_space_id: 1,
register_bit_width: 8 * std::mem::size_of::<T>() as u8,
register_bit_offset: 0,
access_size: std::mem::size_of::<T>() as u8,
address: u64::from(address),
}
}
pub fn mmio_address<T>(address: u64) -> Self {
GenericAddress {
address_space_id: 0,
register_bit_width: 8 * std::mem::size_of::<T>() as u8,
register_bit_offset: 0,
access_size: std::mem::size_of::<T>() as u8,
address,
}
}
}
pub struct Sdt {
data: Vec<u8>,
}
#[allow(clippy::len_without_is_empty)]
impl Sdt {
pub fn new(signature: [u8; 4], length: u32, revision: u8) -> Self {
assert!(length >= 36);
const OEM_ID: [u8; 6] = *b"ALICLD";
const OEM_TABLE: [u8; 8] = *b"RUND ";
const CREATOR_ID: [u8; 4] = *b"ALIC";
let mut data = Vec::with_capacity(length as usize);
data.extend_from_slice(&signature);
data.extend_from_slice(&length.to_le_bytes());
data.push(revision);
data.push(0); // checksum
data.extend_from_slice(&OEM_ID); // oem id u32
data.extend_from_slice(&OEM_TABLE); // oem table
data.extend_from_slice(&1u32.to_le_bytes()); // oem revision u32
data.extend_from_slice(&CREATOR_ID); // creator id u32
data.extend_from_slice(&1u32.to_le_bytes()); // creator revision u32
assert_eq!(data.len(), 36);
data.resize(length as usize, 0);
let mut sdt = Sdt { data };
sdt.update_checksum();
sdt
}
pub fn update_checksum(&mut self) {
self.data[9] = 0;
let checksum = super::generate_checksum(self.data.as_slice());
self.data[9] = checksum
}
pub fn as_slice(&self) -> &[u8] {
self.data.as_slice()
}
pub fn append<T>(&mut self, value: T) {
let orig_length = self.data.len();
let new_length = orig_length + std::mem::size_of::<T>();
self.data.resize(new_length, 0);
self.write_u32(4, new_length as u32);
self.write(orig_length, value);
}
pub fn append_slice(&mut self, data: &[u8]) {
let orig_length = self.data.len();
let new_length = orig_length + data.len();
self.write_u32(4, new_length as u32);
self.data.extend_from_slice(data);
self.update_checksum();
}
/// Write a value at the given offset
pub fn write<T>(&mut self, offset: usize, value: T) {
assert!((offset + (std::mem::size_of::<T>() - 1)) < self.data.len());
unsafe {
*(((self.data.as_mut_ptr() as usize) + offset) as *mut T) = value;
}
self.update_checksum();
}
pub fn write_u8(&mut self, offset: usize, val: u8) {
self.write(offset, val);
}
pub fn write_u16(&mut self, offset: usize, val: u16) {
self.write(offset, val);
}
pub fn write_u32(&mut self, offset: usize, val: u32) {
self.write(offset, val);
}
pub fn write_u64(&mut self, offset: usize, val: u64) {
self.write(offset, val);
}
pub fn len(&self) -> usize {
self.data.len()
}
}
#[cfg(test)]
mod tests {
use super::Sdt;
#[test]
fn test_sdt() {
let mut sdt = Sdt::new(*b"TEST", 40, 1);
let sum: u8 = sdt
.as_slice()
.iter()
.fold(0u8, |acc, x| acc.wrapping_add(*x));
assert_eq!(sum, 0);
sdt.write_u32(36, 0x12345678);
let sum: u8 = sdt
.as_slice()
.iter()
.fold(0u8, |acc, x| acc.wrapping_add(*x));
assert_eq!(sum, 0);
}
}


@@ -0,0 +1,20 @@
[package]
name = "dbs-address-space"
version = "0.3.0"
authors = ["Alibaba Dragonball Team"]
description = "address space manager for virtual machines."
license = "Apache-2.0"
edition = "2018"
homepage = "https://github.com/openanolis/dragonball-sandbox"
repository = "https://github.com/openanolis/dragonball-sandbox"
keywords = ["dragonball", "address", "vmm", "secure-sandbox"]
readme = "README.md"
[dependencies]
arc-swap = ">=0.4.8"
libc = "0.2.39"
nix = "0.23.1"
lazy_static = "1"
thiserror = "1"
vmm-sys-util = "0.11.0"
vm-memory = { version = "0.9", features = ["backend-mmap", "backend-atomic"] }


@@ -0,0 +1 @@
../../LICENSE


@@ -0,0 +1,80 @@
# dbs-address-space
## Design
The `dbs-address-space` crate is an address space manager for virtual machines, which manages memory and MMIO resources resident in the guest physical address space.
Main components are:
- `AddressSpaceRegion`: Struct to maintain configuration information about a guest address region.
```rust
#[derive(Debug, Clone)]
pub struct AddressSpaceRegion {
/// Type of address space regions.
pub ty: AddressSpaceRegionType,
/// Base address of the region in virtual machine's physical address space.
pub base: GuestAddress,
/// Size of the address space region.
pub size: GuestUsize,
/// Host NUMA node ids assigned to this region.
pub host_numa_node_id: Option<u32>,
/// File/offset tuple to back the memory allocation.
file_offset: Option<FileOffset>,
/// Mmap permission flags.
perm_flags: i32,
/// Hugepage madvise hint.
///
/// It needs 'advise' or 'always' policy in host shmem config.
is_hugepage: bool,
/// Hotplug hint.
is_hotplug: bool,
/// Anonymous memory hint.
///
/// It should be true for regions with the MADV_DONTFORK flag enabled.
is_anon: bool,
}
```
- `AddressSpaceBase`: Base implementation to manage guest physical address space, without support of region hotplug.
```rust
#[derive(Clone)]
pub struct AddressSpaceBase {
regions: Vec<Arc<AddressSpaceRegion>>,
layout: AddressSpaceLayout,
}
```
- `AddressSpace`: An address space implementation with region hotplug capability.
```rust
/// The `AddressSpace` is a wrapper over [AddressSpaceBase] to support hotplug of
/// address space regions.
#[derive(Clone)]
pub struct AddressSpace {
state: Arc<ArcSwap<AddressSpaceBase>>,
}
```
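
The hotplug wrapper follows a copy-on-write pattern: readers hold an immutable snapshot of the current `AddressSpaceBase`, while `insert_region` rebuilds the base and atomically swaps it in through `ArcSwap`. Below is a minimal `std`-only sketch of the same pattern, using `RwLock<Arc<_>>` in place of `ArcSwap` and hypothetical stand-in types (`Base`, `Space`):

```rust
use std::sync::{Arc, RwLock};

// Hypothetical stand-in for `AddressSpaceBase`: just a list of region bases.
#[derive(Clone)]
struct Base {
    regions: Vec<u64>,
}

// Hypothetical stand-in for `AddressSpace`; `RwLock<Arc<Base>>` plays the
// role that `ArcSwap<AddressSpaceBase>` plays in the real crate.
struct Space {
    state: RwLock<Arc<Base>>,
}

impl Space {
    // Readers take a cheap snapshot; it stays valid even across a swap.
    fn load(&self) -> Arc<Base> {
        self.state.read().unwrap().clone()
    }

    // Copy-on-write hotplug: clone the base, mutate the copy, swap pointers.
    fn insert(&self, region: u64) {
        let mut base = (*self.load()).clone();
        base.regions.push(region);
        *self.state.write().unwrap() = Arc::new(base);
    }
}

fn main() {
    let space = Space {
        state: RwLock::new(Arc::new(Base { regions: vec![0x100] })),
    };
    let snapshot = space.load();
    space.insert(0x300);
    // The old snapshot is unchanged; new loads see the inserted region.
    assert_eq!(snapshot.regions, vec![0x100]);
    assert_eq!(space.load().regions, vec![0x100, 0x300]);
}
```

The real `ArcSwap` avoids taking a writer lock on the read path, but the snapshot semantics sketched here are the same.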
## Usage
```rust
// 1. create several memory regions
let reg = Arc::new(
AddressSpaceRegion::create_default_memory_region(
GuestAddress(0x100000),
0x100000,
None,
"shmem",
"",
false,
false,
false,
)
.unwrap()
);
let regions = vec![reg];
// 2. create layout (depending on archs)
let layout = AddressSpaceLayout::new(GUEST_PHYS_END, GUEST_MEM_START, GUEST_MEM_END);
// 3. create address space from regions and layout
let address_space = AddressSpace::from_regions(regions, layout.clone());
```
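
`from_regions` validates its input before building the address space: it sorts the regions by guest physical base address, then checks each adjacent pair for overlap. A `std`-only sketch of that check, with a hypothetical `Region` standing in for `AddressSpaceRegion`:

```rust
// Hypothetical stand-in for `AddressSpaceRegion`.
#[derive(Debug, Clone, Copy)]
struct Region {
    base: u64,
    size: u64,
}

// Mirrors the validation in `AddressSpaceBase::from_regions`: after sorting
// by base address, a region can only overlap with its direct neighbor.
fn regions_disjoint(mut regions: Vec<Region>) -> bool {
    regions.sort_unstable_by_key(|r| r.base);
    regions
        .windows(2)
        .all(|pair| pair[0].base + pair[0].size <= pair[1].base)
}

fn main() {
    // [0x100, 0x300) and [0x300, 0x500) touch but do not overlap.
    assert!(regions_disjoint(vec![
        Region { base: 0x300, size: 0x200 },
        Region { base: 0x100, size: 0x200 },
    ]));
    // [0x100, 0x300) overlaps [0x200, 0x400); the real crate panics here.
    assert!(!regions_disjoint(vec![
        Region { base: 0x100, size: 0x200 },
        Region { base: 0x200, size: 0x200 },
    ]));
}
```

Note the plain `+` assumes no address overflow; the crate itself also validates each region against the layout with checked arithmetic.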
## License
This project is licensed under [Apache License](http://www.apache.org/licenses/LICENSE-2.0), Version 2.0.

View File

@@ -0,0 +1,830 @@
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
//! Physical address space manager for virtual machines.
use std::sync::Arc;
use arc_swap::ArcSwap;
use vm_memory::{GuestAddress, GuestMemoryMmap};
use crate::{AddressSpaceError, AddressSpaceLayout, AddressSpaceRegion, AddressSpaceRegionType};
/// Base implementation to manage guest physical address space, without support of region hotplug.
#[derive(Clone)]
pub struct AddressSpaceBase {
regions: Vec<Arc<AddressSpaceRegion>>,
layout: AddressSpaceLayout,
}
impl AddressSpaceBase {
/// Create an instance of `AddressSpaceBase` from an `AddressSpaceRegion` array.
///
/// To enable binary search lookups, the `regions` vector will be sorted by guest
/// physical address.
///
/// Note: panics if any regions intersect with each other.
///
/// # Arguments
/// * `regions` - prepared regions to be managed by the address space instance.
/// * `layout` - prepared address space layout configuration.
pub fn from_regions(
mut regions: Vec<Arc<AddressSpaceRegion>>,
layout: AddressSpaceLayout,
) -> Self {
regions.sort_unstable_by_key(|v| v.base);
for region in regions.iter() {
if !layout.is_region_valid(region) {
panic!(
"Invalid region {:?} for address space layout {:?}",
region, layout
);
}
}
for idx in 1..regions.len() {
if regions[idx].intersect_with(&regions[idx - 1]) {
panic!("address space regions intersect with each other");
}
}
AddressSpaceBase { regions, layout }
}
/// Insert a new address space region into the address space.
///
/// # Arguments
/// * `region` - the new region to be inserted.
pub fn insert_region(
&mut self,
region: Arc<AddressSpaceRegion>,
) -> Result<(), AddressSpaceError> {
if !self.layout.is_region_valid(&region) {
return Err(AddressSpaceError::InvalidAddressRange(
region.start_addr().0,
region.len(),
));
}
for idx in 0..self.regions.len() {
if self.regions[idx].intersect_with(&region) {
return Err(AddressSpaceError::InvalidAddressRange(
region.start_addr().0,
region.len(),
));
}
}
self.regions.push(region);
Ok(())
}
/// Enumerate all regions in the address space.
///
/// # Arguments
/// * `cb` - the callback function to apply to each region.
pub fn walk_regions<F>(&self, mut cb: F) -> Result<(), AddressSpaceError>
where
F: FnMut(&Arc<AddressSpaceRegion>) -> Result<(), AddressSpaceError>,
{
for reg in self.regions.iter() {
cb(reg)?;
}
Ok(())
}
/// Get address space layout associated with the address space.
pub fn layout(&self) -> AddressSpaceLayout {
self.layout.clone()
}
/// Get the maximum guest physical address in the address space.
pub fn last_addr(&self) -> GuestAddress {
let mut last_addr = GuestAddress(self.layout.mem_start);
for reg in self.regions.iter() {
if reg.ty != AddressSpaceRegionType::DAXMemory && reg.last_addr() > last_addr {
last_addr = reg.last_addr();
}
}
last_addr
}
/// Check whether the guest physical address `guest_addr` belongs to a DAX memory region.
///
/// # Arguments
/// * `guest_addr` - the guest physical address to inquire
pub fn is_dax_region(&self, guest_addr: GuestAddress) -> bool {
for reg in self.regions.iter() {
// Safe because we have validated the region when creating the address space object.
if reg.region_type() == AddressSpaceRegionType::DAXMemory
&& reg.start_addr() <= guest_addr
&& reg.start_addr().0 + reg.len() > guest_addr.0
{
return true;
}
}
false
}
/// Get protection flags of memory region that guest physical address `guest_addr` belongs to.
///
/// # Arguments
/// * `guest_addr` - the guest physical address to inquire
pub fn prot_flags(&self, guest_addr: GuestAddress) -> Result<i32, AddressSpaceError> {
for reg in self.regions.iter() {
if reg.start_addr() <= guest_addr && reg.start_addr().0 + reg.len() > guest_addr.0 {
return Ok(reg.prot_flags());
}
}
Err(AddressSpaceError::InvalidRegionType)
}
/// Get optional NUMA node id associated with guest physical address `gpa`.
///
/// # Arguments
/// * `gpa` - guest physical address to query.
pub fn numa_node_id(&self, gpa: u64) -> Option<u32> {
for reg in self.regions.iter() {
if gpa >= reg.base.0 && gpa < (reg.base.0 + reg.size) {
return reg.host_numa_node_id;
}
}
None
}
}
/// An address space implementation with region hotplug capability.
///
/// The `AddressSpace` is a wrapper over [AddressSpaceBase] to support hotplug of
/// address space regions.
#[derive(Clone)]
pub struct AddressSpace {
state: Arc<ArcSwap<AddressSpaceBase>>,
}
impl AddressSpace {
/// Convert a [GuestMemoryMmap] object into `GuestMemoryAtomic<GuestMemoryMmap>`.
pub fn convert_into_vm_as(
gm: GuestMemoryMmap,
) -> vm_memory::atomic::GuestMemoryAtomic<GuestMemoryMmap> {
vm_memory::atomic::GuestMemoryAtomic::from(Arc::new(gm))
}
/// Create an instance of `AddressSpace` from an `AddressSpaceRegion` array.
///
/// To enable binary search lookups, the `regions` vector will be sorted by guest
/// physical address.
///
/// Note: panics if any regions intersect with each other.
///
/// # Arguments
/// * `regions` - prepared regions to be managed by the address space instance.
/// * `layout` - prepared address space layout configuration.
pub fn from_regions(regions: Vec<Arc<AddressSpaceRegion>>, layout: AddressSpaceLayout) -> Self {
let base = AddressSpaceBase::from_regions(regions, layout);
AddressSpace {
state: Arc::new(ArcSwap::new(Arc::new(base))),
}
}
/// Insert a new address space region into the address space.
///
/// # Arguments
/// * `region` - the new region to be inserted.
pub fn insert_region(
&mut self,
region: Arc<AddressSpaceRegion>,
) -> Result<(), AddressSpaceError> {
let curr = self.state.load().regions.clone();
let layout = self.state.load().layout.clone();
let mut base = AddressSpaceBase::from_regions(curr, layout);
base.insert_region(region)?;
let _old = self.state.swap(Arc::new(base));
Ok(())
}
/// Enumerate all regions in the address space.
///
/// # Arguments
/// * `cb` - the callback function to apply to each region.
pub fn walk_regions<F>(&self, cb: F) -> Result<(), AddressSpaceError>
where
F: FnMut(&Arc<AddressSpaceRegion>) -> Result<(), AddressSpaceError>,
{
self.state.load().walk_regions(cb)
}
/// Get address space layout associated with the address space.
pub fn layout(&self) -> AddressSpaceLayout {
self.state.load().layout()
}
/// Get the maximum guest physical address in the address space.
pub fn last_addr(&self) -> GuestAddress {
self.state.load().last_addr()
}
/// Check whether the guest physical address `guest_addr` belongs to a DAX memory region.
///
/// # Arguments
/// * `guest_addr` - the guest physical address to inquire
pub fn is_dax_region(&self, guest_addr: GuestAddress) -> bool {
self.state.load().is_dax_region(guest_addr)
}
/// Get protection flags of memory region that guest physical address `guest_addr` belongs to.
///
/// # Arguments
/// * `guest_addr` - the guest physical address to inquire
pub fn prot_flags(&self, guest_addr: GuestAddress) -> Result<i32, AddressSpaceError> {
self.state.load().prot_flags(guest_addr)
}
/// Get optional NUMA node id associated with guest physical address `gpa`.
///
/// # Arguments
/// * `gpa` - guest physical address to query.
pub fn numa_node_id(&self, gpa: u64) -> Option<u32> {
self.state.load().numa_node_id(gpa)
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::io::Write;
use vm_memory::GuestUsize;
use vmm_sys_util::tempfile::TempFile;
// Constants shared by the unit tests.
const GUEST_PHYS_END: u64 = (1 << 46) - 1;
const GUEST_MEM_START: u64 = 0;
const GUEST_MEM_END: u64 = GUEST_PHYS_END >> 1;
const GUEST_DEVICE_START: u64 = GUEST_MEM_END + 1;
#[test]
fn test_address_space_base_from_regions() {
let mut file = TempFile::new().unwrap().into_file();
let sample_buf = &[1, 2, 3, 4, 5];
assert!(file.write_all(sample_buf).is_ok());
file.set_len(0x10000).unwrap();
let reg = Arc::new(
AddressSpaceRegion::create_device_region(GuestAddress(GUEST_DEVICE_START), 0x1000)
.unwrap(),
);
let regions = vec![reg];
let layout = AddressSpaceLayout::new(GUEST_PHYS_END, GUEST_MEM_START, GUEST_MEM_END);
let address_space = AddressSpaceBase::from_regions(regions, layout.clone());
assert_eq!(address_space.layout(), layout);
}
#[test]
#[should_panic(expected = "Invalid region")]
fn test_address_space_base_from_regions_when_region_invalid() {
let reg = Arc::new(AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x100),
0x1000,
None,
None,
0,
0,
false,
));
let regions = vec![reg];
let layout = AddressSpaceLayout::new(0x2000, 0x200, 0x1800);
let _address_space = AddressSpaceBase::from_regions(regions, layout);
}
#[test]
#[should_panic(expected = "address space regions intersect with each other")]
fn test_address_space_base_from_regions_when_region_intersected() {
let reg1 = Arc::new(AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x100),
0x200,
None,
None,
0,
0,
false,
));
let reg2 = Arc::new(AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x200),
0x200,
None,
None,
0,
0,
false,
));
let regions = vec![reg1, reg2];
let layout = AddressSpaceLayout::new(0x2000, 0x0, 0x1800);
let _address_space = AddressSpaceBase::from_regions(regions, layout);
}
#[test]
fn test_address_space_base_insert_region() {
let reg1 = Arc::new(AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x100),
0x200,
None,
None,
0,
0,
false,
));
let reg2 = Arc::new(AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x300),
0x200,
None,
None,
0,
0,
false,
));
let regions = vec![reg1];
let layout = AddressSpaceLayout::new(0x2000, 0x100, 0x1800);
let mut address_space = AddressSpaceBase::from_regions(regions, layout);
// Normal case.
address_space.insert_region(reg2).unwrap();
assert!(!address_space.regions[1].intersect_with(&address_space.regions[0]));
// Error case: the region is invalid for the layout.
let invalid_reg = Arc::new(AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x0),
0x100,
None,
None,
0,
0,
false,
));
assert_eq!(
format!(
"{:?}",
address_space.insert_region(invalid_reg).err().unwrap()
),
format!("InvalidAddressRange({:?}, {:?})", 0x0, 0x100)
);
// Error case: the region to be inserted intersects existing regions.
let intersected_reg = Arc::new(AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x400),
0x200,
None,
None,
0,
0,
false,
));
assert_eq!(
format!(
"{:?}",
address_space.insert_region(intersected_reg).err().unwrap()
),
format!("InvalidAddressRange({:?}, {:?})", 0x400, 0x200)
);
}
#[test]
fn test_address_space_base_walk_regions() {
let reg1 = Arc::new(AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x100),
0x200,
None,
None,
0,
0,
false,
));
let reg2 = Arc::new(AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x300),
0x200,
None,
None,
0,
0,
false,
));
let regions = vec![reg1, reg2];
let layout = AddressSpaceLayout::new(0x2000, 0x0, 0x1800);
let address_space = AddressSpaceBase::from_regions(regions, layout);
// The argument of `walk_regions` is a callback that takes a `&Arc<AddressSpaceRegion>`
// and returns a `Result`; it is applied to every region.
fn do_not_have_hotplug(region: &Arc<AddressSpaceRegion>) -> Result<(), AddressSpaceError> {
if region.is_hotplug() {
Err(AddressSpaceError::InvalidRegionType) // The error type must be `AddressSpaceError`.
} else {
Ok(())
}
}
address_space.walk_regions(do_not_have_hotplug).unwrap();
}
#[test]
fn test_address_space_base_last_addr() {
let reg1 = Arc::new(AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x100),
0x200,
None,
None,
0,
0,
false,
));
let reg2 = Arc::new(AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x300),
0x200,
None,
None,
0,
0,
false,
));
let regions = vec![reg1, reg2];
let layout = AddressSpaceLayout::new(0x2000, 0x0, 0x1800);
let address_space = AddressSpaceBase::from_regions(regions, layout);
assert_eq!(address_space.last_addr(), GuestAddress(0x500 - 1));
}
#[test]
fn test_address_space_base_is_dax_region() {
let page_size = 4096;
let address_space_region = vec![
Arc::new(AddressSpaceRegion::new(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(page_size),
page_size as GuestUsize,
)),
Arc::new(AddressSpaceRegion::new(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(page_size * 2),
page_size as GuestUsize,
)),
Arc::new(AddressSpaceRegion::new(
AddressSpaceRegionType::DAXMemory,
GuestAddress(GUEST_DEVICE_START),
page_size as GuestUsize,
)),
];
let layout = AddressSpaceLayout::new(GUEST_PHYS_END, GUEST_MEM_START, GUEST_MEM_END);
let address_space = AddressSpaceBase::from_regions(address_space_region, layout);
assert!(!address_space.is_dax_region(GuestAddress(page_size)));
assert!(!address_space.is_dax_region(GuestAddress(page_size * 2)));
assert!(address_space.is_dax_region(GuestAddress(GUEST_DEVICE_START)));
assert!(address_space.is_dax_region(GuestAddress(GUEST_DEVICE_START + 1)));
assert!(!address_space.is_dax_region(GuestAddress(GUEST_DEVICE_START + page_size)));
assert!(address_space.is_dax_region(GuestAddress(GUEST_DEVICE_START + page_size - 1)));
}
#[test]
fn test_address_space_base_prot_flags() {
let reg1 = Arc::new(AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x100),
0x200,
Some(0),
None,
0,
0,
false,
));
let reg2 = Arc::new(AddressSpaceRegion::new(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x300),
0x300,
));
let regions = vec![reg1, reg2];
let layout = AddressSpaceLayout::new(0x2000, 0x0, 0x1800);
let address_space = AddressSpaceBase::from_regions(regions, layout);
// Normal case, reg1.
assert_eq!(address_space.prot_flags(GuestAddress(0x200)).unwrap(), 0);
// Normal case, reg2.
assert_eq!(
address_space.prot_flags(GuestAddress(0x500)).unwrap(),
libc::PROT_READ | libc::PROT_WRITE
);
// Inquire gpa where no region is set.
assert!(matches!(
address_space.prot_flags(GuestAddress(0x600)),
Err(AddressSpaceError::InvalidRegionType)
));
}
#[test]
fn test_address_space_base_numa_node_id() {
let reg1 = Arc::new(AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x100),
0x200,
Some(0),
None,
0,
0,
false,
));
let reg2 = Arc::new(AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x300),
0x300,
None,
None,
0,
0,
false,
));
let regions = vec![reg1, reg2];
let layout = AddressSpaceLayout::new(0x2000, 0x0, 0x1800);
let address_space = AddressSpaceBase::from_regions(regions, layout);
// Normal case.
assert_eq!(address_space.numa_node_id(0x200).unwrap(), 0);
// Inquire region with None as its numa node id.
assert_eq!(address_space.numa_node_id(0x400), None);
// Inquire gpa where no region is set.
assert_eq!(address_space.numa_node_id(0x600), None);
}
#[test]
fn test_address_space_convert_into_vm_as() {
// TODO: more detailed tests are needed here.
let gmm = GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0x0), 0x400)]).unwrap();
let _vm = AddressSpace::convert_into_vm_as(gmm);
}
#[test]
fn test_address_space_insert_region() {
let reg1 = Arc::new(AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x100),
0x200,
None,
None,
0,
0,
false,
));
let reg2 = Arc::new(AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x300),
0x200,
None,
None,
0,
0,
false,
));
let regions = vec![reg1];
let layout = AddressSpaceLayout::new(0x2000, 0x100, 0x1800);
let mut address_space = AddressSpace::from_regions(regions, layout);
// Normal case.
address_space.insert_region(reg2).unwrap();
// Error case: the region is invalid for the layout.
let invalid_reg = Arc::new(AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x0),
0x100,
None,
None,
0,
0,
false,
));
assert_eq!(
format!(
"{:?}",
address_space.insert_region(invalid_reg).err().unwrap()
),
format!("InvalidAddressRange({:?}, {:?})", 0x0, 0x100)
);
// Error case: the region to be inserted intersects existing regions.
let intersected_reg = Arc::new(AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x400),
0x200,
None,
None,
0,
0,
false,
));
assert_eq!(
format!(
"{:?}",
address_space.insert_region(intersected_reg).err().unwrap()
),
format!("InvalidAddressRange({:?}, {:?})", 0x400, 0x200)
);
}
#[test]
fn test_address_space_walk_regions() {
let reg1 = Arc::new(AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x100),
0x200,
None,
None,
0,
0,
false,
));
let reg2 = Arc::new(AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x300),
0x200,
None,
None,
0,
0,
false,
));
let regions = vec![reg1, reg2];
let layout = AddressSpaceLayout::new(0x2000, 0x0, 0x1800);
let address_space = AddressSpace::from_regions(regions, layout);
fn access_all_hotplug_flag(
region: &Arc<AddressSpaceRegion>,
) -> Result<(), AddressSpaceError> {
region.is_hotplug();
Ok(())
}
address_space.walk_regions(access_all_hotplug_flag).unwrap();
}
#[test]
fn test_address_space_layout() {
let reg = Arc::new(AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x100),
0x1000,
None,
None,
0,
0,
false,
));
let regions = vec![reg];
let layout = AddressSpaceLayout::new(0x2000, 0x0, 0x1800);
let address_space = AddressSpace::from_regions(regions, layout.clone());
assert_eq!(layout, address_space.layout());
}
#[test]
fn test_address_space_last_addr() {
let reg1 = Arc::new(AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x100),
0x200,
None,
None,
0,
0,
false,
));
let reg2 = Arc::new(AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x300),
0x200,
None,
None,
0,
0,
false,
));
let regions = vec![reg1, reg2];
let layout = AddressSpaceLayout::new(0x2000, 0x0, 0x1800);
let address_space = AddressSpace::from_regions(regions, layout);
assert_eq!(address_space.last_addr(), GuestAddress(0x500 - 1));
}
#[test]
fn test_address_space_is_dax_region() {
let page_size = 4096;
let address_space_region = vec![
Arc::new(AddressSpaceRegion::new(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(page_size),
page_size as GuestUsize,
)),
Arc::new(AddressSpaceRegion::new(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(page_size * 2),
page_size as GuestUsize,
)),
Arc::new(AddressSpaceRegion::new(
AddressSpaceRegionType::DAXMemory,
GuestAddress(GUEST_DEVICE_START),
page_size as GuestUsize,
)),
];
let layout = AddressSpaceLayout::new(GUEST_PHYS_END, GUEST_MEM_START, GUEST_MEM_END);
let address_space = AddressSpace::from_regions(address_space_region, layout);
assert!(!address_space.is_dax_region(GuestAddress(page_size)));
assert!(!address_space.is_dax_region(GuestAddress(page_size * 2)));
assert!(address_space.is_dax_region(GuestAddress(GUEST_DEVICE_START)));
assert!(address_space.is_dax_region(GuestAddress(GUEST_DEVICE_START + 1)));
assert!(!address_space.is_dax_region(GuestAddress(GUEST_DEVICE_START + page_size)));
assert!(address_space.is_dax_region(GuestAddress(GUEST_DEVICE_START + page_size - 1)));
}
#[test]
fn test_address_space_prot_flags() {
let reg1 = Arc::new(AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x100),
0x200,
Some(0),
None,
0,
0,
false,
));
let reg2 = Arc::new(AddressSpaceRegion::new(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x300),
0x300,
));
let regions = vec![reg1, reg2];
let layout = AddressSpaceLayout::new(0x2000, 0x0, 0x1800);
let address_space = AddressSpace::from_regions(regions, layout);
// Normal case, reg1.
assert_eq!(address_space.prot_flags(GuestAddress(0x200)).unwrap(), 0);
// Normal case, reg2.
assert_eq!(
address_space.prot_flags(GuestAddress(0x500)).unwrap(),
libc::PROT_READ | libc::PROT_WRITE
);
// Inquire gpa where no region is set.
assert!(matches!(
address_space.prot_flags(GuestAddress(0x600)),
Err(AddressSpaceError::InvalidRegionType)
));
}
#[test]
fn test_address_space_numa_node_id() {
let reg1 = Arc::new(AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x100),
0x200,
Some(0),
None,
0,
0,
false,
));
let reg2 = Arc::new(AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x300),
0x300,
None,
None,
0,
0,
false,
));
let regions = vec![reg1, reg2];
let layout = AddressSpaceLayout::new(0x2000, 0x0, 0x1800);
let address_space = AddressSpace::from_regions(regions, layout);
// Normal case.
assert_eq!(address_space.numa_node_id(0x200).unwrap(), 0);
// Inquire region with None as its numa node id.
assert_eq!(address_space.numa_node_id(0x400), None);
// Inquire gpa where no region is set.
assert_eq!(address_space.numa_node_id(0x600), None);
}
}

View File

@@ -0,0 +1,154 @@
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
use lazy_static::lazy_static;
use crate::{AddressSpaceRegion, AddressSpaceRegionType};
// Max retry times for reading /proc
const PROC_READ_RETRY: u64 = 5;
lazy_static! {
/// Upper bound of host memory.
pub static ref USABLE_END: u64 = {
for _ in 0..PROC_READ_RETRY {
if let Ok(buf) = std::fs::read("/proc/meminfo") {
let content = String::from_utf8_lossy(&buf);
for line in content.lines() {
if line.starts_with("MemTotal:") {
if let Some(end) = line.find(" kB") {
if let Ok(size) = line[9..end].trim().parse::<u64>() {
return (size << 10) - 1;
}
}
}
}
}
}
panic!("Exceeded max retries: cannot get total memory size from /proc/meminfo");
};
}
/// Address space layout configuration.
///
/// The layout configuration must guarantee that `mem_start` <= `mem_end` <= `phys_end`.
/// Non-memory regions should be arranged in the range `[mem_end, phys_end)`.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct AddressSpaceLayout {
/// end of guest physical address
pub phys_end: u64,
/// start of guest memory address
pub mem_start: u64,
/// end of guest memory address
pub mem_end: u64,
/// end of usable memory address
pub usable_end: u64,
}
impl AddressSpaceLayout {
/// Create a new instance of `AddressSpaceLayout`.
pub fn new(phys_end: u64, mem_start: u64, mem_end: u64) -> Self {
AddressSpaceLayout {
phys_end,
mem_start,
mem_end,
usable_end: *USABLE_END,
}
}
/// Check whether a region is valid under the constraints of the layout.
pub fn is_region_valid(&self, region: &AddressSpaceRegion) -> bool {
let region_end = match region.base.0.checked_add(region.size) {
None => return false,
Some(v) => v,
};
match region.ty {
AddressSpaceRegionType::DefaultMemory => {
if region.base.0 < self.mem_start || region_end > self.mem_end {
return false;
}
}
AddressSpaceRegionType::DeviceMemory | AddressSpaceRegionType::DAXMemory => {
if region.base.0 < self.mem_end || region_end > self.phys_end {
return false;
}
}
}
true
}
}
#[cfg(test)]
mod tests {
use super::*;
use vm_memory::GuestAddress;
#[test]
fn test_is_region_valid() {
let layout = AddressSpaceLayout::new(0x1_0000_0000, 0x1000_0000, 0x2000_0000);
let region = AddressSpaceRegion::new(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x0),
0x1_0000,
);
assert!(!layout.is_region_valid(&region));
let region = AddressSpaceRegion::new(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x2000_0000),
0x1_0000,
);
assert!(!layout.is_region_valid(&region));
let region = AddressSpaceRegion::new(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x1_0000),
0x2000_0000,
);
assert!(!layout.is_region_valid(&region));
let region = AddressSpaceRegion::new(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(u64::MAX),
0x1_0000_0000,
);
assert!(!layout.is_region_valid(&region));
let region = AddressSpaceRegion::new(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x1000_0000),
0x1_0000,
);
assert!(layout.is_region_valid(&region));
let region = AddressSpaceRegion::new(
AddressSpaceRegionType::DeviceMemory,
GuestAddress(0x1000_0000),
0x1_0000,
);
assert!(!layout.is_region_valid(&region));
let region = AddressSpaceRegion::new(
AddressSpaceRegionType::DeviceMemory,
GuestAddress(0x1_0000_0000),
0x1_0000,
);
assert!(!layout.is_region_valid(&region));
let region = AddressSpaceRegion::new(
AddressSpaceRegionType::DeviceMemory,
GuestAddress(0x1_0000),
0x1_0000_0000,
);
assert!(!layout.is_region_valid(&region));
let region = AddressSpaceRegion::new(
AddressSpaceRegionType::DeviceMemory,
GuestAddress(u64::MAX),
0x1_0000_0000,
);
assert!(!layout.is_region_valid(&region));
let region = AddressSpaceRegion::new(
AddressSpaceRegionType::DeviceMemory,
GuestAddress(0x8000_0000),
0x1_0000,
);
assert!(layout.is_region_valid(&region));
}
}

View File

@@ -0,0 +1,87 @@
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
#![deny(missing_docs)]
//! Traits and Structs to manage guest physical address space for virtual machines.
//!
//! The [vm-memory](https://crates.io/crates/vm-memory) crate implements mechanisms to manage
//! and access guest memory resident in guest physical address space. In addition to guest
//! memory, there may be other types of devices resident in the same guest physical address space.
//!
//! The `dbs-address-space` crate provides traits and structs to manage the guest physical address
//! space for virtual machines, and mechanisms to coordinate all the devices resident in the
//! guest physical address space.
use vm_memory::GuestUsize;
mod address_space;
pub use self::address_space::{AddressSpace, AddressSpaceBase};
mod layout;
pub use layout::{AddressSpaceLayout, USABLE_END};
mod memory;
pub use memory::{GuestMemoryHybrid, GuestMemoryManager, GuestRegionHybrid, GuestRegionRaw};
mod numa;
pub use self::numa::{NumaIdTable, NumaNode, NumaNodeInfo, MPOL_MF_MOVE, MPOL_PREFERRED};
mod region;
pub use region::{AddressSpaceRegion, AddressSpaceRegionType};
/// Errors associated with virtual machine address space management.
#[derive(Debug, thiserror::Error)]
pub enum AddressSpaceError {
/// Invalid address space region type.
#[error("invalid address space region type")]
InvalidRegionType,
/// Invalid address range.
#[error("invalid address space region (0x{0:x}, 0x{1:x})")]
InvalidAddressRange(u64, GuestUsize),
/// Invalid guest memory source type.
#[error("invalid memory source type {0}")]
InvalidMemorySourceType(String),
/// Failed to create memfd to map anonymous memory.
#[error("cannot create memfd to map anonymous memory")]
CreateMemFd(#[source] nix::Error),
/// Failed to open memory file.
#[error("cannot open memory file")]
OpenFile(#[source] std::io::Error),
/// Failed to create directory.
#[error("cannot create directory")]
CreateDir(#[source] std::io::Error),
/// Failed to set size for memory file.
#[error("cannot set size for memory file")]
SetFileSize(#[source] std::io::Error),
/// Failed to unlink memory file.
#[error("cannot unlink memory file")]
UnlinkFile(#[source] nix::Error),
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_error_code() {
let e = AddressSpaceError::InvalidRegionType;
assert_eq!(format!("{e}"), "invalid address space region type");
assert_eq!(format!("{e:?}"), "InvalidRegionType");
assert_eq!(
format!(
"{}",
AddressSpaceError::InvalidMemorySourceType("test".to_string())
),
"invalid memory source type test"
);
}
}

File diff suppressed because it is too large

View File

@@ -0,0 +1,193 @@
// Copyright (C) 2022 Alibaba Cloud. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
//! Structs to manage guest memory for virtual machines.
//!
//! The `vm-memory` crate only provides traits and structs to access normal guest memory;
//! it doesn't support special guest memory such as the virtio-fs/virtio-pmem DAX windows.
//! So this crate builds `GuestMemoryManager` on top of `vm-memory` to provide a uniform
//! abstraction for all guest memory.
//!
//! It also provides interfaces to coordinate guest memory hotplug events.
use std::str::FromStr;
use std::sync::Arc;
use vm_memory::{GuestAddressSpace, GuestMemoryAtomic, GuestMemoryLoadGuard, GuestMemoryMmap};
mod raw_region;
pub use raw_region::GuestRegionRaw;
mod hybrid;
pub use hybrid::{GuestMemoryHybrid, GuestRegionHybrid};
/// Type of source to allocate memory for virtual machines.
#[derive(Debug, Eq, PartialEq)]
pub enum MemorySourceType {
/// File on HugeTlbFs.
FileOnHugeTlbFs,
/// mmap() without flag `MAP_HUGETLB`.
MmapAnonymous,
/// mmap() with flag `MAP_HUGETLB`.
MmapAnonymousHugeTlbFs,
/// memfd() without flag `MFD_HUGETLB`.
MemFdShared,
/// memfd() with flag `MFD_HUGETLB`.
MemFdOnHugeTlbFs,
}
impl MemorySourceType {
/// Check whether the memory source is backed by huge pages.
pub fn is_hugepage(&self) -> bool {
*self == Self::FileOnHugeTlbFs
|| *self == Self::MmapAnonymousHugeTlbFs
|| *self == Self::MemFdOnHugeTlbFs
}
/// Check whether the memory source is anonymous memory.
pub fn is_mmap_anonymous(&self) -> bool {
*self == Self::MmapAnonymous || *self == Self::MmapAnonymousHugeTlbFs
}
}
impl FromStr for MemorySourceType {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"hugetlbfs" => Ok(MemorySourceType::FileOnHugeTlbFs),
"memfd" | "shmem" => Ok(MemorySourceType::MemFdShared),
"hugememfd" | "hugeshmem" => Ok(MemorySourceType::MemFdOnHugeTlbFs),
"anon" | "mmap" => Ok(MemorySourceType::MmapAnonymous),
"hugeanon" | "hugemmap" => Ok(MemorySourceType::MmapAnonymousHugeTlbFs),
_ => Err(format!("unknown memory source type {s}")),
}
}
}
#[derive(Debug, Default)]
struct GuestMemoryHotplugManager {}
/// The `GuestMemoryManager` manages all guest memory for virtual machines.
///
/// The `GuestMemoryManager` fulfills several different responsibilities.
/// - First, it manages different types of guest memory, such as normal guest memory, virtio-fs
/// DAX window and virtio-pmem DAX window etc. Different clients may want to access different
/// types of memory. So the manager maintains two GuestMemory objects, one contains all guest
/// memory, the other contains only normal guest memory.
/// - Second, it coordinates memory/DAX window hotplug events, so clients may register hooks
/// to receive hotplug notifications.
#[allow(unused)]
#[derive(Debug, Clone)]
pub struct GuestMemoryManager {
default: GuestMemoryAtomic<GuestMemoryHybrid>,
/// GuestMemory object hosts all guest memory.
hybrid: GuestMemoryAtomic<GuestMemoryHybrid>,
/// GuestMemory object for vIOMMU.
iommu: GuestMemoryAtomic<GuestMemoryHybrid>,
/// GuestMemory object hosts normal guest memory.
normal: GuestMemoryAtomic<GuestMemoryMmap>,
hotplug: Arc<GuestMemoryHotplugManager>,
}
impl GuestMemoryManager {
/// Create a new instance of `GuestMemoryManager`.
pub fn new() -> Self {
Self::default()
}
/// Get a reference to the normal `GuestMemory` object.
pub fn get_normal_guest_memory(&self) -> &GuestMemoryAtomic<GuestMemoryMmap> {
&self.normal
}
/// Try to downcast the `GuestAddressSpace` object to a `GuestMemoryManager` object.
pub fn to_manager<AS: GuestAddressSpace>(_m: &AS) -> Option<&Self> {
// Downcasting is not implemented yet, so this always returns `None`
// (the corresponding unit test is marked `#[ignore]`).
None
}
}
impl Default for GuestMemoryManager {
fn default() -> Self {
let hybrid = GuestMemoryAtomic::new(GuestMemoryHybrid::new());
let iommu = GuestMemoryAtomic::new(GuestMemoryHybrid::new());
let normal = GuestMemoryAtomic::new(GuestMemoryMmap::new());
// By default, it provides the `GuestMemoryHybrid` object containing all guest memory.
let default = hybrid.clone();
GuestMemoryManager {
default,
hybrid,
iommu,
normal,
hotplug: Arc::new(GuestMemoryHotplugManager::default()),
}
}
}
impl GuestAddressSpace for GuestMemoryManager {
type M = GuestMemoryHybrid;
type T = GuestMemoryLoadGuard<GuestMemoryHybrid>;
fn memory(&self) -> Self::T {
self.default.memory()
}
}
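The `GuestAddressSpace::memory()` contract is that callers receive a stable snapshot (a load guard) of the memory map which remains valid even if the map is replaced concurrently, e.g. by hotplug. A minimal sketch of that snapshot-on-read pattern, using `Arc` replacement under an `RwLock`; the names `AtomicMap` and `MemoryMap` are hypothetical stand-ins, not the vm-memory API:

```rust
use std::sync::{Arc, RwLock};

// Hypothetical stand-in for a guest memory map.
#[derive(Debug)]
struct MemoryMap {
    regions: Vec<(u64, usize)>, // (guest base, size)
}

// Hypothetical stand-in for GuestMemoryAtomic: readers take cheap Arc
// snapshots, writers swap in a whole new map.
#[derive(Clone)]
struct AtomicMap {
    inner: Arc<RwLock<Arc<MemoryMap>>>,
}

impl AtomicMap {
    fn new(map: MemoryMap) -> Self {
        AtomicMap {
            inner: Arc::new(RwLock::new(Arc::new(map))),
        }
    }

    // Analogous to memory(): return a snapshot that outlives later updates.
    fn memory(&self) -> Arc<MemoryMap> {
        self.inner.read().unwrap().clone()
    }

    // Analogous to a hotplug update: replace the map without disturbing
    // snapshots that are already held.
    fn replace(&self, map: MemoryMap) {
        *self.inner.write().unwrap() = Arc::new(map);
    }
}

fn main() {
    let atomic = AtomicMap::new(MemoryMap { regions: vec![(0, 0x1000)] });
    let snapshot = atomic.memory();
    atomic.replace(MemoryMap {
        regions: vec![(0, 0x1000), (0x10_0000, 0x1000)],
    });
    // The old snapshot is unaffected by the hotplug-style replacement.
    assert_eq!(snapshot.regions.len(), 1);
    assert_eq!(atomic.memory().regions.len(), 2);
    println!("ok");
}
```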
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_memory_source_type() {
assert_eq!(
MemorySourceType::from_str("hugetlbfs").unwrap(),
MemorySourceType::FileOnHugeTlbFs
);
assert_eq!(
MemorySourceType::from_str("memfd").unwrap(),
MemorySourceType::MemFdShared
);
assert_eq!(
MemorySourceType::from_str("shmem").unwrap(),
MemorySourceType::MemFdShared
);
assert_eq!(
MemorySourceType::from_str("hugememfd").unwrap(),
MemorySourceType::MemFdOnHugeTlbFs
);
assert_eq!(
MemorySourceType::from_str("hugeshmem").unwrap(),
MemorySourceType::MemFdOnHugeTlbFs
);
assert_eq!(
MemorySourceType::from_str("anon").unwrap(),
MemorySourceType::MmapAnonymous
);
assert_eq!(
MemorySourceType::from_str("mmap").unwrap(),
MemorySourceType::MmapAnonymous
);
assert_eq!(
MemorySourceType::from_str("hugeanon").unwrap(),
MemorySourceType::MmapAnonymousHugeTlbFs
);
assert_eq!(
MemorySourceType::from_str("hugemmap").unwrap(),
MemorySourceType::MmapAnonymousHugeTlbFs
);
assert!(MemorySourceType::from_str("test").is_err());
}
#[ignore]
#[test]
fn test_to_manager() {
let manager = GuestMemoryManager::new();
let mgr = GuestMemoryManager::to_manager(&manager).unwrap();
assert_eq!(&manager as *const _, mgr as *const _);
}
}


@@ -0,0 +1,990 @@
// Copyright (C) 2022 Alibaba Cloud. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
use std::io::{Read, Write};
use std::sync::atomic::Ordering;
use vm_memory::bitmap::{Bitmap, BS};
use vm_memory::mmap::NewBitmap;
use vm_memory::volatile_memory::compute_offset;
use vm_memory::{
guest_memory, volatile_memory, Address, AtomicAccess, Bytes, FileOffset, GuestAddress,
GuestMemoryRegion, GuestUsize, MemoryRegionAddress, VolatileSlice,
};
/// Guest memory region for virtio-fs DAX window.
#[derive(Debug)]
pub struct GuestRegionRaw<B = ()> {
guest_base: GuestAddress,
addr: *mut u8,
size: usize,
bitmap: B,
}
impl<B: NewBitmap> GuestRegionRaw<B> {
/// Create a `GuestRegionRaw` object from raw pointer.
///
/// # Safety
/// The caller must ensure that `addr` and `size` describe a valid memory area with a
/// `'static` lifetime.
pub unsafe fn new(guest_base: GuestAddress, addr: *mut u8, size: usize) -> Self {
let bitmap = B::with_len(size);
GuestRegionRaw {
guest_base,
addr,
size,
bitmap,
}
}
}
impl<B: Bitmap> Bytes<MemoryRegionAddress> for GuestRegionRaw<B> {
type E = guest_memory::Error;
fn write(&self, buf: &[u8], addr: MemoryRegionAddress) -> guest_memory::Result<usize> {
let maddr = addr.raw_value() as usize;
self.as_volatile_slice()
.unwrap()
.write(buf, maddr)
.map_err(Into::into)
}
fn read(&self, buf: &mut [u8], addr: MemoryRegionAddress) -> guest_memory::Result<usize> {
let maddr = addr.raw_value() as usize;
self.as_volatile_slice()
.unwrap()
.read(buf, maddr)
.map_err(Into::into)
}
fn write_slice(&self, buf: &[u8], addr: MemoryRegionAddress) -> guest_memory::Result<()> {
let maddr = addr.raw_value() as usize;
self.as_volatile_slice()
.unwrap()
.write_slice(buf, maddr)
.map_err(Into::into)
}
fn read_slice(&self, buf: &mut [u8], addr: MemoryRegionAddress) -> guest_memory::Result<()> {
let maddr = addr.raw_value() as usize;
self.as_volatile_slice()
.unwrap()
.read_slice(buf, maddr)
.map_err(Into::into)
}
fn read_from<F>(
&self,
addr: MemoryRegionAddress,
src: &mut F,
count: usize,
) -> guest_memory::Result<usize>
where
F: Read,
{
let maddr = addr.raw_value() as usize;
self.as_volatile_slice()
.unwrap()
.read_from::<F>(maddr, src, count)
.map_err(Into::into)
}
fn read_exact_from<F>(
&self,
addr: MemoryRegionAddress,
src: &mut F,
count: usize,
) -> guest_memory::Result<()>
where
F: Read,
{
let maddr = addr.raw_value() as usize;
self.as_volatile_slice()
.unwrap()
.read_exact_from::<F>(maddr, src, count)
.map_err(Into::into)
}
fn write_to<F>(
&self,
addr: MemoryRegionAddress,
dst: &mut F,
count: usize,
) -> guest_memory::Result<usize>
where
F: Write,
{
let maddr = addr.raw_value() as usize;
self.as_volatile_slice()
.unwrap()
.write_to::<F>(maddr, dst, count)
.map_err(Into::into)
}
fn write_all_to<F>(
&self,
addr: MemoryRegionAddress,
dst: &mut F,
count: usize,
) -> guest_memory::Result<()>
where
F: Write,
{
let maddr = addr.raw_value() as usize;
self.as_volatile_slice()
.unwrap()
.write_all_to::<F>(maddr, dst, count)
.map_err(Into::into)
}
fn store<T: AtomicAccess>(
&self,
val: T,
addr: MemoryRegionAddress,
order: Ordering,
) -> guest_memory::Result<()> {
self.as_volatile_slice().and_then(|s| {
s.store(val, addr.raw_value() as usize, order)
.map_err(Into::into)
})
}
fn load<T: AtomicAccess>(
&self,
addr: MemoryRegionAddress,
order: Ordering,
) -> guest_memory::Result<T> {
self.as_volatile_slice()
.and_then(|s| s.load(addr.raw_value() as usize, order).map_err(Into::into))
}
}
impl<B: Bitmap> GuestMemoryRegion for GuestRegionRaw<B> {
type B = B;
fn len(&self) -> GuestUsize {
self.size as GuestUsize
}
fn start_addr(&self) -> GuestAddress {
self.guest_base
}
fn bitmap(&self) -> &Self::B {
&self.bitmap
}
fn get_host_address(&self, addr: MemoryRegionAddress) -> guest_memory::Result<*mut u8> {
// `wrapping_offset` itself is a safe function; the resulting pointer is valid
// because `addr` has just been range-checked via `check_address`.
self.check_address(addr)
.ok_or(guest_memory::Error::InvalidBackendAddress)
.map(|addr| self.addr.wrapping_offset(addr.raw_value() as isize))
}
fn file_offset(&self) -> Option<&FileOffset> {
None
}
unsafe fn as_slice(&self) -> Option<&[u8]> {
// This is safe because we mapped the area at addr ourselves, so this slice will not
// overflow. However, it is possible to alias.
Some(std::slice::from_raw_parts(self.addr, self.size))
}
unsafe fn as_mut_slice(&self) -> Option<&mut [u8]> {
// This is safe because we mapped the area at addr ourselves, so this slice will not
// overflow. However, it is possible to alias.
Some(std::slice::from_raw_parts_mut(self.addr, self.size))
}
fn get_slice(
&self,
offset: MemoryRegionAddress,
count: usize,
) -> guest_memory::Result<VolatileSlice<BS<B>>> {
let offset = offset.raw_value() as usize;
let end = compute_offset(offset, count)?;
if end > self.size {
return Err(volatile_memory::Error::OutOfBounds { addr: end }.into());
}
// Safe because we checked that offset + count was within our range and we only ever hand
// out volatile accessors.
Ok(unsafe {
VolatileSlice::with_bitmap(
(self.addr as usize + offset) as *mut _,
count,
self.bitmap.slice_at(offset),
)
})
}
#[cfg(target_os = "linux")]
fn is_hugetlbfs(&self) -> Option<bool> {
None
}
}
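`get_slice` above guards against both integer overflow (via `compute_offset`) and out-of-range access before handing out a raw pointer. A safe standalone sketch of the same bounds check; the helper name is illustrative:

```rust
// Overflow-safe bounds check in the spirit of get_slice(): reject the
// request unless offset + count fits inside the region, without ever
// wrapping on the addition.
fn checked_subslice(region: &[u8], offset: usize, count: usize) -> Option<&[u8]> {
    let end = offset.checked_add(count)?; // overflow => None
    if end > region.len() {
        return None; // out of bounds
    }
    Some(&region[offset..end])
}

fn main() {
    let buf = [0u8; 0x400];
    // Normal case.
    assert_eq!(checked_subslice(&buf, 0x100, 0x200).map(|s| s.len()), Some(0x200));
    // Past the end of the region.
    assert!(checked_subslice(&buf, 0x300, 0x200).is_none());
    // offset + count would overflow usize.
    assert!(checked_subslice(&buf, usize::MAX, 1).is_none());
    println!("ok");
}
```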
#[cfg(test)]
mod tests {
extern crate vmm_sys_util;
use super::*;
use crate::{GuestMemoryHybrid, GuestRegionHybrid};
use std::sync::Arc;
use vm_memory::{GuestAddressSpace, GuestMemory, VolatileMemory};
/*
use crate::bitmap::tests::test_guest_memory_and_region;
use crate::bitmap::AtomicBitmap;
use crate::GuestAddressSpace;
use std::fs::File;
use std::mem;
use std::path::Path;
use vmm_sys_util::tempfile::TempFile;
type GuestMemoryMmap = super::GuestMemoryMmap<()>;
type GuestRegionMmap = super::GuestRegionMmap<()>;
type MmapRegion = super::MmapRegion<()>;
*/
#[test]
fn test_region_raw_new() {
let mut buf = [0u8; 1024];
let m =
unsafe { GuestRegionRaw::<()>::new(GuestAddress(0x10_0000), &mut buf as *mut _, 1024) };
assert_eq!(m.start_addr(), GuestAddress(0x10_0000));
assert_eq!(m.len(), 1024);
}
/*
fn check_guest_memory_mmap(
maybe_guest_mem: Result<GuestMemoryMmap, Error>,
expected_regions_summary: &[(GuestAddress, usize)],
) {
assert!(maybe_guest_mem.is_ok());
let guest_mem = maybe_guest_mem.unwrap();
assert_eq!(guest_mem.num_regions(), expected_regions_summary.len());
let maybe_last_mem_reg = expected_regions_summary.last();
if let Some((region_addr, region_size)) = maybe_last_mem_reg {
let mut last_addr = region_addr.unchecked_add(*region_size as u64);
if last_addr.raw_value() != 0 {
last_addr = last_addr.unchecked_sub(1);
}
assert_eq!(guest_mem.last_addr(), last_addr);
}
for ((region_addr, region_size), mmap) in expected_regions_summary
.iter()
.zip(guest_mem.regions.iter())
{
assert_eq!(region_addr, &mmap.guest_base);
assert_eq!(region_size, &mmap.mapping.size());
assert!(guest_mem.find_region(*region_addr).is_some());
}
}
fn new_guest_memory_mmap(
regions_summary: &[(GuestAddress, usize)],
) -> Result<GuestMemoryMmap, Error> {
GuestMemoryMmap::from_ranges(regions_summary)
}
fn new_guest_memory_mmap_from_regions(
regions_summary: &[(GuestAddress, usize)],
) -> Result<GuestMemoryMmap, Error> {
GuestMemoryMmap::from_regions(
regions_summary
.iter()
.map(|(region_addr, region_size)| {
GuestRegionMmap::new(MmapRegion::new(*region_size).unwrap(), *region_addr)
.unwrap()
})
.collect(),
)
}
fn new_guest_memory_mmap_from_arc_regions(
regions_summary: &[(GuestAddress, usize)],
) -> Result<GuestMemoryMmap, Error> {
GuestMemoryMmap::from_arc_regions(
regions_summary
.iter()
.map(|(region_addr, region_size)| {
Arc::new(
GuestRegionMmap::new(MmapRegion::new(*region_size).unwrap(), *region_addr)
.unwrap(),
)
})
.collect(),
)
}
fn new_guest_memory_mmap_with_files(
regions_summary: &[(GuestAddress, usize)],
) -> Result<GuestMemoryMmap, Error> {
let regions: Vec<(GuestAddress, usize, Option<FileOffset>)> = regions_summary
.iter()
.map(|(region_addr, region_size)| {
let f = TempFile::new().unwrap().into_file();
f.set_len(*region_size as u64).unwrap();
(*region_addr, *region_size, Some(FileOffset::new(f, 0)))
})
.collect();
GuestMemoryMmap::from_ranges_with_files(&regions)
}
*/
#[test]
fn slice_addr() {
let mut buf = [0u8; 1024];
let m =
unsafe { GuestRegionRaw::<()>::new(GuestAddress(0x10_0000), &mut buf as *mut _, 1024) };
let s = m.get_slice(MemoryRegionAddress(2), 3).unwrap();
assert_eq!(s.as_ptr(), &mut buf[2] as *mut _);
}
/*
#[test]
fn test_address_in_range() {
let f1 = TempFile::new().unwrap().into_file();
f1.set_len(0x400).unwrap();
let f2 = TempFile::new().unwrap().into_file();
f2.set_len(0x400).unwrap();
let start_addr1 = GuestAddress(0x0);
let start_addr2 = GuestAddress(0x800);
let guest_mem =
GuestMemoryMmap::from_ranges(&[(start_addr1, 0x400), (start_addr2, 0x400)]).unwrap();
let guest_mem_backed_by_file = GuestMemoryMmap::from_ranges_with_files(&[
(start_addr1, 0x400, Some(FileOffset::new(f1, 0))),
(start_addr2, 0x400, Some(FileOffset::new(f2, 0))),
])
.unwrap();
let guest_mem_list = vec![guest_mem, guest_mem_backed_by_file];
for guest_mem in guest_mem_list.iter() {
assert!(guest_mem.address_in_range(GuestAddress(0x200)));
assert!(!guest_mem.address_in_range(GuestAddress(0x600)));
assert!(guest_mem.address_in_range(GuestAddress(0xa00)));
assert!(!guest_mem.address_in_range(GuestAddress(0xc00)));
}
}
#[test]
fn test_check_address() {
let f1 = TempFile::new().unwrap().into_file();
f1.set_len(0x400).unwrap();
let f2 = TempFile::new().unwrap().into_file();
f2.set_len(0x400).unwrap();
let start_addr1 = GuestAddress(0x0);
let start_addr2 = GuestAddress(0x800);
let guest_mem =
GuestMemoryMmap::from_ranges(&[(start_addr1, 0x400), (start_addr2, 0x400)]).unwrap();
let guest_mem_backed_by_file = GuestMemoryMmap::from_ranges_with_files(&[
(start_addr1, 0x400, Some(FileOffset::new(f1, 0))),
(start_addr2, 0x400, Some(FileOffset::new(f2, 0))),
])
.unwrap();
let guest_mem_list = vec![guest_mem, guest_mem_backed_by_file];
for guest_mem in guest_mem_list.iter() {
assert_eq!(
guest_mem.check_address(GuestAddress(0x200)),
Some(GuestAddress(0x200))
);
assert_eq!(guest_mem.check_address(GuestAddress(0x600)), None);
assert_eq!(
guest_mem.check_address(GuestAddress(0xa00)),
Some(GuestAddress(0xa00))
);
assert_eq!(guest_mem.check_address(GuestAddress(0xc00)), None);
}
}
#[test]
fn test_to_region_addr() {
let f1 = TempFile::new().unwrap().into_file();
f1.set_len(0x400).unwrap();
let f2 = TempFile::new().unwrap().into_file();
f2.set_len(0x400).unwrap();
let start_addr1 = GuestAddress(0x0);
let start_addr2 = GuestAddress(0x800);
let guest_mem =
GuestMemoryMmap::from_ranges(&[(start_addr1, 0x400), (start_addr2, 0x400)]).unwrap();
let guest_mem_backed_by_file = GuestMemoryMmap::from_ranges_with_files(&[
(start_addr1, 0x400, Some(FileOffset::new(f1, 0))),
(start_addr2, 0x400, Some(FileOffset::new(f2, 0))),
])
.unwrap();
let guest_mem_list = vec![guest_mem, guest_mem_backed_by_file];
for guest_mem in guest_mem_list.iter() {
assert!(guest_mem.to_region_addr(GuestAddress(0x600)).is_none());
let (r0, addr0) = guest_mem.to_region_addr(GuestAddress(0x800)).unwrap();
let (r1, addr1) = guest_mem.to_region_addr(GuestAddress(0xa00)).unwrap();
assert!(r0.as_ptr() == r1.as_ptr());
assert_eq!(addr0, MemoryRegionAddress(0));
assert_eq!(addr1, MemoryRegionAddress(0x200));
}
}
#[test]
fn test_get_host_address() {
let f1 = TempFile::new().unwrap().into_file();
f1.set_len(0x400).unwrap();
let f2 = TempFile::new().unwrap().into_file();
f2.set_len(0x400).unwrap();
let start_addr1 = GuestAddress(0x0);
let start_addr2 = GuestAddress(0x800);
let guest_mem =
GuestMemoryMmap::from_ranges(&[(start_addr1, 0x400), (start_addr2, 0x400)]).unwrap();
let guest_mem_backed_by_file = GuestMemoryMmap::from_ranges_with_files(&[
(start_addr1, 0x400, Some(FileOffset::new(f1, 0))),
(start_addr2, 0x400, Some(FileOffset::new(f2, 0))),
])
.unwrap();
let guest_mem_list = vec![guest_mem, guest_mem_backed_by_file];
for guest_mem in guest_mem_list.iter() {
assert!(guest_mem.get_host_address(GuestAddress(0x600)).is_err());
let ptr0 = guest_mem.get_host_address(GuestAddress(0x800)).unwrap();
let ptr1 = guest_mem.get_host_address(GuestAddress(0xa00)).unwrap();
assert_eq!(
ptr0,
guest_mem.find_region(GuestAddress(0x800)).unwrap().as_ptr()
);
assert_eq!(unsafe { ptr0.offset(0x200) }, ptr1);
}
}
#[test]
fn test_deref() {
let f = TempFile::new().unwrap().into_file();
f.set_len(0x400).unwrap();
let start_addr = GuestAddress(0x0);
let guest_mem = GuestMemoryMmap::from_ranges(&[(start_addr, 0x400)]).unwrap();
let guest_mem_backed_by_file = GuestMemoryMmap::from_ranges_with_files(&[(
start_addr,
0x400,
Some(FileOffset::new(f, 0)),
)])
.unwrap();
let guest_mem_list = vec![guest_mem, guest_mem_backed_by_file];
for guest_mem in guest_mem_list.iter() {
let sample_buf = &[1, 2, 3, 4, 5];
assert_eq!(guest_mem.write(sample_buf, start_addr).unwrap(), 5);
let slice = guest_mem
.find_region(GuestAddress(0))
.unwrap()
.as_volatile_slice()
.unwrap();
let buf = &mut [0, 0, 0, 0, 0];
assert_eq!(slice.read(buf, 0).unwrap(), 5);
assert_eq!(buf, sample_buf);
}
}
#[test]
fn test_read_u64() {
let f1 = TempFile::new().unwrap().into_file();
f1.set_len(0x1000).unwrap();
let f2 = TempFile::new().unwrap().into_file();
f2.set_len(0x1000).unwrap();
let start_addr1 = GuestAddress(0x0);
let start_addr2 = GuestAddress(0x1000);
let bad_addr = GuestAddress(0x2001);
let bad_addr2 = GuestAddress(0x1ffc);
let max_addr = GuestAddress(0x2000);
let gm =
GuestMemoryMmap::from_ranges(&[(start_addr1, 0x1000), (start_addr2, 0x1000)]).unwrap();
let gm_backed_by_file = GuestMemoryMmap::from_ranges_with_files(&[
(start_addr1, 0x1000, Some(FileOffset::new(f1, 0))),
(start_addr2, 0x1000, Some(FileOffset::new(f2, 0))),
])
.unwrap();
let gm_list = vec![gm, gm_backed_by_file];
for gm in gm_list.iter() {
let val1: u64 = 0xaa55_aa55_aa55_aa55;
let val2: u64 = 0x55aa_55aa_55aa_55aa;
assert_eq!(
format!("{:?}", gm.write_obj(val1, bad_addr).err().unwrap()),
format!("InvalidGuestAddress({:?})", bad_addr,)
);
assert_eq!(
format!("{:?}", gm.write_obj(val1, bad_addr2).err().unwrap()),
format!(
"PartialBuffer {{ expected: {:?}, completed: {:?} }}",
mem::size_of::<u64>(),
max_addr.checked_offset_from(bad_addr2).unwrap()
)
);
gm.write_obj(val1, GuestAddress(0x500)).unwrap();
gm.write_obj(val2, GuestAddress(0x1000 + 32)).unwrap();
let num1: u64 = gm.read_obj(GuestAddress(0x500)).unwrap();
let num2: u64 = gm.read_obj(GuestAddress(0x1000 + 32)).unwrap();
assert_eq!(val1, num1);
assert_eq!(val2, num2);
}
}
#[test]
fn write_and_read() {
let f = TempFile::new().unwrap().into_file();
f.set_len(0x400).unwrap();
let mut start_addr = GuestAddress(0x1000);
let gm = GuestMemoryMmap::from_ranges(&[(start_addr, 0x400)]).unwrap();
let gm_backed_by_file = GuestMemoryMmap::from_ranges_with_files(&[(
start_addr,
0x400,
Some(FileOffset::new(f, 0)),
)])
.unwrap();
let gm_list = vec![gm, gm_backed_by_file];
for gm in gm_list.iter() {
let sample_buf = &[1, 2, 3, 4, 5];
assert_eq!(gm.write(sample_buf, start_addr).unwrap(), 5);
let buf = &mut [0u8; 5];
assert_eq!(gm.read(buf, start_addr).unwrap(), 5);
assert_eq!(buf, sample_buf);
start_addr = GuestAddress(0x13ff);
assert_eq!(gm.write(sample_buf, start_addr).unwrap(), 1);
assert_eq!(gm.read(buf, start_addr).unwrap(), 1);
assert_eq!(buf[0], sample_buf[0]);
start_addr = GuestAddress(0x1000);
}
}
#[test]
fn read_to_and_write_from_mem() {
let f = TempFile::new().unwrap().into_file();
f.set_len(0x400).unwrap();
let gm = GuestMemoryMmap::from_ranges(&[(GuestAddress(0x1000), 0x400)]).unwrap();
let gm_backed_by_file = GuestMemoryMmap::from_ranges_with_files(&[(
GuestAddress(0x1000),
0x400,
Some(FileOffset::new(f, 0)),
)])
.unwrap();
let gm_list = vec![gm, gm_backed_by_file];
for gm in gm_list.iter() {
let addr = GuestAddress(0x1010);
let mut file = if cfg!(unix) {
File::open(Path::new("/dev/zero")).unwrap()
} else {
File::open(Path::new("c:\\Windows\\system32\\ntoskrnl.exe")).unwrap()
};
gm.write_obj(!0u32, addr).unwrap();
gm.read_exact_from(addr, &mut file, mem::size_of::<u32>())
.unwrap();
let value: u32 = gm.read_obj(addr).unwrap();
if cfg!(unix) {
assert_eq!(value, 0);
} else {
assert_eq!(value, 0x0090_5a4d);
}
let mut sink = Vec::new();
gm.write_all_to(addr, &mut sink, mem::size_of::<u32>())
.unwrap();
if cfg!(unix) {
assert_eq!(sink, vec![0; mem::size_of::<u32>()]);
} else {
assert_eq!(sink, vec![0x4d, 0x5a, 0x90, 0x00]);
};
}
}
#[test]
fn create_vec_with_regions() {
let region_size = 0x400;
let regions = vec![
(GuestAddress(0x0), region_size),
(GuestAddress(0x1000), region_size),
];
let mut iterated_regions = Vec::new();
let gm = GuestMemoryMmap::from_ranges(&regions).unwrap();
for region in gm.iter() {
assert_eq!(region.len(), region_size as GuestUsize);
}
for region in gm.iter() {
iterated_regions.push((region.start_addr(), region.len() as usize));
}
assert_eq!(regions, iterated_regions);
assert!(regions
.iter()
.map(|x| (x.0, x.1))
.eq(iterated_regions.iter().copied()));
assert_eq!(gm.regions[0].guest_base, regions[0].0);
assert_eq!(gm.regions[1].guest_base, regions[1].0);
}
#[test]
fn test_memory() {
let region_size = 0x400;
let regions = vec![
(GuestAddress(0x0), region_size),
(GuestAddress(0x1000), region_size),
];
let mut iterated_regions = Vec::new();
let gm = Arc::new(GuestMemoryMmap::from_ranges(&regions).unwrap());
let mem = gm.memory();
for region in mem.iter() {
assert_eq!(region.len(), region_size as GuestUsize);
}
for region in mem.iter() {
iterated_regions.push((region.start_addr(), region.len() as usize));
}
assert_eq!(regions, iterated_regions);
assert!(regions
.iter()
.map(|x| (x.0, x.1))
.eq(iterated_regions.iter().copied()));
assert_eq!(gm.regions[0].guest_base, regions[0].0);
assert_eq!(gm.regions[1].guest_base, regions[1].0);
}
#[test]
fn test_access_cross_boundary() {
let f1 = TempFile::new().unwrap().into_file();
f1.set_len(0x1000).unwrap();
let f2 = TempFile::new().unwrap().into_file();
f2.set_len(0x1000).unwrap();
let start_addr1 = GuestAddress(0x0);
let start_addr2 = GuestAddress(0x1000);
let gm =
GuestMemoryMmap::from_ranges(&[(start_addr1, 0x1000), (start_addr2, 0x1000)]).unwrap();
let gm_backed_by_file = GuestMemoryMmap::from_ranges_with_files(&[
(start_addr1, 0x1000, Some(FileOffset::new(f1, 0))),
(start_addr2, 0x1000, Some(FileOffset::new(f2, 0))),
])
.unwrap();
let gm_list = vec![gm, gm_backed_by_file];
for gm in gm_list.iter() {
let sample_buf = &[1, 2, 3, 4, 5];
assert_eq!(gm.write(sample_buf, GuestAddress(0xffc)).unwrap(), 5);
let buf = &mut [0u8; 5];
assert_eq!(gm.read(buf, GuestAddress(0xffc)).unwrap(), 5);
assert_eq!(buf, sample_buf);
}
}
#[test]
fn test_retrieve_fd_backing_memory_region() {
let f = TempFile::new().unwrap().into_file();
f.set_len(0x400).unwrap();
let start_addr = GuestAddress(0x0);
let gm = GuestMemoryMmap::from_ranges(&[(start_addr, 0x400)]).unwrap();
assert!(gm.find_region(start_addr).is_some());
let region = gm.find_region(start_addr).unwrap();
assert!(region.file_offset().is_none());
let gm = GuestMemoryMmap::from_ranges_with_files(&[(
start_addr,
0x400,
Some(FileOffset::new(f, 0)),
)])
.unwrap();
assert!(gm.find_region(start_addr).is_some());
let region = gm.find_region(start_addr).unwrap();
assert!(region.file_offset().is_some());
}
// Windows needs a dedicated test where it will retrieve the allocation
// granularity to determine a proper offset (other than 0) that can be
// used for the backing file. Refer to Microsoft docs here:
// https://docs.microsoft.com/en-us/windows/desktop/api/memoryapi/nf-memoryapi-mapviewoffile
#[test]
#[cfg(unix)]
fn test_retrieve_offset_from_fd_backing_memory_region() {
let f = TempFile::new().unwrap().into_file();
f.set_len(0x1400).unwrap();
// Needs to be aligned on 4k, otherwise mmap will fail.
let offset = 0x1000;
let start_addr = GuestAddress(0x0);
let gm = GuestMemoryMmap::from_ranges(&[(start_addr, 0x400)]).unwrap();
assert!(gm.find_region(start_addr).is_some());
let region = gm.find_region(start_addr).unwrap();
assert!(region.file_offset().is_none());
let gm = GuestMemoryMmap::from_ranges_with_files(&[(
start_addr,
0x400,
Some(FileOffset::new(f, offset)),
)])
.unwrap();
assert!(gm.find_region(start_addr).is_some());
let region = gm.find_region(start_addr).unwrap();
assert!(region.file_offset().is_some());
assert_eq!(region.file_offset().unwrap().start(), offset);
}
*/
#[test]
fn test_mmap_insert_region() {
let start_addr1 = GuestAddress(0);
let start_addr2 = GuestAddress(0x10_0000);
let guest_mem = GuestMemoryHybrid::<()>::new();
let mut raw_buf = [0u8; 0x1000];
let raw_ptr = &mut raw_buf as *mut u8;
let reg = unsafe { GuestRegionRaw::<()>::new(start_addr1, raw_ptr, 0x1000) };
let guest_mem = guest_mem
.insert_region(Arc::new(GuestRegionHybrid::from_raw_region(reg)))
.unwrap();
let reg = unsafe { GuestRegionRaw::<()>::new(start_addr2, raw_ptr, 0x1000) };
let gm = &guest_mem
.insert_region(Arc::new(GuestRegionHybrid::from_raw_region(reg)))
.unwrap();
let mem_orig = gm.memory();
assert_eq!(mem_orig.num_regions(), 2);
let reg = unsafe { GuestRegionRaw::new(GuestAddress(0x8000), raw_ptr, 0x1000) };
let mmap = Arc::new(GuestRegionHybrid::from_raw_region(reg));
let gm = gm.insert_region(mmap).unwrap();
let reg = unsafe { GuestRegionRaw::new(GuestAddress(0x4000), raw_ptr, 0x1000) };
let mmap = Arc::new(GuestRegionHybrid::from_raw_region(reg));
let gm = gm.insert_region(mmap).unwrap();
let reg = unsafe { GuestRegionRaw::new(GuestAddress(0xc000), raw_ptr, 0x1000) };
let mmap = Arc::new(GuestRegionHybrid::from_raw_region(reg));
let gm = gm.insert_region(mmap).unwrap();
let reg = unsafe { GuestRegionRaw::new(GuestAddress(0xc000), raw_ptr, 0x1000) };
let mmap = Arc::new(GuestRegionHybrid::from_raw_region(reg));
gm.insert_region(mmap).unwrap_err();
assert_eq!(mem_orig.num_regions(), 2);
assert_eq!(gm.num_regions(), 5);
assert_eq!(gm.regions[0].start_addr(), GuestAddress(0x0000));
assert_eq!(gm.regions[1].start_addr(), GuestAddress(0x4000));
assert_eq!(gm.regions[2].start_addr(), GuestAddress(0x8000));
assert_eq!(gm.regions[3].start_addr(), GuestAddress(0xc000));
assert_eq!(gm.regions[4].start_addr(), GuestAddress(0x10_0000));
}
#[test]
fn test_mmap_remove_region() {
let start_addr1 = GuestAddress(0);
let start_addr2 = GuestAddress(0x10_0000);
let guest_mem = GuestMemoryHybrid::<()>::new();
let mut raw_buf = [0u8; 0x1000];
let reg = unsafe { GuestRegionRaw::<()>::new(start_addr1, &mut raw_buf as *mut _, 0x1000) };
let guest_mem = guest_mem
.insert_region(Arc::new(GuestRegionHybrid::from_raw_region(reg)))
.unwrap();
let reg = unsafe { GuestRegionRaw::<()>::new(start_addr2, &mut raw_buf as *mut _, 0x1000) };
let gm = &guest_mem
.insert_region(Arc::new(GuestRegionHybrid::from_raw_region(reg)))
.unwrap();
let mem_orig = gm.memory();
assert_eq!(mem_orig.num_regions(), 2);
gm.remove_region(GuestAddress(0), 128).unwrap_err();
gm.remove_region(GuestAddress(0x4000), 128).unwrap_err();
let (gm, region) = gm.remove_region(GuestAddress(0x10_0000), 0x1000).unwrap();
assert_eq!(mem_orig.num_regions(), 2);
assert_eq!(gm.num_regions(), 1);
assert_eq!(gm.regions[0].start_addr(), GuestAddress(0x0000));
assert_eq!(region.start_addr(), GuestAddress(0x10_0000));
}
#[test]
fn test_guest_memory_mmap_get_slice() {
let start_addr1 = GuestAddress(0);
let mut raw_buf = [0u8; 0x400];
let region =
unsafe { GuestRegionRaw::<()>::new(start_addr1, &mut raw_buf as *mut _, 0x400) };
// Normal case.
let slice_addr = MemoryRegionAddress(0x100);
let slice_size = 0x200;
let slice = region.get_slice(slice_addr, slice_size).unwrap();
assert_eq!(slice.len(), slice_size);
// Empty slice.
let slice_addr = MemoryRegionAddress(0x200);
let slice_size = 0x0;
let slice = region.get_slice(slice_addr, slice_size).unwrap();
assert!(slice.is_empty());
// Error case when slice_size is beyond the boundary.
let slice_addr = MemoryRegionAddress(0x300);
let slice_size = 0x200;
assert!(region.get_slice(slice_addr, slice_size).is_err());
}
#[test]
fn test_guest_memory_mmap_as_volatile_slice() {
let start_addr1 = GuestAddress(0);
let mut raw_buf = [0u8; 0x400];
let region =
unsafe { GuestRegionRaw::<()>::new(start_addr1, &mut raw_buf as *mut _, 0x400) };
let region_size = 0x400;
// Test slice length.
let slice = region.as_volatile_slice().unwrap();
assert_eq!(slice.len(), region_size);
// Test slice data.
let v = 0x1234_5678u32;
let r = slice.get_ref::<u32>(0x200).unwrap();
r.store(v);
assert_eq!(r.load(), v);
}
#[test]
fn test_guest_memory_get_slice() {
let start_addr1 = GuestAddress(0);
let start_addr2 = GuestAddress(0x800);
let guest_mem = GuestMemoryHybrid::<()>::new();
let mut raw_buf = [0u8; 0x400];
let reg = unsafe { GuestRegionRaw::<()>::new(start_addr1, &mut raw_buf as *mut _, 0x400) };
let guest_mem = guest_mem
.insert_region(Arc::new(GuestRegionHybrid::from_raw_region(reg)))
.unwrap();
let reg = unsafe { GuestRegionRaw::<()>::new(start_addr2, &mut raw_buf as *mut _, 0x400) };
let guest_mem = guest_mem
.insert_region(Arc::new(GuestRegionHybrid::from_raw_region(reg)))
.unwrap();
// Normal cases.
let slice_size = 0x200;
let slice = guest_mem
.get_slice(GuestAddress(0x100), slice_size)
.unwrap();
assert_eq!(slice.len(), slice_size);
let slice_size = 0x400;
let slice = guest_mem
.get_slice(GuestAddress(0x800), slice_size)
.unwrap();
assert_eq!(slice.len(), slice_size);
// Empty slice.
assert!(guest_mem
.get_slice(GuestAddress(0x900), 0)
.unwrap()
.is_empty());
// Error cases, wrong size or base address.
assert!(guest_mem.get_slice(GuestAddress(0), 0x500).is_err());
assert!(guest_mem.get_slice(GuestAddress(0x600), 0x100).is_err());
assert!(guest_mem.get_slice(GuestAddress(0xc00), 0x100).is_err());
}
#[test]
fn test_checked_offset() {
let start_addr1 = GuestAddress(0);
let start_addr2 = GuestAddress(0x800);
let start_addr3 = GuestAddress(0xc00);
let guest_mem = GuestMemoryHybrid::<()>::new();
let mut raw_buf = [0u8; 0x400];
let reg = unsafe { GuestRegionRaw::<()>::new(start_addr1, &mut raw_buf as *mut _, 0x400) };
let guest_mem = guest_mem
.insert_region(Arc::new(GuestRegionHybrid::from_raw_region(reg)))
.unwrap();
let reg = unsafe { GuestRegionRaw::<()>::new(start_addr2, &mut raw_buf as *mut _, 0x400) };
let guest_mem = guest_mem
.insert_region(Arc::new(GuestRegionHybrid::from_raw_region(reg)))
.unwrap();
let reg = unsafe { GuestRegionRaw::<()>::new(start_addr3, &mut raw_buf as *mut _, 0x400) };
let guest_mem = guest_mem
.insert_region(Arc::new(GuestRegionHybrid::from_raw_region(reg)))
.unwrap();
assert_eq!(
guest_mem.checked_offset(start_addr1, 0x200),
Some(GuestAddress(0x200))
);
assert_eq!(
guest_mem.checked_offset(start_addr1, 0xa00),
Some(GuestAddress(0xa00))
);
assert_eq!(
guest_mem.checked_offset(start_addr2, 0x7ff),
Some(GuestAddress(0xfff))
);
assert_eq!(guest_mem.checked_offset(start_addr2, 0xc00), None);
assert_eq!(guest_mem.checked_offset(start_addr1, usize::MAX), None);
assert_eq!(guest_mem.checked_offset(start_addr1, 0x400), None);
assert_eq!(
guest_mem.checked_offset(start_addr1, 0x400 - 1),
Some(GuestAddress(0x400 - 1))
);
}
#[test]
fn test_check_range() {
let start_addr1 = GuestAddress(0);
let start_addr2 = GuestAddress(0x800);
let start_addr3 = GuestAddress(0xc00);
let guest_mem = GuestMemoryHybrid::<()>::new();
let mut raw_buf = [0u8; 0x400];
let reg = unsafe { GuestRegionRaw::<()>::new(start_addr1, &mut raw_buf as *mut _, 0x400) };
let guest_mem = guest_mem
.insert_region(Arc::new(GuestRegionHybrid::from_raw_region(reg)))
.unwrap();
let reg = unsafe { GuestRegionRaw::<()>::new(start_addr2, &mut raw_buf as *mut _, 0x400) };
let guest_mem = guest_mem
.insert_region(Arc::new(GuestRegionHybrid::from_raw_region(reg)))
.unwrap();
let reg = unsafe { GuestRegionRaw::<()>::new(start_addr3, &mut raw_buf as *mut _, 0x400) };
let guest_mem = guest_mem
.insert_region(Arc::new(GuestRegionHybrid::from_raw_region(reg)))
.unwrap();
assert!(guest_mem.check_range(start_addr1, 0x0));
assert!(guest_mem.check_range(start_addr1, 0x200));
assert!(guest_mem.check_range(start_addr1, 0x400));
assert!(!guest_mem.check_range(start_addr1, 0xa00));
assert!(guest_mem.check_range(start_addr2, 0x7ff));
assert!(guest_mem.check_range(start_addr2, 0x800));
assert!(!guest_mem.check_range(start_addr2, 0x801));
assert!(!guest_mem.check_range(start_addr2, 0xc00));
assert!(!guest_mem.check_range(start_addr1, usize::MAX));
}
}


@@ -0,0 +1,85 @@
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
//! Types for NUMA information.
use vm_memory::{GuestAddress, GuestUsize};
/// mbind() policy: prefer allocation from the specified node, but fall back to other
/// nodes instead of failing (and thus avoid OOM).
pub const MPOL_PREFERRED: u32 = 1;
/// mbind() flag: try to move existing pages so that they conform to the policy.
pub const MPOL_MF_MOVE: u32 = 2;
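Note that in mbind(2) these two constants go into different arguments: `MPOL_PREFERRED` is a *mode*, while `MPOL_MF_MOVE` belongs in the separate *flags* argument; they are not OR-ed together. A sketch with a hypothetical helper (no real syscall is made):

```rust
// Same values as the crate's constants; see mbind(2).
const MPOL_PREFERRED: u32 = 1;
const MPOL_MF_MOVE: u32 = 2;

// Hypothetical helper showing that mode and flags are separate parameters.
fn describe_mbind(mode: u32, flags: u32) -> String {
    let mode_s = match mode {
        MPOL_PREFERRED => "prefer the given node, fall back instead of OOM",
        _ => "other policy",
    };
    let move_s = if flags & MPOL_MF_MOVE != 0 {
        ", moving already-allocated pages"
    } else {
        ""
    };
    format!("{mode_s}{move_s}")
}

fn main() {
    let s = describe_mbind(MPOL_PREFERRED, MPOL_MF_MOVE);
    assert!(s.contains("prefer"));
    assert!(s.contains("moving"));
    println!("{s}");
}
```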
/// Table recording the NUMA ids assigned to different devices.
pub struct NumaIdTable {
/// Vector of NUMA ids, one per memory region.
pub memory: Vec<u32>,
/// Vector of NUMA ids, one per CPU.
pub cpu: Vec<u32>,
}
/// Record numa node memory information.
#[derive(Debug, Default, Clone, Copy, PartialEq, Eq)]
pub struct NumaNodeInfo {
/// Base address of the region in guest physical address space.
pub base: GuestAddress,
/// Size of the address region.
pub size: GuestUsize,
}
/// Records all region info for a NUMA node.
#[derive(Debug, Default, Clone, PartialEq, Eq)]
pub struct NumaNode {
region_infos: Vec<NumaNodeInfo>,
vcpu_ids: Vec<u32>,
}
impl NumaNode {
/// Get a reference to the region infos of this NUMA node.
pub fn region_infos(&self) -> &Vec<NumaNodeInfo> {
&self.region_infos
}
/// Get the vCPU ids belonging to this NUMA node.
pub fn vcpu_ids(&self) -> &Vec<u32> {
&self.vcpu_ids
}
/// Add a new region info to this NUMA node.
pub fn add_info(&mut self, info: &NumaNodeInfo) {
self.region_infos.push(*info);
}
/// Add a group of vCPU ids belonging to this NUMA node.
pub fn add_vcpu_ids(&mut self, vcpu_ids: &[u32]) {
self.vcpu_ids.extend(vcpu_ids)
}
/// Create a new, empty NUMA node.
pub fn new() -> NumaNode {
NumaNode {
region_infos: Vec::new(),
vcpu_ids: Vec::new(),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_create_numa_node() {
let mut numa_node = NumaNode::new();
let info = NumaNodeInfo {
base: GuestAddress(0),
size: 1024,
};
numa_node.add_info(&info);
assert_eq!(*numa_node.region_infos(), vec![info]);
let vcpu_ids = vec![0, 1, 2, 3];
numa_node.add_vcpu_ids(&vcpu_ids);
assert_eq!(*numa_node.vcpu_ids(), vcpu_ids);
}
}


@@ -0,0 +1,564 @@
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
use std::ffi::CString;
use std::fs::{File, OpenOptions};
use std::os::unix::io::FromRawFd;
use std::path::Path;
use std::str::FromStr;
use nix::sys::memfd;
use vm_memory::{Address, FileOffset, GuestAddress, GuestUsize};
use crate::memory::MemorySourceType;
use crate::memory::MemorySourceType::MemFdShared;
use crate::AddressSpaceError;
/// Type of address space regions.
///
/// On physical machines, physical memory may have different properties, such as
/// volatile vs non-volatile, read-only vs read-write, non-executable vs executable etc.
/// On virtual machines, the concept of memory property may be extended to support better
/// cooperation between the hypervisor and the guest kernel. Here address space region type means
/// what the region will be used for by the guest OS, and different permissions and policies may
/// be applied to different address space regions.
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub enum AddressSpaceRegionType {
/// Normal memory accessible by CPUs and IO devices.
DefaultMemory,
/// MMIO address region for Devices.
DeviceMemory,
/// DAX address region for virtio-fs/virtio-pmem.
DAXMemory,
}
/// Struct to maintain configuration information about a guest address region.
#[derive(Debug, Clone)]
pub struct AddressSpaceRegion {
/// Type of address space regions.
pub ty: AddressSpaceRegionType,
/// Base address of the region in virtual machine's physical address space.
pub base: GuestAddress,
/// Size of the address space region.
pub size: GuestUsize,
/// Host NUMA node ids assigned to this region.
pub host_numa_node_id: Option<u32>,
/// File/offset tuple to back the memory allocation.
file_offset: Option<FileOffset>,
/// Mmap permission flags.
perm_flags: i32,
/// Mmap protection flags.
prot_flags: i32,
/// Hugepage madvise hint.
///
/// It requires the 'advise' or 'always' policy in the host shmem configuration.
is_hugepage: bool,
/// Hotplug hint.
is_hotplug: bool,
/// Anonymous memory hint.
///
/// It should be true for regions with the MADV_DONTFORK flag enabled.
is_anon: bool,
}
#[allow(clippy::too_many_arguments)]
impl AddressSpaceRegion {
/// Create an address space region with default configuration.
pub fn new(ty: AddressSpaceRegionType, base: GuestAddress, size: GuestUsize) -> Self {
AddressSpaceRegion {
ty,
base,
size,
host_numa_node_id: None,
file_offset: None,
perm_flags: libc::MAP_SHARED,
prot_flags: libc::PROT_READ | libc::PROT_WRITE,
is_hugepage: false,
is_hotplug: false,
is_anon: false,
}
}
/// Create an address space region with all configurable information.
///
/// # Arguments
/// * `ty` - Type of the address region
/// * `base` - Base address in VM to map content
/// * `size` - Length of content to map
/// * `host_numa_node_id` - Optional host NUMA node id to allocate memory from
/// * `file_offset` - Optional file descriptor and offset to map content from
/// * `perm_flags` - mmap permission flags
/// * `prot_flags` - mmap protection flags
/// * `is_hotplug` - Whether it's a region for hotplug.
pub fn build(
ty: AddressSpaceRegionType,
base: GuestAddress,
size: GuestUsize,
host_numa_node_id: Option<u32>,
file_offset: Option<FileOffset>,
perm_flags: i32,
prot_flags: i32,
is_hotplug: bool,
) -> Self {
let mut region = Self::new(ty, base, size);
region.set_host_numa_node_id(host_numa_node_id);
region.set_file_offset(file_offset);
region.set_perm_flags(perm_flags);
region.set_prot_flags(prot_flags);
if is_hotplug {
region.set_hotplug();
}
region
}
/// Create an address space region to map memory into the virtual machine.
///
/// # Arguments
/// * `base` - Base address in VM to map content
/// * `size` - Length of content to map
/// * `numa_node_id` - Optional NUMA node id to allocate memory from
/// * `mem_type` - Type of memory to map from, either 'shmem' or 'hugetlbfs'
/// * `mem_file_path` - Memory file path
/// * `mem_prealloc` - Whether to enable pre-allocation of guest memory
/// * `is_hotplug` - Whether it's a region for hotplug.
pub fn create_default_memory_region(
base: GuestAddress,
size: GuestUsize,
numa_node_id: Option<u32>,
mem_type: &str,
mem_file_path: &str,
mem_prealloc: bool,
is_hotplug: bool,
) -> Result<AddressSpaceRegion, AddressSpaceError> {
Self::create_memory_region(
base,
size,
numa_node_id,
mem_type,
mem_file_path,
mem_prealloc,
libc::PROT_READ | libc::PROT_WRITE,
is_hotplug,
)
}
/// Create an address space region to map memory from memfd/hugetlbfs into the virtual machine.
///
/// # Arguments
/// * `base` - Base address in VM to map content
/// * `size` - Length of content to map
/// * `numa_node_id` - Optional NUMA node id to allocate memory from
/// * `mem_type` - Type of memory to map from, either 'shmem' or 'hugetlbfs'
/// * `mem_file_path` - Memory file path
/// * `mem_prealloc` - Whether to enable pre-allocation of guest memory
/// * `prot_flags` - mmap protection flags
/// * `is_hotplug` - Whether it's a region for hotplug.
pub fn create_memory_region(
base: GuestAddress,
size: GuestUsize,
numa_node_id: Option<u32>,
mem_type: &str,
mem_file_path: &str,
mem_prealloc: bool,
prot_flags: i32,
is_hotplug: bool,
) -> Result<AddressSpaceRegion, AddressSpaceError> {
let perm_flags = if mem_prealloc {
libc::MAP_SHARED | libc::MAP_POPULATE
} else {
libc::MAP_SHARED
};
let source_type = MemorySourceType::from_str(mem_type)
.map_err(|_e| AddressSpaceError::InvalidMemorySourceType(mem_type.to_string()))?;
let mut reg = match source_type {
MemorySourceType::MemFdShared | MemorySourceType::MemFdOnHugeTlbFs => {
let fn_str = if source_type == MemFdShared {
CString::new("shmem").expect("CString::new('shmem') failed")
} else {
CString::new("hugeshmem").expect("CString::new('hugeshmem') failed")
};
let filename = fn_str.as_c_str();
let fd = memfd::memfd_create(filename, memfd::MemFdCreateFlag::empty())
.map_err(AddressSpaceError::CreateMemFd)?;
// Safe because we have just created the fd.
let file: File = unsafe { File::from_raw_fd(fd) };
file.set_len(size).map_err(AddressSpaceError::SetFileSize)?;
Self::build(
AddressSpaceRegionType::DefaultMemory,
base,
size,
numa_node_id,
Some(FileOffset::new(file, 0)),
perm_flags,
prot_flags,
is_hotplug,
)
}
MemorySourceType::MmapAnonymous | MemorySourceType::MmapAnonymousHugeTlbFs => {
let mut perm_flags = libc::MAP_PRIVATE | libc::MAP_ANONYMOUS;
if mem_prealloc {
perm_flags |= libc::MAP_POPULATE
}
Self::build(
AddressSpaceRegionType::DefaultMemory,
base,
size,
numa_node_id,
None,
perm_flags,
prot_flags,
is_hotplug,
)
}
MemorySourceType::FileOnHugeTlbFs => {
let path = Path::new(mem_file_path);
if let Some(parent_dir) = path.parent() {
// Ensure that the parent directory of the mem file path exists.
std::fs::create_dir_all(parent_dir).map_err(AddressSpaceError::CreateDir)?;
}
let file = OpenOptions::new()
.read(true)
.write(true)
.create(true)
.open(mem_file_path)
.map_err(AddressSpaceError::OpenFile)?;
nix::unistd::unlink(mem_file_path).map_err(AddressSpaceError::UnlinkFile)?;
file.set_len(size).map_err(AddressSpaceError::SetFileSize)?;
let file_offset = FileOffset::new(file, 0);
Self::build(
AddressSpaceRegionType::DefaultMemory,
base,
size,
numa_node_id,
Some(file_offset),
perm_flags,
prot_flags,
is_hotplug,
)
}
};
if source_type.is_hugepage() {
reg.set_hugepage();
}
if source_type.is_mmap_anonymous() {
reg.set_anonpage();
}
Ok(reg)
}
/// Create an address region for device MMIO.
///
/// # Arguments
/// * `base` - Base address in VM to map content
/// * `size` - Length of content to map
pub fn create_device_region(
base: GuestAddress,
size: GuestUsize,
) -> Result<AddressSpaceRegion, AddressSpaceError> {
Ok(Self::build(
AddressSpaceRegionType::DeviceMemory,
base,
size,
None,
None,
0,
0,
false,
))
}
/// Get type of the address space region.
pub fn region_type(&self) -> AddressSpaceRegionType {
self.ty
}
/// Get size of region.
pub fn len(&self) -> GuestUsize {
self.size
}
/// Get the inclusive start physical address of the region.
pub fn start_addr(&self) -> GuestAddress {
self.base
}
/// Get the inclusive end physical address of the region.
pub fn last_addr(&self) -> GuestAddress {
debug_assert!(self.size > 0 && self.base.checked_add(self.size).is_some());
GuestAddress(self.base.raw_value() + self.size - 1)
}
/// Get mmap permission flags of the address space region.
pub fn perm_flags(&self) -> i32 {
self.perm_flags
}
/// Set mmap permission flags for the address space region.
pub fn set_perm_flags(&mut self, perm_flags: i32) {
self.perm_flags = perm_flags;
}
/// Get mmap protection flags of the address space region.
pub fn prot_flags(&self) -> i32 {
self.prot_flags
}
/// Set mmap protection flags for the address space region.
pub fn set_prot_flags(&mut self, prot_flags: i32) {
self.prot_flags = prot_flags;
}
/// Get the associated host NUMA node id of this region.
pub fn host_numa_node_id(&self) -> Option<u32> {
self.host_numa_node_id
}
/// Set associated NUMA node ID to allocate memory from for this region.
pub fn set_host_numa_node_id(&mut self, host_numa_node_id: Option<u32>) {
self.host_numa_node_id = host_numa_node_id;
}
/// Check whether the address space region is backed by a memory file.
pub fn has_file(&self) -> bool {
self.file_offset.is_some()
}
/// Get optional file associated with the region.
pub fn file_offset(&self) -> Option<&FileOffset> {
self.file_offset.as_ref()
}
/// Set associated file/offset pair for the region.
pub fn set_file_offset(&mut self, file_offset: Option<FileOffset>) {
self.file_offset = file_offset;
}
/// Set the hotplug hint.
pub fn set_hotplug(&mut self) {
self.is_hotplug = true
}
/// Get the hotplug hint.
pub fn is_hotplug(&self) -> bool {
self.is_hotplug
}
/// Set the hugepage hint for `madvise()`; it only takes effect when the memory type is `shmem`.
pub fn set_hugepage(&mut self) {
self.is_hugepage = true
}
/// Get the hugepage hint.
pub fn is_hugepage(&self) -> bool {
self.is_hugepage
}
/// Set the anonymous memory hint.
pub fn set_anonpage(&mut self) {
self.is_anon = true
}
/// Get the anonymous memory hint.
pub fn is_anonpage(&self) -> bool {
self.is_anon
}
/// Check whether the address space region is valid.
pub fn is_valid(&self) -> bool {
self.size > 0 && self.base.checked_add(self.size).is_some()
}
/// Check whether the address space region intersects with another one.
pub fn intersect_with(&self, other: &AddressSpaceRegion) -> bool {
// Always treat an invalid address region as intersecting.
let end1 = match self.base.checked_add(self.size) {
Some(addr) => addr,
None => return true,
};
let end2 = match other.base.checked_add(other.size) {
Some(addr) => addr,
None => return true,
};
!(end1 <= other.base || self.base >= end2)
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::io::Write;
use vmm_sys_util::tempfile::TempFile;
#[test]
fn test_address_space_region_valid() {
let reg1 = AddressSpaceRegion::new(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0xFFFFFFFFFFFFF000),
0x2000,
);
assert!(!reg1.is_valid());
let reg1 = AddressSpaceRegion::new(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0xFFFFFFFFFFFFF000),
0x1000,
);
assert!(!reg1.is_valid());
let reg1 = AddressSpaceRegion::new(
AddressSpaceRegionType::DeviceMemory,
GuestAddress(0xFFFFFFFFFFFFE000),
0x1000,
);
assert!(reg1.is_valid());
assert_eq!(reg1.start_addr(), GuestAddress(0xFFFFFFFFFFFFE000));
assert_eq!(reg1.len(), 0x1000);
assert!(!reg1.has_file());
assert!(reg1.file_offset().is_none());
assert_eq!(reg1.perm_flags(), libc::MAP_SHARED);
assert_eq!(reg1.prot_flags(), libc::PROT_READ | libc::PROT_WRITE);
assert_eq!(reg1.region_type(), AddressSpaceRegionType::DeviceMemory);
let tmp_file = TempFile::new().unwrap();
let mut f = tmp_file.into_file();
let sample_buf = &[1, 2, 3, 4, 5];
assert!(f.write_all(sample_buf).is_ok());
let reg2 = AddressSpaceRegion::build(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x1000),
0x1000,
None,
Some(FileOffset::new(f, 0x0)),
0x5a,
0x5a,
false,
);
assert_eq!(reg2.region_type(), AddressSpaceRegionType::DefaultMemory);
assert!(reg2.is_valid());
assert_eq!(reg2.start_addr(), GuestAddress(0x1000));
assert_eq!(reg2.len(), 0x1000);
assert!(reg2.has_file());
assert!(reg2.file_offset().is_some());
assert_eq!(reg2.perm_flags(), 0x5a);
assert_eq!(reg2.prot_flags(), 0x5a);
}
#[test]
fn test_address_space_region_intersect() {
let reg1 = AddressSpaceRegion::new(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x1000),
0x1000,
);
let reg2 = AddressSpaceRegion::new(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x2000),
0x1000,
);
let reg3 = AddressSpaceRegion::new(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x1000),
0x1001,
);
let reg4 = AddressSpaceRegion::new(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0x1100),
0x100,
);
let reg5 = AddressSpaceRegion::new(
AddressSpaceRegionType::DefaultMemory,
GuestAddress(0xFFFFFFFFFFFFF000),
0x2000,
);
assert!(!reg1.intersect_with(&reg2));
assert!(!reg2.intersect_with(&reg1));
// intersect with self
assert!(reg1.intersect_with(&reg1));
// intersect with others
assert!(reg3.intersect_with(&reg2));
assert!(reg2.intersect_with(&reg3));
assert!(reg1.intersect_with(&reg4));
assert!(reg4.intersect_with(&reg1));
assert!(reg1.intersect_with(&reg5));
assert!(reg5.intersect_with(&reg1));
}
#[test]
fn test_create_device_region() {
let reg = AddressSpaceRegion::create_device_region(GuestAddress(0x10000), 0x1000).unwrap();
assert_eq!(reg.region_type(), AddressSpaceRegionType::DeviceMemory);
assert_eq!(reg.start_addr(), GuestAddress(0x10000));
assert_eq!(reg.len(), 0x1000);
}
#[test]
fn test_create_default_memory_region() {
AddressSpaceRegion::create_default_memory_region(
GuestAddress(0x100000),
0x100000,
None,
"invalid",
"invalid",
false,
false,
)
.unwrap_err();
let reg = AddressSpaceRegion::create_default_memory_region(
GuestAddress(0x100000),
0x100000,
None,
"shmem",
"",
false,
false,
)
.unwrap();
assert_eq!(reg.region_type(), AddressSpaceRegionType::DefaultMemory);
assert_eq!(reg.start_addr(), GuestAddress(0x100000));
assert_eq!(reg.last_addr(), GuestAddress(0x1fffff));
assert_eq!(reg.len(), 0x100000);
assert!(reg.file_offset().is_some());
let reg = AddressSpaceRegion::create_default_memory_region(
GuestAddress(0x100000),
0x100000,
None,
"hugeshmem",
"",
true,
false,
)
.unwrap();
assert_eq!(reg.region_type(), AddressSpaceRegionType::DefaultMemory);
assert_eq!(reg.start_addr(), GuestAddress(0x100000));
assert_eq!(reg.last_addr(), GuestAddress(0x1fffff));
assert_eq!(reg.len(), 0x100000);
assert!(reg.file_offset().is_some());
let reg = AddressSpaceRegion::create_default_memory_region(
GuestAddress(0x100000),
0x100000,
None,
"mmap",
"",
true,
false,
)
.unwrap();
assert_eq!(reg.region_type(), AddressSpaceRegionType::DefaultMemory);
assert_eq!(reg.start_addr(), GuestAddress(0x100000));
assert_eq!(reg.last_addr(), GuestAddress(0x1fffff));
assert_eq!(reg.len(), 0x100000);
assert!(reg.file_offset().is_none());
// TODO: test hugetlbfs
}
}
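The overlap test in `intersect_with` reduces to a comparison of half-open intervals. A minimal standalone sketch of the same rule (a hypothetical free function on raw `u64` addresses, not part of this patch), including the overflow case that is treated as always intersecting:

```rust
// Mirror of the intersect_with() logic: two regions [base, base + size)
// overlap unless one ends at or before the other begins. A region whose
// end overflows u64 is treated as always intersecting, matching the
// checked_add handling in the method above.
fn ranges_intersect(base1: u64, size1: u64, base2: u64, size2: u64) -> bool {
    let end1 = match base1.checked_add(size1) {
        Some(end) => end,
        None => return true,
    };
    let end2 = match base2.checked_add(size2) {
        Some(end) => end,
        None => return true,
    };
    !(end1 <= base2 || base1 >= end2)
}

fn main() {
    // Adjacent regions do not intersect; one extra byte makes them overlap.
    assert!(!ranges_intersect(0x1000, 0x1000, 0x2000, 0x1000));
    assert!(ranges_intersect(0x1000, 0x1001, 0x2000, 0x1000));
    // A region whose end overflows is reported as intersecting.
    assert!(ranges_intersect(u64::MAX - 0xFFF, 0x2000, 0x1000, 0x1000));
}
```

This is why `test_address_space_region_intersect` above expects `reg5` (which overflows past the top of the address space) to intersect everything.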


@@ -0,0 +1,14 @@
[package]
name = "dbs-allocator"
version = "0.1.1"
authors = ["Liu Jiang <gerry@linux.alibaba.com>"]
description = "a resource allocator for virtual machine manager"
license = "Apache-2.0"
edition = "2018"
homepage = "https://github.com/openanolis/dragonball-sandbox"
repository = "https://github.com/openanolis/dragonball-sandbox"
keywords = ["dragonball"]
readme = "README.md"
[dependencies]
thiserror = "1.0"


@@ -0,0 +1 @@
../../LICENSE


@@ -0,0 +1,106 @@
# dbs-allocator
## Design
The resource manager in the `Dragonball Sandbox` needs to manage and allocate different kinds of resources for the
sandbox (virtual machine), such as the memory-mapped I/O address space, port I/O address space, legacy IRQ numbers,
MSI/MSI-X vectors, device instance ids, etc. The `dbs-allocator` crate is designed to help the resource manager
track and allocate these types of resources.
Main components are:
- *Constraints*: struct to declare constraints for resource allocation.
```rust
#[derive(Copy, Clone, Debug)]
pub struct Constraint {
/// Size of resource to allocate.
pub size: u64,
/// Lower boundary for resource allocation.
pub min: u64,
/// Upper boundary for resource allocation.
pub max: u64,
/// Alignment for allocated resource.
pub align: u64,
/// Policy for resource allocation.
pub policy: AllocPolicy,
}
```
- `IntervalTree`: An interval tree implementation specialized for VMM resource management.
```rust
pub struct IntervalTree<T> {
pub(crate) root: Option<Node<T>>,
}
pub fn allocate(&mut self, constraint: &Constraint) -> Option<Range>
pub fn free(&mut self, key: &Range) -> Option<T>
pub fn insert(&mut self, key: Range, data: Option<T>) -> Self
pub fn update(&mut self, key: &Range, data: T) -> Option<T>
pub fn delete(&mut self, key: &Range) -> Option<T>
pub fn get(&self, key: &Range) -> Option<NodeState<&T>>
```
## Usage
The concept of an interval tree may seem complicated, but using dbs-allocator for resource allocation and release is simple and straightforward.
You can follow these steps to allocate your VMM resources.
```rust
// 1. To start with, create an interval tree for a specific kind of resource,
// with the maximum address/id range as the root node. The range could be an
// address range, an id range, etc.
let mut resources_pool = IntervalTree::new();
resources_pool.insert(Range::new(MIN_RANGE, MAX_RANGE), None);
// 2. Next, create a constraint with the size of your resource; you can also
// set the minimum, maximum and alignment. Then use the constraint to allocate
// a resource from the range decided above; the interval tree returns an
// appropriate range.
let mut constraint = Constraint::new(SIZE);
let mut resources_range = resources_pool.allocate(&constraint);
// 3. Finally, hand the resource range to other crates, such as vm-pci /
// vm-device, to create and maintain the device.
let mut device = Device::create(resources_range, ..)
```
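Release is symmetric; a sketch in the same style, continuing from the steps above (assuming the allocation returned `Some`):

```rust
// 4. When the resource is no longer needed, return the range to the pool so
// it can be allocated again; `free` removes the node and returns any data
// stored with it.
if let Some(range) = resources_range {
    resources_pool.free(&range);
}
```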
## Example
We will show examples of allocating an unused PCI device ID from the PCI device ID pool and of allocating a memory address range using dbs-allocator.
```rust
use dbs_allocator::{Constraint, IntervalTree, Range};
// Init a dbs-allocator IntervalTree
let mut pci_device_pool = IntervalTree::new();
// Init PCI device id pool with the range 0 to 255
pci_device_pool.insert(Range::new(0x0u8, 0xffu8), None);
// Construct a constraint with size 1 and alignment 1 to ask for an ID.
let mut constraint = Constraint::new(1u64).align(1u64);
// Get an ID from the pci_device_pool
let mut id = pci_device_pool.allocate(&constraint).map(|e| e.min as u8);
// Pass the ID allocated by dbs-allocator to vm-pci functions to create PCI devices
let mut pci_device = PciDevice::new(id.unwrap(), ..);
```
```rust
use dbs_allocator::{Constraint, IntervalTree, Range};
// Init a dbs-allocator IntervalTree
let mut mem_pool = IntervalTree::new();
// Init memory address from GUEST_MEM_START to GUEST_MEM_END
mem_pool.insert(Range::new(GUEST_MEM_START, GUEST_MEM_END), None);
// Construct a constraint from the size, minimum and maximum addresses of the memory region to ask for a memory allocation range.
let constraint = Constraint::new(region.len())
.min(region.start_addr().raw_value())
.max(region.last_addr().raw_value());
// Get the memory allocation range from the mem_pool
let mem_range = mem_pool.allocate(&constraint).unwrap();
// Update the mem_range in IntervalTree with memory region info
mem_pool.update(&mem_range, region);
// After allocation, we can use the memory range for mapping and other memory-related work.
...
```
## License
This project is licensed under [Apache License](http://www.apache.org/licenses/LICENSE-2.0), Version 2.0.

File diff suppressed because it is too large.


@@ -0,0 +1,164 @@
// Copyright (C) 2019, 2022 Alibaba Cloud. All rights reserved.
// SPDX-License-Identifier: Apache-2.0
//! Data structures and algorithms to support resource allocation and management.
//!
//! The `dbs-allocator` crate provides data structures and algorithms to manage and allocate
//! integer-identifiable resources. The resource manager in a virtual machine monitor (VMM) may
//! manage and allocate resources for virtual machines by using:
//! - [Constraint]: Struct to declare constraints for resource allocation.
//! - [IntervalTree]: An interval tree implementation specialized for VMM resource management.
#![deny(missing_docs)]
pub mod interval_tree;
pub use interval_tree::{IntervalTree, NodeState, Range};
/// Error codes for resource allocation operations.
#[derive(thiserror::Error, Debug, Eq, PartialEq)]
pub enum Error {
/// Invalid boundary for resource allocation.
#[error("invalid boundary constraint: min ({0}), max ({1})")]
InvalidBoundary(u64, u64),
}
/// Specialized version of [`std::result::Result`] for resource allocation operations.
pub type Result<T> = std::result::Result<T, Error>;
/// Resource allocation policies.
#[derive(Copy, Clone, Debug, Eq, PartialEq)]
pub enum AllocPolicy {
/// Default resource allocation policy.
Default,
/// Return the first available resource matching the allocation constraints.
FirstMatch,
}
/// Struct to declare resource allocation constraints.
#[derive(Copy, Clone, Debug)]
pub struct Constraint {
/// Size of resource to allocate.
pub size: u64,
/// Lower boundary for resource allocation.
pub min: u64,
/// Upper boundary for resource allocation.
pub max: u64,
/// Alignment for allocated resource.
pub align: u64,
/// Policy for resource allocation.
pub policy: AllocPolicy,
}
impl Constraint {
/// Create a new instance of [`Constraint`] with default settings.
pub fn new<T>(size: T) -> Self
where
u64: From<T>,
{
Constraint {
size: u64::from(size),
min: 0,
max: u64::MAX,
align: 1,
policy: AllocPolicy::Default,
}
}
/// Set the lower boundary constraint for resource allocation.
pub fn min<T>(mut self, min: T) -> Self
where
u64: From<T>,
{
self.min = u64::from(min);
self
}
/// Set the upper boundary constraint for resource allocation.
pub fn max<T>(mut self, max: T) -> Self
where
u64: From<T>,
{
self.max = u64::from(max);
self
}
/// Set the alignment constraint for allocated resource.
pub fn align<T>(mut self, align: T) -> Self
where
u64: From<T>,
{
self.align = u64::from(align);
self
}
/// Set the resource allocation policy.
pub fn policy(mut self, policy: AllocPolicy) -> Self {
self.policy = policy;
self
}
/// Validate the resource allocation constraints.
pub fn validate(&self) -> Result<()> {
if self.max < self.min {
return Err(Error::InvalidBoundary(self.min, self.max));
}
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_set_min() {
let constraint = Constraint::new(2_u64).min(1_u64);
assert_eq!(constraint.min, 1_u64);
}
#[test]
fn test_set_max() {
let constraint = Constraint::new(2_u64).max(100_u64);
assert_eq!(constraint.max, 100_u64);
}
#[test]
fn test_set_align() {
let constraint = Constraint::new(2_u64).align(8_u64);
assert_eq!(constraint.align, 8_u64);
}
#[test]
fn test_set_policy() {
let mut constraint = Constraint::new(2_u64).policy(AllocPolicy::FirstMatch);
assert_eq!(constraint.policy, AllocPolicy::FirstMatch);
constraint = constraint.policy(AllocPolicy::Default);
assert_eq!(constraint.policy, AllocPolicy::Default);
}
#[test]
fn test_consistently_change_constraint() {
let constraint = Constraint::new(2_u64)
.min(1_u64)
.max(100_u64)
.align(8_u64)
.policy(AllocPolicy::FirstMatch);
assert_eq!(constraint.min, 1_u64);
assert_eq!(constraint.max, 100_u64);
assert_eq!(constraint.align, 8_u64);
assert_eq!(constraint.policy, AllocPolicy::FirstMatch);
}
#[test]
fn test_set_invalid_boundary() {
// Normal case.
let constraint = Constraint::new(2_u64).max(1000_u64).min(999_u64);
assert!(constraint.validate().is_ok());
// Error case.
let constraint = Constraint::new(2_u64).max(999_u64).min(1000_u64);
assert_eq!(
constraint.validate(),
Err(Error::InvalidBoundary(1000u64, 999u64))
);
}
}


@@ -0,0 +1,26 @@
[package]
name = "dbs-arch"
version = "0.2.3"
authors = ["Alibaba Dragonball Team"]
license = "Apache-2.0 AND BSD-3-Clause"
edition = "2018"
description = "A collection of CPU architecture specific constants and utilities."
homepage = "https://github.com/openanolis/dragonball-sandbox"
repository = "https://github.com/openanolis/dragonball-sandbox"
keywords = ["dragonball", "secure-sandbox", "arch", "ARM64", "x86"]
readme = "README.md"
[dependencies]
memoffset = "0.6"
kvm-bindings = { version = "0.6.0", features = ["fam-wrappers"] }
kvm-ioctls = "0.12.0"
thiserror = "1"
vm-memory = { version = "0.9" }
vmm-sys-util = "0.11.0"
libc = ">=0.2.39"
[dev-dependencies]
vm-memory = { version = "0.9", features = ["backend-mmap"] }
[package.metadata.docs.rs]
all-features = true


@@ -0,0 +1 @@
../../LICENSE


@@ -0,0 +1,29 @@
# dbs-arch
## Design
The `dbs-arch` crate is a collection of CPU-architecture-specific constants and utilities that hide CPU architecture details away from the Dragonball Sandbox or other VMMs.
This crate also provides x86_64 CPUID support; for more details, see [this document](docs/x86_64_cpuid.md).
## Supported Architectures
- AMD64 (x86_64)
- ARM64 (aarch64)
## Submodule List
This repository contains the following submodules:
| Name | Arch | Description |
| --- | --- | --- |
| [x86_64::cpuid](src/x86_64/cpuid/) | x86_64 | Facilities to process CPUID information. |
| [x86_64::msr](src/x86_64/msr.rs) | x86_64 | Constants and functions for Model Specific Registers. |
| [aarch64::gic](src/aarch64/gic) | aarch64 | Structures to manage GICv2/GICv3/ITS devices for ARM64. |
| [aarch64::regs](src/aarch64/regs.rs) | aarch64 | Constants and functions to configure and manage CPU registers. |
## Acknowledgement
Part of the code is derived from the [Firecracker](https://github.com/firecracker-microvm/firecracker) project.
## License
This project is licensed under [Apache License](http://www.apache.org/licenses/LICENSE-2.0), Version 2.0.


@@ -0,0 +1 @@
../../THIRD-PARTY

Some files were not shown because too many files have changed in this diff.