Mirror of https://github.com/kata-containers/kata-containers.git (synced 2026-03-13 16:22:06 +00:00)

Compare commits

335 commits
| SHA1 |
|---|
| d424f3c595 |
| cf8899f260 |
| 542012c8be |
| 5979f3790b |
| 006ecce49a |
| 6ad16d4977 |
| 4e812009f5 |
| 29855ed0c6 |
| 025596b289 |
| e1a69c0c92 |
| 1a6b27bf6a |
| a536d4a7bf |
| ad6e53c399 |
| 7ffc0c1225 |
| 35d6d86ab5 |
| f764248095 |
| 2205fb9d05 |
| 11631c681a |
| 7923de8999 |
| e2c31fce23 |
| 2fc5f0e2e0 |
| c0171ea0a7 |
| 58f9a57c20 |
| 07694ef3ae |
| d8439dba89 |
| bda83cee5d |
| badff23c71 |
| 27c02367f9 |
| a0a524efc2 |
| f5e9985afe |
| f910c66d6f |
| 1a94aad44f |
| 2d13e2d71c |
| b77d69aeee |
| 743291c6c4 |
| a71d35c764 |
| 6328181762 |
| f74b7aba18 |
| 8933d54428 |
| 8a584589ff |
| 21f5b65233 |
| 69f05cf9e6 |
| 87d41b3dfa |
| ff8d7e7e41 |
| 1b111a9aab |
| 684a6e1a55 |
| 99711f107f |
| 7c857d38c1 |
| 28e171bf73 |
| 91e1e612c3 |
| cddcde1d40 |
| 7edc7172c0 |
| b3901c46d6 |
| 8a2c201719 |
| 5a1b5d3672 |
| ad413d1646 |
| 1512560111 |
| bee1a628bd |
| 62e328ca5c |
| 458e1bc712 |
| 1cc1c81c9a |
| 1a5f90dc3f |
| 51cd99c927 |
| 3b883bf5a7 |
| f9dec11a8f |
| 53af71cfd0 |
| a435d36fe1 |
| a79a3a8e1d |
| 3c32875046 |
| 08dfaa97aa |
| 63b8534b41 |
| e8f8641988 |
| 68b9acfd02 |
| f89abcbad8 |
| c9742d6fa9 |
| 731e7c763f |
| d74639d8c6 |
| 02cc4fe9db |
| 8353aae41a |
| 6ad5d7112e |
| 5261e3a60c |
| 9cc6b5f461 |
| 9d285c6226 |
| 87568ed985 |
| 39192c6084 |
| 0e157be6f2 |
| a274333248 |
| 69535b8089 |
| 9e1710674a |
| 61a8eabf8e |
| 6222bd9103 |
| 187a72d381 |
| 0c84270357 |
| 6520dfee37 |
| ff22790617 |
| a5d4e33880 |
| 5e937fa622 |
| b0bea47c53 |
| 73c57b9a19 |
| e941b3a094 |
| ba8a8fcbf2 |
| c8fcd29d9b |
| 901c192251 |
| 5d6199f9bc |
| 20f1f62a2a |
| ede1dae65d |
| 662f87539e |
| f28af98ac6 |
| 8a22b5f075 |
| 9792ac49fe |
| 24564a8499 |
| c5a87eed29 |
| 6daeb08e69 |
| 3aa6c77a01 |
| 37641a5430 |
| 314aec73d4 |
| 4703434b12 |
| 350f3f70b7 |
| d7f04a64a0 |
| bdde6aa948 |
| 91a0b3b406 |
| 3c1044d9d5 |
| 5385ddc560 |
| 6177a0db3e |
| a45900324d |
| ea198fddcc |
| 8f7ef41c14 |
| 6293c17bde |
| cdf04e5018 |
| 7a3b55ce67 |
| c1bd527163 |
| 6efd684a46 |
| 5b82268d2c |
| ff4cfcd8a2 |
| c8ac56569a |
| 81775ab1b3 |
| 717f775f30 |
| b9f100b391 |
| a56f96bb2b |
| 5ce0b4743f |
| b11d618a3f |
| 56fdeb1247 |
| 4a5ab38f16 |
| d4eba36980 |
| b7c9867d60 |
| 2e9853c761 |
| 7c4b597816 |
| 589672d510 |
| 6a680e241b |
| fb4f7a002c |
| 0ae987973b |
| 4a207a16f9 |
| 2c8f83424d |
| 1fc715bc65 |
| e1a4040a6c |
| 6a59e227b6 |
| e91f5edba0 |
| 8b8aef09af |
| 56767001cb |
| a84773652c |
| 99ba86a1b2 |
| 7f3b309997 |
| fde22d6bce |
| 9465a04963 |
| df8d144119 |
| f90570aef0 |
| c3637039f4 |
| bc4919f9b2 |
| f9e332c6db |
| cfd662fee9 |
| d36c3395c0 |
| b5be8a4a8f |
| f2e00c95c0 |
| 8979552527 |
| 1bbcbafa67 |
| f66c68a2bf |
| 4dd828414f |
| ad47d1b9f8 |
| 788c562a95 |
| 6742f3a898 |
| 5eacecffc3 |
| 8ed1595f96 |
| 6123d0db2c |
| 8653be71b2 |
| 6a76bf92cb |
| 72743851c1 |
| 9f6d4892c8 |
| 6f73a72839 |
| 3615d73433 |
| 34779491e0 |
| f3738beaca |
| b87ed27416 |
| 124e390333 |
| db77c9a438 |
| 13715db1f8 |
| e149a3c783 |
| 630634c5df |
| 228b30f31c |
| 81f99543ec |
| 38a7b5325f |
| a0fd41fd37 |
| ae6e8d2b38 |
| 309e232553 |
| f95a7896b1 |
| 14025baafe |
| b629f6a822 |
| 59fdd69b85 |
| 5dddd7c5d1 |
| bad3ac84b0 |
| 87d99a71ec |
| 545de5042a |
| 62aa6750ec |
| fe07ac662d |
| dd422ccb69 |
| 114542e2ba |
| 371a118ad0 |
| e64edf41e5 |
| 67a6fff4f7 |
| c3f21c36f3 |
| 01450deb6a |
| 8430068058 |
| bbd3c1b6ab |
| 7153b51578 |
| 8c662916ab |
| 5f7da301fd |
| fad801d0fb |
| 55e2f0955b |
| 556e663fce |
| 98c1217093 |
| 8e7d9926e4 |
| e2ee769783 |
| 2011e3d72a |
| 8e09e04f48 |
| 935432c36d |
| 2ee2cd307b |
| 88eaff5330 |
| c09e268a1b |
| 25d80fcec2 |
| 4687f2bf9d |
| 6a7a323656 |
| ac5f5353ba |
| 950b89ffac |
| 7729d82e6e |
| 26d525fcf3 |
| b4852c8544 |
| 8ccc1e5c93 |
| f50d2b0664 |
| 687596ae41 |
| 620b945975 |
| d50f3888af |
| ce14f26d82 |
| 419f8a5db7 |
| 6c91af0a26 |
| 5a9829996c |
| 59f4731bb2 |
| 468f017e21 |
| b9535fb187 |
| 7a854507cc |
| cfc90fad84 |
| 64f013f3bf |
| 8f4b1df9cf |
| 9b3dc572ae |
| 2c8dfde168 |
| b9b8ccca0c |
| 150e54d02b |
| 3ae02f9202 |
| 22d4e4c5a6 |
| a864d0e349 |
| 788d2a254e |
| e8917d7321 |
| 8db43eae44 |
| 3fed61e7a4 |
| b34dda4ca6 |
| 6787c63900 |
| 6e5679bc46 |
| 62080f83cb |
| 02d99caf6d |
| 9824206820 |
| 61e4032b08 |
| a24dbdc781 |
| dacdf7c282 |
| f5d1957174 |
| 304b9d9146 |
| eed3c7c046 |
| 7319cff77a |
| 2a957d41c8 |
| 75a294b74b |
| b69cdb5c21 |
| ee17097e88 |
| f63673838b |
| 6924d14df5 |
| 9e048c8ee0 |
| 2935aeb7d7 |
| 02031e29aa |
| 107fae033b |
| 8c75c2f4bd |
| 49723a9ecf |
| dc67d902eb |
| 3f38f75918 |
| 438fe3b829 |
| bd08d745f4 |
| 3ffd48bc16 |
| 7f961461bd |
| bb2ef4ca34 |
| 063f7aa7cb |
| b6282f7053 |
| 1af03b9b32 |
| 4cecd62370 |
| c4094f62c9 |
| b9a63d66a4 |
| 1ab99bd6bb |
| f6a51a8a78 |
| 4e352a73ee |
| 89b622dcb8 |
| 8c9d08e872 |
| 283f809dda |
| a65291ad72 |
| 46b81dd7d2 |
| c4771d9e89 |
| a88212e2c5 |
| 883b4db380 |
| 6822029c81 |
| ae55893deb |
| ce54e43ebe |
| ceb5c69ee8 |
| fbc2a91ab5 |
| 307cfc8f7a |
| aedc586e14 |
| 310e069f73 |
| ed23b47c71 |
| 2be342023b |
| 6ca34f949e |
| 6c68924230 |
| f72cb2fc12 |
| 07810bf71f |
`.github/workflows/PR-wip-checks.yaml` (vendored; 4 lines changed)

```diff
@@ -9,6 +9,10 @@ on:
       - labeled
       - unlabeled
 
+concurrency:
+  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
+  cancel-in-progress: true
+
 jobs:
   pr_wip_check:
     runs-on: ubuntu-latest
```
`.github/workflows/add-backport-label.yaml` (vendored; 4 lines changed)

```diff
@@ -10,6 +10,10 @@ on:
       - labeled
       - unlabeled
 
+concurrency:
+  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
+  cancel-in-progress: true
+
 jobs:
   check-issues:
     if: ${{ github.event.label.name != 'auto-backport' }}
```
`.github/workflows/add-issues-to-project.yaml` (vendored; 4 lines changed)

```diff
@@ -11,6 +11,10 @@ on:
       - opened
      - reopened
 
+concurrency:
+  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
+  cancel-in-progress: true
+
 jobs:
   add-new-issues-to-backlog:
     runs-on: ubuntu-latest
```
`.github/workflows/add-pr-sizing-label.yaml` (vendored; 4 lines changed)

```diff
@@ -12,6 +12,10 @@ on:
       - reopened
       - synchronize
 
+concurrency:
+  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
+  cancel-in-progress: true
+
 jobs:
   add-pr-size-label:
     runs-on: ubuntu-latest
```
`.github/workflows/auto-backport.yaml` (vendored; 4 lines changed)

```diff
@@ -2,6 +2,10 @@ on:
   pull_request_target:
     types: ["labeled", "closed"]
 
+concurrency:
+  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
+  cancel-in-progress: true
+
 jobs:
   backport:
     name: Backport PR
```

(file name not captured by the mirror)

```diff
@@ -99,7 +99,7 @@ jobs:
           path: kata-artifacts
       - name: merge-artifacts
         run: |
-          ./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts
+          ./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts versions.yaml
       - name: store-artifacts
         uses: actions/upload-artifact@v3
         with:
```
Workflow "CI | Build kata-static tarball for arm64" (file name not captured by the mirror)

```diff
@@ -2,6 +2,10 @@ name: CI | Build kata-static tarball for arm64
 on:
   workflow_call:
     inputs:
+      stage:
+        required: false
+        type: string
+        default: test
       tarball-suffix:
         required: false
         type: string
```

```diff
@@ -29,6 +33,8 @@ jobs:
           - rootfs-initrd
           - shim-v2
           - virtiofsd
+        stage:
+          - ${{ inputs.stage }}
     steps:
       - name: Adjust a permission for repo
         run: |
```

```diff
@@ -83,7 +89,7 @@ jobs:
           path: kata-artifacts
       - name: merge-artifacts
         run: |
-          ./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts
+          ./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts versions.yaml
       - name: store-artifacts
         uses: actions/upload-artifact@v3
         with:
```
Workflow "CI | Build kata-static tarball for s390x" (file name not captured by the mirror)

```diff
@@ -2,6 +2,10 @@ name: CI | Build kata-static tarball for s390x
 on:
   workflow_call:
     inputs:
+      stage:
+        required: false
+        type: string
+        default: test
       tarball-suffix:
         required: false
         type: string
```

```diff
@@ -25,6 +29,8 @@ jobs:
           - rootfs-initrd
           - shim-v2
           - virtiofsd
+        stage:
+          - ${{ inputs.stage }}
     steps:
       - name: Adjust a permission for repo
         run: |
```

```diff
@@ -80,7 +86,7 @@ jobs:
           path: kata-artifacts
       - name: merge-artifacts
         run: |
-          ./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts
+          ./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts versions.yaml
       - name: store-artifacts
         uses: actions/upload-artifact@v3
         with:
```
`.github/workflows/cargo-deny-runner.yaml` (vendored; 5 lines changed)

```diff
@@ -7,6 +7,11 @@ on:
       - reopened
       - synchronize
     paths-ignore: [ '**.md', '**.png', '**.jpg', '**.jpeg', '**.svg', '/docs/**' ]
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
+  cancel-in-progress: true
+
 jobs:
   cargo-deny-runner:
     runs-on: ubuntu-latest
```
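All of the diffs above add the same concurrency stanza, which derives one cancellation group per workflow and pull request, falling back to the git ref on non-PR events. As a rough illustration of how the group key expression `${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}` composes, here is a shell sketch (the `concurrency_group` helper is hypothetical, not part of any Kata tooling):

```shell
# Emulates the GitHub Actions expression
#   ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
# The `||` operator picks the ref only when there is no PR number.
concurrency_group() {
  local workflow="$1" pr_number="$2" ref="$3"
  if [ -n "$pr_number" ]; then
    echo "${workflow}-${pr_number}"
  else
    echo "${workflow}-${ref}"
  fi
}
```

With `cancel-in-progress: true`, a new run computing the same group key cancels the older in-flight run, so force-pushes to one PR no longer queue duplicate CI jobs.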
`.github/workflows/cc-payload-after-push-amd64.yaml` (vendored; deleted, 170 lines)

```yaml
name: CI | Publish CC runtime payload for amd64
on:
  workflow_call:
    inputs:
      target-arch:
        required: true
        type: string

jobs:
  build-asset:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        measured_rootfs:
          - no
        asset:
          - cc-cloud-hypervisor
          - cc-qemu
          - cc-virtiofsd
          - cc-sev-kernel
          - cc-sev-ovmf
          - cc-x86_64-ovmf
          - cc-snp-qemu
          - cc-sev-rootfs-initrd
          - cc-tdx-qemu
          - cc-tdx-td-shim
          - cc-tdx-tdvf
        include:
          - measured_rootfs: yes
            asset: cc-kernel
          - measured_rootfs: yes
            asset: cc-tdx-kernel
          - measured_rootfs: yes
            asset: cc-rootfs-image
          - measured_rootfs: yes
            asset: cc-tdx-rootfs-image
    steps:
      - name: Login to Kata Containers quay.io
        uses: docker/login-action@v2
        with:
          registry: quay.io
          username: ${{ secrets.QUAY_DEPLOYER_USERNAME }}
          password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}

      - uses: actions/checkout@v3
        with:
          fetch-depth: 0 # This is needed in order to keep the commit ids history
      - name: Build ${{ matrix.asset }}
        run: |
          make "${KATA_ASSET}-tarball"
          build_dir=$(readlink -f build)
          # store-artifact does not work with symlink
          sudo cp -r "${build_dir}" "kata-build"
        env:
          KATA_ASSET: ${{ matrix.asset }}
          TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
          PUSH_TO_REGISTRY: yes
          MEASURED_ROOTFS: ${{ matrix.measured_rootfs }}

      - name: store-artifact ${{ matrix.asset }}
        uses: actions/upload-artifact@v3
        with:
          name: kata-artifacts
          path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
          retention-days: 1
          if-no-files-found: error

      - name: store-artifact root_hash_tdx.txt
        uses: actions/upload-artifact@v3
        with:
          name: root_hash_tdx.txt
          path: tools/osbuilder/root_hash_tdx.txt
          retention-days: 1
          if-no-files-found: ignore

      - name: store-artifact root_hash_vanilla.txt
        uses: actions/upload-artifact@v3
        with:
          name: root_hash_vanilla.txt
          path: tools/osbuilder/root_hash_vanilla.txt
          retention-days: 1
          if-no-files-found: ignore

  build-asset-cc-shim-v2:
    runs-on: ubuntu-latest
    needs: build-asset
    steps:
      - name: Login to Kata Containers quay.io
        uses: docker/login-action@v2
        with:
          registry: quay.io
          username: ${{ secrets.QUAY_DEPLOYER_USERNAME }}
          password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}

      - uses: actions/checkout@v3

      - name: Get root_hash_tdx.txt
        uses: actions/download-artifact@v3
        with:
          name: root_hash_tdx.txt
          path: tools/osbuilder/

      - name: Get root_hash_vanilla.txt
        uses: actions/download-artifact@v3
        with:
          name: root_hash_vanilla.txt
          path: tools/osbuilder/

      - name: Build cc-shim-v2
        run: |
          make cc-shim-v2-tarball
          build_dir=$(readlink -f build)
          # store-artifact does not work with symlink
          sudo cp -r "${build_dir}" "kata-build"
        env:
          PUSH_TO_REGISTRY: yes
          MEASURED_ROOTFS: yes

      - name: store-artifact cc-shim-v2
        uses: actions/upload-artifact@v3
        with:
          name: kata-artifacts
          path: kata-build/kata-static-cc-shim-v2.tar.xz
          retention-days: 1
          if-no-files-found: error

  create-kata-tarball:
    runs-on: ubuntu-latest
    needs: [build-asset, build-asset-cc-shim-v2]
    steps:
      - uses: actions/checkout@v3
      - name: get-artifacts
        uses: actions/download-artifact@v3
        with:
          name: kata-artifacts
          path: kata-artifacts
      - name: merge-artifacts
        run: |
          ./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts
      - name: store-artifacts
        uses: actions/upload-artifact@v3
        with:
          name: kata-static-tarball
          path: kata-static.tar.xz
          retention-days: 1
          if-no-files-found: error

  kata-payload:
    needs: create-kata-tarball
    runs-on: ubuntu-latest
    steps:
      - name: Login to Confidential Containers quay.io
        uses: docker/login-action@v2
        with:
          registry: quay.io
          username: ${{ secrets.COCO_QUAY_DEPLOYER_USERNAME }}
          password: ${{ secrets.COCO_QUAY_DEPLOYER_PASSWORD }}

      - uses: actions/checkout@v3
      - name: get-kata-tarball
        uses: actions/download-artifact@v3
        with:
          name: kata-static-tarball

      - name: build-and-push-kata-payload
        id: build-and-push-kata-payload
        run: |
          ./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
            $(pwd)/kata-static.tar.xz "quay.io/confidential-containers/runtime-payload-ci" \
            "kata-containers-${{ inputs.target-arch }}"
```
`.github/workflows/cc-payload-after-push-s390x.yaml` (vendored; deleted, 171 lines)

```yaml
name: CI | Publish CC runtime payload for s390x
on:
  workflow_call:
    inputs:
      target-arch:
        required: true
        type: string

jobs:
  build-asset:
    runs-on: s390x
    strategy:
      matrix:
        measured_rootfs:
          - no
        asset:
          - cc-qemu
          - cc-rootfs-initrd
          - cc-se-image
          - cc-virtiofsd
        include:
          - measured_rootfs: yes
            asset: cc-kernel
          - measured_rootfs: yes
            asset: cc-rootfs-image
    steps:
      - name: Login to Kata Containers quay.io
        uses: docker/login-action@v2
        with:
          registry: quay.io
          username: ${{ secrets.QUAY_DEPLOYER_USERNAME }}
          password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}

      - name: Adjust a permission for repo
        run: |
          sudo chown -R $USER:$USER $GITHUB_WORKSPACE

      - uses: actions/checkout@v3
        with:
          fetch-depth: 0 # This is needed in order to keep the commit ids history

      - name: Place a host key document
        run: |
          mkdir -p "host-key-document"
          cp "${CI_HKD_PATH}" "host-key-document"
        env:
          CI_HKD_PATH: ${{ secrets.CI_HKD_PATH }}

      - name: Build ${{ matrix.asset }}
        run: |
          make "${KATA_ASSET}-tarball"
          build_dir=$(readlink -f build)
          # store-artifact does not work with symlink
          sudo cp -r "${build_dir}" "kata-build"
          sudo chown -R $(id -u):$(id -g) "kata-build"
        env:
          KATA_ASSET: ${{ matrix.asset }}
          TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
          PUSH_TO_REGISTRY: yes
          MEASURED_ROOTFS: ${{ matrix.measured_rootfs }}
          HKD_PATH: "host-key-document"

      - name: store-artifact ${{ matrix.asset }}
        uses: actions/upload-artifact@v3
        with:
          name: kata-artifacts-s390x
          path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
          retention-days: 1
          if-no-files-found: error

      - name: store-artifact root_hash_vanilla.txt
        uses: actions/upload-artifact@v3
        with:
          name: root_hash_vanilla.txt-s390x
          path: tools/osbuilder/root_hash_vanilla.txt
          retention-days: 1
          if-no-files-found: ignore

  build-asset-cc-shim-v2:
    runs-on: s390x
    needs: build-asset
    steps:
      - name: Login to Kata Containers quay.io
        uses: docker/login-action@v2
        with:
          registry: quay.io
          username: ${{ secrets.QUAY_DEPLOYER_USERNAME }}
          password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}

      - name: Adjust a permission for repo
        run: |
          sudo chown -R $USER:$USER $GITHUB_WORKSPACE

      - uses: actions/checkout@v3

      - name: Get root_hash_vanilla.txt
        uses: actions/download-artifact@v3
        with:
          name: root_hash_vanilla.txt-s390x
          path: tools/osbuilder/

      - name: Build cc-shim-v2
        run: |
          make cc-shim-v2-tarball
          build_dir=$(readlink -f build)
          # store-artifact does not work with symlink
          sudo cp -r "${build_dir}" "kata-build"
        env:
          PUSH_TO_REGISTRY: yes
          MEASURED_ROOTFS: yes

      - name: store-artifact cc-shim-v2
        uses: actions/upload-artifact@v3
        with:
          name: kata-artifacts-s390x
          path: kata-build/kata-static-cc-shim-v2.tar.xz
          retention-days: 1
          if-no-files-found: error

  create-kata-tarball:
    runs-on: s390x
    needs: [build-asset, build-asset-cc-shim-v2]
    steps:
      - name: Adjust a permission for repo
        run: |
          sudo chown -R $USER:$USER $GITHUB_WORKSPACE

      - uses: actions/checkout@v3
      - name: get-artifacts
        uses: actions/download-artifact@v3
        with:
          name: kata-artifacts-s390x
          path: kata-artifacts
      - name: merge-artifacts
        run: |
          ./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts
      - name: store-artifacts
        uses: actions/upload-artifact@v3
        with:
          name: kata-static-tarball-s390x
          path: kata-static.tar.xz
          retention-days: 1
          if-no-files-found: error

  kata-payload:
    needs: create-kata-tarball
    runs-on: s390x
    steps:
      - name: Login to Confidential Containers quay.io
        uses: docker/login-action@v2
        with:
          registry: quay.io
          username: ${{ secrets.COCO_QUAY_DEPLOYER_USERNAME }}
          password: ${{ secrets.COCO_QUAY_DEPLOYER_PASSWORD }}

      - name: Adjust a permission for repo
        run: |
          sudo chown -R $USER:$USER $GITHUB_WORKSPACE

      - uses: actions/checkout@v3
      - name: get-kata-tarball
        uses: actions/download-artifact@v3
        with:
          name: kata-static-tarball-s390x

      - name: build-and-push-kata-payload
        id: build-and-push-kata-payload
        run: |
          ./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
            $(pwd)/kata-static.tar.xz "quay.io/confidential-containers/runtime-payload-ci" \
            "kata-containers-${{ inputs.target-arch }}"
```
`.github/workflows/cc-payload-after-push.yaml` (vendored; deleted, 47 lines)

```yaml
name: CI | Publish Kata Containers payload for Confidential Containers
on:
  push:
    branches:
      - CCv0
  workflow_dispatch:

jobs:
  build-assets-amd64:
    uses: ./.github/workflows/cc-payload-after-push-amd64.yaml
    with:
      target-arch: amd64
    secrets: inherit

  build-assets-s390x:
    uses: ./.github/workflows/cc-payload-after-push-s390x.yaml
    with:
      target-arch: s390x
    secrets: inherit

  publish:
    runs-on: ubuntu-latest
    needs: [build-assets-amd64, build-assets-s390x]
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Login to Confidential Containers quay.io
        uses: docker/login-action@v2
        with:
          registry: quay.io
          username: ${{ secrets.COCO_QUAY_DEPLOYER_USERNAME }}
          password: ${{ secrets.COCO_QUAY_DEPLOYER_PASSWORD }}

      - name: Push commit multi-arch manifest
        run: |
          docker manifest create quay.io/confidential-containers/runtime-payload-ci:kata-containers-${GITHUB_SHA} \
            --amend quay.io/confidential-containers/runtime-payload-ci:kata-containers-${GITHUB_SHA}-amd64 \
            --amend quay.io/confidential-containers/runtime-payload-ci:kata-containers-${GITHUB_SHA}-s390x
          docker manifest push quay.io/confidential-containers/runtime-payload-ci:kata-containers-${GITHUB_SHA}

      - name: Push latest multi-arch manifest
        run: |
          docker manifest create quay.io/confidential-containers/runtime-payload-ci:kata-containers-latest \
            --amend quay.io/confidential-containers/runtime-payload-ci:kata-containers-amd64 \
            --amend quay.io/confidential-containers/runtime-payload-ci:kata-containers-s390x
          docker manifest push quay.io/confidential-containers/runtime-payload-ci:kata-containers-latest
```
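The deleted workflow above stitches the per-architecture images into a single multi-arch manifest keyed by the pushed commit. As a sketch of the tag layout it relies on (the `payload_tag` helper is invented for illustration; the repository and tag format are taken from the workflow itself):

```shell
# Builds the image tags used by the deleted cc-payload-after-push.yaml:
#   <repo>:kata-containers-<sha>[-<arch>]
# An empty arch yields the name of the combined multi-arch manifest.
payload_tag() {
  local sha="$1" arch="$2"
  local repo="quay.io/confidential-containers/runtime-payload-ci"
  if [ -n "$arch" ]; then
    echo "${repo}:kata-containers-${sha}-${arch}"
  else
    echo "${repo}:kata-containers-${sha}"
  fi
}
```

The publish job then runs `docker manifest create` over the amd64 and s390x tags and pushes the combined manifest under the arch-less name.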
`.github/workflows/cc-payload-amd64.yaml` (vendored; deleted, 154 lines)

```yaml
name: Publish Kata Containers payload for Confidential Containers (amd64)
on:
  workflow_call:
    inputs:
      target-arch:
        required: true
        type: string

jobs:
  build-asset:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        measured_rootfs:
          - no
        asset:
          - cc-cloud-hypervisor
          - cc-qemu
          - cc-virtiofsd
          - cc-sev-kernel
          - cc-sev-ovmf
          - cc-x86_64-ovmf
          - cc-snp-qemu
          - cc-sev-rootfs-initrd
          - cc-tdx-qemu
          - cc-tdx-td-shim
          - cc-tdx-tdvf
        include:
          - measured_rootfs: yes
            asset: cc-kernel
          - measured_rootfs: yes
            asset: cc-tdx-kernel
          - measured_rootfs: yes
            asset: cc-rootfs-image
          - measured_rootfs: yes
            asset: cc-tdx-rootfs-image
    steps:
      - uses: actions/checkout@v3
      - name: Build ${{ matrix.asset }}
        run: |
          make "${KATA_ASSET}-tarball"
          build_dir=$(readlink -f build)
          # store-artifact does not work with symlink
          sudo cp -r "${build_dir}" "kata-build"
        env:
          KATA_ASSET: ${{ matrix.asset }}
          TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
          MEASURED_ROOTFS: ${{ matrix.measured_rootfs }}

      - name: store-artifact ${{ matrix.asset }}
        uses: actions/upload-artifact@v3
        with:
          name: kata-artifacts
          path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
          retention-days: 1
          if-no-files-found: error

      - name: store-artifact root_hash_tdx.txt
        uses: actions/upload-artifact@v3
        with:
          name: root_hash_tdx.txt
          path: tools/osbuilder/root_hash_tdx.txt
          retention-days: 1
          if-no-files-found: ignore

      - name: store-artifact root_hash_vanilla.txt
        uses: actions/upload-artifact@v3
        with:
          name: root_hash_vanilla.txt
          path: tools/osbuilder/root_hash_vanilla.txt
          retention-days: 1
          if-no-files-found: ignore

  build-asset-cc-shim-v2:
    runs-on: ubuntu-latest
    needs: build-asset
    steps:
      - uses: actions/checkout@v3

      - name: Get root_hash_tdx.txt
        uses: actions/download-artifact@v3
        with:
          name: root_hash_tdx.txt
          path: tools/osbuilder/

      - name: Get root_hash_vanilla.txt
        uses: actions/download-artifact@v3
        with:
          name: root_hash_vanilla.txt
          path: tools/osbuilder/

      - name: Build cc-shim-v2
        run: |
          make cc-shim-v2-tarball
          build_dir=$(readlink -f build)
          # store-artifact does not work with symlink
          sudo cp -r "${build_dir}" "kata-build"
        env:
          MEASURED_ROOTFS: yes

      - name: store-artifact cc-shim-v2
        uses: actions/upload-artifact@v3
        with:
          name: kata-artifacts
          path: kata-build/kata-static-cc-shim-v2.tar.xz
          retention-days: 1
          if-no-files-found: error

  create-kata-tarball:
    runs-on: ubuntu-latest
    needs: [build-asset, build-asset-cc-shim-v2]
    steps:
      - uses: actions/checkout@v3
      - name: get-artifacts
        uses: actions/download-artifact@v3
        with:
          name: kata-artifacts
          path: kata-artifacts
      - name: merge-artifacts
        run: |
          ./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts
      - name: store-artifacts
        uses: actions/upload-artifact@v3
        with:
          name: kata-static-tarball
          path: kata-static.tar.xz
          retention-days: 1
          if-no-files-found: error

  kata-payload:
    needs: create-kata-tarball
    runs-on: ubuntu-latest
    steps:
      - name: Login to quay.io
        uses: docker/login-action@v2
        with:
          registry: quay.io
          username: ${{ secrets.COCO_QUAY_DEPLOYER_USERNAME }}
          password: ${{ secrets.COCO_QUAY_DEPLOYER_PASSWORD }}

      - uses: actions/checkout@v3
      - name: get-kata-tarball
        uses: actions/download-artifact@v3
        with:
          name: kata-static-tarball

      - name: build-and-push-kata-payload
        id: build-and-push-kata-payload
        run: |
          ./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
            $(pwd)/kata-static.tar.xz \
            "quay.io/confidential-containers/runtime-payload" \
            "kata-containers-${{ inputs.target-arch }}"
```
`.github/workflows/cc-payload-s390x.yaml` (vendored; deleted, 142 lines)

```yaml
name: Publish Kata Containers payload for Confidential Containers (s390x)
on:
  workflow_call:
    inputs:
      target-arch:
        required: true
        type: string

jobs:
  build-asset:
    runs-on: s390x
    strategy:
      matrix:
        measured_rootfs:
          - no
        asset:
          - cc-qemu
          - cc-virtiofsd
        include:
          - measured_rootfs: yes
            asset: cc-kernel
          - measured_rootfs: yes
            asset: cc-rootfs-image
    steps:
      - name: Adjust a permission for repo
        run: |
          sudo chown -R $USER:$USER $GITHUB_WORKSPACE

      - uses: actions/checkout@v3
      - name: Build ${{ matrix.asset }}
        run: |
          make "${KATA_ASSET}-tarball"
          build_dir=$(readlink -f build)
          # store-artifact does not work with symlink
          sudo cp -r "${build_dir}" "kata-build"
        env:
          KATA_ASSET: ${{ matrix.asset }}
          TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
          MEASURED_ROOTFS: ${{ matrix.measured_rootfs }}

      - name: store-artifact ${{ matrix.asset }}
        uses: actions/upload-artifact@v3
        with:
          name: kata-artifacts-s390x
          path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
          retention-days: 1
          if-no-files-found: error

      - name: store-artifact root_hash_vanilla.txt
        uses: actions/upload-artifact@v3
        with:
          name: root_hash_vanilla.txt-s390x
          path: tools/osbuilder/root_hash_vanilla.txt
          retention-days: 1
          if-no-files-found: ignore

  build-asset-cc-shim-v2:
    runs-on: s390x
    needs: build-asset
    steps:
      - name: Adjust a permission for repo
        run: |
          sudo chown -R $USER:$USER $GITHUB_WORKSPACE

      - uses: actions/checkout@v3

      - name: Get root_hash_vanilla.txt
        uses: actions/download-artifact@v3
        with:
          name: root_hash_vanilla.txt-s390x
          path: tools/osbuilder/

      - name: Build cc-shim-v2
        run: |
          make cc-shim-v2-tarball
          build_dir=$(readlink -f build)
          # store-artifact does not work with symlink
          sudo cp -r "${build_dir}" "kata-build"
        env:
          MEASURED_ROOTFS: yes

      - name: store-artifact cc-shim-v2
        uses: actions/upload-artifact@v3
        with:
          name: kata-artifacts-s390x
          path: kata-build/kata-static-cc-shim-v2.tar.xz
          retention-days: 1
          if-no-files-found: error

  create-kata-tarball:
    runs-on: s390x
    needs: [build-asset, build-asset-cc-shim-v2]
    steps:
      - name: Adjust a permission for repo
        run: |
          sudo chown -R $USER:$USER $GITHUB_WORKSPACE

      - uses: actions/checkout@v3
      - name: get-artifacts
        uses: actions/download-artifact@v3
        with:
          name: kata-artifacts-s390x
          path: kata-artifacts
      - name: merge-artifacts
        run: |
          ./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts
      - name: store-artifacts
        uses: actions/upload-artifact@v3
        with:
          name: kata-static-tarball-s390x
          path: kata-static.tar.xz
          retention-days: 1
          if-no-files-found: error

  kata-payload:
    needs: create-kata-tarball
    runs-on: s390x
    steps:
      - name: Login to quay.io
        uses: docker/login-action@v2
        with:
          registry: quay.io
          username: ${{ secrets.COCO_QUAY_DEPLOYER_USERNAME }}
          password: ${{ secrets.COCO_QUAY_DEPLOYER_PASSWORD }}

      - name: Adjust a permission for repo
        run: |
          sudo chown -R $USER:$USER $GITHUB_WORKSPACE

      - uses: actions/checkout@v3
      - name: get-kata-tarball
        uses: actions/download-artifact@v3
        with:
          name: kata-static-tarball-s390x

      - name: build-and-push-kata-payload
        id: build-and-push-kata-payload
        run: |
          ./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
            $(pwd)/kata-static.tar.xz \
            "quay.io/confidential-containers/runtime-payload" \
            "kata-containers-${{ inputs.target-arch }}"
```
46 .github/workflows/cc-payload.yaml vendored
@@ -1,46 +0,0 @@
name: Publish Kata Containers payload for Confidential Containers
on:
  push:
    tags:
      - 'CC\-[0-9]+.[0-9]+.[0-9]+'

jobs:
  build-assets-amd64:
    uses: ./.github/workflows/cc-payload-amd64.yaml
    with:
      target-arch: amd64
    secrets: inherit

  build-assets-s390x:
    uses: ./.github/workflows/cc-payload-s390x.yaml
    with:
      target-arch: s390x
    secrets: inherit

  publish:
    runs-on: ubuntu-latest
    needs: [build-assets-amd64, build-assets-s390x]
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Login to Confidential Containers quay.io
        uses: docker/login-action@v2
        with:
          registry: quay.io
          username: ${{ secrets.COCO_QUAY_DEPLOYER_USERNAME }}
          password: ${{ secrets.COCO_QUAY_DEPLOYER_PASSWORD }}

      - name: Push commit multi-arch manifest
        run: |
          docker manifest create quay.io/confidential-containers/runtime-payload:kata-containers-${GITHUB_SHA} \
            --amend quay.io/confidential-containers/runtime-payload:kata-containers-${GITHUB_SHA}-amd64 \
            --amend quay.io/confidential-containers/runtime-payload:kata-containers-${GITHUB_SHA}-s390x
          docker manifest push quay.io/confidential-containers/runtime-payload:kata-containers-${GITHUB_SHA}

      - name: Push latest multi-arch manifest
        run: |
          docker manifest create quay.io/confidential-containers/runtime-payload:kata-containers-latest \
            --amend quay.io/confidential-containers/runtime-payload:kata-containers-amd64 \
            --amend quay.io/confidential-containers/runtime-payload:kata-containers-s390x
          docker manifest push quay.io/confidential-containers/runtime-payload:kata-containers-latest
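The deleted workflow above pushes one image per architecture and then stitches them into a multi-arch tag with `docker manifest`. A minimal sketch of the per-commit tag convention it relies on; `build_manifest_cmd` is a hypothetical helper that only prints the commands (note the `latest` step above differs slightly, amending un-suffixed `kata-containers-<arch>` tags):

```shell
#!/bin/sh
# Hypothetical helper: print (not run) the manifest commands for one
# per-commit tag, following the "<tag>-<arch>" suffix convention used
# by the "Push commit multi-arch manifest" step above.
IMAGE="quay.io/confidential-containers/runtime-payload"

build_manifest_cmd() {
    tag="$1"   # e.g. "kata-containers-${GITHUB_SHA}"
    echo "docker manifest create ${IMAGE}:${tag} \\"
    echo "  --amend ${IMAGE}:${tag}-amd64 \\"
    echo "  --amend ${IMAGE}:${tag}-s390x"
    echo "docker manifest push ${IMAGE}:${tag}"
}

build_manifest_cmd "kata-containers-0123abc"
```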
4 .github/workflows/ci-nightly.yaml vendored
@@ -4,6 +4,10 @@ on:
    - cron: '0 0 * * *'
  workflow_dispatch:

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  kata-containers-ci-on-push:
    uses: ./.github/workflows/ci.yaml
6 .github/workflows/ci-on-push.yaml vendored
@@ -3,6 +3,7 @@ on:
  pull_request_target:
    branches:
      - 'main'
      - 'stable-*'
    types:
      # Adding 'labeled' to the list of activity types that trigger this event
      # (default: opened, synchronize, reopened) so that we can run this
@@ -14,6 +15,11 @@ on:
      - labeled
    paths-ignore:
      - 'docs/**'

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  kata-containers-ci-on-push:
    if: ${{ contains(github.event.pull_request.labels.*.name, 'ok-to-test') }}
21 .github/workflows/ci.yaml vendored
@@ -74,3 +74,24 @@ jobs:
    with:
      tarball-suffix: -${{ inputs.tag }}
      commit-hash: ${{ inputs.commit-hash }}

  run-cri-containerd-tests:
    needs: build-kata-static-tarball-amd64
    uses: ./.github/workflows/run-cri-containerd-tests.yaml
    with:
      tarball-suffix: -${{ inputs.tag }}
      commit-hash: ${{ inputs.commit-hash }}

  run-nydus-tests:
    needs: build-kata-static-tarball-amd64
    uses: ./.github/workflows/run-nydus-tests.yaml
    with:
      tarball-suffix: -${{ inputs.tag }}
      commit-hash: ${{ inputs.commit-hash }}

  run-vfio-tests:
    needs: build-kata-static-tarball-amd64
    uses: ./.github/workflows/run-vfio-tests.yaml
    with:
      tarball-suffix: -${{ inputs.tag }}
      commit-hash: ${{ inputs.commit-hash }}
8 .github/workflows/commit-message-check.yaml vendored
@@ -6,6 +6,10 @@ on:
  - reopened
  - synchronize

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

env:
  error_msg: |+
    See the document below for help on formatting commits for the project.
@@ -47,7 +51,7 @@ jobs:
        uses: tim-actions/commit-message-checker-with-regex@v0.3.1
        with:
          commits: ${{ steps.get-pr-commits.outputs.commits }}
          pattern: '^.{0,75}(\n.*)*$|^Merge pull request (?:kata-containers)?#[\d]+ from.*'
          pattern: '^.{0,75}(\n.*)*$'
          error: 'Subject too long (max 75)'
          post_error: ${{ env.error_msg }}

@@ -98,6 +102,6 @@ jobs:
        uses: tim-actions/commit-message-checker-with-regex@v0.3.1
        with:
          commits: ${{ steps.get-pr-commits.outputs.commits }}
          pattern: '^[\s\t]*[^:\s\t]+[\s\t]*:|^Merge pull request (?:kata-containers)?#[\d]+ from.*'
          pattern: '^[\s\t]*[^:\s\t]+[\s\t]*:'
          error: 'Failed to find subsystem in subject'
          post_error: ${{ env.error_msg }}
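The two checks in the hunks above enforce a subject length limit and a `subsystem:` prefix via regexes fed to the checker action. A rough local equivalent in plain shell (a sketch only; the `check_subject` helper is hypothetical and its prefix test is looser than the real `^[\s\t]*[^:\s\t]+[\s\t]*:` pattern):

```shell
#!/bin/sh
# Rough local stand-in for the two commit-subject checks above:
# 1) subject must be at most 75 characters;
# 2) subject must carry a "subsystem:" prefix (any colon accepted here).
check_subject() {
    subject="$1"
    if [ "${#subject}" -gt 75 ]; then
        echo "FAIL: subject too long (max 75)"
        return 1
    fi
    case "$subject" in
        *:*) ;;  # has a "subsystem:" prefix
        *) echo "FAIL: no subsystem in subject"; return 1 ;;
    esac
    echo "PASS"
}

check_subject "runtime: fix memory hotplug on arm64"
```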
5 .github/workflows/darwin-tests.yaml vendored
@@ -6,6 +6,11 @@ on:
  - reopened
  - synchronize
  paths-ignore: [ '**.md', '**.png', '**.jpg', '**.jpeg', '**.svg', '/docs/**' ]

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

name: Darwin tests
jobs:
  test:
124 .github/workflows/deploy-ccv0-demo.yaml vendored
@@ -1,124 +0,0 @@
on:
  issue_comment:
    types: [created, edited]

name: deploy-ccv0-demo

jobs:
  check-comment-and-membership:
    runs-on: ubuntu-latest
    if: |
      github.event.issue.pull_request
      && github.event_name == 'issue_comment'
      && github.event.action == 'created'
      && startsWith(github.event.comment.body, '/deploy-ccv0-demo')
    steps:
      - name: Check membership
        uses: kata-containers/is-organization-member@1.0.1
        id: is_organization_member
        with:
          organization: kata-containers
          username: ${{ github.event.comment.user.login }}
          token: ${{ secrets.GITHUB_TOKEN }}
      - name: Fail if not member
        run: |
          result=${{ steps.is_organization_member.outputs.result }}
          if [ $result == false ]; then
            user=${{ github.event.comment.user.login }}
            echo Either ${user} is not part of the kata-containers organization
            echo or ${user} has its Organization Visibility set to Private at
            echo https://github.com/orgs/kata-containers/people?query=${user}
            echo
            echo Ensure you change your Organization Visibility to Public and
            echo trigger the test again.
            exit 1
          fi

  build-asset:
    runs-on: ubuntu-latest
    needs: check-comment-and-membership
    strategy:
      matrix:
        asset:
          - cloud-hypervisor
          - firecracker
          - kernel
          - qemu
          - rootfs-image
          - rootfs-initrd
          - shim-v2
    steps:
      - uses: actions/checkout@v2
      - name: Prepare confidential container rootfs
        if: ${{ matrix.asset == 'rootfs-initrd' }}
        run: |
          pushd include_rootfs/etc
          curl -LO https://raw.githubusercontent.com/confidential-containers/documentation/main/demos/ssh-demo/aa-offline_fs_kbc-keys.json
          mkdir kata-containers
          envsubst < docs/how-to/data/confidential-agent-config.toml.in > kata-containers/agent.toml
          popd
        env:
          AA_KBC_PARAMS: offline_fs_kbc::null

      - name: Build ${{ matrix.asset }}
        run: |
          make "${KATA_ASSET}-tarball"
          build_dir=$(readlink -f build)
          # store-artifact does not work with symlink
          sudo cp -r "${build_dir}" "kata-build"
        env:
          AA_KBC: offline_fs_kbc
          INCLUDE_ROOTFS: include_rootfs
          KATA_ASSET: ${{ matrix.asset }}
          TAR_OUTPUT: ${{ matrix.asset }}.tar.gz

      - name: store-artifact ${{ matrix.asset }}
        uses: actions/upload-artifact@v2
        with:
          name: kata-artifacts
          path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
          if-no-files-found: error

  create-kata-tarball:
    runs-on: ubuntu-latest
    needs: build-asset
    steps:
      - uses: actions/checkout@v2
      - name: get-artifacts
        uses: actions/download-artifact@v2
        with:
          name: kata-artifacts
          path: kata-artifacts
      - name: merge-artifacts
        run: |
          ./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts
      - name: store-artifacts
        uses: actions/upload-artifact@v2
        with:
          name: kata-static-tarball
          path: kata-static.tar.xz

  kata-deploy:
    needs: create-kata-tarball
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: get-kata-tarball
        uses: actions/download-artifact@v2
        with:
          name: kata-static-tarball
      - name: build-and-push-kata-deploy-ci
        id: build-and-push-kata-deploy-ci
        run: |
          tag=$(echo $GITHUB_REF | cut -d/ -f3-)
          pushd $GITHUB_WORKSPACE
          git checkout $tag
          pkg_sha=$(git rev-parse HEAD)
          popd
          mv kata-static.tar.xz $GITHUB_WORKSPACE/tools/packaging/kata-deploy/kata-static.tar.xz
          docker build --build-arg KATA_ARTIFACTS=kata-static.tar.xz -t quay.io/confidential-containers/runtime-payload:$pkg_sha $GITHUB_WORKSPACE/tools/packaging/kata-deploy
          docker login -u ${{ secrets.QUAY_DEPLOYER_USERNAME }} -p ${{ secrets.QUAY_DEPLOYER_PASSWORD }} quay.io
          docker push quay.io/confidential-containers/runtime-payload:$pkg_sha
          mkdir -p packaging/kata-deploy
          ln -s $GITHUB_WORKSPACE/tools/packaging/kata-deploy/action packaging/kata-deploy/action
          echo "::set-output name=PKG_SHA::${pkg_sha}"
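The deleted job above publishes its package SHA with the `::set-output` workflow command, which GitHub has since deprecated in favor of appending `NAME=value` lines to the file named by `$GITHUB_OUTPUT`. A small sketch of the replacement form, simulated here with a temp file standing in for the runner-provided one:

```shell
#!/bin/sh
# Simulate $GITHUB_OUTPUT with a temp file; on a real runner the
# variable is already set by GitHub Actions.
GITHUB_OUTPUT="$(mktemp)"
pkg_sha="0123abc"

# Old, deprecated form (as used in the deleted job above):
#   echo "::set-output name=PKG_SHA::${pkg_sha}"
# Current form:
echo "PKG_SHA=${pkg_sha}" >> "$GITHUB_OUTPUT"

cat "$GITHUB_OUTPUT"
```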
36 .github/workflows/kata-runtime-classes-sync.yaml vendored Normal file
@@ -0,0 +1,36 @@
on:
  pull_request:
    types:
      - opened
      - edited
      - reopened
      - synchronize

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  kata-deploy-runtime-classes-check:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Ensure the split out runtime classes match the all-in-one file
        run: |
          pushd tools/packaging/kata-deploy/runtimeclasses/
          echo "::group::Combine runtime classes"
          for runtimeClass in `find . -type f \( -name "*.yaml" -and -not -name "kata-runtimeClasses.yaml" \) | sort`; do
            echo "Adding ${runtimeClass} to the resultingRuntimeClasses.yaml"
            cat ${runtimeClass} >> resultingRuntimeClasses.yaml;
          done
          echo "::endgroup::"
          echo "::group::Displaying the content of resultingRuntimeClasses.yaml"
          cat resultingRuntimeClasses.yaml
          echo "::endgroup::"
          echo ""
          echo "::group::Displaying the content of kata-runtimeClasses.yaml"
          cat kata-runtimeClasses.yaml
          echo "::endgroup::"
          echo ""
          diff resultingRuntimeClasses.yaml kata-runtimeClasses.yaml
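The check in the new workflow above concatenates the per-runtime-class files in sorted order and diffs the result against the all-in-one `kata-runtimeClasses.yaml`. A self-contained sketch of the same pattern, run against a scratch directory rather than the real `tools/packaging/kata-deploy/runtimeclasses/` tree:

```shell
#!/bin/sh
# Minimal sketch of the consistency check above: the all-in-one
# kata-runtimeClasses.yaml must equal the sorted concatenation of the
# per-runtime-class files.
set -e
dir="$(mktemp -d)"
printf 'handler: kata-clh\n'  > "$dir/kata-clh.yaml"
printf 'handler: kata-qemu\n' > "$dir/kata-qemu.yaml"
cat "$dir/kata-clh.yaml" "$dir/kata-qemu.yaml" > "$dir/kata-runtimeClasses.yaml"

result="$dir/resultingRuntimeClasses.yaml"
for f in $(find "$dir" -type f -name '*.yaml' ! -name 'kata-runtimeClasses.yaml' | sort); do
    cat "$f" >> "$result"
done
diff "$result" "$dir/kata-runtimeClasses.yaml" && echo "in sync"
```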
4 .github/workflows/payload-after-push.yaml vendored
@@ -5,6 +5,10 @@ on:
    - main
    - stable-*

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  build-assets-amd64:
    uses: ./.github/workflows/build-kata-static-tarball-amd64.yaml
19 .github/workflows/release.yaml vendored
@@ -4,6 +4,10 @@ on:
  tags:
    - '[0-9]+.[0-9]+.[0-9]+*'

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  build-and-push-assets-amd64:
    uses: ./.github/workflows/release-amd64.yaml
@@ -117,6 +121,21 @@ jobs:
          GITHUB_TOKEN=${{ secrets.GIT_UPLOAD_TOKEN }} hub release edit -m "" -a "${tarball}" "${tag}"
          popd

  upload-versions-yaml:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: upload versions.yaml
        env:
          GITHUB_TOKEN: ${{ secrets.GIT_UPLOAD_TOKEN }}
        run: |
          tag=$(echo $GITHUB_REF | cut -d/ -f3-)
          pushd $GITHUB_WORKSPACE
          versions_file="kata-containers-$tag-versions.yaml"
          cp versions.yaml ${versions_file}
          hub release edit -m "" -a "${versions_file}" "${tag}"
          popd

  upload-cargo-vendored-tarball:
    needs: upload-multi-arch-static-tarball
    runs-on: ubuntu-latest
@@ -15,6 +15,10 @@ on:
  branches:
    - main

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:
  check-pr-porting-labels:
    runs-on: ubuntu-latest
42 .github/workflows/run-cri-containerd-tests.yaml vendored Normal file
@@ -0,0 +1,42 @@
name: CI | Run cri-containerd tests
on:
  workflow_call:
    inputs:
      tarball-suffix:
        required: false
        type: string
      commit-hash:
        required: false
        type: string

jobs:
  run-cri-containerd:
    strategy:
      fail-fast: true
      matrix:
        containerd_version: ['lts', 'active']
        vmm: ['clh', 'qemu']
    runs-on: garm-ubuntu-2204
    env:
      CONTAINERD_VERSION: ${{ matrix.containerd_version }}
      GOPATH: ${{ github.workspace }}
      KATA_HYPERVISOR: ${{ matrix.vmm }}
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{ inputs.commit-hash }}

      - name: Install dependencies
        run: bash tests/integration/cri-containerd/gha-run.sh install-dependencies

      - name: get-kata-tarball
        uses: actions/download-artifact@v3
        with:
          name: kata-static-tarball-amd64${{ inputs.tarball-suffix }}
          path: kata-artifacts

      - name: Install kata
        run: bash tests/integration/cri-containerd/gha-run.sh install-kata kata-artifacts

      - name: Run cri-containerd tests
        run: bash tests/integration/cri-containerd/gha-run.sh run
22 .github/workflows/run-k8s-tests-on-aks.yaml vendored
@@ -40,37 +40,43 @@ jobs:
      GH_PR_NUMBER: ${{ inputs.pr-number }}
      KATA_HOST_OS: ${{ matrix.host_os }}
      KATA_HYPERVISOR: ${{ matrix.vmm }}
      USING_NFD: "false"
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{ inputs.commit-hash }}

      - name: Download Azure CLI
        run: bash tests/integration/gha-run.sh install-azure-cli
        run: bash tests/integration/kubernetes/gha-run.sh install-azure-cli

      - name: Log into the Azure account
        run: bash tests/integration/gha-run.sh login-azure
        run: bash tests/integration/kubernetes/gha-run.sh login-azure
        env:
          AZ_APPID: ${{ secrets.AZ_APPID }}
          AZ_PASSWORD: ${{ secrets.AZ_PASSWORD }}
          AZ_TENANT_ID: ${{ secrets.AZ_TENANT_ID }}

      - name: Create AKS cluster
        run: bash tests/integration/gha-run.sh create-cluster
        timeout-minutes: 10
        run: bash tests/integration/kubernetes/gha-run.sh create-cluster

      - name: Install `bats`
        run: bash tests/integration/gha-run.sh install-bats
        run: bash tests/integration/kubernetes/gha-run.sh install-bats

      - name: Install `kubectl`
        run: bash tests/integration/gha-run.sh install-kubectl
        run: bash tests/integration/kubernetes/gha-run.sh install-kubectl

      - name: Download credentials for the Kubernetes CLI to use them
        run: bash tests/integration/gha-run.sh get-cluster-credentials
        run: bash tests/integration/kubernetes/gha-run.sh get-cluster-credentials

      - name: Deploy Kata
        timeout-minutes: 10
        run: bash tests/integration/kubernetes/gha-run.sh deploy-kata-aks

      - name: Run tests
        timeout-minutes: 60
        run: bash tests/integration/gha-run.sh run-tests-aks
        run: bash tests/integration/kubernetes/gha-run.sh run-tests

      - name: Delete AKS cluster
        if: always()
        run: bash tests/integration/gha-run.sh delete-cluster
        run: bash tests/integration/kubernetes/gha-run.sh delete-cluster
9 .github/workflows/run-k8s-tests-on-sev.yaml vendored
@@ -29,15 +29,20 @@ jobs:
      DOCKER_TAG: ${{ inputs.tag }}
      KATA_HYPERVISOR: ${{ matrix.vmm }}
      KUBECONFIG: /home/kata/.kube/config
      USING_NFD: "false"
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{ inputs.commit-hash }}

      - name: Deploy Kata
        timeout-minutes: 10
        run: bash tests/integration/kubernetes/gha-run.sh deploy-kata-sev

      - name: Run tests
        timeout-minutes: 30
        run: bash tests/integration/gha-run.sh run-tests-sev
        run: bash tests/integration/kubernetes/gha-run.sh run-tests

      - name: Delete kata-deploy
        if: always()
        run: bash tests/integration/gha-run.sh cleanup-sev
        run: bash tests/integration/kubernetes/gha-run.sh cleanup-sev
11 .github/workflows/run-k8s-tests-on-snp.yaml vendored
@@ -29,15 +29,20 @@ jobs:
      DOCKER_TAG: ${{ inputs.tag }}
      KATA_HYPERVISOR: ${{ matrix.vmm }}
      KUBECONFIG: /home/kata/.kube/config
      USING_NFD: "false"
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{ inputs.commit-hash }}

      - name: Deploy Kata
        timeout-minutes: 10
        run: bash tests/integration/kubernetes/gha-run.sh deploy-kata-snp

      - name: Run tests
        timeout-minutes: 30
        run: bash tests/integration/gha-run.sh run-tests-snp
        run: bash tests/integration/kubernetes/gha-run.sh run-tests

      - name: Delete kata-deploy
        if: always()
        run: bash tests/integration/gha-run.sh cleanup-snp
        run: bash tests/integration/kubernetes/gha-run.sh cleanup-snp
12 .github/workflows/run-k8s-tests-on-tdx.yaml vendored
@@ -28,16 +28,20 @@ jobs:
      DOCKER_REPO: ${{ inputs.repo }}
      DOCKER_TAG: ${{ inputs.tag }}
      KATA_HYPERVISOR: ${{ matrix.vmm }}
      KUBECONFIG: /etc/rancher/k3s/k3s.yaml
      USING_NFD: "true"
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{ inputs.commit-hash }}

      - name: Deploy Kata
        timeout-minutes: 10
        run: bash tests/integration/kubernetes/gha-run.sh deploy-kata-tdx

      - name: Run tests
        timeout-minutes: 30
        run: bash tests/integration/gha-run.sh run-tests-tdx
        run: bash tests/integration/kubernetes/gha-run.sh run-tests

      - name: Delete kata-deploy
        if: always()
        run: bash tests/integration/gha-run.sh cleanup-tdx
        run: bash tests/integration/kubernetes/gha-run.sh cleanup-tdx
8 .github/workflows/run-metrics.yaml vendored
@@ -46,9 +46,15 @@ jobs:
      - name: run blogbench test
        run: bash tests/metrics/gha-run.sh run-test-blogbench

      - name: run tensorflow test
        run: bash tests/metrics/gha-run.sh run-test-tensorflow

      - name: run fio test
        run: bash tests/metrics/gha-run.sh run-test-fio

      - name: make metrics tarball ${{ matrix.vmm }}
        run: bash tests/metrics/gha-run.sh make-tarball-results

      - name: archive metrics results ${{ matrix.vmm }}
        uses: actions/upload-artifact@v3
        with:
42 .github/workflows/run-nydus-tests.yaml vendored Normal file
@@ -0,0 +1,42 @@
name: CI | Run nydus tests
on:
  workflow_call:
    inputs:
      tarball-suffix:
        required: false
        type: string
      commit-hash:
        required: false
        type: string

jobs:
  run-nydus:
    strategy:
      fail-fast: true
      matrix:
        containerd_version: ['lts', 'active']
        vmm: ['clh', 'qemu', 'dragonball']
    runs-on: garm-ubuntu-2204
    env:
      CONTAINERD_VERSION: ${{ matrix.containerd_version }}
      GOPATH: ${{ github.workspace }}
      KATA_HYPERVISOR: ${{ matrix.vmm }}
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{ inputs.commit-hash }}

      - name: Install dependencies
        run: bash tests/integration/nydus/gha-run.sh install-dependencies

      - name: get-kata-tarball
        uses: actions/download-artifact@v3
        with:
          name: kata-static-tarball-amd64${{ inputs.tarball-suffix }}
          path: kata-artifacts

      - name: Install kata
        run: bash tests/integration/nydus/gha-run.sh install-kata kata-artifacts

      - name: Run nydus tests
        run: bash tests/integration/nydus/gha-run.sh run
37 .github/workflows/run-vfio-tests.yaml vendored Normal file
@@ -0,0 +1,37 @@
name: CI | Run vfio tests
on:
  workflow_call:
    inputs:
      tarball-suffix:
        required: false
        type: string
      commit-hash:
        required: false
        type: string

jobs:
  run-vfio:
    strategy:
      fail-fast: false
      matrix:
        vmm: ['clh', 'qemu']
    runs-on: garm-ubuntu-2204
    env:
      GOPATH: ${{ github.workspace }}
      KATA_HYPERVISOR: ${{ matrix.vmm }}
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{ inputs.commit-hash }}

      - name: Install dependencies
        run: bash tests/functional/vfio/gha-run.sh install-dependencies

      - name: get-kata-tarball
        uses: actions/download-artifact@v3
        with:
          name: kata-static-tarball-amd64${{ inputs.tarball-suffix }}
          path: kata-artifacts

      - name: Run vfio tests
        run: bash tests/functional/vfio/gha-run.sh run
@@ -7,10 +7,14 @@ on:
    - synchronize
  paths-ignore: [ '**.md', '**.png', '**.jpg', '**.jpeg', '**.svg', '/docs/**' ]

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

name: Static checks dragonball
jobs:
  test-dragonball:
    runs-on: self-hosted
    runs-on: dragonball
    env:
      RUST_BACKTRACE: "1"
    steps:
8 .github/workflows/static-checks.yaml vendored
@@ -6,6 +6,10 @@ on:
  - reopened
  - synchronize

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

name: Static checks
jobs:
  static-checks:
@@ -41,8 +45,8 @@ jobs:
        cd "${{ github.workspace }}/src/github.com/${{ github.repository }}"
        kernel_dir="tools/packaging/kernel/"
        kernel_version_file="${kernel_dir}kata_config_version"
        modified_files=$(git diff --name-only origin/CCv0..HEAD)
        if git diff --name-only origin/CCv0..HEAD "${kernel_dir}" | grep "${kernel_dir}"; then
        modified_files=$(git diff --name-only origin/main..HEAD)
        if git diff --name-only origin/main..HEAD "${kernel_dir}" | grep "${kernel_dir}"; then
          echo "Kernel directory has changed, checking if $kernel_version_file has been updated"
          if echo "$modified_files" | grep -v "README.md" | grep "${kernel_dir}" >>"/dev/null"; then
            echo "$modified_files" | grep "$kernel_version_file" >>/dev/null || ( echo "Please bump version in $kernel_version_file" && exit 1)
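The hunk above enforces one rule: if anything under the kernel packaging directory changed (`README.md` aside), `kata_config_version` must be part of the same change set. A sketch of that rule as a pure function over a newline-separated file list, so it can be exercised without `git diff` (`needs_version_bump` is a hypothetical name):

```shell
#!/bin/sh
# Sketch of the kernel-version-bump rule above, operating on a list of
# modified files instead of `git diff --name-only`.
kernel_dir="tools/packaging/kernel/"
kernel_version_file="${kernel_dir}kata_config_version"

needs_version_bump() {
    modified_files="$1"
    # No non-README change under the kernel dir: nothing to enforce.
    echo "$modified_files" | grep -v "README.md" | grep -q "${kernel_dir}" || return 1
    # Version file already part of the change set: rule satisfied.
    echo "$modified_files" | grep -q "$kernel_version_file" && return 1
    return 0  # kernel changed but the version file was not bumped
}

if needs_version_bump "tools/packaging/kernel/patches/fix.patch"; then
    echo "Please bump version in $kernel_version_file"
fi
```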
7 Makefile
@@ -24,6 +24,10 @@ TOOLS += trace-forwarder

STANDARD_TARGETS = build check clean install static-checks-build test vendor

# Variables for the build-and-publish-kata-debug target
KATA_DEBUG_REGISTRY ?= ""
KATA_DEBUG_TAG ?= ""

default: all

include utils.mk
@@ -44,6 +48,9 @@ static-checks: static-checks-build
docs-url-alive-check:
	bash ci/docs-url-alive-check.sh

build-and-publish-kata-debug:
	bash tools/packaging/kata-debug/kata-debug-build-and-upload-payload.sh ${KATA_DEBUG_REGISTRY} ${KATA_DEBUG_TAG}

.PHONY: \
	all \
	kata-tarball \
@@ -134,6 +134,7 @@ The table below lists the remaining parts of the project:
| [packaging](tools/packaging) | infrastructure | Scripts and metadata for producing packaged binaries<br/>(components, hypervisors, kernel and rootfs). |
| [kernel](https://www.kernel.org) | kernel | Linux kernel used by the hypervisor to boot the guest image. Patches are stored [here](tools/packaging/kernel). |
| [osbuilder](tools/osbuilder) | infrastructure | Tool to create "mini O/S" rootfs and initrd images and kernel for the hypervisor. |
| [kata-debug](tools/packaging/kata-debug/README.md) | infrastructure | Utility tool to gather Kata Containers debug information from Kubernetes clusters. |
| [`agent-ctl`](src/tools/agent-ctl) | utility | Tool that provides low-level access for testing the agent. |
| [`kata-ctl`](src/tools/kata-ctl) | utility | Tool that provides advanced commands and debug facilities. |
| [`log-parser-rs`](src/tools/log-parser-rs) | utility | Tool that aids in analyzing logs from the kata runtime. |
@@ -72,8 +72,7 @@ build_and_install_gperf() {
    curl -sLO "${gperf_tarball_url}"
    tar -xf "${gperf_tarball}"
    pushd "gperf-${gperf_version}"
    # gperf is a build time dependency of libseccomp and not to be used in the target.
    # Unset $CC since that might point to a cross compiler.
    # Unset $CC for configure, we will always use native for gperf
    CC= ./configure --prefix="${gperf_install_dir}"
    make
    make install
@@ -88,7 +87,8 @@ build_and_install_libseccomp() {
    curl -sLO "${libseccomp_tarball_url}"
    tar -xf "${libseccomp_tarball}"
    pushd "libseccomp-${libseccomp_version}"
    ./configure --prefix="${libseccomp_install_dir}" CFLAGS="${cflags}" --enable-static --host="${arch}"
    [ "${arch}" == $(uname -m) ] && cc_name="" || cc_name="${arch}-linux-gnu-gcc"
    CC=${cc_name} ./configure --prefix="${libseccomp_install_dir}" CFLAGS="${cflags}" --enable-static --host="${arch}"
    make
    make install
    popd
83 ci/lib.sh
@@ -64,86 +64,3 @@ run_get_pr_changed_file_details()
    source "$tests_repo_dir/.ci/lib.sh"
    get_pr_changed_file_details
}

# Check if the 1st argument version is greater than or equal to the 2nd one
# Version format: [0-9]+ separated by periods (e.g. 2.4.6, 1.11.3, etc.)
#
# Parameters:
#   $1 - a version to be tested
#   $2 - a target version
#
# Return:
#   0 if $1 is greater than or equal to $2
#   1 otherwise
version_greater_than_equal() {
    local current_version=$1
    local target_version=$2
    smaller_version=$(echo -e "$current_version\n$target_version" | sort -V | head -1)
    if [ "${smaller_version}" = "${target_version}" ]; then
        return 0
    else
        return 1
    fi
}
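The comparison above works because `sort -V` orders version strings numerically per component, so if the target sorts first (or the two are equal), the candidate is at least the target. A standalone copy for illustration, renamed `version_ge` to keep it separate from the `ci/lib.sh` function:

```shell
#!/bin/sh
# Standalone copy of the sort -V comparison used above:
# returns 0 when $1 >= $2. The smallest of the two versions under
# version sort being the target means the candidate is not below it.
version_ge() {
    smaller=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -1)
    [ "$smaller" = "$2" ]
}

version_ge "2.17.0" "2.4.6" && echo "2.17.0 >= 2.4.6"
# Version sort, not lexicographic: 1.11.3 > 1.2.0 even though "11" < "2" as text.
version_ge "1.11.3" "1.2.0" && echo "1.11.3 >= 1.2.0"
```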
# Build an IBM zSystem secure execution (SE) image
#
# Parameters:
#   $1 - kernel_parameters
#   $2 - a source directory where kernel and initrd are located
#   $3 - a destination directory where a SE image is built
#
# Return:
#   0 if the image is successfully built
#   1 otherwise
build_secure_image() {
    kernel_params="${1:-}"
    install_src_dir="${2:-}"
    install_dest_dir="${3:-}"

    if [ ! -f "${install_src_dir}/vmlinuz.container" ] ||
        [ ! -f "${install_src_dir}/kata-containers-initrd.img" ]; then
        cat << EOF >&2
Either kernel or initrd does not exist or is mistakenly named
A file name for kernel must be vmlinuz.container (raw binary)
A file name for initrd must be kata-containers-initrd.img
EOF
        return 1
    fi

    cmdline="${kernel_params} panic=1 scsi_mod.scan=none swiotlb=262144"
    parmfile="$(mktemp --suffix=-cmdline)"
    echo "${cmdline}" > "${parmfile}"
    chmod 600 "${parmfile}"

    [ -n "${HKD_PATH:-}" ] || (echo >&2 "No host key document specified." && return 1)
    cert_list=($(ls -1 $HKD_PATH))
    declare hkd_options
    eval "for cert in ${cert_list[*]}; do
        hkd_options+=\"--host-key-document=\\\"\$HKD_PATH/\$cert\\\" \"
    done"

    command -v genprotimg > /dev/null 2>&1 || { apt update; apt install -y s390-tools; }
    extra_arguments=""
    genprotimg_version=$(genprotimg --version | grep -Po '(?<=version )[^-]+')
    if ! version_greater_than_equal "${genprotimg_version}" "2.17.0"; then
        extra_arguments="--x-pcf '0xe0'"
    fi

    eval genprotimg \
        "${extra_arguments}" \
        "${hkd_options}" \
        --output="${install_dest_dir}/kata-containers-secure.img" \
        --image="${install_src_dir}/vmlinuz.container" \
        --ramdisk="${install_src_dir}/kata-containers-initrd.img" \
        --parmfile="${parmfile}" \
        --no-verify # no verification for CI testing purposes

    build_result=$?
    rm -f "${parmfile}"
    if [ $build_result -eq 0 ]; then
        return 0
    else
        return 1
    fi
}
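The function above assembles its `--host-key-document` options through a nested `eval` over `ls` output. The same option string can be built with a plain glob loop, sketched here against a scratch directory standing in for `$HKD_PATH` (the `.crt` file names are made up for the demo):

```shell
#!/bin/sh
# Build the --host-key-document options with a plain loop instead of
# eval; a sketch only, using a temp directory in place of $HKD_PATH.
HKD_PATH="$(mktemp -d)"
touch "$HKD_PATH/HKD-1234.crt" "$HKD_PATH/HKD-5678.crt"

hkd_options=""
for cert in "$HKD_PATH"/*; do
    hkd_options="${hkd_options} --host-key-document=${cert}"
done

echo "genprotimg${hkd_options} ..."
rm -rf "$HKD_PATH"
```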
@@ -14,6 +14,7 @@ Kata Containers design documents:
- [`Inotify` support](inotify.md)
- [`Hooks` support](hooks-handling.md)
- [Metrics(Kata 2.0)](kata-2-0-metrics.md)
- [Metrics in Rust Runtime(runtime-rs)](kata-metrics-in-runtime-rs.md)
- [Design for Kata Containers `Lazyload` ability with `nydus`](kata-nydus-design.md)
- [Design for direct-assigned volume](direct-blk-device-assignment.md)
- [Design for core-scheduling](core-scheduling.md)
@@ -3,11 +3,11 @@
[Kubernetes](https://github.com/kubernetes/kubernetes/), or K8s, is a popular open source
container orchestration engine. In Kubernetes, a set of containers sharing resources
such as networking, storage, mount, PID, etc. is called a
[pod](https://kubernetes.io/docs/user-guide/pods/).
[pod](https://kubernetes.io/docs/concepts/workloads/pods/).

A node can have multiple pods, but at a minimum, a node within a Kubernetes cluster
only needs to run a container runtime and a container agent (called a
[Kubelet](https://kubernetes.io/docs/admin/kubelet/)).
[Kubelet](https://kubernetes.io/docs/concepts/overview/components/#kubelet)).

Kata Containers represents a Kubelet pod as a VM.
docs/design/kata-metrics-in-runtime-rs.md (new file, 50 lines)
@@ -0,0 +1,50 @@
# Kata Metrics in Rust Runtime(runtime-rs)

Rust Runtime(runtime-rs) is responsible for:

- Gathering metrics about the `shim`.
- Gathering metrics from the `hypervisor` (through a `channel`).
- Getting metrics from the `agent` (through `ttrpc`).

---

Here are listed all the metrics gathered by `runtime-rs`.

> * The current status of each entry is marked as:
>   * ✅: DONE
>   * 🚧: TODO
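The shim metrics in the tables below are emitted in the Prometheus text exposition format. As a quick illustration (the sample lines here are hand-written for demonstration, not captured from a real sandbox), a scrape can be filtered down to the shim family like this:

```shell
# Filter kata_shim_* samples out of a Prometheus text-format scrape.
# The sample input is invented for illustration; in a real deployment
# the text would come from a metrics endpoint (address varies by setup).
scrape='# TYPE kata_shim_threads gauge
kata_shim_threads{sandbox_id="abc123"} 18
# TYPE kata_shim_fds gauge
kata_shim_fds{sandbox_id="abc123"} 105
go_goroutines 42'

# Keep only the kata_shim_* sample lines, dropping comments and other families.
printf '%s\n' "${scrape}" | grep '^kata_shim_'
```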

### Kata Shim

| STATUS | Metric name | Type | Units | Labels |
| ------ | ----------- | ---- | ----- | ------ |
| 🚧 | `kata_shim_agent_rpc_durations_histogram_milliseconds`: <br> RPC latency distributions. | `HISTOGRAM` | `milliseconds` | <ul><li>`action` (RPC actions of Kata agent)<ul><li>`grpc.CheckRequest`</li><li>`grpc.CloseStdinRequest`</li><li>`grpc.CopyFileRequest`</li><li>`grpc.CreateContainerRequest`</li><li>`grpc.CreateSandboxRequest`</li><li>`grpc.DestroySandboxRequest`</li><li>`grpc.ExecProcessRequest`</li><li>`grpc.GetMetricsRequest`</li><li>`grpc.GuestDetailsRequest`</li><li>`grpc.ListInterfacesRequest`</li><li>`grpc.ListProcessesRequest`</li><li>`grpc.ListRoutesRequest`</li><li>`grpc.MemHotplugByProbeRequest`</li><li>`grpc.OnlineCPUMemRequest`</li><li>`grpc.PauseContainerRequest`</li><li>`grpc.RemoveContainerRequest`</li><li>`grpc.ReseedRandomDevRequest`</li><li>`grpc.ResumeContainerRequest`</li><li>`grpc.SetGuestDateTimeRequest`</li><li>`grpc.SignalProcessRequest`</li><li>`grpc.StartContainerRequest`</li><li>`grpc.StatsContainerRequest`</li><li>`grpc.TtyWinResizeRequest`</li><li>`grpc.UpdateContainerRequest`</li><li>`grpc.UpdateInterfaceRequest`</li><li>`grpc.UpdateRoutesRequest`</li><li>`grpc.WaitProcessRequest`</li><li>`grpc.WriteStreamRequest`</li></ul></li><li>`sandbox_id`</li></ul> |
| ✅ | `kata_shim_fds`: <br> Kata containerd shim v2 open FDs. | `GAUGE` | | <ul><li>`sandbox_id`</li></ul> |
| ✅ | `kata_shim_io_stat`: <br> Kata containerd shim v2 process IO statistics. | `GAUGE` | | <ul><li>`item` (see `/proc/<pid>/io`)<ul><li>`cancelledwritebytes`</li><li>`rchar`</li><li>`readbytes`</li><li>`syscr`</li><li>`syscw`</li><li>`wchar`</li><li>`writebytes`</li></ul></li><li>`sandbox_id`</li></ul> |
| ✅ | `kata_shim_netdev`: <br> Kata containerd shim v2 network devices statistics. | `GAUGE` | | <ul><li>`interface` (network device name)</li><li>`item` (see `/proc/net/dev`)<ul><li>`recv_bytes`</li><li>`recv_compressed`</li><li>`recv_drop`</li><li>`recv_errs`</li><li>`recv_fifo`</li><li>`recv_frame`</li><li>`recv_multicast`</li><li>`recv_packets`</li><li>`sent_bytes`</li><li>`sent_carrier`</li><li>`sent_colls`</li><li>`sent_compressed`</li><li>`sent_drop`</li><li>`sent_errs`</li><li>`sent_fifo`</li><li>`sent_packets`</li></ul></li><li>`sandbox_id`</li></ul> |
| 🚧 | `kata_shim_pod_overhead_cpu`: <br> Kata Pod overhead for CPU resources(percent). | `GAUGE` | percent | <ul><li>`sandbox_id`</li></ul> |
| 🚧 | `kata_shim_pod_overhead_memory_in_bytes`: <br> Kata Pod overhead for memory resources(bytes). | `GAUGE` | `bytes` | <ul><li>`sandbox_id`</li></ul> |
| ✅ | `kata_shim_proc_stat`: <br> Kata containerd shim v2 process statistics. | `GAUGE` | | <ul><li>`item` (see `/proc/<pid>/stat`)<ul><li>`cstime`</li><li>`cutime`</li><li>`stime`</li><li>`utime`</li></ul></li><li>`sandbox_id`</li></ul> |
| ✅ | `kata_shim_proc_status`: <br> Kata containerd shim v2 process status. | `GAUGE` | | <ul><li>`item` (see `/proc/<pid>/status`)<ul><li>`hugetlbpages`</li><li>`nonvoluntary_ctxt_switches`</li><li>`rssanon`</li><li>`rssfile`</li><li>`rssshmem`</li><li>`vmdata`</li><li>`vmexe`</li><li>`vmhwm`</li><li>`vmlck`</li><li>`vmlib`</li><li>`vmpeak`</li><li>`vmpin`</li><li>`vmpmd`</li><li>`vmpte`</li><li>`vmrss`</li><li>`vmsize`</li><li>`vmstk`</li><li>`vmswap`</li><li>`voluntary_ctxt_switches`</li></ul></li><li>`sandbox_id`</li></ul> |
| 🚧 | `kata_shim_process_cpu_seconds_total`: <br> Total user and system CPU time spent in seconds. | `COUNTER` | `seconds` | <ul><li>`sandbox_id`</li></ul> |
| 🚧 | `kata_shim_process_max_fds`: <br> Maximum number of open file descriptors. | `GAUGE` | | <ul><li>`sandbox_id`</li></ul> |
| 🚧 | `kata_shim_process_open_fds`: <br> Number of open file descriptors. | `GAUGE` | | <ul><li>`sandbox_id`</li></ul> |
| 🚧 | `kata_shim_process_resident_memory_bytes`: <br> Resident memory size in bytes. | `GAUGE` | `bytes` | <ul><li>`sandbox_id`</li></ul> |
| 🚧 | `kata_shim_process_start_time_seconds`: <br> Start time of the process since `unix` epoch in seconds. | `GAUGE` | `seconds` | <ul><li>`sandbox_id`</li></ul> |
| 🚧 | `kata_shim_process_virtual_memory_bytes`: <br> Virtual memory size in bytes. | `GAUGE` | `bytes` | <ul><li>`sandbox_id`</li></ul> |
| 🚧 | `kata_shim_process_virtual_memory_max_bytes`: <br> Maximum amount of virtual memory available in bytes. | `GAUGE` | `bytes` | <ul><li>`sandbox_id`</li></ul> |
| 🚧 | `kata_shim_rpc_durations_histogram_milliseconds`: <br> RPC latency distributions. | `HISTOGRAM` | `milliseconds` | <ul><li>`action` (Kata shim v2 actions)<ul><li>`checkpoint`</li><li>`close_io`</li><li>`connect`</li><li>`create`</li><li>`delete`</li><li>`exec`</li><li>`kill`</li><li>`pause`</li><li>`pids`</li><li>`resize_pty`</li><li>`resume`</li><li>`shutdown`</li><li>`start`</li><li>`state`</li><li>`stats`</li><li>`update`</li><li>`wait`</li></ul></li><li>`sandbox_id`</li></ul> |
| ✅ | `kata_shim_threads`: <br> Kata containerd shim v2 process threads. | `GAUGE` | | <ul><li>`sandbox_id`</li></ul> |

### Kata Hypervisor

Unlike the Go runtime, the hypervisor and the shim in runtime-rs belong to the **same process**, so all the previous hypervisor and shim metrics only need to be gathered once. Thus, we currently only collect the previous metrics in the kata shim.

At the same time, we added an interface (`VmmAction::GetHypervisorMetrics`) to gather hypervisor metrics, in case we design tailor-made metrics for the hypervisor in the future. Here are the metrics exposed from [src/dragonball/src/metric.rs](https://github.com/kata-containers/kata-containers/blob/main/src/dragonball/src/metric.rs).

| Metric name | Type | Units | Labels |
| ----------- | ---- | ----- | ------ |
| `kata_hypervisor_scrape_count`: <br> Metrics scrape count | `COUNTER` | | <ul><li>`sandbox_id`</li></ul> |
| `kata_hypervisor_vcpu`: <br>Hypervisor metrics specific to VCPUs' mode of functioning. | `IntGauge` | | <ul><li>`item`<ul><li>`exit_io_in`</li><li>`exit_io_out`</li><li>`exit_mmio_read`</li><li>`exit_mmio_write`</li><li>`failures`</li><li>`filter_cpuid`</li></ul></li><li>`sandbox_id`</li></ul> |
| `kata_hypervisor_seccomp`: <br> Hypervisor metrics for the seccomp filtering. | `IntGauge` | | <ul><li>`item`<ul><li>`num_faults`</li></ul></li><li>`sandbox_id`</li></ul> |
| `kata_hypervisor_signals`: <br> Hypervisor metrics related to signals. | `IntGauge` | | <ul><li>`item`<ul><li>`sigbus`</li><li>`sigsegv`</li></ul></li><li>`sandbox_id`</li></ul> |

@@ -45,8 +45,4 @@
- [How to run Kata Containers with `nydus`](how-to-use-virtio-fs-nydus-with-kata.md)
- [How to run Kata Containers with AMD SEV-SNP](how-to-run-kata-containers-with-SNP-VMs.md)
- [How to use EROFS to build rootfs in Kata Containers](how-to-use-erofs-build-rootfs.md)
- [How to run Kata Containers with kinds of Block Volumes](how-to-run-kata-containers-with-kinds-of-Block-Volumes.md)

## Confidential Containers
- [How to build and test the Confidential Containers `CCv0` proof of concept](how-to-build-and-test-ccv0.md)
- [How to generate a Kata Containers payload for the Confidential Containers Operator](how-to-generate-a-kata-containers-payload-for-the-confidential-containers-operator.md)
- [How to run Kata Containers with kinds of Block Volumes](how-to-run-kata-containers-with-kinds-of-Block-Volumes.md)

@@ -1,635 +0,0 @@
#!/bin/bash -e
#
# Copyright (c) 2021, 2023 IBM Corporation
#
# SPDX-License-Identifier: Apache-2.0
#

# Disclaimer: This script is work in progress for supporting the CCv0 prototype.
# It shouldn't be considered supported by the Kata Containers community, or anyone else.

# Based on https://github.com/kata-containers/kata-containers/blob/main/docs/Developer-Guide.md,
# but with elements of the tests/.ci scripts used

readonly script_name="$(basename "${BASH_SOURCE[0]}")"

# By default in Golang >= 1.16 GO111MODULE is set to "on", but not all modules support it, so override it to "auto"
export GO111MODULE="auto"

# Set up the kata containers environment if not already set - we default to containerd
export CRI_CONTAINERD=${CRI_CONTAINERD:-"yes"}
export CRI_RUNTIME=${CRI_RUNTIME:-"containerd"}
export CRIO=${CRIO:-"no"}
export KATA_HYPERVISOR="${KATA_HYPERVISOR:-qemu}"
export KUBERNETES=${KUBERNETES:-"no"}
export AGENT_INIT="${AGENT_INIT:-${TEST_INITRD:-no}}"
export AA_KBC="${AA_KBC:-offline_fs_kbc}"
export KATA_BUILD_CC=${KATA_BUILD_CC:-"yes"}
export TEE_TYPE=${TEE_TYPE:-}
export PREFIX="${PREFIX:-/opt/confidential-containers}"
export RUNTIME_CONFIG_PATH="${RUNTIME_CONFIG_PATH:-${PREFIX}/share/defaults/kata-containers/configuration.toml}"

# Allow the user to override the default repo and branch names if they want to build from a fork
export katacontainers_repo="${katacontainers_repo:-github.com/kata-containers/kata-containers}"
export katacontainers_branch="${katacontainers_branch:-CCv0}"
export kata_default_branch=${katacontainers_branch}
export tests_repo="${tests_repo:-github.com/kata-containers/tests}"
export tests_branch="${tests_branch:-CCv0}"
export target_branch=${tests_branch} # kata-containers/ci/lib.sh uses the target branch var to check out the tests repo

# If .bash_profile exists then use it, otherwise fall back to .profile
export PROFILE="${HOME}/.profile"
if [ -r "${HOME}/.bash_profile" ]; then
    export PROFILE="${HOME}/.bash_profile"
fi
# Prevent a "PS1: unbound variable" error
export PS1=${PS1:-}

# Create a bunch of common, derived values up front so we don't need to create them in all the different functions
. ${PROFILE}
if [ -z "${GOPATH:-}" ]; then
    export GOPATH=${HOME}/go
fi
export tests_repo_dir="${GOPATH}/src/${tests_repo}"
export katacontainers_repo_dir="${GOPATH}/src/${katacontainers_repo}"
export ROOTFS_DIR="${katacontainers_repo_dir}/tools/osbuilder/rootfs-builder/rootfs"
export PULL_IMAGE="${PULL_IMAGE:-quay.io/kata-containers/confidential-containers:signed}" # Doesn't need authentication
export CONTAINER_ID="${CONTAINER_ID:-0123456789}"
source /etc/os-release || source /usr/lib/os-release
grep -Eq "\<fedora\>" /etc/os-release 2> /dev/null && export USE_PODMAN=true


# If we've already checked out the tests repo then source the confidential scripts
if [ "${KUBERNETES}" == "yes" ]; then
    export BATS_TEST_DIRNAME="${tests_repo_dir}/integration/kubernetes/confidential"
else
    export BATS_TEST_DIRNAME="${tests_repo_dir}/integration/containerd/confidential"
fi
[ -d "${BATS_TEST_DIRNAME}" ] && source "${BATS_TEST_DIRNAME}/lib.sh"

[ -d "${BATS_TEST_DIRNAME}" ] && source "${BATS_TEST_DIRNAME}/../../confidential/lib.sh"

usage() {
    exit_code="$1"
    cat <<EOF
Overview:
    Build and test kata containers from source
    Optionally set the kata-containers and tests repo and branch as exported variables before running
    e.g. export katacontainers_repo=github.com/stevenhorsman/kata-containers && export katacontainers_branch=kata-ci-from-fork && export tests_repo=github.com/stevenhorsman/tests && export tests_branch=kata-ci-from-fork && ~/${script_name} build_and_install_all
Usage:
    ${script_name} [options] <command>
Commands:
- agent_create_container: Run the CreateContainer command against the agent with agent-ctl
- agent_pull_image: Run the PullImage command against the agent with agent-ctl
- all: Build and install everything, test kata with containerd and capture the logs
- build_and_add_agent_to_rootfs: Build the kata-agent and add it to the rootfs
- build_and_install_all: Build and install everything
- build_and_install_rootfs: Build and install the rootfs image
- build_kata_runtime: Build and install the kata runtime
- build_cloud_hypervisor: Checkout, patch, build and install Cloud Hypervisor
- build_qemu: Checkout, patch, build and install QEMU
- configure: Configure Kata to use rootfs and enable debug
- connect_to_ssh_demo_pod: SSH into the ssh demo pod, showing that the decryption succeeded
- copy_signature_files_to_guest: Copy signature verification files to the guest
- create_rootfs: Create a local rootfs
- crictl_create_cc_container: Use crictl to create a new busybox container in the kata cc pod
- crictl_create_cc_pod: Use crictl to create a new kata cc pod
- crictl_delete_cc: Use crictl to delete the kata cc pod sandbox and the container in it
- help: Display this help
- init_kubernetes: Initialize a Kubernetes cluster on this system
- initialize: Install dependencies and check out the kata-containers source
- install_guest_kernel: Set up, build and install the guest kernel
- kubernetes_create_cc_pod: Create a Kata CC runtime busybox-based pod in Kubernetes
- kubernetes_create_ssh_demo_pod: Create a Kata CC runtime pod based on the ssh demo
- kubernetes_delete_cc_pod: Delete the Kata CC runtime busybox-based pod in Kubernetes
- kubernetes_delete_ssh_demo_pod: Delete the Kata CC runtime pod based on the ssh demo
- open_kata_shell: Open a shell into the kata runtime
- rebuild_and_install_kata: Rebuild the kata runtime and agent and build and install the image
- shim_pull_image: Run the PullImage command against the shim with ctr
- test_capture_logs: Test using kata with containerd and capture the logs in the user's home directory
- test: Test using kata with containerd

Options:
    -d: Enable debug
    -h: Display this help
EOF
    # If the script was sourced don't exit, as this would exit the main shell; just return instead
    [[ $_ != $0 ]] && return "$exit_code" || exit "$exit_code"
}

build_and_install_all() {
    initialize
    build_and_install_kata_runtime
    configure
    create_a_local_rootfs
    build_and_install_rootfs
    install_guest_kernel_image
    case "$KATA_HYPERVISOR" in
        "qemu")
            build_qemu
            ;;
        "cloud-hypervisor")
            build_cloud_hypervisor
            ;;
        *)
            echo "Invalid option: $KATA_HYPERVISOR is not supported." >&2
            ;;
    esac

    check_kata_runtime
    if [ "${KUBERNETES}" == "yes" ]; then
        init_kubernetes
    fi
}

rebuild_and_install_kata() {
    checkout_tests_repo
    checkout_kata_containers_repo
    build_and_install_kata_runtime
    build_and_add_agent_to_rootfs
    build_and_install_rootfs
    check_kata_runtime
}

# Based on the jenkins_job_build.sh script in kata-containers/tests/.ci - checks out source code and installs dependencies
initialize() {
    # We need git to check out and bootstrap the ci scripts, plus some other packages used in testing
    sudo apt-get update && sudo apt-get install -y curl git qemu-utils

    grep -qxF "export GOPATH=\${HOME}/go" "${PROFILE}" || echo "export GOPATH=\${HOME}/go" >> "${PROFILE}"
    grep -qxF "export GOROOT=/usr/local/go" "${PROFILE}" || echo "export GOROOT=/usr/local/go" >> "${PROFILE}"
    grep -qxF "export PATH=\${GOPATH}/bin:/usr/local/go/bin:\${PATH}" "${PROFILE}" || echo "export PATH=\${GOPATH}/bin:/usr/local/go/bin:\${PATH}" >> "${PROFILE}"

    # Load the new go and PATH parameters from the profile
    . ${PROFILE}
    mkdir -p "${GOPATH}"

    checkout_tests_repo

    pushd "${tests_repo_dir}"
    local ci_dir_name=".ci"
    sudo -E PATH=$PATH -s "${ci_dir_name}/install_go.sh" -p -f
    sudo -E PATH=$PATH -s "${ci_dir_name}/install_rust.sh"
    # Need to change the ownership of rustup so later processes can create temp files there
    sudo chown -R ${USER}:${USER} "${HOME}/.rustup"

    checkout_kata_containers_repo

    # Run setup, but don't install kata as we will build it ourselves in locations matching the developer guide
    export INSTALL_KATA="no"
    sudo -E PATH=$PATH -s ${ci_dir_name}/setup.sh
    # Reload the profile to pick up installed dependencies
    . ${PROFILE}
    popd
}

checkout_tests_repo() {
    echo "Creating repo: ${tests_repo} and branch ${tests_branch} into ${tests_repo_dir}..."
    # Due to the git security fix (https://github.blog/2022-04-12-git-security-vulnerability-announced/) the tests repo needs
    # to be owned by root, as it is re-checked out in rootfs.sh
    mkdir -p $(dirname "${tests_repo_dir}")
    [ -d "${tests_repo_dir}" ] || sudo -E git clone "https://${tests_repo}.git" "${tests_repo_dir}"
    sudo -E chown -R root:root "${tests_repo_dir}"
    pushd "${tests_repo_dir}"
    sudo -E git fetch
    if [ -n "${tests_branch}" ]; then
        sudo -E git checkout ${tests_branch}
    fi
    sudo -E git reset --hard origin/${tests_branch}
    popd

    source "${BATS_TEST_DIRNAME}/lib.sh"
    source "${BATS_TEST_DIRNAME}/../../confidential/lib.sh"
}

# Note: clone_katacontainers_repo uses go, so that needs to be installed first
checkout_kata_containers_repo() {
    source "${tests_repo_dir}/.ci/lib.sh"
    echo "Creating repo: ${katacontainers_repo} and branch ${kata_default_branch} into ${katacontainers_repo_dir}..."
    clone_katacontainers_repo
    sudo -E chown -R ${USER}:${USER} "${katacontainers_repo_dir}"
}

build_and_install_kata_runtime() {
    export DEFAULT_HYPERVISOR=${KATA_HYPERVISOR}
    ${tests_repo_dir}/.ci/install_runtime.sh
}

configure() {
    # Configure kata to use the rootfs image, not the initrd
    sudo sed -i 's/^\(initrd =.*\)/# \1/g' ${RUNTIME_CONFIG_PATH}

    enable_full_debug
    enable_agent_console

    # Switch image offload to true in the kata config
    switch_image_service_offload "on"

    configure_cc_containerd
    # From crictl v1.24.1 the default timeout leads to the pod creation failing, so update it
    sudo crictl config --set timeout=10

    # Verity checks aren't working locally (possibly because we aren't regenerating the hash), so remove them from the kernel parameters
    remove_kernel_param "cc_rootfs_verity.scheme"
}

build_and_add_agent_to_rootfs() {
    build_a_custom_kata_agent
    add_custom_agent_to_rootfs
}

build_a_custom_kata_agent() {
    # Install libseccomp for static linking
    sudo -E PATH=$PATH GOPATH=$GOPATH ${katacontainers_repo_dir}/ci/install_libseccomp.sh /tmp/kata-libseccomp /tmp/kata-gperf
    export LIBSECCOMP_LINK_TYPE=static
    export LIBSECCOMP_LIB_PATH=/tmp/kata-libseccomp/lib

    . "$HOME/.cargo/env"
    pushd ${katacontainers_repo_dir}/src/agent
    sudo -E PATH=$PATH make

    ARCH=$(uname -m)
    [ ${ARCH} == "ppc64le" ] || [ ${ARCH} == "s390x" ] && export LIBC=gnu || export LIBC=musl
    [ ${ARCH} == "ppc64le" ] && export ARCH=powerpc64le

    # Run a make install into the rootfs directory in order to create the kata-agent.service file, which is required when we add the agent to the rootfs
    sudo -E PATH=$PATH make install DESTDIR="${ROOTFS_DIR}"
    popd
}

create_a_local_rootfs() {
    sudo rm -rf "${ROOTFS_DIR}"
    pushd ${katacontainers_repo_dir}/tools/osbuilder/rootfs-builder
    export distro="ubuntu"
    [[ -z "${USE_PODMAN:-}" ]] && use_docker="${use_docker:-1}"
    sudo -E OS_VERSION="${OS_VERSION:-}" GOPATH=$GOPATH EXTRA_PKGS="vim iputils-ping net-tools" DEBUG="${DEBUG:-}" USE_DOCKER="${use_docker:-}" SKOPEO=${SKOPEO:-} AA_KBC=${AA_KBC:-} UMOCI=yes SECCOMP=yes ./rootfs.sh -r ${ROOTFS_DIR} ${distro}

    # install_rust.sh (run during rootfs.sh) switches us to the main branch of the tests repo, so switch back now
    pushd "${tests_repo_dir}"
    sudo -E git checkout ${tests_branch}
    popd
    # During the ./rootfs.sh call the kata agent is built as root, so we need to update the permissions so we can rebuild it
    sudo chown -R ${USER}:${USER} "${katacontainers_repo_dir}/src/agent/"

    popd
}

add_custom_agent_to_rootfs() {
    pushd ${katacontainers_repo_dir}/tools/osbuilder/rootfs-builder

    ARCH=$(uname -m)
    [ ${ARCH} == "ppc64le" ] || [ ${ARCH} == "s390x" ] && export LIBC=gnu || export LIBC=musl
    [ ${ARCH} == "ppc64le" ] && export ARCH=powerpc64le

    sudo install -o root -g root -m 0550 -t ${ROOTFS_DIR}/usr/bin ${katacontainers_repo_dir}/src/agent/target/${ARCH}-unknown-linux-${LIBC}/release/kata-agent
    sudo install -o root -g root -m 0440 ../../../src/agent/kata-agent.service ${ROOTFS_DIR}/usr/lib/systemd/system/
    sudo install -o root -g root -m 0440 ../../../src/agent/kata-containers.target ${ROOTFS_DIR}/usr/lib/systemd/system/
    popd
}

build_and_install_rootfs() {
    build_rootfs_image
    install_rootfs_image
}

build_rootfs_image() {
    pushd ${katacontainers_repo_dir}/tools/osbuilder/image-builder
    # Logic from install_kata_image.sh - if we aren't using podman (i.e. not on a fedora-like distro), then use docker
    [[ -z "${USE_PODMAN:-}" ]] && use_docker="${use_docker:-1}"
    sudo -E USE_DOCKER="${use_docker:-}" ./image_builder.sh ${ROOTFS_DIR}
    popd
}

install_rootfs_image() {
    pushd ${katacontainers_repo_dir}/tools/osbuilder/image-builder
    local commit=$(git log --format=%h -1 HEAD)
    local date=$(date +%Y-%m-%d-%T.%N%z)
    local image="kata-containers-${date}-${commit}"
    sudo install -o root -g root -m 0640 -D kata-containers.img "${PREFIX}/share/kata-containers/${image}"
    (cd ${PREFIX}/share/kata-containers && sudo ln -sf "$image" kata-containers.img)
    echo "Built rootfs from ${ROOTFS_DIR} to ${PREFIX}/share/kata-containers/${image}"
    ls -al ${PREFIX}/share/kata-containers
    popd
}

install_guest_kernel_image() {
    ${tests_repo_dir}/.ci/install_kata_kernel.sh
}

build_qemu() {
    ${tests_repo_dir}/.ci/install_virtiofsd.sh
    ${tests_repo_dir}/.ci/install_qemu.sh
}

build_cloud_hypervisor() {
    ${tests_repo_dir}/.ci/install_virtiofsd.sh
    ${tests_repo_dir}/.ci/install_cloud_hypervisor.sh
}

check_kata_runtime() {
    sudo kata-runtime check
}

k8s_pod_file="${HOME}/busybox-cc.yaml"
init_kubernetes() {
    # Check whether kubeadm is installed, and install it otherwise
    if ! [ -x "$(command -v kubeadm)" ]; then
        pushd "${tests_repo_dir}/.ci"
        sudo -E PATH=$PATH -s install_kubernetes.sh
        if [ "${CRI_CONTAINERD}" == "yes" ]; then
            sudo -E PATH=$PATH -s "configure_containerd_for_kubernetes.sh"
        fi
        popd
    fi

    # If kubernetes init has previously run we need to clean up by removing the registry container and resetting k8s
    local cid=$(sudo docker ps -a -q -f name=^/kata-registry$)
    if [ -n "${cid}" ]; then
        sudo docker stop ${cid} && sudo docker rm ${cid}
    fi
    local k8s_nodes=$(kubectl get nodes -o name 2>/dev/null || true)
    if [ -n "${k8s_nodes}" ]; then
        sudo kubeadm reset -f
    fi

    export CI="true" && sudo -E PATH=$PATH -s ${tests_repo_dir}/integration/kubernetes/init.sh
    sudo chown ${USER}:$(id -g -n ${USER}) "$HOME/.kube/config"
    cat << EOF > ${k8s_pod_file}
apiVersion: v1
kind: Pod
metadata:
  name: busybox-cc
spec:
  runtimeClassName: kata
  containers:
  - name: nginx
    image: quay.io/kata-containers/confidential-containers:signed
    imagePullPolicy: Always
EOF
}

call_kubernetes_create_cc_pod() {
    kubernetes_create_cc_pod ${k8s_pod_file}
}

call_kubernetes_delete_cc_pod() {
    pod_name=$(kubectl get pods -o jsonpath='{.items..metadata.name}')
    kubernetes_delete_cc_pod $pod_name
}

call_kubernetes_create_ssh_demo_pod() {
    setup_decryption_files_in_guest
    kubernetes_create_ssh_demo_pod
}

call_connect_to_ssh_demo_pod() {
    connect_to_ssh_demo_pod
}

call_kubernetes_delete_ssh_demo_pod() {
    pod=$(kubectl get pods -o jsonpath='{.items..metadata.name}')
    kubernetes_delete_ssh_demo_pod $pod
}

crictl_sandbox_name=kata-cc-busybox-sandbox
call_crictl_create_cc_pod() {
    # Update iptables to allow forwarding to the cni0 bridge, avoiding issues caused by the docker0 bridge
    sudo iptables -P FORWARD ACCEPT

    # get_pod_config in tests_common exports `pod_config`, which points to the prepared pod config yaml
    get_pod_config

    crictl_delete_cc_pod_if_exists "${crictl_sandbox_name}"
    crictl_create_cc_pod "${pod_config}"
    sudo crictl pods
}

call_crictl_create_cc_container() {
    # Create the container configuration yaml based on our test copy of busybox
    # get_pod_config in tests_common exports `pod_config`, which points to the prepared pod config yaml
    get_pod_config

    local container_config="${FIXTURES_DIR}/${CONTAINER_CONFIG_FILE:-container-config.yaml}"
    local pod_name=${crictl_sandbox_name}
    crictl_create_cc_container ${pod_name} ${pod_config} ${container_config}
    sudo crictl ps -a
}

crictl_delete_cc() {
    crictl_delete_cc_pod ${crictl_sandbox_name}
}

test_kata_runtime() {
    echo "Running ctr with the kata runtime..."
    local test_image="quay.io/kata-containers/confidential-containers:signed"
    if [ -z "$(sudo ctr images ls -q name=="${test_image}")" ]; then
        sudo ctr image pull "${test_image}"
    fi
    sudo ctr run --runtime "io.containerd.kata.v2" --rm -t "${test_image}" test-kata uname -a
}

run_kata_and_capture_logs() {
    echo "Clearing systemd journal..."
    sudo systemctl stop systemd-journald
    sudo rm -f /var/log/journal/*/* /run/log/journal/*/*
    sudo systemctl start systemd-journald
    test_kata_runtime
    echo "Collecting logs..."
    sudo journalctl -q -o cat -a -t kata-runtime > ${HOME}/kata-runtime.log
    sudo journalctl -q -o cat -a -t kata > ${HOME}/shimv2.log
    echo "Logs output to ${HOME}/kata-runtime.log and ${HOME}/shimv2.log"
}

get_ids() {
    guest_cid=$(sudo ss -H --vsock | awk '{print $6}' | cut -d: -f1)
    sandbox_id=$(ps -ef | grep containerd-shim-kata-v2 | egrep -o "id [^,][^,].* " | awk '{print $2}')
}
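To illustrate the `guest_cid` extraction in `get_ids`, here is the same `awk`/`cut` pipeline run against a hand-written sample line (the exact column layout of `ss -H --vsock` is an assumption here and can vary between `iproute2` versions):

```shell
# Sample line standing in for one row of `ss -H --vsock` output, where
# the sixth whitespace-separated field is the peer address "CID:port".
# This line is invented for illustration, not captured from a real host.
sample='v_str ESTAB 0 0 4294967295:1024 3:1024'

# Take field 6 ("3:1024") and keep only the CID before the colon.
guest_cid=$(printf '%s\n' "${sample}" | awk '{print $6}' | cut -d: -f1)
echo "${guest_cid}"
```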

open_kata_shell() {
    get_ids
    sudo -E "PATH=$PATH" kata-runtime exec ${sandbox_id}
}

build_bundle_dir_if_necessary() {
    bundle_dir="/tmp/bundle"
    if [ ! -d "${bundle_dir}" ]; then
        rootfs_dir="$bundle_dir/rootfs"
        image="quay.io/kata-containers/confidential-containers:signed"
        mkdir -p "$rootfs_dir" && (cd "$bundle_dir" && runc spec)
        sudo docker export $(sudo docker create "$image") | tar -C "$rootfs_dir" -xvf -
    fi
    # There were errors in the create container agent-ctl command due to /bin/ seemingly not being on the path, so hardcode it
    sudo sed -i -e 's%^\(\t*\)"sh"$%\1"/bin/sh"%g' "${bundle_dir}/config.json"
}

build_agent_ctl() {
    cd ${GOPATH}/src/${katacontainers_repo}/src/tools/agent-ctl/
    if [ -e "${HOME}/.cargo/registry" ]; then
        sudo chown -R ${USER}:${USER} "${HOME}/.cargo/registry"
    fi
    sudo -E PATH=$PATH -s make
    ARCH=$(uname -m)
    [ ${ARCH} == "ppc64le" ] || [ ${ARCH} == "s390x" ] && export LIBC=gnu || export LIBC=musl
    [ ${ARCH} == "ppc64le" ] && export ARCH=powerpc64le
    cd "./target/${ARCH}-unknown-linux-${LIBC}/release/"
}

run_agent_ctl_command() {
    get_ids
    build_bundle_dir_if_necessary
    command=$1
    # If kata-agent-ctl is pre-built in this directory, use it directly, otherwise build it first and switch to the release directory
    if [ ! -x kata-agent-ctl ]; then
        build_agent_ctl
    fi
    ./kata-agent-ctl -l debug connect --bundle-dir "${bundle_dir}" --server-address "vsock://${guest_cid}:1024" -c "${command}"
}

agent_pull_image() {
    run_agent_ctl_command "PullImage image=${PULL_IMAGE} cid=${CONTAINER_ID} source_creds=${SOURCE_CREDS}"
}

agent_create_container() {
    run_agent_ctl_command "CreateContainer cid=${CONTAINER_ID}"
}

shim_pull_image() {
    get_ids
    local ctr_shim_command="sudo ctr --namespace k8s.io shim --id ${sandbox_id} pull-image ${PULL_IMAGE} ${CONTAINER_ID}"
    echo "Issuing command '${ctr_shim_command}'"
    ${ctr_shim_command}
}
|
||||
|
||||
call_copy_signature_files_to_guest() {
    # TODO #5173 - remove this once the kernel_params aren't ignored by the agent config
    export DEBUG_CONSOLE="true"

    if [ "${SKOPEO:-}" = "yes" ]; then
        add_kernel_params "agent.container_policy_file=/etc/containers/quay_verification/quay_policy.json"
        setup_skopeo_signature_files_in_guest
    else
        # TODO #4888 - set config to specifically enable signature verification in ImageClient
        setup_offline_fs_kbc_signature_files_in_guest
    fi
}

main() {
    while getopts "dh" opt; do
        case "$opt" in
            d)
                export DEBUG="-d"
                set -x
                ;;
            h)
                usage 0
                ;;
            \?)
                echo "Invalid option: -$OPTARG" >&2
                usage 1
                ;;
        esac
    done

    shift $((OPTIND - 1))

    subcmd="${1:-}"

    [ -z "${subcmd}" ] && usage 1

    case "${subcmd}" in
        all)
            build_and_install_all
            run_kata_and_capture_logs
            ;;
        build_and_install_all)
            build_and_install_all
            ;;
        rebuild_and_install_kata)
            rebuild_and_install_kata
            ;;
        initialize)
            initialize
            ;;
        build_kata_runtime)
            build_and_install_kata_runtime
            ;;
        configure)
            configure
            ;;
        create_rootfs)
            create_a_local_rootfs
            ;;
        build_and_add_agent_to_rootfs)
            build_and_add_agent_to_rootfs
            ;;
        build_and_install_rootfs)
            build_and_install_rootfs
            ;;
        install_guest_kernel)
            install_guest_kernel_image
            ;;
        build_cloud_hypervisor)
            build_cloud_hypervisor
            ;;
        build_qemu)
            build_qemu
            ;;
        init_kubernetes)
            init_kubernetes
            ;;
        crictl_create_cc_pod)
            call_crictl_create_cc_pod
            ;;
        crictl_create_cc_container)
            call_crictl_create_cc_container
            ;;
        crictl_delete_cc)
            crictl_delete_cc
            ;;
        kubernetes_create_cc_pod)
            call_kubernetes_create_cc_pod
            ;;
        kubernetes_delete_cc_pod)
            call_kubernetes_delete_cc_pod
            ;;
        kubernetes_create_ssh_demo_pod)
            call_kubernetes_create_ssh_demo_pod
            ;;
        connect_to_ssh_demo_pod)
            call_connect_to_ssh_demo_pod
            ;;
        kubernetes_delete_ssh_demo_pod)
            call_kubernetes_delete_ssh_demo_pod
            ;;
        test)
            test_kata_runtime
            ;;
        test_capture_logs)
            run_kata_and_capture_logs
            ;;
        open_kata_console)
            open_kata_console
            ;;
        open_kata_shell)
            open_kata_shell
            ;;
        agent_pull_image)
            agent_pull_image
            ;;
        shim_pull_image)
            shim_pull_image
            ;;
        agent_create_container)
            agent_create_container
            ;;
        copy_signature_files_to_guest)
            call_copy_signature_files_to_guest
            ;;
        *)
            usage 1
            ;;
    esac
}

main "$@"
@@ -1,45 +0,0 @@
# Copyright (c) 2021 IBM Corp.
#
# SPDX-License-Identifier: Apache-2.0
#

aa_kbc_params = "$AA_KBC_PARAMS"
https_proxy = "$HTTPS_PROXY"
[endpoints]
allowed = [
  "AddARPNeighborsRequest",
  "AddSwapRequest",
  "CloseStdinRequest",
  "CopyFileRequest",
  "CreateContainerRequest",
  "CreateSandboxRequest",
  "DestroySandboxRequest",
  #"ExecProcessRequest",
  "GetMetricsRequest",
  "GetOOMEventRequest",
  "GuestDetailsRequest",
  "ListInterfacesRequest",
  "ListRoutesRequest",
  "MemHotplugByProbeRequest",
  "OnlineCPUMemRequest",
  "PauseContainerRequest",
  "PullImageRequest",
  "ReadStreamRequest",
  "RemoveContainerRequest",
  #"ReseedRandomDevRequest",
  "ResizeVolumeRequest",
  "ResumeContainerRequest",
  "SetGuestDateTimeRequest",
  "SignalProcessRequest",
  "StartContainerRequest",
  "StartTracingRequest",
  "StatsContainerRequest",
  "StopTracingRequest",
  "TtyWinResizeRequest",
  "UpdateContainerRequest",
  "UpdateInterfaceRequest",
  "UpdateRoutesRequest",
  "VolumeStatsRequest",
  "WaitProcessRequest",
  "WriteStreamRequest"
]
@@ -1,475 +0,0 @@
# How to build, run and test Kata CCv0

## Introduction and Background

To make building (locally) and demoing the Kata Containers `CCv0` code base as simple as possible, I've
shared a script [`ccv0.sh`](./ccv0.sh). This script was originally my attempt to automate the steps of the
[Developer Guide](https://github.com/kata-containers/kata-containers/blob/main/docs/Developer-Guide.md) so that I could run
different sections of it repeatedly and reliably as I was making changes to different parts of the
Kata code base. I then tried to weave in some of the [`tests/.ci`](https://github.com/kata-containers/tests/tree/main/.ci)
scripts in order to reduce code duplication.
As we progress on the confidential containers journey I hope to add more features to demonstrate the functionality
we have working.

*Disclaimer: This script has mostly just been used and tested by me ([@stevenhorsman](https://github.com/stevenhorsman)),*
*so there might be issues with it. I'm happy to try and help solve these if possible, but this shouldn't be considered a*
*fully supported process by the Kata Containers community.*

### Basic script set-up and optional environment variables

In order to build, configure and demo the CCv0 functionality, these are the set-up steps I take:
- Provision a new VM
  - *I chose an Ubuntu 20.04 8GB VM for this as I had one available. There are some dependencies on apt-get installed*
    *packages, so these will need re-working to be compatible with other platforms.*
- Copy the script over to your VM *(I put it in the home directory)* and ensure it has execute permission by running
  ```bash
  $ chmod u+x ccv0.sh
  ```
- Optionally set up some environment variables
  - By default the script checks out the `CCv0` branches of the `kata-containers/kata-containers` and
    `kata-containers/tests` repositories, but it is designed to support testing personal forks and branches as well.
    If you want to build and run these, export the `katacontainers_repo`, `katacontainers_branch`, `tests_repo`
    and `tests_branch` variables, e.g.
    ```bash
    $ export katacontainers_repo=github.com/stevenhorsman/kata-containers
    $ export katacontainers_branch=stevenh/agent-pull-image-endpoint
    $ export tests_repo=github.com/stevenhorsman/tests
    $ export tests_branch=stevenh/add-ccv0-changes-to-build
    ```
    before running the script.
  - By default the build and configuration use `QEMU` as the hypervisor. In order to use `Cloud Hypervisor` instead,
    set:
    ```
    $ export KATA_HYPERVISOR="cloud-hypervisor"
    ```
    before running the build.

- At this point you can provision a Kata confidential containers pod and container with either
  [`crictl`](#using-crictl-for-end-to-end-provisioning-of-a-kata-confidential-containers-pod-with-an-unencrypted-image),
  or [Kubernetes](#using-kubernetes-for-end-to-end-provisioning-of-a-kata-confidential-containers-pod-with-an-unencrypted-image)
  and then test and use it.

### Using crictl for end-to-end provisioning of a Kata confidential containers pod with an unencrypted image

- Run the full build process with Kubernetes turned off, so its configuration doesn't interfere with `crictl`, using:
  ```bash
  $ export KUBERNETES="no"
  $ export KATA_HYPERVISOR="qemu"
  $ ~/ccv0.sh -d build_and_install_all
  ```
  > **Note**: Much of this script has to be run as `sudo`, so you are likely to get prompted for your password.
  - *I run this script sourced just so that the required installed components are accessible on the `PATH` to the rest*
    *of the process without having to reload the session.*
  - The steps that `build_and_install_all` takes are:
    - Check out the git repos for the `tests` and `kata-containers` repos as specified by the environment variables
      (defaulting to the `CCv0` branches if they are not supplied)
    - Use the `tests/.ci` scripts to install the build dependencies
    - Build and install the Kata runtime
    - Configure Kata to use containerd and enable the debug and confidential containers features (including
      enabling console access to the Kata guest shell, which should only be done in development)
    - Create, build and install a rootfs for the Kata hypervisor to use. For `CCv0` this is currently based on Ubuntu
      20.04.
    - Build the Kata guest kernel
    - Install the hypervisor (in order to select which hypervisor will be used, the `KATA_HYPERVISOR` environment
      variable can be used to select between `qemu` or `cloud-hypervisor`)
  > **Note**: Depending on where your VMs are hosted and how IPs are shared, you might get an error from docker
  > matching `ERROR: toomanyrequests: Too Many Requests`. To get past
  > this, log in to Docker Hub and pull the images used with:
  > ```bash
  > $ sudo docker login
  > $ sudo docker pull ubuntu
  > ```
  > then re-run the command.
  - The first time this runs it may take a while, but subsequent runs will be quicker as more things are already
    installed, and they can be further cut down by not running all the above steps;
    [see "Additional script usage" below](#additional-script-usage)

- Create a new Kata sandbox pod using `crictl` with:
  ```bash
  $ ~/ccv0.sh crictl_create_cc_pod
  ```
  - This creates a pod configuration file, creates the pod from this using
    `sudo crictl runp -r kata ~/pod-config.yaml` and runs `sudo crictl pods` to show the pod
- Create a new Kata confidential container with:
  ```bash
  $ ~/ccv0.sh crictl_create_cc_container
  ```
  - This creates a container (based on `busybox:1.33.1`) in the Kata cc sandbox and prints a list of containers.
    This will have been created based on an image pulled in the Kata pod sandbox/guest, not on the host machine.

At this point you should have a `crictl` pod and container that is using the Kata confidential containers runtime.
You can [validate that the container image was pulled on the guest](#validate-that-the-container-image-was-pulled-on-the-guest)
or [use the Kata pod sandbox for testing with `agent-ctl` or `ctr shim`](#using-a-kata-pod-sandbox-for-testing-with-agent-ctl-or-ctr-shim)

#### Clean up the `crictl` pod sandbox and container
- When the testing is complete you can delete the container and pod by running:
  ```bash
  $ ~/ccv0.sh crictl_delete_cc
  ```

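For reference, the `crictl_create_cc_pod` step writes a pod (sandbox) configuration file before calling `crictl runp`. The sketch below shows the general shape such a file can take; the metadata values are illustrative assumptions, not the exact content `ccv0.sh` generates.

```shell
# Write a minimal crictl sandbox (pod) config. The metadata values here are
# illustrative assumptions, not the exact file ccv0.sh generates.
cat > "${HOME}/pod-config.yaml" <<'EOF'
metadata:
  name: busybox-cc
  namespace: default
  attempt: 1
  uid: busybox-cc-sandbox-uid
linux: {}
EOF
```

A file like this could then be passed to `sudo crictl runp -r kata ~/pod-config.yaml` as described above.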
### Using Kubernetes for end-to-end provisioning of a Kata confidential containers pod with an unencrypted image

- Run the full build process with the Kubernetes environment variable set to `"yes"`, so the Kubernetes cluster is
  configured and created using the VM
  as a single node cluster:
  ```bash
  $ export KUBERNETES="yes"
  $ ~/ccv0.sh build_and_install_all
  ```
  > **Note**: Depending on where your VMs are hosted and how IPs are shared, you might get an error from docker
  > matching `ERROR: toomanyrequests: Too Many Requests`. To get past
  > this, log in to Docker Hub and pull the images used with:
  > ```bash
  > $ sudo docker login
  > $ sudo docker pull registry:2
  > $ sudo docker pull ubuntu:20.04
  > ```
  > then re-run the command.
- Check that your Kubernetes cluster has been correctly set up by running:
  ```bash
  $ kubectl get nodes
  ```
  and checking that you see a single node e.g.
  ```text
  NAME                             STATUS   ROLES                  AGE   VERSION
  stevenh-ccv0-k8s1.fyre.ibm.com   Ready    control-plane,master   43s   v1.22.0
  ```
- Create a Kata confidential containers pod by running:
  ```bash
  $ ~/ccv0.sh kubernetes_create_cc_pod
  ```
- Wait a few seconds for the pod to start, then check that the pod's status is `Running` with
  ```bash
  $ kubectl get pods
  ```
  which should show something like:
  ```text
  NAME         READY   STATUS    RESTARTS   AGE
  busybox-cc   1/1     Running   0          54s
  ```

- At this point you should have a Kubernetes pod and container running that is using the Kata
  confidential containers runtime.
  You can [validate that the container image was pulled on the guest](#validate-that-the-container-image-was-pulled-on-the-guest)
  or [use the Kata pod sandbox for testing with `agent-ctl` or `ctr shim`](#using-a-kata-pod-sandbox-for-testing-with-agent-ctl-or-ctr-shim)

#### Clean up the Kubernetes pod sandbox and container
- When the testing is complete you can delete the container and pod by running:
  ```bash
  $ ~/ccv0.sh kubernetes_delete_cc_pod
  ```

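The pod that `kubernetes_create_cc_pod` deploys selects the Kata runtime through a Kubernetes `RuntimeClass`. A minimal sketch of such a pod spec is shown below; the names are assumptions for illustration, and `runtimeClassName` must match whatever RuntimeClass the cluster set-up registered, so this is not necessarily the exact manifest the script applies.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-cc
spec:
  runtimeClassName: kata   # assumed name; must match the registered RuntimeClass
  containers:
    - name: busybox
      image: quay.io/kata-containers/confidential-containers:signed
```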
### Validate that the container image was pulled on the guest

There are a couple of ways we can check that the container image pull was offloaded to the guest: by checking
the guest's file system for the unpacked bundle, and by checking the host's directories to ensure it wasn't also pulled
there.
- To check the guest's file system:
  - Open a shell into the Kata guest with:
    ```bash
    $ ~/ccv0.sh open_kata_shell
    ```
  - List the files in the directory that the container image bundle should have been unpacked to with:
    ```bash
    $ ls -ltr /run/kata-containers/confidential-containers_signed/
    ```
  - This should give something like
    ```
    total 72
    -rw-r--r--  1 root root 2977 Jan 20 10:03 config.json
    drwxr-xr-x 12 root root  240 Jan 20 10:03 rootfs
    ```
    which shows how the image has been pulled and then unbundled on the guest.
  - Leave the Kata guest shell by running:
    ```bash
    $ exit
    ```
- To verify that the image wasn't pulled on the host system, we can look at the shared sandbox on the host, where we
  should only see a single bundle for the pause container, as the `busybox` based container image should have been
  pulled on the guest:
  - Find all the `rootfs` directories under the pod's shared directory with:
    ```bash
    $ pod_id=$(ps -ef | grep containerd-shim-kata-v2 | egrep -o "id [^,][^,].* " | awk '{print $2}')
    $ sudo find /run/kata-containers/shared/sandboxes/${pod_id}/shared -name rootfs
    ```
    which should only show a single `rootfs` directory if the container image was pulled on the guest, not the host
  - Listing that `rootfs` directory with
    ```bash
    $ sudo ls -ltr $(sudo find /run/kata-containers/shared/sandboxes/${pod_id}/shared -name rootfs)
    ```
    shows something similar to
    ```
    total 668
    -rwxr-xr-x 1 root root 682696 Aug 25 13:58 pause
    drwxr-xr-x 2 root root      6 Jan 20 02:01 proc
    drwxr-xr-x 2 root root      6 Jan 20 02:01 dev
    drwxr-xr-x 2 root root      6 Jan 20 02:01 sys
    drwxr-xr-x 2 root root     25 Jan 20 02:01 etc
    ```
    which is clearly the pause container, indicating that the `busybox` based container image is not exposed to the host.

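The host-side check above can be wrapped into a small helper that simply counts the shared `rootfs` directories; a sketch is below (`count_shared_rootfs` is a hypothetical helper, not part of `ccv0.sh`).

```shell
# Count the rootfs directories shared with the host under a pod's shared
# sandbox directory. With guest-side image pulling, only the pause container's
# rootfs should be shared, so the expected count for a one-container pod is 1.
count_shared_rootfs() {
    local shared_dir="$1"
    find "${shared_dir}" -type d -name rootfs | wc -l
}
```

On a real host it would be invoked as `count_shared_rootfs "/run/kata-containers/shared/sandboxes/${pod_id}/shared"` (with `sudo` as shown above).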
### Using a Kata pod sandbox for testing with `agent-ctl` or `ctr shim`

Once you have a Kata pod sandbox created as described above, either using
[`crictl`](#using-crictl-for-end-to-end-provisioning-of-a-kata-confidential-containers-pod-with-an-unencrypted-image), or [Kubernetes](#using-kubernetes-for-end-to-end-provisioning-of-a-kata-confidential-containers-pod-with-an-unencrypted-image),
you can use this to test specific components of the Kata confidential
containers architecture. This can be useful for development and debugging to isolate and test features
that aren't broadly supported end-to-end. Here are some examples:

- In the first terminal, run the pull image on guest command against the Kata agent, via the shim (`containerd-shim-kata-v2`).
  This can be achieved using the [containerd](https://github.com/containerd/containerd) CLI tool, `ctr`, which can be used to
  interact with the shim directly. The command takes the form
  `ctr --namespace k8s.io shim --id <sandbox-id> pull-image <image> <new-container-id>` and can be run directly, or through
  the `ccv0.sh` script to automatically fill in the variables:
  - Optionally, set up some environment variables to set the image and credentials used:
    - By default the shim pull image test in `ccv0.sh` will use the `busybox:1.33.1` based test image
      `quay.io/kata-containers/confidential-containers:signed` which requires no authentication. To use a different
      image, set the `PULL_IMAGE` environment variable e.g.
      ```bash
      $ export PULL_IMAGE="docker.io/library/busybox:latest"
      ```
      Currently the containerd shim pull image
      code doesn't support using a container registry that requires authentication, so if this is required, see the
      below steps to run the pull image command against the agent directly.
  - Run the shim pull image command with:
    ```bash
    $ ~/ccv0.sh shim_pull_image
    ```
    which will print the `ctr shim` command for reference
- Alternatively you can issue the command directly to the `kata-agent` pull image endpoint, which also supports
  credentials in order to pull from an authenticated registry:
  - Optionally set up some environment variables to set the image and credentials used:
    - Set the `PULL_IMAGE` environment variable e.g. `export PULL_IMAGE="docker.io/library/busybox:latest"`
      if a specific container image is required.
    - If the container registry for the image requires authentication then this can be set with the
      `SOURCE_CREDS` environment variable. For example, to use Docker Hub (`docker.io`) as an authenticated user first run
      `export SOURCE_CREDS="<dockerhub username>:<dockerhub api key>"`
      > **Note**: the credentials support on the agent request is a tactical solution for the short-term
      proof of concept to allow more images to be pulled and tested. Once we have support for getting
      keys into the Kata guest image using the attestation-agent and/or KBS I'd expect container registry
      credentials to be looked up using that mechanism.
  - Run the pull image agent endpoint with
    ```bash
    $ ~/ccv0.sh agent_pull_image
    ```
    and you should see output which includes `Command PullImage (1 of 1) returned (Ok(()), false)` to indicate
    that the `PullImage` request was successful e.g.
    ```
    Finished release [optimized] target(s) in 0.21s
    {"msg":"announce","level":"INFO","ts":"2021-09-15T08:40:14.189360410-07:00","subsystem":"rpc","name":"kata-agent-ctl","pid":"830920","version":"0.1.0","source":"kata-agent-ctl","config":"Config { server_address: \"vsock://1970354082:1024\", bundle_dir: \"/tmp/bundle\", timeout_nano: 0, interactive: false, ignore_errors: false }"}
    {"msg":"client setup complete","level":"INFO","ts":"2021-09-15T08:40:14.193639057-07:00","pid":"830920","source":"kata-agent-ctl","name":"kata-agent-ctl","subsystem":"rpc","version":"0.1.0","server-address":"vsock://1970354082:1024"}
    {"msg":"Run command PullImage (1 of 1)","level":"INFO","ts":"2021-09-15T08:40:14.196643765-07:00","pid":"830920","source":"kata-agent-ctl","subsystem":"rpc","name":"kata-agent-ctl","version":"0.1.0"}
    {"msg":"response received","level":"INFO","ts":"2021-09-15T08:40:43.828200633-07:00","source":"kata-agent-ctl","name":"kata-agent-ctl","subsystem":"rpc","version":"0.1.0","pid":"830920","response":""}
    {"msg":"Command PullImage (1 of 1) returned (Ok(()), false)","level":"INFO","ts":"2021-09-15T08:40:43.828261708-07:00","subsystem":"rpc","pid":"830920","source":"kata-agent-ctl","version":"0.1.0","name":"kata-agent-ctl"}
    ```
    > **Note**: The first time that `~/ccv0.sh agent_pull_image` is run, the `agent-ctl` tool will be built,
    which may take a few minutes.
  - To validate that the image pull was successful, you can open a shell into the Kata guest with:
    ```bash
    $ ~/ccv0.sh open_kata_shell
    ```
  - Check the `/run/kata-containers/` directory to verify that the container image bundle has been created in a directory
    named either `01234556789` (for the container id), or the container image name, e.g.
    ```bash
    $ ls -ltr /run/kata-containers/confidential-containers_signed/
    ```
    which should show something like
    ```
    total 72
    drwxr-xr-x 10 root root  200 Jan  1  1970 rootfs
    -rw-r--r--  1 root root 2977 Jan 20 16:45 config.json
    ```
  - Leave the Kata shell by running:
    ```bash
    $ exit
    ```

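The `ctr shim` command form described above can also be assembled from its parts; the sketch below does just that (`build_shim_pull_cmd` is a hypothetical helper for illustration, not part of `ccv0.sh`).

```shell
# Build the "ctr shim pull-image" command line from its parts, following the
# form: ctr --namespace k8s.io shim --id <sandbox-id> pull-image <image> <new-container-id>
# (build_shim_pull_cmd is a hypothetical helper, not part of ccv0.sh.)
build_shim_pull_cmd() {
    local sandbox_id="$1" image="$2" new_container_id="$3"
    echo "ctr --namespace k8s.io shim --id ${sandbox_id} pull-image ${image} ${new_container_id}"
}
```

The resulting string would be run with `sudo`, as `shim_pull_image` in the script does.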
## Verifying signed images

For this sample demo, we use local attestation to pass through the required
configuration to do container image signature verification. Due to this, the ability to verify images is limited
to a pre-created selection of test images in our test
repository [`quay.io/kata-containers/confidential-containers`](https://quay.io/repository/kata-containers/confidential-containers?tab=tags).
For pulling images not in this test repository (called an *unprotected* registry below), we fall back to the behaviour
of not enforcing signatures. More documentation on how to customise this to match your own containers through local,
or remote attestation will be available in the future.

In our test repository there are three tagged images:

| Test Image | Base Image used | Signature status | GPG key status |
| --- | --- | --- | --- |
| `quay.io/kata-containers/confidential-containers:signed` | `busybox:1.33.1` | [signature](https://github.com/kata-containers/tests/tree/CCv0/integration/confidential/fixtures/quay_verification/x86_64/signatures.tar) embedded in kata rootfs | [public key](https://github.com/kata-containers/tests/tree/CCv0/integration/confidential/fixtures/quay_verification/x86_64/public.gpg) embedded in kata rootfs |
| `quay.io/kata-containers/confidential-containers:unsigned` | `busybox:1.33.1` | not signed | not signed |
| `quay.io/kata-containers/confidential-containers:other_signed` | `nginx:1.21.3` | [signature](https://github.com/kata-containers/tests/tree/CCv0/integration/confidential/fixtures/quay_verification/x86_64/signatures.tar) embedded in kata rootfs | GPG key not kept |

Using a standard unsigned `busybox` image that can be pulled from another, *unprotected*, `quay.io` repository we can
test a few scenarios.

In this sample, with local attestation, we pass the public GPG key and signature files, and the [`offline_fs_kbc`
configuration](https://github.com/confidential-containers/attestation-agent/blob/main/src/kbc_modules/offline_fs_kbc/README.md)
into the guest image, which specifies that any container image from `quay.io/kata-containers`
must be signed with the embedded GPG key, and the agent configuration needs updating to enable this.
With this policy set, a few tests of image verification can be done for different scenarios by attempting
to create containers from these images using `crictl`:

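The policy just described (images from `quay.io/kata-containers` must be GPG-signed; anything else falls back to being accepted) corresponds to a containers `policy.json` along these lines. This is an illustrative sketch: the key path and exact file contents are assumptions, not necessarily the file the script installs.

```json
{
    "default": [{"type": "insecureAcceptAnything"}],
    "transports": {
        "docker": {
            "quay.io/kata-containers": [
                {
                    "type": "signedBy",
                    "keyType": "GPGKeys",
                    "keyPath": "/etc/containers/quay_verification/public.gpg"
                }
            ]
        }
    }
}
```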
- If you don't already have the Kata Containers CC code built and configured for `crictl`, then follow the
  [instructions above](#using-crictl-for-end-to-end-provisioning-of-a-kata-confidential-containers-pod-with-an-unencrypted-image)
  up to the `~/ccv0.sh crictl_create_cc_pod` command.

- In order to enable the guest image, you will need to set up the required configuration, policy and signature files
  by running
  `~/ccv0.sh copy_signature_files_to_guest` and then run `~/ccv0.sh crictl_create_cc_pod`, which will delete and recreate
  your pod, adding in the new files.

- To test that the fallback behaviour works using an unsigned image from an *unprotected* registry, we can pull the `busybox`
  image by running:
  ```bash
  $ export CONTAINER_CONFIG_FILE=container-config_unsigned-unprotected.yaml
  $ ~/ccv0.sh crictl_create_cc_container
  ```
  - This finishes showing the running container e.g.
    ```text
    CONTAINER           IMAGE                               CREATED                  STATE     NAME                        ATTEMPT   POD ID
    98c70fefe997a       quay.io/prometheus/busybox:latest   Less than a second ago   Running   prometheus-busybox-signed   0         70119e0539238
    ```
- To test that an unsigned image from our *protected* test container registry is rejected, we can run:
  ```bash
  $ export CONTAINER_CONFIG_FILE=container-config_unsigned-protected.yaml
  $ ~/ccv0.sh crictl_create_cc_container
  ```
  - This correctly results in an error message from `crictl`:
    `PullImage from image service failed" err="rpc error: code = Internal desc = Security validate failed: Validate image failed: The signatures do not satisfied! Reject reason: [Match reference failed.]" image="quay.io/kata-containers/confidential-containers:unsigned"`
- To test that the signed image from our *protected* test container registry is accepted, we can run:
  ```bash
  $ export CONTAINER_CONFIG_FILE=container-config.yaml
  $ ~/ccv0.sh crictl_create_cc_container
  ```
  - This finishes by showing a new `kata-cc-busybox-signed` running container e.g.
    ```text
    CONTAINER           IMAGE                                                    CREATED                  STATE     NAME                     ATTEMPT   POD ID
    b4d85c2132ed9       quay.io/kata-containers/confidential-containers:signed   Less than a second ago   Running   kata-cc-busybox-signed   0         70119e0539238
    ...
    ```
- Finally, to check that the image with a valid signature, but an invalid GPG key (the real trusted piece of information we
  want to protect with the attestation agent in future), is rejected, we can run:
  ```bash
  $ export CONTAINER_CONFIG_FILE=container-config_signed-protected-other.yaml
  $ ~/ccv0.sh crictl_create_cc_container
  ```
  - Again this results in an error message from `crictl`:
    `"PullImage from image service failed" err="rpc error: code = Internal desc = Security validate failed: Validate image failed: The signatures do not satisfied! Reject reason: [signature verify failed! There is no pubkey can verify the signature!]" image="quay.io/kata-containers/confidential-containers:other_signed"`

### Using Kubernetes to create a Kata confidential containers pod from the encrypted ssh demo sample image

The [ssh-demo](https://github.com/confidential-containers/documentation/tree/main/demos/ssh-demo) explains how to
demonstrate creating a Kata confidential containers pod from an encrypted image with the runtime created by the
[confidential-containers operator](https://github.com/confidential-containers/documentation/blob/main/demos/operator-demo).
To be fully confidential, this should be run on a Trusted Execution Environment, but it can be tested on generic
hardware as well.

If you wish to build the Kata confidential containers runtime to do this yourself, then you can use the following
steps:

- Run the full build process with the Kubernetes environment variable set to `"yes"`, so the Kubernetes cluster is
  configured and created using the VM as a single node cluster, and with `AA_KBC` set to `offline_fs_kbc`:
  ```bash
  $ export KUBERNETES="yes"
  $ export AA_KBC=offline_fs_kbc
  $ ~/ccv0.sh build_and_install_all
  ```
  - The `AA_KBC=offline_fs_kbc` mode will ensure that, when creating the rootfs of the Kata guest, the
    [attestation-agent](https://github.com/confidential-containers/attestation-agent) will be added along with the
    [sample offline KBC](https://github.com/confidential-containers/documentation/blob/main/demos/ssh-demo/aa-offline_fs_kbc-keys.json)
    and an agent configuration file.
  > **Note**: Depending on where your VMs are hosted and how IPs are shared, you might get an error from docker
  > matching `ERROR: toomanyrequests: Too Many Requests`. To get past
  > this, log in to Docker Hub and pull the images used with:
  > ```bash
  > $ sudo docker login
  > $ sudo docker pull registry:2
  > $ sudo docker pull ubuntu:20.04
  > ```
  > then re-run the command.
- Check that your Kubernetes cluster has been correctly set up by running:
  ```bash
  $ kubectl get nodes
  ```
  and checking that you see a single node e.g.
  ```text
  NAME                             STATUS   ROLES                  AGE   VERSION
  stevenh-ccv0-k8s1.fyre.ibm.com   Ready    control-plane,master   43s   v1.22.0
  ```
- Create a sample Kata confidential containers ssh pod by running:
  ```bash
  $ ~/ccv0.sh kubernetes_create_ssh_demo_pod
  ```
- At this point you should have a Kubernetes pod running the Kata confidential containers runtime that has pulled
  the [sample image](https://hub.docker.com/r/katadocker/ccv0-ssh), which was encrypted by the key file that we included
  in the rootfs.
  During the pod deployment the image was pulled and then decrypted using the key file, on the Kata guest image, without
  it ever being available to the host.

- To validate that the container is working, you can connect to the image via SSH by running:
  ```bash
  $ ~/ccv0.sh connect_to_ssh_demo_pod
  ```
  - During this connection the host key fingerprint is shown and should match:
    `ED25519 key fingerprint is SHA256:wK7uOpqpYQczcgV00fGCh+X97sJL3f6G1Ku4rvlwtR0.`
  - After you are finished connecting then run:
    ```bash
    $ exit
    ```

- To delete the sample SSH demo pod run:
  ```bash
  $ ~/ccv0.sh kubernetes_delete_ssh_demo_pod
  ```

## Additional script usage

As well as being able to use the script as above to build all of `kata-containers` from scratch, it can be used to
re-build just parts of it by running the script with different parameters. For example, after the first build you will often
not need to re-install the dependencies, the hypervisor or the guest kernel, but just test code changes made to the
runtime and agent. This can be done by running `~/ccv0.sh rebuild_and_install_kata`. (*Note this does a hard checkout*
*from git, so if your changes are only made locally it is better to do the individual steps e.g.*
`~/ccv0.sh build_kata_runtime && ~/ccv0.sh build_and_add_agent_to_rootfs && ~/ccv0.sh build_and_install_rootfs`).
There are commands for a lot of the steps in building, setting up and testing, and the full list can be seen by running
`~/ccv0.sh help`:
```
$ ~/ccv0.sh help
Overview:
    Build and test kata containers from source
    Optionally set kata-containers and tests repo and branch as exported variables before running
    e.g. export katacontainers_repo=github.com/stevenhorsman/kata-containers && export katacontainers_branch=kata-ci-from-fork && export tests_repo=github.com/stevenhorsman/tests && export tests_branch=kata-ci-from-fork && ~/ccv0.sh build_and_install_all
Usage:
    ccv0.sh [options] <command>
Commands:
    - help:                          Display this help
    - all:                           Build and install everything, test kata with containerd and capture the logs
    - build_and_install_all:         Build and install everything
    - initialize:                    Install dependencies and check out kata-containers source
    - rebuild_and_install_kata:      Rebuild the kata runtime and agent and build and install the image
    - build_kata_runtime:            Build and install the kata runtime
    - configure:                     Configure Kata to use rootfs and enable debug
    - create_rootfs:                 Create a local rootfs
    - build_and_add_agent_to_rootfs: Builds the kata-agent and adds it to the rootfs
    - build_and_install_rootfs:      Builds and installs the rootfs image
    - install_guest_kernel:          Setup, build and install the guest kernel
    - build_cloud_hypervisor:        Checkout, patch, build and install Cloud Hypervisor
    - build_qemu:                    Checkout, patch, build and install QEMU
    - init_kubernetes:               Initialize a Kubernetes cluster on this system
    - crictl_create_cc_pod:          Use crictl to create a new kata cc pod
    - crictl_create_cc_container:    Use crictl to create a new busybox container in the kata cc pod
    - crictl_delete_cc:              Use crictl to delete the kata cc pod sandbox and container in it
    - kubernetes_create_cc_pod:      Create a Kata CC runtime busybox-based pod in Kubernetes
    - kubernetes_delete_cc_pod:      Delete the Kata CC runtime busybox-based pod in Kubernetes
    - open_kata_shell:               Open a shell into the kata runtime
    - agent_pull_image:              Run PullImage command against the agent with agent-ctl
    - shim_pull_image:               Run PullImage command against the shim with ctr
    - agent_create_container:        Run CreateContainer command against the agent with agent-ctl
    - test:                          Test using kata with containerd
    - test_capture_logs:             Test using kata with containerd and capture the logs in the user's home directory

Options:
    -d: Enable debug
    -h: Display this help
```
@@ -1,44 +0,0 @@

# Generating a Kata Containers payload for the Confidential Containers Operator

The [Confidential Containers
Operator](https://github.com/confidential-containers/operator) consumes a Kata
Containers payload generated from the `CCv0` branch; this document describes
how to build such a payload.

## Requirements

* `make` installed on the machine
* Docker installed on the machine
* `sudo` access to the machine

## Process

* Clone [Kata Containers](https://github.com/kata-containers/kata-containers)
  ```sh
  git clone --branch CCv0 https://github.com/kata-containers/kata-containers
  ```
* In case you've already cloned the repo, make sure to switch to the `CCv0` branch
  ```sh
  git checkout CCv0
  ```
* Ensure your tree is clean and in sync with upstream `CCv0`
  ```sh
  git clean -xfd
  git reset --hard <upstream>/CCv0
  ```
* Make sure you're authenticated to `quay.io`
  ```sh
  sudo docker login quay.io
  ```
* From the top repo directory, run:
  ```sh
  sudo make cc-payload
  ```
* Make sure the image was uploaded to the [Confidential Containers
  runtime-payload
  registry](https://quay.io/repository/confidential-containers/runtime-payload?tab=tags)
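For reference, the steps above can be collected into a small helper script. This is a sketch only: the script is written to a file here rather than executed, the `/tmp` path is illustrative, and `<upstream>` remains a placeholder you must replace with the name of your upstream remote:

```sh
# Write the payload-build steps to a helper script (sketch, not run here).
cat > /tmp/build-cc-payload.sh << 'EOF'
#!/bin/sh
set -e
git clone --branch CCv0 https://github.com/kata-containers/kata-containers
cd kata-containers
git checkout CCv0
git clean -xfd
# Replace <upstream> with the name of your upstream remote.
git reset --hard <upstream>/CCv0
sudo docker login quay.io
sudo make cc-payload
EOF
chmod +x /tmp/build-cc-payload.sh
```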
## Notes

Make sure to run it on a machine that's not the one you're hacking on, prepare a
cup of tea, and get back to it an hour later (at least).
@@ -94,16 +94,6 @@ There are several kinds of Kata configurations and they are listed below.
| `io.katacontainers.config.hypervisor.enable_guest_swap` | `boolean` | enable swap in the guest |
| `io.katacontainers.config.hypervisor.use_legacy_serial` | `boolean` | uses legacy serial device for guest's console (QEMU) |

## Confidential Computing Options
| Key | Value Type | Comments |
|-------| ----- | ----- |
| `io.katacontainers.config.pre_attestation.enabled` | `bool` | determines if SEV(-ES) attestation is enabled |
| `io.katacontainers.config.pre_attestation.uri` | `string` | specify the location of the attestation server |
| `io.katacontainers.config.sev.policy` | `uint32` | specify the SEV guest policy |
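As a sketch of how keys like these are attached, annotations go on the pod metadata. Everything below (pod name, attestation URI, policy value, runtime class name, image) is illustrative only, not taken from this document:

```sh
# Hypothetical pod spec carrying the Confidential Computing annotations.
cat > /tmp/cc-pod.yaml << 'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cc-demo-pod
  annotations:
    io.katacontainers.config.pre_attestation.enabled: "true"
    io.katacontainers.config.pre_attestation.uri: "10.0.0.1:44444"
    io.katacontainers.config.sev.policy: "3"
spec:
  runtimeClassName: kata
  containers:
  - name: demo
    image: quay.io/prometheus/busybox:latest
EOF
```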
## Container Options
| Key | Value Type | Comments |
|-------| ----- | ----- |
@@ -27,8 +27,6 @@ $ image="quay.io/prometheus/busybox:latest"
$ cat << EOF > "${pod_yaml}"
metadata:
  name: busybox-sandbox1
  uid: $(uuidgen)
  namespace: default
EOF
$ cat << EOF > "${container_yaml}"
metadata:

@@ -32,7 +32,6 @@ The `nydus-sandbox.yaml` looks like below:
metadata:
  attempt: 1
  name: nydus-sandbox
  uid: nydus-uid
  namespace: default
log_directory: /tmp
linux:

@@ -42,8 +42,6 @@ $ image="quay.io/prometheus/busybox:latest"
$ cat << EOF > "${pod_yaml}"
metadata:
  name: busybox-sandbox1
  uid: $(uuidgen)
  namespace: default
EOF
$ cat << EOF > "${container_yaml}"
metadata:
src/agent/Cargo.lock (generated, 3705 lines): file diff suppressed because it is too large.
@@ -23,12 +23,11 @@ regex = "1.5.6"
serial_test = "0.5.1"
kata-sys-util = { path = "../libs/kata-sys-util" }
kata-types = { path = "../libs/kata-types" }
url = "2.2.2"

# Async helpers
async-trait = "0.1.42"
async-recursion = "0.3.2"
futures = "0.3.28"
futures = "0.3.17"

# Async runtime
tokio = { version = "1.28.1", features = ["full"] }
@@ -44,6 +43,7 @@ ipnetwork = "0.17.0"
logging = { path = "../libs/logging" }
slog = "2.5.2"
slog-scope = "4.1.2"
slog-term = "2.9.0"

# Redirect ttrpc log calls
slog-stdlog = "4.0.0"
@@ -67,22 +67,12 @@ serde = { version = "1.0.129", features = ["derive"] }
toml = "0.5.8"
clap = { version = "3.0.1", features = ["derive"] }

# "vendored" feature for openssl is required by musl build
openssl = { version = "0.10.38", features = ["vendored"] }

# Image pull/decrypt
image-rs = { git = "https://github.com/confidential-containers/guest-components", tag = "v0.7.0", default-features = false, features = ["kata-cc-native-tls"] }

[patch.crates-io]
oci-distribution = { git = "https://github.com/krustlet/oci-distribution.git", rev = "f44124c" }

[dev-dependencies]
tempfile = "3.1.0"
test-utils = { path = "../libs/test-utils" }
which = "4.3.0"

[workspace]
resolver = "2"
members = [
    "rustjail",
]
@@ -26,7 +26,7 @@ export VERSION_COMMIT := $(if $(COMMIT),$(VERSION)-$(COMMIT),$(VERSION))
EXTRA_RUSTFEATURES :=

##VAR SECCOMP=yes|no define if agent enables seccomp feature
SECCOMP := yes
SECCOMP ?= yes

# Enable seccomp feature of rust build
ifeq ($(SECCOMP),yes)
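The hunk above changes `SECCOMP := yes` to `SECCOMP ?= yes`, which makes the default overridable from the environment: `?=` only assigns when the variable is not already defined. A minimal sketch of the difference (the `/tmp` makefile is illustrative):

```sh
# Build a tiny makefile using the conditional assignment operator.
# ($(SECCOMP) is kept literal by the single-quoted printf format.)
printf 'SECCOMP ?= yes\nall:\n\t@echo SECCOMP=$(SECCOMP)\n' > /tmp/seccomp-demo.mk

make -s -f /tmp/seccomp-demo.mk              # default applies: SECCOMP=yes
SECCOMP=no make -s -f /tmp/seccomp-demo.mk   # environment wins: SECCOMP=no
```

With the old `:=` form, the makefile assignment would have overwritten the environment value, so `SECCOMP=no make ...` would have had no effect.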
@@ -541,11 +541,8 @@ fn linux_device_to_cgroup_device(d: &LinuxDevice) -> Option<DeviceResource> {
}

fn linux_device_group_to_cgroup_device(d: &LinuxDeviceCgroup) -> Option<DeviceResource> {
    let dev_type = match &d.r#type {
        Some(t_s) => match DeviceType::from_char(t_s.chars().next()) {
            Some(t_c) => t_c,
            None => return None,
        },
    let dev_type = match DeviceType::from_char(d.r#type.chars().next()) {
        Some(t) => t,
        None => return None,
    };

@@ -602,7 +599,7 @@ lazy_static! {
        // all mknod to all char devices
        LinuxDeviceCgroup {
            allow: true,
            r#type: Some("c".to_string()),
            r#type: "c".to_string(),
            major: Some(WILDCARD),
            minor: Some(WILDCARD),
            access: "m".to_string(),
@@ -611,7 +608,7 @@ lazy_static! {
        // all mknod to all block devices
        LinuxDeviceCgroup {
            allow: true,
            r#type: Some("b".to_string()),
            r#type: "b".to_string(),
            major: Some(WILDCARD),
            minor: Some(WILDCARD),
            access: "m".to_string(),
@@ -620,7 +617,7 @@ lazy_static! {
        // all read/write/mknod to char device /dev/console
        LinuxDeviceCgroup {
            allow: true,
            r#type: Some("c".to_string()),
            r#type: "c".to_string(),
            major: Some(5),
            minor: Some(1),
            access: "rwm".to_string(),
@@ -629,7 +626,7 @@ lazy_static! {
        // all read/write/mknod to char device /dev/pts/<N>
        LinuxDeviceCgroup {
            allow: true,
            r#type: Some("c".to_string()),
            r#type: "c".to_string(),
            major: Some(136),
            minor: Some(WILDCARD),
            access: "rwm".to_string(),
@@ -638,7 +635,7 @@ lazy_static! {
        // all read/write/mknod to char device /dev/ptmx
        LinuxDeviceCgroup {
            allow: true,
            r#type: Some("c".to_string()),
            r#type: "c".to_string(),
            major: Some(5),
            minor: Some(2),
            access: "rwm".to_string(),
@@ -647,7 +644,7 @@ lazy_static! {
        // all read/write/mknod to char device /dev/net/tun
        LinuxDeviceCgroup {
            allow: true,
            r#type: Some("c".to_string()),
            r#type: "c".to_string(),
            major: Some(10),
            minor: Some(200),
            access: "rwm".to_string(),
@@ -241,12 +241,6 @@ pub fn resources_grpc_to_oci(res: &grpc::LinuxResources) -> oci::LinuxResources
    let devices = {
        let mut d = Vec::new();
        for dev in res.Devices.iter() {
            let dev_type = if dev.Type.is_empty() {
                None
            } else {
                Some(dev.Type.clone())
            };

            let major = if dev.Major == -1 {
                None
            } else {
@@ -260,7 +254,7 @@ pub fn resources_grpc_to_oci(res: &grpc::LinuxResources) -> oci::LinuxResources
            };
            d.push(oci::LinuxDeviceCgroup {
                allow: dev.Allow,
                r#type: dev_type,
                r#type: dev.Type.clone(),
                major,
                minor,
                access: dev.Access.clone(),
@@ -1118,6 +1118,7 @@ mod tests {
    use std::fs::create_dir;
    use std::fs::create_dir_all;
    use std::fs::remove_dir_all;
    use std::fs::remove_file;
    use std::io;
    use std::os::unix::fs;
    use std::os::unix::io::AsRawFd;
@@ -1333,14 +1334,9 @@ mod tests {
    fn test_mknod_dev() {
        skip_if_not_root!();

        let tempdir = tempdir().unwrap();

        let olddir = unistd::getcwd().unwrap();
        defer!(let _ = unistd::chdir(&olddir););
        let _ = unistd::chdir(tempdir.path());

        let path = "/dev/fifo-test";
        let dev = oci::LinuxDevice {
            path: "/fifo".to_string(),
            path: path.to_string(),
            r#type: "c".to_string(),
            major: 0,
            minor: 0,
@@ -1348,13 +1344,16 @@ mod tests {
            uid: Some(unistd::getuid().as_raw()),
            gid: Some(unistd::getgid().as_raw()),
        };
        let path = Path::new("fifo");

        let ret = mknod_dev(&dev, path);
        let ret = mknod_dev(&dev, Path::new(path));
        assert!(ret.is_ok(), "Should pass. Got: {:?}", ret);

        let ret = stat::stat(path);
        assert!(ret.is_ok(), "Should pass. Got: {:?}", ret);

        // clear test device node
        let ret = remove_file(path);
        assert!(ret.is_ok(), "Should pass, Got: {:?}", ret);
    }

    #[test]
@@ -161,7 +161,7 @@ impl Process {

    pub fn notify_term_close(&mut self) {
        let notify = self.term_exit_notifier.clone();
        notify.notify_one();
        notify.notify_waiters();
    }

    pub fn close_stdin(&mut self) {
@@ -11,7 +11,6 @@ use std::fs;
use std::str::FromStr;
use std::time;
use tracing::instrument;
use url::Url;

use kata_types::config::default::DEFAULT_AGENT_VSOCK_PORT;

@@ -26,14 +25,6 @@ const LOG_VPORT_OPTION: &str = "agent.log_vport";
const CONTAINER_PIPE_SIZE_OPTION: &str = "agent.container_pipe_size";
const UNIFIED_CGROUP_HIERARCHY_OPTION: &str = "agent.unified_cgroup_hierarchy";
const CONFIG_FILE: &str = "agent.config_file";
const AA_KBC_PARAMS: &str = "agent.aa_kbc_params";
const HTTPS_PROXY: &str = "agent.https_proxy";
const NO_PROXY: &str = "agent.no_proxy";
const ENABLE_DATA_INTEGRITY: &str = "agent.data_integrity";
const ENABLE_SIGNATURE_VERIFICATION: &str = "agent.enable_signature_verification";
const IMAGE_POLICY_FILE: &str = "agent.image_policy";
const IMAGE_REGISTRY_AUTH_FILE: &str = "agent.image_registry_auth";
const SIMPLE_SIGNING_SIGSTORE_CONFIG: &str = "agent.simple_signing_sigstore_config";

const DEFAULT_LOG_LEVEL: slog::Level = slog::Level::Info;
const DEFAULT_HOTPLUG_TIMEOUT: time::Duration = time::Duration::from_secs(3);
@@ -86,15 +77,6 @@ pub struct AgentConfig {
    pub tracing: bool,
    pub endpoints: AgentEndpoints,
    pub supports_seccomp: bool,
    pub container_policy_path: String,
    pub aa_kbc_params: String,
    pub https_proxy: String,
    pub no_proxy: String,
    pub data_integrity: bool,
    pub enable_signature_verification: bool,
    pub image_policy_file: String,
    pub image_registry_auth_file: String,
    pub simple_signing_sigstore_config: String,
}

#[derive(Debug, Deserialize)]
@@ -110,15 +92,6 @@ pub struct AgentConfigBuilder {
    pub unified_cgroup_hierarchy: Option<bool>,
    pub tracing: Option<bool>,
    pub endpoints: Option<EndpointsConfig>,
    pub container_policy_path: Option<String>,
    pub aa_kbc_params: Option<String>,
    pub https_proxy: Option<String>,
    pub no_proxy: Option<String>,
    pub data_integrity: Option<bool>,
    pub enable_signature_verification: Option<bool>,
    pub image_policy_file: Option<String>,
    pub image_registry_auth_file: Option<String>,
    pub simple_signing_sigstore_config: Option<String>,
}

macro_rules! config_override {
@@ -180,15 +153,6 @@ impl Default for AgentConfig {
            tracing: false,
            endpoints: Default::default(),
            supports_seccomp: rpc::have_seccomp(),
            container_policy_path: String::from(""),
            aa_kbc_params: String::from(""),
            https_proxy: String::from(""),
            no_proxy: String::from(""),
            data_integrity: false,
            enable_signature_verification: true,
            image_policy_file: String::from(""),
            image_registry_auth_file: String::from(""),
            simple_signing_sigstore_config: String::from(""),
        }
    }
}
@@ -217,23 +181,6 @@ impl FromStr for AgentConfig {
        config_override!(agent_config_builder, agent_config, server_addr);
        config_override!(agent_config_builder, agent_config, unified_cgroup_hierarchy);
        config_override!(agent_config_builder, agent_config, tracing);
        config_override!(agent_config_builder, agent_config, container_policy_path);
        config_override!(agent_config_builder, agent_config, aa_kbc_params);
        config_override!(agent_config_builder, agent_config, https_proxy);
        config_override!(agent_config_builder, agent_config, no_proxy);
        config_override!(agent_config_builder, agent_config, data_integrity);
        config_override!(
            agent_config_builder,
            agent_config,
            enable_signature_verification
        );
        config_override!(agent_config_builder, agent_config, image_policy_file);
        config_override!(agent_config_builder, agent_config, image_registry_auth_file);
        config_override!(
            agent_config_builder,
            agent_config,
            simple_signing_sigstore_config
        );

        // Populate the allowed endpoints hash set, if we got any from the config file.
        if let Some(endpoints) = agent_config_builder.endpoints {
@@ -262,10 +209,6 @@ impl AgentConfig {
        let mut config: AgentConfig = Default::default();
        let cmdline = fs::read_to_string(file)?;
        let params: Vec<&str> = cmdline.split_ascii_whitespace().collect();

        let mut using_config_file = false;
        // Check if there is config file before parsing params that might
        // override values from the config file.
        for param in params.iter() {
            // If we get a configuration file path from the command line, we
            // generate our config from it.
@@ -273,15 +216,10 @@ impl AgentConfig {
            // or if it can't be parsed properly.
            if param.starts_with(format!("{}=", CONFIG_FILE).as_str()) {
                let config_file = get_string_value(param)?;
                config = AgentConfig::from_config_file(&config_file)
                    .context("AgentConfig from kernel cmdline")
                    .unwrap();
                using_config_file = true;
                break;
                return AgentConfig::from_config_file(&config_file)
                    .context("AgentConfig from kernel cmdline");
            }
        }

        for param in params.iter() {
            // parse cmdline flags
            parse_cmdline_param!(param, DEBUG_CONSOLE_FLAG, config.debug_console);
            parse_cmdline_param!(param, DEV_MODE_FLAG, config.dev_mode);
@@ -341,48 +279,6 @@ impl AgentConfig {
                config.unified_cgroup_hierarchy,
                get_bool_value
            );

            parse_cmdline_param!(param, AA_KBC_PARAMS, config.aa_kbc_params, get_string_value);
            parse_cmdline_param!(param, HTTPS_PROXY, config.https_proxy, get_url_value);
            parse_cmdline_param!(param, NO_PROXY, config.no_proxy, get_string_value);
            parse_cmdline_param!(
                param,
                ENABLE_DATA_INTEGRITY,
                config.data_integrity,
                get_bool_value
            );

            parse_cmdline_param!(
                param,
                ENABLE_SIGNATURE_VERIFICATION,
                config.enable_signature_verification,
                get_bool_value
            );

            // URI of the image security file
            parse_cmdline_param!(
                param,
                IMAGE_POLICY_FILE,
                config.image_policy_file,
                get_string_value
            );

            // URI of the registry auth file
            parse_cmdline_param!(
                param,
                IMAGE_REGISTRY_AUTH_FILE,
                config.image_registry_auth_file,
                get_string_value
            );

            // URI of the simple signing sigstore file
            // used when simple signing verification is used
            parse_cmdline_param!(
                param,
                SIMPLE_SIGNING_SIGSTORE_CONFIG,
                config.simple_signing_sigstore_config,
                get_string_value
            );
        }

        if let Ok(addr) = env::var(SERVER_ADDR_ENV_VAR) {
@@ -402,9 +298,7 @@ impl AgentConfig {
        }

        // We did not get a configuration file: allow all endpoints.
        if !using_config_file {
            config.endpoints.all_allowed = true;
        }
        config.endpoints.all_allowed = true;

        Ok(config)
    }
@@ -539,12 +433,6 @@ fn get_container_pipe_size(param: &str) -> Result<i32> {
    Ok(value)
}

#[instrument]
fn get_url_value(param: &str) -> Result<String> {
    let value = get_string_value(param)?;
    Ok(Url::parse(&value)?.to_string())
}

#[cfg(test)]
mod tests {
    use test_utils::assert_result;
@@ -563,11 +451,6 @@ mod tests {
        assert!(!config.dev_mode);
        assert_eq!(config.log_level, DEFAULT_LOG_LEVEL);
        assert_eq!(config.hotplug_timeout, DEFAULT_HOTPLUG_TIMEOUT);
        assert_eq!(config.container_policy_path, "");
        assert!(config.enable_signature_verification);
        assert_eq!(config.image_policy_file, "");
        assert_eq!(config.image_registry_auth_file, "");
        assert_eq!(config.simple_signing_sigstore_config, "");
    }

    #[test]
@@ -586,15 +469,6 @@ mod tests {
        server_addr: &'a str,
        unified_cgroup_hierarchy: bool,
        tracing: bool,
        container_policy_path: &'a str,
        aa_kbc_params: &'a str,
        https_proxy: &'a str,
        no_proxy: &'a str,
        data_integrity: bool,
        enable_signature_verification: bool,
        image_policy_file: &'a str,
        image_registry_auth_file: &'a str,
        simple_signing_sigstore_config: &'a str,
    }

    impl Default for TestData<'_> {
@@ -610,15 +484,6 @@ mod tests {
                server_addr: TEST_SERVER_ADDR,
                unified_cgroup_hierarchy: false,
                tracing: false,
                container_policy_path: "",
                aa_kbc_params: "",
                https_proxy: "",
                no_proxy: "",
                data_integrity: false,
                enable_signature_verification: true,
                image_policy_file: "",
                image_registry_auth_file: "",
                simple_signing_sigstore_config: "",
            }
        }
    }
@@ -988,126 +853,6 @@ mod tests {
                tracing: true,
                ..Default::default()
            },
            TestData {
                contents: "agent.aa_kbc_params=offline_fs_kbc::null",
                aa_kbc_params: "offline_fs_kbc::null",
                ..Default::default()
            },
            TestData {
                contents: "agent.aa_kbc_params=eaa_kbc::127.0.0.1:50000",
                aa_kbc_params: "eaa_kbc::127.0.0.1:50000",
                ..Default::default()
            },
            TestData {
                contents: "agent.https_proxy=http://proxy.url.com:81/",
                https_proxy: "http://proxy.url.com:81/",
                ..Default::default()
            },
            TestData {
                contents: "agent.https_proxy=http://192.168.1.100:81/",
                https_proxy: "http://192.168.1.100:81/",
                ..Default::default()
            },
            TestData {
                contents: "agent.no_proxy=*.internal.url.com",
                no_proxy: "*.internal.url.com",
                ..Default::default()
            },
            TestData {
                contents: "agent.no_proxy=192.168.1.0/24,172.16.0.0/12",
                no_proxy: "192.168.1.0/24,172.16.0.0/12",
                ..Default::default()
            },
            TestData {
                contents: "",
                data_integrity: false,
                ..Default::default()
            },
            TestData {
                contents: "agent.data_integrity=true",
                data_integrity: true,
                ..Default::default()
            },
            TestData {
                contents: "agent.data_integrity=false",
                data_integrity: false,
                ..Default::default()
            },
            TestData {
                contents: "agent.data_integrity=1",
                data_integrity: true,
                ..Default::default()
            },
            TestData {
                contents: "agent.data_integrity=0",
                data_integrity: false,
                ..Default::default()
            },
            TestData {
                contents: "agent.enable_signature_verification=false",
                enable_signature_verification: false,
                ..Default::default()
            },
            TestData {
                contents: "agent.enable_signature_verification=0",
                enable_signature_verification: false,
                ..Default::default()
            },
            TestData {
                contents: "agent.enable_signature_verification=1",
                enable_signature_verification: true,
                ..Default::default()
            },
            TestData {
                contents: "agent.enable_signature_verification=foo",
                enable_signature_verification: false,
                ..Default::default()
            },
            TestData {
                contents: "agent.image_policy=file:///etc/policy.json",
                image_policy_file: "file:///etc/policy.json",
                ..Default::default()
            },
            TestData {
                contents: "agent.image_policy=kbs:///default/security-policy/test",
                image_policy_file: "kbs:///default/security-policy/test",
                ..Default::default()
            },
            TestData {
                contents: "agent.image_policy=kbs://example.kbs.org/default/security-policy/test",
                image_policy_file: "kbs://example.kbs.org/default/security-policy/test",
                ..Default::default()
            },
            TestData {
                contents: "agent.image_registry_auth=file:///etc/auth.json",
                image_registry_auth_file: "file:///etc/auth.json",
                ..Default::default()
            },
            TestData {
                contents: "agent.image_registry_auth=kbs:///default/credential/test",
                image_registry_auth_file: "kbs:///default/credential/test",
                ..Default::default()
            },
            TestData {
                contents: "agent.image_registry_auth=kbs://example.kbs.org/default/credential/test",
                image_registry_auth_file: "kbs://example.kbs.org/default/credential/test",
                ..Default::default()
            },
            TestData {
                contents: "agent.simple_signing_sigstore_config=file:///etc/containers/signature/default.yml",
                simple_signing_sigstore_config: "file:///etc/containers/signature/default.yml",
                ..Default::default()
            },
            TestData {
                contents: "agent.simple_signing_sigstore_config=kbs:///default/sigstore-config/test",
                simple_signing_sigstore_config: "kbs:///default/sigstore-config/test",
                ..Default::default()
            },
            TestData {
                contents: "agent.simple_signing_sigstore_config=kbs://example.kbs.org/default/sigstore-config/test",
                simple_signing_sigstore_config: "kbs://example.kbs.org/default/sigstore-config/test",
                ..Default::default()
            },
        ];

        let dir = tempdir().expect("failed to create tmpdir");
@@ -1155,31 +900,6 @@ mod tests {
        assert_eq!(d.container_pipe_size, config.container_pipe_size, "{}", msg);
        assert_eq!(d.server_addr, config.server_addr, "{}", msg);
        assert_eq!(d.tracing, config.tracing, "{}", msg);
        assert_eq!(
            d.container_policy_path, config.container_policy_path,
            "{}",
            msg
        );
        assert_eq!(d.aa_kbc_params, config.aa_kbc_params, "{}", msg);
        assert_eq!(d.https_proxy, config.https_proxy, "{}", msg);
        assert_eq!(d.no_proxy, config.no_proxy, "{}", msg);
        assert_eq!(d.data_integrity, config.data_integrity, "{}", msg);
        assert_eq!(
            d.enable_signature_verification, config.enable_signature_verification,
            "{}",
            msg
        );
        assert_eq!(d.image_policy_file, config.image_policy_file, "{}", msg);
        assert_eq!(
            d.image_registry_auth_file, config.image_registry_auth_file,
            "{}",
            msg
        );
        assert_eq!(
            d.simple_signing_sigstore_config, config.simple_signing_sigstore_config,
            "{}",
            msg
        );

        for v in vars_to_unset {
            env::remove_var(v);
@@ -1681,50 +1401,4 @@ Caused by:
        // Verify that the default values are valid
        assert_eq!(config.hotplug_timeout, DEFAULT_HOTPLUG_TIMEOUT);
    }

    #[test]
    fn test_config_from_cmdline_and_config_file() {
        let dir = tempdir().expect("failed to create tmpdir");

        let agent_config = r#"
            dev_mode = false
            server_addr = 'vsock://8:2048'

            [endpoints]
            allowed = ["CreateContainer", "StartContainer"]
        "#;

        let config_path = dir.path().join("agent-config.toml");
        let config_filename = config_path.to_str().expect("failed to get config filename");

        fs::write(config_filename, agent_config).expect("failed to write agen config");

        let cmdline = format!("agent.devmode agent.config_file={}", config_filename);

        let cmdline_path = dir.path().join("cmdline");
        let cmdline_filename = cmdline_path
            .to_str()
            .expect("failed to get cmdline filename");

        fs::write(cmdline_filename, cmdline).expect("failed to write agen config");

        let config = AgentConfig::from_cmdline(cmdline_filename, vec![])
            .expect("failed to parse command line");

        // Should be overwritten by cmdline
        assert!(config.dev_mode);

        // Should be from agent config
        assert_eq!(config.server_addr, "vsock://8:2048");

        // Should be from agent config
        assert_eq!(
            config.endpoints.allowed,
            vec!["CreateContainer".to_string(), "StartContainer".to_string()]
                .iter()
                .cloned()
                .collect()
        );
        assert!(!config.endpoints.all_allowed);
    }
}
@@ -651,15 +651,13 @@ fn update_spec_devices(spec: &mut Spec, mut updates: HashMap<&str, DevUpdate>) -

    if let Some(resources) = linux.resources.as_mut() {
        for r in &mut resources.devices {
            if let (Some(host_type), Some(host_major), Some(host_minor)) =
                (r.r#type.as_ref(), r.major, r.minor)
            {
                if let Some(update) = res_updates.get(&(host_type.as_str(), host_major, host_minor))
            if let (Some(host_major), Some(host_minor)) = (r.major, r.minor) {
                if let Some(update) = res_updates.get(&(r.r#type.as_str(), host_major, host_minor))
                {
                    info!(
                        sl(),
                        "update_spec_devices() updating resource";
                        "type" => &host_type,
                        "type" => &r.r#type,
                        "host_major" => host_major,
                        "host_minor" => host_minor,
                        "guest_major" => update.guest_major,
@@ -971,7 +969,7 @@ pub fn update_device_cgroup(spec: &mut Spec) -> Result<()> {
        allow: false,
        major: Some(major),
        minor: Some(minor),
        r#type: Some(String::from("b")),
        r#type: String::from("b"),
        access: String::from("rw"),
    });

@@ -1134,13 +1132,13 @@ mod tests {
        resources: Some(LinuxResources {
            devices: vec![
                oci::LinuxDeviceCgroup {
                    r#type: Some("c".to_string()),
                    r#type: "c".to_string(),
                    major: Some(host_major_a),
                    minor: Some(host_minor_a),
                    ..oci::LinuxDeviceCgroup::default()
                },
                oci::LinuxDeviceCgroup {
                    r#type: Some("c".to_string()),
                    r#type: "c".to_string(),
                    major: Some(host_major_b),
                    minor: Some(host_minor_b),
                    ..oci::LinuxDeviceCgroup::default()
@@ -1233,13 +1231,13 @@ mod tests {
        resources: Some(LinuxResources {
            devices: vec![
                LinuxDeviceCgroup {
                    r#type: Some("c".to_string()),
                    r#type: "c".to_string(),
                    major: Some(host_major),
                    minor: Some(host_minor),
                    ..LinuxDeviceCgroup::default()
                },
                LinuxDeviceCgroup {
                    r#type: Some("b".to_string()),
                    r#type: "b".to_string(),
                    major: Some(host_major),
                    minor: Some(host_minor),
                    ..LinuxDeviceCgroup::default()
@@ -1,366 +0,0 @@
// Copyright (c) 2021 Alibaba Cloud
// Copyright (c) 2021, 2023 IBM Corporation
// Copyright (c) 2022 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
//

use std::env;
use std::fs;
use std::path::Path;
use std::process::Command;
use std::sync::atomic::{AtomicBool, AtomicU16, Ordering};
use std::sync::Arc;

use anyhow::{anyhow, Result};
use async_trait::async_trait;
use protocols::image;
use tokio::sync::Mutex;
use ttrpc::{self, error::get_rpc_status as ttrpc_error};

use crate::rpc::{verify_cid, CONTAINER_BASE};
use crate::sandbox::Sandbox;
use crate::AGENT_CONFIG;

use image_rs::image::ImageClient;
use std::io::Write;

const AA_PATH: &str = "/usr/local/bin/attestation-agent";

const AA_KEYPROVIDER_URI: &str =
    "unix:///run/confidential-containers/attestation-agent/keyprovider.sock";
const AA_GETRESOURCE_URI: &str =
    "unix:///run/confidential-containers/attestation-agent/getresource.sock";

const OCICRYPT_CONFIG_PATH: &str = "/tmp/ocicrypt_config.json";
// kata rootfs is readonly, use tmpfs before CC storage is implemented.
const KATA_CC_IMAGE_WORK_DIR: &str = "/run/image/";
const KATA_CC_PAUSE_BUNDLE: &str = "/pause_bundle";
const CONFIG_JSON: &str = "config.json";

// Convenience function to obtain the scope logger.
fn sl() -> slog::Logger {
    slog_scope::logger().new(o!("subsystem" => "cgroups"))
}

pub struct ImageService {
    sandbox: Arc<Mutex<Sandbox>>,
    attestation_agent_started: AtomicBool,
    image_client: Arc<Mutex<ImageClient>>,
    container_count: Arc<AtomicU16>,
}

impl ImageService {
    pub async fn new(sandbox: Arc<Mutex<Sandbox>>) -> Self {
        env::set_var("CC_IMAGE_WORK_DIR", KATA_CC_IMAGE_WORK_DIR);
        let mut image_client = ImageClient::default();

        let image_policy_file = &AGENT_CONFIG.image_policy_file;
        if !image_policy_file.is_empty() {
            image_client.config.file_paths.sigstore_config = image_policy_file.clone();
        }

        let simple_signing_sigstore_config = &AGENT_CONFIG.simple_signing_sigstore_config;
        if !simple_signing_sigstore_config.is_empty() {
            image_client.config.file_paths.sigstore_config = simple_signing_sigstore_config.clone();
        }

        let image_registry_auth_file = &AGENT_CONFIG.image_registry_auth_file;
        if !image_registry_auth_file.is_empty() {
            image_client.config.file_paths.auth_file = image_registry_auth_file.clone();
        }

        Self {
            sandbox,
            attestation_agent_started: AtomicBool::new(false),
            image_client: Arc::new(Mutex::new(image_client)),
            container_count: Arc::new(AtomicU16::new(0)),
        }
    }

    // pause image is packaged in rootfs for CC
    fn unpack_pause_image(cid: &str) -> Result<()> {
        let cc_pause_bundle = Path::new(KATA_CC_PAUSE_BUNDLE);
        if !cc_pause_bundle.exists() {
            return Err(anyhow!("Pause image not present in rootfs"));
        }

        info!(sl(), "use guest pause image cid {:?}", cid);
        let pause_bundle = Path::new(CONTAINER_BASE).join(cid);
        let pause_rootfs = pause_bundle.join("rootfs");
        let pause_config = pause_bundle.join(CONFIG_JSON);
        let pause_binary = pause_rootfs.join("pause");
        fs::create_dir_all(&pause_rootfs)?;
        if !pause_config.exists() {
            fs::copy(
                cc_pause_bundle.join(CONFIG_JSON),
                pause_bundle.join(CONFIG_JSON),
            )?;
        }
        if !pause_binary.exists() {
            fs::copy(cc_pause_bundle.join("rootfs").join("pause"), pause_binary)?;
        }

        Ok(())
    }

    // If we fail to start the AA, ocicrypt won't be able to unwrap keys
    // and container decryption will fail.
    fn init_attestation_agent() -> Result<()> {
        let config_path = OCICRYPT_CONFIG_PATH;

        // The image will need to be encrypted using a keyprovider
        // that has the same name (at least according to the config).
        let ocicrypt_config = serde_json::json!({
            "key-providers": {
                "attestation-agent":{
                    "ttrpc":AA_KEYPROVIDER_URI
                }
            }
        });

        let mut config_file = fs::File::create(config_path)?;
        config_file.write_all(ocicrypt_config.to_string().as_bytes())?;

        // The Attestation Agent will run for the duration of the guest.
        Command::new(AA_PATH)
            .arg("--keyprovider_sock")
            .arg(AA_KEYPROVIDER_URI)
            .arg("--getresource_sock")
            .arg(AA_GETRESOURCE_URI)
            .spawn()?;
        Ok(())
    }

    /// Determines the container id (cid) to use for a given request.
    ///
    /// If the request specifies a non-empty id, use it; otherwise derive it from the image path.
    /// In either case, verify that the chosen id is valid.
    fn cid_from_request(&self, req: &image::PullImageRequest) -> Result<String> {
        let req_cid = req.container_id();
        let cid = if !req_cid.is_empty() {
            req_cid.to_string()
        } else if let Some(last) = req.image().rsplit('/').next() {
            // Support multiple containers with the same image
            let index = self.container_count.fetch_add(1, Ordering::Relaxed);

            // ':' is not valid in a container id
            format!("{}_{}", last.replace(':', "_"), index)
        } else {
            return Err(anyhow!("Invalid image name. {}", req.image()));
        };
        verify_cid(&cid)?;
        Ok(cid)
    }

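The id derivation in `cid_from_request` can be exercised standalone. A minimal sketch with a hypothetical `derive_cid` helper (the name and free-function shape are illustrative, not part of the agent source): take the last `/`-separated segment of the image reference, replace `:` (not valid in a container id) with `_`, and append a counter so several containers can share one image.

```rust
// Hypothetical standalone sketch of the cid derivation performed by
// cid_from_request when the request carries no container id.
fn derive_cid(image: &str, index: u16) -> Option<String> {
    image
        .rsplit('/')
        .next() // last path segment of the image reference
        .map(|last| format!("{}_{}", last.replace(':', "_"), index))
}

fn main() {
    // These mirror the expectations in the test_cid_from_request table.
    assert_eq!(derive_cid("prefix/a:b", 6), Some("a_b_6".to_string()));
    assert_eq!(derive_cid("/a/b/c/d:e", 7), Some("d_e_7".to_string()));
    println!("ok");
}
```

Note the derived id must still pass `verify_cid`; an empty segment from a trailing `/` is rejected there.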
    async fn pull_image(&self, req: &image::PullImageRequest) -> Result<String> {
        env::set_var("OCICRYPT_KEYPROVIDER_CONFIG", OCICRYPT_CONFIG_PATH);

        let https_proxy = &AGENT_CONFIG.https_proxy;
        if !https_proxy.is_empty() {
            env::set_var("HTTPS_PROXY", https_proxy);
        }

        let no_proxy = &AGENT_CONFIG.no_proxy;
        if !no_proxy.is_empty() {
            env::set_var("NO_PROXY", no_proxy);
        }

        let cid = self.cid_from_request(req)?;
        let image = req.image();
        if cid.starts_with("pause") {
            Self::unpack_pause_image(&cid)?;

            let mut sandbox = self.sandbox.lock().await;
            sandbox.images.insert(String::from(image), cid);
            return Ok(image.to_owned());
        }

        let aa_kbc_params = &AGENT_CONFIG.aa_kbc_params;
        if !aa_kbc_params.is_empty() {
            match self.attestation_agent_started.compare_exchange_weak(
                false,
                true,
                Ordering::SeqCst,
                Ordering::SeqCst,
            ) {
                Ok(_) => Self::init_attestation_agent()?,
                Err(_) => info!(sl(), "Attestation Agent already running"),
            }
        }
        // If the attestation-agent is being used, then enable the authenticated credentials support
        info!(
            sl(),
            "image_client.config.auth set to: {}",
            !aa_kbc_params.is_empty()
        );
        self.image_client.lock().await.config.auth = !aa_kbc_params.is_empty();

        // Read enable signature verification from the agent config and set it in the image_client
        let enable_signature_verification = &AGENT_CONFIG.enable_signature_verification;
        info!(
            sl(),
            "enable_signature_verification set to: {}", enable_signature_verification
        );
        self.image_client.lock().await.config.security_validate = *enable_signature_verification;

        let source_creds = (!req.source_creds().is_empty()).then(|| req.source_creds());

        let bundle_path = Path::new(CONTAINER_BASE).join(&cid);
        fs::create_dir_all(&bundle_path)?;

        let decrypt_config = format!("provider:attestation-agent:{}", aa_kbc_params);

        info!(sl(), "pull image {:?}, bundle path {:?}", cid, bundle_path);
        // Image layers will be stored at KATA_CC_IMAGE_WORK_DIR; generated bundles
        // with rootfs and config.json will be stored under CONTAINER_BASE/cid.
        let res = self
            .image_client
            .lock()
            .await
            .pull_image(image, &bundle_path, &source_creds, &Some(&decrypt_config))
            .await;

        match res {
            Ok(image) => {
                info!(
                    sl(),
                    "pull and unpack image {:?}, cid: {:?}, with image-rs succeed. ", image, cid
                );
            }
            Err(e) => {
                error!(
                    sl(),
                    "pull and unpack image {:?}, cid: {:?}, with image-rs failed with {:?}. ",
                    image,
                    cid,
                    e.to_string()
                );
                return Err(e);
            }
        };

        let mut sandbox = self.sandbox.lock().await;
        sandbox.images.insert(String::from(image), cid);
        Ok(image.to_owned())
    }
}

#[async_trait]
impl protocols::image_ttrpc_async::Image for ImageService {
    async fn pull_image(
        &self,
        _ctx: &ttrpc::r#async::TtrpcContext,
        req: image::PullImageRequest,
    ) -> ttrpc::Result<image::PullImageResponse> {
        match self.pull_image(&req).await {
            Ok(r) => {
                let mut resp = image::PullImageResponse::new();
                resp.image_ref = r;
                return Ok(resp);
            }
            Err(e) => {
                return Err(ttrpc_error(ttrpc::Code::INTERNAL, e.to_string()));
            }
        }
    }
}

#[cfg(test)]
mod tests {
    use super::ImageService;
    use crate::sandbox::Sandbox;
    use protocols::image;
    use std::sync::Arc;
    use tokio::sync::Mutex;

    #[tokio::test]
    async fn test_cid_from_request() {
        struct Case {
            cid: &'static str,
            image: &'static str,
            result: Option<&'static str>,
        }

        let cases = [
            Case {
                cid: "",
                image: "",
                result: None,
            },
            Case {
                cid: "..",
                image: "",
                result: None,
            },
            Case {
                cid: "",
                image: "..",
                result: None,
            },
            Case {
                cid: "",
                image: "abc/..",
                result: None,
            },
            Case {
                cid: "",
                image: "abc/",
                result: None,
            },
            Case {
                cid: "",
                image: "../abc",
                result: Some("abc_4"),
            },
            Case {
                cid: "",
                image: "../9abc",
                result: Some("9abc_5"),
            },
            Case {
                cid: "some-string.1_2",
                image: "",
                result: Some("some-string.1_2"),
            },
            Case {
                cid: "0some-string.1_2",
                image: "",
                result: Some("0some-string.1_2"),
            },
            Case {
                cid: "a:b",
                image: "",
                result: None,
            },
            Case {
                cid: "",
                image: "prefix/a:b",
                result: Some("a_b_6"),
            },
            Case {
                cid: "",
                image: "/a/b/c/d:e",
                result: Some("d_e_7"),
            },
        ];

        let logger = slog::Logger::root(slog::Discard, o!());
        let s = Sandbox::new(&logger).unwrap();
        let image_service = ImageService::new(Arc::new(Mutex::new(s))).await;
        for case in &cases {
            let mut req = image::PullImageRequest::new();
            req.set_image(case.image.to_string());
            req.set_container_id(case.cid.to_string());
            let ret = image_service.cid_from_request(&req);
            match (case.result, ret) {
                (Some(expected), Ok(actual)) => assert_eq!(expected, actual),
                (None, Err(_)) => (),
                (None, Ok(r)) => panic!("Expected an error, got {}", r),
                (Some(expected), Err(e)) => {
                    panic!("Expected {} but got an error ({})", expected, e)
                }
            }
        }
    }
}
@@ -33,7 +33,7 @@ pub fn create_pci_root_bus_path() -> String {

    // check if there is pci bus path for acpi
    acpi_sysfs_dir.push_str(&acpi_root_bus_path);
    if let Ok(_) = fs::metadata(&acpi_sysfs_dir) {
    if fs::metadata(&acpi_sysfs_dir).is_ok() {
        return acpi_root_bus_path;
    }

@@ -70,7 +70,6 @@ use tokio::{
    task::JoinHandle,
};

mod image_rpc;
mod rpc;
mod tracer;

@@ -345,7 +344,7 @@ async fn start_sandbox(
    sandbox.lock().await.sender = Some(tx);

    // vsock:///dev/vsock, port
    let mut server = rpc::start(sandbox.clone(), config.server_addr.as_str(), init_mode).await?;
    let mut server = rpc::start(sandbox.clone(), config.server_addr.as_str(), init_mode)?;
    server.start().await?;

    rx.await?;

@@ -36,6 +36,7 @@ use crate::Sandbox;
use crate::{ccw, device::get_virtio_blk_ccw_device_name};
use anyhow::{anyhow, Context, Result};
use slog::Logger;

use tracing::instrument;

pub const TYPE_ROOTFS: &str = "rootfs";
@@ -145,6 +146,11 @@ pub const STORAGE_HANDLER_LIST: &[&str] = &[
    DRIVER_WATCHABLE_BIND_TYPE,
];

#[instrument]
pub fn get_mounts() -> Result<String, std::io::Error> {
    fs::read_to_string("/proc/mounts")
}

#[instrument]
pub fn baremount(
    source: &Path,
@@ -168,6 +174,31 @@ pub fn baremount(
        return Err(anyhow!("need mount FS type"));
    }

    let destination_str = destination.to_string_lossy();
    let mounts = get_mounts().unwrap_or_else(|_| String::new());
    let already_mounted = mounts
        .lines()
        .map(|line| line.split_whitespace().collect::<Vec<&str>>())
        .filter(|parts| parts.len() >= 3) // ensure we have at least [source, destination, fs_type]
        .any(|parts| {
            // Check if destination and fs_type match any entry in /proc/mounts.
            // The minimal check is for destination and fs_type, since the source
            // can have different names like:
            //   udev /dev devtmpfs
            //   dev /dev devtmpfs
            // depending on which entity is mounting the dev/fs/pseudo-fs.
            parts[1] == destination_str && parts[2] == fs_type
        });

    if already_mounted {
        slog_info!(
            logger,
            "{:?} is already mounted at {:?}",
            source,
            destination
        );
        return Ok(());
    }

    info!(
        logger,
        "baremount source={:?}, dest={:?}, fs_type={:?}, options={:?}, flags={:?}",
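The already-mounted check added to `baremount` can be exercised against sample `/proc/mounts`-style lines. A sketch with a hypothetical `is_already_mounted` helper (the helper name and sample entries are illustrative, not from the agent source):

```rust
// Sketch of the "already mounted" test above, run against in-memory
// /proc/mounts-style lines instead of the real file.
fn is_already_mounted(mounts: &str, destination: &str, fs_type: &str) -> bool {
    mounts
        .lines()
        .map(|line| line.split_whitespace().collect::<Vec<&str>>())
        .filter(|parts| parts.len() >= 3) // need at least source, destination, fs_type
        // Source names differ ("udev" vs "dev"), so compare only
        // destination and fs_type, as the comment above explains.
        .any(|parts| parts[1] == destination && parts[2] == fs_type)
}

fn main() {
    let mounts = "udev /dev devtmpfs rw 0 0\nproc /proc proc rw 0 0\n";
    assert!(is_already_mounted(mounts, "/dev", "devtmpfs"));
    assert!(is_already_mounted(mounts, "/proc", "proc"));
    assert!(!is_already_mounted(mounts, "/sys", "sysfs"));
    println!("ok");
}
```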
@@ -725,6 +756,14 @@ pub fn recursive_ownership_change(
        mask |= EXEC_MASK;
        mask |= MODE_SETGID;
    }

    // We do not want to change the permissions of the underlying file
    // through a symlink. Hence we skip symlinks from recursive ownership
    // and permission changes.
    if path.is_symlink() {
        return Ok(());
    }

    nix::unistd::chown(path, uid, gid)?;

    if gid.is_some() {
@@ -1102,6 +1141,7 @@ fn parse_options(option_list: Vec<String>) -> HashMap<String, String> {
mod tests {
    use super::*;
    use protocols::agent::FSGroup;
    use slog::Drain;
    use std::fs::File;
    use std::fs::OpenOptions;
    use std::io::Write;
@@ -1112,6 +1152,31 @@ mod tests {
        skip_if_not_root, skip_loop_by_user, skip_loop_if_not_root, skip_loop_if_root,
    };

    #[test]
    fn test_already_baremounted() {
        let plain = slog_term::PlainSyncDecorator::new(std::io::stdout());
        let logger = Logger::root(slog_term::FullFormat::new(plain).build().fuse(), o!());

        let test_cases = [
            ("dev", "/dev", "devtmpfs"),
            ("udev", "/dev", "devtmpfs"),
            ("proc", "/proc", "proc"),
            ("sysfs", "/sys", "sysfs"),
        ];

        for &(source, destination, fs_type) in &test_cases {
            let source = Path::new(source);
            let destination = Path::new(destination);
            let flags = MsFlags::MS_RDONLY;
            let options = "mode=755";
            println!(
                "testing if already mounted baremount({:?} {:?} {:?})",
                source, destination, fs_type
            );
            assert!(baremount(source, destination, fs_type, flags, options, &logger).is_ok());
        }
    }

    #[test]
    fn test_mount() {
        #[derive(Debug)]

@@ -37,10 +37,7 @@ use protocols::health::{
    VersionCheckResponse,
};
use protocols::types::Interface;
use protocols::{
    agent_ttrpc_async as agent_ttrpc, health_ttrpc_async as health_ttrpc,
    image_ttrpc_async as image_ttrpc,
};
use protocols::{agent_ttrpc_async as agent_ttrpc, health_ttrpc_async as health_ttrpc};
use rustjail::cgroups::notifier;
use rustjail::container::{BaseContainer, Container, LinuxContainer, SYSTEMD_CGROUP_PATH_FORMAT};
use rustjail::mount::parse_mount_table;
@@ -56,7 +53,6 @@ use rustjail::process::ProcessOperations;
use crate::device::{
    add_devices, get_virtio_blk_pci_device_name, update_device_cgroup, update_env_pci,
};
use crate::image_rpc;
use crate::linux_abi::*;
use crate::metrics::get_metrics;
use crate::mount::{add_storages, baremount, update_ephemeral_mounts, STORAGE_HANDLER_LIST};
@@ -88,12 +84,8 @@ use std::io::{BufRead, BufReader, Write};
use std::os::unix::fs::FileExt;
use std::path::PathBuf;

pub const CONTAINER_BASE: &str = "/run/kata-containers";
const CONTAINER_BASE: &str = "/run/kata-containers";
const MODPROBE_PATH: &str = "/sbin/modprobe";
const ANNO_K8S_IMAGE_NAME: &str = "io.kubernetes.cri.image-name";
const CONFIG_JSON: &str = "config.json";
const INIT_TRUSTED_STORAGE: &str = "/usr/bin/kata-init-trusted-storage";
const TRUSTED_STORAGE_DEVICE: &str = "/dev/trusted_store";

/// the iptables series binaries could appear either in /sbin
/// or /usr/sbin, so we need to check both of them
@@ -145,41 +137,6 @@ pub struct AgentService {
    init_mode: bool,
}

// A container ID must match this regex:
//
//   ^[a-zA-Z0-9][a-zA-Z0-9_.-]+$
//
pub fn verify_cid(id: &str) -> Result<()> {
    let mut chars = id.chars();

    let valid = matches!(chars.next(), Some(first) if first.is_alphanumeric()
        && id.len() > 1
        && chars.all(|c| c.is_alphanumeric() || ['.', '-', '_'].contains(&c)));

    match valid {
        true => Ok(()),
        false => Err(anyhow!("invalid container ID: {:?}", id)),
    }
}

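The `verify_cid` rule can be checked in isolation. A minimal sketch re-implementing the same predicate (the `is_valid_cid` name is hypothetical; note `char::is_alphanumeric` is Unicode-aware, exactly as in the function above):

```rust
// Standalone re-implementation of the verify_cid predicate, for
// illustration: first char alphanumeric, length > 1, remaining chars
// alphanumeric or one of '.', '-', '_'.
fn is_valid_cid(id: &str) -> bool {
    let mut chars = id.chars();
    matches!(chars.next(), Some(first) if first.is_alphanumeric()
        && id.len() > 1
        && chars.all(|c| c.is_alphanumeric() || ['.', '-', '_'].contains(&c)))
}

fn main() {
    assert!(is_valid_cid("0some-string.1_2"));
    assert!(!is_valid_cid("a:b")); // ':' is not allowed
    assert!(!is_valid_cid(".."));  // must start with an alphanumeric char
    assert!(!is_valid_cid("a"));   // too short: length must exceed 1
    println!("ok");
}
```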
// Partially merge an OCI process specification into another one.
fn merge_oci_process(target: &mut oci::Process, source: &oci::Process) {
    if target.args.is_empty() && !source.args.is_empty() {
        target.args.append(&mut source.args.clone());
    }

    if target.cwd == "/" && source.cwd != "/" {
        target.cwd = String::from(&source.cwd);
    }

    for source_env in &source.env {
        let variable_name: Vec<&str> = source_env.split('=').collect();
        if !target.env.iter().any(|i| i.contains(variable_name[0])) {
            target.env.push(source_env.to_string());
        }
    }
}

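The env-merge rule in `merge_oci_process` can be sketched on its own: an image env entry is appended only when no container env entry already mentions that variable name. A hypothetical standalone `merge_env` (not part of the agent source) showing the behavior:

```rust
// Sketch of the env-merge step of merge_oci_process: container (target)
// entries win; image (source) entries are appended only when their
// variable name is not already present.
fn merge_env(target: &mut Vec<String>, source: &[String]) {
    for source_env in source {
        let name: Vec<&str> = source_env.split('=').collect();
        if !target.iter().any(|i| i.contains(name[0])) {
            target.push(source_env.clone());
        }
    }
}

fn main() {
    let mut target = vec!["ISPRODUCTION=true".to_string()];
    let source = vec![
        "ISPRODUCTION=false".to_string(),
        "ISDEVELOPMENT=true".to_string(),
    ];
    merge_env(&mut target, &source);
    // ISPRODUCTION keeps the container value; ISDEVELOPMENT is appended.
    assert_eq!(
        target,
        vec!["ISPRODUCTION=true".to_string(), "ISDEVELOPMENT=true".to_string()]
    );
    println!("ok");
}
```

Note the check uses substring containment on the name, the same shortcut as the function above.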
impl AgentService {
    #[instrument]
    async fn do_create_container(
@@ -210,9 +167,6 @@ impl AgentService {
            "receive createcontainer, storages: {:?}", &req.storages
        );

        // Merge the image bundle OCI spec into the container creation request OCI spec.
        self.merge_bundle_oci(&mut oci).await?;

        // Some devices need some extra processing (the ones invoked with
        // --device for instance), and that's what this call is doing. It
        // updates the devices listed in the OCI spec, so that they actually
@@ -220,30 +174,6 @@ impl AgentService {
        // cannot predict everything from the caller.
        add_devices(&req.devices.to_vec(), &mut oci, &self.sandbox).await?;

        let linux = oci
            .linux
            .as_mut()
            .ok_or_else(|| anyhow!("Spec didn't contain linux field"))?;

        for specdev in &mut linux.devices {
            let dev_major_minor = format!("{}:{}", specdev.major, specdev.minor);

            if specdev.path == TRUSTED_STORAGE_DEVICE {
                let data_integrity = AGENT_CONFIG.data_integrity;
                info!(
                    sl(),
                    "trusted_store device major:min {}, enable data integrity {}",
                    dev_major_minor,
                    data_integrity.to_string()
                );

                Command::new(INIT_TRUSTED_STORAGE)
                    .args([&dev_major_minor, &data_integrity.to_string()])
                    .output()
                    .expect("Failed to initialize confidential storage");
            }
        }

        // Both rootfs and volumes (invoked with --volume for instance) will
        // be processed the same way. The idea is to always mount any provided
        // storage to the specified MountPoint, so that it will match what's
@@ -665,15 +595,16 @@ impl AgentService {
        let cid = req.container_id;
        let eid = req.exec_id;

        let mut term_exit_notifier = Arc::new(tokio::sync::Notify::new());
        let term_exit_notifier;
        let reader = {
            let s = self.sandbox.clone();
            let mut sandbox = s.lock().await;

            let p = sandbox.find_container_process(cid.as_str(), eid.as_str())?;

            term_exit_notifier = p.term_exit_notifier.clone();

            if p.term_master.is_some() {
                term_exit_notifier = p.term_exit_notifier.clone();
                p.get_reader(StreamType::TermMaster)
            } else if stdout {
                if p.parent_stdout.is_some() {
@@ -693,9 +624,12 @@ impl AgentService {
        let reader = reader.ok_or_else(|| anyhow!("cannot get stream reader"))?;

        tokio::select! {
            _ = term_exit_notifier.notified() => {
                Err(anyhow!("eof"))
            }
            // Poll the futures in the order they appear, from top to bottom;
            // this is very important to avoid data loss. While there is still
            // data in the buffer, the read_stream branch will return
            // Poll::Ready, so the term_exit_notifier will never be polled
            // before all data has been read.
            biased;
            v = read_stream(reader, req.len as usize) => {
                let vector = v?;
                let mut resp = ReadStreamResponse::new();
@@ -703,55 +637,10 @@ impl AgentService {

                Ok(resp)
            }
        }
    }

    // When being passed an image name through a container annotation, merge its
    // corresponding bundle OCI specification into the passed container creation one.
    async fn merge_bundle_oci(&self, container_oci: &mut oci::Spec) -> Result<()> {
        if let Some(image_name) = container_oci
            .annotations
            .get(&ANNO_K8S_IMAGE_NAME.to_string())
        {
            if let Some(container_id) = self.sandbox.clone().lock().await.images.get(image_name) {
                let image_oci_config_path = Path::new(CONTAINER_BASE)
                    .join(container_id)
                    .join(CONFIG_JSON);
                debug!(
                    sl(),
                    "Image bundle config path: {:?}", image_oci_config_path
                );

                let image_oci =
                    oci::Spec::load(image_oci_config_path.to_str().ok_or_else(|| {
                        anyhow!(
                            "Invalid container image OCI config path {:?}",
                            image_oci_config_path
                        )
                    })?)
                    .context("load image bundle")?;

                if let Some(container_root) = container_oci.root.as_mut() {
                    if let Some(image_root) = image_oci.root.as_ref() {
                        let root_path = Path::new(CONTAINER_BASE)
                            .join(container_id)
                            .join(image_root.path.clone());
                        container_root.path =
                            String::from(root_path.to_str().ok_or_else(|| {
                                anyhow!("Invalid container image root path {:?}", root_path)
                            })?);
                    }
                }

                if let Some(container_process) = container_oci.process.as_mut() {
                    if let Some(image_process) = image_oci.process.as_ref() {
                        merge_oci_process(container_process, image_process);
                    }
                }
            _ = term_exit_notifier.notified() => {
                Err(anyhow!("eof"))
            }
        }

        Ok(())
    }
}

@@ -1834,13 +1723,9 @@ async fn read_stream(reader: Arc<Mutex<ReadHalf<PipeStream>>>, l: usize) -> Resu
    Ok(content)
}

pub async fn start(
    s: Arc<Mutex<Sandbox>>,
    server_address: &str,
    init_mode: bool,
) -> Result<TtrpcServer> {
pub fn start(s: Arc<Mutex<Sandbox>>, server_address: &str, init_mode: bool) -> Result<TtrpcServer> {
    let agent_service = Box::new(AgentService {
        sandbox: s.clone(),
        sandbox: s,
        init_mode,
    }) as Box<dyn agent_ttrpc::AgentService + Send + Sync>;

@@ -1849,20 +1734,14 @@ pub async fn start(
    let health_service = Box::new(HealthService {}) as Box<dyn health_ttrpc::Health + Send + Sync>;
    let health_worker = Arc::new(health_service);

    let image_service = Box::new(image_rpc::ImageService::new(s).await)
        as Box<dyn image_ttrpc::Image + Send + Sync>;

    let aservice = agent_ttrpc::create_agent_service(agent_worker);

    let hservice = health_ttrpc::create_health(health_worker);

    let iservice = image_ttrpc::create_image(Arc::new(image_service));

    let server = TtrpcServer::new()
        .bind(server_address)?
        .register_service(aservice)
        .register_service(hservice)
        .register_service(iservice);
        .register_service(hservice);

    info!(sl(), "ttRPC server started"; "address" => server_address);

@@ -2063,38 +1942,6 @@ fn do_copy_file(req: &CopyFileRequest) -> Result<()> {
        }
    }

    let sflag = stat::SFlag::from_bits_truncate(req.file_mode);

    if sflag.contains(stat::SFlag::S_IFDIR) {
        fs::create_dir(path.clone()).or_else(|e| {
            if e.kind() != std::io::ErrorKind::AlreadyExists {
                return Err(e);
            }
            Ok(())
        })?;

        std::fs::set_permissions(path.clone(), std::fs::Permissions::from_mode(req.file_mode))?;

        unistd::chown(
            &path,
            Some(Uid::from_raw(req.uid as u32)),
            Some(Gid::from_raw(req.gid as u32)),
        )?;

        return Ok(());
    }

    if sflag.contains(stat::SFlag::S_IFLNK) {
        let src = PathBuf::from(String::from_utf8(req.data.clone()).unwrap());

        unistd::symlinkat(&src, None, &path)?;
        let path_str = CString::new(path.to_str().unwrap())?;
        let ret = unsafe { libc::lchown(path_str.as_ptr(), req.uid as u32, req.gid as u32) };
        Errno::result(ret).map(drop)?;

        return Ok(());
    }

    let mut tmpfile = path.clone();
    tmpfile.set_extension("tmp");

@@ -2160,26 +2007,18 @@ pub fn setup_bundle(cid: &str, spec: &mut Spec) -> Result<PathBuf> {
    let spec_root_path = Path::new(&spec_root.path);

    let bundle_path = Path::new(CONTAINER_BASE).join(cid);
    let config_path = bundle_path.join(CONFIG_JSON);
    let config_path = bundle_path.join("config.json");
    let rootfs_path = bundle_path.join("rootfs");

    let rootfs_exists = Path::new(&rootfs_path).exists();
    info!(
    fs::create_dir_all(&rootfs_path)?;
    baremount(
        spec_root_path,
        &rootfs_path,
        "bind",
        MsFlags::MS_BIND,
        "",
        &sl(),
        "The rootfs_path is {:?} and exists: {}", rootfs_path, rootfs_exists
    );

    if !rootfs_exists {
        fs::create_dir_all(&rootfs_path)?;
        baremount(
            spec_root_path,
            &rootfs_path,
            "bind",
            MsFlags::MS_BIND,
            "",
            &sl(),
        )?;
    }
    )?;

    let rootfs_path_name = rootfs_path
        .to_str()
@@ -3160,135 +2999,4 @@ COMMIT
            "We should see the resulting rule"
        );
    }

    #[tokio::test]
    async fn test_merge_cwd() {
        #[derive(Debug)]
        struct TestData<'a> {
            container_process_cwd: &'a str,
            image_process_cwd: &'a str,
            expected: &'a str,
        }

        let tests = &[
            // Image cwd should override blank container cwd
            // TODO - how can we tell the user didn't specifically set it to `/` vs not setting at all? Is that scenario valid?
            TestData {
                container_process_cwd: "/",
                image_process_cwd: "/imageDir",
                expected: "/imageDir",
            },
            // Container cwd should override image cwd
            TestData {
                container_process_cwd: "/containerDir",
                image_process_cwd: "/imageDir",
                expected: "/containerDir",
            },
            // Container cwd should override blank image cwd
            TestData {
                container_process_cwd: "/containerDir",
                image_process_cwd: "/",
                expected: "/containerDir",
            },
        ];

        for (i, d) in tests.iter().enumerate() {
            let msg = format!("test[{}]: {:?}", i, d);

            let mut container_process = oci::Process {
                cwd: d.container_process_cwd.to_string(),
                ..Default::default()
            };

            let image_process = oci::Process {
                cwd: d.image_process_cwd.to_string(),
                ..Default::default()
            };

            merge_oci_process(&mut container_process, &image_process);

            assert_eq!(d.expected, container_process.cwd, "{}", msg);
        }
    }

    #[tokio::test]
    async fn test_merge_env() {
        #[derive(Debug)]
        struct TestData {
            container_process_env: Vec<String>,
            image_process_env: Vec<String>,
            expected: Vec<String>,
        }

        let tests = &[
            // Test that the pod's environment overrides the image's
            TestData {
                container_process_env: vec!["ISPRODUCTION=true".to_string()],
                image_process_env: vec!["ISPRODUCTION=false".to_string()],
                expected: vec!["ISPRODUCTION=true".to_string()],
            },
            // Test that multiple environment variables can be overridden
            TestData {
                container_process_env: vec![
                    "ISPRODUCTION=true".to_string(),
                    "ISDEVELOPMENT=false".to_string(),
                ],
                image_process_env: vec![
                    "ISPRODUCTION=false".to_string(),
                    "ISDEVELOPMENT=true".to_string(),
                ],
                expected: vec![
                    "ISPRODUCTION=true".to_string(),
                    "ISDEVELOPMENT=false".to_string(),
                ],
            },
            // Test that when none of the variable names match, nothing is overridden
            TestData {
                container_process_env: vec!["ANOTHERENV=TEST".to_string()],
                image_process_env: vec![
                    "ISPRODUCTION=false".to_string(),
                    "ISDEVELOPMENT=true".to_string(),
                ],
                expected: vec![
                    "ANOTHERENV=TEST".to_string(),
                    "ISPRODUCTION=false".to_string(),
                    "ISDEVELOPMENT=true".to_string(),
                ],
            },
            // Test a mix of both overriding and not
            TestData {
                container_process_env: vec![
                    "ANOTHERENV=TEST".to_string(),
                    "ISPRODUCTION=true".to_string(),
                ],
                image_process_env: vec![
                    "ISPRODUCTION=false".to_string(),
                    "ISDEVELOPMENT=true".to_string(),
                ],
                expected: vec![
                    "ANOTHERENV=TEST".to_string(),
                    "ISPRODUCTION=true".to_string(),
                    "ISDEVELOPMENT=true".to_string(),
                ],
            },
        ];

        for (i, d) in tests.iter().enumerate() {
            let msg = format!("test[{}]: {:?}", i, d);

            let mut container_process = oci::Process {
                env: d.container_process_env.clone(),
                ..Default::default()
            };

            let image_process = oci::Process {
                env: d.image_process_env.clone(),
                ..Default::default()
            };

            merge_oci_process(&mut container_process, &image_process);

            assert_eq!(d.expected, container_process.env, "{}", msg);
        }
    }
}

@@ -62,7 +62,6 @@ pub struct Sandbox {
    pub event_tx: Option<Sender<String>>,
    pub bind_watcher: BindWatcher,
    pub pcimap: HashMap<pci::Address, pci::Address>,
    pub images: HashMap<String, String>,
}

impl Sandbox {
@@ -96,7 +95,6 @@ impl Sandbox {
            event_tx: Some(tx),
            bind_watcher: BindWatcher::new(),
            pcimap: HashMap::new(),
            images: HashMap::new(),
        })
    }

@@ -435,7 +433,7 @@ fn online_resources(logger: &Logger, path: &str, pattern: &str, num: i32) -> Res
}

// The max wait for all CPUs to come online is 50 * 100 = 5 seconds.
const ONLINE_CPUMEM_WATI_MILLIS: u64 = 50;
const ONLINE_CPUMEM_WAIT_MILLIS: u64 = 50;
const ONLINE_CPUMEM_MAX_RETRIES: i32 = 100;

#[instrument]
@@ -465,7 +463,7 @@ fn online_cpus(logger: &Logger, num: i32) -> Result<i32> {
            );
            return Ok(num);
        }
        thread::sleep(time::Duration::from_millis(ONLINE_CPUMEM_WATI_MILLIS));
        thread::sleep(time::Duration::from_millis(ONLINE_CPUMEM_WAIT_MILLIS));
    }

    Err(anyhow!(

@@ -57,7 +57,7 @@ async fn handle_sigchild(logger: Logger, sandbox: Arc<Mutex<Sandbox>>) -> Result
            continue;
        }

        let mut p = process.unwrap();
        let p = process.unwrap();

        let ret: i32 = match wait_status {
            WaitStatus::Exited(_, c) => c,

1777
src/dragonball/Cargo.lock
generated
File diff suppressed because it is too large
@@ -10,18 +10,19 @@ license = "Apache-2.0"
 edition = "2018"

 [dependencies]
 anyhow = "1.0.32"
 arc-swap = "1.5.0"
 bytes = "1.1.0"
-dbs-address-space = "0.3.0"
-dbs-allocator = "0.1.0"
-dbs-arch = "0.2.0"
-dbs-boot = "0.4.0"
-dbs-device = "0.2.0"
-dbs-interrupt = { version = "0.2.0", features = ["kvm-irq"] }
-dbs-legacy-devices = "0.1.0"
-dbs-upcall = { version = "0.3.0", optional = true }
-dbs-utils = "0.2.0"
-dbs-virtio-devices = { version = "0.3.1", optional = true, features = ["virtio-mmio"] }
+dbs-address-space = { path = "./src/dbs_address_space" }
+dbs-allocator = { path = "./src/dbs_allocator" }
+dbs-arch = { path = "./src/dbs_arch" }
+dbs-boot = { path = "./src/dbs_boot" }
+dbs-device = { path = "./src/dbs_device" }
+dbs-interrupt = { path = "./src/dbs_interrupt", features = ["kvm-irq"] }
+dbs-legacy-devices = { path = "./src/dbs_legacy_devices" }
+dbs-upcall = { path = "./src/dbs_upcall", optional = true }
+dbs-utils = { path = "./src/dbs_utils" }
+dbs-virtio-devices = { path = "./src/dbs_virtio_devices", optional = true, features = ["virtio-mmio"] }
 kvm-bindings = "0.6.0"
 kvm-ioctls = "0.12.0"
 lazy_static = "1.2"
@@ -29,6 +30,8 @@ libc = "0.2.39"
 linux-loader = "0.6.0"
 log = "0.4.14"
 nix = "0.24.2"
+procfs = "0.12.0"
+prometheus = { version = "0.13.0", features = ["process"] }
 seccompiler = "0.2.0"
 serde = "1.0.27"
 serde_derive = "1.0.27"
@@ -42,8 +45,8 @@ vm-memory = { version = "0.9.0", features = ["backend-mmap"] }
 crossbeam-channel = "0.5.6"

 [dev-dependencies]
-slog-term = "2.9.0"
 slog-async = "2.7.0"
+slog-term = "2.9.0"
 test-utils = { path = "../libs/test-utils" }

 [features]

@@ -39,12 +39,15 @@ clean:

 test:
 ifdef SUPPORT_VIRTUALIZATION
-	cargo test --all-features --target $(TRIPLE) -- --nocapture
+	RUST_BACKTRACE=1 cargo test --all-features --target $(TRIPLE) -- --nocapture --test-threads=1
 else
 	@echo "INFO: skip testing dragonball, it needs virtualization support."
 	exit 0
 endif

+coverage:
+	RUST_BACKTRACE=1 cargo llvm-cov --all-features --target $(TRIPLE) -- --nocapture --test-threads=1
+
 endif # ifeq ($(ARCH), s390x)

 .DEFAULT_GOAL := default

@@ -16,10 +16,22 @@ and configuration process.

 # Documentation

-Device: [Device Document](docs/device.md)
-vCPU: [vCPU Document](docs/vcpu.md)
-API: [API Document](docs/api.md)
-`Upcall`: [`Upcall` Document](docs/upcall.md)
+- Device: [Device Document](docs/device.md)
+- vCPU: [vCPU Document](docs/vcpu.md)
+- API: [API Document](docs/api.md)
+- `Upcall`: [`Upcall` Document](docs/upcall.md)
+- `dbs_acpi`: [`dbs_acpi` Document](src/dbs_acpi/README.md)
+- `dbs_address_space`: [`dbs_address_space` Document](src/dbs_address_space/README.md)
+- `dbs_allocator`: [`dbs_allocator` Document](src/dbs_allocator/README.md)
+- `dbs_arch`: [`dbs_arch` Document](src/dbs_arch/README.md)
+- `dbs_boot`: [`dbs_boot` Document](src/dbs_boot/README.md)
+- `dbs_device`: [`dbs_device` Document](src/dbs_device/README.md)
+- `dbs_interrupt`: [`dbs_interrupt` Document](src/dbs_interrupt/README.md)
+- `dbs_legacy_devices`: [`dbs_legacy_devices` Document](src/dbs_legacy_devices/README.md)
+- `dbs_tdx`: [`dbs_tdx` Document](src/dbs_tdx/README.md)
+- `dbs_upcall`: [`dbs_upcall` Document](src/dbs_upcall/README.md)
+- `dbs_utils`: [`dbs_utils` Document](src/dbs_utils/README.md)
+- `dbs_virtio_devices`: [`dbs_virtio_devices` Document](src/dbs_virtio_devices/README.md)

 Currently, the documents are still being actively added.
 You can see the [official documentation](docs/) page for more details.

@@ -5,15 +5,6 @@

 use serde_derive::{Deserialize, Serialize};

-/// This struct represents the strongly typed equivalent of the json body
-/// from confidential container related requests.
-#[derive(Copy, Clone, Debug, Deserialize, PartialEq, Serialize)]
-#[serde(deny_unknown_fields)]
-pub enum ConfidentialVmType {
-    /// Intel Trusted Domain
-    TDX = 2,
-}
-
 /// The microvm state.
 ///
 /// When Dragonball starts, the instance state is Uninitialized. Once start_microvm method is
@@ -67,12 +58,10 @@ pub struct InstanceInfo {
     pub tids: Vec<(u8, u32)>,
     /// Last instance downtime
     pub last_instance_downtime: u64,
-    /// confidential vm type
-    pub confidential_vm_type: Option<ConfidentialVmType>,
 }

 impl InstanceInfo {
-    /// create instance info object with given id, version, platform type and confidential vm type.
+    /// create instance info object with given id, version, and platform type
     pub fn new(id: String, vmm_version: String) -> Self {
         InstanceInfo {
             id,
@@ -83,14 +72,8 @@ impl InstanceInfo {
             async_state: AsyncState::Uninitialized,
             tids: Vec::new(),
             last_instance_downtime: 0,
-            confidential_vm_type: None,
         }
     }
-
-    /// return true if VM confidential type is TDX
-    pub fn is_tdx_enabled(&self) -> bool {
-        matches!(self.confidential_vm_type, Some(ConfidentialVmType::TDX))
-    }
 }

 impl Default for InstanceInfo {
@@ -104,7 +87,6 @@ impl Default for InstanceInfo {
             async_state: AsyncState::Uninitialized,
             tids: Vec::new(),
             last_instance_downtime: 0,
-            confidential_vm_type: None,
         }
     }
 }

@@ -12,7 +12,7 @@ pub use self::boot_source::{BootSourceConfig, BootSourceConfigError, DEFAULT_KER

 /// Wrapper over the microVM general information.
 mod instance_info;
-pub use self::instance_info::{ConfidentialVmType, InstanceInfo, InstanceState};
+pub use self::instance_info::{InstanceInfo, InstanceState};

 /// Wrapper for configuring the memory and CPU of the microVM.
 mod machine_config;

@@ -16,6 +16,8 @@ use crate::event_manager::EventManager;
 use crate::vm::{CpuTopology, KernelConfigInfo, VmConfigInfo};
 use crate::vmm::Vmm;

+use crate::hypervisor_metrics::get_hypervisor_metrics;
+
 use self::VmConfigError::*;
 use self::VmmActionError::MachineConfig;

@@ -58,6 +60,11 @@ pub enum VmmActionError {
     #[error("Upcall not ready, can't hotplug device.")]
     UpcallServerNotReady,

+    /// Error when getting Prometheus metrics.
+    /// Currently does not distinguish between error types for metrics.
+    #[error("failed to get hypervisor metrics")]
+    GetHypervisorMetrics,
+
     /// The action `ConfigureBootSource` failed either because of bad user input or an internal
     /// error.
     #[error("failed to configure boot source for VM: {0}")]
@@ -135,6 +142,9 @@ pub enum VmmAction {
     /// Get the configuration of the microVM.
     GetVmConfiguration,

+    /// Get Prometheus metrics.
+    GetHypervisorMetrics,
+
     /// Set the microVM configuration (memory & vcpu) using `VmConfig` as input. This
     /// action can only be called before the microVM has booted.
     SetVmConfiguration(VmConfigInfo),
@@ -208,6 +218,8 @@ pub enum VmmData {
     Empty,
     /// The microVM configuration represented by `VmConfigInfo`.
     MachineConfiguration(Box<VmConfigInfo>),
+    /// Prometheus metrics represented as a String.
+    HypervisorMetrics(String),
 }

 /// Request data type used to communicate between the API and the VMM.
@@ -262,6 +274,7 @@ impl VmmService {
             VmmAction::GetVmConfiguration => Ok(VmmData::MachineConfiguration(Box::new(
                 self.machine_config.clone(),
             ))),
+            VmmAction::GetHypervisorMetrics => self.get_hypervisor_metrics(),
             VmmAction::SetVmConfiguration(machine_config) => {
                 self.set_vm_configuration(vmm, machine_config)
             }
@@ -381,6 +394,13 @@ impl VmmService {
         Ok(VmmData::Empty)
     }

+    /// Get Prometheus metrics.
+    fn get_hypervisor_metrics(&self) -> VmmRequestResult {
+        get_hypervisor_metrics()
+            .map_err(|_| VmmActionError::GetHypervisorMetrics)
+            .map(VmmData::HypervisorMetrics)
+    }
+
     /// Set virtual machine configuration.
     pub fn set_vm_configuration(
         &mut self,

14
src/dragonball/src/dbs_acpi/Cargo.toml
Normal file
@@ -0,0 +1,14 @@
[package]
name = "dbs-acpi"
version = "0.1.0"
authors = ["Alibaba Dragonball Team"]
description = "acpi definitions for virtual machines."
license = "Apache-2.0"
edition = "2018"
homepage = "https://github.com/openanolis/dragonball-sandbox"
repository = "https://github.com/openanolis/dragonball-sandbox"
keywords = ["dragonball", "acpi", "vmm", "secure-sandbox"]
readme = "README.md"

[dependencies]
vm-memory = "0.9.0"
11
src/dragonball/src/dbs_acpi/README.md
Normal file
@@ -0,0 +1,11 @@
# dbs-acpi

`dbs-acpi` provides ACPI data structures for the VMM to emulate ACPI behavior.

## Acknowledgement

Part of the code is derived from the [Cloud Hypervisor](https://github.com/cloud-hypervisor/cloud-hypervisor) project.

## License

This project is licensed under [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0).
29
src/dragonball/src/dbs_acpi/src/lib.rs
Normal file
@@ -0,0 +1,29 @@
// Copyright (c) 2019 Intel Corporation
// Copyright (c) 2023 Alibaba Cloud
//
// SPDX-License-Identifier: Apache-2.0
pub mod rsdp;
pub mod sdt;

fn generate_checksum(data: &[u8]) -> u8 {
    (255 - data.iter().fold(0u8, |acc, x| acc.wrapping_add(*x))).wrapping_add(1)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_generate_checksum() {
        let mut buf = [0x00; 8];
        let sum = generate_checksum(&buf);
        assert_eq!(sum, 0);
        buf[0] = 0xff;
        let sum = generate_checksum(&buf);
        assert_eq!(sum, 1);
        buf[0] = 0xaa;
        buf[1] = 0xcc;
        buf[4] = generate_checksum(&buf);
        let sum = buf.iter().fold(0u8, |s, v| s.wrapping_add(*v));
        assert_eq!(sum, 0);
    }
}
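The `generate_checksum` helper above picks the one byte value that makes the sum of all table bytes wrap to zero, which is the invariant ACPI consumers verify. A minimal standalone sketch of that property (the helper is mirrored from the diff; the 5-byte table and checksum slot at offset 3 are made up for illustration):

```rust
// ACPI-style checksum: choose a checksum byte so that the sum of all
// table bytes modulo 256 is zero (helper mirrored from dbs_acpi).
fn generate_checksum(data: &[u8]) -> u8 {
    (255 - data.iter().fold(0u8, |acc, x| acc.wrapping_add(*x))).wrapping_add(1)
}

fn main() {
    let mut table = [0x12u8, 0x34, 0x56, 0x00, 0x78];
    // Byte 3 is the (hypothetical) checksum slot; compute the checksum
    // over the table while that slot is still zero.
    table[3] = generate_checksum(&table);
    // A verifier accepts the table iff the wrapping sum of all bytes is 0.
    let sum = table.iter().fold(0u8, |acc, x| acc.wrapping_add(*x));
    assert_eq!(sum, 0);
    println!("checksum byte = {:#04x}", table[3]);
}
```

This is why `Sdt::update_checksum` zeroes the checksum slot before recomputing: the checksum must be calculated as if its own byte were zero.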
60
src/dragonball/src/dbs_acpi/src/rsdp.rs
Normal file
@@ -0,0 +1,60 @@
// Copyright (c) 2019 Intel Corporation
// Copyright (c) 2023 Alibaba Cloud
//
// SPDX-License-Identifier: Apache-2.0

// RSDP (Root System Description Pointer) is a data structure used in the ACPI programming interface.
use vm_memory::ByteValued;

#[repr(packed)]
#[derive(Clone, Copy, Default)]
pub struct Rsdp {
    pub signature: [u8; 8],
    pub checksum: u8,
    pub oem_id: [u8; 6],
    pub revision: u8,
    _rsdt_addr: u32,
    pub length: u32,
    pub xsdt_addr: u64,
    pub extended_checksum: u8,
    _reserved: [u8; 3],
}

// SAFETY: Rsdp only contains a series of integers
unsafe impl ByteValued for Rsdp {}

impl Rsdp {
    pub fn new(xsdt_addr: u64) -> Self {
        let mut rsdp = Rsdp {
            signature: *b"RSD PTR ",
            checksum: 0,
            oem_id: *b"ALICLD",
            revision: 1,
            _rsdt_addr: 0,
            length: std::mem::size_of::<Rsdp>() as u32,
            xsdt_addr,
            extended_checksum: 0,
            _reserved: [0; 3],
        };
        rsdp.checksum = super::generate_checksum(&rsdp.as_slice()[0..19]);
        rsdp.extended_checksum = super::generate_checksum(rsdp.as_slice());
        rsdp
    }

    pub fn len() -> usize {
        std::mem::size_of::<Rsdp>()
    }
}

#[cfg(test)]
mod tests {
    use super::Rsdp;
    use vm_memory::bytes::ByteValued;

    #[test]
    fn test_rsdp() {
        let rsdp = Rsdp::new(0xa0000);
        let sum = rsdp
            .as_slice()
            .iter()
            .fold(0u8, |acc, x| acc.wrapping_add(*x));
        assert_eq!(sum, 0);
    }
}
137
src/dragonball/src/dbs_acpi/src/sdt.rs
Normal file
@@ -0,0 +1,137 @@
// Copyright (c) 2019 Intel Corporation
// Copyright (c) 2023 Alibaba Cloud
//
// SPDX-License-Identifier: Apache-2.0
#[repr(packed)]
pub struct GenericAddress {
    pub address_space_id: u8,
    pub register_bit_width: u8,
    pub register_bit_offset: u8,
    pub access_size: u8,
    pub address: u64,
}

impl GenericAddress {
    pub fn io_port_address<T>(address: u16) -> Self {
        GenericAddress {
            address_space_id: 1,
            register_bit_width: 8 * std::mem::size_of::<T>() as u8,
            register_bit_offset: 0,
            access_size: std::mem::size_of::<T>() as u8,
            address: u64::from(address),
        }
    }

    pub fn mmio_address<T>(address: u64) -> Self {
        GenericAddress {
            address_space_id: 0,
            register_bit_width: 8 * std::mem::size_of::<T>() as u8,
            register_bit_offset: 0,
            access_size: std::mem::size_of::<T>() as u8,
            address,
        }
    }
}

pub struct Sdt {
    data: Vec<u8>,
}

#[allow(clippy::len_without_is_empty)]
impl Sdt {
    pub fn new(signature: [u8; 4], length: u32, revision: u8) -> Self {
        assert!(length >= 36);
        const OEM_ID: [u8; 6] = *b"ALICLD";
        const OEM_TABLE: [u8; 8] = *b"RUND    ";
        const CREATOR_ID: [u8; 4] = *b"ALIC";
        let mut data = Vec::with_capacity(length as usize);
        data.extend_from_slice(&signature);
        data.extend_from_slice(&length.to_le_bytes());
        data.push(revision);
        data.push(0); // checksum
        data.extend_from_slice(&OEM_ID); // oem id
        data.extend_from_slice(&OEM_TABLE); // oem table id
        data.extend_from_slice(&1u32.to_le_bytes()); // oem revision u32
        data.extend_from_slice(&CREATOR_ID); // creator id u32
        data.extend_from_slice(&1u32.to_le_bytes()); // creator revision u32
        assert_eq!(data.len(), 36);
        data.resize(length as usize, 0);
        let mut sdt = Sdt { data };
        sdt.update_checksum();
        sdt
    }

    pub fn update_checksum(&mut self) {
        self.data[9] = 0;
        let checksum = super::generate_checksum(self.data.as_slice());
        self.data[9] = checksum;
    }

    pub fn as_slice(&self) -> &[u8] {
        self.data.as_slice()
    }

    pub fn append<T>(&mut self, value: T) {
        let orig_length = self.data.len();
        let new_length = orig_length + std::mem::size_of::<T>();
        self.data.resize(new_length, 0);
        self.write_u32(4, new_length as u32);
        self.write(orig_length, value);
    }

    pub fn append_slice(&mut self, data: &[u8]) {
        let orig_length = self.data.len();
        let new_length = orig_length + data.len();
        self.write_u32(4, new_length as u32);
        self.data.extend_from_slice(data);
        self.update_checksum();
    }

    /// Write a value at the given offset
    pub fn write<T>(&mut self, offset: usize, value: T) {
        assert!((offset + (std::mem::size_of::<T>() - 1)) < self.data.len());
        unsafe {
            *(((self.data.as_mut_ptr() as usize) + offset) as *mut T) = value;
        }
        self.update_checksum();
    }

    pub fn write_u8(&mut self, offset: usize, val: u8) {
        self.write(offset, val);
    }

    pub fn write_u16(&mut self, offset: usize, val: u16) {
        self.write(offset, val);
    }

    pub fn write_u32(&mut self, offset: usize, val: u32) {
        self.write(offset, val);
    }

    pub fn write_u64(&mut self, offset: usize, val: u64) {
        self.write(offset, val);
    }

    pub fn len(&self) -> usize {
        self.data.len()
    }
}

#[cfg(test)]
mod tests {
    use super::Sdt;

    #[test]
    fn test_sdt() {
        let mut sdt = Sdt::new(*b"TEST", 40, 1);
        let sum: u8 = sdt
            .as_slice()
            .iter()
            .fold(0u8, |acc, x| acc.wrapping_add(*x));
        assert_eq!(sum, 0);
        sdt.write_u32(36, 0x12345678);
        let sum: u8 = sdt
            .as_slice()
            .iter()
            .fold(0u8, |acc, x| acc.wrapping_add(*x));
        assert_eq!(sum, 0);
    }
}
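`Sdt::new` above lays down the fixed 36-byte System Description Table header before the payload. A small sketch of that header layout, rebuilt with safe byte pushes (OEM/creator values mirrored from the diff; `build_header` is a hypothetical helper, not part of the crate):

```rust
// Sketch of the 36-byte ACPI SDT header built by Sdt::new:
// signature(4) | length(4, LE, offset 4) | revision(1) | checksum(1, offset 9)
// | oem_id(6) | oem_table_id(8) | oem_revision(4) | creator_id(4) | creator_revision(4)
fn build_header(signature: [u8; 4], length: u32, revision: u8) -> Vec<u8> {
    let mut data = Vec::with_capacity(length as usize);
    data.extend_from_slice(&signature);            // bytes 0..4
    data.extend_from_slice(&length.to_le_bytes()); // bytes 4..8
    data.push(revision);                           // byte 8
    data.push(0);                                  // byte 9: checksum placeholder
    data.extend_from_slice(b"ALICLD");             // bytes 10..16: oem id
    data.extend_from_slice(b"RUND    ");           // bytes 16..24: oem table id
    data.extend_from_slice(&1u32.to_le_bytes());   // bytes 24..28: oem revision
    data.extend_from_slice(b"ALIC");               // bytes 28..32: creator id
    data.extend_from_slice(&1u32.to_le_bytes());   // bytes 32..36: creator revision
    data
}

fn main() {
    let header = build_header(*b"TEST", 40, 1);
    // The header is always exactly 36 bytes, which is why Sdt::new asserts length >= 36.
    assert_eq!(header.len(), 36);
    // The total table length lives little-endian at offset 4, which is why
    // append/append_slice call write_u32(4, new_length).
    assert_eq!(
        u32::from_le_bytes([header[4], header[5], header[6], header[7]]),
        40
    );
}
```

The offset arithmetic also explains the magic numbers in `Sdt`: `write_u32(4, ...)` updates the length field and `self.data[9]` is the checksum slot.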
20
src/dragonball/src/dbs_address_space/Cargo.toml
Normal file
@@ -0,0 +1,20 @@
[package]
name = "dbs-address-space"
version = "0.3.0"
authors = ["Alibaba Dragonball Team"]
description = "address space manager for virtual machines."
license = "Apache-2.0"
edition = "2018"
homepage = "https://github.com/openanolis/dragonball-sandbox"
repository = "https://github.com/openanolis/dragonball-sandbox"
keywords = ["dragonball", "address", "vmm", "secure-sandbox"]
readme = "README.md"

[dependencies]
arc-swap = ">=0.4.8"
libc = "0.2.39"
nix = "0.23.1"
lazy_static = "1"
thiserror = "1"
vmm-sys-util = "0.11.0"
vm-memory = { version = "0.9", features = ["backend-mmap", "backend-atomic"] }
1
src/dragonball/src/dbs_address_space/LICENSE
Symbolic link
@@ -0,0 +1 @@
../../LICENSE
80
src/dragonball/src/dbs_address_space/README.md
Normal file
@@ -0,0 +1,80 @@
# dbs-address-space

## Design

The `dbs-address-space` crate is an address space manager for virtual machines, which manages memory and MMIO resources resident in the guest physical address space.

Main components are:
- `AddressSpaceRegion`: struct to maintain configuration information about a guest address region.
```rust
#[derive(Debug, Clone)]
pub struct AddressSpaceRegion {
    /// Type of address space regions.
    pub ty: AddressSpaceRegionType,
    /// Base address of the region in virtual machine's physical address space.
    pub base: GuestAddress,
    /// Size of the address space region.
    pub size: GuestUsize,
    /// Host NUMA node ids assigned to this region.
    pub host_numa_node_id: Option<u32>,

    /// File/offset tuple to back the memory allocation.
    file_offset: Option<FileOffset>,
    /// Mmap permission flags.
    perm_flags: i32,
    /// Hugepage madvise hint.
    ///
    /// It needs the 'advise' or 'always' policy in the host shmem config.
    is_hugepage: bool,
    /// Hotplug hint.
    is_hotplug: bool,
    /// Anonymous memory hint.
    ///
    /// It should be true for regions with the MADV_DONTFORK flag enabled.
    is_anon: bool,
}
```
- `AddressSpaceBase`: base implementation to manage guest physical address space, without support of region hotplug.
```rust
#[derive(Clone)]
pub struct AddressSpaceBase {
    regions: Vec<Arc<AddressSpaceRegion>>,
    layout: AddressSpaceLayout,
}
```
- `AddressSpace`: an address space implementation with region hotplug capability.
```rust
/// The `AddressSpace` is a wrapper over [AddressSpaceBase] to support hotplug of
/// address space regions.
#[derive(Clone)]
pub struct AddressSpace {
    state: Arc<ArcSwap<AddressSpaceBase>>,
}
```

## Usage
```rust
// 1. create several memory regions
let reg = Arc::new(
    AddressSpaceRegion::create_default_memory_region(
        GuestAddress(0x100000),
        0x100000,
        None,
        "shmem",
        "",
        false,
        false,
        false,
    )
    .unwrap()
);
let regions = vec![reg];
// 2. create the layout (depends on the arch)
let layout = AddressSpaceLayout::new(GUEST_PHYS_END, GUEST_MEM_START, GUEST_MEM_END);
// 3. create the address space from regions and layout
let address_space = AddressSpace::from_regions(regions, layout.clone());
```

## License

This project is licensed under [Apache License](http://www.apache.org/licenses/LICENSE-2.0), Version 2.0.
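The `AddressSpace` wrapper's hotplug design is copy-on-write: readers take an `Arc` snapshot of the current state, while `insert_region` builds a fresh `AddressSpaceBase` and swaps it in, so existing snapshots stay valid. A minimal std-only sketch of that pattern (the real crate uses `arc_swap::ArcSwap` for lock-free reads; this sketch models the same snapshot semantics with a `Mutex<Arc<_>>`, and `Regions`/the `u64` bases are made-up stand-ins):

```rust
use std::sync::{Arc, Mutex};

// Copy-on-write region list: readers clone an Arc snapshot,
// writers build a new list off to the side and swap it in.
struct Regions {
    state: Mutex<Arc<Vec<u64>>>, // region base addresses, stand-in for AddressSpaceRegion
}

impl Regions {
    fn snapshot(&self) -> Arc<Vec<u64>> {
        // Cheap Arc clone, not a deep copy of the vector.
        self.state.lock().unwrap().clone()
    }

    fn insert(&self, base: u64) {
        let mut guard = self.state.lock().unwrap();
        let mut next = (**guard).clone(); // build the new state off to the side
        next.push(base);
        *guard = Arc::new(next); // swap; snapshots taken earlier stay valid
    }
}

fn main() {
    let regions = Regions {
        state: Mutex::new(Arc::new(vec![0x1000])),
    };
    let old = regions.snapshot();
    regions.insert(0x2000);
    assert_eq!(old.len(), 1); // the pre-insert snapshot is unchanged
    assert_eq!(regions.snapshot().len(), 2); // a new snapshot sees the insert
}
```

`ArcSwap` removes the writer-side lock and makes the reader path a single atomic load, which is why `AddressSpace::insert_region` below rebuilds a whole `AddressSpaceBase` and calls `self.state.swap(...)`.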
830
src/dragonball/src/dbs_address_space/src/address_space.rs
Normal file
@@ -0,0 +1,830 @@
|
||||
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
|
||||
// SPDX-License-Identifier: Apache-2.0
|
||||
|
||||
//! Physical address space manager for virtual machines.
|
||||
|
||||
use std::sync::Arc;
|
||||
|
||||
use arc_swap::ArcSwap;
|
||||
use vm_memory::{GuestAddress, GuestMemoryMmap};
|
||||
|
||||
use crate::{AddressSpaceError, AddressSpaceLayout, AddressSpaceRegion, AddressSpaceRegionType};
|
||||
|
||||
/// Base implementation to manage guest physical address space, without support of region hotplug.
|
||||
#[derive(Clone)]
|
||||
pub struct AddressSpaceBase {
|
||||
regions: Vec<Arc<AddressSpaceRegion>>,
|
||||
layout: AddressSpaceLayout,
|
||||
}
|
||||
|
||||
impl AddressSpaceBase {
|
||||
/// Create an instance of `AddressSpaceBase` from an `AddressSpaceRegion` array.
|
||||
///
|
||||
/// To achieve better performance by using binary search algorithm, the `regions` vector
|
||||
/// will gotten sorted by guest physical address.
|
||||
///
|
||||
/// Note, panicking if some regions intersects with each other.
|
||||
///
|
||||
/// # Arguments
|
||||
/// * `regions` - prepared regions to managed by the address space instance.
|
||||
/// * `layout` - prepared address space layout configuration.
|
||||
pub fn from_regions(
|
||||
mut regions: Vec<Arc<AddressSpaceRegion>>,
|
||||
layout: AddressSpaceLayout,
|
||||
) -> Self {
|
||||
regions.sort_unstable_by_key(|v| v.base);
|
||||
for region in regions.iter() {
|
||||
if !layout.is_region_valid(region) {
|
||||
panic!(
|
||||
"Invalid region {:?} for address space layout {:?}",
|
||||
region, layout
|
||||
);
|
||||
}
|
||||
}
|
||||
for idx in 1..regions.len() {
|
||||
if regions[idx].intersect_with(®ions[idx - 1]) {
|
||||
panic!("address space regions intersect with each other");
|
||||
}
|
||||
}
|
||||
AddressSpaceBase { regions, layout }
|
||||
}
|
||||
|
||||
/// Insert a new address space region into the address space.
|
||||
///
|
||||
/// # Arguments
|
||||
/// * `region` - the new region to be inserted.
|
||||
pub fn insert_region(
|
||||
&mut self,
|
||||
region: Arc<AddressSpaceRegion>,
|
||||
) -> Result<(), AddressSpaceError> {
|
||||
if !self.layout.is_region_valid(®ion) {
|
||||
return Err(AddressSpaceError::InvalidAddressRange(
|
||||
region.start_addr().0,
|
||||
region.len(),
|
||||
));
|
||||
}
|
||||
for idx in 0..self.regions.len() {
|
||||
if self.regions[idx].intersect_with(®ion) {
|
||||
return Err(AddressSpaceError::InvalidAddressRange(
|
||||
region.start_addr().0,
|
||||
region.len(),
|
||||
));
|
||||
}
|
||||
}
|
||||
self.regions.push(region);
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Enumerate all regions in the address space.
|
||||
///
|
||||
/// # Arguments
|
||||
/// * `cb` - the callback function to apply to each region.
|
||||
pub fn walk_regions<F>(&self, mut cb: F) -> Result<(), AddressSpaceError>
|
||||
where
|
||||
F: FnMut(&Arc<AddressSpaceRegion>) -> Result<(), AddressSpaceError>,
|
||||
{
|
||||
for reg in self.regions.iter() {
|
||||
cb(reg)?;
|
||||
}
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Get address space layout associated with the address space.
|
||||
pub fn layout(&self) -> AddressSpaceLayout {
|
||||
self.layout.clone()
|
||||
}
|
||||
|
||||
/// Get maximum of guest physical address in the address space.
|
||||
pub fn last_addr(&self) -> GuestAddress {
|
||||
let mut last_addr = GuestAddress(self.layout.mem_start);
|
||||
for reg in self.regions.iter() {
|
||||
if reg.ty != AddressSpaceRegionType::DAXMemory && reg.last_addr() > last_addr {
|
||||
last_addr = reg.last_addr();
|
||||
}
|
||||
}
|
||||
last_addr
|
||||
}
|
||||
|
||||
/// Check whether the guest physical address `guest_addr` belongs to a DAX memory region.
|
||||
///
|
||||
/// # Arguments
|
||||
/// * `guest_addr` - the guest physical address to inquire
|
||||
pub fn is_dax_region(&self, guest_addr: GuestAddress) -> bool {
|
||||
for reg in self.regions.iter() {
|
||||
// Safe because we have validate the region when creating the address space object.
|
||||
if reg.region_type() == AddressSpaceRegionType::DAXMemory
|
||||
&& reg.start_addr() <= guest_addr
|
||||
&& reg.start_addr().0 + reg.len() > guest_addr.0
|
||||
{
|
||||
return true;
|
||||
}
|
||||
}
|
||||
false
|
||||
}
|
||||
|
||||
/// Get protection flags of memory region that guest physical address `guest_addr` belongs to.
|
||||
///
|
||||
/// # Arguments
|
||||
/// * `guest_addr` - the guest physical address to inquire
|
||||
pub fn prot_flags(&self, guest_addr: GuestAddress) -> Result<i32, AddressSpaceError> {
|
||||
for reg in self.regions.iter() {
|
||||
if reg.start_addr() <= guest_addr && reg.start_addr().0 + reg.len() > guest_addr.0 {
|
||||
return Ok(reg.prot_flags());
|
||||
}
|
||||
}
|
||||
|
||||
Err(AddressSpaceError::InvalidRegionType)
|
||||
}
|
||||
|
||||
/// Get optional NUMA node id associated with guest physical address `gpa`.
|
||||
///
|
||||
/// # Arguments
|
||||
/// * `gpa` - guest physical address to query.
|
||||
pub fn numa_node_id(&self, gpa: u64) -> Option<u32> {
|
||||
for reg in self.regions.iter() {
|
||||
if gpa >= reg.base.0 && gpa < (reg.base.0 + reg.size) {
|
||||
return reg.host_numa_node_id;
|
||||
}
|
||||
}
|
||||
None
|
||||
}
|
||||
}
|
||||
|
||||
/// An address space implementation with region hotplug capability.
|
||||
///
|
||||
/// The `AddressSpace` is a wrapper over [AddressSpaceBase] to support hotplug of
|
||||
/// address space regions.
|
||||
#[derive(Clone)]
|
||||
pub struct AddressSpace {
|
||||
state: Arc<ArcSwap<AddressSpaceBase>>,
|
||||
}
|
||||
|
||||
impl AddressSpace {
|
||||
/// Convert a [GuestMemoryMmap] object into `GuestMemoryAtomic<GuestMemoryMmap>`.
|
||||
pub fn convert_into_vm_as(
|
||||
gm: GuestMemoryMmap,
|
||||
) -> vm_memory::atomic::GuestMemoryAtomic<GuestMemoryMmap> {
|
||||
vm_memory::atomic::GuestMemoryAtomic::from(Arc::new(gm))
|
||||
}
|
||||
|
||||
/// Create an instance of `AddressSpace` from an `AddressSpaceRegion` array.
|
||||
///
|
||||
/// To achieve better performance by using binary search algorithm, the `regions` vector
|
||||
/// will gotten sorted by guest physical address.
|
||||
///
|
||||
/// Note, panicking if some regions intersects with each other.
|
||||
///
|
||||
/// # Arguments
|
||||
/// * `regions` - prepared regions to managed by the address space instance.
|
||||
/// * `layout` - prepared address space layout configuration.
|
||||
pub fn from_regions(regions: Vec<Arc<AddressSpaceRegion>>, layout: AddressSpaceLayout) -> Self {
|
||||
let base = AddressSpaceBase::from_regions(regions, layout);
|
||||
|
||||
AddressSpace {
|
||||
state: Arc::new(ArcSwap::new(Arc::new(base))),
|
||||
}
|
||||
}
|
||||
|
||||
/// Insert a new address space region into the address space.
|
||||
///
|
||||
/// # Arguments
|
||||
/// * `region` - the new region to be inserted.
|
||||
pub fn insert_region(
|
||||
&mut self,
|
||||
region: Arc<AddressSpaceRegion>,
|
||||
) -> Result<(), AddressSpaceError> {
|
||||
let curr = self.state.load().regions.clone();
|
||||
let layout = self.state.load().layout.clone();
|
||||
let mut base = AddressSpaceBase::from_regions(curr, layout);
|
||||
base.insert_region(region)?;
|
||||
let _old = self.state.swap(Arc::new(base));
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Enumerate all regions in the address space.
|
||||
///
|
||||
/// # Arguments
|
||||
/// * `cb` - the callback function to apply to each region.
|
||||
pub fn walk_regions<F>(&self, cb: F) -> Result<(), AddressSpaceError>
|
||||
where
|
||||
F: FnMut(&Arc<AddressSpaceRegion>) -> Result<(), AddressSpaceError>,
|
||||
{
|
||||
self.state.load().walk_regions(cb)
|
||||
}
|
||||
|
||||
/// Get address space layout associated with the address space.
|
||||
pub fn layout(&self) -> AddressSpaceLayout {
|
||||
self.state.load().layout()
|
||||
}
|
||||
|
||||
/// Get maximum of guest physical address in the address space.
|
||||
pub fn last_addr(&self) -> GuestAddress {
|
||||
self.state.load().last_addr()
|
||||
}
|
||||
|
||||
/// Check whether the guest physical address `guest_addr` belongs to a DAX memory region.
|
||||
///
|
||||
/// # Arguments
|
||||
/// * `guest_addr` - the guest physical address to inquire
|
||||
pub fn is_dax_region(&self, guest_addr: GuestAddress) -> bool {
|
||||
self.state.load().is_dax_region(guest_addr)
|
||||
}
|
||||
|
||||
/// Get protection flags of memory region that guest physical address `guest_addr` belongs to.
|
||||
///
|
||||
/// # Arguments
|
||||
/// * `guest_addr` - the guest physical address to inquire
|
||||
pub fn prot_flags(&self, guest_addr: GuestAddress) -> Result<i32, AddressSpaceError> {
|
||||
self.state.load().prot_flags(guest_addr)
|
||||
}
|
||||
|
||||
/// Get optional NUMA node id associated with guest physical address `gpa`.
|
||||
///
|
||||
/// # Arguments
|
||||
/// * `gpa` - guest physical address to query.
|
||||
pub fn numa_node_id(&self, gpa: u64) -> Option<u32> {
|
||||
self.state.load().numa_node_id(gpa)
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
use std::io::Write;
|
||||
use vm_memory::GuestUsize;
|
||||
use vmm_sys_util::tempfile::TempFile;
|
||||
|
||||
// define macros for unit test
|
||||
const GUEST_PHYS_END: u64 = (1 << 46) - 1;
|
||||
const GUEST_MEM_START: u64 = 0;
|
||||
const GUEST_MEM_END: u64 = GUEST_PHYS_END >> 1;
|
||||
const GUEST_DEVICE_START: u64 = GUEST_MEM_END + 1;
|
||||
|
||||
    #[test]
    fn test_address_space_base_from_regions() {
        let mut file = TempFile::new().unwrap().into_file();
        let sample_buf = &[1, 2, 3, 4, 5];
        assert!(file.write_all(sample_buf).is_ok());
        file.set_len(0x10000).unwrap();

        let reg = Arc::new(
            AddressSpaceRegion::create_device_region(GuestAddress(GUEST_DEVICE_START), 0x1000)
                .unwrap(),
        );
        let regions = vec![reg];
        let layout = AddressSpaceLayout::new(GUEST_PHYS_END, GUEST_MEM_START, GUEST_MEM_END);
        let address_space = AddressSpaceBase::from_regions(regions, layout.clone());
        assert_eq!(address_space.layout(), layout);
    }

    #[test]
    #[should_panic(expected = "Invalid region")]
    fn test_address_space_base_from_regions_when_region_invalid() {
        let reg = Arc::new(AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x100),
            0x1000,
            None,
            None,
            0,
            0,
            false,
        ));
        let regions = vec![reg];
        let layout = AddressSpaceLayout::new(0x2000, 0x200, 0x1800);
        let _address_space = AddressSpaceBase::from_regions(regions, layout);
    }

    #[test]
    #[should_panic(expected = "address space regions intersect with each other")]
    fn test_address_space_base_from_regions_when_region_intersected() {
        let reg1 = Arc::new(AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x100),
            0x200,
            None,
            None,
            0,
            0,
            false,
        ));
        let reg2 = Arc::new(AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x200),
            0x200,
            None,
            None,
            0,
            0,
            false,
        ));
        let regions = vec![reg1, reg2];
        let layout = AddressSpaceLayout::new(0x2000, 0x0, 0x1800);
        let _address_space = AddressSpaceBase::from_regions(regions, layout);
    }

    #[test]
    fn test_address_space_base_insert_region() {
        let reg1 = Arc::new(AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x100),
            0x200,
            None,
            None,
            0,
            0,
            false,
        ));
        let reg2 = Arc::new(AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x300),
            0x200,
            None,
            None,
            0,
            0,
            false,
        ));
        let regions = vec![reg1];
        let layout = AddressSpaceLayout::new(0x2000, 0x100, 0x1800);
        let mut address_space = AddressSpaceBase::from_regions(regions, layout);

        // Normal case.
        address_space.insert_region(reg2).unwrap();
        assert!(!address_space.regions[1].intersect_with(&address_space.regions[0]));

        // Error case: the region to be inserted is invalid for the layout.
        let invalid_reg = Arc::new(AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x0),
            0x100,
            None,
            None,
            0,
            0,
            false,
        ));
        assert_eq!(
            format!(
                "{:?}",
                address_space.insert_region(invalid_reg).err().unwrap()
            ),
            format!("InvalidAddressRange({:?}, {:?})", 0x0, 0x100)
        );

        // Error case: the region to be inserted intersects with existing regions.
        let intersected_reg = Arc::new(AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x400),
            0x200,
            None,
            None,
            0,
            0,
            false,
        ));
        assert_eq!(
            format!(
                "{:?}",
                address_space.insert_region(intersected_reg).err().unwrap()
            ),
            format!("InvalidAddressRange({:?}, {:?})", 0x400, 0x200)
        );
    }

    #[test]
    fn test_address_space_base_walk_regions() {
        let reg1 = Arc::new(AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x100),
            0x200,
            None,
            None,
            0,
            0,
            false,
        ));
        let reg2 = Arc::new(AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x300),
            0x200,
            None,
            None,
            0,
            0,
            false,
        ));
        let regions = vec![reg1, reg2];
        let layout = AddressSpaceLayout::new(0x2000, 0x0, 0x1800);
        let address_space = AddressSpaceBase::from_regions(regions, layout);

        // The argument of walk_regions() is a function which takes a
        // &Arc<AddressSpaceRegion> and returns a Result. It is applied to every region.
        fn do_not_have_hotplug(region: &Arc<AddressSpaceRegion>) -> Result<(), AddressSpaceError> {
            if region.is_hotplug() {
                // The error type is fixed to AddressSpaceError.
                Err(AddressSpaceError::InvalidRegionType)
            } else {
                Ok(())
            }
        }
        assert!(matches!(
            address_space.walk_regions(do_not_have_hotplug).unwrap(),
            ()
        ));
    }

    #[test]
    fn test_address_space_base_last_addr() {
        let reg1 = Arc::new(AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x100),
            0x200,
            None,
            None,
            0,
            0,
            false,
        ));
        let reg2 = Arc::new(AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x300),
            0x200,
            None,
            None,
            0,
            0,
            false,
        ));
        let regions = vec![reg1, reg2];
        let layout = AddressSpaceLayout::new(0x2000, 0x0, 0x1800);
        let address_space = AddressSpaceBase::from_regions(regions, layout);

        assert_eq!(address_space.last_addr(), GuestAddress(0x500 - 1));
    }

    #[test]
    fn test_address_space_base_is_dax_region() {
        let page_size = 4096;
        let address_space_region = vec![
            Arc::new(AddressSpaceRegion::new(
                AddressSpaceRegionType::DefaultMemory,
                GuestAddress(page_size),
                page_size as GuestUsize,
            )),
            Arc::new(AddressSpaceRegion::new(
                AddressSpaceRegionType::DefaultMemory,
                GuestAddress(page_size * 2),
                page_size as GuestUsize,
            )),
            Arc::new(AddressSpaceRegion::new(
                AddressSpaceRegionType::DAXMemory,
                GuestAddress(GUEST_DEVICE_START),
                page_size as GuestUsize,
            )),
        ];
        let layout = AddressSpaceLayout::new(GUEST_PHYS_END, GUEST_MEM_START, GUEST_MEM_END);
        let address_space = AddressSpaceBase::from_regions(address_space_region, layout);

        assert!(!address_space.is_dax_region(GuestAddress(page_size)));
        assert!(!address_space.is_dax_region(GuestAddress(page_size * 2)));
        assert!(address_space.is_dax_region(GuestAddress(GUEST_DEVICE_START)));
        assert!(address_space.is_dax_region(GuestAddress(GUEST_DEVICE_START + 1)));
        assert!(!address_space.is_dax_region(GuestAddress(GUEST_DEVICE_START + page_size)));
        assert!(address_space.is_dax_region(GuestAddress(GUEST_DEVICE_START + page_size - 1)));
    }

    #[test]
    fn test_address_space_base_prot_flags() {
        let reg1 = Arc::new(AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x100),
            0x200,
            Some(0),
            None,
            0,
            0,
            false,
        ));
        let reg2 = Arc::new(AddressSpaceRegion::new(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x300),
            0x300,
        ));
        let regions = vec![reg1, reg2];
        let layout = AddressSpaceLayout::new(0x2000, 0x0, 0x1800);
        let address_space = AddressSpaceBase::from_regions(regions, layout);

        // Normal case, reg1.
        assert_eq!(address_space.prot_flags(GuestAddress(0x200)).unwrap(), 0);
        // Normal case, reg2.
        assert_eq!(
            address_space.prot_flags(GuestAddress(0x500)).unwrap(),
            libc::PROT_READ | libc::PROT_WRITE
        );
        // Query a gpa not covered by any region.
        assert!(matches!(
            address_space.prot_flags(GuestAddress(0x600)),
            Err(AddressSpaceError::InvalidRegionType)
        ));
    }

    #[test]
    fn test_address_space_base_numa_node_id() {
        let reg1 = Arc::new(AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x100),
            0x200,
            Some(0),
            None,
            0,
            0,
            false,
        ));
        let reg2 = Arc::new(AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x300),
            0x300,
            None,
            None,
            0,
            0,
            false,
        ));
        let regions = vec![reg1, reg2];
        let layout = AddressSpaceLayout::new(0x2000, 0x0, 0x1800);
        let address_space = AddressSpaceBase::from_regions(regions, layout);

        // Normal case.
        assert_eq!(address_space.numa_node_id(0x200).unwrap(), 0);
        // Query a region whose numa node id is None.
        assert_eq!(address_space.numa_node_id(0x400), None);
        // Query a gpa not covered by any region.
        assert_eq!(address_space.numa_node_id(0x600), None);
    }

    #[test]
    fn test_address_space_convert_into_vm_as() {
        // TODO: further and more detailed tests are needed here.
        let gmm = GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0x0), 0x400)]).unwrap();
        let _vm = AddressSpace::convert_into_vm_as(gmm);
    }

    #[test]
    fn test_address_space_insert_region() {
        let reg1 = Arc::new(AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x100),
            0x200,
            None,
            None,
            0,
            0,
            false,
        ));
        let reg2 = Arc::new(AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x300),
            0x200,
            None,
            None,
            0,
            0,
            false,
        ));
        let regions = vec![reg1];
        let layout = AddressSpaceLayout::new(0x2000, 0x100, 0x1800);
        let mut address_space = AddressSpace::from_regions(regions, layout);

        // Normal case.
        assert!(matches!(address_space.insert_region(reg2).unwrap(), ()));

        // Error case: the region to be inserted is invalid for the layout.
        let invalid_reg = Arc::new(AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x0),
            0x100,
            None,
            None,
            0,
            0,
            false,
        ));
        assert_eq!(
            format!(
                "{:?}",
                address_space.insert_region(invalid_reg).err().unwrap()
            ),
            format!("InvalidAddressRange({:?}, {:?})", 0x0, 0x100)
        );

        // Error case: the region to be inserted intersects with existing regions.
        let intersected_reg = Arc::new(AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x400),
            0x200,
            None,
            None,
            0,
            0,
            false,
        ));
        assert_eq!(
            format!(
                "{:?}",
                address_space.insert_region(intersected_reg).err().unwrap()
            ),
            format!("InvalidAddressRange({:?}, {:?})", 0x400, 0x200)
        );
    }

    #[test]
    fn test_address_space_walk_regions() {
        let reg1 = Arc::new(AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x100),
            0x200,
            None,
            None,
            0,
            0,
            false,
        ));
        let reg2 = Arc::new(AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x300),
            0x200,
            None,
            None,
            0,
            0,
            false,
        ));
        let regions = vec![reg1, reg2];
        let layout = AddressSpaceLayout::new(0x2000, 0x0, 0x1800);
        let address_space = AddressSpace::from_regions(regions, layout);

        fn access_all_hotplug_flag(
            region: &Arc<AddressSpaceRegion>,
        ) -> Result<(), AddressSpaceError> {
            region.is_hotplug();
            Ok(())
        }

        assert!(matches!(
            address_space.walk_regions(access_all_hotplug_flag).unwrap(),
            ()
        ));
    }

    #[test]
    fn test_address_space_layout() {
        let reg = Arc::new(AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x100),
            0x1000,
            None,
            None,
            0,
            0,
            false,
        ));
        let regions = vec![reg];
        let layout = AddressSpaceLayout::new(0x2000, 0x0, 0x1800);
        let address_space = AddressSpace::from_regions(regions, layout.clone());

        assert_eq!(layout, address_space.layout());
    }

    #[test]
    fn test_address_space_last_addr() {
        let reg1 = Arc::new(AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x100),
            0x200,
            None,
            None,
            0,
            0,
            false,
        ));
        let reg2 = Arc::new(AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x300),
            0x200,
            None,
            None,
            0,
            0,
            false,
        ));
        let regions = vec![reg1, reg2];
        let layout = AddressSpaceLayout::new(0x2000, 0x0, 0x1800);
        let address_space = AddressSpace::from_regions(regions, layout);

        assert_eq!(address_space.last_addr(), GuestAddress(0x500 - 1));
    }

    #[test]
    fn test_address_space_is_dax_region() {
        let page_size = 4096;
        let address_space_region = vec![
            Arc::new(AddressSpaceRegion::new(
                AddressSpaceRegionType::DefaultMemory,
                GuestAddress(page_size),
                page_size as GuestUsize,
            )),
            Arc::new(AddressSpaceRegion::new(
                AddressSpaceRegionType::DefaultMemory,
                GuestAddress(page_size * 2),
                page_size as GuestUsize,
            )),
            Arc::new(AddressSpaceRegion::new(
                AddressSpaceRegionType::DAXMemory,
                GuestAddress(GUEST_DEVICE_START),
                page_size as GuestUsize,
            )),
        ];
        let layout = AddressSpaceLayout::new(GUEST_PHYS_END, GUEST_MEM_START, GUEST_MEM_END);
        let address_space = AddressSpace::from_regions(address_space_region, layout);

        assert!(!address_space.is_dax_region(GuestAddress(page_size)));
        assert!(!address_space.is_dax_region(GuestAddress(page_size * 2)));
        assert!(address_space.is_dax_region(GuestAddress(GUEST_DEVICE_START)));
        assert!(address_space.is_dax_region(GuestAddress(GUEST_DEVICE_START + 1)));
        assert!(!address_space.is_dax_region(GuestAddress(GUEST_DEVICE_START + page_size)));
        assert!(address_space.is_dax_region(GuestAddress(GUEST_DEVICE_START + page_size - 1)));
    }

    #[test]
    fn test_address_space_prot_flags() {
        let reg1 = Arc::new(AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x100),
            0x200,
            Some(0),
            None,
            0,
            0,
            false,
        ));
        let reg2 = Arc::new(AddressSpaceRegion::new(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x300),
            0x300,
        ));
        let regions = vec![reg1, reg2];
        let layout = AddressSpaceLayout::new(0x2000, 0x0, 0x1800);
        let address_space = AddressSpace::from_regions(regions, layout);

        // Normal case, reg1.
        assert_eq!(address_space.prot_flags(GuestAddress(0x200)).unwrap(), 0);
        // Normal case, reg2.
        assert_eq!(
            address_space.prot_flags(GuestAddress(0x500)).unwrap(),
            libc::PROT_READ | libc::PROT_WRITE
        );
        // Query a gpa not covered by any region.
        assert!(matches!(
            address_space.prot_flags(GuestAddress(0x600)),
            Err(AddressSpaceError::InvalidRegionType)
        ));
    }

    #[test]
    fn test_address_space_numa_node_id() {
        let reg1 = Arc::new(AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x100),
            0x200,
            Some(0),
            None,
            0,
            0,
            false,
        ));
        let reg2 = Arc::new(AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x300),
            0x300,
            None,
            None,
            0,
            0,
            false,
        ));
        let regions = vec![reg1, reg2];
        let layout = AddressSpaceLayout::new(0x2000, 0x0, 0x1800);
        let address_space = AddressSpace::from_regions(regions, layout);

        // Normal case.
        assert_eq!(address_space.numa_node_id(0x200).unwrap(), 0);
        // Query a region whose numa node id is None.
        assert_eq!(address_space.numa_node_id(0x400), None);
        // Query a gpa not covered by any region.
        assert_eq!(address_space.numa_node_id(0x600), None);
    }
}
154
src/dragonball/src/dbs_address_space/src/layout.rs
Normal file
@@ -0,0 +1,154 @@
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
// SPDX-License-Identifier: Apache-2.0

use lazy_static::lazy_static;

use crate::{AddressSpaceRegion, AddressSpaceRegionType};

// Max retry times for reading /proc/meminfo.
const PROC_READ_RETRY: u64 = 5;

lazy_static! {
    /// Upper bound of host memory.
    pub static ref USABLE_END: u64 = {
        for _ in 0..PROC_READ_RETRY {
            if let Ok(buf) = std::fs::read("/proc/meminfo") {
                let content = String::from_utf8_lossy(&buf);
                for line in content.lines() {
                    if line.starts_with("MemTotal:") {
                        if let Some(end) = line.find(" kB") {
                            if let Ok(size) = line[9..end].trim().parse::<u64>() {
                                return (size << 10) - 1;
                            }
                        }
                    }
                }
            }
        }
        panic!("Exceeded max retry times. Cannot get total mem size from /proc/meminfo");
    };
}

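The `MemTotal` parsing above can be exercised standalone against a sample string instead of the real `/proc/meminfo`; the sample content here is illustrative:

```rust
// Standalone sketch of the MemTotal parsing logic in USABLE_END above,
// run against a sample /proc/meminfo string instead of the real file.
fn usable_end(meminfo: &str) -> Option<u64> {
    for line in meminfo.lines() {
        if line.starts_with("MemTotal:") {
            if let Some(end) = line.find(" kB") {
                // Skip the 9-byte "MemTotal:" prefix, parse the kB count,
                // then convert to bytes and make it an inclusive upper bound.
                if let Ok(size) = line[9..end].trim().parse::<u64>() {
                    return Some((size << 10) - 1);
                }
            }
        }
    }
    None
}

fn main() {
    let sample = "MemTotal:       16384256 kB\nMemFree:         123456 kB\n";
    assert_eq!(usable_end(sample), Some(16384256 * 1024 - 1));
    assert_eq!(usable_end("MemFree: 1 kB\n"), None);
}
```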
/// Address space layout configuration.
///
/// The layout configuration must guarantee that `mem_start` <= `mem_end` <= `phys_end`.
/// Non-memory regions should be arranged into the range [mem_end, phys_end).
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct AddressSpaceLayout {
    /// end of guest physical address
    pub phys_end: u64,
    /// start of guest memory address
    pub mem_start: u64,
    /// end of guest memory address
    pub mem_end: u64,
    /// end of usable memory address
    pub usable_end: u64,
}

impl AddressSpaceLayout {
    /// Create a new instance of `AddressSpaceLayout`.
    pub fn new(phys_end: u64, mem_start: u64, mem_end: u64) -> Self {
        AddressSpaceLayout {
            phys_end,
            mem_start,
            mem_end,
            usable_end: *USABLE_END,
        }
    }

    /// Check whether a region is valid under the constraints of the layout.
    pub fn is_region_valid(&self, region: &AddressSpaceRegion) -> bool {
        let region_end = match region.base.0.checked_add(region.size) {
            None => return false,
            Some(v) => v,
        };

        match region.ty {
            AddressSpaceRegionType::DefaultMemory => {
                if region.base.0 < self.mem_start || region_end > self.mem_end {
                    return false;
                }
            }
            AddressSpaceRegionType::DeviceMemory | AddressSpaceRegionType::DAXMemory => {
                if region.base.0 < self.mem_end || region_end > self.phys_end {
                    return false;
                }
            }
        }

        true
    }
}

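The validity rules above (overflow check, normal memory inside `[mem_start, mem_end]`, device/DAX memory inside `[mem_end, phys_end]`) can be sketched standalone; the `Region`, `RegionType` and `Layout` types here are illustrative stand-ins, not the crate's real types:

```rust
// Minimal mock of the layout validity rules; stand-in types only.
#[derive(PartialEq)]
enum RegionType {
    DefaultMemory,
    DeviceMemory,
}

struct Region {
    base: u64,
    size: u64,
    ty: RegionType,
}

struct Layout {
    phys_end: u64,
    mem_start: u64,
    mem_end: u64,
}

impl Layout {
    fn is_region_valid(&self, r: &Region) -> bool {
        // Reject regions whose end overflows the u64 address space.
        let end = match r.base.checked_add(r.size) {
            None => return false,
            Some(v) => v,
        };
        match r.ty {
            // Normal memory must fit inside [mem_start, mem_end].
            RegionType::DefaultMemory => r.base >= self.mem_start && end <= self.mem_end,
            // Device/DAX memory must fit inside [mem_end, phys_end].
            RegionType::DeviceMemory => r.base >= self.mem_end && end <= self.phys_end,
        }
    }
}

fn main() {
    let layout = Layout { phys_end: 0x1_0000_0000, mem_start: 0x1000_0000, mem_end: 0x2000_0000 };
    assert!(layout.is_region_valid(&Region { base: 0x1000_0000, size: 0x1_0000, ty: RegionType::DefaultMemory }));
    assert!(!layout.is_region_valid(&Region { base: 0x0, size: 0x1_0000, ty: RegionType::DefaultMemory }));
    assert!(layout.is_region_valid(&Region { base: 0x8000_0000, size: 0x1_0000, ty: RegionType::DeviceMemory }));
}
```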
#[cfg(test)]
mod tests {
    use super::*;
    use vm_memory::GuestAddress;

    #[test]
    fn test_is_region_valid() {
        let layout = AddressSpaceLayout::new(0x1_0000_0000, 0x1000_0000, 0x2000_0000);

        let region = AddressSpaceRegion::new(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x0),
            0x1_0000,
        );
        assert!(!layout.is_region_valid(&region));
        let region = AddressSpaceRegion::new(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x2000_0000),
            0x1_0000,
        );
        assert!(!layout.is_region_valid(&region));
        let region = AddressSpaceRegion::new(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x1_0000),
            0x2000_0000,
        );
        assert!(!layout.is_region_valid(&region));
        let region = AddressSpaceRegion::new(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(u64::MAX),
            0x1_0000_0000,
        );
        assert!(!layout.is_region_valid(&region));
        let region = AddressSpaceRegion::new(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x1000_0000),
            0x1_0000,
        );
        assert!(layout.is_region_valid(&region));

        let region = AddressSpaceRegion::new(
            AddressSpaceRegionType::DeviceMemory,
            GuestAddress(0x1000_0000),
            0x1_0000,
        );
        assert!(!layout.is_region_valid(&region));
        let region = AddressSpaceRegion::new(
            AddressSpaceRegionType::DeviceMemory,
            GuestAddress(0x1_0000_0000),
            0x1_0000,
        );
        assert!(!layout.is_region_valid(&region));
        let region = AddressSpaceRegion::new(
            AddressSpaceRegionType::DeviceMemory,
            GuestAddress(0x1_0000),
            0x1_0000_0000,
        );
        assert!(!layout.is_region_valid(&region));
        let region = AddressSpaceRegion::new(
            AddressSpaceRegionType::DeviceMemory,
            GuestAddress(u64::MAX),
            0x1_0000_0000,
        );
        assert!(!layout.is_region_valid(&region));
        let region = AddressSpaceRegion::new(
            AddressSpaceRegionType::DeviceMemory,
            GuestAddress(0x8000_0000),
            0x1_0000,
        );
        assert!(layout.is_region_valid(&region));
    }
}
87
src/dragonball/src/dbs_address_space/src/lib.rs
Normal file
@@ -0,0 +1,87 @@
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
// SPDX-License-Identifier: Apache-2.0

#![deny(missing_docs)]

//! Traits and structs to manage guest physical address space for virtual machines.
//!
//! The [vm-memory](https://crates.io/crates/vm-memory) crate implements mechanisms to manage and
//! access guest memory resident in guest physical address space. In addition to guest memory,
//! there may be other types of devices resident in the same guest physical address space.
//!
//! The `dbs-address-space` crate provides traits and structs to manage the guest physical address
//! space for virtual machines, and mechanisms to coordinate all the devices resident in the
//! guest physical address space.

use vm_memory::GuestUsize;

mod address_space;
pub use self::address_space::{AddressSpace, AddressSpaceBase};

mod layout;
pub use layout::{AddressSpaceLayout, USABLE_END};

mod memory;
pub use memory::{GuestMemoryHybrid, GuestMemoryManager, GuestRegionHybrid, GuestRegionRaw};

mod numa;
pub use self::numa::{NumaIdTable, NumaNode, NumaNodeInfo, MPOL_MF_MOVE, MPOL_PREFERRED};

mod region;
pub use region::{AddressSpaceRegion, AddressSpaceRegionType};

/// Errors associated with virtual machine address space management.
#[derive(Debug, thiserror::Error)]
pub enum AddressSpaceError {
    /// Invalid address space region type.
    #[error("invalid address space region type")]
    InvalidRegionType,

    /// Invalid address range.
    #[error("invalid address space region (0x{0:x}, 0x{1:x})")]
    InvalidAddressRange(u64, GuestUsize),

    /// Invalid guest memory source type.
    #[error("invalid memory source type {0}")]
    InvalidMemorySourceType(String),

    /// Failed to create memfd to map anonymous memory.
    #[error("can not create memfd to map anonymous memory")]
    CreateMemFd(#[source] nix::Error),

    /// Failed to open memory file.
    #[error("can not open memory file")]
    OpenFile(#[source] std::io::Error),

    /// Failed to create directory.
    #[error("can not create directory")]
    CreateDir(#[source] std::io::Error),

    /// Failed to set size for memory file.
    #[error("can not set size for memory file")]
    SetFileSize(#[source] std::io::Error),

    /// Failed to unlink memory file.
    #[error("can not unlink memory file")]
    UnlinkFile(#[source] nix::Error),
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_error_code() {
        let e = AddressSpaceError::InvalidRegionType;

        assert_eq!(format!("{e}"), "invalid address space region type");
        assert_eq!(format!("{e:?}"), "InvalidRegionType");
        assert_eq!(
            format!(
                "{}",
                AddressSpaceError::InvalidMemorySourceType("test".to_string())
            ),
            "invalid memory source type test"
        );
    }
}
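The `#[error("…")]` attributes on `AddressSpaceError` drive the `Display` output checked in `test_error_code`. Without the `thiserror` dependency, the same formatting can be sketched by hand; the `AddrError` enum below is a standalone stand-in mirroring two of the variants:

```rust
use std::fmt;

// Hand-written equivalent of the thiserror-derived Display for two variants;
// AddrError is an illustrative stand-in, not the crate's AddressSpaceError.
#[derive(Debug)]
enum AddrError {
    InvalidRegionType,
    InvalidAddressRange(u64, u64),
}

impl fmt::Display for AddrError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            AddrError::InvalidRegionType => write!(f, "invalid address space region type"),
            AddrError::InvalidAddressRange(base, size) => {
                // Matches the #[error("invalid address space region (0x{0:x}, 0x{1:x})")] format.
                write!(f, "invalid address space region (0x{base:x}, 0x{size:x})")
            }
        }
    }
}

fn main() {
    assert_eq!(
        AddrError::InvalidRegionType.to_string(),
        "invalid address space region type"
    );
    assert_eq!(
        AddrError::InvalidAddressRange(0x100, 0x200).to_string(),
        "invalid address space region (0x100, 0x200)"
    );
}
```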
1105
src/dragonball/src/dbs_address_space/src/memory/hybrid.rs
Normal file
File diff suppressed because it is too large
193
src/dragonball/src/dbs_address_space/src/memory/mod.rs
Normal file
@@ -0,0 +1,193 @@
// Copyright (C) 2022 Alibaba Cloud. All rights reserved.
// SPDX-License-Identifier: Apache-2.0

//! Structs to manage guest memory for virtual machines.
//!
//! The `vm-memory` crate only provides traits and structs to access normal guest memory;
//! it doesn't support special guest memory like the virtio-fs/virtio-pmem DAX window etc.
//! So this crate provides `GuestMemoryManager` on top of `vm-memory` as a uniform abstraction
//! for all guest memory.
//!
//! It also provides interfaces to coordinate guest memory hotplug events.

use std::str::FromStr;
use std::sync::Arc;
use vm_memory::{GuestAddressSpace, GuestMemoryAtomic, GuestMemoryLoadGuard, GuestMemoryMmap};

mod raw_region;
pub use raw_region::GuestRegionRaw;

mod hybrid;
pub use hybrid::{GuestMemoryHybrid, GuestRegionHybrid};

/// Type of source to allocate memory for virtual machines.
#[derive(Debug, Eq, PartialEq)]
pub enum MemorySourceType {
    /// File on HugeTlbFs.
    FileOnHugeTlbFs,
    /// mmap() without flag `MAP_HUGETLB`.
    MmapAnonymous,
    /// mmap() with flag `MAP_HUGETLB`.
    MmapAnonymousHugeTlbFs,
    /// memfd() without flag `MFD_HUGETLB`.
    MemFdShared,
    /// memfd() with flag `MFD_HUGETLB`.
    MemFdOnHugeTlbFs,
}

impl MemorySourceType {
    /// Check whether the memory source is backed by huge pages.
    pub fn is_hugepage(&self) -> bool {
        *self == Self::FileOnHugeTlbFs
            || *self == Self::MmapAnonymousHugeTlbFs
            || *self == Self::MemFdOnHugeTlbFs
    }

    /// Check whether the memory source is anonymous memory.
    pub fn is_mmap_anonymous(&self) -> bool {
        *self == Self::MmapAnonymous || *self == Self::MmapAnonymousHugeTlbFs
    }
}

impl FromStr for MemorySourceType {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "hugetlbfs" => Ok(MemorySourceType::FileOnHugeTlbFs),
            "memfd" => Ok(MemorySourceType::MemFdShared),
            "shmem" => Ok(MemorySourceType::MemFdShared),
            "hugememfd" => Ok(MemorySourceType::MemFdOnHugeTlbFs),
            "hugeshmem" => Ok(MemorySourceType::MemFdOnHugeTlbFs),
            "anon" => Ok(MemorySourceType::MmapAnonymous),
            "mmap" => Ok(MemorySourceType::MmapAnonymous),
            "hugeanon" => Ok(MemorySourceType::MmapAnonymousHugeTlbFs),
            "hugemmap" => Ok(MemorySourceType::MmapAnonymousHugeTlbFs),
            _ => Err(format!("unknown memory source type {s}")),
        }
    }
}

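The alias table above maps several user-facing spellings onto one canonical source type. A standalone sketch of the same `FromStr` pattern, using or-patterns to group the aliases (the `SourceType` enum here is illustrative):

```rust
use std::str::FromStr;

// Standalone sketch of the alias table: several spellings collapse
// onto one canonical memory source type.
#[derive(Debug, PartialEq)]
enum SourceType {
    HugeTlbFs,
    MmapAnonymous,
    MemFdShared,
}

impl FromStr for SourceType {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "hugetlbfs" => Ok(SourceType::HugeTlbFs),
            // "memfd" and "shmem" are aliases for the same backend.
            "memfd" | "shmem" => Ok(SourceType::MemFdShared),
            "anon" | "mmap" => Ok(SourceType::MmapAnonymous),
            _ => Err(format!("unknown memory source type {s}")),
        }
    }
}

fn main() {
    assert_eq!("shmem".parse::<SourceType>(), Ok(SourceType::MemFdShared));
    assert_eq!("mmap".parse::<SourceType>(), Ok(SourceType::MmapAnonymous));
    assert!("bogus".parse::<SourceType>().is_err());
}
```

Grouping aliases with `|` patterns keeps one arm per canonical type, whereas the crate's table spells out each alias on its own arm; both parse identically.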
#[derive(Debug, Default)]
struct GuestMemoryHotplugManager {}

/// The `GuestMemoryManager` manages all guest memory for virtual machines.
///
/// The `GuestMemoryManager` fulfills several different responsibilities.
/// - First, it manages different types of guest memory, such as normal guest memory, the virtio-fs
///   DAX window and the virtio-pmem DAX window etc. Different clients may want to access different
///   types of memory. So the manager maintains two GuestMemory objects: one contains all guest
///   memory, the other contains only normal guest memory.
/// - Second, it coordinates memory/DAX window hotplug events, so clients may register hooks
///   to receive hotplug notifications.
#[allow(unused)]
#[derive(Debug, Clone)]
pub struct GuestMemoryManager {
    default: GuestMemoryAtomic<GuestMemoryHybrid>,
    /// GuestMemory object hosting all guest memory.
    hybrid: GuestMemoryAtomic<GuestMemoryHybrid>,
    /// GuestMemory object for vIOMMU.
    iommu: GuestMemoryAtomic<GuestMemoryHybrid>,
    /// GuestMemory object hosting normal guest memory.
    normal: GuestMemoryAtomic<GuestMemoryMmap>,
    hotplug: Arc<GuestMemoryHotplugManager>,
}

impl GuestMemoryManager {
    /// Create a new instance of `GuestMemoryManager`.
    pub fn new() -> Self {
        Self::default()
    }

    /// Get a reference to the normal `GuestMemory` object.
    pub fn get_normal_guest_memory(&self) -> &GuestMemoryAtomic<GuestMemoryMmap> {
        &self.normal
    }

    /// Try to downcast the `GuestAddressSpace` object to a `GuestMemoryManager` object.
    pub fn to_manager<AS: GuestAddressSpace>(_m: &AS) -> Option<&Self> {
        None
    }
}

impl Default for GuestMemoryManager {
    fn default() -> Self {
        let hybrid = GuestMemoryAtomic::new(GuestMemoryHybrid::new());
        let iommu = GuestMemoryAtomic::new(GuestMemoryHybrid::new());
        let normal = GuestMemoryAtomic::new(GuestMemoryMmap::new());
        // By default, serve the `GuestMemoryHybrid` object containing all guest memory.
        let default = hybrid.clone();

        GuestMemoryManager {
            default,
            hybrid,
            iommu,
            normal,
            hotplug: Arc::new(GuestMemoryHotplugManager::default()),
        }
    }
}

impl GuestAddressSpace for GuestMemoryManager {
    type M = GuestMemoryHybrid;
    type T = GuestMemoryLoadGuard<GuestMemoryHybrid>;

    fn memory(&self) -> Self::T {
        self.default.memory()
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_memory_source_type() {
        assert_eq!(
            MemorySourceType::from_str("hugetlbfs").unwrap(),
            MemorySourceType::FileOnHugeTlbFs
        );
        assert_eq!(
            MemorySourceType::from_str("memfd").unwrap(),
            MemorySourceType::MemFdShared
        );
        assert_eq!(
            MemorySourceType::from_str("shmem").unwrap(),
            MemorySourceType::MemFdShared
        );
        assert_eq!(
            MemorySourceType::from_str("hugememfd").unwrap(),
            MemorySourceType::MemFdOnHugeTlbFs
        );
        assert_eq!(
            MemorySourceType::from_str("hugeshmem").unwrap(),
            MemorySourceType::MemFdOnHugeTlbFs
        );
        assert_eq!(
            MemorySourceType::from_str("anon").unwrap(),
            MemorySourceType::MmapAnonymous
        );
        assert_eq!(
            MemorySourceType::from_str("mmap").unwrap(),
            MemorySourceType::MmapAnonymous
        );
        assert_eq!(
            MemorySourceType::from_str("hugeanon").unwrap(),
            MemorySourceType::MmapAnonymousHugeTlbFs
        );
        assert_eq!(
            MemorySourceType::from_str("hugemmap").unwrap(),
            MemorySourceType::MmapAnonymousHugeTlbFs
        );
        assert!(MemorySourceType::from_str("test").is_err());
    }

    #[ignore]
    #[test]
    fn test_to_manager() {
        let manager = GuestMemoryManager::new();
        let mgr = GuestMemoryManager::to_manager(&manager).unwrap();

        assert_eq!(&manager as *const _, mgr as *const _);
    }
}
990
src/dragonball/src/dbs_address_space/src/memory/raw_region.rs
Normal file
@@ -0,0 +1,990 @@
|
||||
// Copyright (C) 2022 Alibaba Cloud. All rights reserved.
// SPDX-License-Identifier: Apache-2.0

use std::io::{Read, Write};
use std::sync::atomic::Ordering;

use vm_memory::bitmap::{Bitmap, BS};
use vm_memory::mmap::NewBitmap;
use vm_memory::volatile_memory::compute_offset;
use vm_memory::{
    guest_memory, volatile_memory, Address, AtomicAccess, Bytes, FileOffset, GuestAddress,
    GuestMemoryRegion, GuestUsize, MemoryRegionAddress, VolatileSlice,
};

/// Guest memory region for the virtio-fs DAX window.
#[derive(Debug)]
pub struct GuestRegionRaw<B = ()> {
    guest_base: GuestAddress,
    addr: *mut u8,
    size: usize,
    bitmap: B,
}

impl<B: NewBitmap> GuestRegionRaw<B> {
    /// Create a `GuestRegionRaw` object from a raw pointer.
    ///
    /// # Safety
    /// The caller needs to ensure `addr` and `size` are valid with static lifetime.
    pub unsafe fn new(guest_base: GuestAddress, addr: *mut u8, size: usize) -> Self {
        let bitmap = B::with_len(size);

        GuestRegionRaw {
            guest_base,
            addr,
            size,
            bitmap,
        }
    }
}
impl<B: Bitmap> Bytes<MemoryRegionAddress> for GuestRegionRaw<B> {
    type E = guest_memory::Error;

    fn write(&self, buf: &[u8], addr: MemoryRegionAddress) -> guest_memory::Result<usize> {
        let maddr = addr.raw_value() as usize;
        self.as_volatile_slice()
            .unwrap()
            .write(buf, maddr)
            .map_err(Into::into)
    }

    fn read(&self, buf: &mut [u8], addr: MemoryRegionAddress) -> guest_memory::Result<usize> {
        let maddr = addr.raw_value() as usize;
        self.as_volatile_slice()
            .unwrap()
            .read(buf, maddr)
            .map_err(Into::into)
    }

    fn write_slice(&self, buf: &[u8], addr: MemoryRegionAddress) -> guest_memory::Result<()> {
        let maddr = addr.raw_value() as usize;
        self.as_volatile_slice()
            .unwrap()
            .write_slice(buf, maddr)
            .map_err(Into::into)
    }

    fn read_slice(&self, buf: &mut [u8], addr: MemoryRegionAddress) -> guest_memory::Result<()> {
        let maddr = addr.raw_value() as usize;
        self.as_volatile_slice()
            .unwrap()
            .read_slice(buf, maddr)
            .map_err(Into::into)
    }

    fn read_from<F>(
        &self,
        addr: MemoryRegionAddress,
        src: &mut F,
        count: usize,
    ) -> guest_memory::Result<usize>
    where
        F: Read,
    {
        let maddr = addr.raw_value() as usize;
        self.as_volatile_slice()
            .unwrap()
            .read_from::<F>(maddr, src, count)
            .map_err(Into::into)
    }

    fn read_exact_from<F>(
        &self,
        addr: MemoryRegionAddress,
        src: &mut F,
        count: usize,
    ) -> guest_memory::Result<()>
    where
        F: Read,
    {
        let maddr = addr.raw_value() as usize;
        self.as_volatile_slice()
            .unwrap()
            .read_exact_from::<F>(maddr, src, count)
            .map_err(Into::into)
    }

    fn write_to<F>(
        &self,
        addr: MemoryRegionAddress,
        dst: &mut F,
        count: usize,
    ) -> guest_memory::Result<usize>
    where
        F: Write,
    {
        let maddr = addr.raw_value() as usize;
        self.as_volatile_slice()
            .unwrap()
            .write_to::<F>(maddr, dst, count)
            .map_err(Into::into)
    }

    fn write_all_to<F>(
        &self,
        addr: MemoryRegionAddress,
        dst: &mut F,
        count: usize,
    ) -> guest_memory::Result<()>
    where
        F: Write,
    {
        let maddr = addr.raw_value() as usize;
        self.as_volatile_slice()
            .unwrap()
            .write_all_to::<F>(maddr, dst, count)
            .map_err(Into::into)
    }

    fn store<T: AtomicAccess>(
        &self,
        val: T,
        addr: MemoryRegionAddress,
        order: Ordering,
    ) -> guest_memory::Result<()> {
        self.as_volatile_slice().and_then(|s| {
            s.store(val, addr.raw_value() as usize, order)
                .map_err(Into::into)
        })
    }

    fn load<T: AtomicAccess>(
        &self,
        addr: MemoryRegionAddress,
        order: Ordering,
    ) -> guest_memory::Result<T> {
        self.as_volatile_slice()
            .and_then(|s| s.load(addr.raw_value() as usize, order).map_err(Into::into))
    }
}
impl<B: Bitmap> GuestMemoryRegion for GuestRegionRaw<B> {
    type B = B;

    fn len(&self) -> GuestUsize {
        self.size as GuestUsize
    }

    fn start_addr(&self) -> GuestAddress {
        self.guest_base
    }

    fn bitmap(&self) -> &Self::B {
        &self.bitmap
    }

    fn get_host_address(&self, addr: MemoryRegionAddress) -> guest_memory::Result<*mut u8> {
        // Not sure why wrapping_offset is not unsafe. Anyway this
        // is safe because we've just range-checked addr using check_address.
        self.check_address(addr)
            .ok_or(guest_memory::Error::InvalidBackendAddress)
            .map(|addr| self.addr.wrapping_offset(addr.raw_value() as isize))
    }

    fn file_offset(&self) -> Option<&FileOffset> {
        None
    }

    unsafe fn as_slice(&self) -> Option<&[u8]> {
        // This is safe because we mapped the area at addr ourselves, so this slice will not
        // overflow. However, it is possible to alias.
        Some(std::slice::from_raw_parts(self.addr, self.size))
    }

    unsafe fn as_mut_slice(&self) -> Option<&mut [u8]> {
        // This is safe because we mapped the area at addr ourselves, so this slice will not
        // overflow. However, it is possible to alias.
        Some(std::slice::from_raw_parts_mut(self.addr, self.size))
    }

    fn get_slice(
        &self,
        offset: MemoryRegionAddress,
        count: usize,
    ) -> guest_memory::Result<VolatileSlice<BS<B>>> {
        let offset = offset.raw_value() as usize;
        let end = compute_offset(offset, count)?;
        if end > self.size {
            return Err(volatile_memory::Error::OutOfBounds { addr: end }.into());
        }

        // Safe because we checked that offset + count was within our range and we only ever hand
        // out volatile accessors.
        Ok(unsafe {
            VolatileSlice::with_bitmap(
                (self.addr as usize + offset) as *mut _,
                count,
                self.bitmap.slice_at(offset),
            )
        })
    }

    #[cfg(target_os = "linux")]
    fn is_hugetlbfs(&self) -> Option<bool> {
        None
    }
}
#[cfg(test)]
mod tests {
    extern crate vmm_sys_util;

    use super::*;
    use crate::{GuestMemoryHybrid, GuestRegionHybrid};
    use std::sync::Arc;
    use vm_memory::{GuestAddressSpace, GuestMemory, VolatileMemory};

    /*
    use crate::bitmap::tests::test_guest_memory_and_region;
    use crate::bitmap::AtomicBitmap;
    use crate::GuestAddressSpace;

    use std::fs::File;
    use std::mem;
    use std::path::Path;
    use vmm_sys_util::tempfile::TempFile;

    type GuestMemoryMmap = super::GuestMemoryMmap<()>;
    type GuestRegionMmap = super::GuestRegionMmap<()>;
    type MmapRegion = super::MmapRegion<()>;
    */

    #[test]
    fn test_region_raw_new() {
        let mut buf = [0u8; 1024];
        let m =
            unsafe { GuestRegionRaw::<()>::new(GuestAddress(0x10_0000), &mut buf as *mut _, 1024) };

        assert_eq!(m.start_addr(), GuestAddress(0x10_0000));
        assert_eq!(m.len(), 1024);
    }
    /*
    fn check_guest_memory_mmap(
        maybe_guest_mem: Result<GuestMemoryMmap, Error>,
        expected_regions_summary: &[(GuestAddress, usize)],
    ) {
        assert!(maybe_guest_mem.is_ok());

        let guest_mem = maybe_guest_mem.unwrap();
        assert_eq!(guest_mem.num_regions(), expected_regions_summary.len());
        let maybe_last_mem_reg = expected_regions_summary.last();
        if let Some((region_addr, region_size)) = maybe_last_mem_reg {
            let mut last_addr = region_addr.unchecked_add(*region_size as u64);
            if last_addr.raw_value() != 0 {
                last_addr = last_addr.unchecked_sub(1);
            }
            assert_eq!(guest_mem.last_addr(), last_addr);
        }
        for ((region_addr, region_size), mmap) in expected_regions_summary
            .iter()
            .zip(guest_mem.regions.iter())
        {
            assert_eq!(region_addr, &mmap.guest_base);
            assert_eq!(region_size, &mmap.mapping.size());

            assert!(guest_mem.find_region(*region_addr).is_some());
        }
    }

    fn new_guest_memory_mmap(
        regions_summary: &[(GuestAddress, usize)],
    ) -> Result<GuestMemoryMmap, Error> {
        GuestMemoryMmap::from_ranges(regions_summary)
    }

    fn new_guest_memory_mmap_from_regions(
        regions_summary: &[(GuestAddress, usize)],
    ) -> Result<GuestMemoryMmap, Error> {
        GuestMemoryMmap::from_regions(
            regions_summary
                .iter()
                .map(|(region_addr, region_size)| {
                    GuestRegionMmap::new(MmapRegion::new(*region_size).unwrap(), *region_addr)
                        .unwrap()
                })
                .collect(),
        )
    }

    fn new_guest_memory_mmap_from_arc_regions(
        regions_summary: &[(GuestAddress, usize)],
    ) -> Result<GuestMemoryMmap, Error> {
        GuestMemoryMmap::from_arc_regions(
            regions_summary
                .iter()
                .map(|(region_addr, region_size)| {
                    Arc::new(
                        GuestRegionMmap::new(MmapRegion::new(*region_size).unwrap(), *region_addr)
                            .unwrap(),
                    )
                })
                .collect(),
        )
    }

    fn new_guest_memory_mmap_with_files(
        regions_summary: &[(GuestAddress, usize)],
    ) -> Result<GuestMemoryMmap, Error> {
        let regions: Vec<(GuestAddress, usize, Option<FileOffset>)> = regions_summary
            .iter()
            .map(|(region_addr, region_size)| {
                let f = TempFile::new().unwrap().into_file();
                f.set_len(*region_size as u64).unwrap();

                (*region_addr, *region_size, Some(FileOffset::new(f, 0)))
            })
            .collect();

        GuestMemoryMmap::from_ranges_with_files(&regions)
    }
    */

    #[test]
    fn slice_addr() {
        let mut buf = [0u8; 1024];
        let m =
            unsafe { GuestRegionRaw::<()>::new(GuestAddress(0x10_0000), &mut buf as *mut _, 1024) };

        let s = m.get_slice(MemoryRegionAddress(2), 3).unwrap();
        assert_eq!(s.as_ptr(), &mut buf[2] as *mut _);
    }

    /*
    #[test]
    fn test_address_in_range() {
        let f1 = TempFile::new().unwrap().into_file();
        f1.set_len(0x400).unwrap();
        let f2 = TempFile::new().unwrap().into_file();
        f2.set_len(0x400).unwrap();

        let start_addr1 = GuestAddress(0x0);
        let start_addr2 = GuestAddress(0x800);
        let guest_mem =
            GuestMemoryMmap::from_ranges(&[(start_addr1, 0x400), (start_addr2, 0x400)]).unwrap();
        let guest_mem_backed_by_file = GuestMemoryMmap::from_ranges_with_files(&[
            (start_addr1, 0x400, Some(FileOffset::new(f1, 0))),
            (start_addr2, 0x400, Some(FileOffset::new(f2, 0))),
        ])
        .unwrap();

        let guest_mem_list = vec![guest_mem, guest_mem_backed_by_file];
        for guest_mem in guest_mem_list.iter() {
            assert!(guest_mem.address_in_range(GuestAddress(0x200)));
            assert!(!guest_mem.address_in_range(GuestAddress(0x600)));
            assert!(guest_mem.address_in_range(GuestAddress(0xa00)));
            assert!(!guest_mem.address_in_range(GuestAddress(0xc00)));
        }
    }

    #[test]
    fn test_check_address() {
        let f1 = TempFile::new().unwrap().into_file();
        f1.set_len(0x400).unwrap();
        let f2 = TempFile::new().unwrap().into_file();
        f2.set_len(0x400).unwrap();

        let start_addr1 = GuestAddress(0x0);
        let start_addr2 = GuestAddress(0x800);
        let guest_mem =
            GuestMemoryMmap::from_ranges(&[(start_addr1, 0x400), (start_addr2, 0x400)]).unwrap();
        let guest_mem_backed_by_file = GuestMemoryMmap::from_ranges_with_files(&[
            (start_addr1, 0x400, Some(FileOffset::new(f1, 0))),
            (start_addr2, 0x400, Some(FileOffset::new(f2, 0))),
        ])
        .unwrap();

        let guest_mem_list = vec![guest_mem, guest_mem_backed_by_file];
        for guest_mem in guest_mem_list.iter() {
            assert_eq!(
                guest_mem.check_address(GuestAddress(0x200)),
                Some(GuestAddress(0x200))
            );
            assert_eq!(guest_mem.check_address(GuestAddress(0x600)), None);
            assert_eq!(
                guest_mem.check_address(GuestAddress(0xa00)),
                Some(GuestAddress(0xa00))
            );
            assert_eq!(guest_mem.check_address(GuestAddress(0xc00)), None);
        }
    }

    #[test]
    fn test_to_region_addr() {
        let f1 = TempFile::new().unwrap().into_file();
        f1.set_len(0x400).unwrap();
        let f2 = TempFile::new().unwrap().into_file();
        f2.set_len(0x400).unwrap();

        let start_addr1 = GuestAddress(0x0);
        let start_addr2 = GuestAddress(0x800);
        let guest_mem =
            GuestMemoryMmap::from_ranges(&[(start_addr1, 0x400), (start_addr2, 0x400)]).unwrap();
        let guest_mem_backed_by_file = GuestMemoryMmap::from_ranges_with_files(&[
            (start_addr1, 0x400, Some(FileOffset::new(f1, 0))),
            (start_addr2, 0x400, Some(FileOffset::new(f2, 0))),
        ])
        .unwrap();

        let guest_mem_list = vec![guest_mem, guest_mem_backed_by_file];
        for guest_mem in guest_mem_list.iter() {
            assert!(guest_mem.to_region_addr(GuestAddress(0x600)).is_none());
            let (r0, addr0) = guest_mem.to_region_addr(GuestAddress(0x800)).unwrap();
            let (r1, addr1) = guest_mem.to_region_addr(GuestAddress(0xa00)).unwrap();
            assert!(r0.as_ptr() == r1.as_ptr());
            assert_eq!(addr0, MemoryRegionAddress(0));
            assert_eq!(addr1, MemoryRegionAddress(0x200));
        }
    }

    #[test]
    fn test_get_host_address() {
        let f1 = TempFile::new().unwrap().into_file();
        f1.set_len(0x400).unwrap();
        let f2 = TempFile::new().unwrap().into_file();
        f2.set_len(0x400).unwrap();

        let start_addr1 = GuestAddress(0x0);
        let start_addr2 = GuestAddress(0x800);
        let guest_mem =
            GuestMemoryMmap::from_ranges(&[(start_addr1, 0x400), (start_addr2, 0x400)]).unwrap();
        let guest_mem_backed_by_file = GuestMemoryMmap::from_ranges_with_files(&[
            (start_addr1, 0x400, Some(FileOffset::new(f1, 0))),
            (start_addr2, 0x400, Some(FileOffset::new(f2, 0))),
        ])
        .unwrap();

        let guest_mem_list = vec![guest_mem, guest_mem_backed_by_file];
        for guest_mem in guest_mem_list.iter() {
            assert!(guest_mem.get_host_address(GuestAddress(0x600)).is_err());
            let ptr0 = guest_mem.get_host_address(GuestAddress(0x800)).unwrap();
            let ptr1 = guest_mem.get_host_address(GuestAddress(0xa00)).unwrap();
            assert_eq!(
                ptr0,
                guest_mem.find_region(GuestAddress(0x800)).unwrap().as_ptr()
            );
            assert_eq!(unsafe { ptr0.offset(0x200) }, ptr1);
        }
    }

    #[test]
    fn test_deref() {
        let f = TempFile::new().unwrap().into_file();
        f.set_len(0x400).unwrap();

        let start_addr = GuestAddress(0x0);
        let guest_mem = GuestMemoryMmap::from_ranges(&[(start_addr, 0x400)]).unwrap();
        let guest_mem_backed_by_file = GuestMemoryMmap::from_ranges_with_files(&[(
            start_addr,
            0x400,
            Some(FileOffset::new(f, 0)),
        )])
        .unwrap();

        let guest_mem_list = vec![guest_mem, guest_mem_backed_by_file];
        for guest_mem in guest_mem_list.iter() {
            let sample_buf = &[1, 2, 3, 4, 5];

            assert_eq!(guest_mem.write(sample_buf, start_addr).unwrap(), 5);
            let slice = guest_mem
                .find_region(GuestAddress(0))
                .unwrap()
                .as_volatile_slice()
                .unwrap();

            let buf = &mut [0, 0, 0, 0, 0];
            assert_eq!(slice.read(buf, 0).unwrap(), 5);
            assert_eq!(buf, sample_buf);
        }
    }

    #[test]
    fn test_read_u64() {
        let f1 = TempFile::new().unwrap().into_file();
        f1.set_len(0x1000).unwrap();
        let f2 = TempFile::new().unwrap().into_file();
        f2.set_len(0x1000).unwrap();

        let start_addr1 = GuestAddress(0x0);
        let start_addr2 = GuestAddress(0x1000);
        let bad_addr = GuestAddress(0x2001);
        let bad_addr2 = GuestAddress(0x1ffc);
        let max_addr = GuestAddress(0x2000);

        let gm =
            GuestMemoryMmap::from_ranges(&[(start_addr1, 0x1000), (start_addr2, 0x1000)]).unwrap();
        let gm_backed_by_file = GuestMemoryMmap::from_ranges_with_files(&[
            (start_addr1, 0x1000, Some(FileOffset::new(f1, 0))),
            (start_addr2, 0x1000, Some(FileOffset::new(f2, 0))),
        ])
        .unwrap();

        let gm_list = vec![gm, gm_backed_by_file];
        for gm in gm_list.iter() {
            let val1: u64 = 0xaa55_aa55_aa55_aa55;
            let val2: u64 = 0x55aa_55aa_55aa_55aa;
            assert_eq!(
                format!("{:?}", gm.write_obj(val1, bad_addr).err().unwrap()),
                format!("InvalidGuestAddress({:?})", bad_addr,)
            );
            assert_eq!(
                format!("{:?}", gm.write_obj(val1, bad_addr2).err().unwrap()),
                format!(
                    "PartialBuffer {{ expected: {:?}, completed: {:?} }}",
                    mem::size_of::<u64>(),
                    max_addr.checked_offset_from(bad_addr2).unwrap()
                )
            );

            gm.write_obj(val1, GuestAddress(0x500)).unwrap();
            gm.write_obj(val2, GuestAddress(0x1000 + 32)).unwrap();
            let num1: u64 = gm.read_obj(GuestAddress(0x500)).unwrap();
            let num2: u64 = gm.read_obj(GuestAddress(0x1000 + 32)).unwrap();
            assert_eq!(val1, num1);
            assert_eq!(val2, num2);
        }
    }

    #[test]
    fn write_and_read() {
        let f = TempFile::new().unwrap().into_file();
        f.set_len(0x400).unwrap();

        let mut start_addr = GuestAddress(0x1000);
        let gm = GuestMemoryMmap::from_ranges(&[(start_addr, 0x400)]).unwrap();
        let gm_backed_by_file = GuestMemoryMmap::from_ranges_with_files(&[(
            start_addr,
            0x400,
            Some(FileOffset::new(f, 0)),
        )])
        .unwrap();

        let gm_list = vec![gm, gm_backed_by_file];
        for gm in gm_list.iter() {
            let sample_buf = &[1, 2, 3, 4, 5];

            assert_eq!(gm.write(sample_buf, start_addr).unwrap(), 5);

            let buf = &mut [0u8; 5];
            assert_eq!(gm.read(buf, start_addr).unwrap(), 5);
            assert_eq!(buf, sample_buf);

            start_addr = GuestAddress(0x13ff);
            assert_eq!(gm.write(sample_buf, start_addr).unwrap(), 1);
            assert_eq!(gm.read(buf, start_addr).unwrap(), 1);
            assert_eq!(buf[0], sample_buf[0]);
            start_addr = GuestAddress(0x1000);
        }
    }

    #[test]
    fn read_to_and_write_from_mem() {
        let f = TempFile::new().unwrap().into_file();
        f.set_len(0x400).unwrap();

        let gm = GuestMemoryMmap::from_ranges(&[(GuestAddress(0x1000), 0x400)]).unwrap();
        let gm_backed_by_file = GuestMemoryMmap::from_ranges_with_files(&[(
            GuestAddress(0x1000),
            0x400,
            Some(FileOffset::new(f, 0)),
        )])
        .unwrap();

        let gm_list = vec![gm, gm_backed_by_file];
        for gm in gm_list.iter() {
            let addr = GuestAddress(0x1010);
            let mut file = if cfg!(unix) {
                File::open(Path::new("/dev/zero")).unwrap()
            } else {
                File::open(Path::new("c:\\Windows\\system32\\ntoskrnl.exe")).unwrap()
            };
            gm.write_obj(!0u32, addr).unwrap();
            gm.read_exact_from(addr, &mut file, mem::size_of::<u32>())
                .unwrap();
            let value: u32 = gm.read_obj(addr).unwrap();
            if cfg!(unix) {
                assert_eq!(value, 0);
            } else {
                assert_eq!(value, 0x0090_5a4d);
            }

            let mut sink = Vec::new();
            gm.write_all_to(addr, &mut sink, mem::size_of::<u32>())
                .unwrap();
            if cfg!(unix) {
                assert_eq!(sink, vec![0; mem::size_of::<u32>()]);
            } else {
                assert_eq!(sink, vec![0x4d, 0x5a, 0x90, 0x00]);
            };
        }
    }

    #[test]
    fn create_vec_with_regions() {
        let region_size = 0x400;
        let regions = vec![
            (GuestAddress(0x0), region_size),
            (GuestAddress(0x1000), region_size),
        ];
        let mut iterated_regions = Vec::new();
        let gm = GuestMemoryMmap::from_ranges(&regions).unwrap();

        for region in gm.iter() {
            assert_eq!(region.len(), region_size as GuestUsize);
        }

        for region in gm.iter() {
            iterated_regions.push((region.start_addr(), region.len() as usize));
        }
        assert_eq!(regions, iterated_regions);

        assert!(regions
            .iter()
            .map(|x| (x.0, x.1))
            .eq(iterated_regions.iter().copied()));

        assert_eq!(gm.regions[0].guest_base, regions[0].0);
        assert_eq!(gm.regions[1].guest_base, regions[1].0);
    }

    #[test]
    fn test_memory() {
        let region_size = 0x400;
        let regions = vec![
            (GuestAddress(0x0), region_size),
            (GuestAddress(0x1000), region_size),
        ];
        let mut iterated_regions = Vec::new();
        let gm = Arc::new(GuestMemoryMmap::from_ranges(&regions).unwrap());
        let mem = gm.memory();

        for region in mem.iter() {
            assert_eq!(region.len(), region_size as GuestUsize);
        }

        for region in mem.iter() {
            iterated_regions.push((region.start_addr(), region.len() as usize));
        }
        assert_eq!(regions, iterated_regions);

        assert!(regions
            .iter()
            .map(|x| (x.0, x.1))
            .eq(iterated_regions.iter().copied()));

        assert_eq!(gm.regions[0].guest_base, regions[0].0);
        assert_eq!(gm.regions[1].guest_base, regions[1].0);
    }

    #[test]
    fn test_access_cross_boundary() {
        let f1 = TempFile::new().unwrap().into_file();
        f1.set_len(0x1000).unwrap();
        let f2 = TempFile::new().unwrap().into_file();
        f2.set_len(0x1000).unwrap();

        let start_addr1 = GuestAddress(0x0);
        let start_addr2 = GuestAddress(0x1000);
        let gm =
            GuestMemoryMmap::from_ranges(&[(start_addr1, 0x1000), (start_addr2, 0x1000)]).unwrap();
        let gm_backed_by_file = GuestMemoryMmap::from_ranges_with_files(&[
            (start_addr1, 0x1000, Some(FileOffset::new(f1, 0))),
            (start_addr2, 0x1000, Some(FileOffset::new(f2, 0))),
        ])
        .unwrap();

        let gm_list = vec![gm, gm_backed_by_file];
        for gm in gm_list.iter() {
            let sample_buf = &[1, 2, 3, 4, 5];
            assert_eq!(gm.write(sample_buf, GuestAddress(0xffc)).unwrap(), 5);
            let buf = &mut [0u8; 5];
            assert_eq!(gm.read(buf, GuestAddress(0xffc)).unwrap(), 5);
            assert_eq!(buf, sample_buf);
        }
    }

    #[test]
    fn test_retrieve_fd_backing_memory_region() {
        let f = TempFile::new().unwrap().into_file();
        f.set_len(0x400).unwrap();

        let start_addr = GuestAddress(0x0);
        let gm = GuestMemoryMmap::from_ranges(&[(start_addr, 0x400)]).unwrap();
        assert!(gm.find_region(start_addr).is_some());
        let region = gm.find_region(start_addr).unwrap();
        assert!(region.file_offset().is_none());

        let gm = GuestMemoryMmap::from_ranges_with_files(&[(
            start_addr,
            0x400,
            Some(FileOffset::new(f, 0)),
        )])
        .unwrap();
        assert!(gm.find_region(start_addr).is_some());
        let region = gm.find_region(start_addr).unwrap();
        assert!(region.file_offset().is_some());
    }

    // Windows needs a dedicated test where it will retrieve the allocation
    // granularity to determine a proper offset (other than 0) that can be
    // used for the backing file. Refer to Microsoft docs here:
    // https://docs.microsoft.com/en-us/windows/desktop/api/memoryapi/nf-memoryapi-mapviewoffile
    #[test]
    #[cfg(unix)]
    fn test_retrieve_offset_from_fd_backing_memory_region() {
        let f = TempFile::new().unwrap().into_file();
        f.set_len(0x1400).unwrap();
        // Needs to be aligned on 4k, otherwise mmap will fail.
        let offset = 0x1000;

        let start_addr = GuestAddress(0x0);
        let gm = GuestMemoryMmap::from_ranges(&[(start_addr, 0x400)]).unwrap();
        assert!(gm.find_region(start_addr).is_some());
        let region = gm.find_region(start_addr).unwrap();
        assert!(region.file_offset().is_none());

        let gm = GuestMemoryMmap::from_ranges_with_files(&[(
            start_addr,
            0x400,
            Some(FileOffset::new(f, offset)),
        )])
        .unwrap();
        assert!(gm.find_region(start_addr).is_some());
        let region = gm.find_region(start_addr).unwrap();
        assert!(region.file_offset().is_some());
        assert_eq!(region.file_offset().unwrap().start(), offset);
    }
    */
    #[test]
    fn test_mmap_insert_region() {
        let start_addr1 = GuestAddress(0);
        let start_addr2 = GuestAddress(0x10_0000);

        let guest_mem = GuestMemoryHybrid::<()>::new();
        let mut raw_buf = [0u8; 0x1000];
        let raw_ptr = &mut raw_buf as *mut u8;
        let reg = unsafe { GuestRegionRaw::<()>::new(start_addr1, raw_ptr, 0x1000) };
        let guest_mem = guest_mem
            .insert_region(Arc::new(GuestRegionHybrid::from_raw_region(reg)))
            .unwrap();
        let reg = unsafe { GuestRegionRaw::<()>::new(start_addr2, raw_ptr, 0x1000) };
        let gm = &guest_mem
            .insert_region(Arc::new(GuestRegionHybrid::from_raw_region(reg)))
            .unwrap();
        let mem_orig = gm.memory();
        assert_eq!(mem_orig.num_regions(), 2);

        let reg = unsafe { GuestRegionRaw::new(GuestAddress(0x8000), raw_ptr, 0x1000) };
        let mmap = Arc::new(GuestRegionHybrid::from_raw_region(reg));
        let gm = gm.insert_region(mmap).unwrap();
        let reg = unsafe { GuestRegionRaw::new(GuestAddress(0x4000), raw_ptr, 0x1000) };
        let mmap = Arc::new(GuestRegionHybrid::from_raw_region(reg));
        let gm = gm.insert_region(mmap).unwrap();
        let reg = unsafe { GuestRegionRaw::new(GuestAddress(0xc000), raw_ptr, 0x1000) };
        let mmap = Arc::new(GuestRegionHybrid::from_raw_region(reg));
        let gm = gm.insert_region(mmap).unwrap();
        let reg = unsafe { GuestRegionRaw::new(GuestAddress(0xc000), raw_ptr, 0x1000) };
        let mmap = Arc::new(GuestRegionHybrid::from_raw_region(reg));
        gm.insert_region(mmap).unwrap_err();

        assert_eq!(mem_orig.num_regions(), 2);
        assert_eq!(gm.num_regions(), 5);

        assert_eq!(gm.regions[0].start_addr(), GuestAddress(0x0000));
        assert_eq!(gm.regions[1].start_addr(), GuestAddress(0x4000));
        assert_eq!(gm.regions[2].start_addr(), GuestAddress(0x8000));
        assert_eq!(gm.regions[3].start_addr(), GuestAddress(0xc000));
        assert_eq!(gm.regions[4].start_addr(), GuestAddress(0x10_0000));
    }

    #[test]
    fn test_mmap_remove_region() {
        let start_addr1 = GuestAddress(0);
        let start_addr2 = GuestAddress(0x10_0000);

        let guest_mem = GuestMemoryHybrid::<()>::new();
        let mut raw_buf = [0u8; 0x1000];
        let reg = unsafe { GuestRegionRaw::<()>::new(start_addr1, &mut raw_buf as *mut _, 0x1000) };
        let guest_mem = guest_mem
            .insert_region(Arc::new(GuestRegionHybrid::from_raw_region(reg)))
            .unwrap();
        let reg = unsafe { GuestRegionRaw::<()>::new(start_addr2, &mut raw_buf as *mut _, 0x1000) };
        let gm = &guest_mem
            .insert_region(Arc::new(GuestRegionHybrid::from_raw_region(reg)))
            .unwrap();
        let mem_orig = gm.memory();
        assert_eq!(mem_orig.num_regions(), 2);

        gm.remove_region(GuestAddress(0), 128).unwrap_err();
        gm.remove_region(GuestAddress(0x4000), 128).unwrap_err();
        let (gm, region) = gm.remove_region(GuestAddress(0x10_0000), 0x1000).unwrap();

        assert_eq!(mem_orig.num_regions(), 2);
        assert_eq!(gm.num_regions(), 1);

        assert_eq!(gm.regions[0].start_addr(), GuestAddress(0x0000));
        assert_eq!(region.start_addr(), GuestAddress(0x10_0000));
    }

    #[test]
    fn test_guest_memory_mmap_get_slice() {
        let start_addr1 = GuestAddress(0);
        let mut raw_buf = [0u8; 0x400];
        let region =
            unsafe { GuestRegionRaw::<()>::new(start_addr1, &mut raw_buf as *mut _, 0x400) };

        // Normal case.
        let slice_addr = MemoryRegionAddress(0x100);
        let slice_size = 0x200;
        let slice = region.get_slice(slice_addr, slice_size).unwrap();
        assert_eq!(slice.len(), slice_size);

        // Empty slice.
        let slice_addr = MemoryRegionAddress(0x200);
        let slice_size = 0x0;
        let slice = region.get_slice(slice_addr, slice_size).unwrap();
        assert!(slice.is_empty());

        // Error case when slice_size is beyond the boundary.
        let slice_addr = MemoryRegionAddress(0x300);
        let slice_size = 0x200;
        assert!(region.get_slice(slice_addr, slice_size).is_err());
    }

    #[test]
    fn test_guest_memory_mmap_as_volatile_slice() {
        let start_addr1 = GuestAddress(0);
        let mut raw_buf = [0u8; 0x400];
        let region =
            unsafe { GuestRegionRaw::<()>::new(start_addr1, &mut raw_buf as *mut _, 0x400) };
        let region_size = 0x400;

        // Test slice length.
        let slice = region.as_volatile_slice().unwrap();
        assert_eq!(slice.len(), region_size);

        // Test slice data.
        let v = 0x1234_5678u32;
        let r = slice.get_ref::<u32>(0x200).unwrap();
        r.store(v);
        assert_eq!(r.load(), v);
    }

    #[test]
    fn test_guest_memory_get_slice() {
        let start_addr1 = GuestAddress(0);
        let start_addr2 = GuestAddress(0x800);

        let guest_mem = GuestMemoryHybrid::<()>::new();
        let mut raw_buf = [0u8; 0x400];
        let reg = unsafe { GuestRegionRaw::<()>::new(start_addr1, &mut raw_buf as *mut _, 0x400) };
        let guest_mem = guest_mem
            .insert_region(Arc::new(GuestRegionHybrid::from_raw_region(reg)))
            .unwrap();
        let reg = unsafe { GuestRegionRaw::<()>::new(start_addr2, &mut raw_buf as *mut _, 0x400) };
        let guest_mem = guest_mem
            .insert_region(Arc::new(GuestRegionHybrid::from_raw_region(reg)))
            .unwrap();

        // Normal cases.
        let slice_size = 0x200;
        let slice = guest_mem
            .get_slice(GuestAddress(0x100), slice_size)
            .unwrap();
        assert_eq!(slice.len(), slice_size);

        let slice_size = 0x400;
        let slice = guest_mem
            .get_slice(GuestAddress(0x800), slice_size)
            .unwrap();
        assert_eq!(slice.len(), slice_size);

        // Empty slice.
        assert!(guest_mem
            .get_slice(GuestAddress(0x900), 0)
            .unwrap()
            .is_empty());

        // Error cases, wrong size or base address.
        assert!(guest_mem.get_slice(GuestAddress(0), 0x500).is_err());
        assert!(guest_mem.get_slice(GuestAddress(0x600), 0x100).is_err());
        assert!(guest_mem.get_slice(GuestAddress(0xc00), 0x100).is_err());
    }

    #[test]
    fn test_checked_offset() {
        let start_addr1 = GuestAddress(0);
        let start_addr2 = GuestAddress(0x800);
        let start_addr3 = GuestAddress(0xc00);

        let guest_mem = GuestMemoryHybrid::<()>::new();
        let mut raw_buf = [0u8; 0x400];
        let reg = unsafe { GuestRegionRaw::<()>::new(start_addr1, &mut raw_buf as *mut _, 0x400) };
        let guest_mem = guest_mem
            .insert_region(Arc::new(GuestRegionHybrid::from_raw_region(reg)))
            .unwrap();
        let reg = unsafe { GuestRegionRaw::<()>::new(start_addr2, &mut raw_buf as *mut _, 0x400) };
        let guest_mem = guest_mem
            .insert_region(Arc::new(GuestRegionHybrid::from_raw_region(reg)))
            .unwrap();
        let reg = unsafe { GuestRegionRaw::<()>::new(start_addr3, &mut raw_buf as *mut _, 0x400) };
        let guest_mem = guest_mem
            .insert_region(Arc::new(GuestRegionHybrid::from_raw_region(reg)))
            .unwrap();

        assert_eq!(
            guest_mem.checked_offset(start_addr1, 0x200),
            Some(GuestAddress(0x200))
        );
        assert_eq!(
            guest_mem.checked_offset(start_addr1, 0xa00),
            Some(GuestAddress(0xa00))
        );
        assert_eq!(
            guest_mem.checked_offset(start_addr2, 0x7ff),
            Some(GuestAddress(0xfff))
        );
        assert_eq!(guest_mem.checked_offset(start_addr2, 0xc00), None);
        assert_eq!(guest_mem.checked_offset(start_addr1, std::usize::MAX), None);

        assert_eq!(guest_mem.checked_offset(start_addr1, 0x400), None);
        assert_eq!(
            guest_mem.checked_offset(start_addr1, 0x400 - 1),
            Some(GuestAddress(0x400 - 1))
        );
    }

    #[test]
    fn test_check_range() {
        let start_addr1 = GuestAddress(0);
        let start_addr2 = GuestAddress(0x800);
        let start_addr3 = GuestAddress(0xc00);

        let guest_mem = GuestMemoryHybrid::<()>::new();
        let mut raw_buf = [0u8; 0x400];
        let reg = unsafe { GuestRegionRaw::<()>::new(start_addr1, &mut raw_buf as *mut _, 0x400) };
        let guest_mem = guest_mem
            .insert_region(Arc::new(GuestRegionHybrid::from_raw_region(reg)))
            .unwrap();
        let reg = unsafe { GuestRegionRaw::<()>::new(start_addr2, &mut raw_buf as *mut _, 0x400) };
let guest_mem = guest_mem
|
||||
.insert_region(Arc::new(GuestRegionHybrid::from_raw_region(reg)))
|
||||
.unwrap();
|
||||
let reg = unsafe { GuestRegionRaw::<()>::new(start_addr3, &mut raw_buf as *mut _, 0x400) };
|
||||
let guest_mem = guest_mem
|
||||
.insert_region(Arc::new(GuestRegionHybrid::from_raw_region(reg)))
|
||||
.unwrap();
|
||||
|
||||
assert!(guest_mem.check_range(start_addr1, 0x0));
|
||||
assert!(guest_mem.check_range(start_addr1, 0x200));
|
||||
assert!(guest_mem.check_range(start_addr1, 0x400));
|
||||
assert!(!guest_mem.check_range(start_addr1, 0xa00));
|
||||
assert!(guest_mem.check_range(start_addr2, 0x7ff));
|
||||
assert!(guest_mem.check_range(start_addr2, 0x800));
|
||||
assert!(!guest_mem.check_range(start_addr2, 0x801));
|
||||
assert!(!guest_mem.check_range(start_addr2, 0xc00));
|
||||
assert!(!guest_mem.check_range(start_addr1, usize::MAX));
|
||||
}
|
||||
}
|
||||
85
src/dragonball/src/dbs_address_space/src/numa.rs
Normal file
@@ -0,0 +1,85 @@
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
// SPDX-License-Identifier: Apache-2.0

//! Types for NUMA information.

use vm_memory::{GuestAddress, GuestUsize};

/// mbind() policy that prefers the given node and does not lead to OOM.
pub const MPOL_PREFERRED: u32 = 1;

/// mbind() flag to move existing pages so that they conform to the policy.
pub const MPOL_MF_MOVE: u32 = 2;

/// Type for recording the NUMA ids of different devices.
pub struct NumaIdTable {
    /// NUMA id of each memory region.
    pub memory: Vec<u32>,
    /// NUMA id of each cpu.
    pub cpu: Vec<u32>,
}

/// Record the memory information of a NUMA node.
#[derive(Debug, Default, Clone, Copy, PartialEq, Eq)]
pub struct NumaNodeInfo {
    /// Base address of the region in guest physical address space.
    pub base: GuestAddress,
    /// Size of the address region.
    pub size: GuestUsize,
}

/// Record the information of all regions of a NUMA node.
#[derive(Debug, Default, Clone, PartialEq, Eq)]
pub struct NumaNode {
    region_infos: Vec<NumaNodeInfo>,
    vcpu_ids: Vec<u32>,
}

impl NumaNode {
    /// Get a reference to the region_infos of the NUMA node.
    pub fn region_infos(&self) -> &Vec<NumaNodeInfo> {
        &self.region_infos
    }

    /// Get the vcpu ids belonging to the NUMA node.
    pub fn vcpu_ids(&self) -> &Vec<u32> {
        &self.vcpu_ids
    }

    /// Add a new NUMA region info into this NUMA node.
    pub fn add_info(&mut self, info: &NumaNodeInfo) {
        self.region_infos.push(*info);
    }

    /// Add a group of vcpu ids belonging to this NUMA node.
    pub fn add_vcpu_ids(&mut self, vcpu_ids: &[u32]) {
        self.vcpu_ids.extend(vcpu_ids)
    }

    /// Create a new NUMA node struct.
    pub fn new() -> NumaNode {
        NumaNode {
            region_infos: Vec::new(),
            vcpu_ids: Vec::new(),
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_create_numa_node() {
        let mut numa_node = NumaNode::new();
        let info = NumaNodeInfo {
            base: GuestAddress(0),
            size: 1024,
        };
        numa_node.add_info(&info);
        assert_eq!(*numa_node.region_infos(), vec![info]);
        let vcpu_ids = vec![0, 1, 2, 3];
        numa_node.add_vcpu_ids(&vcpu_ids);
        assert_eq!(*numa_node.vcpu_ids(), vcpu_ids);
    }
}
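The `NumaNode` container above is a thin wrapper over two growable vectors. The sketch below shows how region and vCPU bookkeeping composes in practice; it is a simplified stand-in where plain `u64` replaces `vm_memory::GuestAddress`/`GuestUsize` (not pulled in here), and the `total_memory` helper is an illustrative addition, not part of the crate.

```rust
// Simplified stand-ins: the real crate uses vm_memory::GuestAddress/GuestUsize.
type GuestAddress = u64;
type GuestUsize = u64;

#[derive(Debug, Default, Clone, Copy, PartialEq, Eq)]
struct NumaNodeInfo {
    base: GuestAddress,
    size: GuestUsize,
}

#[derive(Debug, Default)]
struct NumaNode {
    region_infos: Vec<NumaNodeInfo>,
    vcpu_ids: Vec<u32>,
}

impl NumaNode {
    fn new() -> Self {
        Default::default()
    }

    // Record one more memory region belonging to this node.
    fn add_info(&mut self, info: &NumaNodeInfo) {
        self.region_infos.push(*info);
    }

    // Record a group of vCPU ids belonging to this node.
    fn add_vcpu_ids(&mut self, vcpu_ids: &[u32]) {
        self.vcpu_ids.extend(vcpu_ids);
    }

    // Hypothetical helper: total guest memory attached to this node.
    fn total_memory(&self) -> GuestUsize {
        self.region_infos.iter().map(|r| r.size).sum()
    }
}

fn main() {
    let mut node = NumaNode::new();
    node.add_info(&NumaNodeInfo { base: 0, size: 1024 });
    node.add_info(&NumaNodeInfo { base: 0x8000, size: 4096 });
    node.add_vcpu_ids(&[0, 1, 2, 3]);
    assert_eq!(node.total_memory(), 5120);
    assert_eq!(node.vcpu_ids, vec![0, 1, 2, 3]);
    println!("regions: {}, vcpus: {}", node.region_infos.len(), node.vcpu_ids.len());
    // prints "regions: 2, vcpus: 4"
}
```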
564
src/dragonball/src/dbs_address_space/src/region.rs
Normal file
@@ -0,0 +1,564 @@
// Copyright (C) 2021 Alibaba Cloud. All rights reserved.
// SPDX-License-Identifier: Apache-2.0

use std::ffi::CString;
use std::fs::{File, OpenOptions};
use std::os::unix::io::FromRawFd;
use std::path::Path;
use std::str::FromStr;

use nix::sys::memfd;
use vm_memory::{Address, FileOffset, GuestAddress, GuestUsize};

use crate::memory::MemorySourceType;
use crate::memory::MemorySourceType::MemFdShared;
use crate::AddressSpaceError;

/// Type of address space regions.
///
/// On physical machines, physical memory may have different properties, such as
/// volatile vs non-volatile, read-only vs read-write, non-executable vs executable etc.
/// On virtual machines, the concept of memory property may be extended to support better
/// cooperation between the hypervisor and the guest kernel. Here the address space region type
/// means what the region will be used for by the guest OS, and different permissions and
/// policies may be applied to different address space regions.
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub enum AddressSpaceRegionType {
    /// Normal memory accessible by CPUs and IO devices.
    DefaultMemory,
    /// MMIO address region for devices.
    DeviceMemory,
    /// DAX address region for virtio-fs/virtio-pmem.
    DAXMemory,
}

/// Struct to maintain configuration information about a guest address region.
#[derive(Debug, Clone)]
pub struct AddressSpaceRegion {
    /// Type of the address space region.
    pub ty: AddressSpaceRegionType,
    /// Base address of the region in the virtual machine's physical address space.
    pub base: GuestAddress,
    /// Size of the address space region.
    pub size: GuestUsize,
    /// Host NUMA node id assigned to this region.
    pub host_numa_node_id: Option<u32>,

    /// File/offset tuple to back the memory allocation.
    file_offset: Option<FileOffset>,
    /// Mmap permission flags.
    perm_flags: i32,
    /// Mmap protection flags.
    prot_flags: i32,
    /// Hugepage madvise hint.
    ///
    /// It needs the 'advise' or 'always' policy in the host shmem config.
    is_hugepage: bool,
    /// Hotplug hint.
    is_hotplug: bool,
    /// Anonymous memory hint.
    ///
    /// It should be true for regions with the MADV_DONTFORK flag enabled.
    is_anon: bool,
}

#[allow(clippy::too_many_arguments)]
impl AddressSpaceRegion {
    /// Create an address space region with default configuration.
    pub fn new(ty: AddressSpaceRegionType, base: GuestAddress, size: GuestUsize) -> Self {
        AddressSpaceRegion {
            ty,
            base,
            size,
            host_numa_node_id: None,
            file_offset: None,
            perm_flags: libc::MAP_SHARED,
            prot_flags: libc::PROT_READ | libc::PROT_WRITE,
            is_hugepage: false,
            is_hotplug: false,
            is_anon: false,
        }
    }

    /// Create an address space region with all configurable information.
    ///
    /// # Arguments
    /// * `ty` - Type of the address region
    /// * `base` - Base address in VM to map content
    /// * `size` - Length of content to map
    /// * `host_numa_node_id` - Optional NUMA node id to allocate memory from
    /// * `file_offset` - Optional file descriptor and offset to map content from
    /// * `perm_flags` - mmap permission flags
    /// * `prot_flags` - mmap protection flags
    /// * `is_hotplug` - Whether it's a region for hotplug
    pub fn build(
        ty: AddressSpaceRegionType,
        base: GuestAddress,
        size: GuestUsize,
        host_numa_node_id: Option<u32>,
        file_offset: Option<FileOffset>,
        perm_flags: i32,
        prot_flags: i32,
        is_hotplug: bool,
    ) -> Self {
        let mut region = Self::new(ty, base, size);

        region.set_host_numa_node_id(host_numa_node_id);
        region.set_file_offset(file_offset);
        region.set_perm_flags(perm_flags);
        region.set_prot_flags(prot_flags);
        if is_hotplug {
            region.set_hotplug();
        }

        region
    }

    /// Create an address space region to map memory into the virtual machine.
    ///
    /// # Arguments
    /// * `base` - Base address in VM to map content
    /// * `size` - Length of content to map
    /// * `numa_node_id` - Optional NUMA node id to allocate memory from
    /// * `mem_type` - Source of the memory mapping, such as 'shmem' or 'hugetlbfs'
    /// * `mem_file_path` - Memory file path
    /// * `mem_prealloc` - Whether to enable pre-allocation of guest memory
    /// * `is_hotplug` - Whether it's a region for hotplug
    pub fn create_default_memory_region(
        base: GuestAddress,
        size: GuestUsize,
        numa_node_id: Option<u32>,
        mem_type: &str,
        mem_file_path: &str,
        mem_prealloc: bool,
        is_hotplug: bool,
    ) -> Result<AddressSpaceRegion, AddressSpaceError> {
        Self::create_memory_region(
            base,
            size,
            numa_node_id,
            mem_type,
            mem_file_path,
            mem_prealloc,
            libc::PROT_READ | libc::PROT_WRITE,
            is_hotplug,
        )
    }

    /// Create an address space region to map memory from memfd/hugetlbfs into the virtual machine.
    ///
    /// # Arguments
    /// * `base` - Base address in VM to map content
    /// * `size` - Length of content to map
    /// * `numa_node_id` - Optional NUMA node id to allocate memory from
    /// * `mem_type` - Source of the memory mapping, such as 'shmem' or 'hugetlbfs'
    /// * `mem_file_path` - Memory file path
    /// * `mem_prealloc` - Whether to enable pre-allocation of guest memory
    /// * `prot_flags` - mmap protection flags
    /// * `is_hotplug` - Whether it's a region for hotplug
    pub fn create_memory_region(
        base: GuestAddress,
        size: GuestUsize,
        numa_node_id: Option<u32>,
        mem_type: &str,
        mem_file_path: &str,
        mem_prealloc: bool,
        prot_flags: i32,
        is_hotplug: bool,
    ) -> Result<AddressSpaceRegion, AddressSpaceError> {
        let perm_flags = if mem_prealloc {
            libc::MAP_SHARED | libc::MAP_POPULATE
        } else {
            libc::MAP_SHARED
        };
        let source_type = MemorySourceType::from_str(mem_type)
            .map_err(|_e| AddressSpaceError::InvalidMemorySourceType(mem_type.to_string()))?;
        let mut reg = match source_type {
            MemorySourceType::MemFdShared | MemorySourceType::MemFdOnHugeTlbFs => {
                let fn_str = if source_type == MemFdShared {
                    CString::new("shmem").expect("CString::new('shmem') failed")
                } else {
                    CString::new("hugeshmem").expect("CString::new('hugeshmem') failed")
                };
                let filename = fn_str.as_c_str();
                let fd = memfd::memfd_create(filename, memfd::MemFdCreateFlag::empty())
                    .map_err(AddressSpaceError::CreateMemFd)?;
                // Safe because we have just created the fd.
                let file: File = unsafe { File::from_raw_fd(fd) };
                file.set_len(size).map_err(AddressSpaceError::SetFileSize)?;
                Self::build(
                    AddressSpaceRegionType::DefaultMemory,
                    base,
                    size,
                    numa_node_id,
                    Some(FileOffset::new(file, 0)),
                    perm_flags,
                    prot_flags,
                    is_hotplug,
                )
            }
            MemorySourceType::MmapAnonymous | MemorySourceType::MmapAnonymousHugeTlbFs => {
                let mut perm_flags = libc::MAP_PRIVATE | libc::MAP_ANONYMOUS;
                if mem_prealloc {
                    perm_flags |= libc::MAP_POPULATE
                }
                Self::build(
                    AddressSpaceRegionType::DefaultMemory,
                    base,
                    size,
                    numa_node_id,
                    None,
                    perm_flags,
                    prot_flags,
                    is_hotplug,
                )
            }
            MemorySourceType::FileOnHugeTlbFs => {
                let path = Path::new(mem_file_path);
                if let Some(parent_dir) = path.parent() {
                    // Ensure that the parent directory of the mem file path exists.
                    std::fs::create_dir_all(parent_dir).map_err(AddressSpaceError::CreateDir)?;
                }
                let file = OpenOptions::new()
                    .read(true)
                    .write(true)
                    .create(true)
                    .open(mem_file_path)
                    .map_err(AddressSpaceError::OpenFile)?;
                nix::unistd::unlink(mem_file_path).map_err(AddressSpaceError::UnlinkFile)?;
                file.set_len(size).map_err(AddressSpaceError::SetFileSize)?;
                let file_offset = FileOffset::new(file, 0);
                Self::build(
                    AddressSpaceRegionType::DefaultMemory,
                    base,
                    size,
                    numa_node_id,
                    Some(file_offset),
                    perm_flags,
                    prot_flags,
                    is_hotplug,
                )
            }
        };

        if source_type.is_hugepage() {
            reg.set_hugepage();
        }
        if source_type.is_mmap_anonymous() {
            reg.set_anonpage();
        }

        Ok(reg)
    }

    /// Create an address region for device MMIO.
    ///
    /// # Arguments
    /// * `base` - Base address in VM to map content
    /// * `size` - Length of content to map
    pub fn create_device_region(
        base: GuestAddress,
        size: GuestUsize,
    ) -> Result<AddressSpaceRegion, AddressSpaceError> {
        Ok(Self::build(
            AddressSpaceRegionType::DeviceMemory,
            base,
            size,
            None,
            None,
            0,
            0,
            false,
        ))
    }

    /// Get the type of the address space region.
    pub fn region_type(&self) -> AddressSpaceRegionType {
        self.ty
    }

    /// Get the size of the region.
    pub fn len(&self) -> GuestUsize {
        self.size
    }

    /// Get the inclusive start physical address of the region.
    pub fn start_addr(&self) -> GuestAddress {
        self.base
    }

    /// Get the inclusive end physical address of the region.
    pub fn last_addr(&self) -> GuestAddress {
        debug_assert!(self.size > 0 && self.base.checked_add(self.size).is_some());
        GuestAddress(self.base.raw_value() + self.size - 1)
    }

    /// Get the mmap permission flags of the address space region.
    pub fn perm_flags(&self) -> i32 {
        self.perm_flags
    }

    /// Set the mmap permission flags for the address space region.
    pub fn set_perm_flags(&mut self, perm_flags: i32) {
        self.perm_flags = perm_flags;
    }

    /// Get the mmap protection flags of the address space region.
    pub fn prot_flags(&self) -> i32 {
        self.prot_flags
    }

    /// Set the mmap protection flags for the address space region.
    pub fn set_prot_flags(&mut self, prot_flags: i32) {
        self.prot_flags = prot_flags;
    }

    /// Get the host NUMA node id of the region.
    pub fn host_numa_node_id(&self) -> Option<u32> {
        self.host_numa_node_id
    }

    /// Set the associated NUMA node id to allocate memory from for this region.
    pub fn set_host_numa_node_id(&mut self, host_numa_node_id: Option<u32>) {
        self.host_numa_node_id = host_numa_node_id;
    }

    /// Check whether the address space region is backed by a memory file.
    pub fn has_file(&self) -> bool {
        self.file_offset.is_some()
    }

    /// Get the optional file associated with the region.
    pub fn file_offset(&self) -> Option<&FileOffset> {
        self.file_offset.as_ref()
    }

    /// Set the associated file/offset pair for the region.
    pub fn set_file_offset(&mut self, file_offset: Option<FileOffset>) {
        self.file_offset = file_offset;
    }

    /// Set the hotplug hint.
    pub fn set_hotplug(&mut self) {
        self.is_hotplug = true
    }

    /// Get the hotplug hint.
    pub fn is_hotplug(&self) -> bool {
        self.is_hotplug
    }

    /// Set the hugepage hint for `madvise()`; it only takes effect when the memory type is `shmem`.
    pub fn set_hugepage(&mut self) {
        self.is_hugepage = true
    }

    /// Get the hugepage hint.
    pub fn is_hugepage(&self) -> bool {
        self.is_hugepage
    }

    /// Set the anonymous memory hint.
    pub fn set_anonpage(&mut self) {
        self.is_anon = true
    }

    /// Get the anonymous memory hint.
    pub fn is_anonpage(&self) -> bool {
        self.is_anon
    }

    /// Check whether the address space region is valid.
    pub fn is_valid(&self) -> bool {
        self.size > 0 && self.base.checked_add(self.size).is_some()
    }

    /// Check whether the address space region intersects with another one.
    pub fn intersect_with(&self, other: &AddressSpaceRegion) -> bool {
        // Treat an invalid address region as always intersecting.
        let end1 = match self.base.checked_add(self.size) {
            Some(addr) => addr,
            None => return true,
        };
        let end2 = match other.base.checked_add(other.size) {
            Some(addr) => addr,
            None => return true,
        };

        !(end1 <= other.base || self.base >= end2)
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use std::io::Write;
    use vmm_sys_util::tempfile::TempFile;

    #[test]
    fn test_address_space_region_valid() {
        let reg1 = AddressSpaceRegion::new(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0xFFFFFFFFFFFFF000),
            0x2000,
        );
        assert!(!reg1.is_valid());
        let reg1 = AddressSpaceRegion::new(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0xFFFFFFFFFFFFF000),
            0x1000,
        );
        assert!(!reg1.is_valid());
        let reg1 = AddressSpaceRegion::new(
            AddressSpaceRegionType::DeviceMemory,
            GuestAddress(0xFFFFFFFFFFFFE000),
            0x1000,
        );
        assert!(reg1.is_valid());
        assert_eq!(reg1.start_addr(), GuestAddress(0xFFFFFFFFFFFFE000));
        assert_eq!(reg1.len(), 0x1000);
        assert!(!reg1.has_file());
        assert!(reg1.file_offset().is_none());
        assert_eq!(reg1.perm_flags(), libc::MAP_SHARED);
        assert_eq!(reg1.prot_flags(), libc::PROT_READ | libc::PROT_WRITE);
        assert_eq!(reg1.region_type(), AddressSpaceRegionType::DeviceMemory);

        let tmp_file = TempFile::new().unwrap();
        let mut f = tmp_file.into_file();
        let sample_buf = &[1, 2, 3, 4, 5];
        assert!(f.write_all(sample_buf).is_ok());
        let reg2 = AddressSpaceRegion::build(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x1000),
            0x1000,
            None,
            Some(FileOffset::new(f, 0x0)),
            0x5a,
            0x5a,
            false,
        );
        assert_eq!(reg2.region_type(), AddressSpaceRegionType::DefaultMemory);
        assert!(reg2.is_valid());
        assert_eq!(reg2.start_addr(), GuestAddress(0x1000));
        assert_eq!(reg2.len(), 0x1000);
        assert!(reg2.has_file());
        assert!(reg2.file_offset().is_some());
        assert_eq!(reg2.perm_flags(), 0x5a);
        assert_eq!(reg2.prot_flags(), 0x5a);
    }

    #[test]
    fn test_address_space_region_intersect() {
        let reg1 = AddressSpaceRegion::new(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x1000),
            0x1000,
        );
        let reg2 = AddressSpaceRegion::new(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x2000),
            0x1000,
        );
        let reg3 = AddressSpaceRegion::new(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x1000),
            0x1001,
        );
        let reg4 = AddressSpaceRegion::new(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0x1100),
            0x100,
        );
        let reg5 = AddressSpaceRegion::new(
            AddressSpaceRegionType::DefaultMemory,
            GuestAddress(0xFFFFFFFFFFFFF000),
            0x2000,
        );

        assert!(!reg1.intersect_with(&reg2));
        assert!(!reg2.intersect_with(&reg1));

        // Intersect with self.
        assert!(reg1.intersect_with(&reg1));

        // Intersect with others.
        assert!(reg3.intersect_with(&reg2));
        assert!(reg2.intersect_with(&reg3));
        assert!(reg1.intersect_with(&reg4));
        assert!(reg4.intersect_with(&reg1));
        assert!(reg1.intersect_with(&reg5));
        assert!(reg5.intersect_with(&reg1));
    }

    #[test]
    fn test_create_device_region() {
        let reg = AddressSpaceRegion::create_device_region(GuestAddress(0x10000), 0x1000).unwrap();
        assert_eq!(reg.region_type(), AddressSpaceRegionType::DeviceMemory);
        assert_eq!(reg.start_addr(), GuestAddress(0x10000));
        assert_eq!(reg.len(), 0x1000);
    }

    #[test]
    fn test_create_default_memory_region() {
        AddressSpaceRegion::create_default_memory_region(
            GuestAddress(0x100000),
            0x100000,
            None,
            "invalid",
            "invalid",
            false,
            false,
        )
        .unwrap_err();

        let reg = AddressSpaceRegion::create_default_memory_region(
            GuestAddress(0x100000),
            0x100000,
            None,
            "shmem",
            "",
            false,
            false,
        )
        .unwrap();
        assert_eq!(reg.region_type(), AddressSpaceRegionType::DefaultMemory);
        assert_eq!(reg.start_addr(), GuestAddress(0x100000));
        assert_eq!(reg.last_addr(), GuestAddress(0x1fffff));
        assert_eq!(reg.len(), 0x100000);
        assert!(reg.file_offset().is_some());

        let reg = AddressSpaceRegion::create_default_memory_region(
            GuestAddress(0x100000),
            0x100000,
            None,
            "hugeshmem",
            "",
            true,
            false,
        )
        .unwrap();
        assert_eq!(reg.region_type(), AddressSpaceRegionType::DefaultMemory);
        assert_eq!(reg.start_addr(), GuestAddress(0x100000));
        assert_eq!(reg.last_addr(), GuestAddress(0x1fffff));
        assert_eq!(reg.len(), 0x100000);
        assert!(reg.file_offset().is_some());

        let reg = AddressSpaceRegion::create_default_memory_region(
            GuestAddress(0x100000),
            0x100000,
            None,
            "mmap",
            "",
            true,
            false,
        )
        .unwrap();
        assert_eq!(reg.region_type(), AddressSpaceRegionType::DefaultMemory);
        assert_eq!(reg.start_addr(), GuestAddress(0x100000));
        assert_eq!(reg.last_addr(), GuestAddress(0x1fffff));
        assert_eq!(reg.len(), 0x100000);
        assert!(reg.file_offset().is_none());

        // TODO: test hugetlbfs
    }
}
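The overlap test used by `AddressSpaceRegion::intersect_with` works on half-open `[base, base + size)` intervals and conservatively treats any region whose end overflows as intersecting everything. A standalone sketch of that predicate on plain `u64` bounds (assumed types; the crate itself operates on `GuestAddress`):

```rust
// Overlap test for [base, base + size) intervals, mirroring the logic of
// AddressSpaceRegion::intersect_with: regions whose end computation
// overflows are treated as intersecting everything.
fn intersect(base1: u64, size1: u64, base2: u64, size2: u64) -> bool {
    let end1 = match base1.checked_add(size1) {
        Some(e) => e,
        None => return true,
    };
    let end2 = match base2.checked_add(size2) {
        Some(e) => e,
        None => return true,
    };
    // Two half-open intervals overlap unless one ends before the other starts.
    !(end1 <= base2 || base1 >= end2)
}

fn main() {
    // Disjoint, adjacent regions do not intersect.
    assert!(!intersect(0x1000, 0x1000, 0x2000, 0x1000));
    // A one-byte overhang does intersect.
    assert!(intersect(0x1000, 0x1001, 0x2000, 0x1000));
    // A fully contained region intersects.
    assert!(intersect(0x1000, 0x1000, 0x1100, 0x100));
    // An overflowing end is treated as intersecting.
    assert!(intersect(0xFFFFFFFFFFFFF000, 0x2000, 0x1000, 0x1000));
    println!("ok");
    // prints "ok"
}
```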
14
src/dragonball/src/dbs_allocator/Cargo.toml
Normal file
@@ -0,0 +1,14 @@
[package]
name = "dbs-allocator"
version = "0.1.1"
authors = ["Liu Jiang <gerry@linux.alibaba.com>"]
description = "a resource allocator for virtual machine manager"
license = "Apache-2.0"
edition = "2018"
homepage = "https://github.com/openanolis/dragonball-sandbox"
repository = "https://github.com/openanolis/dragonball-sandbox"
keywords = ["dragonball"]
readme = "README.md"

[dependencies]
thiserror = "1.0"
1
src/dragonball/src/dbs_allocator/LICENSE
Symbolic link
@@ -0,0 +1 @@
../../LICENSE
106
src/dragonball/src/dbs_allocator/README.md
Normal file
@@ -0,0 +1,106 @@
# dbs-allocator

## Design

The resource manager in the `Dragonball Sandbox` needs to manage and allocate different kinds of resources for the
sandbox (virtual machine), such as memory-mapped I/O address space, port I/O address space, legacy IRQ numbers,
MSI/MSI-X vectors, device instance ids, etc. The `dbs-allocator` crate is designed to help the resource manager
track and allocate these types of resources.

The main components are:
- `Constraint`: struct to declare constraints for resource allocation.
```rust
#[derive(Copy, Clone, Debug)]
pub struct Constraint {
    /// Size of resource to allocate.
    pub size: u64,
    /// Lower boundary for resource allocation.
    pub min: u64,
    /// Upper boundary for resource allocation.
    pub max: u64,
    /// Alignment for allocated resource.
    pub align: u64,
    /// Policy for resource allocation.
    pub policy: AllocPolicy,
}
```
- `IntervalTree`: an interval tree implementation specialized for VMM resource management.
```rust
pub struct IntervalTree<T> {
    pub(crate) root: Option<Node<T>>,
}

pub fn allocate(&mut self, constraint: &Constraint) -> Option<Range>
pub fn free(&mut self, key: &Range) -> Option<T>
pub fn insert(&mut self, key: Range, data: Option<T>) -> Self
pub fn update(&mut self, key: &Range, data: T) -> Option<T>
pub fn delete(&mut self, key: &Range) -> Option<T>
pub fn get(&self, key: &Range) -> Option<NodeState<&T>>
```

## Usage
The concept of an interval tree may seem complicated, but using dbs-allocator to allocate and release resources is simple and straightforward.
You can follow these steps to allocate your VMM resources.
```rust
// 1. To start with, create an interval tree for a specific kind of resource and insert the maximum
//    address/id range as the root node. The range here could be an address range, an id range, etc.
let mut resources_pool = IntervalTree::new();
resources_pool.insert(Range::new(MIN_RANGE, MAX_RANGE), None);

// 2. Next, create a constraint with the size of your resource; you can also set the maximum,
//    minimum and alignment for the constraint. Then use the constraint to allocate the resource
//    from the range decided previously. The interval tree will return an appropriate range.
let mut constraint = Constraint::new(SIZE);
let mut resources_range = self.resources_pool.allocate(&constraint);

// 3. Then the resource range can be passed to other crates, such as vm-pci / vm-device, to create
//    and maintain the device.
let mut device = Device::create(resources_range, ..)
```

## Example
We will show examples of allocating an unused PCI device ID from the PCI device ID pool and allocating a memory address range using dbs-allocator.
```rust
use dbs_allocator::{Constraint, IntervalTree, Range};

// Init a dbs-allocator IntervalTree.
let mut pci_device_pool = IntervalTree::new();

// Init the PCI device id pool with the range 0 to 255.
pci_device_pool.insert(Range::new(0x0u8, 0xffu8), None);

// Construct a constraint with size 1 and alignment 1 to ask for an ID.
let mut constraint = Constraint::new(1u64).align(1u64);

// Get an ID from the pci_device_pool.
let mut id = pci_device_pool.allocate(&constraint).map(|e| e.min as u8);

// Pass the ID generated by dbs-allocator to vm-pci specific functions to create PCI devices.
let mut pci_device = PciDevice::new(id as u8, ..);
```

```rust
use dbs_allocator::{Constraint, IntervalTree, Range};

// Init a dbs-allocator IntervalTree.
let mut mem_pool = IntervalTree::new();

// Init the memory address range from GUEST_MEM_START to GUEST_MEM_END.
mem_pool.insert(Range::new(GUEST_MEM_START, GUEST_MEM_END), None);

// Construct a constraint with the size, maximum address and minimum address of the memory region
// to ask for a memory allocation range.
let constraint = Constraint::new(region.len())
    .min(region.start_addr().raw_value())
    .max(region.last_addr().raw_value());

// Get the memory allocation range from the mem_pool.
let mem_range = mem_pool.allocate(&constraint).unwrap();

// Update the mem_range in the IntervalTree with the memory region info.
mem_pool.update(&mem_range, region);

// After allocation, we can use the memory range to do mapping and other memory related work.
...
```

## License

This project is licensed under [Apache License](http://www.apache.org/licenses/LICENSE-2.0), Version 2.0.
1297
src/dragonball/src/dbs_allocator/src/interval_tree.rs
Normal file
File diff suppressed because it is too large
164
src/dragonball/src/dbs_allocator/src/lib.rs
Normal file
@@ -0,0 +1,164 @@
|
||||
```rust
// Copyright (C) 2019, 2022 Alibaba Cloud. All rights reserved.
// SPDX-License-Identifier: Apache-2.0

//! Data structures and algorithms to support resource allocation and management.
//!
//! The `dbs-allocator` crate provides data structures and algorithms to manage and allocate
//! integer identifiable resources. The resource manager in a virtual machine monitor (VMM) may
//! manage and allocate resources for virtual machines by using:
//! - [Constraint]: Struct to declare constraints for resource allocation.
//! - [IntervalTree]: An interval tree implementation specialized for VMM resource management.

#![deny(missing_docs)]

pub mod interval_tree;
pub use interval_tree::{IntervalTree, NodeState, Range};

/// Error codes for resource allocation operations.
#[derive(thiserror::Error, Debug, Eq, PartialEq)]
pub enum Error {
    /// Invalid boundary for resource allocation.
    #[error("invalid boundary constraint: min ({0}), max ({1})")]
    InvalidBoundary(u64, u64),
}

/// Specialized version of [`std::result::Result`] for resource allocation operations.
pub type Result<T> = std::result::Result<T, Error>;

/// Resource allocation policies.
#[derive(Copy, Clone, Debug, Eq, PartialEq)]
pub enum AllocPolicy {
    /// Default resource allocation policy.
    Default,
    /// Return the first available resource matching the allocation constraints.
    FirstMatch,
}

/// Struct to declare resource allocation constraints.
#[derive(Copy, Clone, Debug)]
pub struct Constraint {
    /// Size of resource to allocate.
    pub size: u64,
    /// Lower boundary for resource allocation.
    pub min: u64,
    /// Upper boundary for resource allocation.
    pub max: u64,
    /// Alignment for allocated resource.
    pub align: u64,
    /// Policy for resource allocation.
    pub policy: AllocPolicy,
}

impl Constraint {
    /// Create a new instance of [`Constraint`] with default settings.
    pub fn new<T>(size: T) -> Self
    where
        u64: From<T>,
    {
        Constraint {
            size: u64::from(size),
            min: 0,
            max: u64::MAX,
            align: 1,
            policy: AllocPolicy::Default,
        }
    }

    /// Set the lower boundary constraint for resource allocation.
    pub fn min<T>(mut self, min: T) -> Self
    where
        u64: From<T>,
    {
        self.min = u64::from(min);
        self
    }

    /// Set the upper boundary constraint for resource allocation.
    pub fn max<T>(mut self, max: T) -> Self
    where
        u64: From<T>,
    {
        self.max = u64::from(max);
        self
    }

    /// Set the alignment constraint for allocated resource.
    pub fn align<T>(mut self, align: T) -> Self
    where
        u64: From<T>,
    {
        self.align = u64::from(align);
        self
    }

    /// Set the resource allocation policy.
    pub fn policy(mut self, policy: AllocPolicy) -> Self {
        self.policy = policy;
        self
    }

    /// Validate the resource allocation constraints.
    pub fn validate(&self) -> Result<()> {
        if self.max < self.min {
            return Err(Error::InvalidBoundary(self.min, self.max));
        }
        Ok(())
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_set_min() {
        let constraint = Constraint::new(2_u64).min(1_u64);
        assert_eq!(constraint.min, 1_u64);
    }

    #[test]
    fn test_set_max() {
        let constraint = Constraint::new(2_u64).max(100_u64);
        assert_eq!(constraint.max, 100_u64);
    }

    #[test]
    fn test_set_align() {
        let constraint = Constraint::new(2_u64).align(8_u64);
        assert_eq!(constraint.align, 8_u64);
    }

    #[test]
    fn test_set_policy() {
        let mut constraint = Constraint::new(2_u64).policy(AllocPolicy::FirstMatch);
        assert_eq!(constraint.policy, AllocPolicy::FirstMatch);
        constraint = constraint.policy(AllocPolicy::Default);
        assert_eq!(constraint.policy, AllocPolicy::Default);
    }

    #[test]
    fn test_consistently_change_constraint() {
        let constraint = Constraint::new(2_u64)
            .min(1_u64)
            .max(100_u64)
            .align(8_u64)
            .policy(AllocPolicy::FirstMatch);
        assert_eq!(constraint.min, 1_u64);
        assert_eq!(constraint.max, 100_u64);
        assert_eq!(constraint.align, 8_u64);
        assert_eq!(constraint.policy, AllocPolicy::FirstMatch);
    }

    #[test]
    fn test_set_invalid_boundary() {
        // Normal case.
        let constraint = Constraint::new(2_u64).max(1000_u64).min(999_u64);
        assert!(constraint.validate().is_ok());

        // Error case.
        let constraint = Constraint::new(2_u64).max(999_u64).min(1000_u64);
        assert_eq!(
            constraint.validate(),
            Err(Error::InvalidBoundary(1000_u64, 999_u64))
        );
    }
}
```
src/dragonball/src/dbs_arch/Cargo.toml (new file, 26 lines):
```toml
[package]
name = "dbs-arch"
version = "0.2.3"
authors = ["Alibaba Dragonball Team"]
license = "Apache-2.0 AND BSD-3-Clause"
edition = "2018"
description = "A collection of CPU architecture specific constants and utilities."
homepage = "https://github.com/openanolis/dragonball-sandbox"
repository = "https://github.com/openanolis/dragonball-sandbox"
keywords = ["dragonball", "secure-sandbox", "arch", "ARM64", "x86"]
readme = "README.md"

[dependencies]
memoffset = "0.6"
kvm-bindings = { version = "0.6.0", features = ["fam-wrappers"] }
kvm-ioctls = "0.12.0"
thiserror = "1"
vm-memory = { version = "0.9" }
vmm-sys-util = "0.11.0"
libc = ">=0.2.39"

[dev-dependencies]
vm-memory = { version = "0.9", features = ["backend-mmap"] }

[package.metadata.docs.rs]
all-features = true
```
src/dragonball/src/dbs_arch/LICENSE (new symbolic link to `../../LICENSE`)
src/dragonball/src/dbs_arch/README.md (new file, 29 lines):
# dbs-arch

## Design

The `dbs-arch` crate is a collection of CPU architecture specific constants and utilities to hide CPU architecture details away from the Dragonball Sandbox or other VMMs.
It also provides x86_64 CPUID support; for more details, see [this document](docs/x86_64_cpuid.md).

## Supported Architectures

- AMD64 (x86_64)
- ARM64 (aarch64)

## Submodule List

This repository contains the following submodules:

| Name | Arch | Description |
| --- | --- | --- |
| [x86_64::cpuid](src/x86_64/cpuid/) | x86_64 | Facilities to process CPUID information. |
| [x86_64::msr](src/x86_64/msr.rs) | x86_64 | Constants and functions for Model Specific Registers. |
| [aarch64::gic](src/aarch64/gic) | aarch64 | Structures to manage GICv2/GICv3/ITS devices for ARM64. |
| [aarch64::regs](src/aarch64/regs.rs) | aarch64 | Constants and functions to configure and manage CPU registers. |

## Acknowledgement

Part of the code is derived from the [Firecracker](https://github.com/firecracker-microvm/firecracker) project.

## License

This project is licensed under [Apache License](http://www.apache.org/licenses/LICENSE-2.0), Version 2.0.
src/dragonball/src/dbs_arch/THIRD-PARTY (new symbolic link to `../../THIRD-PARTY`)
Some files were not shown because too many files have changed in this diff.