Compare commits


124 Commits

Author SHA1 Message Date
Fabiano Fidêncio
1af292c9e6 Merge pull request #3585 from snir911/2.3.2-branch-bump
# Kata Containers 2.3.2
2022-02-03 08:05:22 +01:00
Snir Sheriber
67947b5f05 release: Kata Containers 2.3.2
- stable-2.3 | workflows: Use base instead of head ref for kata-deploy-test
- stable-2.3-backports
- [backport from main] agent: fix the issue of missing create a new session for container
- stable-2.3 - kata-deploy: validate conf file can be created
- stable-2.3 | kata-monitor: increase delay before syncing with the container manager
- stable-2.3 | versions: Upgrade to Cloud Hypervisor v21.0
- stable-2.3: backport lint fixes from main
- stable-2.3 | runtime: -Wl,--s390-pgste for s390x
- stable-2.3 | kata-manager: Retrieve static tarball
- stable-2.3 | ci: Pass function arguments in static-checks.sh

977f1f5b workflows: Use base instead of head ref for kata-deploy-test
99ed596a workflows: Fix typo in kata-deploy-push action
13b7d93b workflows: Ensure a label change re-triggers the actions
b8463224 workflows: Ensure force-skip-ci skips all actions
8c8571f4 workflows: Use the correct branch ref on test kata-deploy
620bb97e runtime: Provide protection for shared data
770d4acf tools: Fix groupname if it differs from username
cedb01d2 runtime: close span before return from function in case of error
a661e538 agent: fix the issue of missing create a new session for container
bed0f3c8 kata-deploy: validate conf file can be created
786c667e kata-monitor: increase delay before syncing with the container manager
e3b00f39 runtime: -Wl,--s390-pgste for s390x
3260adc4 virtcontainers: clh: Re-generate the client code
cc64461f versions: Upgrade to Cloud Hypervisor v21.0
f2c6cd08 ci: Pass function arguments in static-checks.sh
78afa10a agent: resolve unused variables in tests
a8298676 agent: remove unused field in mount handling
87f9a690 agent: drop unused fields from network
fc012a2b agent: clear cargo test warnings
63c5a8aa uevent: Fix clippy issue in test code
d1530afa kata-manager: Retrieve static tarball

Signed-off-by: Snir Sheriber <ssheribe@redhat.com>
2022-02-01 20:02:37 +02:00
Fabiano Fidêncio
f2cbfad8b0 Merge pull request #3580 from fidencio/wip/stable-2.3-fix-kata-deploy-ref-branch
stable-2.3 | workflows: Use base instead of head ref for kata-deploy-test
2022-02-01 18:23:33 +01:00
Fabiano Fidêncio
977f1f5bb6 workflows: Use base instead of head ref for kata-deploy-test
Although I've done tests on my own fork using `head_ref`, and those
worked, it seems they only worked because the PR was coming from exactly
the same repository as the target one.

Let's switch to `base_ref` instead, which is guaranteed to be part of
our repo.

The downside of this is that we run the test with the last merged PR,
rather than with the "to-be-approved" PR, but that's a limitation we've
always had.

Fixes: #3482

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
(cherry picked from commit 3924470c8f)
2022-02-01 17:05:31 +01:00
Fabiano Fidêncio
e9aaefb135 Merge pull request #3575 from snir911/stable-2.3-backports232
stable-2.3-backports
2022-01-31 20:19:11 +01:00
Fabiano Fidêncio
99ed596ae4 workflows: Fix typo in kata-deploy-push action
A `:` was missed when d87ab14fa7 was
introduced.

Fixes: #3485

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-01-31 12:45:05 +02:00
Fabiano Fidêncio
13b7d93b4f workflows: Ensure a label change re-triggers the actions
This is needed to ensure that if, for instance, the `force-skip-ci`
label is added or removed later, the jobs related to the actions
are re-triggered and checked accordingly.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-01-31 12:44:37 +02:00
Fabiano Fidêncio
b8463224c8 workflows: Ensure force-skip-ci skips all actions
Before this change it was only applied to the static checks, but if
we're already taking the extreme path of skipping the CI, we had better
ensure we skip all the actions, not just a few of them.

Fixes: #3471

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-01-31 12:44:33 +02:00
Fabiano Fidêncio
8c8571f4ba workflows: Use the correct branch ref on test kata-deploy
The action used for testing kata-deploy is entirely based on the action
used to build the kata-deploy tarball, but while the latter is able to
use the correct branch, the former always uses `main`.

This happens because the `issue_comment` event from GitHub Actions
passes the "default branch" as the GITHUB_REF.

As we're not the first ones to face such an issue, I've decided to take
one of the approaches suggested in one of the checkout action's issues,
https://github.com/actions/checkout/issues/331, and take advantage of a
new action provided by the community, which gets the PR where the
comment was made and gives us its ref, which can then be used with the
checkout action, resulting in what we originally wanted.

Fixes: #3443

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-01-31 12:33:12 +02:00
liangxianlong
620bb97e3f runtime: Provide protection for shared data
k.reqHandlers should be protected by a lock whenever it is used.

Fixes #3440

Signed-off-by: liangxianlong <liang.xianlong@zte.com.cn>
2022-01-31 12:32:55 +02:00
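
A minimal Go sketch of the locking pattern this fix applies, assuming a hypothetical `agent` struct with a `reqHandlers` map (the names are illustrative, not the actual Kata runtime code):

```go
package agent

import "sync"

type reqHandler func() error

// agent mimics a struct whose reqHandlers map is shared across goroutines.
type agent struct {
	mu          sync.Mutex
	reqHandlers map[string]reqHandler
}

// installHandler mutates the shared map, so it must hold the lock.
func (a *agent) installHandler(name string, h reqHandler) {
	a.mu.Lock()
	defer a.mu.Unlock()
	if a.reqHandlers == nil {
		a.reqHandlers = make(map[string]reqHandler)
	}
	a.reqHandlers[name] = h
}

// lookupHandler reads the shared map under the same lock, so concurrent
// installs and lookups never race.
func (a *agent) lookupHandler(name string) (reqHandler, bool) {
	a.mu.Lock()
	defer a.mu.Unlock()
	h, ok := a.reqHandlers[name]
	return h, ok
}
```

Without the lock, a concurrent map read and write would trip Go's race detector and can crash the runtime.
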
Sebastian Hasler
770d4acf8b tools: Fix groupname if it differs from username
The script `tools/packaging/static-build/qemu/build-base-qemu.sh`
previously failed on systems where the user's groupname differs from the
username

Fixes: #3461

Signed-off-by: Sebastian Hasler <sebastian.hasler@stuvus.uni-stuttgart.de>
2022-01-31 12:32:47 +02:00
bin
cedb01d295 runtime: close span before return from function in case of error
Returning before closing a span causes invalid spans, so the span should
be closed before the function returns.

Fixes: #3424

Signed-off-by: bin <bin@hyper.sh>
2022-01-31 12:32:14 +02:00
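
The usual way to guarantee a span is closed on every return path in Go is `defer`. A self-contained sketch with a stand-in `Span` interface (the real runtime uses a tracing library, which this deliberately does not depend on):

```go
package trace

import "errors"

// Span is a stand-in for a tracing span.
type Span interface {
	End()
}

type noopSpan struct{}

func (noopSpan) End() {}

func startSpan(name string) Span { return noopSpan{} }

// doWork ends its span on every return path, including early error
// returns, because the deferred End() runs whenever the function exits.
func doWork(fail bool) error {
	span := startSpan("doWork")
	defer span.End()

	if fail {
		return errors.New("early return: the span is still ended")
	}
	return nil
}
```
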
Peng Tao
1ccc95fba1 Merge pull request #3563 from lifupan/stable-2.3-backport-3063
[backport from main] agent: fix the issue of missing create a new session for container
2022-01-29 14:24:17 +08:00
Fupan Li
a661e53892 agent: fix the issue of missing create a new session for container
When the container didn't have a tty console, it would be in the same
process group as the kata-agent, which wasn't expected. Thus,
create a new session for the container process.

Fixes: #3063

Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>
2022-01-28 09:44:04 +08:00
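
The agent is Rust, but the same idea can be sketched in Go with the real `syscall.SysProcAttr.Setsid` field (Linux): the child calls setsid(2) before exec, so it no longer shares the parent's session and process group.

```go
package main

import (
	"os/exec"
	"syscall"
)

// startInNewSession launches a command in a brand-new session, the
// property the agent fix restores for containers without a tty console.
func startInNewSession(path string, args ...string) (*exec.Cmd, error) {
	cmd := exec.Command(path, args...)
	// Setsid makes the kernel call setsid(2) in the child before exec,
	// detaching it from the parent's session and process group.
	cmd.SysProcAttr = &syscall.SysProcAttr{Setsid: true}
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	return cmd, nil
}

func main() {
	cmd, err := startInNewSession("/bin/sleep", "1")
	if err != nil {
		panic(err)
	}
	_ = cmd.Wait()
}
```
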
snir911
5475d7a7e9 Merge pull request #3561 from snir911/stable-2.3-backport-3433
stable-2.3 - kata-deploy: validate conf file can be created
2022-01-27 19:28:11 +02:00
Snir Sheriber
bed0f3c801 kata-deploy: validate conf file can be created
As containerd doesn't exist at cleanup

Fixes: #3429
Signed-off-by: Snir Sheriber <ssheribe@redhat.com>
2022-01-27 17:08:48 +02:00
Peng Tao
04426d65ba Merge pull request #3554 from fgiudici/stable-2.3_monitor_sync
stable-2.3 | kata-monitor: increase delay before syncing with the container manager
2022-01-27 16:49:37 +08:00
Fabiano Fidêncio
3e1955effd Merge pull request #3535 from likebreath/0121/backport_clh_v21.0
stable-2.3 | versions: Upgrade to Cloud Hypervisor v21.0
2022-01-27 08:02:15 +01:00
Eric Ernst
52dd41dacb Merge pull request #3532 from egernst/stable-backport-lints
stable-2.3: backport lint fixes from main
2022-01-25 16:37:32 -08:00
Francesco Giudici
786c667e60 kata-monitor: increase delay before syncing with the container manager
When we detect a new kata sandbox from the sbs fs, we add that to the
sandbox cache to retrieve metrics.
We also schedule a sync with the container manager, which we consider
the source of truth: if the kata pod is not yet ready the container
manager will not report it and we will drop it from our cache.
We will add it back only when we re-sync, i.e., when we get an event
from the sbs fs (which means a kata pod has been terminated or a new one
has been started).

Since we use the sync with the container manager to remove pods from the
cache, we can wait some more before syncing (and so reduce the chance to
miss a kata pod just because it was not ready yet).

Let's raise the waiting time before starting the sync timer.

Fixes: #3550

Signed-off-by: Francesco Giudici <fgiudici@redhat.com>
2022-01-25 18:18:10 +01:00
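
A sketch of the delay-then-sync mechanism using `time.AfterFunc`; the delay value and names are illustrative, not kata-monitor's actual code:

```go
package main

import (
	"fmt"
	"time"
)

// scheduleSync waits before reconciling the sandbox cache with the
// container manager, giving freshly started kata pods time to become
// ready so they are not dropped from the cache prematurely.
func scheduleSync(delay time.Duration, sync func()) *time.Timer {
	return time.AfterFunc(delay, sync)
}

func main() {
	done := make(chan struct{})
	// A longer delay (the point of this commit) lowers the chance of
	// syncing while a kata pod is still starting up.
	scheduleSync(200*time.Millisecond, func() {
		fmt.Println("syncing sandbox cache with the container manager")
		close(done)
	})
	<-done
}
```
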
Jakob Naucke
cf5a79cfe1 Merge pull request #3528 from Jakob-Naucke/backport-pgste
stable-2.3 | runtime: -Wl,--s390-pgste for s390x
2022-01-24 15:00:44 +01:00
Jakob Naucke
e3b00f398b runtime: -Wl,--s390-pgste for s390x
for linking. Required for basic KVM checks on some kernels (e.g. the
one RHEL is currently shipping), cf.
6621441db5/target/s390x/kvm/meson.build (L15-L16).

Must also be applied to netmon in backport.

Fixes: #3469
Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
Co-authored-by: Amulya Meka <amulmek1@in.ibm.com>
2022-01-24 12:36:58 +01:00
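
For a Go binary, one way to hand such a flag to the external linker is through cgo; a hedged sketch only, since the actual change may wire the flag in through the build system instead:

```go
//go:build s390x

package main

// Build note (assumption): cgo's default flag allowlist may reject this
// flag; if so, export CGO_LDFLAGS_ALLOW='-Wl,--s390-pgste' before
// building, or pass it at link time instead:
//   go build -ldflags '-extldflags "-Wl,--s390-pgste"'

// #cgo LDFLAGS: -Wl,--s390-pgste
import "C"

func main() {}
```
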
Jakob Naucke
67950aefd5 Merge pull request #3330 from Jakob-Naucke/backport-kata-manager-static
stable-2.3 | kata-manager: Retrieve static tarball
2022-01-24 12:02:02 +01:00
Jakob Naucke
bd4ab0c4d5 Merge pull request #3526 from Jakob-Naucke/static-args
stable-2.3 | ci: Pass function arguments in static-checks.sh
2022-01-24 11:39:11 +01:00
Bo Chen
3260adc4a1 virtcontainers: clh: Re-generate the client code
This patch re-generates the client code for Cloud Hypervisor v21.0.
Note: The client code of cloud-hypervisor's (CLH) OpenAPI is
automatically generated by openapi-generator [1-2].

[1] https://github.com/OpenAPITools/openapi-generator
[2] https://github.com/kata-containers/kata-containers/blob/main/src/runtime/virtcontainers/pkg/cloud-hypervisor/README.md

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit 2d799cbfa3)
2022-01-21 13:11:20 -08:00
Bo Chen
cc64461fc8 versions: Upgrade to Cloud Hypervisor v21.0
Highlights from the Cloud Hypervisor release v21.0: 1) Efficient Local
Live Migration (for Live Upgrade); 2) Recommended Kernel is Now 5.15; 3)
Bug fixes on OpenAPI yaml spec file, avoid deadlock for live-migration,
etc.

Details can be found: https://github.com/cloud-hypervisor/cloud-hypervisor/releases/tag/v21.0

Fixes: #3519

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit 7e15e99d5f)
2022-01-21 13:11:20 -08:00
Jakob Naucke
f2c6cd0808 ci: Pass function arguments in static-checks.sh
e.g. when called from the tests repo

Fixes: #3525
Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2022-01-21 14:57:16 +01:00
Eric Ernst
78afa10ab9 agent: resolve unused variables in tests
A few tests have unused or unread variables. Let's clean these up...

Fixes: #3530
Signed-off-by: Eric Ernst <eric_ernst@apple.com>
2022-01-16 16:03:14 -08:00
Eric Ernst
a829867674 agent: remove unused field in mount handling
In our parsing of mountinfo, the majority of the fields are unused.
Let's stop saving them.

Fixes: #3180

Signed-off-by: Eric Ernst <eric_ernst@apple.com>
2022-01-16 13:26:46 -08:00
Eric Ernst
87f9a69035 agent: drop unused fields from network
We don't utilize the routes or interface vectors. Let's drop them.

Signed-off-by: Eric Ernst <eric_ernst@apple.com>
2022-01-16 13:26:42 -08:00
bin
fc012a2bab agent: clear cargo test warnings
Some function parameters in the test config are not used. This
commit adds an underscore prefix to those variable names in the
test config.

Fixes: #3091

Signed-off-by: bin <bin@hyper.sh>
2022-01-16 13:19:59 -08:00
James O. D. Hunt
63c5a8aa53 uevent: Fix clippy issue in test code
Remove a bare `return` from a test function. This looks wrong but isn't
because the callers are all tests that just wait for a state change
caused by this test function.

Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
2022-01-16 12:21:14 -08:00
Peng Tao
365e358115 Merge pull request #3402 from snir911/2.3.1-branch-bump
# Kata Containers 2.3.1
2022-01-11 16:56:05 +08:00
Snir Sheriber
a2e524f356 release: Kata Containers 2.3.1
- stable-2.3 | kata-deploy: fix tar command in dockerfile
- stable-2.3 | versions: Upgrade to Cloud Hypervisor v20.2
- stable-2.3 Missing backports
- stable-2.3 | docs: Fix kernel configs README spelling errors
- docs: Fix outdated links
- stable-2.3 | versions: Upgrade to Cloud Hypervisor v20.1
- Backport osbuilder: Revert to using apk.static for Alpine
- stable-2.3 | runtime: only call stopVirtiofsd when shared_fs is virtio-fs
- Backport versions: Use Ubuntu initrd for non-musl archs
- stable-2.3 | Upgrade to Cloud Hypervisor v20.0 and Openapi-generator v5.3.0
- stable-2.3 | packaging: Fix missing commit message in building kata-runtime
- stable-2.3 | runtime: enable vhost-net for rootless hypervisor
- [backport] agent: create directories for watchable-bind mounts
- runtime: enable FUSE_DAX kernel config for DAX

dfbe74c4 kata-deploy: fix tar command in dockerfile
9e7eed7c versions: Upgrade to Cloud Hypervisor v20.2
53cf1dd0 tools/packaging: add copyright to kata-monitor's Dockerfile
a4dee6a5 packaging: delint tests dockerfiles
fd87b60c packaging: delint kata-deploy dockerfiles
2cb4f7ba ci/openshift-ci: delint dockerfiles
993dcc94 osbuilder: delint dockerfiles
bbd7cc2f packaging: delint kata-monitor dockerfiles
9837ec72 packaging: delint static-build dockerfiles
8785106f packaging/qemu: Use QEMU script to update submodules
a915f082 packaging/qemu: Use partial git clone
ec3faab8 security: Update rust crate versions
1f61be84 osbuilder: Add protoc to the alpine container
d2d8f9ac osbuilder: avoid to copy versions.txt which already deprecated
ca30eee3 kata-manager: Retrieve static tarball
0217abce kata-deploy: Deal with empty containerd conf file
572b25dd osbuilder: be runtime consistent also with podman build
84e69ecb agent: user container ID as watchable storage key for hashmap
77b6cfbd docs: Fix kernel configs README spelling errors
24085c95 docs: Fix outdated k8s link
514bf74f docs: Replicate branch rename on runtime-spec
77a2502a cri-o: Update links for the CRI-O github page
6413ecf4 docs: Backport source reorganization links
a0bed72d versions: Upgrade to Cloud Hypervisor v20.1
d03e05e8 versions: Use fixed, minor version for Alpine
0f7db91c osbuilder: Revert to using apk.static for Alpine
271d67a8 runtime: only call stopVirtiofsd when shared_fs is virtio-fs
7c15335d versions: Use Ubuntu initrd for non-musl archs
15080f20 virtcontainers: clh: Upgrade to openapi-generator v5.3.0
c2b8eb3c virtcontainers: clh: Re-generate the client code
fe0fbab5 versions: Upgrade to Cloud Hypervisor v20.0
be5468fd packaging: Fix missing commit message in building kata-runtime
18bb9a5d runtime: enable vhost-net for rootless hypervisor
3458073d agent: create directories for watchable-bind mounts
0e91503c runtime: enable FUSE_DAX kernel config for DAX

Signed-off-by: Snir Sheriber <ssheribe@redhat.com>
2022-01-06 20:51:21 +02:00
snir911
3d4dedefda Merge pull request #3396 from snir911/stable-2.3-fix-kata-deploy
stable-2.3 | kata-deploy: fix tar command in dockerfile
2022-01-06 20:36:36 +02:00
snir911
919fc56daa Merge pull request #3397 from likebreath/0105/backport_clh_v20.2
stable-2.3 | versions: Upgrade to Cloud Hypervisor v20.2
2022-01-06 11:22:41 +02:00
Snir Sheriber
dfbe74c489 kata-deploy: fix tar command in dockerfile
The tar params were passed incorrectly.

Fixes: #3394
Signed-off-by: Snir Sheriber <ssheribe@redhat.com>
2022-01-06 08:26:36 +02:00
Bo Chen
9e7eed7c4b versions: Upgrade to Cloud Hypervisor v20.2
This is a bugfix release from Cloud Hypervisor addressing the following
issues: 1) Don't error out when setting up the SIGWINCH handler (for
console resize) when this fails due to an older kernel; 2) Seccomp rules
were refined to remove syscalls that are now unused; 3) Fix reboot on
older host kernels when the SIGWINCH handler was not initialised; 4) Fix
a virtio-vsock blocking issue.

Details can be found: https://github.com/cloud-hypervisor/cloud-hypervisor/releases/tag/v20.2

Fixes: #3383

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit 1f581a0405)
2022-01-05 10:52:53 -08:00
Archana Shinde
a0bb8c5599 Merge pull request #3368 from snir911/backports-2.3
stable-2.3 Missing backports
2022-01-04 06:42:42 -08:00
Wainer dos Santos Moschetta
53cf1dd042 tools/packaging: add copyright to kata-monitor's Dockerfile
(added dependency at backport)

The kata-monitor Dockerfile was added by Eric Ernst in commit 2f1cb7995f,
but for some reason the static checker did not catch that the file was missing
the copyright statement at the time it was added. It is now complaining about it,
so this assigns the copyright to him to make the static-checker happy.

Fixes #3329
github.com/kata-containers/tests#4310
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
2022-01-03 15:31:09 +02:00
Wainer dos Santos Moschetta
a4dee6a591 packaging: delint tests dockerfiles
Removed all errors/warnings pointed out by hadolint version 2.7.0, except for the following
ignored rules:
  - "DL3008 warning: Pin versions in apt get install"
  - "DL3041 warning: Specify version with `dnf install -y <package>-<version>`"
  - "DL3033 warning: Specify version with `yum install -y <package>-<version>`"
  - "DL3048 style: Invalid label key"
  - "DL3003 warning: Use WORKDIR to switch to a directory"
  - "DL3018 warning: Pin versions in apk add. Instead of apk add <package> use apk add <package>=<version>"
  - "DL3037 warning: Specify version with zypper install -y <package>[=]<version>"

Fixes #3107
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
2022-01-03 10:53:19 +02:00
Wainer dos Santos Moschetta
fd87b60c7a packaging: delint kata-deploy dockerfiles
Removed all errors/warnings pointed out by hadolint version 2.7.0, except for the following
ignored rules:
  - "DL3008 warning: Pin versions in apt get install"
  - "DL3041 warning: Specify version with `dnf install -y <package>-<version>`"
  - "DL3033 warning: Specify version with `yum install -y <package>-<version>`"
  - "DL3048 style: Invalid label key"
  - "DL3003 warning: Use WORKDIR to switch to a directory"
  - "DL3018 warning: Pin versions in apk add. Instead of apk add <package> use apk add <package>=<version>"
  - "DL3037 warning: Specify version with zypper install -y <package>[=]<version>"

Fixes #3107
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
2022-01-03 10:52:53 +02:00
Wainer dos Santos Moschetta
2cb4f7ba70 ci/openshift-ci: delint dockerfiles
Removed all errors/warnings pointed out by hadolint version 2.7.0, except for the following
ignored rules:
  - "DL3008 warning: Pin versions in apt get install"
  - "DL3041 warning: Specify version with `dnf install -y <package>-<version>`"
  - "DL3033 warning: Specify version with `yum install -y <package>-<version>`"
  - "DL3048 style: Invalid label key"
  - "DL3003 warning: Use WORKDIR to switch to a directory"
  - "DL3018 warning: Pin versions in apk add. Instead of apk add <package> use apk add <package>=<version>"
  - "DL3037 warning: Specify version with zypper install -y <package>[=]<version>"

Fixes #3107
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
2022-01-03 10:52:47 +02:00
Wainer dos Santos Moschetta
993dcc94ff osbuilder: delint dockerfiles
Removed all errors/warnings pointed out by hadolint version 2.7.0, except for the following
ignored rules:
  - "DL3008 warning: Pin versions in apt get install"
  - "DL3041 warning: Specify version with `dnf install -y <package>-<version>`"
  - "DL3033 warning: Specify version with `yum install -y <package>-<version>`"
  - "DL3048 style: Invalid label key"
  - "DL3003 warning: Use WORKDIR to switch to a directory"
  - "DL3018 warning: Pin versions in apk add. Instead of apk add <package> use apk add <package>=<version>"
  - "DL3037 warning: Specify version with zypper install -y <package>[=]<version>"

Fixes #3107
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
2022-01-03 10:52:43 +02:00
Wainer dos Santos Moschetta
bbd7cc2f93 packaging: delint kata-monitor dockerfiles
Removed all errors/warnings pointed out by hadolint version 2.7.0, except for the following
ignored rules:
  - "DL3008 warning: Pin versions in apt get install"
  - "DL3041 warning: Specify version with `dnf install -y <package>-<version>`"
  - "DL3033 warning: Specify version with `yum install -y <package>-<version>`"
  - "DL3048 style: Invalid label key"
  - "DL3003 warning: Use WORKDIR to switch to a directory"
  - "DL3018 warning: Pin versions in apk add. Instead of apk add <package> use apk add <package>=<version>"
  - "DL3037 warning: Specify version with zypper install -y <package>[=]<version>"

Fixes #3107
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
2022-01-03 10:52:39 +02:00
Wainer dos Santos Moschetta
9837ec728c packaging: delint static-build dockerfiles
Removed all errors/warnings pointed out by hadolint version 2.7.0, except for the following
ignored rules:
  - "DL3008 warning: Pin versions in apt get install"
  - "DL3041 warning: Specify version with `dnf install -y <package>-<version>`"
  - "DL3033 warning: Specify version with `yum install -y <package>-<version>`"
  - "DL3048 style: Invalid label key"
  - "DL3003 warning: Use WORKDIR to switch to a directory"
  - "DL3018 warning: Pin versions in apk add. Instead of apk add <package> use apk add <package>=<version>"
  - "DL3037 warning: Specify version with zypper install -y <package>[=]<version>"

Fixes #3107
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
2022-01-03 10:52:33 +02:00
Wainer dos Santos Moschetta
8785106f6c packaging/qemu: Use QEMU script to update submodules
Currently QEMU's submodules are git-cloned, but there is scripts/git-submodule.sh,
which is meant for that. Let's use that script.

Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
2022-01-03 10:52:25 +02:00
Wainer dos Santos Moschetta
a915f08266 packaging/qemu: Use partial git clone
The static build of QEMU spends a good amount of time cloning the
source tree because we do a full git clone. In order to speed up that
operation, this changes the Dockerfile to carry out a partial clone
by using the --depth=1 argument.

Fixes #3291
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
2022-01-03 10:52:17 +02:00
Snir Sheriber
ec3faab892 security: Update rust crate versions
backporting b1f4e945b3 original commit msg (modified):

Update the rust dependencies that have upstream security fixes. Issues
fixed by this change:

- [`RUSTSEC-2020-0002`](https://rustsec.org/advisories/RUSTSEC-2020-0002) (`prost` crate)
- [`RUSTSEC-2020-0036`](https://rustsec.org/advisories/RUSTSEC-2020-0036) (`failure` crate)
- [`RUSTSEC-2021-0073`](https://rustsec.org/advisories/RUSTSEC-2021-0073) (`prost-types` crate)
- [`RUSTSEC-2021-0119`](https://rustsec.org/advisories/RUSTSEC-2021-0119) (`nix` crate)

This change also includes:

- Minor code changes for the new version of `prometheus` for the agent.

Fixes: #3296.

Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
Signed-off-by: Snir Sheriber <ssheribe@redhat.com>
2021-12-29 16:58:14 +02:00
Fabiano Fidêncio
1f61be842d osbuilder: Add protoc to the alpine container
It seems the lack of protoc in the alpine containers is causing issues
with some of our CIs, such as the VFIO one.

Fixes: #3323

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2021-12-29 14:40:24 +02:00
zhanghj
d2d8f9ac65 osbuilder: avoid to copy versions.txt which already deprecated
The versions.txt in the rootfs-builder dir has already been removed,
so avoid copying it in the list of helper files.

Fixes: #3267

Signed-off-by: zhanghj <zhanghj.lc@inspur.com>
2021-12-29 14:39:34 +02:00
Jakob Naucke
ca30eee3e2 kata-manager: Retrieve static tarball
In `utils/kata-manager.sh`, we download the first asset listed for the
release, which used to be the static x86_64 tarball. If that happened to
not match the system architecture, we would abort. Besides that logic
being invalid for !x86_64 (despite not distributing other tarballs at
the moment), the first asset listed is also not the static tarball any
more, it is the vendored source tarball. Retrieve all _static_ tarballs
and select the appropriate one depending on architecture.

Fixes: #3254
Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-12-29 14:39:25 +02:00
Fabiano Fidêncio
0217abce24 kata-deploy: Deal with empty containerd conf file
As containerd can properly run without an existing
`/etc/containerd/config.toml` file (it'd run using the default
configuration), let's explicitly create the file in those cases.

This will avoid issues when amending runtime classes to a non-existent
file.

Fixes: #3229

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Tested-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-12-29 14:39:14 +02:00
Snir Sheriber
572b25dd35 osbuilder: be runtime consistent also with podman build
Use the same runtime used for `podman run` also for the `podman build` command.
Additionally, remove "docker" from the docker_run_args variable.

Fixes: #3239
Signed-off-by: Snir Sheriber <ssheribe@redhat.com>
2021-12-29 14:38:32 +02:00
bin
84e69ecb22 agent: user container ID as watchable storage key for hashmap
Using the sandbox ID as the key will cause the failed containers'
storage to leak.

Fixes: #3172

Signed-off-by: bin <bin@hyper.sh>
2021-12-29 14:38:18 +02:00
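
The agent code is Rust; this Go sketch (hypothetical names) just shows why the key choice matters: keying by container ID lets a failed container's entry be dropped individually, while keying by sandbox ID makes entries collide and leak.

```go
package main

import "fmt"

// watchableStore maps an owner ID to its watchable-storage mounts.
type watchableStore map[string][]string

func (s watchableStore) add(containerID, mount string) {
	s[containerID] = append(s[containerID], mount)
}

// cleanup drops everything owned by one container, e.g. when it fails.
// With a sandbox-ID key, this would also discard (or miss) entries that
// belong to the sandbox's other containers.
func (s watchableStore) cleanup(containerID string) {
	delete(s, containerID)
}

func main() {
	s := watchableStore{}
	s.add("container-a", "/run/kata/watchable/a")
	s.add("container-b", "/run/kata/watchable/b")
	s.cleanup("container-a") // container-b's entry is untouched
	fmt.Println(s)
}
```
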
Archana Shinde
57a6d46376 Merge pull request #3347 from Jakob-Naucke/backport-spell-kernel-readme
stable-2.3 | docs: Fix kernel configs README spelling errors
2021-12-23 08:56:52 -08:00
Jakob Naucke
77b6cfbd15 docs: Fix kernel configs README spelling errors
- `fragments` in backticks
- s/perfoms/performs/

Fixes: #3338
Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-12-23 15:54:10 +01:00
Jakob Naucke
d1530afa19 kata-manager: Retrieve static tarball
In `utils/kata-manager.sh`, we download the first asset listed for the
release, which used to be the static x86_64 tarball. If that happened to
not match the system architecture, we would abort. Besides that logic
being invalid for !x86_64 (despite not distributing other tarballs at
the moment), the first asset listed is also not the static tarball any
more, it is the vendored source tarball. Retrieve all _static_ tarballs
and select the appropriate one depending on architecture.

Fixes: #3254
Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-12-23 13:21:54 +01:00
Peng Tao
0e1cb124b7 Merge pull request #3335 from Jakob-Naucke/backport-src-reorg
docs: Fix outdated links
2021-12-23 11:40:55 +08:00
Jakob Naucke
24085c9553 docs: Fix outdated k8s link
in virtcontainers readme

Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-12-22 19:42:47 +01:00
Jakob Naucke
514bf74f8f docs: Replicate branch rename on runtime-spec
renamed branch `master` to `main`

Fixes: #3336
Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-12-22 18:18:46 +01:00
Fabiano Fidêncio
77a2502a0f cri-o: Update links for the CRI-O github page
The links are either pointing to the no-longer-used `master` branch,
or to the kubernetes-incubator page.

Let's always point to the CRI-O github page, using the `main` branch.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2021-12-22 18:18:46 +01:00
Jakob Naucke
6413ecf459 docs: Backport source reorganization links
#3244 moved directories that were referred to with links to `main`,
which affects stable.

Fixes: #3334
Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-12-22 17:59:41 +01:00
Fabiano Fidêncio
a31b5b9ee8 Merge pull request #3269 from likebreath/1214/backport_clh_v20.1
stable-2.3 | versions: Upgrade to Cloud Hypervisor v20.1
2021-12-15 00:18:56 +01:00
Bo Chen
a0bed72d49 versions: Upgrade to Cloud Hypervisor v20.1
This is a bugfix release from Cloud Hypervisor addressing the following
issues: 1) Networking performance regression with virtio-net; 2) Limit
file descriptors sent in vfio-user support; 3) Fully advertise PCI MMIO
config regions in ACPI tables; 4) Set the TSS and KVM identity maps so
they don't overlap with firmware RAM; 5) Correctly update the DeviceTree
on restore.

Details can be found: https://github.com/cloud-hypervisor/cloud-hypervisor/releases/tag/v20.1

Fixes: #3262

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit bbfb10e169)
2021-12-14 11:06:08 -08:00
Fabiano Fidêncio
d61bcb8a44 Merge pull request #3247 from Jakob-Naucke/backport-apk-static
Backport osbuilder: Revert to using apk.static for Alpine
2021-12-10 12:10:59 +01:00
Jakob Naucke
d03e05e803 versions: Use fixed, minor version for Alpine
- Set Alpine guest rootfs to 3.13 on all instances.
- Specify a minor version rather than patch level as the Alpine
  repositories use that.

Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-12-09 16:47:43 +01:00
Jakob Naucke
0f7db91c0f osbuilder: Revert to using apk.static for Alpine
#2399 partially reverted #418, but missed returning to bootstrapping a
rootfs with `apk.static` instead of copying the entire root, which can
result in drastically larger (more than 10x) images. Revert this as well
(requires some updates to URL building).

Fixes: #3216
Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-12-09 16:47:43 +01:00
Julio Montes
25ee73ceb3 Merge pull request #3230 from liubin/backport/3220
stable-2.3 | runtime: only call stopVirtiofsd when shared_fs is virtio-fs
2021-12-08 08:32:04 -06:00
Fabiano Fidêncio
64ae76e967 Merge pull request #3224 from Jakob-Naucke/backport-ppc64le-s390x-ubuntu-initrd
Backport versions: Use Ubuntu initrd for non-musl archs
2021-12-08 09:05:13 +01:00
bin
271d67a831 runtime: only call stopVirtiofsd when shared_fs is virtio-fs
If shared_fs is set to virtio-9p, the virtiofsd is not started,
so there is no need to stop it.

Fixes: #3219

Signed-off-by: bin <bin@hyper.sh>
2021-12-08 11:30:35 +08:00
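
A tiny Go sketch of the guard (constant and function names are illustrative):

```go
package main

import "fmt"

const (
	sharedFSVirtioFS = "virtio-fs"
	sharedFSVirtio9P = "virtio-9p"
)

// stopSharedFS tears down virtiofsd only when it was actually started,
// i.e. when the shared filesystem is virtio-fs; with virtio-9p there is
// no daemon to stop.
func stopSharedFS(sharedFS string) {
	if sharedFS != sharedFSVirtioFS {
		return
	}
	fmt.Println("stopping virtiofsd")
}

func main() {
	stopSharedFS(sharedFSVirtio9P) // no-op
	stopSharedFS(sharedFSVirtioFS) // stops the daemon
}
```
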
Julio Montes
f42c7d5125 Merge pull request #3215 from likebreath/1206/backport_clh
stable-2.3 | Upgrade to Cloud Hypervisor v20.0 and Openapi-generator v5.3.0
2021-12-07 07:51:21 -06:00
Jakob Naucke
7c15335dc9 versions: Use Ubuntu initrd for non-musl archs
ppc64le & s390x have no (well supported) musl target for Rust,
therefore, the agent must use glibc and cannot use Alpine. Specify
Ubuntu as the distribution to be used for initrd.

Fixes: #3212
Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-12-07 12:15:16 +01:00
Bo Chen
15080f20e7 virtcontainers: clh: Upgrade to openapi-generator v5.3.0
The latest release of openapi-generator v5.3.0 contains the fix for
`dropping err` bug [1]. This patch also re-generated the client code of
Cloud Hypervisor to have the bug fixed.

[1] https://github.com/OpenAPITools/openapi-generator/pull/10275

Fixes: #3201

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit 995300260e)
2021-12-06 18:41:39 -08:00
Bo Chen
c2b8eb3c2c virtcontainers: clh: Re-generate the client code
This patch re-generates the client code for Cloud Hypervisor v19.0.
Note: The client code of cloud-hypervisor's (CLH) OpenAPI is
automatically generated by openapi-generator [1-2].

[1] https://github.com/OpenAPITools/openapi-generator
[2] https://github.com/kata-containers/kata-containers/blob/main/src/runtime/virtcontainers/pkg/cloud-hypervisor/README.md

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit 4756a04b2d)
2021-12-06 18:38:48 -08:00
Bo Chen
fe0fbab574 versions: Upgrade to Cloud Hypervisor v20.0
Highlights from the Cloud Hypervisor release v20.0: 1) Multiple PCI
segments support (now support up to 496 PCI devices); 2) CPU pinning; 3)
Improved VFIO support; 4) Safer code; 5) Extended documentation; 6) Bug
fixes.

Details can be found: https://github.com/cloud-hypervisor/cloud-hypervisor/releases/tag/v20.0

Fixes: #3178

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit 0bf4d2578a)
2021-12-06 18:38:48 -08:00
GabyCT
89f9672f56 Merge pull request #3205 from Bevisy/stable-2.3-3196
stable-2.3 | packaging: Fix missing commit message in building kata-runtime
2021-12-06 10:26:17 -06:00
Fabiano Fidêncio
0a32a1793d Merge pull request #3203 from fengwang666/my_2.3_pr_backport
stable-2.3 | runtime: enable vhost-net for rootless hypervisor
2021-12-06 17:08:33 +01:00
Binbin Zhang
be5468fda7 packaging: Fix missing commit message in building kata-runtime
add `git` package to the shim-v2 build image

Fixes: #3196
Backport PR: #3197

Signed-off-by: Binbin Zhang <binbin36520@gmail.com>
2021-12-06 11:04:18 +08:00
Feng Wang
18bb9a5d9b runtime: enable vhost-net for rootless hypervisor
vhost-net is disabled in the rootless kata runtime feature, which has been abandoned since Kata 2.0.
I reused the rootless flag for the nonroot hypervisor and would like to enable vhost-net.

Fixes #3182

Signed-off-by: Feng Wang <feng.wang@databricks.com>
(cherry picked from commit b3bcb7b251)
2021-12-03 11:28:40 -08:00
Bin Liu
f068057073 Merge pull request #3184 from liubin/backport/3140
[backport] agent: create directories for watchable-bind mounts
2021-12-03 21:24:14 +08:00
bin
3458073d09 agent: create directories for watchable-bind mounts
In function `update_target`, if the updated source is a directory,
we should create the corresponding directory.

Fixes: #3140

Signed-off-by: bin <bin@hyper.sh>
2021-12-03 14:32:08 +08:00
Bin Liu
f9c09ad5bc Merge pull request #3177 from fengwang666/my_2.3_pr_backport
runtime: enable FUSE_DAX kernel config for DAX
2021-12-03 13:32:18 +08:00
Feng Wang
0e91503cd4 runtime: enable FUSE_DAX kernel config for DAX
Otherwise DAX device cannot be set up.

Fixes #3165

Signed-off-by: Feng Wang <feng.wang@databricks.com>
(cherry picked from commit 6105e3ee85)
2021-12-02 09:22:26 -08:00
Fabiano Fidêncio
185f96d170 Merge pull request #3150 from fidencio/2.3.0-branch-bump
# Kata Containers 2.3.0
2021-11-29 22:27:21 +01:00
Fabiano Fidêncio
9bc543f5db release: Kata Containers 2.3.0
- stable-2.3 | osbuilder: fix missing cpio package when building rootfs-initrd image
- stable-2.3 | osbuilder: add coreutils to guest rootfs
- stable-2.3 | backport kata-deploy fixes / improvements
- stable-2.3 | tools/osbuilder: build QAT kernel in fedora 34
- backport: fix symlink handling in agent watcher
- stable-2.3: add VFIO kernel dependencies for ppc64le
- [stable] runtime: Update containerd to 1.5.8
- stable-2.3: disable libudev when building static QEMU
- stable-2.3: virtcontainers: fix failing template test on ppc64le
- stable-2.3: cgroups systemd fix
- stable-2.3:remove non used actions
- stable-2.3 | versions: bump golang to 1.17.x

198e0d16 release: Adapt kata-deploy for 2.3.0
df34e919 osbuilder: fix missing cpio package when building rootfs-initrd image
f61e31cd osbuilder: add coreutils to guest rootfs
cb7891e0 tools/osbuilder: build QAT kernel in fedora 34
2667e028 workflows: only allow org members to run `/test_kata_deploy`
3542cba8 workflows: Add back the checks for running test-kata-deploy
117b9202 kata-deploy: Ensure we test HEAD with `/test_kata_deploy`
db9cd107 watcher: tests: ensure there is 20ms delay between fs writes
a51a1f6d watchers: handle symlinked directories, dir removal
5bc1c209 watchers: don't dereference symlinks when copying files
34a1b539 stable-2.3: add VFIO kernel dependencies for ppc64le
8a705f74 runtime: Update containerd to 1.5.8
ac5ab86e qemu: fix snap build by disabling libudev
d22ec599 virtcontainers: fix failing template test on ppc64le
f9bde321 workflows: Remove non-used main.yaml
b8215119 cgroups: Fix systemd cgroup support
a9d5377b cgroups: pass vhost-vsock device to cgroup
ea83ff1f runtime: remove prefix when cgroups are managed by systemd
91003c27 versions: bump golang to 1.17.x

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2021-11-29 20:08:39 +01:00
Fabiano Fidêncio
198e0d1666 release: Adapt kata-deploy for 2.3.0
kata-deploy files must be adapted to a new release.  The cases where it
happens are when the release goes from -> to:
* main -> stable:
  * kata-deploy / kata-cleanup: change from "latest" to "rc0"
  * kata-deploy-stable / kata-cleanup-stable: are removed

* stable -> stable:
  * kata-deploy / kata-cleanup: bump the release to the new one.

There are no changes when doing an alpha release, as the files on the
"main" branch always point to the "latest" and "stable" tags.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2021-11-29 20:08:39 +01:00
Fabiano Fidêncio
bf183c5f7f Merge pull request #3148 from fidencio/wip/stable-2.3-fix-cpio-missing-cpio-package
stable-2.3 | osbuilder: fix missing cpio package when building rootfs-initrd image
2021-11-29 20:07:16 +01:00
Binbin Zhang
df34e91978 osbuilder: fix missing cpio package when building rootfs-initrd image
1. install the cpio package before building the rootfs-initrd image
2. add a `pipefail;errexit` check to the scripts

Fixes: #3144

Signed-off-by: Binbin Zhang <binbin36520@gmail.com>
(cherry picked from commit 8ee67aae4f)
2021-11-29 18:29:02 +01:00
Fabiano Fidêncio
5995efc0a6 Merge pull request #3143 from bergwolf/coreutils-2.3
stable-2.3 | osbuilder: add coreutils to guest rootfs
2021-11-29 12:31:38 +01:00
Fabiano Fidêncio
000f878417 Merge pull request #3141 from fidencio/wip/kata-deploy-backports
stable-2.3 | backport kata-deploy fixes / improvements
2021-11-29 12:11:21 +01:00
Fabiano Fidêncio
a6a76bb092 Merge pull request #3142 from fidencio/wip/stable-2.3-backports-before-a-release
stable-2.3 | tools/osbuilder: build QAT kernel in fedora 34
2021-11-29 12:11:13 +01:00
Peng Tao
f61e31cd84 osbuilder: add coreutils to guest rootfs
So that the debug console is more useful. In the meantime, remove
iptables as it is not used by kata-agent any more.

Fixes: #3138
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2021-11-29 16:53:04 +08:00
Julio Montes
cb7891e0b4 tools/osbuilder: build QAT kernel in fedora 34
The kernel compiled in fedora 35 (latest) is not working; the following
error is reported:

```
qemu-system-x86_64: Error loading uncompressed kernel without PVH ELF Note
```

Build the QAT kernel in a fedora 34 container to fix it

fixes #3135

Signed-off-by: Julio Montes <julio.montes@intel.com>
(cherry picked from commit 857501d8dd)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2021-11-29 08:24:31 +01:00
Fabiano Fidêncio
2667e0286a workflows: only allow org members to run /test_kata_deploy
Let's take advantage of the "is-organization-member" action and only
allow members who are part of the `kata-containers` organization to
trigger `/test_kata_deploy`.

One caveat with this approach is that for the user to be considered as
part of an organization, they **must** have their "Organization
Visibility" configured as Public (and I think the default is Private).

This was found out and suggested by @jcvenegas!

Fixes: #3130

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
(cherry picked from commit 5e7c1a290f)
2021-11-29 08:04:46 +01:00
Fabiano Fidêncio
3542cba8f3 workflows: Add back the checks for running test-kata-deploy
Commit 3c9ae7f made /test_kata_deploy run
against HEAD, but it also mistakenly removed all the checks that ensure
/test_kata_deploy only runs when explicitly called.

Mea culpa on this, and let's add the tests back.

Fixes: #3101

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
(cherry picked from commit a7c08aa4b6)
2021-11-29 08:04:41 +01:00
Fabiano Fidêncio
117b920230 kata-deploy: Ensure we test HEAD with /test_kata_deploy
In the past few releases we ended up hitting issues that could easily be
avoided if `/test_kata_deploy` used HEAD instead of a specific
tarball.

At the end of the day, we want to ensure kata-deploy works, but before
we cut a release we also want to ensure that the binaries used in that
release are in good shape. If we don't do that, we end up either
having to roll a release back, or having to cut a second release in a
really short time (and that's time consuming).

Note: there's code duplication here that could and should be avoided,
but I sincerely would prefer treating it in a different PR.

Fixes: #3001

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
(cherry picked from commit 3c9ae7fb4b)
2021-11-29 08:02:56 +01:00
Eric Ernst
5694749ce5 Merge pull request #3087 from egernst/fix-symlinks-backport
backport: fix symlink handling in agent watcher
2021-11-19 15:31:48 -08:00
Eric Ernst
db9cd1078f watcher: tests: ensure there is 20ms delay between fs writes
We noticed s390x test failures on several of the watcher unit tests.

We discovered that on s390 in particular, if we update a file in quick
succession, the timestamp on the file would not be unique between the
writes. Through testing, we observed that a 20 millisecond delay is very
reliable for being able to observe the timestamp update. Let's ensure we
have this delay between writes for our tests so our tests are more
reliable.

In "the real world" we'll be polling for changes every 2 seconds, and
the frequency of filesystem updates will be on the order of minutes and
days, rather than microseconds.

Fixes: #2946

Signed-off-by: Eric Ernst <eric_ernst@apple.com>
2021-11-19 13:04:26 -08:00
Eric Ernst
a51a1f6d06 watchers: handle symlinked directories, dir removal
- Even a directory could be a symlink - check for this. This is very
common when using configmaps/secrets
- Add unit test to better mimic a configmap, configmap update
- We would never remove directories before. Let's ensure that these are
added to the watched_list, and verify in unit tests
- Update unit tests which exercise maximum number of files per entry. There's a change
in behavior now that we consider directories/symlinks watchable as well.
For these tests, it means we support one less file in a watchable mount.

Signed-off-by: Eric Ernst <eric_ernst@apple.com>
2021-11-19 13:04:26 -08:00
Eric Ernst
5bc1c209b2 watchers: don't dereference symlinks when copying files
The current implementation just copies the file, dereferencing any
symlinks in the process. This results in symlinks not being preserved,
and a change in layout relative to the mount that we are making
watchable.

What we want is something like "cp -d"

This isn't available in a crate, so let's go ahead and introduce a copy
function which will create a symlink with same relative path if the
source file is a symlink. Regular files are handled with the standard
fs::copy.

Introduce a unit test to verify symlinks are now handled appropriately.

Fixes: #2950

Signed-off-by: Eric Ernst <eric_ernst@apple.com>
2021-11-19 13:04:24 -08:00
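
The fix itself is Rust; an equivalent "cp -d"-style copy can be sketched in Go with only the standard library:

```go
package watcher

import (
	"io"
	"os"
)

// copyPreservingSymlinks behaves like "cp -d": if src is a symlink it
// recreates the link with the same target instead of copying the file
// the link points at; regular files are copied byte for byte.
func copyPreservingSymlinks(src, dst string) error {
	info, err := os.Lstat(src) // Lstat does NOT follow symlinks
	if err != nil {
		return err
	}
	if info.Mode()&os.ModeSymlink != 0 {
		target, err := os.Readlink(src)
		if err != nil {
			return err
		}
		return os.Symlink(target, dst)
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, info.Mode().Perm())
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}
```
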
Fabiano Fidêncio
b2851ffc9c Merge pull request #3082 from Amulyam24/kernel_vfio
stable-2.3: add VFIO kernel dependencies for ppc64le
2021-11-19 17:26:23 +01:00
Fabiano Fidêncio
45eafafdf3 Merge pull request #3076 from c3d/backport/3074-containerd-update
[stable] runtime: Update containerd to 1.5.8
2021-11-19 10:39:15 +01:00
Amulyam24
34a1b5396a stable-2.3: add VFIO kernel dependencies for ppc64le
Recently added VFIO kernel configs require additional
dependencies on ppc64le.

Fixes: #2991

Signed-off-by: Amulyam24 <amulmek1@in.ibm.com>
2021-11-19 11:29:10 +05:30
Greg Kurz
f1cd3b6300 Merge pull request #3070 from gkurz/backport-snap-udev
stable-2.3: disable libudev when building static QEMU
2021-11-18 22:18:41 +01:00
Greg Kurz
e0b74bb413 Merge pull request #3072 from gkurz/backport-template-test
stable-2.3: virtcontainers: fix failing template test on ppc64le
2021-11-18 21:29:02 +01:00
Christophe de Dinechin
8a705f74b5 runtime: Update containerd to 1.5.8
Release 1.5.8 of containerd contains fixes for two low-severity advisories:

[GHSA-5j5w-g665-5m35](https://github.com/opencontainers/distribution-spec/security/advisories/GHSA-mc8v-mgrf-8f4m)
[GHSA-77vh-xpmg-72qh](https://github.com/opencontainers/image-spec/security/advisories/GHSA-77vh-xpmg-72qh)

Fixes: #3074

Signed-off-by: Christophe de Dinechin <dinechin@redhat.com>
2021-11-18 19:30:36 +01:00
Amulyam24
ac5ab86ebd qemu: fix snap build by disabling libudev
While building the snap, static QEMU is used. Disable libudev,
as it doesn't have static libraries on most distros across all
archs.

Backport-from: #3003
Fixes: #3002

Signed-off-by: Amulyam24 <amulmek1@in.ibm.com>
(cherry picked from commit 112ea25859)
Signed-off-by: Greg Kurz <groug@kaod.org>
2021-11-18 17:50:58 +01:00
Amulyam24
d22ec59920 virtcontainers: fix failing template test on ppc64le
If a file/directory doesn't exist, os.Stat() returns an
error. Assert the returned value with os.IsNotExist() to
prevent it from failing.

Backport-from: #2921
Fixes: #2920

Signed-off-by: Amulyam24 <amulmek1@in.ibm.com>
(cherry picked from commit d5a18173b9)
Signed-off-by: Greg Kurz <groug@kaod.org>
2021-11-18 16:05:18 +01:00
snir911
440657b36d Merge pull request #3037 from snir911/stable-fix-cgroups
stable-2.3: cgroups systemd fix
2021-11-15 12:19:58 +02:00
snir911
0c00a9d463 Merge pull request #3039 from snir911/stable-2.3-remove-non-used-actions
stable-2.3:remove non used actions
2021-11-15 11:09:33 +02:00
Fabiano Fidêncio
f9bde321e9 workflows: Remove non-used main.yaml
The main.yaml workflow was created and used only on 1.x.  We inherited
it, but we didn't remove it after deprecating the 1.x repos.

While here, let's also update the reference to the `main.yaml` file,
and point to `release.yaml` (the file that's actually used for 2.x).

Fixes: #3033

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2021-11-14 10:33:19 +02:00
Snir Sheriber
b821511992 cgroups: Fix systemd cgroup support
As github.com/containerd/cgroups doesn't support scope
units, which are essential in some cases, let's create
the cgroups manually and load them through the cgroups
api.
This is currently done only when there's a single sandbox
cgroup (sandbox_cgroup_only=true); otherwise we set it
as a static cgroup path, as it used to be (until a proper
solution for the overhead cgroup under systemd is
suggested).

Backport-from: #2959
Fixes: #2868
Signed-off-by: Snir Sheriber <ssheribe@redhat.com>
2021-11-14 09:41:35 +02:00
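
At the filesystem level, "create the cgroups manually" boils down to making a directory under the cgroup mount and writing a PID into cgroup.procs. A stdlib-only Go sketch, assuming a cgroup v1 mount and an illustrative path (the real fix goes through the cgroups API and systemd scope naming):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
)

// createAndJoinCgroup creates a v1 cpu cgroup directory and moves a
// process into it by writing to cgroup.procs. Requires root and a
// cgroup v1 hierarchy mounted at /sys/fs/cgroup/cpu.
func createAndJoinCgroup(name string, pid int) error {
	path := filepath.Join("/sys/fs/cgroup/cpu", name)
	if err := os.MkdirAll(path, 0o755); err != nil {
		return err
	}
	procs := filepath.Join(path, "cgroup.procs")
	return os.WriteFile(procs, []byte(strconv.Itoa(pid)), 0o644)
}

func main() {
	if err := createAndJoinCgroup("kata_sandbox-example", os.Getpid()); err != nil {
		fmt.Fprintln(os.Stderr, "cgroup setup failed:", err)
	}
}
```
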
Snir Sheriber
a9d5377bd9 cgroups: pass vhost-vsock device to cgroup
for the sandbox cgroup

Backport-from: #2959
Signed-off-by: Snir Sheriber <ssheribe@redhat.com>
2021-11-14 09:41:22 +02:00
Snir Sheriber
ea83ff1fc3 runtime: remove prefix when cgroups are managed by systemd
as done previously in 9949daf4dc

Backport-from: #2959
Signed-off-by: Snir Sheriber <ssheribe@redhat.com>
2021-11-14 09:37:24 +02:00
Fabiano Fidêncio
03f7a5e49b Merge pull request #3026 from fidencio/wip/stable-2.3-backport-golang-bump
stable-2.3 | versions: bump golang to 1.17.x
2021-11-13 00:08:12 +01:00
Fabiano Fidêncio
91003c2751 versions: bump golang to 1.17.x
According to https://endoflife.date/go, golang 1.15 is not supported
anymore. Let's remove it from our tests, add 1.17.x, and bump the
newest version known to work when building kata to 1.17.3.

Fixes: #3016

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
(cherry picked from commit 395638c4bc)
2021-11-11 22:27:59 +01:00
Eric Ernst
57ffe14940 Merge pull request #3021 from ManaSugi/fix-yq-for-2.3
stable-2.3 | release: Use ${GOPATH}/bin/yq for upload-libseccomp-tarball action
2021-11-11 11:39:02 -08:00
Manabu Sugimoto
5e9b807ba0 release: Use ${GOPATH}/bin/yq for upload-libseccomp-tarball action
We need to explicitly call `${GOPATH}/bin/yq` that is installed by
`ci/install_yq.sh`.

Fixes: #3014

Signed-off-by: Manabu Sugimoto <Manabu.Sugimoto@sony.com>
(cherry picked from commit 3430723594)
2021-11-11 23:46:37 +09:00
Fabiano Fidêncio
de6fe98ec0 Merge pull request #3010 from fidencio/2.3.0-rc1-branch-bump
# Kata Containers 2.3.0-rc1
2021-11-10 21:44:58 +01:00
Fabiano Fidêncio
de0eea5f44 release: Kata Containers 2.3.0-rc1
- stable-2.3 | runtime: Revert "runtime: use containerd package instead of cri-containerd

96b66d2c docs: Fix typo
62a51d51 runtime: Revert "runtime: use containerd package instead of cri-containerd"

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2021-11-10 19:01:14 +01:00
Fabiano Fidêncio
73d7929c10 Merge pull request #3008 from fidencio/wip/backport-crioption-fix
stable-2.3 | runtime: Revert "runtime: use containerd package instead of cri-containerd
2021-11-10 17:10:29 +01:00
James O. D. Hunt
96b66d2cb4 docs: Fix typo
Correct a typo identified by the static checker's spell checker.

Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
(cherry picked from commit b09dd7a883)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2021-11-10 15:58:34 +01:00
Peng Tao
62a51d51a2 runtime: Revert "runtime: use containerd package instead of cri-containerd"
This reverts commit 76f16fd1a7 to bring
back cri-containerd crioptions parsing so that kata works with older
containerd versions like v1.3.9 and v1.4.6.

Fixes: #2999
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
(cherry picked from commit eacfcdec19)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2021-11-10 13:42:38 +01:00
3245 changed files with 85694 additions and 312999 deletions

@@ -1,40 +0,0 @@
#!/bin/bash
#
# Copyright (c) 2022 Red Hat
#
# SPDX-License-Identifier: Apache-2.0
#

script_dir=$(dirname "$(readlink -f "$0")")
parent_dir=$(realpath "${script_dir}/../..")
cidir="${parent_dir}/ci"
source "${cidir}/lib.sh"

cargo_deny_file="${script_dir}/action.yaml"

cat cargo-deny-skeleton.yaml.in > "${cargo_deny_file}"

changed_files_status=$(run_get_pr_changed_file_details)
changed_files_status=$(echo "$changed_files_status" | grep "Cargo\.toml$" || true)
changed_files=$(echo "$changed_files_status" | awk '{print $NF}' || true)

if [ -z "$changed_files" ]; then
	cat >> "${cargo_deny_file}" << EOF
    - run: echo "No Cargo.toml files to check"
      shell: bash
EOF
fi

for path in $changed_files
do
	cat >> "${cargo_deny_file}" << EOF
    - name: ${path}
      continue-on-error: true
      shell: bash
      run: |
        pushd $(dirname ${path})
        cargo deny check
        popd
EOF
done

@@ -1,30 +0,0 @@
#
# Copyright (c) 2022 Red Hat
#
# SPDX-License-Identifier: Apache-2.0
#
name: 'Cargo Crates Check'
description: 'Checks every Cargo.toml file using cargo-deny'
env:
  CARGO_TERM_COLOR: always
runs:
  using: "composite"
  steps:
    - name: Install Rust
      uses: actions-rs/toolchain@v1
      with:
        profile: minimal
        toolchain: nightly
        override: true
    - name: Cache
      uses: Swatinem/rust-cache@v2
    - name: Install Cargo deny
      shell: bash
      run: |
        which cargo
        cargo install --locked cargo-deny || true

@@ -1,100 +0,0 @@
name: Add backport label
on:
  pull_request:
    types:
      - opened
      - synchronize
      - reopened
      - edited
      - labeled
      - unlabeled

jobs:
  check-issues:
    if: ${{ github.event.label.name != 'auto-backport' }}
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code to allow hub to communicate with the project
        if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
        uses: actions/checkout@v3

      - name: Install hub extension script
        run: |
          pushd $(mktemp -d) &>/dev/null
          git clone --single-branch --depth 1 "https://github.com/kata-containers/.github" && cd .github/scripts
          sudo install hub-util.sh /usr/local/bin
          popd &>/dev/null

      - name: Determine whether to add label
        if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          CONTAINS_AUTO_BACKPORT: ${{ contains(github.event.pull_request.labels.*.name, 'auto-backport') }}
        id: add_label
        run: |
          pr=${{ github.event.pull_request.number }}
          linked_issue_urls=$(hub-util.sh \
            list-issues-for-pr "$pr" |\
            grep -v "^\#" |\
            cut -d';' -f3 || true)
          [ -z "$linked_issue_urls" ] && {
            echo "::error::No linked issues for PR $pr"
            exit 1
          }
          has_bug=false
          for issue_url in $(echo "$linked_issue_urls")
          do
            issue=$(echo "$issue_url"| awk -F\/ '{print $NF}' || true)
            [ -z "$issue" ] && {
              echo "::error::Cannot determine issue number from $issue_url for PR $pr"
              exit 1
            }
            labels=$(hub-util.sh list-labels-for-issue "$issue")
            label_names=$(echo $labels | jq -r '.[].name' || true)
            if [[ "$label_names" =~ "bug" ]]; then
              has_bug=true
              break
            fi
          done
          has_backport_needed_label=${{ contains(github.event.pull_request.labels.*.name, 'needs-backport') }}
          has_no_backport_needed_label=${{ contains(github.event.pull_request.labels.*.name, 'no-backport-needed') }}
          echo "::set-output name=add_backport_label::false"
          if [ $has_backport_needed_label = true ] || [ $has_bug = true ]; then
            if [[ $has_no_backport_needed_label = false ]]; then
              echo "::set-output name=add_backport_label::true"
            fi
          fi
          # Do not spam comment, only if auto-backport label is going to be newly added.
          echo "::set-output name=auto_backport_added::$CONTAINS_AUTO_BACKPORT"

      - name: Add comment
        if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') && steps.add_label.outputs.add_backport_label == 'true' && steps.add_label.outputs.auto_backport_added == 'false' }}
        uses: actions/github-script@v6
        with:
          script: |
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: 'This issue has been marked for auto-backporting. Add label(s) backport-to-BRANCHNAME to backport to them'
            })

      # Allow label to be removed by adding no-backport-needed label
      - name: Remove auto-backport label
        if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') && steps.add_label.outputs.add_backport_label == 'false' }}
        uses: andymckay/labeler@e6c4322d0397f3240f0e7e30a33b5c5df2d39e90
        with:
          remove-labels: "auto-backport"
          repo-token: ${{ secrets.GITHUB_TOKEN }}

      - name: Add auto-backport label
        if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') && steps.add_label.outputs.add_backport_label == 'true' }}
        uses: andymckay/labeler@e6c4322d0397f3240f0e7e30a33b5c5df2d39e90
        with:
          add-labels: "auto-backport"
          repo-token: ${{ secrets.GITHUB_TOKEN }}

@@ -1,40 +0,0 @@
# Copyright (c) 2022 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
name: Add PR sizing label
on:
  pull_request_target:
    types:
      - opened
      - reopened
      - synchronize

jobs:
  add-pr-size-label:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v1

      - name: Install PR sizing label script
        run: |
          # Clone into a temporary directory to avoid overwriting
          # any existing github directory.
          pushd $(mktemp -d) &>/dev/null
          git clone --single-branch --depth 1 "https://github.com/kata-containers/.github" && cd .github/scripts
          sudo install pr-add-size-label.sh /usr/local/bin
          popd &>/dev/null

      - name: Add PR sizing label
        env:
          GITHUB_TOKEN: ${{ secrets.KATA_GITHUB_ACTIONS_PR_SIZE_TOKEN }}
        run: |
          pr=${{ github.event.number }}
          # Removing man-db, workflow kept failing, fixes: #4480
          sudo apt -y remove --purge man-db
          sudo apt -y install diffstat patchutils
          pr-add-size-label.sh -p "$pr"

@@ -1,29 +0,0 @@
on:
  pull_request_target:
    types: ["labeled", "closed"]

jobs:
  backport:
    name: Backport PR
    runs-on: ubuntu-latest
    if: |
      github.event.pull_request.merged == true
      && contains(github.event.pull_request.labels.*.name, 'auto-backport')
      && (
        (github.event.action == 'labeled' && github.event.label.name == 'auto-backport')
        || (github.event.action == 'closed')
      )
    steps:
      - name: Backport Action
        uses: sqren/backport-github-action@v8.9.2
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          auto_backport_label_prefix: backport-to-

      - name: Info log
        if: ${{ success() }}
        run: cat /home/runner/.backport/backport.info.log

      - name: Debug log
        if: ${{ failure() }}
        run: cat /home/runner/.backport/backport.debug.log

@@ -1,26 +0,0 @@
name: Cargo Crates Check Runner
on:
  pull_request:
    types:
      - opened
      - edited
      - reopened
      - synchronize
    paths-ignore: [ '**.md', '**.png', '**.jpg', '**.jpeg', '**.svg', '/docs/**' ]

jobs:
  cargo-deny-runner:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
        uses: actions/checkout@v3
      - name: Generate Action
        if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
        run: bash cargo-deny-generator.sh
        working-directory: ./.github/cargo-deny-composite-action/
        env:
          GOPATH: ${{ runner.workspace }}/kata-containers
      - name: Run Action
        if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
        uses: ./.github/cargo-deny-composite-action

@@ -1,159 +0,0 @@
name: CI | Publish CC runtime payload for amd64
on:
  workflow_call:
    inputs:
      target-arch:
        required: true
        type: string

jobs:
  build-asset:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        asset:
          - cc-cloud-hypervisor
          - cc-kernel
          - cc-qemu
          - cc-rootfs-image
          - cc-virtiofsd
          - cc-sev-kernel
          - cc-sev-ovmf
          - cc-sev-rootfs-initrd
          - cc-tdx-kernel
          - cc-tdx-rootfs-image
          - cc-tdx-qemu
          - cc-tdx-td-shim
          - cc-tdx-tdvf
    steps:
      - name: Login to Kata Containers quay.io
        uses: docker/login-action@v2
        with:
          registry: quay.io
          username: ${{ secrets.QUAY_DEPLOYER_USERNAME }}
          password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}

      - uses: actions/checkout@v3
        with:
          fetch-depth: 0 # This is needed in order to keep the commit ids history

      - name: Build ${{ matrix.asset }}
        run: |
          make "${KATA_ASSET}-tarball"
          build_dir=$(readlink -f build)
          # store-artifact does not work with symlink
          sudo cp -r "${build_dir}" "kata-build"
        env:
          KATA_ASSET: ${{ matrix.asset }}
          TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
          PUSH_TO_REGISTRY: yes

      - name: store-artifact ${{ matrix.asset }}
        uses: actions/upload-artifact@v3
        with:
          name: kata-artifacts
          path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
          retention-days: 1
          if-no-files-found: error

      - name: store-artifact root_hash_tdx.txt
        uses: actions/upload-artifact@v3
        with:
          name: root_hash_tdx.txt
          path: tools/osbuilder/root_hash_tdx.txt
          retention-days: 1
          if-no-files-found: ignore

      - name: store-artifact root_hash_vanilla.txt
        uses: actions/upload-artifact@v3
        with:
          name: root_hash_vanilla.txt
          path: tools/osbuilder/root_hash_vanilla.txt
          retention-days: 1
          if-no-files-found: ignore

  build-asset-cc-shim-v2:
    runs-on: ubuntu-latest
    needs: build-asset
    steps:
      - name: Login to Kata Containers quay.io
        uses: docker/login-action@v2
        with:
          registry: quay.io
          username: ${{ secrets.QUAY_DEPLOYER_USERNAME }}
          password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}

      - uses: actions/checkout@v3

      - name: Get root_hash_tdx.txt
        uses: actions/download-artifact@v3
        with:
          name: root_hash_tdx.txt
          path: tools/osbuilder/

      - name: Get root_hash_vanilla.txt
        uses: actions/download-artifact@v3
        with:
          name: root_hash_vanilla.txt
          path: tools/osbuilder/

      - name: Build cc-shim-v2
        run: |
          make cc-shim-v2-tarball
          build_dir=$(readlink -f build)
          # store-artifact does not work with symlink
          sudo cp -r "${build_dir}" "kata-build"
        env:
          PUSH_TO_REGISTRY: yes

      - name: store-artifact cc-shim-v2
        uses: actions/upload-artifact@v3
        with:
          name: kata-artifacts
          path: kata-build/kata-static-cc-shim-v2.tar.xz
          retention-days: 1
          if-no-files-found: error

  create-kata-tarball:
    runs-on: ubuntu-latest
    needs: [build-asset, build-asset-cc-shim-v2]
    steps:
      - uses: actions/checkout@v3
      - name: get-artifacts
        uses: actions/download-artifact@v3
        with:
          name: kata-artifacts
          path: kata-artifacts
      - name: merge-artifacts
        run: |
          ./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts
      - name: store-artifacts
        uses: actions/upload-artifact@v3
        with:
          name: kata-static-tarball
          path: kata-static.tar.xz
          retention-days: 1
          if-no-files-found: error

  kata-payload:
    needs: create-kata-tarball
    runs-on: ubuntu-latest
    steps:
      - name: Login to Confidential Containers quay.io
        uses: docker/login-action@v2
        with:
          registry: quay.io
          username: ${{ secrets.COCO_QUAY_DEPLOYER_USERNAME }}
          password: ${{ secrets.COCO_QUAY_DEPLOYER_PASSWORD }}

      - uses: actions/checkout@v3

      - name: get-kata-tarball
        uses: actions/download-artifact@v3
        with:
          name: kata-static-tarball

      - name: build-and-push-kata-payload
        id: build-and-push-kata-payload
        run: |
          ./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
            $(pwd)/kata-static.tar.xz "quay.io/confidential-containers/runtime-payload-ci" \
            "kata-containers-${{ inputs.target-arch }}"

@@ -1,149 +0,0 @@
name: CI | Publish CC runtime payload for s390x
on:
workflow_call:
inputs:
target-arch:
required: true
type: string
jobs:
build-asset:
runs-on: s390x
strategy:
matrix:
asset:
- cc-kernel
- cc-qemu
- cc-rootfs-image
- cc-virtiofsd
steps:
- name: Login to Kata Containers quay.io
uses: docker/login-action@v2
with:
registry: quay.io
username: ${{ secrets.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- name: Adjust permissions for the repo
run: |
sudo chown -R $USER:$USER $GITHUB_WORKSPACE
- uses: actions/checkout@v3
with:
fetch-depth: 0 # This is needed in order to keep the commit ids history
- name: Build ${{ matrix.asset }}
run: |
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
sudo cp -r "${build_dir}" "kata-build"
sudo chown -R $(id -u):$(id -g) "kata-build"
env:
KATA_ASSET: ${{ matrix.asset }}
TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
PUSH_TO_REGISTRY: yes
- name: store-artifact ${{ matrix.asset }}
uses: actions/upload-artifact@v3
with:
name: kata-artifacts-s390x
path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
retention-days: 1
if-no-files-found: error
- name: store-artifact root_hash_vanilla.txt
uses: actions/upload-artifact@v3
with:
name: root_hash_vanilla.txt-s390x
path: tools/osbuilder/root_hash_vanilla.txt
retention-days: 1
if-no-files-found: ignore
build-asset-cc-shim-v2:
runs-on: s390x
needs: build-asset
steps:
- name: Login to Kata Containers quay.io
uses: docker/login-action@v2
with:
registry: quay.io
username: ${{ secrets.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- name: Adjust permissions for the repo
run: |
sudo chown -R $USER:$USER $GITHUB_WORKSPACE
- uses: actions/checkout@v3
- name: Get root_hash_vanilla.txt
uses: actions/download-artifact@v3
with:
name: root_hash_vanilla.txt-s390x
path: tools/osbuilder/
- name: Build cc-shim-v2
run: |
make cc-shim-v2-tarball
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
sudo cp -r "${build_dir}" "kata-build"
env:
PUSH_TO_REGISTRY: yes
- name: store-artifact cc-shim-v2
uses: actions/upload-artifact@v3
with:
name: kata-artifacts-s390x
path: kata-build/kata-static-cc-shim-v2.tar.xz
retention-days: 1
if-no-files-found: error
create-kata-tarball:
runs-on: s390x
needs: [build-asset, build-asset-cc-shim-v2]
steps:
- name: Adjust permissions for the repo
run: |
sudo chown -R $USER:$USER $GITHUB_WORKSPACE
- uses: actions/checkout@v3
- name: get-artifacts
uses: actions/download-artifact@v3
with:
name: kata-artifacts-s390x
path: kata-artifacts
- name: merge-artifacts
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts
- name: store-artifacts
uses: actions/upload-artifact@v3
with:
name: kata-static-tarball-s390x
path: kata-static.tar.xz
retention-days: 1
if-no-files-found: error
kata-payload:
needs: create-kata-tarball
runs-on: s390x
steps:
- name: Login to Confidential Containers quay.io
uses: docker/login-action@v2
with:
registry: quay.io
username: ${{ secrets.COCO_QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.COCO_QUAY_DEPLOYER_PASSWORD }}
- uses: actions/checkout@v3
- name: get-kata-tarball
uses: actions/download-artifact@v3
with:
name: kata-static-tarball-s390x
- name: build-and-push-kata-payload
id: build-and-push-kata-payload
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
$(pwd)/kata-static.tar.xz "quay.io/confidential-containers/runtime-payload-ci" \
"kata-containers-${{ inputs.target-arch }}"


@@ -1,39 +0,0 @@
name: CI | Publish Kata Containers payload for Confidential Containers
on:
push:
branches:
- CCv0
jobs:
build-assets-amd64:
uses: ./.github/workflows/cc-payload-after-push-amd64.yaml
with:
target-arch: amd64
secrets: inherit
build-assets-s390x:
uses: ./.github/workflows/cc-payload-after-push-s390x.yaml
with:
target-arch: s390x
secrets: inherit
publish:
runs-on: ubuntu-latest
needs: [build-assets-amd64, build-assets-s390x]
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Login to Confidential Containers quay.io
uses: docker/login-action@v2
with:
registry: quay.io
username: ${{ secrets.COCO_QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.COCO_QUAY_DEPLOYER_PASSWORD }}
- name: Push multi-arch manifest
run: |
docker manifest create quay.io/confidential-containers/runtime-payload-ci:kata-containers-latest \
--amend quay.io/confidential-containers/runtime-payload-ci:kata-containers-amd64 \
--amend quay.io/confidential-containers/runtime-payload-ci:kata-containers-s390x
docker manifest push quay.io/confidential-containers/runtime-payload-ci:kata-containers-latest


@@ -1,142 +0,0 @@
name: Publish Kata Containers payload for Confidential Containers (amd64)
on:
workflow_call:
inputs:
target-arch:
required: true
type: string
jobs:
build-asset:
runs-on: ubuntu-latest
strategy:
matrix:
asset:
- cc-cloud-hypervisor
- cc-kernel
- cc-qemu
- cc-rootfs-image
- cc-virtiofsd
- cc-sev-kernel
- cc-sev-ovmf
- cc-sev-rootfs-initrd
- cc-tdx-kernel
- cc-tdx-rootfs-image
- cc-tdx-qemu
- cc-tdx-td-shim
- cc-tdx-tdvf
steps:
- uses: actions/checkout@v3
- name: Build ${{ matrix.asset }}
run: |
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
sudo cp -r "${build_dir}" "kata-build"
env:
KATA_ASSET: ${{ matrix.asset }}
TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
- name: store-artifact ${{ matrix.asset }}
uses: actions/upload-artifact@v3
with:
name: kata-artifacts
path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
retention-days: 1
if-no-files-found: error
- name: store-artifact root_hash_tdx.txt
uses: actions/upload-artifact@v3
with:
name: root_hash_tdx.txt
path: tools/osbuilder/root_hash_tdx.txt
retention-days: 1
if-no-files-found: ignore
- name: store-artifact root_hash_vanilla.txt
uses: actions/upload-artifact@v3
with:
name: root_hash_vanilla.txt
path: tools/osbuilder/root_hash_vanilla.txt
retention-days: 1
if-no-files-found: ignore
build-asset-cc-shim-v2:
runs-on: ubuntu-latest
needs: build-asset
steps:
- uses: actions/checkout@v3
- name: Get root_hash_tdx.txt
uses: actions/download-artifact@v3
with:
name: root_hash_tdx.txt
path: tools/osbuilder/
- name: Get root_hash_vanilla.txt
uses: actions/download-artifact@v3
with:
name: root_hash_vanilla.txt
path: tools/osbuilder/
- name: Build cc-shim-v2
run: |
make cc-shim-v2-tarball
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
sudo cp -r "${build_dir}" "kata-build"
- name: store-artifact cc-shim-v2
uses: actions/upload-artifact@v3
with:
name: kata-artifacts
path: kata-build/kata-static-cc-shim-v2.tar.xz
retention-days: 1
if-no-files-found: error
create-kata-tarball:
runs-on: ubuntu-latest
needs: [build-asset, build-asset-cc-shim-v2]
steps:
- uses: actions/checkout@v3
- name: get-artifacts
uses: actions/download-artifact@v3
with:
name: kata-artifacts
path: kata-artifacts
- name: merge-artifacts
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts
- name: store-artifacts
uses: actions/upload-artifact@v3
with:
name: kata-static-tarball
path: kata-static.tar.xz
retention-days: 1
if-no-files-found: error
kata-payload:
needs: create-kata-tarball
runs-on: ubuntu-latest
steps:
- name: Login to quay.io
uses: docker/login-action@v2
with:
registry: quay.io
username: ${{ secrets.COCO_QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.COCO_QUAY_DEPLOYER_PASSWORD }}
- uses: actions/checkout@v3
- name: get-kata-tarball
uses: actions/download-artifact@v3
with:
name: kata-static-tarball
- name: build-and-push-kata-payload
id: build-and-push-kata-payload
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
$(pwd)/kata-static.tar.xz \
"quay.io/confidential-containers/runtime-payload" \
"kata-containers-${{ inputs.target-arch }}"


@@ -1,134 +0,0 @@
name: Publish Kata Containers payload for Confidential Containers (s390x)
on:
workflow_call:
inputs:
target-arch:
required: true
type: string
jobs:
build-asset:
runs-on: s390x
strategy:
matrix:
asset:
- cc-kernel
- cc-qemu
- cc-rootfs-image
- cc-virtiofsd
steps:
- name: Adjust permissions for the repo
run: |
sudo chown -R $USER:$USER $GITHUB_WORKSPACE
- uses: actions/checkout@v3
- name: Build ${{ matrix.asset }}
run: |
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
sudo cp -r "${build_dir}" "kata-build"
env:
KATA_ASSET: ${{ matrix.asset }}
TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
- name: store-artifact ${{ matrix.asset }}
uses: actions/upload-artifact@v3
with:
name: kata-artifacts-s390x
path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
retention-days: 1
if-no-files-found: error
- name: store-artifact root_hash_vanilla.txt
uses: actions/upload-artifact@v3
with:
name: root_hash_vanilla.txt-s390x
path: tools/osbuilder/root_hash_vanilla.txt
retention-days: 1
if-no-files-found: ignore
build-asset-cc-shim-v2:
runs-on: s390x
needs: build-asset
steps:
- name: Adjust permissions for the repo
run: |
sudo chown -R $USER:$USER $GITHUB_WORKSPACE
- uses: actions/checkout@v3
- name: Get root_hash_vanilla.txt
uses: actions/download-artifact@v3
with:
name: root_hash_vanilla.txt-s390x
path: tools/osbuilder/
- name: Build cc-shim-v2
run: |
make cc-shim-v2-tarball
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
sudo cp -r "${build_dir}" "kata-build"
- name: store-artifact cc-shim-v2
uses: actions/upload-artifact@v3
with:
name: kata-artifacts-s390x
path: kata-build/kata-static-cc-shim-v2.tar.xz
retention-days: 1
if-no-files-found: error
create-kata-tarball:
runs-on: s390x
needs: [build-asset, build-asset-cc-shim-v2]
steps:
- name: Adjust permissions for the repo
run: |
sudo chown -R $USER:$USER $GITHUB_WORKSPACE
- uses: actions/checkout@v3
- name: get-artifacts
uses: actions/download-artifact@v3
with:
name: kata-artifacts-s390x
path: kata-artifacts
- name: merge-artifacts
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts
- name: store-artifacts
uses: actions/upload-artifact@v3
with:
name: kata-static-tarball-s390x
path: kata-static.tar.xz
retention-days: 1
if-no-files-found: error
kata-payload:
needs: create-kata-tarball
runs-on: s390x
steps:
- name: Login to quay.io
uses: docker/login-action@v2
with:
registry: quay.io
username: ${{ secrets.COCO_QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.COCO_QUAY_DEPLOYER_PASSWORD }}
- name: Adjust permissions for the repo
run: |
sudo chown -R $USER:$USER $GITHUB_WORKSPACE
- uses: actions/checkout@v3
- name: get-kata-tarball
uses: actions/download-artifact@v3
with:
name: kata-static-tarball-s390x
- name: build-and-push-kata-payload
id: build-and-push-kata-payload
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
$(pwd)/kata-static.tar.xz \
"quay.io/confidential-containers/runtime-payload" \
"kata-containers-${{ inputs.target-arch }}"


@@ -1,39 +0,0 @@
name: Publish Kata Containers payload for Confidential Containers
on:
push:
tags:
- 'CC\-[0-9]+.[0-9]+.[0-9]+'
jobs:
build-assets-amd64:
uses: ./.github/workflows/cc-payload-amd64.yaml
with:
target-arch: amd64
secrets: inherit
build-assets-s390x:
uses: ./.github/workflows/cc-payload-s390x.yaml
with:
target-arch: s390x
secrets: inherit
publish:
runs-on: ubuntu-latest
needs: [build-assets-amd64, build-assets-s390x]
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Login to Confidential Containers quay.io
uses: docker/login-action@v2
with:
registry: quay.io
username: ${{ secrets.COCO_QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.COCO_QUAY_DEPLOYER_PASSWORD }}
- name: Push multi-arch manifest
run: |
docker manifest create quay.io/confidential-containers/runtime-payload:kata-containers-latest \
--amend quay.io/confidential-containers/runtime-payload:kata-containers-amd64 \
--amend quay.io/confidential-containers/runtime-payload:kata-containers-s390x
docker manifest push quay.io/confidential-containers/runtime-payload:kata-containers-latest
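The pushed manifest list can be sanity-checked with `docker manifest inspect`; this is an optional verification step, not part of the workflow above:

```bash
# Optional check, not part of the workflow: confirm the manifest list
# references both per-architecture images that were just amended in.
docker manifest inspect \
    quay.io/confidential-containers/runtime-payload:kata-containers-latest |
    grep '"architecture"'
```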


@@ -5,12 +5,14 @@ on:
- opened
- reopened
- synchronize
- labeled
- unlabeled
env:
error_msg: |+
See the document below for help on formatting commits for the project.
https://github.com/kata-containers/community/blob/main/CONTRIBUTING.md#patch-format
https://github.com/kata-containers/community/blob/master/CONTRIBUTING.md#patch-format
jobs:
commit-message-check:
@@ -20,15 +22,9 @@ jobs:
- name: Get PR Commits
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
id: 'get-pr-commits'
uses: tim-actions/get-pr-commits@v1.2.0
uses: tim-actions/get-pr-commits@v1.0.0
with:
token: ${{ secrets.GITHUB_TOKEN }}
# Filter out revert commits
# The format of a revert commit is as follows:
#
# Revert "<original-subject-line>"
#
filter_out_pattern: '^Revert "'
- name: DCO Check
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
@@ -47,7 +43,7 @@ jobs:
uses: tim-actions/commit-message-checker-with-regex@v0.3.1
with:
commits: ${{ steps.get-pr-commits.outputs.commits }}
pattern: '^.{0,75}(\n.*)*$|^Merge pull request (?:kata-containers)?#[\d]+ from.*'
pattern: '^.{0,75}(\n.*)*$'
error: 'Subject too long (max 75)'
post_error: ${{ env.error_msg }}
@@ -63,8 +59,7 @@ jobs:
# the entire commit message.
#
# - Body lines *can* be longer than the maximum if they start
# with a non-alphabetic character or if there is no whitespace in
# the line.
# with a non-alphabetic character.
#
# This allows stack traces, log files snippets, emails, long URLs,
# etc to be specified. Some of these naturally "work" as they start
@@ -75,8 +70,8 @@ jobs:
#
# - A SoB comment can be any length (as it is unreasonable to penalise
# people with long names/email addresses :)
pattern: '^.+(\n([a-zA-Z].{0,150}|[^a-zA-Z\n].*|[^\s\n]*|Signed-off-by:.*|))+$'
error: 'Body line too long (max 150)'
pattern: '^.+(\n([a-zA-Z].{0,149}|[^a-zA-Z\n].*|Signed-off-by:.*|))+$'
error: 'Body line too long (max 72)'
post_error: ${{ env.error_msg }}
- name: Check Fixes
@@ -95,6 +90,6 @@ jobs:
uses: tim-actions/commit-message-checker-with-regex@v0.3.1
with:
commits: ${{ steps.get-pr-commits.outputs.commits }}
pattern: '^[\s\t]*[^:\s\t]+[\s\t]*:|^Merge pull request (?:kata-containers)?#[\d]+ from.*'
pattern: '^[\s\t]*[^:\s\t]+[\s\t]*:'
error: 'Failed to find subsystem in subject'
post_error: ${{ env.error_msg }}
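Since these checks are plain regular expressions, a commit subject can be vetted locally before pushing. A minimal bash sketch approximating the subject-length and subsystem rules above (the sample subject is invented; the real checker runs the PCRE patterns via a GitHub action):

```bash
# Approximate the subject checks locally: max 75 characters and a
# "subsystem:" prefix. Sample data only; CI uses the PCRE patterns above.
subject='runtime: close span before return from function in case of error'
if [ "${#subject}" -le 75 ] && [[ $subject =~ ^[^:[:space:]]+[[:space:]]*: ]]; then
    echo "subject OK"
else
    echo "subject violates the checker's rules"
fi
```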


@@ -1,21 +0,0 @@
on:
pull_request:
types:
- opened
- edited
- reopened
- synchronize
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.jpeg', '**.svg', '/docs/**' ]
name: Darwin tests
jobs:
test:
runs-on: macos-latest
steps:
- name: Install Go
uses: actions/setup-go@v2
with:
go-version: 1.19.3
- name: Checkout code
uses: actions/checkout@v2
- name: Build utils
run: ./ci/darwin-test.sh


@@ -1,129 +0,0 @@
on:
issue_comment:
types: [created, edited]
name: deploy-ccv0-demo
jobs:
check-comment-and-membership:
runs-on: ubuntu-latest
if: |
github.event.issue.pull_request
&& github.event_name == 'issue_comment'
&& github.event.action == 'created'
&& startsWith(github.event.comment.body, '/deploy-ccv0-demo')
steps:
- name: Check membership
uses: kata-containers/is-organization-member@1.0.1
id: is_organization_member
with:
organization: kata-containers
username: ${{ github.event.comment.user.login }}
token: ${{ secrets.GITHUB_TOKEN }}
- name: Fail if not member
run: |
result=${{ steps.is_organization_member.outputs.result }}
if [ $result == false ]; then
user=${{ github.event.comment.user.login }}
echo Either ${user} is not part of the kata-containers organization
echo or ${user} has its Organization Visibility set to Private at
echo https://github.com/orgs/kata-containers/people?query=${user}
echo
echo Ensure you change your Organization Visibility to Public and
echo trigger the test again.
exit 1
fi
build-asset:
runs-on: ubuntu-latest
needs: check-comment-and-membership
strategy:
matrix:
asset:
- cloud-hypervisor
- firecracker
- kernel
- qemu
- rootfs-image
- rootfs-initrd
- shim-v2
steps:
- uses: actions/checkout@v2
- name: Install docker
run: |
curl -fsSL https://test.docker.com -o test-docker.sh
sh test-docker.sh
- name: Prepare confidential container rootfs
if: ${{ matrix.asset == 'rootfs-initrd' }}
run: |
pushd include_rootfs/etc
curl -LO https://raw.githubusercontent.com/confidential-containers/documentation/main/demos/ssh-demo/aa-offline_fs_kbc-keys.json
mkdir kata-containers
envsubst < docs/how-to/data/confidential-agent-config.toml.in > kata-containers/agent.toml
popd
env:
AA_KBC_PARAMS: offline_fs_kbc::null
- name: Build ${{ matrix.asset }}
run: |
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
sudo cp -r "${build_dir}" "kata-build"
env:
AA_KBC: offline_fs_kbc
INCLUDE_ROOTFS: include_rootfs
KATA_ASSET: ${{ matrix.asset }}
TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
- name: store-artifact ${{ matrix.asset }}
uses: actions/upload-artifact@v2
with:
name: kata-artifacts
path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
if-no-files-found: error
create-kata-tarball:
runs-on: ubuntu-latest
needs: build-asset
steps:
- uses: actions/checkout@v2
- name: get-artifacts
uses: actions/download-artifact@v2
with:
name: kata-artifacts
path: kata-artifacts
- name: merge-artifacts
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts
- name: store-artifacts
uses: actions/upload-artifact@v2
with:
name: kata-static-tarball
path: kata-static.tar.xz
kata-deploy:
needs: create-kata-tarball
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: get-kata-tarball
uses: actions/download-artifact@v2
with:
name: kata-static-tarball
- name: build-and-push-kata-deploy-ci
id: build-and-push-kata-deploy-ci
run: |
tag=$(echo $GITHUB_REF | cut -d/ -f3-)
pushd $GITHUB_WORKSPACE
git checkout $tag
pkg_sha=$(git rev-parse HEAD)
popd
mv kata-static.tar.xz $GITHUB_WORKSPACE/tools/packaging/kata-deploy/kata-static.tar.xz
docker build --build-arg KATA_ARTIFACTS=kata-static.tar.xz -t quay.io/confidential-containers/runtime-payload:$pkg_sha $GITHUB_WORKSPACE/tools/packaging/kata-deploy
docker login -u ${{ secrets.QUAY_DEPLOYER_USERNAME }} -p ${{ secrets.QUAY_DEPLOYER_PASSWORD }} quay.io
docker push quay.io/confidential-containers/runtime-payload:$pkg_sha
mkdir -p packaging/kata-deploy
ln -s $GITHUB_WORKSPACE/tools/packaging/kata-deploy/action packaging/kata-deploy/action
echo "::set-output name=PKG_SHA::${pkg_sha}"


@@ -1,37 +0,0 @@
on:
schedule:
- cron: '0 23 * * 0'
name: Docs URL Alive Check
jobs:
test:
runs-on: ubuntu-20.04
# don't run this action on forks
if: github.repository_owner == 'kata-containers'
env:
target_branch: ${{ github.base_ref }}
steps:
- name: Install Go
uses: actions/setup-go@v2
with:
go-version: 1.19.3
env:
GOPATH: ${{ runner.workspace }}/kata-containers
- name: Set env
run: |
echo "GOPATH=${{ github.workspace }}" >> $GITHUB_ENV
echo "${{ github.workspace }}/bin" >> $GITHUB_PATH
- name: Checkout code
uses: actions/checkout@v2
with:
fetch-depth: 0
path: ./src/github.com/${{ github.repository }}
- name: Setup
run: |
cd ${GOPATH}/src/github.com/${{ github.repository }} && ./ci/setup.sh
env:
GOPATH: ${{ runner.workspace }}/kata-containers
# docs url alive check
- name: Docs URL Alive Check
run: |
cd ${GOPATH}/src/github.com/${{ github.repository }} && make docs-url-alive-check


@@ -7,9 +7,9 @@ on:
- edited
- reopened
- synchronize
paths:
- tools/**
- versions.yaml
- labeled
- unlabeled
push:
jobs:
build-asset:
@@ -18,14 +18,13 @@ jobs:
matrix:
asset:
- kernel
- kernel-experimental
- shim-v2
- qemu
- cloud-hypervisor
- firecracker
- rootfs-image
- rootfs-initrd
- virtiofsd
- nydus
steps:
- uses: actions/checkout@v2
- name: Install docker


@@ -1,10 +1,4 @@
on:
workflow_dispatch: # this is used to trigger the workflow on non-main branches
inputs:
pr:
description: 'PR number from the selected branch to test'
type: string
required: true
issue_comment:
types: [created, edited]
@@ -18,20 +12,19 @@ jobs:
&& github.event_name == 'issue_comment'
&& github.event.action == 'created'
&& startsWith(github.event.comment.body, '/test_kata_deploy')
|| github.event_name == 'workflow_dispatch'
steps:
- name: Check membership on comment or dispatch
- name: Check membership
uses: kata-containers/is-organization-member@1.0.1
id: is_organization_member
with:
organization: kata-containers
username: ${{ github.event.comment.user.login || github.event.sender.login }}
username: ${{ github.event.comment.user.login }}
token: ${{ secrets.GITHUB_TOKEN }}
- name: Fail if not member
run: |
result=${{ steps.is_organization_member.outputs.result }}
if [ $result == false ]; then
user=${{ github.event.comment.user.login || github.event.sender.login }}
user=${{ github.event.comment.user.login }}
echo Either ${user} is not part of the kata-containers organization
echo or ${user} has its Organization Visibility set to Private at
echo https://github.com/orgs/kata-containers/people?query=${user}
@@ -50,27 +43,23 @@ jobs:
- cloud-hypervisor
- firecracker
- kernel
- nydus
- qemu
- rootfs-image
- rootfs-initrd
- shim-v2
- virtiofsd
steps:
- name: get-PR-ref
id: get-PR-ref
run: |
if [ ${{ github.event_name }} == 'issue_comment' ]; then
ref=$(cat $GITHUB_EVENT_PATH | jq -r '.issue.pull_request.url' | sed 's#^.*\/pulls#refs\/pull#' | sed 's#$#\/merge#')
else # workflow_dispatch
ref="refs/pull/${{ github.event.inputs.pr }}/merge"
fi
echo "reference for PR: " ${ref} "event:" ${{ github.event_name }}
echo "##[set-output name=pr-ref;]${ref}"
# As the GitHub Actions event `issue_comment` does not provide the right ref
# (commit/branch) to be tested, let's use this third-party action to work
# around this limitation.
- name: resolve pr refs
id: refs
uses: kata-containers/resolve-pr-refs@v0.0.3
with:
token: ${{ secrets.GITHUB_TOKEN }}
- uses: actions/checkout@v2
with:
ref: ${{ steps.get-PR-ref.outputs.pr-ref }}
ref: ${{ steps.refs.outputs.base_ref }}
- name: Install docker
run: |
curl -fsSL https://test.docker.com -o test-docker.sh
@@ -97,19 +86,17 @@ jobs:
runs-on: ubuntu-latest
needs: build-asset
steps:
- name: get-PR-ref
id: get-PR-ref
run: |
if [ ${{ github.event_name }} == 'issue_comment' ]; then
ref=$(cat $GITHUB_EVENT_PATH | jq -r '.issue.pull_request.url' | sed 's#^.*\/pulls#refs\/pull#' | sed 's#$#\/merge#')
else # workflow_dispatch
ref="refs/pull/${{ github.event.inputs.pr }}/merge"
fi
echo "reference for PR: " ${ref} "event:" ${{ github.event_name }}
echo "##[set-output name=pr-ref;]${ref}"
# As the GitHub Actions event `issue_comment` does not provide the right ref
# (commit/branch) to be tested, let's use this third-party action to work
# around this limitation.
- name: resolve pr refs
id: refs
uses: kata-containers/resolve-pr-refs@v0.0.3
with:
token: ${{ secrets.GITHUB_TOKEN }}
- uses: actions/checkout@v2
with:
ref: ${{ steps.get-PR-ref.outputs.pr-ref }}
ref: ${{ steps.refs.outputs.base_ref }}
- name: get-artifacts
uses: actions/download-artifact@v2
with:
@@ -128,19 +115,17 @@ jobs:
needs: create-kata-tarball
runs-on: ubuntu-latest
steps:
- name: get-PR-ref
id: get-PR-ref
run: |
if [ ${{ github.event_name }} == 'issue_comment' ]; then
ref=$(cat $GITHUB_EVENT_PATH | jq -r '.issue.pull_request.url' | sed 's#^.*\/pulls#refs\/pull#' | sed 's#$#\/merge#')
else # workflow_dispatch
ref="refs/pull/${{ github.event.inputs.pr }}/merge"
fi
echo "reference for PR: " ${ref} "event:" ${{ github.event_name }}
echo "##[set-output name=pr-ref;]${ref}"
# As the GitHub Actions event `issue_comment` does not provide the right ref
# (commit/branch) to be tested, let's use this third-party action to work
# around this limitation.
- name: resolve pr refs
id: refs
uses: kata-containers/resolve-pr-refs@v0.0.3
with:
token: ${{ secrets.GITHUB_TOKEN }}
- uses: actions/checkout@v2
with:
ref: ${{ steps.get-PR-ref.outputs.pr-ref }}
ref: ${{ steps.refs.outputs.base_ref }}
- name: get-kata-tarball
uses: actions/download-artifact@v2
with:
@@ -148,14 +133,18 @@ jobs:
- name: build-and-push-kata-deploy-ci
id: build-and-push-kata-deploy-ci
run: |
PR_SHA=$(git log --format=format:%H -n1)
tag=$(echo $GITHUB_REF | cut -d/ -f3-)
pushd $GITHUB_WORKSPACE
git checkout $tag
pkg_sha=$(git rev-parse HEAD)
popd
mv kata-static.tar.xz $GITHUB_WORKSPACE/tools/packaging/kata-deploy/kata-static.tar.xz
docker build --build-arg KATA_ARTIFACTS=kata-static.tar.xz -t quay.io/kata-containers/kata-deploy-ci:$PR_SHA $GITHUB_WORKSPACE/tools/packaging/kata-deploy
docker build --build-arg KATA_ARTIFACTS=kata-static.tar.xz -t quay.io/kata-containers/kata-deploy-ci:$pkg_sha $GITHUB_WORKSPACE/tools/packaging/kata-deploy
docker login -u ${{ secrets.QUAY_DEPLOYER_USERNAME }} -p ${{ secrets.QUAY_DEPLOYER_PASSWORD }} quay.io
docker push quay.io/kata-containers/kata-deploy-ci:$PR_SHA
docker push quay.io/kata-containers/kata-deploy-ci:$pkg_sha
mkdir -p packaging/kata-deploy
ln -s $GITHUB_WORKSPACE/tools/packaging/kata-deploy/action packaging/kata-deploy/action
echo "::set-output name=PKG_SHA::${PR_SHA}"
echo "::set-output name=PKG_SHA::${pkg_sha}"
- name: test-kata-deploy-ci-in-aks
uses: ./packaging/kata-deploy/action
with:
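For reference, the `get-PR-ref` steps in this diff derive the merge ref from the pull request API URL with two `sed` substitutions; the transformation can be reproduced standalone (the PR number below is a sample only):

```bash
# Reproduce the get-PR-ref transformation on a sample issue_comment
# payload URL; the PR number here is illustrative.
url="https://api.github.com/repos/kata-containers/kata-containers/pulls/3482"
ref=$(echo "$url" | sed 's#^.*\/pulls#refs\/pull#' | sed 's#$#\/merge#')
echo "$ref"   # -> refs/pull/3482/merge
```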


@@ -10,6 +10,8 @@ on:
types:
- opened
- reopened
- labeled
- unlabeled
jobs:
move-linked-issues-to-in-progress:


@@ -1,8 +1,8 @@
name: Publish Kata release artifacts
name: Publish Kata 2.x release artifacts
on:
push:
tags:
- '[0-9]+.[0-9]+.[0-9]+*'
- '2.*'
jobs:
build-asset:
@@ -13,12 +13,10 @@ jobs:
- cloud-hypervisor
- firecracker
- kernel
- nydus
- qemu
- rootfs-image
- rootfs-initrd
- shim-v2
- virtiofsd
steps:
- uses: actions/checkout@v2
- name: Install docker
@@ -28,7 +26,6 @@ jobs:
- name: Build ${{ matrix.asset }}
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-copy-yq-installer.sh
./tools/packaging/kata-deploy/local-build/kata-deploy-binaries-in-docker.sh --build="${KATA_ASSET}"
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
@@ -143,10 +140,13 @@ jobs:
- uses: actions/checkout@v2
- name: generate-and-upload-tarball
run: |
pushd $GITHUB_WORKSPACE/src/agent
cargo vendor >> .cargo/config
popd
tag=$(echo $GITHUB_REF | cut -d/ -f3-)
tarball="kata-containers-$tag-vendor.tar.gz"
pushd $GITHUB_WORKSPACE
bash -c "tools/packaging/release/generate_vendor.sh ${tarball}"
tar -cvzf "${tarball}" src/agent/.cargo/config src/agent/vendor
GITHUB_TOKEN=${{ secrets.GIT_UPLOAD_TOKEN }} hub release edit -m "" -a "${tarball}" "${tag}"
popd


@@ -1,12 +1,8 @@
name: Release Kata in snapcraft store
name: Release Kata 2.x in snapcraft store
on:
push:
tags:
- '[0-9]+.[0-9]+.[0-9]+*'
env:
SNAPCRAFT_STORE_CREDENTIALS: ${{ secrets.snapcraft_token }}
- '2.*'
jobs:
release-snap:
runs-on: ubuntu-20.04
@@ -17,21 +13,12 @@ jobs:
fetch-depth: 0
- name: Install Snapcraft
run: |
# Required to avoid snapcraft install failure
sudo chown root:root /
# "--classic" is needed for the GitHub action runner
# environment.
sudo snap install snapcraft --classic
# Allow other parts to access snap binaries
echo /snap/bin >> "$GITHUB_PATH"
uses: samuelmeuli/action-snapcraft@v1
with:
snapcraft_token: ${{ secrets.snapcraft_token }}
- name: Build snap
run: |
# Removing man-db, workflow kept failing, fixes: #4480
sudo apt -y remove --purge man-db
sudo apt-get install -y git git-extras
kata_url="https://github.com/kata-containers/kata-containers"
latest_version=$(git ls-remote --tags ${kata_url} | egrep -o "refs.*" | egrep -v "\-alpha|\-rc|{}" | egrep -o "[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+" | sort -V -r | head -1)
@@ -39,7 +26,7 @@ jobs:
# Check semantic versioning format (x.y.z) and if the current tag is the latest tag
if echo "${current_version}" | grep -q "^[[:digit:]]\+\.[[:digit:]]\+\.[[:digit:]]\+$" && echo -e "$latest_version\n$current_version" | sort -C -V; then
# Current version is the latest version, build it
snapcraft snap --debug --destructive-mode
snapcraft -d snap --destructive-mode
fi
- name: Upload snap
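The latest-tag guard in this hunk relies on `sort -C -V`, which exits zero only when its input is already in version order; a quick standalone illustration with made-up versions:

```bash
# sort -C -V exits 0 only if its input is already sorted in version
# order, so the body runs only when current >= latest (sample values).
latest_version="2.3.1"
current_version="2.3.2"
if echo -e "$latest_version\n$current_version" | sort -C -V; then
    echo "building: ${current_version} is the latest tag"
fi
```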


@@ -6,7 +6,8 @@ on:
- synchronize
- reopened
- edited
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.jpeg', '**.svg', '/docs/**' ]
- labeled
- unlabeled
jobs:
test:
@@ -20,18 +21,9 @@ jobs:
- name: Install Snapcraft
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
# Required to avoid snapcraft install failure
sudo chown root:root /
# "--classic" is needed for the GitHub action runner
# environment.
sudo snap install snapcraft --classic
# Allow other parts to access snap binaries
echo /snap/bin >> "$GITHUB_PATH"
uses: samuelmeuli/action-snapcraft@v1
- name: Build snap
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
snapcraft snap --debug --destructive-mode
snapcraft -d snap --destructive-mode


@@ -1,33 +0,0 @@
on:
pull_request:
types:
- opened
- edited
- reopened
- synchronize
paths-ignore: [ '**.md', '**.png', '**.jpg', '**.jpeg', '**.svg', '/docs/**' ]
name: Static checks dragonball
jobs:
test-dragonball:
runs-on: self-hosted
env:
RUST_BACKTRACE: "1"
steps:
- uses: actions/checkout@v3
- name: Set env
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
echo "GOPATH=${{ github.workspace }}" >> $GITHUB_ENV
- name: Install Rust
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
./ci/install_rust.sh
PATH=$PATH:"$HOME/.cargo/bin"
- name: Run Unit Test
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
cd src/dragonball
cargo version
rustc --version
sudo -E env PATH=$PATH LIBC=gnu SUPPORT_VIRTUALIZATION=true make test


@@ -5,19 +5,17 @@ on:
- edited
- reopened
- synchronize
- labeled
- unlabeled
name: Static checks
jobs:
static-checks:
runs-on: ubuntu-20.04
test:
strategy:
matrix:
cmd:
- "make vendor"
- "make static-checks"
- "make check"
- "make test"
- "sudo -E PATH=\"$PATH\" make test"
go-version: [1.16.x, 1.17.x]
os: [ubuntu-20.04]
runs-on: ${{ matrix.os }}
env:
TRAVIS: "true"
TRAVIS_BRANCH: ${{ github.base_ref }}
@@ -26,16 +24,11 @@ jobs:
RUST_BACKTRACE: "1"
target_branch: ${{ github.base_ref }}
steps:
- name: Checkout code
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
uses: actions/checkout@v3
with:
fetch-depth: 0
path: ./src/github.com/${{ github.repository }}
- name: Install Go
uses: actions/setup-go@v3
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
uses: actions/setup-go@v2
with:
go-version: 1.19.3
go-version: ${{ matrix.go-version }}
env:
GOPATH: ${{ runner.workspace }}/kata-containers
- name: Setup GOPATH
@@ -50,6 +43,12 @@ jobs:
run: |
echo "GOPATH=${{ github.workspace }}" >> $GITHUB_ENV
echo "${{ github.workspace }}/bin" >> $GITHUB_PATH
- name: Checkout code
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
uses: actions/checkout@v2
with:
fetch-depth: 0
path: ./src/github.com/${{ github.repository }}
- name: Setup travis references
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
@@ -69,7 +68,6 @@ jobs:
rustup target add x86_64-unknown-linux-musl
rustup component add rustfmt clippy
- name: Setup seccomp
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
libseccomp_install_dir=$(mktemp -d -t libseccomp.XXXXXXXXXX)
gperf_install_dir=$(mktemp -d -t gperf.XXXXXXXXXX)
@@ -77,7 +75,24 @@ jobs:
echo "Set environment variables for the libseccomp crate to link the libseccomp library statically"
echo "LIBSECCOMP_LINK_TYPE=static" >> $GITHUB_ENV
echo "LIBSECCOMP_LIB_PATH=${libseccomp_install_dir}/lib" >> $GITHUB_ENV
- name: Run check
# Check whether the vendored code is up-to-date & working as the first thing
- name: Check vendored code
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
cd ${GOPATH}/src/github.com/${{ github.repository }} && ${{ matrix.cmd }}
cd ${GOPATH}/src/github.com/${{ github.repository }} && make vendor
- name: Static Checks
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
cd ${GOPATH}/src/github.com/${{ github.repository }} && make static-checks
- name: Run Compiler Checks
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
cd ${GOPATH}/src/github.com/${{ github.repository }} && make check
- name: Run Unit Tests
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
cd ${GOPATH}/src/github.com/${{ github.repository }} && make test
- name: Run Unit Tests As Root User
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
cd ${GOPATH}/src/github.com/${{ github.repository }} && sudo -E PATH="$PATH" make test

.gitignore vendored

@@ -4,12 +4,9 @@
**/*.rej
**/target
**/.vscode
**/.idea
**/.fleet
pkg/logging/Cargo.lock
src/agent/src/version.rs
src/agent/kata-agent.service
src/agent/protocols/src/*.rs
!src/agent/protocols/src/lib.rs
build
src/tools/log-parser/kata-log-parser


@@ -2,4 +2,4 @@
## This repo is part of [Kata Containers](https://katacontainers.io)
For details on how to contribute to the Kata Containers project, please see the main [contributing document](https://github.com/kata-containers/community/blob/main/CONTRIBUTING.md).
For details on how to contribute to the Kata Containers project, please see the main [contributing document](https://github.com/kata-containers/community/blob/master/CONTRIBUTING.md).


@@ -1,3 +1,94 @@
# Glossary
See the [project glossary hosted in the wiki](https://github.com/kata-containers/kata-containers/wiki/Glossary).
[A](#a), [B](#b), [C](#c), [D](#d), [E](#e), [F](#f), [G](#g), [H](#h), [I](#i), [J](#j), [K](#k), [L](#l), [M](#m), [N](#n), [O](#o), [P](#p), [Q](#q), [R](#r), [S](#s), [T](#t), [U](#u), [V](#v), [W](#w), [X](#x), [Y](#y), [Z](#z)
## A
### Auto Scaling
A method used in cloud computing whereby the amount of computational resources in a server farm, typically measured in terms of the number of active servers, varies automatically based on the load on the farm.
## B
## C
### Container Security Solutions
The process of implementing security tools and policies that will give you the assurance that everything in your container is running as intended, and only as intended.
### Container Software
A standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.
### Container Runtime Interface
A plugin interface which enables Kubelet to use a wide variety of container runtimes, without the need to recompile.
### Container Virtualization
A container is a virtual runtime environment that runs on top of a single operating system (OS) kernel and emulates an operating system rather than the underlying hardware.
## D
## E
## F
## G
## H
## I
### Infrastructure Architecture
A structured and modern approach for supporting an organization and facilitating innovation within an enterprise.
## J
## K
### Kata Containers
Kata Containers is an open source project delivering increased container security and workload isolation through an implementation of lightweight virtual machines.
## L
## M
## N
## O
## P
### Pod Containers
A group of one or more containers, with shared storage/network, and a specification for how to run the containers.
### Private Cloud
A computing model that offers a proprietary environment dedicated to a single business entity.
### Public Cloud
Computing services offered by third-party providers over the public Internet, making them available to anyone who wants to use or purchase them.
## Q
## R
## S
### Serverless Containers
An architecture in which code is executed on-demand. Serverless workloads are typically in the cloud, but on-premises serverless platforms exist, too.
## T
## U
## V
### Virtual Machine Monitor
Computer software, firmware or hardware that creates and runs virtual machines.
### Virtual Machine Software
A software program or operating system that not only exhibits the behavior of a separate computer, but is also capable of performing tasks such as running applications and programs like a separate computer.
## W
## X
## Y
## Z


@@ -6,28 +6,22 @@
# List of available components
COMPONENTS =
COMPONENTS += libs
COMPONENTS += agent
COMPONENTS += dragonball
COMPONENTS += runtime
COMPONENTS += runtime-rs
COMPONENTS += trace-forwarder
# List of available tools
TOOLS =
TOOLS += agent-ctl
TOOLS += kata-ctl
TOOLS += log-parser
TOOLS += runk
TOOLS += trace-forwarder
STANDARD_TARGETS = build check clean install static-checks-build test vendor
default: all
STANDARD_TARGETS = build check clean install test vendor
include utils.mk
include ./tools/packaging/kata-deploy/local-build/Makefile
all: build
# Create the rules
$(eval $(call create_all_rules,$(COMPONENTS),$(TOOLS),$(STANDARD_TARGETS)))
@@ -37,18 +31,7 @@ generate-protocols:
make -C src/agent generate-protocols
# Some static checks rely on generated source files of components.
static-checks: static-checks-build
static-checks: build
bash ci/static-checks.sh
docs-url-alive-check:
bash ci/docs-url-alive-check.sh
.PHONY: \
all \
binary-tarball \
default \
install-binary-tarball \
static-checks \
docs-url-alive-check
.PHONY: all default static-checks binary-tarball install-binary-tarball


@@ -17,74 +17,16 @@ standard implementation of lightweight Virtual Machines (VMs) that feel and
perform like containers, but provide the workload isolation and security
advantages of VMs.
## License
The code is licensed under the Apache 2.0 license.
See [the license file](LICENSE) for further details.
## Platform support
Kata Containers currently runs on 64-bit systems supporting the following
technologies:
| Architecture | Virtualization technology |
|-|-|
| `x86_64`, `amd64` | [Intel](https://www.intel.com) VT-x, AMD SVM |
| `aarch64` ("`arm64`")| [ARM](https://www.arm.com) Hyp |
| `ppc64le` | [IBM](https://www.ibm.com) Power |
| `s390x` | [IBM](https://www.ibm.com) Z & LinuxONE SIE |
### Hardware requirements
The [Kata Containers runtime](src/runtime) provides a command to
determine if your host system is capable of running and creating a
Kata Container:
```bash
$ kata-runtime check
```
> **Notes:**
>
> - This command runs a number of checks including connecting to the
> network to determine if a newer release of Kata Containers is
> available on GitHub. If you do not wish this check to run, add
> the `--no-network-checks` option.
>
> - By default, only a brief success / failure message is printed.
> If more details are needed, the `--verbose` flag can be used to display the
> list of all the checks performed.
>
> - If the command is run as the `root` user additional checks are
> run (including checking if another incompatible hypervisor is running).
> When running as `root`, network checks are automatically disabled.
## Getting started
See the [installation documentation](docs/install).
## Documentation
See the [official documentation](docs) including:
- [Installation guides](docs/install)
- [Developer guide](docs/Developer-Guide.md)
- [Design documents](docs/design)
- [Architecture overview](docs/design/architecture)
- [Architecture 3.0 overview](docs/design/architecture_3.0/)
## Configuration
Kata Containers uses a single
[configuration file](src/runtime/README.md#configuration)
which contains a number of sections for various parts of the Kata
Containers system including the [runtime](src/runtime), the
[agent](src/agent) and the [hypervisor](#hypervisors).
## Hypervisors
See the [hypervisors document](docs/hypervisors.md) and the
[Hypervisor specific configuration details](src/runtime/README.md#hypervisor-specific-configuration).
See the [official documentation](docs)
(including [installation guides](docs/install),
[the developer guide](docs/Developer-Guide.md),
[design documents](docs/design) and more).
## Community
@@ -106,8 +48,6 @@ Please raise an issue
## Developers
See the [developer guide](docs/Developer-Guide.md).
### Components
### Main components
@@ -117,9 +57,7 @@ The table below lists the core parts of the project:
| Component | Type | Description |
|-|-|-|
| [runtime](src/runtime) | core | Main component run by a container manager and providing a containerd shimv2 runtime implementation. |
| [runtime-rs](src/runtime-rs) | core | The Rust version runtime. |
| [agent](src/agent) | core | Management process running inside the virtual machine / POD that sets up the container environment. |
| [`dragonball`](src/dragonball) | core | An optional built-in VMM that brings an out-of-the-box Kata Containers experience with optimizations for container workloads |
| [documentation](docs) | documentation | Documentation common to all components (such as design and install documentation). |
| [tests](https://github.com/kata-containers/tests) | tests | Excludes unit tests which live with the main code. |
@@ -132,10 +70,8 @@ The table below lists the remaining parts of the project:
| [packaging](tools/packaging) | infrastructure | Scripts and metadata for producing packaged binaries<br/>(components, hypervisors, kernel and rootfs). |
| [kernel](https://www.kernel.org) | kernel | Linux kernel used by the hypervisor to boot the guest image. Patches are stored [here](tools/packaging/kernel). |
| [osbuilder](tools/osbuilder) | infrastructure | Tool to create "mini O/S" rootfs and initrd images and kernel for the hypervisor. |
| [`agent-ctl`](src/tools/agent-ctl) | utility | Tool that provides low-level access for testing the agent. |
| [`kata-ctl`](src/tools/kata-ctl) | utility | Tool that provides advanced commands and debug facilities. |
| [`trace-forwarder`](src/tools/trace-forwarder) | utility | Agent tracing helper. |
| [`runk`](src/tools/runk) | utility | Standard OCI container runtime based on the agent. |
| [`agent-ctl`](tools/agent-ctl) | utility | Tool that provides low-level access for testing the agent. |
| [`trace-forwarder`](src/trace-forwarder) | utility | Agent tracing helper. |
| [`ci`](https://github.com/kata-containers/ci) | CI | Continuous Integration configuration files and scripts. |
| [`katacontainers.io`](https://github.com/kata-containers/www.katacontainers.io) | Source for the [`katacontainers.io`](https://www.katacontainers.io) site. |
@@ -143,9 +79,13 @@ The table below lists the remaining parts of the project:
Kata Containers is now
[available natively for most distributions](docs/install/README.md#packaged-installation-methods).
However, packaging scripts and metadata are still used to generate [snap](snap/local) and GitHub releases. See
However, packaging scripts and metadata are still used to generate snap and GitHub releases. See
the [components](#components) section for further details.
## Glossary of Terms
See the [glossary of terms](https://github.com/kata-containers/kata-containers/wiki/Glossary) related to Kata Containers.
See the [glossary of terms](Glossary.md) related to Kata Containers.
---
[kernel]: https://www.kernel.org
[github-katacontainers.io]: https://github.com/kata-containers/www.katacontainers.io


@@ -1 +1 @@
3.1.0-alpha1
2.3.2


@@ -1,42 +0,0 @@
#!/usr/bin/env bash
#
# Copyright (c) 2022 Apple Inc.
#
# SPDX-License-Identifier: Apache-2.0
set -e
cidir=$(dirname "$0")
runtimedir=$cidir/../src/runtime
build_working_packages() {
# working packages:
device_api=$runtimedir/pkg/device/api
device_config=$runtimedir/pkg/device/config
device_drivers=$runtimedir/pkg/device/drivers
device_manager=$runtimedir/pkg/device/manager
rc_pkg_dir=$runtimedir/pkg/resourcecontrol/
utils_pkg_dir=$runtimedir/virtcontainers/utils
# broken packages :( :
#katautils=$runtimedir/pkg/katautils
#oci=$runtimedir/pkg/oci
#vc=$runtimedir/virtcontainers
pkgs=(
"$device_api"
"$device_config"
"$device_drivers"
"$device_manager"
"$utils_pkg_dir"
"$rc_pkg_dir")
for pkg in "${pkgs[@]}"; do
echo building "$pkg"
pushd "$pkg" &>/dev/null
go build
go test
popd &>/dev/null
done
}
build_working_packages


@@ -1,6 +1,5 @@
#!/bin/bash
#
# Copyright (c) 2021 Easystack Inc.
# Copyright (c) 2020 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
@@ -9,4 +8,4 @@ set -e
cidir=$(dirname "$0")
source "${cidir}/lib.sh"
run_docs_url_alive_check
run_go_test


@@ -1,4 +1,4 @@
#!/usr/bin/env bash
#!/bin/bash
#
# Copyright (c) 2019 Intel Corporation
#


@@ -1,4 +1,4 @@
#!/usr/bin/env bash
#!/bin/bash
#
# Copyright 2021 Sony Group Corporation
#
@@ -19,31 +19,29 @@ source "${tests_repo_dir}/.ci/lib.sh"
# fail. So let's ensure they are unset here.
unset PREFIX DESTDIR
arch=${ARCH:-$(uname -m)}
arch=$(uname -m)
workdir="$(mktemp -d --tmpdir build-libseccomp.XXXXX)"
# Variables for libseccomp
libseccomp_version="${LIBSECCOMP_VERSION:-""}"
if [ -z "${libseccomp_version}" ]; then
libseccomp_version=$(get_version "externals.libseccomp.version")
fi
libseccomp_url="${LIBSECCOMP_URL:-""}"
if [ -z "${libseccomp_url}" ]; then
libseccomp_url=$(get_version "externals.libseccomp.url")
fi
# Currently, specify the libseccomp version directly without using `versions.yaml`
# because the current Snap workflow is incomplete.
# After solving the issue, replace this code by using the `versions.yaml`.
# libseccomp_version=$(get_version "externals.libseccomp.version")
# libseccomp_url=$(get_version "externals.libseccomp.url")
libseccomp_version="2.5.1"
libseccomp_url="https://github.com/seccomp/libseccomp"
libseccomp_tarball="libseccomp-${libseccomp_version}.tar.gz"
libseccomp_tarball_url="${libseccomp_url}/releases/download/v${libseccomp_version}/${libseccomp_tarball}"
cflags="-O2"
# Variables for gperf
gperf_version="${GPERF_VERSION:-""}"
if [ -z "${gperf_version}" ]; then
gperf_version=$(get_version "externals.gperf.version")
fi
gperf_url="${GPERF_URL:-""}"
if [ -z "${gperf_url}" ]; then
gperf_url=$(get_version "externals.gperf.url")
fi
# Currently, specify the gperf version directly without using `versions.yaml`
# because the current Snap workflow is incomplete.
# After solving the issue, replace this code by using the `versions.yaml`.
# gperf_version=$(get_version "externals.gperf.version")
# gperf_url=$(get_version "externals.gperf.url")
gperf_version="3.1"
gperf_url="https://ftp.gnu.org/gnu/gperf"
gperf_tarball="gperf-${gperf_version}.tar.gz"
gperf_tarball_url="${gperf_url}/${gperf_tarball}"
@@ -72,9 +70,7 @@ build_and_install_gperf() {
curl -sLO "${gperf_tarball_url}"
tar -xf "${gperf_tarball}"
pushd "gperf-${gperf_version}"
# gperf is a build time dependency of libseccomp and not to be used in the target.
# Unset $CC since that might point to a cross compiler.
CC= ./configure --prefix="${gperf_install_dir}"
./configure --prefix="${gperf_install_dir}"
make
make install
export PATH=$PATH:"${gperf_install_dir}"/bin
@@ -88,7 +84,7 @@ build_and_install_libseccomp() {
curl -sLO "${libseccomp_tarball_url}"
tar -xf "${libseccomp_tarball}"
pushd "libseccomp-${libseccomp_version}"
./configure --prefix="${libseccomp_install_dir}" CFLAGS="${cflags}" --enable-static --host="${arch}"
./configure --prefix="${libseccomp_install_dir}" CFLAGS="${cflags}" --enable-static
make
make install
popd
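The `--host="${arch}"` flag added in this hunk lets the static libseccomp build honor an `$ARCH` override rather than always targeting the build machine. A standalone sketch, assuming the current directory is an unpacked libseccomp source tree and using a throwaway install prefix:

```bash
# Sketch of the configure invocation above with an explicit arch override;
# assumes an unpacked libseccomp source tree as the working directory.
arch="${ARCH:-$(uname -m)}"
install_dir=$(mktemp -d -t libseccomp.XXXXXXXXXX)
./configure --prefix="${install_dir}" CFLAGS="-O2" --enable-static --host="${arch}"
make
make install
```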

ci/install_musl.sh Executable file

@@ -0,0 +1,24 @@
#!/bin/bash
# Copyright (c) 2020 Ant Group
#
# SPDX-License-Identifier: Apache-2.0
#
set -e
install_aarch64_musl() {
local arch=$(uname -m)
if [ "${arch}" == "aarch64" ]; then
local musl_tar="${arch}-linux-musl-native.tgz"
local musl_dir="${arch}-linux-musl-native"
pushd /tmp
if curl -sLO --fail https://musl.cc/${musl_tar}; then
tar -zxf ${musl_tar}
mkdir -p /usr/local/musl/
cp -r ${musl_dir}/* /usr/local/musl/
fi
popd
fi
}
install_aarch64_musl


@@ -1,4 +1,4 @@
#!/usr/bin/env bash
#!/bin/bash
# Copyright (c) 2019 Ant Financial
#
# SPDX-License-Identifier: Apache-2.0


@@ -1,4 +1,4 @@
#!/usr/bin/env bash
#!/bin/bash
#
# Copyright (c) 2018 Intel Corporation
#


@@ -43,16 +43,6 @@ function install_yq() {
"aarch64")
goarch=arm64
;;
"arm64")
# If we're on an apple silicon machine, just assign amd64.
# The version of yq we use doesn't have a darwin arm build,
# but Rosetta can come to the rescue here.
if [ $goos == "Darwin" ]; then
goarch=amd64
else
goarch=arm64
fi
;;
"ppc64le")
goarch=ppc64le
;;
@@ -74,7 +64,7 @@ function install_yq() {
fi
## NOTE: ${var,,} => gives lowercase value of var
local yq_url="https://${yq_pkg}/releases/download/${yq_version}/yq_${goos}_${goarch}"
local yq_url="https://${yq_pkg}/releases/download/${yq_version}/yq_${goos,,}_${goarch}"
curl -o "${yq_path}" -LSsf "${yq_url}"
[ $? -ne 0 ] && die "Download ${yq_url} failed"
chmod +x "${yq_path}"
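The `${goos,,}` form in the corrected `yq_url` is bash's lowercase expansion (bash 4+), matching the NOTE above; for example:

```bash
# ${var,,} lowercases the expansion (bash 4+), as used in yq_url above.
goos="Darwin"
echo "${goos,,}"   # -> darwin
```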

View File

@@ -18,13 +18,6 @@ clone_tests_repo()
{
if [ -d "$tests_repo_dir" ]; then
[ -n "${CI:-}" ] && return
# git config --global --add safe.directory will always append
# the target to .gitconfig without checking the existence of
# the target, so it's better to check it before adding the target repo.
local sd="$(git config --global --get safe.directory ${tests_repo_dir} || true)"
if [ -z "${sd}" ]; then
git config --global --add safe.directory ${tests_repo_dir}
fi
pushd "${tests_repo_dir}"
git checkout "${branch}"
git pull
@@ -46,21 +39,8 @@ run_static_checks()
bash "$tests_repo_dir/.ci/static-checks.sh" "$@"
}
run_docs_url_alive_check()
run_go_test()
{
clone_tests_repo
# Make sure we have the targeting branch
git remote set-branches --add origin "${branch}"
git fetch -a
bash "$tests_repo_dir/.ci/static-checks.sh" --docs --all "github.com/kata-containers/kata-containers"
}
run_get_pr_changed_file_details()
{
clone_tests_repo
# Make sure we have the targeting branch
git remote set-branches --add origin "${branch}"
git fetch -a
source "$tests_repo_dir/.ci/lib.sh"
get_pr_changed_file_details
bash "$tests_repo_dir/.ci/go-test.sh"
}


@@ -4,7 +4,7 @@
#
# This is the build root image for Kata Containers on OpenShift CI.
#
FROM quay.io/centos/centos:stream8
FROM registry.centos.org/centos:8
RUN yum -y update && \
yum -y install \


@@ -1,4 +1,4 @@
#!/usr/bin/env bash
#!/bin/bash
#
# Copyright (c) 2019 Ant Financial
#


@@ -1,4 +1,4 @@
#!/usr/bin/env bash
#!/bin/bash
#
# Copyright (c) 2018 Intel Corporation
#


@@ -1,4 +1,4 @@
#!/usr/bin/env bash
#!/bin/bash
#
# Copyright (c) 2017-2018 Intel Corporation
#


@@ -1,33 +0,0 @@
targets = [
{ triple = "x86_64-apple-darwin" },
{ triple = "x86_64-unknown-linux-gnu" },
{ triple = "x86_64-unknown-linux-musl" },
]
[advisories]
vulnerability = "deny"
unsound = "deny"
unmaintained = "deny"
ignore = ["RUSTSEC-2020-0071"]
[bans]
multiple-versions = "allow"
deny = [
{ name = "cmake" },
{ name = "openssl-sys" },
]
[licenses]
unlicensed = "deny"
allow-osi-fsf-free = "neither"
copyleft = "allow"
# We want really high confidence when inferring licenses from text
confidence-threshold = 0.93
allow = ["0BSD", "Apache-2.0", "BSD-2-Clause", "BSD-3-Clause", "CC0-1.0", "ISC", "MIT", "MPL-2.0"]
private = { ignore = true}
exceptions = []
[sources]
unknown-registry = "allow"
unknown-git = "allow"


@@ -33,41 +33,51 @@ You need to install the following to build Kata Containers components:
- `make`.
- `gcc` (required for building the shim and runtime).
# Build and install Kata Containers
## Build and install the Kata Containers runtime
# Build and install the Kata Containers runtime
```bash
$ git clone https://github.com/kata-containers/kata-containers.git
$ pushd kata-containers/src/runtime
$ make && sudo -E "PATH=$PATH" make install
$ sudo mkdir -p /etc/kata-containers/
$ sudo install -o root -g root -m 0640 /usr/share/defaults/kata-containers/configuration.toml /etc/kata-containers
$ popd
```
$ go get -d -u github.com/kata-containers/kata-containers
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/src/runtime
$ make && sudo -E PATH=$PATH make install
```
The build will create the following:
- runtime binary: `/usr/local/bin/kata-runtime` and `/usr/local/bin/containerd-shim-kata-v2`
- configuration file: `/usr/share/defaults/kata-containers/configuration.toml` and `/etc/kata-containers/configuration.toml`
- configuration file: `/usr/share/defaults/kata-containers/configuration.toml`
# Check hardware requirements
You can check if your system is capable of creating a Kata Container by running the following:
```
$ sudo kata-runtime check
```
If your system is *not* able to run Kata Containers, the previous command will error out and explain why.
## Configure to use initrd or rootfs image
Kata containers can run with either an initrd image or a rootfs image.
If you want to test with `initrd`, make sure you have uncommented `initrd = /usr/share/kata-containers/kata-containers-initrd.img`
in your configuration file, commenting out the `image` line in
`/etc/kata-containers/configuration.toml`. For example:
If you want to test with `initrd`, make sure you have `initrd = /usr/share/kata-containers/kata-containers-initrd.img`
in your configuration file, commenting out the `image` line:
```bash
`/usr/share/defaults/kata-containers/configuration.toml` and comment out the `image` line with the following. For example:
```
$ sudo mkdir -p /etc/kata-containers/
$ sudo install -o root -g root -m 0640 /usr/share/defaults/kata-containers/configuration.toml /etc/kata-containers
$ sudo sed -i 's/^\(image =.*\)/# \1/g' /etc/kata-containers/configuration.toml
$ sudo sed -i 's/^# \(initrd =.*\)/\1/g' /etc/kata-containers/configuration.toml
```
You can create the initrd image as shown in the [create an initrd image](#create-an-initrd-image---optional) section.
If you want to test with a rootfs `image`, make sure you have uncommented `image = /usr/share/kata-containers/kata-containers.img`
If you want to test with a rootfs `image`, make sure you have `image = /usr/share/kata-containers/kata-containers.img`
in your configuration file, commenting out the `initrd` line. For example:
```bash
```
$ sudo mkdir -p /etc/kata-containers/
$ sudo install -o root -g root -m 0640 /usr/share/defaults/kata-containers/configuration.toml /etc/kata-containers
$ sudo sed -i 's/^\(initrd =.*\)/# \1/g' /etc/kata-containers/configuration.toml
```
The rootfs image is created as shown in the [create a rootfs image](#create-a-rootfs-image) section.
@@ -80,38 +90,19 @@ rootfs `image`(100MB+).
Enable seccomp as follows:
```bash
```
$ sudo sed -i '/^disable_guest_seccomp/ s/true/false/' /etc/kata-containers/configuration.toml
```
This will pass container seccomp profiles to the kata agent.
## Enable SELinux on the guest
> **Note:**
>
> - To enable SELinux on the guest, SELinux MUST be also enabled on the host.
> - You MUST create and build a rootfs image for SELinux in advance.
> See [Create a rootfs image](#create-a-rootfs-image) and [Build a rootfs image](#build-a-rootfs-image).
> - SELinux on the guest is currently supported only with a rootfs image, so
> you cannot enable SELinux with the agent init (`AGENT_INIT=yes`) yet.
Enable guest SELinux in Enforcing mode as follows:
```
$ sudo sed -i '/^disable_guest_selinux/ s/true/false/g' /etc/kata-containers/configuration.toml
```
The runtime will automatically add `selinux=1` to the kernel parameters and the `xattr` option to
`virtiofsd` when `disable_guest_selinux` is set to `false`.
If you want to enable SELinux in Permissive mode, add `enforcing=0` to the kernel parameters.
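For example, following the same `sed` pattern used elsewhere in this guide (a sketch, assuming the configuration file lives at `/etc/kata-containers/configuration.toml`):
```bash
$ sudo sed -i -e 's/^kernel_params = "\(.*\)"/kernel_params = "\1 enforcing=0"/g' /etc/kata-containers/configuration.toml
```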
## Enable full debug
Enable full debug as follows:
```bash
```
$ sudo mkdir -p /etc/kata-containers/
$ sudo install -o root -g root -m 0640 /usr/share/defaults/kata-containers/configuration.toml /etc/kata-containers
$ sudo sed -i -e 's/^# *\(enable_debug\).*=.*$/\1 = true/g' /etc/kata-containers/configuration.toml
$ sudo sed -i -e 's/^kernel_params = "\(.*\)"/kernel_params = "\1 agent.log=debug initcall_debug"/g' /etc/kata-containers/configuration.toml
```
@@ -125,7 +116,7 @@ detailed below.
The Kata logs appear in the `containerd` log files, along with logs from `containerd` itself.
For more information about `containerd` debug, please see the
[`containerd` documentation](https://github.com/containerd/containerd/blob/main/docs/getting-started.md).
[`containerd` documentation](https://github.com/containerd/containerd/blob/master/docs/getting-started.md).
#### Enabling full `containerd` debug
@@ -184,7 +175,7 @@ and offers possible workarounds and fixes.
it stores. When messages are suppressed, it is noted in the logs. This can be checked
for by looking for those notifications, such as:
```bash
```sh
$ sudo journalctl --since today | fgrep Suppressed
Jun 29 14:51:17 mymachine systemd-journald[346]: Suppressed 4150 messages from /system.slice/docker.service
```
@@ -209,7 +200,7 @@ RateLimitBurst=0
Restart `systemd-journald` for the changes to take effect:
```bash
```sh
$ sudo systemctl restart systemd-journald
```
@@ -221,26 +212,25 @@ $ sudo systemctl restart systemd-journald
>
> - You should only do this step if you are testing with the latest version of the agent.
The agent is built with a statically linked `musl`. The default `libc` used is `musl`, but on `ppc64le` and `s390x`, `gnu` should be used. To configure this:
The rust-agent is built with a static linked `musl.` To configure this:
```bash
$ export ARCH="$(uname -m)"
$ if [ "$ARCH" = "ppc64le" -o "$ARCH" = "s390x" ]; then export LIBC=gnu; else export LIBC=musl; fi
$ [ "${ARCH}" == "ppc64le" ] && export ARCH=powerpc64le
$ rustup target add "${ARCH}-unknown-linux-${LIBC}"
```
rustup target add x86_64-unknown-linux-musl
sudo ln -s /usr/bin/g++ /bin/musl-g++
```
To build the agent:
```bash
$ make -C kata-containers/src/agent
```
$ go get -d -u github.com/kata-containers/kata-containers
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/src/agent && make
```
The agent is built with seccomp capability by default.
If you want to build the agent without the seccomp capability, you need to run `make` with `SECCOMP=no` as follows.
```bash
$ make -C kata-containers/src/agent SECCOMP=no
```
$ make -C $GOPATH/src/github.com/kata-containers/kata-containers/src/agent SECCOMP=no
```
> **Note:**
@@ -248,6 +238,13 @@ $ make -C kata-containers/src/agent SECCOMP=no
> - If you enable seccomp in the main configuration file but build the agent without seccomp capability,
> the runtime exits conservatively with an error message.
## Get the osbuilder
```
$ go get -d -u github.com/kata-containers/kata-containers
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/osbuilder
```
## Create a rootfs image
### Create a local rootfs
@@ -255,32 +252,24 @@ As a prerequisite, you need to install Docker. Otherwise, you will not be
able to run the `rootfs.sh` script with `USE_DOCKER=true` as expected in
the following example.
```bash
$ export distro="ubuntu" # example
$ export ROOTFS_DIR="$(realpath kata-containers/tools/osbuilder/rootfs-builder/rootfs)"
$ sudo rm -rf "${ROOTFS_DIR}"
$ pushd kata-containers/tools/osbuilder/rootfs-builder
$ script -fec 'sudo -E USE_DOCKER=true ./rootfs.sh "${distro}"'
$ popd
```
$ export ROOTFS_DIR=${GOPATH}/src/github.com/kata-containers/kata-containers/tools/osbuilder/rootfs-builder/rootfs
$ sudo rm -rf ${ROOTFS_DIR}
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/osbuilder/rootfs-builder
$ script -fec 'sudo -E GOPATH=$GOPATH USE_DOCKER=true ./rootfs.sh ${distro}'
```
You MUST choose a distribution (e.g., `ubuntu`) for `${distro}`.
You can get a list of the supported distributions by running the following.
```bash
$ ./kata-containers/tools/osbuilder/rootfs-builder/rootfs.sh -l
```
$ ./rootfs.sh -l
```
If you want to build the agent without seccomp capability, you need to run the `rootfs.sh` script with `SECCOMP=no` as follows.
```bash
$ script -fec 'sudo -E AGENT_INIT=yes USE_DOCKER=true SECCOMP=no ./rootfs.sh "${distro}"'
```
If you want to enable SELinux on the guest, you MUST choose `centos` and run the `rootfs.sh` script with `SELINUX=yes` as follows.
```
$ script -fec 'sudo -E GOPATH=$GOPATH USE_DOCKER=true SELINUX=yes ./rootfs.sh centos'
$ script -fec 'sudo -E GOPATH=$GOPATH AGENT_INIT=yes USE_DOCKER=true SECCOMP=no ./rootfs.sh ${distro}'
```
> **Note:**
@@ -296,32 +285,18 @@ $ script -fec 'sudo -E GOPATH=$GOPATH USE_DOCKER=true SELINUX=yes ./rootfs.sh ce
>
> - You should only do this step if you are testing with the latest version of the agent.
```bash
$ sudo install -o root -g root -m 0550 -t "${ROOTFS_DIR}/usr/bin" "${ROOTFS_DIR}/../../../../src/agent/target/x86_64-unknown-linux-musl/release/kata-agent"
$ sudo install -o root -g root -m 0440 "${ROOTFS_DIR}/../../../../src/agent/kata-agent.service" "${ROOTFS_DIR}/usr/lib/systemd/system/"
$ sudo install -o root -g root -m 0440 "${ROOTFS_DIR}/../../../../src/agent/kata-containers.target" "${ROOTFS_DIR}/usr/lib/systemd/system/"
```
$ sudo install -o root -g root -m 0550 -t ${ROOTFS_DIR}/usr/bin ../../../src/agent/target/x86_64-unknown-linux-musl/release/kata-agent
$ sudo install -o root -g root -m 0440 ../../../src/agent/kata-agent.service ${ROOTFS_DIR}/usr/lib/systemd/system/
$ sudo install -o root -g root -m 0440 ../../../src/agent/kata-containers.target ${ROOTFS_DIR}/usr/lib/systemd/system/
```
### Build a rootfs image
```bash
$ pushd kata-containers/tools/osbuilder/image-builder
$ script -fec 'sudo -E USE_DOCKER=true ./image_builder.sh "${ROOTFS_DIR}"'
$ popd
```
If you want to enable SELinux on the guest, you MUST run the `image_builder.sh` script with `SELINUX=yes`
to label the guest image as follows.
To label the image on the host, you need to make sure that SELinux is enabled (`selinuxfs` is mounted) on the host
and the rootfs MUST be created by running the `rootfs.sh` with `SELINUX=yes`.
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/osbuilder/image-builder
$ script -fec 'sudo -E USE_DOCKER=true ./image_builder.sh ${ROOTFS_DIR}'
```
$ script -fec 'sudo -E USE_DOCKER=true SELINUX=yes ./image_builder.sh ${ROOTFS_DIR}'
```
Currently, the `image_builder.sh` uses `chcon` as an interim solution in order to apply `container_runtime_exec_t`
to the `kata-agent`. Hence, if you run `restorecon` on the guest image after running `image_builder.sh`,
you need to relabel the `kata-agent` with `container_runtime_exec_t` yourself.
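As an illustrative sketch, assuming the guest image is mounted at a hypothetical `/mnt/guest-image`, the relabeling could be redone with:
```bash
$ sudo chcon -t container_runtime_exec_t /mnt/guest-image/usr/bin/kata-agent
```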
> **Notes:**
>
@@ -336,26 +311,21 @@ the `kata-agent` needs to be labeled `container_runtime_exec_t` again by yoursel
### Install the rootfs image
```bash
$ pushd kata-containers/tools/osbuilder/image-builder
$ commit="$(git log --format=%h -1 HEAD)"
$ date="$(date +%Y-%m-%d-%T.%N%z)"
```
$ commit=$(git log --format=%h -1 HEAD)
$ date=$(date +%Y-%m-%d-%T.%N%z)
$ image="kata-containers-${date}-${commit}"
$ sudo install -o root -g root -m 0640 -D kata-containers.img "/usr/share/kata-containers/${image}"
$ (cd /usr/share/kata-containers && sudo ln -sf "$image" kata-containers.img)
$ popd
```
## Create an initrd image - OPTIONAL
### Create a local rootfs for initrd image
```bash
$ export distro="ubuntu" # example
$ export ROOTFS_DIR="$(realpath kata-containers/tools/osbuilder/rootfs-builder/rootfs)"
$ sudo rm -rf "${ROOTFS_DIR}"
$ pushd kata-containers/tools/osbuilder/rootfs-builder/
$ script -fec 'sudo -E AGENT_INIT=yes USE_DOCKER=true ./rootfs.sh "${distro}"'
$ popd
```
$ export ROOTFS_DIR="${GOPATH}/src/github.com/kata-containers/kata-containers/tools/osbuilder/rootfs-builder/rootfs"
$ sudo rm -rf ${ROOTFS_DIR}
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/osbuilder/rootfs-builder
$ script -fec 'sudo -E GOPATH=$GOPATH AGENT_INIT=yes USE_DOCKER=true ./rootfs.sh ${distro}'
```
`AGENT_INIT` controls whether the guest image uses the Kata agent as the guest `init` process. When you create an initrd image,
always set `AGENT_INIT` to `yes`.
@@ -363,14 +333,14 @@ always set `AGENT_INIT` to `yes`.
You MUST choose a distribution (e.g., `ubuntu`) for `${distro}`.
You can get a list of the supported distributions by running the following.
```bash
$ ./kata-containers/tools/osbuilder/rootfs-builder/rootfs.sh -l
```
$ ./rootfs.sh -l
```
If you want to build the agent without seccomp capability, you need to run the `rootfs.sh` script with `SECCOMP=no` as follows.
```bash
$ script -fec 'sudo -E AGENT_INIT=yes USE_DOCKER=true SECCOMP=no ./rootfs.sh "${distro}"'
```
$ script -fec 'sudo -E GOPATH=$GOPATH AGENT_INIT=yes USE_DOCKER=true SECCOMP=no ./rootfs.sh ${distro}'
```
> **Note:**
@@ -379,31 +349,28 @@ $ script -fec 'sudo -E AGENT_INIT=yes USE_DOCKER=true SECCOMP=no ./rootfs.sh "${
Optionally, add your custom agent binary to the rootfs with the following commands. The default `$LIBC` used
is `musl`, but on ppc64le and s390x, `gnu` should be used. Also, Rust refers to ppc64le as `powerpc64le`:
```bash
$ export ARCH="$(uname -m)"
$ [ "${ARCH}" == "ppc64le" ] || [ "${ARCH}" == "s390x" ] && export LIBC=gnu || export LIBC=musl
$ [ "${ARCH}" == "ppc64le" ] && export ARCH=powerpc64le
$ sudo install -o root -g root -m 0550 -T "${ROOTFS_DIR}/../../../../src/agent/target/${ARCH}-unknown-linux-${LIBC}/release/kata-agent" "${ROOTFS_DIR}/sbin/init"
```
$ export ARCH=$(uname -m)
$ [ ${ARCH} == "ppc64le" ] || [ ${ARCH} == "s390x" ] && export LIBC=gnu || export LIBC=musl
$ [ ${ARCH} == "ppc64le" ] && export ARCH=powerpc64le
$ sudo install -o root -g root -m 0550 -T ../../../src/agent/target/${ARCH}-unknown-linux-${LIBC}/release/kata-agent ${ROOTFS_DIR}/sbin/init
```
### Build an initrd image
```bash
$ pushd kata-containers/tools/osbuilder/initrd-builder
$ script -fec 'sudo -E AGENT_INIT=yes USE_DOCKER=true ./initrd_builder.sh "${ROOTFS_DIR}"'
$ popd
```
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/osbuilder/initrd-builder
$ script -fec 'sudo -E AGENT_INIT=yes USE_DOCKER=true ./initrd_builder.sh ${ROOTFS_DIR}'
```
### Install the initrd image
```bash
$ pushd kata-containers/tools/osbuilder/initrd-builder
$ commit="$(git log --format=%h -1 HEAD)"
$ date="$(date +%Y-%m-%d-%T.%N%z)"
```
$ commit=$(git log --format=%h -1 HEAD)
$ date=$(date +%Y-%m-%d-%T.%N%z)
$ image="kata-containers-initrd-${date}-${commit}"
$ sudo install -o root -g root -m 0640 -D kata-containers-initrd.img "/usr/share/kata-containers/${image}"
$ (cd /usr/share/kata-containers && sudo ln -sf "$image" kata-containers-initrd.img)
$ popd
```
# Install guest kernel images
@@ -422,43 +389,43 @@ Kata Containers makes use of upstream QEMU branch. The exact version
and repository utilized can be found by looking at the [versions file](../versions.yaml).
Find the correct version of QEMU from the versions file:
```bash
$ source kata-containers/tools/packaging/scripts/lib.sh
$ qemu_version="$(get_from_kata_deps "assets.hypervisor.qemu.version")"
$ echo "${qemu_version}"
```
$ source ${GOPATH}/src/github.com/kata-containers/kata-containers/tools/packaging/scripts/lib.sh
$ qemu_version=$(get_from_kata_deps "assets.hypervisor.qemu.version")
$ echo ${qemu_version}
```
Get source from the matching branch of QEMU:
```bash
$ git clone -b "${qemu_version}" https://github.com/qemu/qemu.git
$ your_qemu_directory="$(realpath qemu)"
```
$ go get -d github.com/qemu/qemu
$ cd ${GOPATH}/src/github.com/qemu/qemu
$ git checkout ${qemu_version}
$ your_qemu_directory=${GOPATH}/src/github.com/qemu/qemu
```
There are scripts to manage the build and packaging of QEMU. For the examples below, set your
environment as:
```bash
$ packaging_dir="$(realpath kata-containers/tools/packaging)"
```
$ go get -d github.com/kata-containers/kata-containers
$ packaging_dir="${GOPATH}/src/github.com/kata-containers/kata-containers/tools/packaging"
```
Kata often utilizes patches for not-yet-upstream and/or backported fixes for components,
including QEMU. These can be found in the [packaging/QEMU directory](../tools/packaging/qemu/patches),
and it's *recommended* that you apply them. For example, suppose that you are going to build QEMU
version 5.2.0, do:
```bash
$ "$packaging_dir/scripts/apply_patches.sh" "$packaging_dir/qemu/patches/5.2.x/"
```
$ cd $your_qemu_directory
$ $packaging_dir/scripts/apply_patches.sh $packaging_dir/qemu/patches/5.2.x/
```
To build utilizing the same options as Kata, you should make use of the `configure-hypervisor.sh` script. For example:
```bash
$ pushd "$your_qemu_directory"
$ "$packaging_dir/scripts/configure-hypervisor.sh" kata-qemu > kata.cfg
$ eval ./configure "$(cat kata.cfg)"
$ make -j $(nproc --ignore=1)
# Optional
$ sudo -E make install
$ popd
```
If you do not want to install the respective QEMU version, the configuration file can be modified to point to the correct binary. In `/etc/kata-containers/configuration.toml`, change `path = "/path/to/qemu/build/qemu-system-x86_64"` to point to the correct QEMU binary.
$ cd $your_qemu_directory
$ $packaging_dir/scripts/configure-hypervisor.sh kata-qemu > kata.cfg
$ eval ./configure "$(cat kata.cfg)"
$ make -j $(nproc)
$ sudo -E make install
```
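As a sketch of the configuration edit described above, assuming QEMU was built in the default `build/` output tree (an assumption) and that `path =` occurs only for the hypervisor binary entry:
```bash
$ sudo sed -i -e "s|^path = \".*\"|path = \"${your_qemu_directory}/build/qemu-system-x86_64\"|" /etc/kata-containers/configuration.toml
```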
See the [static-build script for QEMU](../tools/packaging/static-build/qemu/build-static-qemu.sh) for a reference on how to get, setup, configure and build QEMU for Kata.
@@ -470,33 +437,11 @@ See the [static-build script for QEMU](../tools/packaging/static-build/qemu/buil
> under upstream review for supporting NVDIMM on aarch64.
>
You could build the custom `qemu-system-aarch64` as required with the following command:
```bash
$ git clone https://github.com/kata-containers/tests.git
$ script -fec 'sudo -E tests/.ci/install_qemu.sh'
```
## Build `virtiofsd`
When using the file system type virtio-fs (the default), `virtiofsd` is required:
```bash
$ pushd kata-containers/tools/packaging/static-build/virtiofsd
$ ./build.sh
$ popd
$ go get -d github.com/kata-containers/tests
$ script -fec 'sudo -E ${GOPATH}/src/github.com/kata-containers/tests/.ci/install_qemu.sh'
```
Modify `/etc/kata-containers/configuration.toml` and update value `virtio_fs_daemon = "/path/to/kata-containers/tools/packaging/static-build/virtiofsd/virtiofsd/virtiofsd"` to point to the binary.
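A sketch of that edit, reusing the placeholder path from above:
```bash
$ sudo sed -i -e 's|^virtio_fs_daemon = ".*"|virtio_fs_daemon = "/path/to/kata-containers/tools/packaging/static-build/virtiofsd/virtiofsd/virtiofsd"|' /etc/kata-containers/configuration.toml
```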
# Check hardware requirements
You can check if your system is capable of creating a Kata Container by running the following:
```bash
$ sudo kata-runtime check
```
If your system is *not* able to run Kata Containers, the previous command will error out and explain why.
# Run Kata Containers with Containerd
Refer to the [How to use Kata Containers and Containerd](how-to/containerd-kata.md) how-to guide.
@@ -518,7 +463,7 @@ script and paste its output directly into a
> [runtime](../src/runtime) repository.
To perform analysis on Kata logs, use the
[`kata-log-parser`](../src/tools/log-parser)
[`kata-log-parser`](https://github.com/kata-containers/tests/tree/main/cmd/log-parser)
tool, which can convert the logs into formats (e.g. JSON, TOML, XML, and YAML).
See [Set up a debug console](#set-up-a-debug-console).
@@ -527,7 +472,7 @@ See [Set up a debug console](#set-up-a-debug-console).
## Checking Docker default runtime
```bash
```
$ sudo docker info 2>/dev/null | grep -i "default runtime" | cut -d: -f2- | grep -q runc && echo "SUCCESS" || echo "ERROR: Incorrect default Docker runtime"
```
## Set up a debug console
@@ -544,7 +489,7 @@ contain either `/bin/sh` or `/bin/bash`.
Enable debug_console_enabled in the `configuration.toml` configuration file:
```toml
```
[agent.kata]
debug_console_enabled = true
```
@@ -555,7 +500,7 @@ This will pass `agent.debug_console agent.debug_console_vport=1026` to agent as
For Kata Containers `2.0.x` releases, the `kata-runtime exec` command depends on the `kata-monitor` running, in order to get the sandbox's `vsock` address to connect to. Thus, first start the `kata-monitor` process.
```bash
```
$ sudo kata-monitor
```
@@ -575,7 +520,7 @@ bash-4.2# exit
exit
```
`kata-runtime exec` has a command-line option `runtime-namespace`, which is used to specify under which [runtime namespace](https://github.com/containerd/containerd/blob/main/docs/namespaces.md) the particular pod was created. By default, it is set to `k8s.io` and works for containerd when configured
`kata-runtime exec` has a command-line option `runtime-namespace`, which is used to specify under which [runtime namespace](https://github.com/containerd/containerd/blob/master/docs/namespaces.md) the particular pod was created. By default, it is set to `k8s.io` and works for containerd when configured
with Kubernetes. For CRI-O, the namespace should be set to `default` explicitly. This should not be confused with [Kubernetes namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/).
For other CRI-runtimes and configurations, you may need to set the namespace utilizing the `runtime-namespace` option.
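For example, a hypothetical invocation for a pod created by CRI-O (the sandbox ID is a placeholder):
```bash
$ sudo kata-runtime exec --runtime-namespace default ${sandbox_id}
```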
@@ -617,10 +562,10 @@ an additional `coreutils` package.
For example using CentOS:
```bash
$ pushd kata-containers/tools/osbuilder/rootfs-builder
$ export ROOTFS_DIR="$(realpath ./rootfs)"
$ script -fec 'sudo -E USE_DOCKER=true EXTRA_PKGS="bash coreutils" ./rootfs.sh centos'
```
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/osbuilder/rootfs-builder
$ export ROOTFS_DIR=${GOPATH}/src/github.com/kata-containers/kata-containers/tools/osbuilder/rootfs-builder/rootfs
$ script -fec 'sudo -E GOPATH=$GOPATH USE_DOCKER=true EXTRA_PKGS="bash coreutils" ./rootfs.sh centos'
```
#### Build the debug image
@@ -635,10 +580,9 @@ Install the image:
>**Note**: When using an initrd image, replace the below rootfs image name `kata-containers.img`
>with the initrd image name `kata-containers-initrd.img`.
```bash
```
$ name="kata-containers-centos-with-debug-console.img"
$ sudo install -o root -g root -m 0640 kata-containers.img "/usr/share/kata-containers/${name}"
$ popd
```
Next, modify the `image=` values in the `[hypervisor.qemu]` section of the
@@ -647,7 +591,7 @@ to specify the full path to the image name specified in the previous code
section. Alternatively, recreate the symbolic link so it points to
the new debug image:
```bash
```
$ (cd /usr/share/kata-containers && sudo ln -sf "$name" kata-containers.img)
```
@@ -658,7 +602,7 @@ to avoid all subsequently created containers from using the debug image.
Create a container as normal. For example using `crictl`:
```bash
```
$ sudo crictl run -r kata container.yaml pod.yaml
```
@@ -671,7 +615,7 @@ those for firecracker / cloud-hypervisor.
Add `agent.debug_console` to the guest kernel command line to allow the agent process to start a debug console.
```bash
```
$ sudo sed -i -e 's/^kernel_params = "\(.*\)"/kernel_params = "\1 agent.debug_console"/g' "${kata_configuration_file}"
```
@@ -692,7 +636,7 @@ between the host and the guest. The kernel command line option `agent.debug_cons
Add the parameter `agent.debug_console_vport=1026` to the kernel command line
as shown below:
```bash
```
sudo sed -i -e 's/^kernel_params = "\(.*\)"/kernel_params = "\1 agent.debug_console_vport=1026"/g' "${kata_configuration_file}"
```
@@ -705,7 +649,7 @@ Next, connect to the debug console. The VSOCKS paths vary slightly between each
VMM solution.
In case of cloud-hypervisor, connect to the `vsock` as shown:
```bash
```
$ sudo su -c 'cd /var/run/vc/vm/${sandbox_id}/root/ && socat stdin unix-connect:clh.sock'
CONNECT 1026
```
@@ -713,7 +657,7 @@ CONNECT 1026
**Note**: You need to type `CONNECT 1026` and press the `RETURN` key after entering the `socat` command.
For firecracker, connect to the `hvsock` as shown:
```bash
```
$ sudo su -c 'cd /var/run/vc/firecracker/${sandbox_id}/root/ && socat stdin unix-connect:kata.hvsock'
CONNECT 1026
```
@@ -722,7 +666,7 @@ CONNECT 1026
For QEMU, connect to the `vsock` as shown:
```bash
```
$ sudo su -c 'cd /var/run/vc/vm/${sandbox_id} && socat "stdin,raw,echo=0,escape=0x11" "unix-connect:console.sock"'
```
@@ -735,7 +679,7 @@ If the image is created using
[osbuilder](../tools/osbuilder), the following YAML
file exists and contains details of the image and how it was created:
```bash
```
$ cat /var/lib/osbuilder/osbuilder.yaml
```
@@ -754,11 +698,11 @@ options to have the kernel boot messages logged into the system journal.
For generic information on enabling debug in the configuration file, see the
[Enable full debug](#enable-full-debug) section.
The kernel boot messages will appear in the `kata` logs (and in the `containerd` or `CRI-O` log appropriately).
The kernel boot messages will appear in the `containerd` or `CRI-O` log appropriately,
such as:
```bash
$ sudo journalctl -t kata
$ sudo journalctl -t containerd
-- Logs begin at Thu 2020-02-13 16:20:40 UTC, end at Thu 2020-02-13 16:30:23 UTC. --
...
time="2020-09-15T14:56:23.095113803+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/ab9f633385d4987828d342e47554fc6442445b32039023eeddaa971c1bb56791/console.sock pid=107642 sandbox=ab9f633385d4987828d342e47554fc6442445b32039023eeddaa971c1bb56791 source=virtcontainers subsystem=sandbox vmconsole="[ 0.395399] brd: module loaded"
@@ -768,4 +712,3 @@ time="2020-09-15T14:56:23.105268162+08:00" level=debug msg="reading guest consol
time="2020-09-15T14:56:23.121121598+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/ab9f633385d4987828d342e47554fc6442445b32039023eeddaa971c1bb56791/console.sock pid=107642 sandbox=ab9f633385d4987828d342e47554fc6442445b32039023eeddaa971c1bb56791 source=virtcontainers subsystem=sandbox vmconsole="[ 0.421324] memmap_init_zone_device initialised 32768 pages in 12ms"
...
```
Refer to the [kata-log-parser documentation](../src/tools/log-parser/README.md), which describes a convenient tool for extracting these messages.

View File

@@ -46,7 +46,7 @@ The following link shows the latest list of limitations:
# Contributing
If you would like to work on resolving a limitation, please refer to the
[contributors guide](https://github.com/kata-containers/community/blob/main/CONTRIBUTING.md).
[contributors guide](https://github.com/kata-containers/community/blob/master/CONTRIBUTING.md).
If you wish to raise an issue for a new limitation, either
[raise an issue directly on the runtime](https://github.com/kata-containers/kata-containers/issues/new)
or see the
@@ -57,29 +57,13 @@ for advice on which repository to raise the issue against.
This section lists items that might be possible to fix.
## OCI CLI commands
### Docker and Podman support
Currently Kata Containers does not support Podman.
See issue https://github.com/kata-containers/kata-containers/issues/722 for more information.
Docker supports Kata Containers since 22.06:
```bash
$ sudo docker run --rm --runtime io.containerd.kata.v2 busybox true
```
Kata Containers works perfectly with containerd; we recommend using
containerd's Docker-style command line tool [`nerdctl`](https://github.com/containerd/nerdctl).
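For example, a quick smoke test via `nerdctl` might look like the following sketch (it assumes the Kata runtime is registered with containerd as `io.containerd.kata.v2`, as in the example above):
```bash
$ sudo nerdctl run --rm --runtime io.containerd.kata.v2 alpine uname -r
```
The kernel version printed should be the guest kernel rather than the host's, confirming the container ran inside a VM.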
## Runtime commands
### checkpoint and restore
The runtime does not provide `checkpoint` and `restore` commands. There
are discussions about using VM save and restore to give us a
[`criu`](https://github.com/checkpoint-restore/criu)-like functionality,
`[criu](https://github.com/checkpoint-restore/criu)`-like functionality,
which might provide a solution.
Note that the OCI standard does not specify `checkpoint` and `restore`
@@ -102,41 +86,20 @@ All other configurations are supported and are working properly.
## Networking
### Host network
### Docker swarm and compose support
Host network (`nerdctl/docker run --net=host` or [Kubernetes `HostNetwork`](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#hosts-namespaces)) is not supported.
It is not possible to directly access the host networking configuration
from within the VM.
The newest version of Docker supported is specified by the
`externals.docker.version` variable in the
[versions database](https://github.com/kata-containers/runtime/blob/master/versions.yaml).
The `--net=host` option can still be used with `runc` containers and
inter-mixed with running Kata Containers, thus enabling use of `--net=host`
when necessary.
Basic Docker swarm support works. However, if you want to use custom networks
with Docker's swarm, an older version of Docker is required. This is specified
by the `externals.docker.meta.swarm-version` variable in the
[versions database](https://github.com/kata-containers/runtime/blob/master/versions.yaml).
It should be noted that, currently, passing the `--net=host` option into a
Kata Container may result in the Kata Container networking setup
modifying, re-configuring and therefore possibly breaking the host
networking setup. Do not use `--net=host` with Kata Containers.
See issue https://github.com/kata-containers/runtime/issues/175 for more information.
### Support for joining an existing VM network
Docker supports the ability for containers to join another container's
namespace with the `docker run --net=containers` syntax. This allows
multiple containers to share a common network namespace and the network
interfaces placed in the network namespace. Kata Containers does not
support network namespace sharing. If a Kata Container is setup to
share the network namespace of a `runc` container, the runtime
effectively takes over all the network interfaces assigned to the
namespace and binds them to the VM. Consequently, the `runc` container loses
its network connectivity.
### docker run --link
The runtime does not support the `docker run --link` command. This
command is now deprecated by docker and we have no intention of adding support.
Equivalent functionality can be achieved with the newer docker networking commands.
See more documentation at
[docs.docker.com](https://docs.docker.com/network/links/).
Docker compose normally uses custom networks, so also has the same limitations.
## Resource management
@@ -149,12 +112,82 @@ See issue https://github.com/clearcontainers/runtime/issues/341 and [the constra
For CPUs resource management see
[CPU constraints](design/vcpu-handling.md).
### docker run and shared memory
The runtime does not implement the `docker run --shm-size` command to
set the size of the `/dev/shm tmpfs` within the container. It is possible to pass this configuration value into the VM container so the appropriate mount command happens at launch time.
See issue https://github.com/kata-containers/kata-containers/issues/21 for more information.
### docker run and sysctl
The `docker run --sysctl` feature is not implemented. At the runtime
level, this equates to the `linux.sysctl` OCI configuration. Docker
allows configuring the sysctl settings that support namespacing. From a security and isolation point of view, it might make sense to set them in the VM, which isolates sysctl settings. Also, given that each Kata Container has its own kernel, we can support setting of sysctl settings that are not namespaced. In some cases, we might need to support configuring some of the settings on both the host side Kata Container namespace and the Kata Containers kernel.
See issue https://github.com/kata-containers/runtime/issues/185 for more information.
## Docker daemon features
Some features enabled or implemented via the
[`dockerd` daemon](https://docs.docker.com/config/daemon/) configuration are not yet
implemented.
### SELinux support
The `dockerd` configuration option `"selinux-enabled": true` is not presently implemented
in Kata Containers. Enabling this option causes an OCI runtime error.
See issue https://github.com/kata-containers/runtime/issues/784 for more information.
The consequence of this is that the [Docker --security-opt is only partially supported](#docker---security-opt-option-partially-supported).
Kubernetes [SELinux labels](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#assign-selinux-labels-to-a-container) will also not be applied.
# Architectural limitations
This section lists items that might not be fixed due to fundamental
architectural differences between "soft containers" (i.e. traditional Linux*
containers) and those based on VMs.
## Networking limitations
### Support for joining an existing VM network
Docker supports the ability for containers to join another container's
namespace with the `docker run --net=containers` syntax. This allows
multiple containers to share a common network namespace and the network
interfaces placed in the network namespace. Kata Containers does not
support network namespace sharing. If a Kata Container is setup to
share the network namespace of a `runc` container, the runtime
effectively takes over all the network interfaces assigned to the
namespace and binds them to the VM. Consequently, the `runc` container loses
its network connectivity.
### docker --net=host
Docker host networking (`docker run --net=host`) is not supported.
It is not possible to directly access the host networking configuration
from within the VM.
The `--net=host` option can still be used with `runc` containers and
inter-mixed with running Kata Containers, thus enabling use of `--net=host`
when necessary.
It should be noted that, currently, passing the `--net=host` option into a
Kata Container may result in the Kata Container networking setup
modifying, re-configuring and therefore possibly breaking the host
networking setup. Do not use `--net=host` with Kata Containers.
### docker run --link
The runtime does not support the `docker run --link` command. This
command is now deprecated by docker and we have no intention of adding support.
Equivalent functionality can be achieved with the newer docker networking commands.
See more documentation at
[docs.docker.com](https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/).
## Storage limitations
### Kubernetes `volumeMounts.subPaths`
@@ -165,11 +198,15 @@ moment.
See [this issue](https://github.com/kata-containers/runtime/issues/2812) for more details.
[Another issue](https://github.com/kata-containers/kata-containers/issues/1728) focuses on the case of `emptyDir`.
## Host resource sharing
### Privileged containers
### docker run --privileged
Privileged support in Kata is essentially different from `runc` containers.
Kata does support the `docker run --privileged` command, but in this case full access
to the guest VM is provided in addition to some host access.
The container runs with elevated capabilities within the guest and is granted
access to guest devices instead of the host devices.
This is also true with using `securityContext privileged=true` with Kubernetes.
@@ -179,6 +216,17 @@ The container may also be granted full access to a subset of host devices
See [Privileged Kata Containers](how-to/privileged.md) for how to configure some of this behavior.
# Miscellaneous
This section lists limitations where the possible solutions are uncertain.
## Docker --security-opt option partially supported
The `--security-opt=` option used by Docker is partially supported.
We only support the `--security-opt=no-new-privileges` and `--security-opt seccomp=/path/to/seccomp/profile.json`
options as of today.
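For instance, a seccomp profile could be passed as in this sketch (the profile path is the placeholder from above; the image is arbitrary):
```bash
$ sudo docker run --runtime io.containerd.kata.v2 --security-opt seccomp=/path/to/seccomp/profile.json busybox true
```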
Note: The `--security-opt apparmor=your_profile` is not yet supported. See https://github.com/kata-containers/runtime/issues/707.
# Appendices
## The constraints challenge

View File

@@ -21,15 +21,17 @@ See the [tracing documentation](tracing.md).
* [Limitations](Limitations.md): differences and limitations compared with the default [Docker](https://www.docker.com/) runtime,
[`runc`](https://github.com/opencontainers/runc).
### How-to guides
### Howto guides
See the [how-to documentation](how-to).
See the [howto documentation](how-to).
## Kata Use-Cases
* [GPU Passthrough with Kata](./use-cases/GPU-passthrough-and-Kata.md)
* [OpenStack Zun with Kata Containers](./use-cases/zun_kata.md)
* [SR-IOV with Kata](./use-cases/using-SRIOV-and-kata.md)
* [Intel QAT with Kata](./use-cases/using-Intel-QAT-and-kata.md)
* [VPP with Kata](./use-cases/using-vpp-and-kata.md)
* [SPDK vhost-user with Kata](./use-cases/using-SPDK-vhostuser-and-kata.md)
* [Intel SGX with Kata](./use-cases/using-Intel-SGX-and-kata.md)
@@ -39,7 +41,7 @@ Documents that help to understand and contribute to Kata Containers.
### Design and Implementations
* [Kata Containers Architecture](design/architecture): Architectural overview of Kata Containers
* [Kata Containers Architecture](design/architecture.md): Architectural overview of Kata Containers
* [Kata Containers E2E Flow](design/end-to-end-flow.md): The entire end-to-end flow of Kata Containers
* [Kata Containers design](./design/README.md): More Kata Containers design documents
* [Kata Containers threat model](./threat-model/threat-model.md): Kata Containers threat model
@@ -47,22 +49,9 @@ Documents that help to understand and contribute to Kata Containers.
### How to Contribute
* [Developer Guide](Developer-Guide.md): Setup the Kata Containers developing environments
* [How to contribute to Kata Containers](https://github.com/kata-containers/community/blob/main/CONTRIBUTING.md)
* [How to contribute to Kata Containers](https://github.com/kata-containers/community/blob/master/CONTRIBUTING.md)
* [Code of Conduct](../CODE_OF_CONDUCT.md)
## Help Writing a Code PR
* [Code PR advice](code-pr-advice.md).
## Help Writing Unit Tests
* [Unit Test Advice](Unit-Test-Advice.md)
* [Unit testing presentation](presentations/unit-testing/kata-containers-unit-testing.md)
## Help Improving the Documents
* [Documentation Requirements](Documentation-Requirements.md)
### Code Licensing
* [Licensing](Licensing-strategy.md): About the licensing strategy of Kata Containers.
@@ -72,9 +61,9 @@ Documents that help to understand and contribute to Kata Containers.
* [Release strategy](Stable-Branch-Strategy.md)
* [Release Process](Release-Process.md)
## Presentations
## Help Improving the Documents
* [Presentations](presentations)
* [Documentation Requirements](Documentation-Requirements.md)
## Website Changes

View File

@@ -4,11 +4,11 @@
## Requirements
- [hub](https://github.com/github/hub)
* Using an [application token](https://github.com/settings/tokens) is required for hub (set to a GITHUB_TOKEN environment variable).
* Using an [application token](https://github.com/settings/tokens) is required for hub.
- GitHub permissions to push tags and create releases in Kata repositories.
- GPG configured to sign git tags. https://docs.github.com/en/authentication/managing-commit-signature-verification/generating-a-new-gpg-key
- GPG configured to sign git tags. https://help.github.com/articles/generating-a-new-gpg-key/
- You should configure your GitHub to use your ssh keys (to push to branches). See https://help.github.com/articles/adding-a-new-ssh-key-to-your-github-account/.
* As an alternative, configure hub to push and fork with HTTPS, `git config --global hub.protocol https` (Not tested yet)
@@ -48,7 +48,6 @@
### Merge all bump version Pull requests
- The above step will create a GitHub pull request in the Kata projects. Trigger the CI using the `/test` command on each bump Pull request.
- Trigger the `test-kata-deploy` workflow which is under the `Actions` tab on the repository GitHub page (make sure to select the correct branch and validate it passes).
- Check any failures and fix if needed.
- Work with the Kata approvers to verify that the CI works and the pull requests are merged.
@@ -65,7 +64,7 @@
### Check Git-hub Actions
We make use of [GitHub actions](https://github.com/features/actions) in this [file](../.github/workflows/release.yaml) in the `kata-containers/kata-containers` repository to build and upload release artifacts. This action is auto triggered with the above step when a new tag is pushed to the `kata-containers/kata-containers` repository.
We make use of [GitHub actions](https://github.com/features/actions) in this [file](https://github.com/kata-containers/kata-containers/blob/main/.github/workflows/release.yaml) in the `kata-containers/kata-containers` repository to build and upload release artifacts. This action is auto triggered with the above step when a new tag is pushed to the `kata-containers/kata-containers` repository.
Check the [actions status page](https://github.com/kata-containers/kata-containers/actions) to verify all steps in the actions workflow have completed successfully. On success, a static tarball containing Kata release artifacts will be uploaded to the [Release page](https://github.com/kata-containers/kata-containers/releases).

View File

@@ -120,7 +120,7 @@ stable and main. While this is not in place currently, it should be considered i
### Patch releases
Releases are made every four weeks, which include a GitHub release as
Releases are made every three weeks, which include a GitHub release as
well as binary packages. These patch releases are made for both stable branches, and a "release candidate"
for the next `MAJOR` or `MINOR` is created from main. If there are no changes across all the repositories, no
release is created and an announcement is made on the developer mailing list to highlight this.
@@ -136,7 +136,8 @@ The process followed for making a release can be found at [Release Process](Rele
### Frequency
Minor releases are less frequent in order to provide a more stable baseline for users. They are currently
running on a sixteen-week cadence. The release schedule can be seen on the
running on a twelve week cadence. As the Kata Containers code base has reached a certain level of
maturity, we have increased the cadence from six weeks to twelve weeks. The release schedule can be seen on the
[release rotation wiki page](https://github.com/kata-containers/community/wiki/Release-Team-Rota).
### Compatibility

View File

@@ -1,377 +0,0 @@
# Unit Test Advice
## Overview
This document offers advice on writing a Unit Test (UT) in
[Golang](https://golang.org) and [Rust](https://www.rust-lang.org).
## General advice
### Unit test strategies
#### Positive and negative tests
Always add positive tests (where success is expected) *and* negative
tests (where failure is expected).
#### Boundary condition tests
Try to add unit tests that exercise boundary conditions such as:
- Missing values (`null` or `None`).
- Empty strings and huge strings.
- Empty (or uninitialised) complex data structures
(such as lists, vectors and hash tables).
- Common numeric values (such as `-1`, `0`, `1` and the minimum and
maximum values).
#### Test unusual values
Also always consider "unusual" input values such as:
- String values containing spaces, Unicode characters, special
characters, escaped characters or null bytes.
> **Note:** Consider these unusual values in prefix, infix and
> suffix position.
- String values that cannot be converted into numeric values or which
contain invalid structured data (such as invalid JSON).
#### Other types of tests
If the code requires other forms of testing (such as stress testing,
fuzz testing and integration testing), raise a GitHub issue and
reference it on the issue you are using for the main work. This
ensures the test team are aware that a new test is required.
### Test environment
#### Create unique files and directories
Ensure your tests do not write to a fixed file or directory. This can
cause problems when running multiple tests simultaneously and also
when running tests after a previous test run failure.
#### Assume parallel testing
Always assume your tests will be run *in parallel*. If this is
problematic for a test, force it to run in isolation (for example, using the
`serial_test` crate for Rust code).
### Running
Ensure you run the unit tests and they all pass before raising a PR.
Ideally do this on different distributions and architectures
to maximise coverage (and so minimise surprises when your code runs in
the CI).
## Assertions
### Golang assertions
Use the `testify` assertions package to create a new assertion object as this
keeps the test code free from distracting `if` tests:
```go
func TestSomething(t *testing.T) {
assert := assert.New(t)
err := doSomething()
assert.NoError(err)
}
```
### Rust assertions
Use the standard set of `assert!()` macros.
## Table driven tests
Try to write tests using a table-based approach. This allows you to distill
the logic into a compact table (rather than spreading the tests across
multiple test functions). It also makes it easy to cover all the
interesting boundary conditions:
### Golang table driven tests
Assume the following function:
```go
// The function under test.
//
// Accepts a string and an integer and returns the
// result of sticking them together separated by a dash as a string.
func joinParamsWithDash(str string, num int) (string, error) {
if str == "" {
return "", errors.New("string cannot be blank")
}
if num <= 0 {
return "", errors.New("number must be positive")
}
return fmt.Sprintf("%s-%d", str, num), nil
}
```
A table driven approach to testing it:
```go
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestJoinParamsWithDash(t *testing.T) {
assert := assert.New(t)
// Type used to hold function parameters and expected results.
type testData struct {
param1 string
param2 int
expectedResult string
expectError bool
}
// List of tests to run including the expected results
data := []testData{
// Failure scenarios
{"", -1, "", true},
{"", 0, "", true},
{"", 1, "", true},
{"foo", 0, "", true},
{"foo", -1, "", true},
// Success scenarios
{"foo", 1, "foo-1", false},
{"bar", 42, "bar-42", false},
}
// Run the tests
for i, d := range data {
// Create a test-specific string that is added to each assert
// call. It will be displayed if any assert test fails.
msg := fmt.Sprintf("test[%d]: %+v", i, d)
// Call the function under test
result, err := joinParamsWithDash(d.param1, d.param2)
// update the message for more information on failure
msg = fmt.Sprintf("%s, result: %q, err: %v", msg, result, err)
if d.expectError {
assert.Error(err, msg)
// If an error is expected, there is no point
// performing additional checks.
continue
}
assert.NoError(err, msg)
assert.Equal(d.expectedResult, result, msg)
}
}
```
### Rust table driven tests
Assume the following function:
```rust
// Convenience type to allow Result return types to only specify the type
// for the true case; failures are specified as static strings.
// XXX: This is an example. In real code use the "anyhow" and
// XXX: "thiserror" crates.
pub type Result<T> = std::result::Result<T, &'static str>;
// The function under test.
//
// Accepts a string and an integer and returns the
// result of sticking them together separated by a dash as a string.
fn join_params_with_dash(str: &str, num: i32) -> Result<String> {
if str.is_empty() {
return Err("string cannot be blank");
}
if num <= 0 {
return Err("number must be positive");
}
let result = format!("{}-{}", str, num);
Ok(result)
}
```
A table driven approach to testing it:
```rust
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_join_params_with_dash() {
// This is a type used to record all details of the inputs
// and outputs of the function under test.
#[derive(Debug)]
struct TestData<'a> {
str: &'a str,
num: i32,
result: Result<String>,
}
// The tests can now be specified as a set of inputs and outputs
let tests = &[
// Failure scenarios
TestData {
str: "",
num: 0,
result: Err("string cannot be blank"),
},
TestData {
str: "foo",
num: -1,
result: Err("number must be positive"),
},
// Success scenarios
TestData {
str: "foo",
num: 42,
result: Ok("foo-42".to_string()),
},
TestData {
str: "-",
num: 1,
result: Ok("--1".to_string()),
},
];
// Run the tests
for (i, d) in tests.iter().enumerate() {
// Create a string containing details of the test
let msg = format!("test[{}]: {:?}", i, d);
// Call the function under test
let result = join_params_with_dash(d.str, d.num);
// Update the test details string with the results of the call
let msg = format!("{}, result: {:?}", msg, result);
// Perform the checks
if d.result.is_ok() {
assert!(result == d.result, "{}", msg);
continue;
}
let expected_error = format!("{}", d.result.as_ref().unwrap_err());
let actual_error = format!("{}", result.unwrap_err());
assert!(actual_error == expected_error, "{}", msg);
}
}
}
```
## Temporary files
Use `t.TempDir()` to create a temporary directory. The directory created by
`t.TempDir()` is automatically removed when the test and all its subtests
complete.
### Golang temporary files
```go
func TestSomething(t *testing.T) {
assert := assert.New(t)
// Create a temporary directory
tmpdir := t.TempDir()
// Add test logic that will use the tmpdir here...
}
```
### Rust temporary files
Use the `tempfile` crate which allows files and directories to be deleted
automatically:
```rust
#[cfg(test)]
mod tests {
use tempfile::tempdir;
#[test]
fn test_something() {
// Create a temporary directory (which will be deleted automatically)
let dir = tempdir().expect("failed to create tmpdir");
let filename = dir.path().join("file.txt");
// create filename ...
}
}
```
## Test user
[Unit tests are run *twice*](../src/runtime/go-test.sh):
- as the current user
- as the `root` user (if different to the current user)
When writing a test consider which user should run it; even if the code the
test is exercising runs as `root`, it may be necessary to *only* run the test
as a non-`root` user for the test to be meaningful. Add appropriate skip
guards around code that requires `root` and non-`root` so that the test
will run if the correct type of user is detected and skipped if not.
### Run Golang tests as a different user
The main repository has the most comprehensive set of skip abilities. See:
- [`katatestutils`](../src/runtime/pkg/katatestutils)
### Run Rust tests as a different user
One method is to use the `nix` crate along with some custom macros:
```rust
#[cfg(test)]
mod tests {
#[allow(unused_macros)]
macro_rules! skip_if_root {
() => {
if nix::unistd::Uid::effective().is_root() {
println!("INFO: skipping {} which needs non-root", module_path!());
return;
}
};
}
#[allow(unused_macros)]
macro_rules! skip_if_not_root {
() => {
if !nix::unistd::Uid::effective().is_root() {
println!("INFO: skipping {} which needs root", module_path!());
return;
}
};
}
#[test]
fn test_that_must_be_run_as_root() {
// Not running as the superuser, so skip.
skip_if_not_root!();
// Run test *iff* the user running the test is root
// ...
}
}
```

View File

@@ -102,7 +102,7 @@ first
[install the latest release](#determine-latest-version).
See the
[manual installation documentation](install/README.md#manual-installation)
[manual installation installation documentation](install/README.md#manual-installation)
for details on how to automatically install and configuration a static release
with containerd.
@@ -114,7 +114,7 @@ with containerd.
> kernel or image.
If you are using custom
[guest assets](design/architecture/README.md#guest-assets),
[guest assets](design/architecture.md#guest-assets),
you must upgrade them to work with Kata Containers 2.x since Kata
Containers 1.x assets will **not** work.

View File

@@ -1,247 +0,0 @@
# Code PR Advice
Before raising a PR containing code changes, we suggest you consider
the following to ensure a smooth and fast process.
> **Note:**
>
> - All the advice in this document is optional. However, if the
> advice provided is not followed, there is no guarantee your PR
> will be merged.
>
> - All the check tools will be run automatically on your PR by the CI.
> However, if you run them locally first, there is a much better
> chance of a successful initial CI run.
## Assumptions
This document assumes you have already read (and in the case of the
code of conduct agreed to):
- The [Kata Containers code of conduct](https://github.com/kata-containers/community/blob/main/CODE_OF_CONDUCT.md).
- The [Kata Containers contributing guide](https://github.com/kata-containers/community/blob/main/CONTRIBUTING.md).
## Code
### Architectures
Do not write architecture-specific code if it is possible to write the
code generically.
### General advice
- Do not write code to impress: instead write code that is easy to read and understand.
- Always consider which user will run the code. Try to minimise
the privileges the code requires.
### Comments
Always add comments if the intent of the code is not obvious. However,
try to avoid comments if the code could be made clearer (for example
by using more meaningful variable names).
### Constants
Don't embed magic numbers and strings in functions, particularly if
they are used repeatedly.
Create constants at the top of the file instead.
### Copyright and license
Ensure all new files contain a copyright statement and an SPDX license
identifier in the comments at the top of the file.
### FIXME and TODO
If the code contains areas that are not fully implemented, make this
clear with a comment that provides a link to a GitHub issue with
further information.
Do not just rely on comments in this case though: if possible, return
a "`BUG: feature X not implemented see {bug-url}`" type error.
### Functions
- Keep functions relatively short (less than 100 lines is a good "rule of thumb").
- Document functions if the parameters, return value or general intent
of the function is not obvious.
- Always return errors where possible.
Do not discard error return values from the functions this function
calls.
### Logging
- Don't use multiple log calls when a single log call could be used.
- Use structured logging where possible to allow
[standard tooling](../src/tools/log-parser)
to extract the log fields.
### Names
Give functions, macros and variables clear and meaningful names.
### Structures
#### Golang structures
Unlike Rust, Go does not enforce that all structure members be set.
This has led to numerous bugs in the past where code like the
following is used:
```go
type Foo struct {
Key string
Value string
}
// BUG: Key not set, but nobody noticed! ;(
foo1 := Foo{
Value: "foo",
}
```
A much safer approach is to create a constructor function to enforce
integrity:
```go
type Foo struct {
Key string
Value string
}
func NewFoo(key, value string) (*Foo, error) {
if key == "" {
return nil, errors.New("Foo needs a key")
}
if value == "" {
return nil, errors.New("Foo needs a value")
}
return &Foo{
Key: key,
Value: value,
}, nil
}
func testFoo() error {
    // BUG: Key not set, but nobody noticed! ;(
    badFoo := Foo{Value: "value"}
    _ = badFoo
    // Ok - the constructor performs needed validation
    goodFoo, err := NewFoo("name", "value")
    if err != nil {
        return err
    }
    _ = goodFoo
    return nil
}
```
> **Note:**
>
> The above is just an example. The *safest* approach would be to move
> `NewFoo()` into a separate package and make `Foo` and its elements
> private. The compiler would then enforce the use of the constructor
> to guarantee correctly defined objects.
### Tracing
Consider if the code needs to create a new
[trace span](./tracing.md).
Ensure any new trace spans added to the code are completed.
## Tests
### Unit tests
Where possible, code changes should be accompanied by unit tests.
Consider using the standard
[table-based approach](Unit-Test-Advice.md)
as it encourages you to make functions small and simple, and also
allows you to think about what types of value to test.
### Other categories of test
Raise a GitHub issue in the
[`tests`](https://github.com/kata-containers/tests) repository that
explains what sort of test is required along with as much detail as
possible. Ensure the original issue is referenced on the `tests` issue.
### Unsafe code
#### Rust language specifics
Minimise the use of `unsafe` blocks in Rust code and, since it is
potentially dangerous, always write [unit tests](#unit-tests)
for this code where possible.
`expect()` and `unwrap()` will cause the code to panic on error.
Prefer to return a `Result` on error rather than using these calls to
allow the caller to deal with the error condition.
The table below lists the small number of cases where use of
`expect()` and `unwrap()` are permitted:
| Area | Rationale for permitting |
|-|-|
| In test code (the `tests` module) | Panics will cause the test to fail, which is desirable. |
| `lazy_static!()` | This magic macro cannot "return" a value as it runs before `main()`. |
| `defer!()` | Similar to golang's `defer()` but doesn't allow the use of `?`. |
| `tokio::spawn(async move {})` | Cannot currently return a `Result` from an `async move` closure. |
| If an explicit test is performed before the `unwrap()` / `expect()` | *"Just about acceptable"*, but not ideal `[*]` |
| `Mutex.lock()` | Almost unrecoverable if failed in the lock acquisition |
`[*]` - This can lead to bad *future* code: consider what would
happen if the explicit test gets dropped in the future. This is more likely
to happen if the test and the extraction of the value are two separate
operations. In summary, this strategy can introduce an insidious
maintenance issue.
## Documentation
### General requirements
- All new features should be accompanied by documentation explaining:
- What the new feature does
- Why it is useful
- How to use the feature
- Any known issues or limitations
Links should be provided to GitHub issues tracking the issues
- The [documentation requirements document](Documentation-Requirements.md)
explains how the project formats documentation.
### Markdown syntax
Run the
[markdown checker](https://github.com/kata-containers/tests/tree/main/cmd/check-markdown)
on your documentation changes.
### Spell check
Run the
[spell checker](https://github.com/kata-containers/tests/tree/main/cmd/check-spelling)
on your documentation changes.
## Finally
You may wish to read the documentation that the
[Kata Review Team](https://github.com/kata-containers/community/blob/main/Rota-Process.md) use to help review PRs:
- [PR review guide](https://github.com/kata-containers/community/blob/main/PR-Review-Guide.md).
- [documentation review process](https://github.com/kata-containers/community/blob/main/Documentation-Review-Process.md).

View File

@@ -2,19 +2,15 @@
Kata Containers design documents:
- [Kata Containers architecture](architecture)
- [Kata Containers architecture](architecture.md)
- [API Design of Kata Containers](kata-api-design.md)
- [Design requirements for Kata Containers](kata-design-requirements.md)
- [VSocks](VSocks.md)
- [VCPU handling](vcpu-handling.md)
- [VCPU threads pinning](vcpu-threads-pinning.md)
- [Host cgroups](host-cgroups.md)
- [Agent systemd cgroup](agent-systemd-cgroup.md)
- [`Inotify` support](inotify.md)
- [Metrics(Kata 2.0)](kata-2-0-metrics.md)
- [Design for Kata Containers `Lazyload` ability with `nydus`](kata-nydus-design.md)
- [Design for direct-assigned volume](direct-blk-device-assignment.md)
- [Design for core-scheduling](core-scheduling.md)
---
- [Design proposals](proposals)

View File

@@ -67,15 +67,22 @@ Using a proxy for multiplexing the connections between the VM and the host uses
4.5MB per [POD][2]. In a high density deployment this could add up to GBs of
memory that could have been used to host more PODs. When we talk about density
each kilobyte matters and it might be the decisive factor between run another
POD or not. Before making the decision not to use VSOCKs, you should ask
POD or not. For example, if you have 500 PODs running on a server, the same
number of [`kata-proxy`][3] processes will be running and consuming around
2250 MB of RAM. Before making the decision not to use VSOCKs, you should ask
yourself, how many more containers can run with the memory RAM consumed by the
Kata proxies?
### Reliability
[`kata-proxy`][3] is in charge of multiplexing the connections between virtual
machine and host processes; if it dies, all connections get broken. For example,
if you have a [POD][2] with 10 containers running and `kata-proxy` dies, it would
be impossible to contact your containers, though they would still be running.
Since communication via VSOCKs is direct, the only way to lose communication
with the containers is if the VM itself or the `containerd-shim-kata-v2` dies. If this happens,
the containers are removed automatically.
[1]: https://wiki.qemu.org/Features/VirtioVsock
[2]: ./vcpu-handling.md#virtual-cpus-and-kubernetes-pods
[3]: https://github.com/kata-containers/proxy


@@ -1,84 +0,0 @@
# Systemd Cgroup for Agent
We can interact with cgroups in two ways: **`cgroupfs`** and **`systemd`**. The former works by reading and writing cgroup `tmpfs` files under `/sys/fs/cgroup`, while the latter works by asking systemd to configure a transient unit. The Kata agent uses **`cgroupfs`** by default, unless you pass the `--systemd-cgroup` parameter.
## Usage
For systemd, the Kata agent configures cgroups according to the `linux.cgroupsPath` format defined by `runc` (`[slice]:[prefix]:[name]`). If you don't provide a valid `linux.cgroupsPath`, the Kata agent will treat it as `"system.slice:kata_agent:<container-id>"`. A sketch of this naming logic follows the quoted rules below.
> Here slice is a systemd slice under which the container is placed. If empty, it defaults to system.slice, except when cgroup v2 is used and rootless container is created, in which case it defaults to user.slice.
>
> Note that slice can contain dashes to denote a sub-slice (e.g. user-1000.slice is a correct notation, meaning a `subslice` of user.slice), but it must not contain slashes (e.g. user.slice/user-1000.slice is invalid).
>
> A slice of `-` represents a root slice.
>
> Next, prefix and name are used to compose the unit name, which is `<prefix>-<name>.scope`, unless name has `.slice` suffix, in which case prefix is ignored and the name is used as is.
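A minimal sketch of the naming rules quoted above (illustrative only: the function shape and fallback handling are assumptions, not the agent's actual code):

```rust
// Derive the systemd slice and unit name from an OCI `linux.cgroupsPath`
// of the form "[slice]:[prefix]:[name]".
fn systemd_unit(cgroups_path: &str, container_id: &str) -> (String, String) {
    let parts: Vec<&str> = cgroups_path.split(':').collect();
    let (slice, prefix, name) = match parts.as_slice() {
        [s, p, n] => (*s, *p, *n),
        // Invalid path: fall back to "system.slice:kata_agent:<container-id>".
        _ => ("system.slice", "kata_agent", container_id),
    };
    // An empty slice defaults to system.slice ("-" would denote the root slice).
    let slice = if slice.is_empty() { "system.slice" } else { slice };
    // The unit name is "<prefix>-<name>.scope", unless name ends in ".slice",
    // in which case prefix is ignored and name is used as-is.
    let unit = if name.ends_with(".slice") {
        name.to_string()
    } else {
        format!("{}-{}.scope", prefix, name)
    };
    (slice.to_string(), unit)
}

fn main() {
    assert_eq!(
        systemd_unit("system.slice:kata_agent:abc", "abc"),
        ("system.slice".to_string(), "kata_agent-abc.scope".to_string())
    );
}
```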
## Supported properties
The Kata agent translates the parameters in the `linux.resources` section of `config.json` into systemd unit properties and sends them to systemd for configuration. Since systemd supports only a limited set of properties, only the following parameters in `linux.resources` are applied. Note that hybrid cgroup mode is simply treated as legacy mode. A sketch of the translation follows the tables below.
- CPU
- v1
| runtime spec resource | systemd property name |
| --------------------- | --------------------- |
| `cpu.shares` | `CPUShares` |
- v2
| runtime spec resource | systemd property name |
| -------------------------- | -------------------------- |
| `cpu.shares` | `CPUShares` |
| `cpu.period` | `CPUQuotaPeriodUSec`(v242) |
| `cpu.period` & `cpu.quota` | `CPUQuotaPerSecUSec` |
- MEMORY
- v1
| runtime spec resource | systemd property name |
| --------------------- | --------------------- |
| `memory.limit` | `MemoryLimit` |
- v2
| runtime spec resource | systemd property name |
| ------------------------------ | --------------------- |
| `memory.low` | `MemoryLow` |
| `memory.max` | `MemoryMax` |
| `memory.swap` & `memory.limit` | `MemorySwapMax` |
- PIDS
| runtime spec resource | systemd property name |
| --------------------- | --------------------- |
| `pids.limit` | `TasksMax` |
- CPUSET
| runtime spec resource | systemd property name |
| --------------------- | -------------------------- |
| `cpuset.cpus` | `AllowedCPUs`(v244) |
| `cpuset.mems` | `AllowedMemoryNodes`(v244) |
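As a rough illustration of this translation, the sketch below (assumed, simplified field types; the real mapping lives in the agent's `cgroups/systemd` code) builds the CPU properties for a cgroup v2 host:

```rust
// Translate a few OCI `linux.resources` CPU fields into systemd unit
// properties, following the v2 table above.
fn cpu_properties(
    shares: Option<u64>,
    period_us: Option<u64>,
    quota_us: Option<i64>,
) -> Vec<(&'static str, String)> {
    let mut props = Vec::new();
    if let Some(s) = shares {
        props.push(("CPUShares", s.to_string()));
    }
    if let Some(p) = period_us {
        // Requires systemd v242 or newer.
        props.push(("CPUQuotaPeriodUSec", format!("{}us", p)));
    }
    if let (Some(p), Some(q)) = (period_us, quota_us) {
        if p > 0 && q > 0 {
            // CPU time allowed per second of wall time: quota / period.
            let per_sec = (q as u64).saturating_mul(1_000_000) / p;
            props.push(("CPUQuotaPerSecUSec", format!("{}us", per_sec)));
        }
    }
    props
}

fn main() {
    // Half a CPU: quota of 50_000us per 100_000us period.
    println!("{:?}", cpu_properties(Some(1024), Some(100_000), Some(50_000)));
}
```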
## Systemd Interface
`session.rs` and `system.rs` in `src/agent/rustjail/src/cgroups/systemd/interface` are automatically generated by `zbus-xmlgen`, an accompanying tool provided by `zbus` that generates Rust code from D-Bus XML interface descriptions. The commands used to generate these two files are as follows:
```shell
# system.rs
zbus-xmlgen --system org.freedesktop.systemd1 /org/freedesktop/systemd1
# session.rs
zbus-xmlgen --session org.freedesktop.systemd1 /org/freedesktop/systemd1
```
The current implementation of `cgroups/systemd` uses `system.rs` while `session.rs` could be used to build rootless containers in the future.
## References
- [runc - systemd cgroup driver](https://github.com/opencontainers/runc/blob/main/docs/systemd.md)
- [systemd.resource-control — Resource control unit settings](https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html)


@@ -1 +1 @@
*(draw.io `mxfile` diagram source, before and after the change; machine-generated content omitted)*

Binary file not shown. (Before: 90 KiB, After: 93 KiB)


@@ -1 +0,0 @@
*(draw.io `mxfile` diagram source of the removed diagram; machine-generated content omitted)*

Binary files not shown. (Removed images: 51 KiB, 390 KiB, 942 KiB, 182 KiB and 193 KiB)

docs/design/architecture.md (new file, 290 lines)
@@ -0,0 +1,290 @@
# Kata Containers Architecture
## Overview
This is an architectural overview of Kata Containers, based on the 2.0 release.
The primary deliverable of the Kata Containers project is a CRI friendly shim. There is also a CRI friendly library API behind it.
The [Kata Containers runtime](../../src/runtime)
is compatible with the [OCI](https://github.com/opencontainers) [runtime specification](https://github.com/opencontainers/runtime-spec)
and therefore works seamlessly with the [Kubernetes\* Container Runtime Interface (CRI)](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md)
through the [CRI-O\*](https://github.com/kubernetes-incubator/cri-o) and
[Containerd\*](https://github.com/containerd/containerd) implementation.
Kata Containers creates a QEMU\*/KVM virtual machine for each pod that `kubelet` (Kubernetes) creates.
The [`containerd-shim-kata-v2` (shown as `shimv2` from this point onwards)](../../src/runtime/cmd/containerd-shim-kata-v2/)
is the Kata Containers entrypoint, which
implements the [Containerd Runtime V2 (Shim API)](https://github.com/containerd/containerd/tree/master/runtime/v2) for Kata.
Before `shimv2` (as done in [Kata Containers 1.x releases](https://github.com/kata-containers/runtime/releases)), a `containerd-shim` and a [`kata-shim`](https://github.com/kata-containers/shim) had to be created for each container and for the Pod sandbox itself, plus an optional [`kata-proxy`](https://github.com/kata-containers/proxy) when VSOCK was not available. With `shimv2`, Kubernetes can launch Pod and OCI compatible containers with one shim (the `shimv2`) per Pod instead of `2N+1` shims, and with no standalone `kata-proxy` process even when VSOCK is not available.
![Kubernetes integration with shimv2](arch-images/shimv2.svg)
The container process is then spawned by
[`kata-agent`](../../src/agent), an agent process running
as a daemon inside the virtual machine. `kata-agent` runs a [`ttRPC`](https://github.com/containerd/ttrpc-rust) server in
the guest using a VIRTIO serial or VSOCK interface which QEMU exposes as a socket
file on the host. `shimv2` uses a `ttRPC` protocol to communicate with
the agent. This protocol allows the runtime to send container management
commands to the agent. The protocol is also used to carry the I/O streams (stdout,
stderr, stdin) between the containers and the management engines (e.g. CRI-O or containerd).
For any given container, both the init process and all potentially executed
commands within that container, together with their related I/O streams, need
to go through the VSOCK interface exported by QEMU.
The container workload, that is, the actual OCI bundle rootfs, is exported from the
host to the virtual machine. In the case where a block-based graph driver is
configured, `virtio-scsi` will be used. In all other cases a `virtio-fs` VIRTIO mount point
will be used. `kata-agent` uses this mount point as the root filesystem for the
container processes.
## Virtualization
How Kata Containers maps container concepts to virtual machine technologies, and how this is realized in the multiple
hypervisors and VMMs that Kata supports, is described within the [virtualization documentation](./virtualization.md).
## Guest assets
The hypervisor will launch a virtual machine which includes a minimal guest kernel
and a guest image.
### Guest kernel
The guest kernel is passed to the hypervisor and used to boot the virtual
machine. The default kernel provided in Kata Containers is highly optimized for
kernel boot time and minimal memory footprint, providing only those services
required by a container workload. This is based on a very current upstream Linux
kernel.
### Guest image
Kata Containers supports both an `initrd` and `rootfs` based minimal guest image.
#### Root filesystem image
The default packaged root filesystem image, sometimes referred to as the "mini O/S", is a
highly optimized container bootstrap system based on [Clear Linux](https://clearlinux.org/). It provides an extremely minimal environment and
has a highly optimized boot path.
The only services running in the context of the mini O/S are the init daemon
(`systemd`) and the [Agent](#agent). The real workload the user wishes to run
is created using libcontainer, creating a container in the same manner that is done
by `runc`.
For example, when `ctr run -ti ubuntu date` is run:
- The hypervisor will boot the mini-OS image using the guest kernel.
- `systemd`, running inside the mini-OS context, will launch the `kata-agent` in
the same context.
- The agent will create a new confined context to run the specified command in
(`date` in this example).
- The agent will then execute the command (`date` in this example) inside this
new context, first setting the root filesystem to the expected Ubuntu\* root
filesystem.
#### Initrd image
A compressed `cpio(1)` archive, created from a rootfs which is loaded into memory and used as part of the Linux startup process. During startup, the kernel unpacks it into a special instance of a `tmpfs` that becomes the initial root filesystem.
The only service running in the context of the initrd is the [Agent](#agent) as the init daemon. The real workload the user wishes to run is created using libcontainer, creating a container in the same manner that is done by `runc`.
## Agent
[`kata-agent`](../../src/agent) is a process running in the guest as a supervisor for managing containers and processes running within those containers.
For the 2.0 release, the `kata-agent` was rewritten in the [Rust programming language](https://www.rust-lang.org/) to minimize its memory footprint while keeping the memory safety of the original Go version of the [`kata-agent` used in Kata Containers 1.x](https://github.com/kata-containers/agent). The memory footprint reduction is substantial, from tens of megabytes down to less than 100 kilobytes, enabling Kata Containers in more use cases such as function computing and edge computing.
The `kata-agent` execution unit is the sandbox. A `kata-agent` sandbox is a container sandbox defined by a set of namespaces (NS, UTS, IPC and PID). `shimv2` can
run several containers per VM to support container engines that require multiple
containers running inside a pod.
`kata-agent` communicates with the other Kata components over `ttRPC`.
## Runtime
`containerd-shim-kata-v2` is a [containerd runtime shimv2](https://github.com/containerd/containerd/blob/v1.4.1/runtime/v2/README.md) implementation and is responsible for handling the `runtime v2 shim APIs`, which are similar to [the OCI runtime specification](https://github.com/opencontainers/runtime-spec) but simplify the architecture by loading the runtime once and making RPC calls to handle the various container lifecycle commands. This refinement is an improvement on the OCI specification, which requires the container manager to call the runtime binary multiple times, at least once for each lifecycle command.
`containerd-shim-kata-v2` heavily utilizes the
[virtcontainers package](../../src/runtime/virtcontainers/), which provides a generic, runtime-specification agnostic, hardware-virtualized containers library.
### Configuration
The runtime uses a TOML format configuration file called `configuration.toml`. By default this file is installed in the `/usr/share/defaults/kata-containers` directory and contains various settings such as the paths to the hypervisor, the guest kernel and the mini-OS image.
The actual configuration file paths can be determined by running:
```
$ kata-runtime --show-default-config-paths
```
Most users will not need to modify the configuration file.
The file is well commented and provides a few "knobs" that can be used to modify the behavior of the runtime and your chosen hypervisor.
The configuration file is also used to enable runtime [debug output](../Developer-Guide.md#enable-full-debug).
## Networking
Containers will typically live in their own, possibly shared, networking namespace.
At some point in a container lifecycle, container engines will set up that namespace
to add the container to a network which is isolated from the host network, but
which is shared between containers.
In order to do so, container engines will usually add one end of a virtual
ethernet (`veth`) pair into the container networking namespace. The other end of
the `veth` pair is added to the host networking namespace.
This is a very namespace-centric approach as many hypervisors/VMMs cannot handle `veth`
interfaces. Typically, `TAP` interfaces are created for VM connectivity.
To overcome the incompatibility between typical container engines' expectations
and virtual machines, Kata Containers networking transparently connects `veth`
interfaces with `TAP` ones using Traffic Control:
![Kata Containers networking](arch-images/network.png)
With a TC filter in place, a redirection is created between the container network and the
virtual machine. As an example, the CNI may create a device, `eth0`, in the container's network
namespace, which is a VETH device. Kata Containers will create a tap device for the VM, `tap0_kata`,
and set up a TC redirection filter to mirror traffic from `eth0`'s ingress to `tap0_kata`'s egress,
and a second filter to mirror traffic from `tap0_kata`'s ingress to `eth0`'s egress, as sketched below.
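The effect of those two filters can be reproduced with plain `tc(8)` commands. The sketch below is illustrative only: the device names are examples, and the runtime sets this up internally rather than by shelling out:

```rust
use std::process::Command;

// Mirror everything arriving on `src` out through `dst` with a
// tc-mirred redirect filter.
fn mirror(src: &str, dst: &str) -> std::io::Result<()> {
    // Attach an ingress qdisc so filters can act on incoming traffic.
    Command::new("tc")
        .args(["qdisc", "add", "dev", src, "ingress"])
        .status()?;
    // Match all packets and redirect them to the egress side of `dst`.
    Command::new("tc")
        .args([
            "filter", "add", "dev", src, "parent", "ffff:", "protocol",
            "all", "u32", "match", "u8", "0", "0", "action", "mirred",
            "egress", "redirect", "dev", dst,
        ])
        .status()?;
    Ok(())
}

fn main() -> std::io::Result<()> {
    // eth0 ingress -> tap0_kata egress, and the reverse direction.
    mirror("eth0", "tap0_kata")?;
    mirror("tap0_kata", "eth0")
}
```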
Kata Containers maintains support for MACVTAP, which was an earlier implementation used in Kata. TC-filter
is the default because it allows for simpler configuration, better CNI plugin compatibility, and performance
on par with MACVTAP.
Kata Containers has deprecated support for bridge due to its lower performance relative to TC-filter and MACVTAP.
Kata Containers supports both
[CNM](https://github.com/docker/libnetwork/blob/master/docs/design.md#the-container-network-model)
and [CNI](https://github.com/containernetworking/cni) for networking management.
### Network Hotplug
Kata Containers has developed a set of network sub-commands and APIs to add, list and
remove a guest network endpoint and to manipulate the guest route table.
The following diagram illustrates the Kata Containers network hotplug workflow.
![Network Hotplug](arch-images/kata-containers-network-hotplug.png)
## Storage
Container workloads are shared with the virtualized environment through [virtio-fs](https://virtio-fs.gitlab.io/).
The [devicemapper `snapshotter`](https://github.com/containerd/containerd/tree/master/snapshots/devmapper) is a special case. The `snapshotter` uses dedicated block devices rather than formatted filesystems, and operates at the block level rather than the file level. This knowledge is used to directly use the underlying block device instead of the overlay file system for the container root file system. The block device maps to the top read-write layer for the overlay. This approach gives much better I/O performance compared to using `virtio-fs` to share the container file system.
Kata Containers has the ability to hotplug and remove block devices, which makes it possible to use block devices for containers started after the VM has been launched.
Users can check to see if the container uses the devicemapper block device as its rootfs by calling `mount(8)` within the container. If the devicemapper block device
is used, `/` will be mounted on `/dev/vda`. Users can disable direct mounting of the underlying block device through the runtime configuration.
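That check can be automated from inside the container with a few lines that scan `/proc/mounts` for the root entry (a sketch, not project code):

```rust
use std::fs;

// Print the device `/` is mounted on; with the devicemapper snapshotter
// passthrough this is a block device such as /dev/vda.
fn main() -> std::io::Result<()> {
    for line in fs::read_to_string("/proc/mounts")?.lines() {
        let mut fields = line.split_whitespace();
        if let (Some(dev), Some("/")) = (fields.next(), fields.next()) {
            println!("rootfs device: {}", dev);
        }
    }
    Ok(())
}
```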
## Kubernetes support
[Kubernetes\*](https://github.com/kubernetes/kubernetes/) is a popular open source
container orchestration engine. In Kubernetes, a set of containers sharing resources
such as networking, storage, mount, PID, etc. is called a
[Pod](https://kubernetes.io/docs/user-guide/pods/).
A node can have multiple pods, but at a minimum, a node within a Kubernetes cluster
only needs to run a container runtime and a container agent (called a
[Kubelet](https://kubernetes.io/docs/admin/kubelet/)).
A Kubernetes cluster runs a control plane where a scheduler (typically running on a
dedicated master node) calls into a compute Kubelet. This Kubelet instance is
responsible for managing the lifecycle of pods within the nodes and eventually relies
on a container runtime to handle execution. The Kubelet architecture decouples
lifecycle management from container execution through the dedicated
`gRPC` based [Container Runtime Interface (CRI)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/container-runtime-interface-v1.md).
In other words, a Kubelet is a CRI client and expects a CRI implementation to
handle the server side of the interface.
[CRI-O\*](https://github.com/kubernetes-incubator/cri-o) and [Containerd\*](https://github.com/containerd/containerd/) are CRI implementations that rely on [OCI](https://github.com/opencontainers/runtime-spec)
compatible runtimes for managing container instances.
Kata Containers is an officially supported CRI-O and Containerd runtime. Refer to the following guides on how to set up Kata Containers with Kubernetes:
- [How to use Kata Containers and Containerd](../how-to/containerd-kata.md)
- [Run Kata Containers with Kubernetes](../how-to/run-kata-with-k8s.md)
#### OCI annotations
In order for the Kata Containers runtime (or any virtual machine based OCI compatible
runtime) to be able to understand if it needs to create a full virtual machine or if it
has to create a new container inside an existing pod's virtual machine, CRI-O adds
specific annotations to the OCI configuration file (`config.json`) which is passed to
the OCI compatible runtime.
Before calling its runtime, CRI-O will always add a `io.kubernetes.cri-o.ContainerType`
annotation to the `config.json` configuration file it produces from the Kubelet CRI
request. The `io.kubernetes.cri-o.ContainerType` annotation can either be set to `sandbox`
or `container`. Kata Containers will then use this annotation to decide whether it needs to
create a new virtual machine, or a container inside an existing virtual machine associated
with a Kubernetes pod:
```Go
// Determine whether the OCI spec describes a pod sandbox or a
// container belonging to an existing sandbox.
containerType, err := ociSpec.ContainerType()
if err != nil {
	return err
}

handleFactory(ctx, runtimeConfig)

disableOutput := noNeedForOutput(detach, ociSpec.Process.Terminal)

var process vc.Process
switch containerType {
case vc.PodSandbox:
	// A sandbox: create a new virtual machine to host the pod.
	process, err = createSandbox(ctx, ociSpec, runtimeConfig, containerID, bundlePath, console, disableOutput, systemdCgroup)
	if err != nil {
		return err
	}
case vc.PodContainer:
	// A container: create it inside the existing pod VM.
	process, err = createContainer(ctx, ociSpec, containerID, bundlePath, console, disableOutput)
	if err != nil {
		return err
	}
}
```
#### Mixing VM based and namespace based runtimes
> **Note:** Since Kubernetes 1.12, the [`Kubernetes RuntimeClass`](https://kubernetes.io/docs/concepts/containers/runtime-class/)
> has been supported, and the user can specify a runtime without the non-standardized annotations.
With `RuntimeClass`, users can define Kata Containers as a `RuntimeClass` and then explicitly specify that a pod be created as a Kata Containers pod. For details, please refer to [How to use Kata Containers and Containerd](../../docs/how-to/containerd-kata.md).
# Appendices
## DAX
Kata Containers utilizes the Linux kernel DAX [(Direct Access filesystem)](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/filesystems/dax.rst?h=v5.14)
feature to efficiently map some host-side files into the guest VM space.
In particular, Kata Containers uses the QEMU NVDIMM feature to provide a
memory-mapped virtual device that can be used to DAX map the virtual machine's
root filesystem into the guest memory address space.
Mapping files using DAX provides a number of benefits over more traditional VM
file and device mapping mechanisms:
- Mapping as a direct access device allows the guest to directly access
the host memory pages (such as via Execute In Place (XIP)), bypassing the guest
page cache. This provides both time and space optimizations.
- Mapping as a direct access device inside the VM allows pages from the
host to be demand loaded using page faults, rather than having to make requests
via a virtualized device (causing expensive VM exits/hypercalls), thus providing
a speed optimization.
- Utilizing `MAP_SHARED` shared memory on the host allows the host to efficiently
share pages.
Kata Containers uses the following steps to set up the DAX mappings:
1. QEMU is configured with an NVDIMM memory device, with a memory file
backend to map the host-side file into the virtual NVDIMM space (see the
illustrative invocation after these steps).
2. The guest kernel command line mounts this NVDIMM device with the DAX
feature enabled, allowing direct page mapping and access, thus bypassing the
guest page cache.
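To make these steps concrete, the sketch below launches QEMU by hand with a file-backed NVDIMM device. It is illustrative only: the machine type, paths, sizes and kernel parameters are placeholder assumptions, not the command line Kata generates:

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    let status = Command::new("qemu-system-x86_64")
        .args([
            // NVDIMM support must be enabled on the machine, and hotplug
            // memory (slots/maxmem) must be configured for it.
            "-machine", "pc,nvdimm=on",
            "-m", "2G,slots=2,maxmem=4G",
            // Step 1: a file-backed memory object maps the rootfs image
            // into the virtual NVDIMM space.
            "-object", "memory-backend-file,id=mem0,share=on,mem-path=/usr/share/kata-containers/kata-containers.img,size=128M",
            "-device", "nvdimm,id=nv0,memdev=mem0",
            // Step 2: boot a kernel that mounts the pmem device with DAX.
            "-kernel", "/usr/share/kata-containers/vmlinuz",
            "-append", "root=/dev/pmem0p1 rootflags=dax rootfstype=ext4",
        ])
        .status()?;
    println!("qemu exited with: {}", status);
    Ok(())
}
```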
![DAX](arch-images/DAX.png)
Information on the use of NVDIMM via QEMU is available in the [QEMU source code](http://git.qemu-project.org/?p=qemu.git;a=blob;f=docs/nvdimm.txt;hb=HEAD).


@@ -1,477 +0,0 @@
# Kata Containers Architecture
## Overview
Kata Containers is an open source community working to build a secure
container [runtime](#runtime) with lightweight virtual machines (VMs)
that feel and perform like standard Linux containers, but provide
stronger [workload](#workload) isolation using hardware
[virtualization](#virtualization) technology as a second layer of
defence.
Kata Containers runs on [multiple architectures](../../../src/runtime/README.md#platform-support)
and supports [multiple hypervisors](../../hypervisors.md).
This document is a summary of the Kata Containers architecture.
## Background knowledge
This document assumes the reader understands a number of concepts
related to containers and file systems. The
[background](background.md) document explains these concepts.
## Example command
This document makes use of a particular [example
command](example-command.md) throughout the text to illustrate certain
concepts.
## Virtualization
For details on how Kata Containers maps container concepts to VM
technologies, and how this is realized in the multiple hypervisors and
VMMs that Kata supports see the
[virtualization documentation](../virtualization.md).
## Compatibility
The [Kata Containers runtime](../../../src/runtime) is compatible with
the [OCI](https://github.com/opencontainers)
[runtime specification](https://github.com/opencontainers/runtime-spec)
and therefore works seamlessly with the
[Kubernetes Container Runtime Interface (CRI)](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md)
through the [CRI-O](https://github.com/kubernetes-incubator/cri-o)
and [containerd](https://github.com/containerd/containerd)
implementations.
Kata Containers provides a ["shimv2"](#shim-v2-architecture) compatible runtime.
## Shim v2 architecture
The Kata Containers runtime is shim v2 ("shimv2") compatible. This
section explains what this means.
> **Note:**
>
> For a comparison with the Kata 1.x architecture, see
> [the architectural history document](history.md).
The
[containerd runtime shimv2 architecture](https://github.com/containerd/containerd/tree/main/runtime/v2)
or _shim API_ architecture resolves the issues with the old
architecture by defining a set of shimv2 APIs that a compatible
runtime implementation must supply. Rather than calling the runtime
binary multiple times for each new container, the shimv2 architecture
runs a single instance of the runtime binary (for any number of
containers). This improves performance and resolves the state handling
issue.
The shimv2 API is similar to the
[OCI runtime](https://github.com/opencontainers/runtime-spec)
API in terms of the way the container lifecycle is split into
different verbs. Rather than calling the runtime multiple times, the
container manager creates a socket and passes it to the shimv2
runtime. The socket is a bi-directional communication channel that
uses a gRPC based protocol to allow the container manager to send API
calls to the runtime, which returns the result to the container
manager using the same channel.
The shimv2 architecture allows running several containers per VM to
support container engines that require multiple containers running
inside a pod.
With the new architecture [Kubernetes](kubernetes.md) can
launch both Pod and OCI compatible containers with a single
[runtime](#runtime) shim per Pod, rather than `2N+1` shims. No stand
alone `kata-proxy` process is required, even if VSOCK is not
available.
## Workload
The workload is the command the user requested to run in the
container and is specified in the [OCI bundle](background.md#oci-bundle)'s
configuration file.
In our [example](example-command.md), the workload is the `sh(1)` command.
### Workload root filesystem
For details of how the [runtime](#runtime) makes the
[container image](background.md#container-image) chosen by the user available to
the workload process, see the
[Container creation](#container-creation) and [storage](#storage) sections.
Note that the workload is isolated from the [guest VM](#environments) environment by its
surrounding [container environment](#environments). The guest VM
environment where the container runs in is also isolated from the _outer_
[host environment](#environments) where the container manager runs.
## System overview
### Environments
The following terminology is used to describe the different
environments (or contexts) that various processes run in. It is necessary
to study this table closely to make sense of what follows:
| Type | Name | Virtualized | Containerized | rootfs | Rootfs device type | Mount type | Description |
|-|-|-|-|-|-|-|-|
| Host | Host | no `[1]` | no | Host specific | Host specific | Host specific | The environment provided by a standard, physical non virtualized system. |
| VM root | Guest VM | yes | no | rootfs inside the [guest image](guest-assets.md#guest-image) | Hypervisor specific `[2]` | `ext4` | The first (or top) level VM environment created on a host system. |
| VM container root | Container | yes | yes | rootfs type requested by user ([`ubuntu` in the example](example-command.md)) | `kataShared` | [virtio FS](storage.md#virtio-fs) | The first (or top) level container environment created inside the VM. Based on the [OCI bundle](background.md#oci-bundle). |
**Key:**
- `[1]`: For simplicity, this document assumes the host environment
runs on physical hardware.
- `[2]`: See the [DAX](#dax) section.
> **Notes:**
>
> - The word "root" is used to mean _top level_ here in a similar
> manner to the term [rootfs](background.md#root-filesystem).
>
> - The term "first level" prefix used above is important since it implies
> that it is possible to create multi level systems. However, they do
> not form part of a standard Kata Containers environment so will not
> be considered in this document.
The reasons for containerizing the [workload](#workload) inside the VM
are:
- Isolates the workload entirely from the VM environment.
- Provides better isolation between containers in a [pod](kubernetes.md).
- Allows the workload to be managed and monitored through its cgroup
confinement.
### Container creation
The steps below show at a high level how a Kata Containers container is
created using the containerd container manager:
1. The user requests the creation of a container by running a command
like the [example command](example-command.md).
1. The container manager daemon runs a single instance of the Kata
[runtime](#runtime).
1. The Kata runtime loads its [configuration file](#configuration).
1. The container manager calls a set of shimv2 API functions on the runtime.
1. The Kata runtime launches the configured [hypervisor](#hypervisor).
1. The hypervisor creates and starts (_boots_) a VM using the
[guest assets](guest-assets.md#guest-assets):
- The hypervisor [DAX](#dax) shares the
[guest image](guest-assets.md#guest-image)
into the VM to become the VM [rootfs](background.md#root-filesystem) (mounted on a `/dev/pmem*` device),
which is known as the [VM root environment](#environments).
- The hypervisor mounts the [OCI bundle](background.md#oci-bundle), using [virtio FS](storage.md#virtio-fs),
into a container specific directory inside the VM's rootfs.
This container specific directory will become the
[container rootfs](#environments), known as the
[container environment](#environments).
1. The [agent](#agent) is started as part of the VM boot.
1. The runtime calls the agent's `CreateSandbox` API to request the
agent create a container:
1. The agent creates a [container environment](#environments)
in the container specific directory that contains the [container rootfs](#environments).
The container environment hosts the [workload](#workload) in the
[container rootfs](#environments) directory.
1. The agent spawns the workload inside the container environment.
> **Notes:**
>
> - The container environment created by the agent is equivalent to
> a container environment created by the
> [`runc`](https://github.com/opencontainers/runc) OCI runtime;
> Linux cgroups and namespaces are created inside the VM by the
> [guest kernel](guest-assets.md#guest-kernel) to isolate the
> workload from the VM environment the container is created in.
> See the [Environments](#environments) section for an
> explanation of why this is done.
>
> - See the [guest image](guest-assets.md#guest-image) section for
> details of exactly how the agent is started.
1. The container manager returns control of the container to the
user running the `ctr` command.
> **Note:**
>
> At this point, the container is running and:
>
> - The [workload](#workload) process ([`sh(1)` in the example](example-command.md))
> is running in the [container environment](#environments).
> - The user is now able to interact with the workload
> (using the [`ctr` command in the example](example-command.md)).
> - The [agent](#agent), running inside the VM is monitoring the
> [workload](#workload) process.
> - The [runtime](#runtime) is waiting for the agent's `WaitProcess` API
> call to complete.
Further details of these steps are provided in the sections below.
### Container shutdown
There are two possible ways for the container environment to be
terminated:
- When the [workload](#workload) exits.
This is the standard, or _graceful_ shutdown method.
- When the container manager forces the container to be deleted.
#### Workload exit
The [agent](#agent) will detect when the [workload](#workload) process
exits, capture its exit status (see `wait(2)`) and return that value
to the [runtime](#runtime) by specifying it as the response to the
`WaitProcess` agent API call made by the [runtime](#runtime).
The runtime then passes the value back to the container manager by the
`Wait` [shimv2 API](#shim-v2-architecture) call.
Once the workload has fully exited, the VM is no longer needed and the
runtime cleans up the environment (which includes terminating the
[hypervisor](#hypervisor) process).
> **Note:**
>
> When [agent tracing is enabled](../../tracing.md#agent-shutdown-behaviour),
> the shutdown behaviour is different.
#### Container manager requested shutdown
If the container manager requests the container be deleted, the
[runtime](#runtime) will signal the agent by sending it a
`DestroySandbox` [ttRPC API](../../../src/libs/protocols/protos/agent.proto) request.
## Guest assets
The guest assets comprise a guest image and a guest kernel that are
used by the [hypervisor](#hypervisor).
See the [guest assets](guest-assets.md) document for further
information.
## Hypervisor
The [hypervisor](../../hypervisors.md) specified in the
[configuration file](#configuration) creates a VM to host the
[agent](#agent) and the [workload](#workload) inside the
[container environment](#environments).
> **Note:**
>
> The hypervisor process runs inside an environment slightly different
> to the host environment:
>
> - It is run in a different cgroup environment to the host.
> - It is given a separate network namespace from the host.
> - If the [OCI configuration specifies a SELinux label](https://github.com/opencontainers/runtime-spec/blob/main/config.md#linux-process),
> the hypervisor process will run with that label (*not* the workload running inside the hypervisor's VM).
## Agent
The Kata Containers agent ([`kata-agent`](../../../src/agent)), written
in the [Rust programming language](https://www.rust-lang.org), is a
long running process that runs inside the VM. It acts as the
supervisor for managing the containers and the [workload](#workload)
running within those containers. Only a single agent process is run
for each VM created.
### Agent communications protocol
The agent communicates with the other Kata components (primarily the
[runtime](#runtime)) using a
[`ttRPC`](https://github.com/containerd/ttrpc-rust) based
[protocol](../../../src/libs/protocols/protos).
> **Note:**
>
> If you wish to learn more about this protocol, a practical way to do
> so is to experiment with the
> [agent control tool](#agent-control-tool) on a test system.
> This tool is for test and development purposes only and can send
> arbitrary ttRPC agent API commands to the [agent](#agent).
## Runtime
The Kata Containers runtime (the [`containerd-shim-kata-v2`](../../../src/runtime/cmd/containerd-shim-kata-v2) binary) is a [shimv2](#shim-v2-architecture) compatible runtime.
> **Note:**
>
> The Kata Containers runtime is sometimes referred to as the Kata
> _shim_. Both terms are correct since the `containerd-shim-kata-v2`
> is a container runtime, and that runtime implements the containerd
> shim v2 API.
The runtime makes heavy use of the [`virtcontainers`
package](../../../src/runtime/virtcontainers), which provides a generic,
runtime-specification agnostic, hardware-virtualized containers
library.
The runtime is responsible for starting the [hypervisor](#hypervisor)
and its VM, and communicating with the [agent](#agent) using a
[ttRPC based protocol](#agent-communications-protocol) over a VSOCK
socket that provides a communications link between the VM and the
host.
This protocol allows the runtime to send container management commands
to the agent. The protocol is also used to carry the standard I/O
streams (`stdout`, `stderr`, `stdin`) between the containers and
container managers (such as CRI-O or containerd).
## Utility program
The `kata-runtime` binary is a utility program that provides
administrative commands to manipulate and query a Kata Containers
installation.
> **Note:**
>
> In Kata 1.x, this program also acted as the main
> [runtime](#runtime), but this is no longer required due to the
> improved shimv2 architecture.
### exec command
The `exec` command allows an administrator or developer to enter the
[VM root environment](#environments) which is not accessible by the container
[workload](#workload).
See [the developer guide](../../Developer-Guide.md#connect-to-debug-console) for further details.
### Configuration
See the [configuration file details](../../../src/runtime/README.md#configuration).
The configuration file is also used to enable runtime [debug output](../../Developer-Guide.md#enable-full-debug).
## Process overview
The table below shows an example of the main processes running in the
different [environments](#environments) when a Kata Container is
created with containerd using our [example command](example-command.md):
| Description | Host | VM root environment | VM container environment |
|-|-|-|-|
| Container manager | `containerd` | | |
| Kata Containers | [runtime](#runtime), [`virtiofsd`](storage.md#virtio-fs), [hypervisor](#hypervisor) | [agent](#agent) | |
| User [workload](#workload) | | | [`ubuntu sh`](example-command.md) |
## Networking
See the [networking document](networking.md).
## Storage
See the [storage document](storage.md).
## Kubernetes support
See the [Kubernetes document](kubernetes.md).
#### OCI annotations
In order for the Kata Containers [runtime](#runtime) (or any VM based OCI compatible
runtime) to be able to understand if it needs to create a full VM or if it
has to create a new container inside an existing pod's VM, CRI-O adds
specific annotations to the OCI configuration file (`config.json`) which is passed to
the OCI compatible runtime.
Before calling its runtime, CRI-O will always add a `io.kubernetes.cri-o.ContainerType`
annotation to the `config.json` configuration file it produces from the Kubelet CRI
request. The `io.kubernetes.cri-o.ContainerType` annotation can either be set to `sandbox`
or `container`. Kata Containers will then use this annotation to decide whether it needs to
create a new virtual machine, or a container inside an existing virtual machine associated
with a Kubernetes pod:
| Annotation value | Kata VM created? | Kata container created? |
|-|-|-|
| `sandbox` | yes | yes (inside new VM) |
| `container`| no | yes (in existing VM) |
#### Mixing VM based and namespace based runtimes
> **Note:** Since Kubernetes 1.12, the [`Kubernetes RuntimeClass`](https://kubernetes.io/docs/concepts/containers/runtime-class/)
> has been supported, and the user can specify a runtime without the non-standardized annotations.
With `RuntimeClass`, users can define Kata Containers as a
`RuntimeClass` and then explicitly specify that a pod must be created
as a Kata Containers pod. For details, please refer to [How to use
Kata Containers and containerd](../../../docs/how-to/containerd-kata.md).
## Tracing
The [tracing document](../../tracing.md) provides details on the tracing
architecture.
# Appendices
## DAX
Kata Containers utilizes the Linux kernel DAX
[(Direct Access filesystem)](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/filesystems/dax.rst?h=v5.14)
feature to efficiently map the [guest image](guest-assets.md#guest-image) in the
[host environment](#environments) into the
[guest VM environment](#environments) to become the VM's
[rootfs](background.md#root-filesystem).
If the [configured](#configuration) [hypervisor](#hypervisor) is set
to either QEMU or Cloud Hypervisor, DAX is used with the feature shown
in the table below:
| Hypervisor | Feature used | rootfs device type |
|-|-|-|
| Cloud Hypervisor (CH) | `dax` `FsConfig` configuration option | PMEM (emulated Persistent Memory device) |
| QEMU | NVDIMM memory device with a memory file backend | NVDIMM (emulated Non-Volatile Dual In-line Memory Module device) |
The features in the table above are equivalent in that they provide a memory-mapped
virtual device which is used to DAX map the VM's
[rootfs](background.md#root-filesystem) into the [VM guest](#environments) memory
address space.
The VM is then booted, specifying the `root=` kernel parameter to make
the [guest kernel](guest-assets.md#guest-kernel) use the appropriate emulated device
as its rootfs.
### DAX advantages
Mapping files using [DAX](#dax) provides a number of benefits over
more traditional VM file and device mapping mechanisms:
- Mapping as a direct access device allows the guest to directly
access the host memory pages (such as via Execute In Place (XIP)),
bypassing the [guest kernel](guest-assets.md#guest-kernel)'s page cache. This
zero copy provides both time and space optimizations.
- Mapping as a direct access device inside the VM allows pages from the
host to be demand loaded using page faults, rather than having to make requests
via a virtualized device (causing expensive VM exits/hypercalls), thus providing
a speed optimization.
- Utilizing `mmap(2)`'s `MAP_SHARED` shared memory option on the host
allows the host to efficiently share pages.
![DAX](../arch-images/DAX.png)
For further details of the use of NVDIMM with QEMU, see the [QEMU
project documentation](https://www.qemu.org).
## Agent control tool
The [agent control tool](../../../src/tools/agent-ctl) is a test and
development tool that can be used to learn more about a Kata
Containers system.
## Terminology
See the [project glossary](../../../Glossary.md).


@@ -1,81 +0,0 @@
# Kata Containers architecture background knowledge
The following sections explain some of the background concepts
required to understand the [architecture document](README.md).
## Root filesystem
This document uses the term _rootfs_ to refer to a root filesystem
which is mounted as the top-level directory ("`/`") and often referred
to as _slash_.
It is important to understand this term since the overall system uses
multiple different rootfs's (as explained in the
[Environments](README.md#environments) section).
## Container image
In the [example command](example-command.md) the user has specified the
type of container they wish to run via the container image name:
`ubuntu`. This image name corresponds to a _container image_ that can
be used to create a container with an Ubuntu Linux environment. Hence,
in our [example](example-command.md), the `sh(1)` command will be run
inside a container which has an Ubuntu rootfs.
> **Note:**
>
> The term _container image_ is confusing since the image in question
> is **not** a container: it is simply a set of files (_an image_)
> that can be used to _create_ a container. The term _container
> template_ would be more accurate but the term _container image_ is
> commonly used so this document uses the standard term.
For the purposes of this document, the most important part of the
[example command line](example-command.md) is the container image the
user has requested. Normally, the container manager will _pull_
(download) a container image from a remote site and store a copy
locally. This local container image is used by the container manager
to create an [OCI bundle](#oci-bundle) which will form the environment
the container will run in. After creating the OCI bundle, the
container manager launches a [runtime](README.md#runtime) which will create the
container using the provided OCI bundle.
## OCI bundle
To understand what follows, it is important to know at a high level
how an OCI ([Open Containers Initiative](https://opencontainers.org)) compatible container is created.
An OCI compatible container is created by taking a
[container image](#container-image) and converting the embedded rootfs
into an
[OCI rootfs bundle](https://github.com/opencontainers/runtime-spec/blob/main/bundle.md),
or more simply, an _OCI bundle_.
An OCI bundle is a `tar(1)` archive, normally created by a container
manager, which is passed to an OCI [runtime](README.md#runtime) that converts
it into a full container rootfs. The bundle contains two assets:
- A container image [rootfs](#root-filesystem)
This is simply a directory of files that will be used to represent
the rootfs for the container.
For the [example command](example-command.md), the directory will
contain the files necessary to create a minimal Ubuntu root
filesystem.
- An [OCI configuration file](https://github.com/opencontainers/runtime-spec/blob/main/config.md)
This is a JSON file called `config.json`.
The container manager will create this file so that:
- The `root.path` value is set to the full path of the specified
container rootfs.
In [the example](example-command.md) this value will be `ubuntu`.
- The `process.args` array specifies the list of commands the user
wishes to run. This is known as the [workload](README.md#workload).
In [the example](example-command.md) the workload is `sh(1)`.


@@ -1,30 +0,0 @@
# Example command
The following containerd command creates a container. It is referred
to throughout the architecture document to help explain various points:
```bash
$ sudo ctr run --runtime "io.containerd.kata.v2" --rm -t "quay.io/libpod/ubuntu:latest" foo sh
```
This command requests that containerd:
- Create a container (`ctr run`).
- Use the Kata [shimv2](README.md#shim-v2-architecture) runtime (`--runtime "io.containerd.kata.v2"`).
- Delete the container when it [exits](README.md#workload-exit) (`--rm`).
- Attach the container to the user's terminal (`-t`).
- Use the Ubuntu Linux [container image](background.md#container-image)
to create the container [rootfs](background.md#root-filesystem) that will become
the [container environment](README.md#environments)
(`quay.io/libpod/ubuntu:latest`).
- Create the container with the name "`foo`".
- Run the `sh(1)` command in the Ubuntu rootfs based container
environment.
The command specified here is referred to as the [workload](README.md#workload).
> **Note:**
>
> For the purposes of this document and to keep explanations
> simpler, we assume the user is running this command in the
> [host environment](README.md#environments).


@@ -1,152 +0,0 @@
# Guest assets
Kata Containers creates a VM in which to run one or more containers.
It does this by launching a [hypervisor](README.md#hypervisor) to
create the VM. The hypervisor needs two assets for this task: a Linux
kernel and a small root filesystem image to boot the VM.
## Guest kernel
The [guest kernel](../../../tools/packaging/kernel)
is passed to the hypervisor and used to boot the VM.
The default kernel provided in Kata Containers is highly optimized for
kernel boot time and minimal memory footprint, providing only those
services required by a container workload. It is based on the latest
Linux LTS (Long Term Support) [kernel](https://www.kernel.org).
## Guest image
The hypervisor uses an image file which provides a minimal root
filesystem used by the guest kernel to boot the VM and host the Kata
Container. Kata Containers supports both initrd and rootfs based
minimal guest images. The [default packages](../../install/) provide both
an image and an initrd, both of which are created using the
[`osbuilder`](../../../tools/osbuilder) tool.
> **Notes:**
>
> - Although initrd and rootfs based images are supported, not all
> [hypervisors](README.md#hypervisor) support both types of image.
>
> - The guest image is *unrelated* to the image used in a container
> workload.
>
> For example, if a user creates a container that runs a shell in a
> BusyBox image, they will run that shell in a BusyBox environment.
> However, the guest image running inside the VM that is used to
> *host* that BusyBox image could be running Clear Linux, Ubuntu,
> Fedora or potentially any other distribution.
>
> The `osbuilder` tool provides
> [configurations for various common Linux distributions](../../../tools/osbuilder/rootfs-builder)
> which can be built into either initrd or rootfs guest images.
>
> - If you are using a [packaged version of Kata
> Containers](../../install), you can see image details by running the
> [`kata-collect-data.sh`](../../../src/runtime/data/kata-collect-data.sh.in)
> script as `root` and looking at the "Image details" section of the
> output.
#### Root filesystem image
The default packaged rootfs image, sometimes referred to as the _mini
O/S_, is a highly optimized container bootstrap system.
If this image type is [configured](README.md#configuration), when the
user runs the [example command](example-command.md):
- The [runtime](README.md#runtime) will launch the configured [hypervisor](README.md#hypervisor).
- The hypervisor will boot the mini-OS image using the [guest kernel](#guest-kernel).
- The kernel will start the init daemon as PID 1 (`systemd`) inside the VM root environment.
- `systemd`, running inside the mini-OS context, will launch the [agent](README.md#agent)
in the root context of the VM.
- The agent will create a new container environment, setting its root
filesystem to that requested by the user (Ubuntu in [the example](example-command.md)).
- The agent will then execute the command (`sh(1)` in [the example](example-command.md))
inside the new container.
The table below summarises the default mini O/S showing the
environments that are created, the services running in those
environments (for all platforms) and the root filesystem used by
each service:
| Process | Environment | systemd service? | rootfs | User accessible | Notes |
|-|-|-|-|-|-|
| systemd | VM root | n/a | [VM guest image](#guest-image)| [debug console][debug-console] | The init daemon, running as PID 1 |
| [Agent](README.md#agent) | VM root | yes | [VM guest image](#guest-image)| [debug console][debug-console] | Runs as a systemd service |
| `chronyd` | VM root | yes | [VM guest image](#guest-image)| [debug console][debug-console] | Used to synchronise the time with the host |
| container workload (`sh(1)` in [the example](example-command.md)) | VM container | no | User specified (Ubuntu in [the example](example-command.md)) | [exec command](README.md#exec-command) | Managed by the agent |
See also the [process overview](README.md#process-overview).
> **Notes:**
>
> - The "User accessible" column shows how an administrator can access
> the environment.
>
> - The container workload is running inside a full container
> environment which itself is running within a VM environment.
>
> - See the [configuration files for the `osbuilder` tool](../../../tools/osbuilder/rootfs-builder)
> for details of the default distribution for platforms other than
> Intel x86_64.
#### Initrd image
The initrd image is a compressed `cpio(1)` archive, created from a
rootfs which is loaded into memory and used as part of the Linux
startup process. During startup, the kernel unpacks it into a special
instance of a `tmpfs` mount that becomes the initial root filesystem.
If this image type is [configured](README.md#configuration), when the user runs
the [example command](example-command.md):
- The [runtime](README.md#runtime) will launch the configured [hypervisor](README.md#hypervisor).
- The hypervisor will boot the mini-OS image using the [guest kernel](#guest-kernel).
- The kernel will start the init daemon as PID 1 (the
[agent](README.md#agent))
inside the VM root environment.
- The [agent](README.md#agent) will create a new container environment, setting its root
filesystem to that requested by the user (`ubuntu` in
[the example](example-command.md)).
- The agent will then execute the command (`sh(1)` in [the example](example-command.md))
inside the new container.
The table below summarises the default mini O/S showing the environments that are created,
the processes running in those environments (for all platforms) and
the root filesystem used by each process:
| Process | Environment | rootfs | User accessible | Notes |
|-|-|-|-|-|
| [Agent](README.md#agent) | VM root | [VM guest image](#guest-image) | [debug console][debug-console] | Runs as the init daemon (PID 1) |
| container workload | VM container | User specified (Ubuntu in this example) | [exec command](README.md#exec-command) | Managed by the agent |
> **Notes:**
>
> - The "User accessible" column shows how an administrator can access
> the environment.
>
> - It is possible to use a standard init daemon such as systemd with
> an initrd image if this is desirable.
See also the [process overview](README.md#process-overview).
#### Image summary
| Image type | Default distro | Init daemon | Reason | Notes |
|-|-|-|-|-|
| [image](background.md#root-filesystem-image) | [Clear Linux](https://clearlinux.org) (for x86_64 systems)| systemd | Minimal and highly optimized | systemd offers flexibility |
| [initrd](#initrd-image) | [Alpine Linux](https://alpinelinux.org) | Kata [agent](README.md#agent) (as no systemd support) | Security hardened and tiny C library | |
See also:
- The [osbuilder](../../../tools/osbuilder) tool
  This is used to build all default image types.
- The [versions database](../../../versions.yaml)
  The `default-image-name` and `default-initrd-name` options specify
  the default distributions for each image type.
[debug-console]: ../../Developer-Guide.md#connect-to-debug-console


@@ -1,41 +0,0 @@
# History
## Kata 1.x architecture
In the old [Kata 1.x architecture](https://github.com/kata-containers/documentation/blob/master/design/architecture.md),
the Kata [runtime](README.md#runtime) was an executable called `kata-runtime`.
The container manager called this executable multiple times when
creating each container. Each time the runtime was called a different
OCI command-line verb was provided. This architecture was simple, but
not well suited to creating VM-based containers due to the issue of
handling state between calls. Additionally, the architecture suffered
from performance issues related to continually having to spawn new
instances of the runtime binary, and
[Kata shim](https://github.com/kata-containers/shim) and
[Kata proxy](https://github.com/kata-containers/proxy) processes for systems
that did not provide VSOCK.
## Kata 2.x architecture
See the ["shimv2"](README.md#shim-v2-architecture) section of the
architecture document.
## Architectural comparison
| Kata version | Kata Runtime process calls | Kata shim processes | Kata proxy processes (if no VSOCK) |
|-|-|-|-|
| 1.x | multiple per container | 1 per container connection | 1 |
| 2.x | 1 per VM (hosting any number of containers) | 0 | 0 |
> **Notes:**
>
> - A single VM can host one or more containers.
>
> - The "Kata shim processes" column refers to the old
> [Kata shim](https://github.com/kata-containers/shim) (`kata-shim` binary),
> *not* the new shimv2 runtime instance (`containerd-shim-kata-v2` binary).
The diagram below shows how the original architecture was simplified
with the advent of shimv2.
![Kubernetes integration with shimv2](../arch-images/shimv2.svg)


@@ -1,35 +0,0 @@
# Kubernetes support
[Kubernetes](https://github.com/kubernetes/kubernetes/), or K8s, is a popular open source
container orchestration engine. In Kubernetes, a set of containers sharing resources
such as networking, storage, mount, PID, etc. is called a
[pod](https://kubernetes.io/docs/user-guide/pods/).
A node can have multiple pods, but at a minimum, a node within a Kubernetes cluster
only needs to run a container runtime and a container agent (called a
[Kubelet](https://kubernetes.io/docs/admin/kubelet/)).
Kata Containers represents a Kubelet pod as a VM.
A Kubernetes cluster runs a control plane where a scheduler (typically
running on a dedicated master node) calls into a compute Kubelet. This
Kubelet instance is responsible for managing the lifecycle of pods
within the nodes and eventually relies on a container runtime to
handle execution. The Kubelet architecture decouples lifecycle
management from container execution through a dedicated gRPC based
[Container Runtime Interface (CRI)](https://github.com/kubernetes/design-proposals-archive/blob/main/node/container-runtime-interface-v1.md).
In other words, a Kubelet is a CRI client and expects a CRI
implementation to handle the server side of the interface.
[CRI-O](https://github.com/kubernetes-incubator/cri-o) and
[containerd](https://github.com/containerd/containerd/) are CRI
implementations that rely on
[OCI](https://github.com/opencontainers/runtime-spec) compatible
runtimes for managing container instances.
Kata Containers is an officially supported CRI-O and containerd
runtime. Refer to the following guides on how to set up Kata
Containers with Kubernetes:
- [How to use Kata Containers and containerd](../../how-to/containerd-kata.md)
- [Run Kata Containers with Kubernetes](../../how-to/run-kata-with-k8s.md)


@@ -1,49 +0,0 @@
# Networking
Containers typically live in their own, possibly shared, networking namespace.
At some point in a container lifecycle, container engines will set up that namespace
to add the container to a network which is isolated from the host network.
In order to set up the network for a container, container engines call into a
networking plugin. The network plugin will usually create a virtual
ethernet (`veth`) pair adding one end of the `veth` pair into the container
networking namespace, while the other end of the `veth` pair is added to the
host networking namespace.
This is a very namespace-centric approach as many hypervisors or VM
Managers (VMMs) such as `virt-manager` cannot handle `veth`
interfaces. Typically, [`TAP`](https://www.kernel.org/doc/Documentation/networking/tuntap.txt)
interfaces are created for VM connectivity.
To overcome incompatibility between typical container engines expectations
and virtual machines, Kata Containers networking transparently connects `veth`
interfaces with `TAP` ones using [Traffic Control](https://man7.org/linux/man-pages/man8/tc.8.html):
![Kata Containers networking](../arch-images/network.png)
With TC filter rules in place, a redirection is created between the container network
and the virtual machine. As an example, the network plugin may place a device,
`eth0`, in the container's network namespace, which is one end of a `veth` pair.
Kata Containers will create a tap device for the VM, `tap0_kata`,
and setup a TC redirection filter to redirect traffic from `eth0`'s ingress to `tap0_kata`'s egress,
and a second TC filter to redirect traffic from `tap0_kata`'s ingress to `eth0`'s egress.
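The runtime performs the equivalent setup automatically; expressed as raw `tc(8)` commands, the two redirections described above look roughly like this sketch (device names taken from the example):
```bash
# Attach ingress qdiscs so redirect filters can be installed on incoming traffic
tc qdisc add dev eth0 handle ffff: ingress
tc qdisc add dev tap0_kata handle ffff: ingress
# Redirect eth0 ingress to tap0_kata egress ...
tc filter add dev eth0 parent ffff: u32 match u32 0 0 \
    action mirred egress redirect dev tap0_kata
# ... and tap0_kata ingress back to eth0 egress
tc filter add dev tap0_kata parent ffff: u32 match u32 0 0 \
    action mirred egress redirect dev eth0
```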
Kata Containers maintains support for MACVTAP, an earlier implementation used in Kata.
With this method, Kata creates a MACVTAP device to connect directly to the `eth0` device.
TC filter is the default because it allows for simpler configuration, better CNI plugin
compatibility, and performance on par with MACVTAP.
Kata Containers has deprecated bridge support due to its inferior performance relative to TC filter and MACVTAP.
Kata Containers supports both
[CNM](https://github.com/docker/libnetwork/blob/master/docs/design.md#the-container-network-model)
and [CNI](https://github.com/containernetworking/cni) for networking management.
## Network Hotplug
Kata Containers has developed a set of network sub-commands and APIs to add, list and
remove a guest network endpoint and to manipulate the guest route table.
The following diagram illustrates the Kata Containers network hotplug workflow.
![Network Hotplug](../arch-images/kata-containers-network-hotplug.png)


@@ -1,56 +0,0 @@
# Storage
## Limits
Kata Containers is [compatible](README.md#compatibility) with existing
standards and runtimes. From the perspective of storage, this means no
limits are placed on the amount of storage a container
[workload](README.md#workload) may use.
Since cgroups are not able to set limits on storage allocation, if you
wish to constrain the amount of storage a container uses, consider
using an existing facility such as `quota(1)` limits or
[device mapper](#devicemapper) limits.
## virtio SCSI
If a block-based graph driver is [configured](README.md#configuration),
`virtio-scsi` is used to _share_ the workload image (such as
`busybox:latest`) into the container's environment inside the VM.
## virtio FS
If a block-based graph driver is _not_ [configured](README.md#configuration), a
[`virtio-fs`](https://virtio-fs.gitlab.io) (`VIRTIO`) overlay
filesystem mount point is used to _share_ the workload image instead. The
[agent](README.md#agent) uses this mount point as the root filesystem for the
container processes.
For virtio-fs, the [runtime](README.md#runtime) starts one `virtiofsd` daemon
(that runs in the host context) for each VM created.
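This can be observed on a host with running Kata VMs (a sketch; requires appropriate privileges, and the binary path depends on the installation):
```bash
$ # Expect one virtiofsd process per Kata VM
$ pgrep -a virtiofsd
```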
## Devicemapper
The
[devicemapper `snapshotter`](https://github.com/containerd/containerd/tree/main/snapshots/devmapper)
is a special case. The `snapshotter` uses dedicated block devices
rather than formatted filesystems, and operates at the block level
rather than the file level. This knowledge is used to directly use the
underlying block device instead of the overlay file system for the
container root file system. The block device maps to the top
read-write layer for the overlay. This approach gives much better I/O
performance compared to using `virtio-fs` to share the container file
system.
#### Hot plug and unplug
Kata Containers can hot plug and hot unplug block devices. This makes
it possible to use block devices for containers started after the VM
has been launched.
Users can check to see if the container uses the `devicemapper` block
device as its rootfs by calling `mount(8)` within the container. If
the `devicemapper` block device is used, the root filesystem (`/`)
will be mounted from `/dev/vda`. Users can disable direct mounting of
the underlying block device through the runtime
[configuration](README.md#configuration).
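For example, a check from inside the container might look like the following sketch (output abbreviated and environment dependent):
```bash
$ # Inside the container: the root filesystem comes straight from the block device
$ mount | grep ' on / '
/dev/vda on / type ext4 (rw,relatime)
```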


@@ -1,169 +0,0 @@
# Kata 3.0 Architecture
## Overview
In cloud-native scenarios, there is an increased demand for container startup speed, resource consumption, stability, and security, areas where the present Kata Containers runtime is challenged relative to other runtimes. To address this, we propose a solid, field-tested and secure Rust version of the kata-runtime.
Also, we provide the following designs:
- Turnkey solution with built-in `Dragonball` Sandbox
- Async I/O to reduce resource consumption
- Extensible framework for multiple services, runtimes and hypervisors
- Lifecycle management for sandbox and container associated resources
### Rationale for choosing Rust
We chose Rust because it is designed as a system language with a focus on efficiency.
In contrast to Go, Rust makes a variety of design trade-offs in order to obtain
good execution performance, with innovative techniques that, in contrast to C or
C++, provide reasonable protection against common memory errors (buffer
overflow, invalid pointers, range errors), error checking (ensuring errors are
dealt with), thread safety, ownership of resources, and more.
These benefits were verified in our project when the Kata Containers guest agent
was rewritten in Rust. We notably saw a significant reduction in memory usage
with the Rust-based implementation.
## Design
### Architecture
![architecture](./images/architecture.png)
### Built-in VMM
#### Current Kata 2.x architecture
![not_builtin_vmm](./images/not_built_in_vmm.png)
As shown in the figure, the runtime and the VMM are separate processes. The runtime process forks the VMM process and the two interact through inter-process RPC. Interaction between processes typically consumes more resources than interaction between peers within a single process, resulting in relatively low efficiency. At the same time, the cost of resource operation and maintenance must be considered. For example, when performing resource recovery under abnormal conditions, the failure of any process must be detected by the others, which must then activate the appropriate resource recovery process. With additional processes, recovery becomes even more difficult.
#### How To Support Built-in VMM
We provide the `Dragonball` Sandbox to enable a built-in VMM by integrating the VMM's functions into a Rust library. We can then perform VMM-related functionality by using the library. Because the runtime and VMM are in the same process, there are benefits in terms of message processing speed and API synchronization. It also guarantees consistency of the runtime and VMM life cycles, reducing resource recovery and exception handling maintenance, as shown in the figure:
![builtin_vmm](./images/built_in_vmm.png)
### Async Support
#### Why Need Async
**Async is already in stable Rust and allows us to write async code**
- Async provides significantly reduced CPU and memory overhead, especially for workloads with a large amount of IO-bound tasks
- Async is zero-cost in Rust, which means that you only pay for what you use. Specifically, you can use async without heap allocations and dynamic dispatch, which greatly improves efficiency
- For more, see [Why Async?](https://rust-lang.github.io/async-book/01_getting_started/02_why_async.html) and [The State of Asynchronous Rust](https://rust-lang.github.io/async-book/01_getting_started/03_state_of_async_rust.html).
**There may be several problems if the kata-runtime were implemented in sync Rust**
- Too many threads with a new TTRPC connection
  - TTRPC threads: reaper thread(1) + listener thread(1) + client handler(2)
- Add 3 I/O threads with a new container
- In sync mode, implementing a timeout mechanism is challenging. For example, in TTRPC API interaction, the timeout mechanism is difficult to align with the Golang implementation
#### How To Support Async
The number of OS threads the kata-runtime uses is controlled by `TOKIO_RUNTIME_WORKER_THREADS`, which defaults to 2. TTRPC and container-related threads run as `tokio` tasks in a unified manner, and the related dependencies need to be switched to async equivalents, such as Timer, File, Netlink, etc. With the help of async, we can easily support non-blocking I/O and timers. Currently, we only use async for the kata-runtime. The built-in VMM keeps OS threads because it can ensure that the threads are controllable.
**For N tokio worker threads and M containers**
- Sync runtime (both OS threads and `tokio` tasks are OS threads, with no `tokio` worker threads), OS thread number: 4 + 12*M
- Async runtime (only dedicated OS threads remain OS threads), OS thread number: 2 + N
For example, with N = 2 worker threads and M = 10 containers, a sync runtime needs 4 + 12*10 = 124 OS threads, while an async runtime needs only 2 + 2 = 4.
```shell
├─ main(OS thread)
├─ async-logger(OS thread)
└─ tokio worker(N * OS thread)
   ├─ agent log forwarder(1 * tokio task)
   ├─ health check thread(1 * tokio task)
   ├─ TTRPC reaper thread(M * tokio task)
   ├─ TTRPC listener thread(M * tokio task)
   ├─ TTRPC client handler thread(7 * M * tokio task)
   ├─ container stdin io thread(M * tokio task)
   ├─ container stdout io thread(M * tokio task)
   └─ container stderr io thread(M * tokio task)
```
### Extensible Framework
The Kata 3.x runtime is designed to be extensible in its services, runtimes, and hypervisors, combined with configuration to meet the needs of different scenarios. At present, the service module provides a registration mechanism to support multiple services. Services interact with the runtime through messages. In addition, the runtime handler handles messages from services. To support multiple runtimes and hypervisors in a single binary, the startup must obtain the runtime handler type and hypervisor type through configuration.
![framework](./images/framework.png)
### Resource Manager
In our case, there will be a variety of resources, and every resource has several subtypes. Especially for `Virt-Container`, every subtype of resource has different operations, and there may be dependencies: for example, the share-fs rootfs and the share-fs volume both use share-fs resources to share files with the VM. Currently, network and share-fs are regarded as sandbox resources, while rootfs, volume, and cgroup are regarded as container resources. Also, we abstract a common interface for each resource and use subclass operations to handle the differences between subtypes.
![resource manager](./images/resourceManager.png)
## Roadmap
- Stage 1 (June): provide basic features (currently delivered)
- Stage 2 (September): support common features
- Stage 3: support full features
| **Class** | **Sub-Class** | **Development Stage** | **Status** |
| -------------------------- | ------------------- | --------------------- |------------|
| Service | task service | Stage 1 | ✅ |
| | extend service | Stage 3 | 🚫 |
| | image service | Stage 3 | 🚫 |
| Runtime handler | `Virt-Container` | Stage 1 | ✅ |
| Endpoint | VETH Endpoint | Stage 1 | ✅ |
| | Physical Endpoint | Stage 2 | ✅ |
| | Tap Endpoint | Stage 2 | ✅ |
| | `Tuntap` Endpoint | Stage 2 | ✅ |
| | `IPVlan` Endpoint | Stage 2 | ✅ |
| | `MacVlan` Endpoint | Stage 2 | ✅ |
| | MACVTAP Endpoint | Stage 3 | 🚫 |
| | `VhostUserEndpoint` | Stage 3 | 🚫 |
| Network Interworking Model | Tc filter | Stage 1 | ✅ |
| | `MacVtap` | Stage 3 | 🚧 |
| Storage | Virtio-fs | Stage 1 | ✅ |
| | `nydus` | Stage 2 | 🚧 |
| | `device mapper` | Stage 2 | 🚫 |
| `Cgroup V2` | | Stage 2 | 🚧 |
| Hypervisor | `Dragonball` | Stage 1 | 🚧 |
| | QEMU | Stage 2 | 🚫 |
| | ACRN | Stage 3 | 🚫 |
| | Cloud Hypervisor | Stage 3 | 🚫 |
| | Firecracker | Stage 3 | 🚫 |
## FAQ
- Are the "service", "message dispatcher" and "runtime handler" all part of the single Kata 3.x runtime binary?
Yes. They are components of the Kata 3.x runtime and will be packed into one binary.
1. The service is an interface responsible for handling multiple services, such as the task service and image service.
2. The message dispatcher is used to route multiple requests from the service module.
3. The runtime handler deals with operations for sandboxes and containers.
- What is the name of the Kata 3.x runtime binary?
Apparently we can't use `containerd-shim-v2-kata` because it's already used. We are facing the hardest issue of "naming" again. Any suggestions are welcome.
Internally we use `containerd-shim-v2-rund`.
- Is the Kata 3.x design compatible with the containerd shimv2 architecture?
Yes. It is designed to follow the functionality of the Go version of Kata, and it implements the `containerd shim v2` interface/protocol.
- How will users migrate to the Kata 3.x architecture?
The migration plan will be provided before Kata 3.x is merged into the main branch.
- Is `Dragonball` limited to its own built-in VMM? Can the `Dragonball` system be configured to work using an external `Dragonball` VMM/hypervisor?
`Dragonball` could work as an external hypervisor. However, stability and performance would be more challenging in that case. A built-in VMM reduces container overhead, and it is easier to maintain stability.
`runD` is the `containerd-shim-v2` counterpart of `runC` and can run a pod/containers. `Dragonball` is a `microvm`/VMM that is designed to run container workloads. Instead of `microvm`/VMM, we sometimes refer to it as a secure sandbox.
- QEMU, Cloud Hypervisor and Firecracker support are planned, but how would that work? Would they run in separate processes?
Yes. They would run as separate processes, since they are unable to work as a built-in VMM.
- What is `upcall`?
The `upcall` is used to hotplug CPU/memory/MMIO devices, and it solves two issues:
1. It avoids a dependency on PCI/ACPI.
2. It avoids a dependency on `udevd` within the guest and gives deterministic results for hotplug operations. So `upcall` is an alternative to ACPI-based CPU/memory/device hotplug. We may cooperate with the community to add support for ACPI-based CPU/memory/device hotplug if needed.
`Dbs-upcall` is a `vsock-based` direct communication tool between the VMM and the guest. The server side of the `upcall` is a driver in the guest kernel (kernel patches are needed for this feature) and it starts to serve requests once the kernel has started. The client side is in the VMM; it is a thread that communicates over VSOCK through `uds`. We have implemented device hotplug/hot-unplug directly through `upcall` in order to avoid virtualizing ACPI and to minimize the virtual machine's overhead. There could be many other uses for this direct communication channel. It's already open source:
https://github.com/openanolis/dragonball-sandbox/tree/main/crates/dbs-upcall
- The URL above says the kernel patches work with 4.19, but do they also work with 5.15+?
Forward compatibility should be achievable; we have already ported it to a 5.10-based kernel.
- Are these patches platform-specific or would they work for any architecture that supports VSOCK?
It's almost platform independent, but some messages related to CPU hotplug are platform dependent.
- Could the kernel driver be replaced with a userland daemon in the guest using loopback VSOCK?
We need to create device nodes for hot-added CPU/memory/devices, so it's not easy for a userspace daemon to do these tasks.
- The fact that `upcall` allows communication between the VMM and the guest suggests that this architecture might be incompatible with https://github.com/confidential-containers where the VMM should have no knowledge of what happens inside the VM.
1. `TDX` doesn't support CPU/memory hotplug yet.
2. ACPI-based device hotplug depends on the ACPI `DSDT` table, and the guest kernel executes `ASL` code to handle those hotplug events. It should be easier to audit VSOCK-based communication than ACPI `ASL` methods.
- What is the security boundary for the monolithic / "Built-in VMM" case?
It has the security boundary of virtualization. More details will be provided in the next stage.


@@ -1,12 +0,0 @@
# Core scheduling
Core scheduling is a Linux kernel feature that allows only trusted tasks to run concurrently on
CPUs sharing compute resources (for example, hyper-threads on a core).
Containerd versions >= 1.6.4 leverage this to treat all of the processes associated with a
given pod or container as a single group of trusted tasks. To indicate this should be carried
out, containerd sets the `SCHED_CORE` environment variable for each shim it spawns. When this is
set, the Kata Containers shim implementation uses the `prctl` syscall to create a new core scheduling
domain for the shim process itself as well as future VMM processes it will start.
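A minimal sketch for verifying the flag on a running shim, assuming root privileges, a running Kata pod, and a containerd new enough to set the variable:
```bash
# Pick one running Kata shim and inspect its environment
shim_pid=$(pgrep -f containerd-shim-kata-v2 | head -n 1)
tr '\0' '\n' < "/proc/${shim_pid}/environ" | grep '^SCHED_CORE='
```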
For more details on the core scheduling feature, see the [Linux documentation](https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/core-scheduling.html).


@@ -1825,8 +1825,12 @@ components:
desc: ""
- value: grpc.StartContainerRequest
desc: ""
- value: grpc.StartTracingRequest
desc: ""
- value: grpc.StatsContainerRequest
desc: ""
- value: grpc.StopTracingRequest
desc: ""
- value: grpc.TtyWinResizeRequest
desc: ""
- value: grpc.UpdateContainerRequest


@@ -1,253 +0,0 @@
# Motivation
Today, there exist a few gaps between Container Storage Interface (CSI) and virtual machine (VM) based runtimes such as Kata Containers
that prevent them from working together smoothly.
First, it's cumbersome to use a persistent volume (PV) with Kata Containers. Today, for a PV with Filesystem volume mode, Virtio-fs
is the only way to surface it inside a Kata Container guest VM. But often mounting the filesystem (FS) within the guest operating system (OS) is
desired due to performance benefits, availability of native FS features and security benefits over the Virtio-fs mechanism.
Second, it's difficult if not impossible to resize a PV online with Kata Containers. While a PV can be expanded on the host OS,
the updated metadata needs to be propagated to the guest OS in order for the application container to use the expanded volume.
Currently, there is no way to propagate the PV metadata from the host OS to the guest OS without restarting the Pod sandbox.
# Proposed Solution
Because of the OS boundary, these features cannot be implemented in the CSI node driver plugin running on the host OS
as is normally done in the runc container. Instead, they can be done by the Kata Containers agent inside the guest OS,
but it requires the CSI driver to pass the relevant information to the Kata Containers runtime.
An ideal long term solution would be to have the `kubelet` coordinating the communication between the CSI driver and
the container runtime, as described in [KEP-2857](https://github.com/kubernetes/enhancements/pull/2893/files).
However, as the KEP is still under review, we would like to propose a short/medium term solution to unblock our use case.
The proposed solution is built on top of a previous [proposal](https://github.com/egernst/kata-containers/blob/da-proposal/docs/design/direct-assign-volume.md)
described by Eric Ernst. The previous proposal has two gaps:
1. Writing a `csiPlugin.json` file to the volume root path introduced a security risk. A malicious user can gain unauthorized
access to a block device by writing their own `csiPlugin.json` to the above location through an ephemeral CSI plugin.
2. The proposal didn't describe how to establish a mapping between a volume and a kata sandbox, which is needed for
implementing CSI volume resize and volume stat collection APIs.
This document particularly focuses on how to address these two gaps.
## Assumptions and Limitations
1. The proposal assumes that a block device volume will only be used by one Pod on a node at a time, which we believe
is the most common pattern in Kata Containers use cases. It's also unsafe to have the same block device attached to more than
one Kata pod. In the context of Kubernetes, the `PersistentVolumeClaim` (PVC) needs to have the `accessMode` as `ReadWriteOncePod`.
2. More advanced Kubernetes volume features such as, `fsGroup`, `fsGroupChangePolicy`, and `subPath` are not supported.
## End User Interface
1. The user specifies a PV as a direct-assigned volume. How a PV is specified as a direct-assigned volume is left for each CSI implementation to decide.
   There are a few options for reference:
   1. A storage class parameter specifies whether it's a direct-assigned volume. This avoids any lookups of PVC
      or Pod information from the CSI plugin (as the external provisioner takes care of these). However, all PVs in the storage class with the parameter set
      will have host mounts skipped.
   2. Use a PVC annotation. This approach requires the CSI plugins to have `--extra-create-metadata` [set](https://kubernetes-csi.github.io/docs/external-provisioner.html#persistentvolumeclaim-and-persistentvolume-parameters)
      to be able to perform a lookup of the PVC annotations from the API server. Pro: API server lookup of annotations is only required during creation of the PV.
      Con: The CSI plugin will always skip host mounting of the PV.
   3. The CSI plugin can also look up the pod `runtimeclass` during `NodePublish`. This approach can be found in the [ALIBABA CSI plugin](https://github.com/kubernetes-sigs/alibaba-cloud-csi-driver/blob/master/pkg/disk/nodeserver.go#L248).
2. The CSI node driver delegates the direct-assigned volume to the Kata Containers runtime. The CSI node driver APIs need to
   be modified to pass the volume mount information and collect volume information to/from the Kata Containers runtime by invoking `kata-runtime` command line commands.
   * **NodePublishVolume** -- It invokes `kata-runtime direct-volume add --volume-path [volumePath] --mount-info [mountInfo]`
     to propagate the volume mount information to the Kata Containers runtime for it to carry out the filesystem mount operation.
     The `volumePath` is the [target_path](https://github.com/container-storage-interface/spec/blob/master/csi.proto#L1364) in the CSI `NodePublishVolumeRequest`.
     The `mountInfo` is a serialized JSON string.
   * **NodeGetVolumeStats** -- It invokes `kata-runtime direct-volume stats --volume-path [volumePath]` to retrieve the filesystem stats of a direct-assigned volume.
   * **NodeExpandVolume** -- It invokes `kata-runtime direct-volume resize --volume-path [volumePath] --size [size]` to send a resize request to the Kata Containers runtime to
     resize the direct-assigned volume.
   * **NodeStageVolume/NodeUnStageVolume** -- It invokes `kata-runtime direct-volume remove --volume-path [volumePath]` to remove the persisted metadata of a direct-assigned volume.
The `mountInfo` object is defined as follows:
```Golang
type MountInfo struct {
    // The type of the volume (ie. block)
    VolumeType string `json:"volume-type"`
    // The device backing the volume.
    Device string `json:"device"`
    // The filesystem type to be mounted on the volume.
    FsType string `json:"fstype"`
    // Additional metadata to pass to the agent regarding this volume.
    Metadata map[string]string `json:"metadata,omitempty"`
    // Additional mount options.
    Options []string `json:"options,omitempty"`
}
```
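For illustration, a hypothetical invocation that serializes a `MountInfo` for an ext4-formatted block device and hands it to the runtime might look like this sketch (the device and volume paths are placeholders):
```bash
# Serialized MountInfo, matching the JSON tags of the struct above
mount_info='{"volume-type":"block","device":"/dev/sdf","fstype":"ext4","options":["rw"]}'
kata-runtime direct-volume add --volume-path "/kubelet/a/b/c/d/sdf" --mount-info "$mount_info"
```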
Note: given that the `mountInfo` is persisted to disk by the Kata runtime, it shouldn't contain any secrets (such as an SMB mount password).
## Implementation Details
### Kata runtime
Instead of the CSI node driver writing the mount info into a `csiPlugin.json` file under the volume root,
as described in the original proposal, here we propose that the CSI node driver passes the mount information to
the Kata Containers runtime through a new `kata-runtime` commandline command. The `kata-runtime` then writes the mount
information to a `mountInfo.json` file in a predefined location (`/run/kata-containers/shared/direct-volumes/[volume_path]/`).
When the Kata Containers runtime starts a container, it verifies whether a volume mount is a direct-assigned volume by checking
whether there is a `mountInfo` file under the computed Kata `direct-volumes` directory. If there is, the runtime parses the `mountInfo` file
and updates the mount spec with the data in `mountInfo`. The updated mount spec is then passed to the Kata agent in the guest VM together
with other mounts. The Kata Containers runtime also creates a file named by the sandbox id under the `direct-volumes/[volume_path]/`
directory. The reason for adding a sandbox id file is to establish a mapping between the volume and the sandbox using it.
Later, when the Kata Containers runtime handles the `get-stats` and `resize` commands, it uses the sandbox id to identify
the endpoint of the corresponding `containerd-shim-kata-v2`.
### containerd-shim-kata-v2 changes
`containerd-shim-kata-v2` provides an API for sandbox management through a Unix domain socket. Two new handlers are proposed: `/direct-volume/stats` and `/direct-volume/resize`:
Example:
```bash
$ curl --unix-socket "$shim_socket_path" -I -X GET 'http://localhost/direct-volume/stats/[urlSafeVolumePath]'
$ curl --unix-socket "$shim_socket_path" -I -X POST 'http://localhost/direct-volume/resize' -d '{ "volumePath": "[volumePath]", "Size": "123123" }'
```
The shim then forwards the corresponding request to the `kata-agent` to carry out the operations inside the guest VM. For the `resize` operation,
the Kata runtime also needs to notify the hypervisor to resize the block device (e.g. call `block_resize` in QEMU).
### Kata agent changes
The mount spec of a direct-assigned volume is passed to `kata-agent` through the existing `Storage` GRPC object.
Two new APIs and three new GRPC objects are added to the GRPC protocol between the shim and the agent for resizing and getting volume stats:
```protobuf
rpc GetVolumeStats(VolumeStatsRequest) returns (VolumeStatsResponse);
rpc ResizeVolume(ResizeVolumeRequest) returns (google.protobuf.Empty);

message VolumeStatsRequest {
    // The volume path on the guest outside the container
    string volume_guest_path = 1;
}

message ResizeVolumeRequest {
    // Full VM guest path of the volume (outside the container)
    string volume_guest_path = 1;
    uint64 size = 2;
}

// This should be kept in sync with CSI NodeGetVolumeStatsResponse (https://github.com/container-storage-interface/spec/blob/v1.5.0/csi.proto)
message VolumeStatsResponse {
    // This field is OPTIONAL.
    repeated VolumeUsage usage = 1;
    // Information about the current condition of the volume.
    // This field is OPTIONAL.
    // This field MUST be specified if the VOLUME_CONDITION node
    // capability is supported.
    VolumeCondition volume_condition = 2;
}

message VolumeUsage {
    enum Unit {
        UNKNOWN = 0;
        BYTES = 1;
        INODES = 2;
    }
    // The available capacity in specified Unit. This field is OPTIONAL.
    // The value of this field MUST NOT be negative.
    uint64 available = 1;
    // The total capacity in specified Unit. This field is REQUIRED.
    // The value of this field MUST NOT be negative.
    uint64 total = 2;
    // The used capacity in specified Unit. This field is OPTIONAL.
    // The value of this field MUST NOT be negative.
    uint64 used = 3;
    // Units by which values are measured. This field is REQUIRED.
    Unit unit = 4;
}

// VolumeCondition represents the current condition of a volume.
message VolumeCondition {
    // Normal volumes are available for use and operating optimally.
    // An abnormal volume does not meet these criteria.
    // This field is REQUIRED.
    bool abnormal = 1;
    // The message describing the condition of the volume.
    // This field is REQUIRED.
    string message = 2;
}
```
### Step by step walk-through
Given the following definition:
```YAML
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  runtimeClassName: kata-qemu
  containers:
    - name: app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: ebs-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    skip-hostmount: "true"
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOncePod
  volumeMode: Filesystem
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 4Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  csi.storage.k8s.io/fstype: ext4
```
Let's assume that changes have been made in the `aws-ebs-csi-driver` node driver.
**Node publish volume**
1. In the node CSI driver, the `NodePublishVolume` API invokes: `kata-runtime direct-volume add --volume-path "/kubelet/a/b/c/d/sdf" --mount-info "{\"Device\": \"/dev/sdf\", \"fstype\": \"ext4\"}"`.
2. The Kata runtime writes the mount-info JSON to a file called `mountInfo.json` under `/run/kata-containers/shared/direct-volumes/kubelet/a/b/c/d/sdf`.
**Node unstage volume**
1. In the node CSI driver, the `NodeUnstageVolume` API invokes: `kata-runtime direct-volume remove --volume-path "/kubelet/a/b/c/d/sdf"`.
2. The Kata runtime deletes the directory `/run/kata-containers/shared/direct-volumes/kubelet/a/b/c/d/sdf`.
**Use the volume in sandbox**
1. Upon the request to start a container, the `containerd-shim-kata-v2` examines the container spec,
and iterates through the mounts. For each mount, if there is a `mountInfo.json` file under `/run/kata-containers/shared/direct-volumes/[mount source path]`,
it generates a `storage` GRPC object after overwriting the mount spec with the information in `mountInfo.json`.
2. The shim sends the storage objects to kata-agent through TTRPC.
3. The shim writes a file with the sandbox id as the name under `/run/kata-containers/shared/direct-volumes/[mount source path]`.
4. The kata-agent mounts the storage objects for the container.
**Node expand volume**
1. In the node CSI driver, the `NodeExpandVolume` API invokes: `kata-runtime direct-volume resize --volume-path "/kubelet/a/b/c/d/sdf" --size 8Gi`.
2. The Kata runtime checks whether there is a sandbox id file under the directory `/run/kata-containers/shared/direct-volumes/kubelet/a/b/c/d/sdf`.
3. The Kata runtime identifies the shim instance through the sandbox id, and sends a GRPC request to resize the volume.
4. The shim handles the request, asks the hypervisor to resize the block device and sends a GRPC request to Kata agent to resize the filesystem.
5. Kata agent receives the request and resizes the filesystem.
**Node get volume stats**
1. In the node CSI driver, the `NodeGetVolumeStats` API invokes: `kata-runtime direct-volume stats --volume-path "/kubelet/a/b/c/d/sdf"`.
2. The Kata runtime checks whether there is a sandbox id file under the directory `/run/kata-containers/shared/direct-volumes/kubelet/a/b/c/d/sdf`.
3. The Kata runtime identifies the shim instance through the sandbox id, and sends a GRPC request to get the volume stats.
4. The shim handles the request and forwards it to the Kata agent.
5. Kata agent receives the request and returns the filesystem stats.


@@ -12,14 +12,14 @@ The OCI [runtime specification][linux-config] provides guidance on where the con
> [`cgroupsPath`][cgroupspath]: (string, OPTIONAL) path to the cgroups. It can be used to either control the cgroups
> hierarchy for containers or to run a new process in an existing container
The cgroups are hierarchical, and this can be seen with the following pod example:
- Pod 1: `cgroupsPath=/kubepods/pod1`
  - Container 1: `cgroupsPath=/kubepods/pod1/container1`
  - Container 2: `cgroupsPath=/kubepods/pod1/container2`
- Pod 2: `cgroupsPath=/kubepods/pod2`
  - Container 1: `cgroupsPath=/kubepods/pod2/container1`
  - Container 2: `cgroupsPath=/kubepods/pod2/container2`
Depending on the upper-level orchestration layers, the cgroup under which the pod is placed is
@@ -247,14 +247,14 @@ cgroup size and constraints accordingly.
# Supported cgroups
Kata Containers currently only supports cgroups `v1`.
In the following sections each cgroup is described briefly.
## cgroups v1
`cgroups v1` are under a [`tmpfs`][1] filesystem mounted at `/sys/fs/cgroup`, where each cgroup is
mounted under a separate cgroup filesystem. A `cgroups v1` hierarchy may look like the following
diagram:
```
@@ -301,12 +301,13 @@ diagram:
A process can join a cgroup by writing its process id (`pid`) to `cgroup.procs` file,
or join a cgroup partially by writing the task (thread) id (`tid`) to the `tasks` file.
Kata Containers only supports `v1`.
To know more about `cgroups v1`, see [cgroupsv1(7)][2].
## cgroups v2
`cgroups v2` are also known as unified cgroups: unlike `cgroups v1`, all cgroups are
mounted under the same cgroup filesystem. A `cgroups v2` hierarchy may look like the following
diagram:
```
@@ -353,6 +354,8 @@ Same as `cgroups v1`, a process can join the cgroup by writing its process id (`
`cgroup.procs` file, or join a cgroup partially by writing the task (thread) id (`tid`) to
`cgroup.threads` file.
Kata Containers does not support cgroups `v2` on the host.
### Distro Support
Many Linux distributions do not yet support `cgroups v2`, as it is quite a recent addition.


@@ -1,21 +1,21 @@
# Kata 2.0 Metrics Design
Kata implements CRI's API and supports [`ContainerStats`](https://github.com/kubernetes/kubernetes/blob/release-1.18/staging/src/k8s.io/cri-api/pkg/apis/runtime/v1alpha2/api.proto#L101) and [`ListContainerStats`](https://github.com/kubernetes/kubernetes/blob/release-1.18/staging/src/k8s.io/cri-api/pkg/apis/runtime/v1alpha2/api.proto#L103) interfaces to expose containers metrics. User can use these interfaces to get basic metrics about containers.
Unlike `runc`, Kata is a VM-based runtime and has a different architecture.
## Limitations of Kata 1.x and target of Kata 2.0
Kata 1.x has a number of limitations related to observability that may be obstacles to running Kata Containers at scale.
In Kata 2.0, the following components will be able to provide more details about the system:
- containerd shim v2 (effectively `kata-runtime`)
- Hypervisor statistics
- Agent process
- Guest OS statistics
> **Note**: In Kata 1.x, the main user-facing component was the runtime (`kata-runtime`). From 1.5, Kata introduced the Kata containerd shim v2 (`containerd-shim-kata-v2`) which is essentially a modified runtime that is loaded by containerd to simplify and improve the way VM-based containers are created and managed.
>
> For Kata 2.0, the main component is the Kata containerd shim v2, although the deprecated `kata-runtime` binary will be maintained for a period of time.
>
@@ -25,15 +25,14 @@ In Kata 2.0, the following components will be able to provide more details about
Kata 2.0 metrics strongly depend on [Prometheus](https://prometheus.io/), a graduated project from CNCF.
Kata Containers 2.0 introduces a new Kata component called `kata-monitor` which is used to monitor the Kata components on the host. It's shipped with the Kata runtime to provide an interface to:
- Get metrics
- Get events
At present, `kata-monitor` supports retrieval of metrics only: this is what will be covered in this document.
This is the architecture overview of metrics in Kata Containers 2.0:
![Kata Containers 2.0 metrics](arch-images/kata-2-metrics.png)
@@ -46,39 +45,38 @@ For a quick evaluation, you can check out [this how to](../how-to/how-to-set-pro
### Kata monitor
The `kata-monitor` management agent should be started on each node where the Kata containers runtime is installed. `kata-monitor` will:
> **Note**: a *node* running Kata containers will be either a single host system or a worker node belonging to a K8s cluster capable of running Kata pods.
- Aggregate sandbox metrics running on the node, adding the `sandbox_id` label to them.
- Attach the additional `cri_uid`, `cri_name` and `cri_namespace` labels to the sandbox metrics, tracking the `uid`, `name` and `namespace` Kubernetes pod metadata.
- Expose a new Prometheus target, allowing all node metrics coming from the Kata shim to be collected by Prometheus indirectly. This simplifies the targets count in Prometheus and avoids exposing shim's metrics by `ip:port`.
Only one `kata-monitor` process runs in each node.
`kata-monitor` uses a different communication channel than the one used by the container engine (`containerd`/`CRI-O`) to communicate with the Kata shim. The Kata shim exposes a dedicated socket address reserved to `kata-monitor`.
The shim's metrics socket file is created under the virtcontainers sandboxes directory, i.e. `vc/sbs/${PODID}/shim-monitor.sock`.
> **Note**: If there is no Prometheus server configured, i.e., there are no scrape operations, `kata-monitor` will not collect any metrics.
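For a quick manual check, the `kata-monitor` Prometheus endpoint can be scraped directly (a sketch, assuming the default listen address of `127.0.0.1:8090`):
```bash
$ # Fetch the first few exposed metrics from kata-monitor
$ curl -s http://127.0.0.1:8090/metrics | head
```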
### Kata runtime
Kata runtime is responsible for:
- Gather metrics about shim process
- Gather metrics about hypervisor process
- Gather metrics about running sandbox
- Get metrics from Kata agent (through `ttrpc`)
### Kata agent
Kata agent is responsible for:
- Gather agent process metrics
- Gather guest OS metrics
In Kata 2.0, the agent adds a new interface:
```protobuf
rpc GetMetrics(GetMetricsRequest) returns (Metrics);
@@ -95,49 +93,33 @@ The `metrics` field is Prometheus encoded content. This can avoid defining a fix
### Performance and overhead
Metrics should not become a bottleneck for the system or downgrade the performance: they should run with minimal overhead.
Requirements:
* Metrics **MUST** be quick to collect
* Metrics **MUST** be small
* Metrics **MUST** be generated only if there are subscribers to the Kata metrics service
* Metrics **MUST** be stateless
In Kata 2.0, metrics are collected only when needed (pull mode), mainly from the `/proc` filesystem, and consumed by Prometheus. This means that if the Prometheus collector is not running (so no one cares about the metrics) the overhead will be zero.
The metrics service also doesn't hold any metrics in memory.
#### Metrics size ####
|\*|No Sandbox | 1 Sandbox | 2 Sandboxes |
|---|---|---|---|
|Metrics count| 39 | 106 | 173 |
|Metrics size (bytes)| 9K | 144K | 283K |
|Metrics size (`gzipped`, bytes)| 2K | 10K | 17K |
*Metrics size*: response size of one Prometheus scrape request.
It's easy to estimate the size of one metrics fetch request issued by Prometheus.
The formula to calculate the expected size when no gzip compression is in place is:
9 + (144 - 9) * `number of kata sandboxes`
Prometheus supports `gzip compression`. When enabled, the response size of each request will be smaller:
2 + (10 - 2) * `number of kata sandboxes`
**Example**
We have 10 sandboxes running on a node. The expected size of one metrics fetch request issued by Prometheus against the kata-monitor agent running on that node will be:
9 + (144 - 9) * 10 = **1.35M**
If `gzip compression` is enabled:
2 + (10 - 2) * 10 = **82K**
#### Metrics delay ####
Here is some test data:
- End-to-end (from Prometheus server to `kata-monitor` and `kata-monitor` write response back): **20ms**(avg)
- Agent (RPC all from shim to agent): **3ms**(avg)
Test infrastructure:
@@ -146,13 +128,13 @@ Test infrastructure:
**Scrape interval**
Prometheus default `scrape_interval` is 1 minute, but it is usually set to 15 seconds. A smaller `scrape_interval` causes more overhead, so users should set it depending on their monitoring needs.
## Metrics list
Here are listed all the metrics supported by Kata 2.0. Some metrics are dependent on the VM guest kernel, so the available ones may differ based on the environment.
Metrics are categorized by the component from/for which the metrics are collected.
* [Metric types](#metric-types)
* [Kata agent metrics](#kata-agent-metrics)
@@ -163,15 +145,15 @@ Metrics are categorized by the component from/for which the metrics are collecte
* [Kata containerd shim v2 metrics](#kata-containerd-shim-v2-metrics)
> **Note**:
> * Labels here do not include the `instance` and `job` labels added by Prometheus.
> * Notes about metrics unit
> * `Kibibytes`, abbreviated `KiB`. 1 `KiB` equals 1024 B.
> * For some metrics (like network devices statistics from file `/proc/net/dev`), unit depends on label (for example `recv_bytes` and `recv_packets` have different units).
> * Most of these metrics are collected from the `/proc` filesystem, so the unit of each metric matches the unit of the relevant `/proc` entry. See the `proc(5)` manual page for further details.
### Metric types
Prometheus offers four core metric types.
- Counter: A counter is a cumulative metric that represents a single monotonically increasing counter whose value can only increase.
@@ -306,7 +288,7 @@ Metrics about Kata containerd shim v2 process.
| Metric name | Type | Units | Labels | Introduced in Kata version |
|---|---|---|---|---|
| `kata_shim_agent_rpc_durations_histogram_milliseconds`: <br> RPC latency distributions. | `HISTOGRAM` | `milliseconds` | <ul><li>`action` (RPC actions of Kata agent)<ul><li>`grpc.CheckRequest`</li><li>`grpc.CloseStdinRequest`</li><li>`grpc.CopyFileRequest`</li><li>`grpc.CreateContainerRequest`</li><li>`grpc.CreateSandboxRequest`</li><li>`grpc.DestroySandboxRequest`</li><li>`grpc.ExecProcessRequest`</li><li>`grpc.GetMetricsRequest`</li><li>`grpc.GuestDetailsRequest`</li><li>`grpc.ListInterfacesRequest`</li><li>`grpc.ListProcessesRequest`</li><li>`grpc.ListRoutesRequest`</li><li>`grpc.MemHotplugByProbeRequest`</li><li>`grpc.OnlineCPUMemRequest`</li><li>`grpc.PauseContainerRequest`</li><li>`grpc.RemoveContainerRequest`</li><li>`grpc.ReseedRandomDevRequest`</li><li>`grpc.ResumeContainerRequest`</li><li>`grpc.SetGuestDateTimeRequest`</li><li>`grpc.SignalProcessRequest`</li><li>`grpc.StartContainerRequest`</li><li>`grpc.StartTracingRequest`</li><li>`grpc.StatsContainerRequest`</li><li>`grpc.StopTracingRequest`</li><li>`grpc.TtyWinResizeRequest`</li><li>`grpc.UpdateContainerRequest`</li><li>`grpc.UpdateInterfaceRequest`</li><li>`grpc.UpdateRoutesRequest`</li><li>`grpc.WaitProcessRequest`</li><li>`grpc.WriteStreamRequest`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 |
| `kata_shim_fds`: <br> Kata containerd shim v2 open FDs. | `GAUGE` | | <ul><li>`sandbox_id`</li></ul> | 2.0.0 |
| `kata_shim_go_gc_duration_seconds`: <br> A summary of the pause duration of garbage collection cycles. | `SUMMARY` | `seconds` | <ul><li>`sandbox_id`</li></ul> | 2.0.0 |
| `kata_shim_go_goroutines`: <br> Number of goroutines that currently exist. | `GAUGE` | | <ul><li>`sandbox_id`</li></ul> | 2.0.0 |


@@ -1,93 +0,0 @@
# Background
[Research](https://www.usenix.org/conference/fast16/technical-sessions/presentation/harter) shows that the image pull operation accounts for 76% of container startup time, but only 6.4% of that data is actually read. So if we can fetch data on demand (lazy loading), container start-up can be sped up considerably. [`Nydus`](https://github.com/dragonflyoss/image-service) is a project which builds images in a new format and can fetch data on demand when the container starts.
The following benchmarking result shows the container cold-startup elapsed time on containerd, compared with the equivalent OCI image. As the OCI image size increases, the container startup time when using a `nydus` image remains very short. [Click here](https://github.com/dragonflyoss/image-service/blob/master/docs/nydus-design.md) to see the `nydus` design.
![`nydus`-performance](arch-images/nydus-performance.png)
## Proposal - Bring `lazyload` ability to Kata Containers
`Nydusd` is a FUSE/`virtiofs` daemon provided by the `nydus` project. It natively supports `PassthroughFS` and [RAFS](https://github.com/dragonflyoss/image-service/blob/master/docs/nydus-design.md) (Registry Acceleration File System), so in Kata Containers we can use `nydusd` in place of `virtiofsd` and mount the `nydus` image into the guest at the same time.
The process of creating/starting Kata Containers with `virtiofsd`:
1. When creating sandbox, the Kata Containers Containerd v2 [shim](https://github.com/kata-containers/kata-containers/blob/main/docs/design/architecture/README.md#runtime) will launch `virtiofsd` before VM starts and share directories with VM.
2. When creating container, the Kata Containers Containerd v2 shim will mount rootfs to `kataShared`(/run/kata-containers/shared/sandboxes/\<SANDBOX\>/mounts/\<CONTAINER\>/rootfs), so it can be seen at the path `/run/kata-containers/shared/containers/shared/\<CONTAINER\>/rootfs` in the guest and used as container's rootfs.
The process of creating/starting Kata Containers with `nydusd` is:
![kata-`nydus`](arch-images/kata-nydus.png)
1. When creating the sandbox, the Kata Containers Containerd v2 shim launches the `nydusd` daemon before the VM starts.
After the VM starts, `kata-agent` mounts `virtiofs` at the path `/run/kata-containers/shared`, and the Kata Containers Containerd v2 shim mounts a `passthroughfs` filesystem at `/run/kata-containers/shared/containers`.
```bash
# start nydusd
$ sandbox_id=my-test-sandbox
$ sudo /usr/local/bin/nydusd --log-level info --sock /run/vc/vm/${sandbox_id}/vhost-user-fs.sock --apisock /run/vc/vm/${sandbox_id}/api.sock
```
```bash
# source: the host sharedir which will pass through to guest
$ sudo curl -v --unix-socket /run/vc/vm/${sandbox_id}/api.sock \
-X POST "http://localhost/api/v1/mount?mountpoint=/containers" -H "accept: */*" \
-H "Content-Type: application/json" \
-d '{
"source":"/path/to/sharedir",
"fs_type":"passthrough_fs",
"config":""
}'
```
2. When creating a normal container, the Kata Containers Containerd v2 shim sends a request to `nydusd` to mount `rafs` at the path `/run/kata-containers/shared/rafs/<container_id>/lowerdir` in the guest.
```bash
# source: the metafile of nydus image
# config: the config of this image
$ sudo curl --unix-socket /run/vc/vm/${sandbox_id}/api.sock \
-X POST "http://localhost/api/v1/mount?mountpoint=/rafs/<container_id>/lowerdir" -H "accept: */*" \
-H "Content-Type: application/json" \
-d '{
"source":"/path/to/bootstrap",
"fs_type":"rafs",
"config":"{\"device\":{\"backend\":{\"type\":\"localfs\",\"config\":{\"dir\":\"blobs\"}},\"cache\":{\"type\":\"blobcache\",\"config\":{\"work_dir\":\"cache\"}}},\"mode\":\"direct\",\"digest_validate\":true}"
}'
```
The Kata Containers Containerd v2 shim will also bind mount the `snapshotdir` that `nydus-snapshotter` assigns into `sharedir`.
So in the guest, the container rootfs = overlay(`lowerdir=rafs`, `upperdir=snapshotdir/fs`, `workdir=snapshotdir/work`), roughly as sketched below.
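A minimal sketch of that mount (illustrative only; the `kata-agent` performs the real mount, and the exact paths depend on the sandbox layout):
```bash
# All paths below are assumptions used for illustration.
container_id=my-container
snapshotdir=/run/kata-containers/shared/containers/${container_id}/snapshotdir
sudo mount -t overlay overlay \
    -o lowerdir=/run/kata-containers/shared/rafs/${container_id}/lowerdir,upperdir=${snapshotdir}/fs,workdir=${snapshotdir}/work \
    /run/kata-containers/${container_id}/rootfs
```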
> How is the `rafs` info transferred from `nydus-snapshotter` to the Kata Containers Containerd v2 shim?
By default, when creating an `OCI` image container, `nydus-snapshotter` returns the [`Mount` struct slice](https://github.com/containerd/containerd/blob/main/mount/mount.go#L21) below to containerd, and containerd uses it to mount the rootfs
```
[
{
Type: "overlay",
Source: "overlay",
Options: [lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/<snapshot_A>/mnt,upperdir=/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/<snapshot_B>/fs,workdir=/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/<snapshot_B>/work],
}
]
```
We could then append the `rafs` info to `Options`, but if we did, the containerd mount would fail because containerd cannot interpret the `rafs` info. Instead, we can follow the [containerd mount helper](https://github.com/containerd/containerd/blob/main/mount/mount_linux.go#L42) convention and provide a binary called `nydus-overlayfs`. The `Mount` slice that `nydus-snapshotter` returns then becomes
```
[
{
Type: "fuse.nydus-overlayfs",
Source: "overlay",
Options: [lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/<snapshot_A>/mnt,upperdir=/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/<snapshot_B>/fs,workdir=/var/lib/containerd/io.containerd.snapshotter.v1.nydus/snapshots/<snapshot_B>/work,extraoption=base64({source:xxx,config:xxx,snapshotdir:xxx})],
}
]
```
When containerd finds that `Type` is `fuse.nydus-overlayfs`:
1. containerd calls the `mount.fuse` command;
2. `mount.fuse` in turn calls `nydus-overlayfs`;
3. `nydus-overlayfs` ignores the `extraoption` and performs the overlay mount.
Finally, the Kata Containers Containerd v2 shim parses `extraoption` to obtain the `rafs` info it needs to mount the image in the guest.
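Since `extraoption` is just base64-encoded JSON, the hand-off can be sketched in a couple of shell lines (placeholder values; `jq` is assumed to be installed):
```bash
# Encode a sample extraoption the way nydus-snapshotter would (placeholder values),
# then decode it the way the shim does.
extraoption=$(printf '{"source":"/path/to/bootstrap","config":"...","snapshotdir":"/path/to/snapshotdir"}' | base64 -w0)
echo "${extraoption}" | base64 -d | jq .
```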

View File

@@ -209,5 +209,5 @@ network accessible to the collector.
- The trace collection proposals are still being considered.
[kata-1x-tracing]: https://github.com/kata-containers/agent/blob/master/TRACING.md
[trace-forwarder]: /src/tools/trace-forwarder
[trace-forwarder]: /src/trace-forwarder
[tracing-doc-pr]: https://github.com/kata-containers/kata-containers/pull/1937

View File

@@ -2,15 +2,24 @@
## Default number of virtual CPUs
Before starting a container, the [runtime][4] reads the `default_vcpus` option
from the [configuration file][5] to determine the number of virtual CPUs
Before starting a container, the [runtime][6] reads the `default_vcpus` option
from the [configuration file][7] to determine the number of virtual CPUs
(vCPUs) needed to start the virtual machine. By default, `default_vcpus` is
equal to 1 for fast boot time and a small memory footprint per virtual machine.
Be aware that increasing this value negatively impacts the virtual machine's
boot time and memory footprint.
In general, we recommend that you do not edit this variable unless you know
what you are doing. If your container needs more than one vCPU, use
[Kubernetes `cpu` limits][1] to assign more resources.
[docker `--cpus`][1], [docker update][4], or [Kubernetes `cpu` limits][2] to
assign more resources.
*Docker*
```sh
$ docker run --name foo -ti --cpus 2 debian bash
$ docker update --cpus 4 foo
```
*Kubernetes*
@@ -40,7 +49,7 @@ $ sudo -E kubectl create -f ~/cpu-demo.yaml
## Virtual CPUs and Kubernetes pods
A Kubernetes pod is a group of one or more containers, with shared storage and
network, and a specification for how to run the containers [[specification][2]].
network, and a specification for how to run the containers [[specification][3]].
In Kata Containers this group of containers, which is called a sandbox, runs inside
the same virtual machine. If you do not specify a CPU constraint, the runtime does
not add more vCPUs and the container is not placed inside a CPU cgroup.
@@ -64,7 +73,13 @@ constraints with each container trying to consume 100% of vCPU, the resources
divide in two parts, 50% of vCPU for each container because your virtual
machine does not have enough resources to satisfy the containers' needs. If you want
to give access to a greater or lesser portion of vCPUs to a specific container,
use [Kubernetes `cpu` requests][1].
use [`docker --cpu-shares`][1] or [Kubernetes `cpu` requests][2].
*Docker*
```sh
$ docker run -ti --cpu-shares=512 debian bash
```
*Kubernetes*
@@ -94,9 +109,10 @@ $ sudo -E kubectl create -f ~/cpu-demo.yaml
Before running containers without CPU constraint, consider that your containers
are not running alone. Since your containers run inside a virtual machine, other
processes use the vCPUs as well (e.g. `systemd` and the Kata Containers
[agent][3]). In general, we recommend setting `default_vcpus` equal to 1 to
[agent][5]). In general, we recommend setting `default_vcpus` equal to 1 to
allow non-container processes to run on this vCPU and to specify a CPU
constraint for each container.
constraint for each container. If your container is already running and needs
more vCPUs, you can add more using [docker update][4].
## Container with CPU constraint
@@ -105,7 +121,7 @@ constraints using the following formula: `vCPUs = ceiling( quota / period )`, wh
`quota` specifies the number of microseconds per CPU Period that the container is
guaranteed CPU access and `period` specifies the CPU CFS scheduler period of time
in microseconds. The result determines the number of vCPU to hot plug into the
virtual machine. Once the vCPUs have been added, the [agent][3] places the
virtual machine. Once the vCPUs have been added, the [agent][5] places the
container inside a CPU cgroup. This placement allows the container to use only
its assigned resources.
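For example, Docker's `--cpus 4` sets `quota = 400000` and `period = 100000`, so `ceiling(400000 / 100000) = 4` vCPUs are hot plugged; as a quick shell check:
```sh
# Integer ceiling of quota / period, matching the formula above.
quota=400000; period=100000
echo $(( (quota + period - 1) / period ))   # prints 4
```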
@@ -122,34 +138,30 @@ the virtual machine starts with 8 vCPUs and 1 vCPU is added and assigned
to the container. Non-container processes might be able to use 8 vCPUs but they
use a maximum of 1 vCPU, hence 7 vCPUs might go unused.
## Virtual CPU handling without hotplug
In some cases, the hardware and/or software architecture being utilized does not support
hotplug. For example, Firecracker VMM does not support CPU or memory hotplug. Similarly,
the current Linux Kernel for aarch64 does not support CPU or memory hotplug. To appropriately
size the virtual machine for the workload within the container or pod, we provide a `static_sandbox_resource_mgmt`
flag within the Kata Containers configuration (a sketch for enabling it is shown after this list). When this is set, the runtime will:
- Size the VM based on the workload requirements as well as the `default_vcpus` option specified in the configuration.
- Not resize the virtual machine after it has been launched.
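As a sketch, assuming the default configuration file path (packaged installs may keep it under `/opt/kata/share/defaults/kata-containers/` instead), the flag can be enabled with:
```sh
# Assumed path; adjust for your installation.
sudo sed -i \
    's/^#\?\s*static_sandbox_resource_mgmt.*/static_sandbox_resource_mgmt = true/' \
    /etc/kata-containers/configuration.toml
```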
*Container without CPU constraint*
VM size determination varies depending on the type of container being run, and sizing information may not always
be available. If workload sizing information is not available, the virtual machine is started with
`default_vcpus` vCPUs.
```sh
$ docker run -ti debian bash -c "nproc; cat /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_*"
1 # number of vCPUs
100000 # cfs period
-1 # cfs quota
```
In the case of a pod, the initial sandbox container (pause container) typically doesn't contain any resource
information in its runtime `spec`. The upper-layer runtime
(i.e. containerd or CRI-O) may pass sandbox sizing annotations within the pause container's
`spec`. If these are provided, we will use them to appropriately size the VM. In particular,
we calculate the number of CPUs required for the workload, augment this by the `default_vcpus`
configuration option, and use the result for the virtual machine size.
*Container with CPU constraint*
In the case of a single container (i.e., not a pod), if the container specifies resource requirements,
the container's `spec` will provide the sizing information directly. If these are set, we
calculate the number of CPUs required for the workload, augment this by the `default_vcpus`
configuration option, and use the result for the virtual machine size.
```sh
$ docker run --cpus 4 -ti debian bash -c "nproc; cat /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_*"
5 # number of vCPUs
100000 # cfs period
400000 # cfs quota
```
[1]: https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource
[2]: https://kubernetes.io/docs/concepts/workloads/pods/pod/
[3]: ../../src/agent
[4]: ../../src/runtime
[5]: ../../src/runtime/README.md#configuration
[1]: https://docs.docker.com/config/containers/resource_constraints/#cpu
[2]: https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource
[3]: https://kubernetes.io/docs/concepts/workloads/pods/pod/
[4]: https://docs.docker.com/engine/reference/commandline/update/
[5]: ../../src/agent
[6]: ../../src/runtime
[7]: ../../src/runtime/README.md#configuration

View File

@@ -1,37 +0,0 @@
# Design Doc for Kata Containers' VCPUs Pinning Feature
## Background
Currently, the vCPU threads of Kata Containers are scheduled randomly across CPUs. Each pod can request a specific set of CPUs, which we call the CPU set (the same meaning as the CPU set in Linux cgroups).
If the number of vCPU threads equals the number of CPUs claimed in the CPU set, we can pin each vCPU thread to one specified CPU to reduce the cost of random scheduling.
## Detailed Design
### Passing Config Parameters
Two ways are provided to use this vCPU thread pinning feature: through the `QEMU` configuration file and through annotations. In both cases the pinning parameter ends up in `HypervisorConfig`.
### Related Linux Thread Scheduling API
| API Info | Value |
|-------------------|-----------------------------------------------------------|
| Package | `golang.org/x/sys/unix` |
| Method | `unix.SchedSetaffinity(thread_id, &unixCPUSet)` |
| Official Doc Page | https://pkg.go.dev/golang.org/x/sys/unix#SchedSetaffinity |
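For intuition, the same affinity change can be reproduced from the shell with `taskset`, which wraps the same `sched_setaffinity(2)` call (illustrative only; the runtime calls the Go API above directly):
```bash
# Assumptions for this sketch: a QEMU process is running, and we grab one of its
# threads; in the runtime the actual vCPU thread IDs are known precisely.
qemu_pid=$(pgrep -f qemu-system | head -n1)
tid=$(ls "/proc/${qemu_pid}/task" | head -n1)
sudo taskset -cp 2 "${tid}"   # pin thread ${tid} to CPU 2
```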
### When Is vCPU Pinning Checked?
As described in the Background section, when `num(vCPU threads) == num(CPUs in CPU set)`, we pin each vCPU thread to a specified CPU; when this condition no longer holds, we restore the original random scheduling pattern.
So when may `num(CPUs in CPU set)` change? There are five possible scenarios:
| Possible scenario                 | Related code                                  |
|-----------------------------------|-----------------------------------------------|
| Creating a container              | `Sandbox.go`, method `CreateContainer`        |
| Starting a container              | `Sandbox.go`, method `StartContainer`         |
| Deleting a container              | `Sandbox.go`, method `DeleteContainer`        |
| Updating a container              | `Sandbox.go`, method `UpdateContainer`        |
| Creating multiple containers      | `Sandbox.go`, method `createContainers`       |
### Core Pinning Logic
We can split the whole process into the following steps. The related methods are `checkVCPUsPinning` and `resetVCPUsPinning`, in file `Sandbox.go`.
![](arch-images/vcpus-pinning-process.png)

View File

@@ -39,9 +39,9 @@ Details of each solution and a summary are provided below.
Kata Containers with QEMU has complete compatibility with Kubernetes.
Depending on the host architecture, Kata Containers supports various machine types,
for example `q35` on x86 systems, `virt` on ARM systems and `pseries` on IBM Power systems. The default Kata Containers
machine type is `q35`. The machine type and its [`Machine accelerators`](#machine-accelerators) can
be changed by editing the runtime [`configuration`](architecture/README.md#configuration) file.
for example `pc` and `q35` on x86 systems, `virt` on ARM systems and `pseries` on IBM Power systems. The default Kata Containers
machine type is `pc`. The machine type and its [`Machine accelerators`](#machine-accelerators) can
be changed by editing the runtime [`configuration`](./architecture.md/#configuration) file.
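As a sketch (assuming the default configuration file path; adjust for your installation), the machine type can be switched like this:
```bash
# "q35" is one of the machine types listed above; the config path is an assumption.
sudo sed -i 's/^machine_type = .*/machine_type = "q35"/' \
    /etc/kata-containers/configuration.toml
```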
Devices and features used:
- virtio VSOCK or virtio serial
@@ -60,8 +60,9 @@ Machine accelerators are architecture specific and can be used to improve the pe
and enable specific features of the machine types. The following machine accelerators
are used in Kata Containers:
- NVDIMM: This machine accelerator is x86 specific and only supported by `q35` machine types.
`nvdimm` is used to provide the root filesystem as a persistent memory device to the Virtual Machine.
- NVDIMM: This machine accelerator is x86 specific and only supported by `pc` and
`q35` machine types. `nvdimm` is used to provide the root filesystem as a persistent
memory device to the Virtual Machine.
#### Hotplug devices
@@ -110,7 +111,7 @@ Devices and features used:
- VFIO
- hotplug
- seccomp filters
- [HTTP OpenAPI](https://github.com/cloud-hypervisor/cloud-hypervisor/blob/main/vmm/src/api/openapi/cloud-hypervisor.yaml)
- [HTTP OpenAPI](https://github.com/cloud-hypervisor/cloud-hypervisor/blob/master/vmm/src/api/openapi/cloud-hypervisor.yaml)
### Summary

View File

@@ -5,7 +5,7 @@
- [Run Kata containers with `crictl`](run-kata-with-crictl.md)
- [Run Kata Containers with Kubernetes](run-kata-with-k8s.md)
- [How to use Kata Containers and Containerd](containerd-kata.md)
- [How to use Kata Containers and containerd with Kubernetes](how-to-use-k8s-with-containerd-and-kata.md)
- [How to use Kata Containers and CRI (containerd) with Kubernetes](how-to-use-k8s-with-cri-containerd-and-kata.md)
- [Kata Containers and service mesh for Kubernetes](service-mesh.md)
- [How to import Kata Containers logs into Fluentd](how-to-import-kata-logs-with-fluentd.md)
@@ -15,11 +15,6 @@
- `qemu`
- `cloud-hypervisor`
- `firecracker`
In the case of `firecracker`, the use of a block device `snapshotter` is needed
for the VM rootfs. Refer to the following guide for additional configuration
steps:
- [Setup Kata containers with `firecracker`](how-to-use-kata-containers-with-firecracker.md)
- `ACRN`
While `qemu`, `cloud-hypervisor` and `firecracker` work out of the box with an installation of Kata,
@@ -41,10 +36,3 @@
- [How to use hotplug memory on arm64 in Kata Containers](how-to-hotplug-memory-arm64.md)
- [How to setup swap devices in guest kernel](how-to-setup-swap-devices-in-guest-kernel.md)
- [How to run rootless vmm](how-to-run-rootless-vmm.md)
- [How to run Docker with Kata Containers](how-to-run-docker-with-kata.md)
- [How to run Kata Containers with `nydus`](how-to-use-virtio-fs-nydus-with-kata.md)
- [How to run Kata Containers with AMD SEV-SNP](how-to-run-kata-containers-with-SNP-VMs.md)
## Confidential Containers
- [How to use build and test the Confidential Containers `CCv0` proof of concept](how-to-build-and-test-ccv0.md)
- [How to generate a Kata Containers payload for the Confidential Containers Operator](how-to-generate-a-kata-containers-payload-for-the-confidential-containers-operator.md)

View File

@@ -1,640 +0,0 @@
#!/bin/bash -e
#
# Copyright (c) 2021, 2022 IBM Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
# Disclaimer: This script is a work in progress for supporting the CCv0 prototype.
# It shouldn't be considered supported by the Kata Containers community, or anyone else.
# Based on https://github.com/kata-containers/kata-containers/blob/main/docs/Developer-Guide.md,
# but with elements of the tests/.ci scripts used
readonly script_name="$(basename "${BASH_SOURCE[0]}")"
# By default in Golang >= 1.16 GO111MODULE is set to "on", but not all modules support it, so overwrite to "auto"
export GO111MODULE="auto"
# Setup kata containers environments if not set - we default to use containerd
export CRI_CONTAINERD=${CRI_CONTAINERD:-"yes"}
export CRI_RUNTIME=${CRI_RUNTIME:-"containerd"}
export CRIO=${CRIO:-"no"}
export KATA_HYPERVISOR="${KATA_HYPERVISOR:-qemu}"
export KUBERNETES=${KUBERNETES:-"no"}
export AGENT_INIT="${AGENT_INIT:-${TEST_INITRD:-no}}"
export AA_KBC="${AA_KBC:-offline_fs_kbc}"
# Allow the user to overwrite the default repo and branch names if they want to build from a fork
export katacontainers_repo="${katacontainers_repo:-github.com/kata-containers/kata-containers}"
export katacontainers_branch="${katacontainers_branch:-CCv0}"
export kata_default_branch=${katacontainers_branch}
export tests_repo="${tests_repo:-github.com/kata-containers/tests}"
export tests_branch="${tests_branch:-CCv0}"
export target_branch=${tests_branch} # kata-containers/ci/lib.sh uses target branch var to check out tests repo
# if .bash_profile exists then use it, otherwise fall back to .profile
export PROFILE="${HOME}/.profile"
if [ -r "${HOME}/.bash_profile" ]; then
export PROFILE="${HOME}/.bash_profile"
fi
# Prevent a "PS1: unbound variable" error
export PS1=${PS1:-}
# Create a bunch of common, derived values up front so we don't need to create them in all the different functions
. ${PROFILE}
if [ -z "${GOPATH:-}" ]; then
export GOPATH=${HOME}/go
fi
export tests_repo_dir="${GOPATH}/src/${tests_repo}"
export katacontainers_repo_dir="${GOPATH}/src/${katacontainers_repo}"
export ROOTFS_DIR="${katacontainers_repo_dir}/tools/osbuilder/rootfs-builder/rootfs"
export PULL_IMAGE="${PULL_IMAGE:-quay.io/kata-containers/confidential-containers:signed}" # Doesn't need authentication
export CONTAINER_ID="${CONTAINER_ID:-0123456789}"
source /etc/os-release || source /usr/lib/os-release
grep -Eq "\<fedora\>" /etc/os-release 2> /dev/null && export USE_PODMAN=true
# If we've already checked out the test repo then source the confidential scripts
if [ "${KUBERNETES}" == "yes" ]; then
export BATS_TEST_DIRNAME="${tests_repo_dir}/integration/kubernetes/confidential"
[ -d "${BATS_TEST_DIRNAME}" ] && source "${BATS_TEST_DIRNAME}/lib.sh"
else
export BATS_TEST_DIRNAME="${tests_repo_dir}/integration/containerd/confidential"
[ -d "${BATS_TEST_DIRNAME}" ] && source "${BATS_TEST_DIRNAME}/lib.sh"
fi
[ -d "${BATS_TEST_DIRNAME}" ] && source "${BATS_TEST_DIRNAME}/../../confidential/lib.sh"
export RUNTIME_CONFIG_PATH=/etc/kata-containers/configuration.toml
usage() {
exit_code="$1"
cat <<EOF
Overview:
Build and test kata containers from source
Optionally set kata-containers and tests repo and branch as exported variables before running
e.g. export katacontainers_repo=github.com/stevenhorsman/kata-containers && export katacontainers_branch=kata-ci-from-fork && export tests_repo=github.com/stevenhorsman/tests && export tests_branch=kata-ci-from-fork && ~/${script_name} build_and_install_all
Usage:
${script_name} [options] <command>
Commands:
- agent_create_container: Run CreateContainer command against the agent with agent-ctl
- agent_pull_image: Run PullImage command against the agent with agent-ctl
- all: Build and install everything, test kata with containerd and capture the logs
- build_and_add_agent_to_rootfs: Builds the kata-agent and adds it to the rootfs
- build_and_install_all: Build and install everything
- build_and_install_rootfs: Builds and installs the rootfs image
- build_kata_runtime: Build and install the kata runtime
- build_cloud_hypervisor: Checkout, patch, build and install Cloud Hypervisor
- build_qemu: Checkout, patch, build and install QEMU
- configure: Configure Kata to use rootfs and enable debug
- connect_to_ssh_demo_pod: Ssh into the ssh demo pod, showing that the decryption succeeded
- copy_signature_files_to_guest: Copies signature verification files to guest
- create_rootfs: Create a local rootfs
- crictl_create_cc_container: Use crictl to create a new busybox container in the kata cc pod
- crictl_create_cc_pod: Use crictl to create a new kata cc pod
- crictl_delete_cc: Use crictl to delete the kata cc pod sandbox and container in it
- help: Display this help
- init_kubernetes: initialize a Kubernetes cluster on this system
- initialize: Install dependencies and check out kata-containers source
- install_guest_kernel: Setup, build and install the guest kernel
- kubernetes_create_cc_pod: Create a Kata CC runtime busybox-based pod in Kubernetes
- kubernetes_create_ssh_demo_pod: Create a Kata CC runtime pod based on the ssh demo
- kubernetes_delete_cc_pod: Delete the Kata CC runtime busybox-based pod in Kubernetes
- kubernetes_delete_ssh_demo_pod: Delete the Kata CC runtime pod based on the ssh demo
- open_kata_shell: Open a shell into the kata runtime
- rebuild_and_install_kata: Rebuild the kata runtime and agent and build and install the image
- shim_pull_image: Run PullImage command against the shim with ctr
- test_capture_logs: Test using kata with containerd and capture the logs in the user's home directory
- test: Test using kata with containerd
Options:
-d: Enable debug
-h: Display this help
EOF
# If the script is sourced, don't exit as this would exit the main shell; just return instead
[[ $_ != $0 ]] && return "$exit_code" || exit "$exit_code"
}
build_and_install_all() {
initialize
build_and_install_kata_runtime
configure
create_a_local_rootfs
build_and_install_rootfs
install_guest_kernel_image
case "$KATA_HYPERVISOR" in
"qemu")
build_qemu
;;
"cloud-hypervisor")
build_cloud_hypervisor
;;
*)
echo "Invalid option: $KATA_HYPERVISOR is not supported." >&2
;;
esac
check_kata_runtime
if [ "${KUBERNETES}" == "yes" ]; then
init_kubernetes
fi
}
rebuild_and_install_kata() {
checkout_tests_repo
checkout_kata_containers_repo
build_and_install_kata_runtime
build_and_add_agent_to_rootfs
build_and_install_rootfs
check_kata_runtime
}
# Based on the jenkins_job_build.sh script in kata-containers/tests/.ci - checks out source code and installs dependencies
initialize() {
# We need git to checkout and bootstrap the ci scripts and some other packages used in testing
sudo apt-get update && sudo apt-get install -y curl git qemu-utils
grep -qxF "export GOPATH=\${HOME}/go" "${PROFILE}" || echo "export GOPATH=\${HOME}/go" >> "${PROFILE}"
grep -qxF "export GOROOT=/usr/local/go" "${PROFILE}" || echo "export GOROOT=/usr/local/go" >> "${PROFILE}"
grep -qxF "export PATH=\${GOPATH}/bin:/usr/local/go/bin:\${PATH}" "${PROFILE}" || echo "export PATH=\${GOPATH}/bin:/usr/local/go/bin:\${PATH}" >> "${PROFILE}"
# Load the new go and PATH parameters from the profile
. ${PROFILE}
mkdir -p "${GOPATH}"
checkout_tests_repo
pushd "${tests_repo_dir}"
local ci_dir_name=".ci"
sudo -E PATH=$PATH -s "${ci_dir_name}/install_go.sh" -p -f
sudo -E PATH=$PATH -s "${ci_dir_name}/install_rust.sh"
# Need to change ownership of rustup so later process can create temp files there
sudo chown -R ${USER}:${USER} "${HOME}/.rustup"
checkout_kata_containers_repo
# Run setup, but don't install kata as we will build it ourselves in locations matching the developer guide
export INSTALL_KATA="no"
sudo -E PATH=$PATH -s ${ci_dir_name}/setup.sh
# Reload the profile to pick up installed dependencies
. ${PROFILE}
popd
}
checkout_tests_repo() {
echo "Creating repo: ${tests_repo} and branch ${tests_branch} into ${tests_repo_dir}..."
# Due to git https://github.blog/2022-04-12-git-security-vulnerability-announced/ the tests repo needs
# to be owned by root as it is re-checked out in rootfs.sh
mkdir -p $(dirname "${tests_repo_dir}")
[ -d "${tests_repo_dir}" ] || sudo -E git clone "https://${tests_repo}.git" "${tests_repo_dir}"
sudo -E chown -R root:root "${tests_repo_dir}"
pushd "${tests_repo_dir}"
sudo -E git fetch
if [ -n "${tests_branch}" ]; then
sudo -E git checkout ${tests_branch}
fi
sudo -E git reset --hard origin/${tests_branch}
popd
source "${BATS_TEST_DIRNAME}/lib.sh"
source "${BATS_TEST_DIRNAME}/../../confidential/lib.sh"
}
# Note: clone_katacontainers_repo uses go, so that needs to be installed first
checkout_kata_containers_repo() {
source "${tests_repo_dir}/.ci/lib.sh"
echo "Creating repo: ${katacontainers_repo} and branch ${kata_default_branch} into ${katacontainers_repo_dir}..."
clone_katacontainers_repo
sudo -E chown -R ${USER}:${USER} "${katacontainers_repo_dir}"
}
build_and_install_kata_runtime() {
pushd ${katacontainers_repo_dir}/src/runtime
make clean && make DEFAULT_HYPERVISOR=${KATA_HYPERVISOR} && sudo -E PATH=$PATH make DEFAULT_HYPERVISOR=${KATA_HYPERVISOR} install
popd
}
configure() {
configure_kata_to_use_rootfs
enable_full_debug
enable_agent_console
# Switch image offload to true in kata config
switch_image_service_offload "on"
configure_cc_containerd
# From crictl v1.24.1 the default timeout leads to the pod creation failing, so update it
sudo crictl config --set timeout=10
}
configure_kata_to_use_rootfs() {
sudo mkdir -p /etc/kata-containers/
sudo install -o root -g root -m 0640 /usr/share/defaults/kata-containers/configuration.toml /etc/kata-containers
sudo sed -i 's/^\(initrd =.*\)/# \1/g' ${RUNTIME_CONFIG_PATH}
}
build_and_add_agent_to_rootfs() {
build_a_custom_kata_agent
add_custom_agent_to_rootfs
}
build_a_custom_kata_agent() {
# Install libseccomp for static linking
sudo -E PATH=$PATH GOPATH=$GOPATH ${katacontainers_repo_dir}/ci/install_libseccomp.sh /tmp/kata-libseccomp /tmp/kata-gperf
export LIBSECCOMP_LINK_TYPE=static
export LIBSECCOMP_LIB_PATH=/tmp/kata-libseccomp/lib
. "$HOME/.cargo/env"
pushd ${katacontainers_repo_dir}/src/agent
sudo -E PATH=$PATH make
ARCH=$(uname -m)
[ ${ARCH} == "ppc64le" ] || [ ${ARCH} == "s390x" ] && export LIBC=gnu || export LIBC=musl
[ ${ARCH} == "ppc64le" ] && export ARCH=powerpc64le
# Run a make install into the rootfs directory in order to create the kata-agent.service file which is required when we add to the rootfs
sudo -E PATH=$PATH make install DESTDIR="${ROOTFS_DIR}"
popd
}
create_a_local_rootfs() {
sudo rm -rf "${ROOTFS_DIR}"
pushd ${katacontainers_repo_dir}/tools/osbuilder/rootfs-builder
export distro="ubuntu"
[[ -z "${USE_PODMAN:-}" ]] && use_docker="${use_docker:-1}"
sudo -E OS_VERSION="${OS_VERSION:-}" GOPATH=$GOPATH EXTRA_PKGS="vim iputils-ping net-tools" DEBUG="${DEBUG:-}" USE_DOCKER="${use_docker:-}" SKOPEO=${SKOPEO:-} AA_KBC=${AA_KBC:-} UMOCI=yes SECCOMP=yes ./rootfs.sh -r ${ROOTFS_DIR} ${distro}
# Install_rust.sh during rootfs.sh switches us to the main branch of the tests repo, so switch back now
pushd "${tests_repo_dir}"
sudo -E git checkout ${tests_branch}
popd
# During the ./rootfs.sh call the kata agent is built as root, so we need to update the permissions, so we can rebuild it
sudo chown -R ${USER}:${USER} "${katacontainers_repo_dir}/src/agent/"
popd
}
add_custom_agent_to_rootfs() {
pushd ${katacontainers_repo_dir}/tools/osbuilder/rootfs-builder
ARCH=$(uname -m)
[ ${ARCH} == "ppc64le" ] || [ ${ARCH} == "s390x" ] && export LIBC=gnu || export LIBC=musl
[ ${ARCH} == "ppc64le" ] && export ARCH=powerpc64le
sudo install -o root -g root -m 0550 -t ${ROOTFS_DIR}/usr/bin ${katacontainers_repo_dir}/src/agent/target/${ARCH}-unknown-linux-${LIBC}/release/kata-agent
sudo install -o root -g root -m 0440 ../../../src/agent/kata-agent.service ${ROOTFS_DIR}/usr/lib/systemd/system/
sudo install -o root -g root -m 0440 ../../../src/agent/kata-containers.target ${ROOTFS_DIR}/usr/lib/systemd/system/
popd
}
build_and_install_rootfs() {
build_rootfs_image
install_rootfs_image
}
build_rootfs_image() {
pushd ${katacontainers_repo_dir}/tools/osbuilder/image-builder
# Logic from install_kata_image.sh - if we aren't using podman (i.e. not on a Fedora-like distro), then use docker
[[ -z "${USE_PODMAN:-}" ]] && use_docker="${use_docker:-1}"
sudo -E USE_DOCKER="${use_docker:-}" ./image_builder.sh ${ROOTFS_DIR}
popd
}
install_rootfs_image() {
pushd ${katacontainers_repo_dir}/tools/osbuilder/image-builder
local commit=$(git log --format=%h -1 HEAD)
local date=$(date +%Y-%m-%d-%T.%N%z)
local image="kata-containers-${date}-${commit}"
sudo install -o root -g root -m 0640 -D kata-containers.img "/usr/share/kata-containers/${image}"
(cd /usr/share/kata-containers && sudo ln -sf "$image" kata-containers.img)
echo "Built Rootfs from ${ROOTFS_DIR} to /usr/share/kata-containers/${image}"
ls -al /usr/share/kata-containers/
popd
}
install_guest_kernel_image() {
pushd ${katacontainers_repo_dir}/tools/packaging/kernel
sudo -E PATH=$PATH ./build-kernel.sh setup
sudo -E PATH=$PATH ./build-kernel.sh build
sudo chmod u+wrx /usr/share/kata-containers/ # Give user permission to install kernel
sudo -E PATH=$PATH ./build-kernel.sh install
popd
}
build_qemu() {
${tests_repo_dir}/.ci/install_virtiofsd.sh
${tests_repo_dir}/.ci/install_qemu.sh
}
build_cloud_hypervisor() {
${tests_repo_dir}/.ci/install_virtiofsd.sh
${tests_repo_dir}/.ci/install_cloud_hypervisor.sh
}
check_kata_runtime() {
sudo kata-runtime check
}
k8s_pod_file="${HOME}/busybox-cc.yaml"
init_kubernetes() {
# Check whether kubeadm is installed, and install it otherwise
if ! [ -x "$(command -v kubeadm)" ]; then
pushd "${tests_repo_dir}/.ci"
sudo -E PATH=$PATH -s install_kubernetes.sh
if [ "${CRI_CONTAINERD}" == "yes" ]; then
sudo -E PATH=$PATH -s "configure_containerd_for_kubernetes.sh"
fi
popd
fi
# If kubernetes init has previously run we need to clean it by removing the image and resetting k8s
local cid=$(sudo docker ps -a -q -f name=^/kata-registry$)
if [ -n "${cid}" ]; then
sudo docker stop ${cid} && sudo docker rm ${cid}
fi
local k8s_nodes=$(kubectl get nodes -o name 2>/dev/null || true)
if [ -n "${k8s_nodes}" ]; then
sudo kubeadm reset -f
fi
export CI="true" && sudo -E PATH=$PATH -s ${tests_repo_dir}/integration/kubernetes/init.sh
sudo chown ${USER}:$(id -g -n ${USER}) "$HOME/.kube/config"
cat << EOF > ${k8s_pod_file}
apiVersion: v1
kind: Pod
metadata:
name: busybox-cc
spec:
runtimeClassName: kata
containers:
- name: nginx
image: quay.io/kata-containers/confidential-containers:signed
imagePullPolicy: Always
EOF
}
call_kubernetes_create_cc_pod() {
kubernetes_create_cc_pod ${k8s_pod_file}
}
call_kubernetes_delete_cc_pod() {
pod_name=$(kubectl get pods -o jsonpath='{.items..metadata.name}')
kubernetes_delete_cc_pod $pod_name
}
call_kubernetes_create_ssh_demo_pod() {
setup_decryption_files_in_guest
kubernetes_create_ssh_demo_pod
}
call_connect_to_ssh_demo_pod() {
connect_to_ssh_demo_pod
}
call_kubernetes_delete_ssh_demo_pod() {
pod=$(kubectl get pods -o jsonpath='{.items..metadata.name}')
kubernetes_delete_ssh_demo_pod $pod
}
crictl_sandbox_name=kata-cc-busybox-sandbox
call_crictl_create_cc_pod() {
# Update iptables to allow forwarding to the cni0 bridge avoiding issues caused by the docker0 bridge
sudo iptables -P FORWARD ACCEPT
# get_pod_config in tests_common exports `pod_config` that points to the prepared pod config yaml
get_pod_config
crictl_delete_cc_pod_if_exists "${crictl_sandbox_name}"
crictl_create_cc_pod "${pod_config}"
sudo crictl pods
}
call_crictl_create_cc_container() {
# Create container configuration yaml based on our test copy of busybox
# get_pod_config in tests_common exports `pod_config` that points to the prepared pod config yaml
get_pod_config
local container_config="${FIXTURES_DIR}/${CONTAINER_CONFIG_FILE:-container-config.yaml}"
local pod_name=${crictl_sandbox_name}
crictl_create_cc_container ${pod_name} ${pod_config} ${container_config}
sudo crictl ps -a
}
crictl_delete_cc() {
crictl_delete_cc_pod ${crictl_sandbox_name}
}
test_kata_runtime() {
echo "Running ctr with the kata runtime..."
local test_image="quay.io/kata-containers/confidential-containers:signed"
if [ -z "$(ctr images ls -q name=="${test_image}")" ]; then
sudo ctr image pull "${test_image}"
fi
sudo ctr run --runtime "io.containerd.kata.v2" --rm -t "${test_image}" test-kata uname -a
}
run_kata_and_capture_logs() {
echo "Clearing systemd journal..."
sudo systemctl stop systemd-journald
sudo rm -f /var/log/journal/*/* /run/log/journal/*/*
sudo systemctl start systemd-journald
test_kata_runtime
echo "Collecting logs..."
sudo journalctl -q -o cat -a -t kata-runtime > ${HOME}/kata-runtime.log
sudo journalctl -q -o cat -a -t kata > ${HOME}/shimv2.log
echo "Logs output to ${HOME}/kata-runtime.log and ${HOME}/shimv2.log"
}
get_ids() {
guest_cid=$(sudo ss -H --vsock | awk '{print $6}' | cut -d: -f1)
sandbox_id=$(ps -ef | grep containerd-shim-kata-v2 | egrep -o "id [^,][^,].* " | awk '{print $2}')
}
open_kata_shell() {
get_ids
sudo -E "PATH=$PATH" kata-runtime exec ${sandbox_id}
}
build_bundle_dir_if_necessary() {
bundle_dir="/tmp/bundle"
if [ ! -d "${bundle_dir}" ]; then
rootfs_dir="$bundle_dir/rootfs"
image="quay.io/kata-containers/confidential-containers:signed"
mkdir -p "$rootfs_dir" && (cd "$bundle_dir" && runc spec)
sudo docker export $(sudo docker create "$image") | tar -C "$rootfs_dir" -xvf -
fi
# There were errors in create container agent-ctl command due to /bin/ seemingly not being on the path, so hardcode it
sudo sed -i -e 's%^\(\t*\)"sh"$%\1"/bin/sh"%g' "${bundle_dir}/config.json"
}
build_agent_ctl() {
cd ${GOPATH}/src/${katacontainers_repo}/src/tools/agent-ctl/
if [ -e "${HOME}/.cargo/registry" ]; then
sudo chown -R ${USER}:${USER} "${HOME}/.cargo/registry"
fi
sudo -E PATH=$PATH -s make
ARCH=$(uname -m)
[ ${ARCH} == "ppc64le" ] || [ ${ARCH} == "s390x" ] && export LIBC=gnu || export LIBC=musl
[ ${ARCH} == "ppc64le" ] && export ARCH=powerpc64le
cd "./target/${ARCH}-unknown-linux-${LIBC}/release/"
}
run_agent_ctl_command() {
get_ids
build_bundle_dir_if_necessary
command=$1
# If kata-agent-ctl is pre-built in this directory, use it directly; otherwise build it first and switch to the release directory
if [ ! -x kata-agent-ctl ]; then
build_agent_ctl
fi
./kata-agent-ctl -l debug connect --bundle-dir "${bundle_dir}" --server-address "vsock://${guest_cid}:1024" -c "${command}"
}
agent_pull_image() {
run_agent_ctl_command "PullImage image=${PULL_IMAGE} cid=${CONTAINER_ID} source_creds=${SOURCE_CREDS}"
}
agent_create_container() {
run_agent_ctl_command "CreateContainer cid=${CONTAINER_ID}"
}
shim_pull_image() {
get_ids
local ctr_shim_command="sudo ctr --namespace k8s.io shim --id ${sandbox_id} pull-image ${PULL_IMAGE} ${CONTAINER_ID}"
echo "Issuing command '${ctr_shim_command}'"
${ctr_shim_command}
}
call_copy_signature_files_to_guest() {
# TODO #5173 - remove this once the kernel_params aren't ignored by the agent config
export DEBUG_CONSOLE="true"
if [ "${SKOPEO:-}" = "yes" ]; then
add_kernel_params "agent.container_policy_file=/etc/containers/quay_verification/quay_policy.json"
setup_skopeo_signature_files_in_guest
else
# TODO #4888 - set config to specifically enable signature verification to be on in ImageClient
setup_offline_fs_kbc_signature_files_in_guest
fi
}
main() {
while getopts "dh" opt; do
case "$opt" in
d)
export DEBUG="-d"
set -x
;;
h)
usage 0
;;
\?)
echo "Invalid option: -$OPTARG" >&2
usage 1
;;
esac
done
shift $((OPTIND - 1))
subcmd="${1:-}"
[ -z "${subcmd}" ] && usage 1
case "${subcmd}" in
all)
build_and_install_all
run_kata_and_capture_logs
;;
build_and_install_all)
build_and_install_all
;;
rebuild_and_install_kata)
rebuild_and_install_kata
;;
initialize)
initialize
;;
build_kata_runtime)
build_and_install_kata_runtime
;;
configure)
configure
;;
create_rootfs)
create_a_local_rootfs
;;
build_and_add_agent_to_rootfs)
build_and_add_agent_to_rootfs
;;
build_and_install_rootfs)
build_and_install_rootfs
;;
install_guest_kernel)
install_guest_kernel_image
;;
build_cloud_hypervisor)
build_cloud_hypervisor
;;
build_qemu)
build_qemu
;;
init_kubernetes)
init_kubernetes
;;
crictl_create_cc_pod)
call_crictl_create_cc_pod
;;
crictl_create_cc_container)
call_crictl_create_cc_container
;;
crictl_delete_cc)
crictl_delete_cc
;;
kubernetes_create_cc_pod)
call_kubernetes_create_cc_pod
;;
kubernetes_delete_cc_pod)
call_kubernetes_delete_cc_pod
;;
kubernetes_create_ssh_demo_pod)
call_kubernetes_create_ssh_demo_pod
;;
connect_to_ssh_demo_pod)
call_connect_to_ssh_demo_pod
;;
kubernetes_delete_ssh_demo_pod)
call_kubernetes_delete_ssh_demo_pod
;;
test)
test_kata_runtime
;;
test_capture_logs)
run_kata_and_capture_logs
;;
open_kata_console)
open_kata_console
;;
open_kata_shell)
open_kata_shell
;;
agent_pull_image)
agent_pull_image
;;
shim_pull_image)
shim_pull_image
;;
agent_create_container)
agent_create_container
;;
copy_signature_files_to_guest)
call_copy_signature_files_to_guest
;;
*)
usage 1
;;
esac
}
main "$@"

View File

@@ -40,7 +40,7 @@ use `RuntimeClass` instead of the deprecated annotations.
### Containerd Runtime V2 API: Shim V2 API
The [`containerd-shim-kata-v2` (short as `shimv2` in this documentation)](../../src/runtime/cmd/containerd-shim-kata-v2/)
implements the [Containerd Runtime V2 (Shim API)](https://github.com/containerd/containerd/tree/main/runtime/v2) for Kata.
implements the [Containerd Runtime V2 (Shim API)](https://github.com/containerd/containerd/tree/master/runtime/v2) for Kata.
With `shimv2`, Kubernetes can launch Pod and OCI-compatible containers with one shim per Pod. Prior to `shimv2`, `2N+1`
shims were needed (i.e. a `containerd-shim` and a `kata-shim` for each container, plus one pair for the Pod sandbox itself); with `shimv2`, no standalone `kata-proxy`
process is used, even when VSOCK is not available.
@@ -72,13 +72,14 @@ $ command -v containerd
### Install CNI plugins
> **Note:** You do not need to install CNI plugins if you do not want to use containerd with Kubernetes.
> If you have installed Kubernetes with `kubeadm`, you might have already installed the CNI plugins.
You can manually install CNI plugins as follows:
```bash
$ git clone https://github.com/containernetworking/plugins.git
$ pushd plugins
$ go get github.com/containernetworking/plugins
$ pushd $GOPATH/src/github.com/containernetworking/plugins
$ ./build_linux.sh
$ sudo mkdir /opt/cni
$ sudo cp -r bin /opt/cni/
@@ -93,8 +94,8 @@ $ popd
You can install the `cri-tools` from source code:
```bash
$ git clone https://github.com/kubernetes-sigs/cri-tools.git
$ pushd cri-tools
$ go get github.com/kubernetes-incubator/cri-tools
$ pushd $GOPATH/src/github.com/kubernetes-incubator/cri-tools
$ make
$ sudo -E make install
$ popd
@@ -130,42 +131,74 @@ For
The `RuntimeClass` is suggested.
The following configuration includes two runtime classes:
The following configuration includes three runtime classes:
- `plugins.cri.containerd.runtimes.runc`: the runc runtime, which is the default.
- `plugins.cri.containerd.runtimes.kata`: The function in containerd (reference [the document here](https://github.com/containerd/containerd/tree/main/runtime/v2#binary-naming))
- `plugins.cri.containerd.runtimes.kata`: The function in containerd (reference [the document here](https://github.com/containerd/containerd/tree/master/runtime/v2#binary-naming))
where the dot-connected string `io.containerd.kata.v2` is translated to `containerd-shim-kata-v2` (i.e. the
binary name of the Kata implementation of [Containerd Runtime V2 (Shim API)](https://github.com/containerd/containerd/tree/main/runtime/v2)).
binary name of the Kata implementation of [Containerd Runtime V2 (Shim API)](https://github.com/containerd/containerd/tree/master/runtime/v2)).
- `plugins.cri.containerd.runtimes.katacli`: `containerd-shim-runc-v1` invoking `kata-runtime`, which is the legacy process.
```toml
[plugins.cri.containerd]
no_pivot = false
[plugins.cri.containerd.runtimes]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
privileged_without_host_devices = false
runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
BinaryName = ""
CriuImagePath = ""
CriuPath = ""
CriuWorkPath = ""
IoGid = 0
[plugins.cri.containerd.runtimes.runc]
runtime_type = "io.containerd.runc.v1"
[plugins.cri.containerd.runtimes.runc.options]
NoPivotRoot = false
NoNewKeyring = false
ShimCgroup = ""
IoUid = 0
IoGid = 0
BinaryName = "runc"
Root = ""
CriuPath = ""
SystemdCgroup = false
[plugins.cri.containerd.runtimes.kata]
runtime_type = "io.containerd.kata.v2"
privileged_without_host_devices = true
pod_annotations = ["io.katacontainers.*"]
container_annotations = ["io.katacontainers.*"]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata.options]
ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration.toml"
[plugins.cri.containerd.runtimes.katacli]
runtime_type = "io.containerd.runc.v1"
[plugins.cri.containerd.runtimes.katacli.options]
NoPivotRoot = false
NoNewKeyring = false
ShimCgroup = ""
IoUid = 0
IoGid = 0
BinaryName = "/usr/bin/kata-runtime"
Root = ""
CriuPath = ""
SystemdCgroup = false
```
From Containerd v1.2.4 and Kata v1.6.0, there is a new runtime option supported, which allows you to specify a specific Kata configuration file as follows:
```toml
[plugins.cri.containerd.runtimes.kata]
runtime_type = "io.containerd.kata.v2"
privileged_without_host_devices = true
[plugins.cri.containerd.runtimes.kata.options]
ConfigPath = "/etc/kata-containers/config.toml"
```
`privileged_without_host_devices` tells containerd that a privileged Kata container should not have direct access to all host devices. If unset, containerd will pass all host devices to the Kata container, which may cause security issues.
`pod_annotations` is the list of pod annotations passed to both the pod sandbox as well as container through the OCI config.
`container_annotations` is the list of container annotations passed through to the OCI config of the containers.
This `ConfigPath` option is optional. If you do not specify it, shimv2 first tries to get the configuration file path from the environment variable `KATA_CONF_FILE`. If neither is set, shimv2 uses the default Kata configuration file paths (`/etc/kata-containers/configuration.toml` and `/usr/share/defaults/kata-containers/configuration.toml`).
If you use containerd older than v1.2.4 or Kata older than v1.6.0 and want to specify a configuration file, you can use the following workaround: shimv2 accepts the environment variable `KATA_CONF_FILE` for the configuration file path, so you can create a
shell script with the following:
```bash
#!/bin/bash
KATA_CONF_FILE=/etc/kata-containers/firecracker.toml containerd-shim-kata-v2 "$@"
```
Name it as `/usr/local/bin/containerd-shim-katafc-v2` and reference it in the configuration of containerd:
```toml
[plugins.cri.containerd.runtimes.kata-firecracker]
runtime_type = "io.containerd.katafc.v2"
```
#### Kata Containers as the runtime for untrusted workload
For cases without `RuntimeClass` support, we can use the legacy annotation method to support using Kata Containers
@@ -185,8 +218,28 @@ and then, run an untrusted workload with Kata Containers:
runtime_type = "io.containerd.kata.v2"
```
For the earlier versions of Kata Containers and containerd that do not support Runtime V2 (Shim API), you can use the following alternative configuration:
```toml
[plugins.cri.containerd]
# "plugins.cri.containerd.default_runtime" is the runtime to use in containerd.
[plugins.cri.containerd.default_runtime]
# runtime_type is the runtime type to use in containerd e.g. io.containerd.runtime.v1.linux
runtime_type = "io.containerd.runtime.v1.linux"
# "plugins.cri.containerd.untrusted_workload_runtime" is a runtime to run untrusted workloads on it.
[plugins.cri.containerd.untrusted_workload_runtime]
# runtime_type is the runtime type to use in containerd e.g. io.containerd.runtime.v1.linux
runtime_type = "io.containerd.runtime.v1.linux"
# runtime_engine is the name of the runtime engine used by containerd.
runtime_engine = "/usr/bin/kata-runtime"
```
You can find more information in the [containerd config documentation](https://github.com/containerd/cri/blob/master/docs/config.md)
#### Kata Containers as the default runtime
If you want to set Kata Containers as the only runtime in the deployment, you can simply configure as follows:
@@ -197,6 +250,15 @@ If you want to set Kata Containers as the only runtime in the deployment, you ca
runtime_type = "io.containerd.kata.v2"
```
Alternatively, for the earlier versions of Kata Containers and containerd that do not support Runtime V2 (Shim API), you can use the following alternative configuration:
```toml
[plugins.cri.containerd]
[plugins.cri.containerd.default_runtime]
runtime_type = "io.containerd.runtime.v1.linux"
runtime_engine = "/usr/bin/kata-runtime"
```
### Configuration for `cri-tools`
> **Note:** If you skipped the [Install `cri-tools`](#install-cri-tools) section, you can skip this section too.
@@ -250,55 +312,11 @@ To run a container with Kata Containers through the containerd command line, you
```bash
$ sudo ctr image pull docker.io/library/busybox:latest
$ sudo ctr run --cni --runtime io.containerd.run.kata.v2 -t --rm docker.io/library/busybox:latest hello sh
$ sudo ctr run --runtime io.containerd.run.kata.v2 -t --rm docker.io/library/busybox:latest hello sh
```
This launches a BusyBox container named `hello`; it will be removed by `--rm` after it exits.
The `--cni` flag enables CNI networking for the container. Without this flag, a container with only a
loopback interface is created.
### Launch containers using `ctr` command line with rootfs bundle
#### Get rootfs
Use the following script to create a rootfs:
```bash
ctr i pull quay.io/prometheus/busybox:latest
ctr i export rootfs.tar quay.io/prometheus/busybox:latest
rootfs_tar=rootfs.tar
bundle_dir="./bundle"
mkdir -p "${bundle_dir}"
# extract busybox rootfs
rootfs_dir="${bundle_dir}/rootfs"
mkdir -p "${rootfs_dir}"
layers_dir="$(mktemp -d)"
tar -C "${layers_dir}" -pxf "${rootfs_tar}"
for ((i=0;i<$(cat ${layers_dir}/manifest.json | jq -r ".[].Layers | length");i++)); do
tar -C ${rootfs_dir} -xf ${layers_dir}/$(cat ${layers_dir}/manifest.json | jq -r ".[].Layers[${i}]")
done
```
#### Get `config.json`
Use `runc spec` to generate `config.json`:
```bash
cd ./bundle/rootfs
runc spec
mv config.json ../
```
Change the root `path` in `config.json` to the absolute path of the rootfs:
```JSON
"root":{
"path":"/root/test/bundle/rootfs",
"readonly": false
},
```
#### Run container
```bash
sudo ctr run -d --runtime io.containerd.run.kata.v2 --config bundle/config.json hello
sudo ctr t exec --exec-id ${ID} -t hello sh
```
### Launch Pods with `crictl` command line
With the `crictl` command line of `cri-tools`, you can specify runtime class with `-r` or `--runtime` flag.
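For example (a sketch, assuming a `kata` runtime class is configured in containerd and a pod config file exists):
```bash
# Run a pod sandbox with the kata runtime handler via crictl.
sudo crictl runp -r kata pod-config.yaml
```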

View File

@@ -1,45 +0,0 @@
# Copyright (c) 2021 IBM Corp.
#
# SPDX-License-Identifier: Apache-2.0
#
aa_kbc_params = "$AA_KBC_PARAMS"
https_proxy = "$HTTPS_PROXY"
[endpoints]
allowed = [
"AddARPNeighborsRequest",
"AddSwapRequest",
"CloseStdinRequest",
"CopyFileRequest",
"CreateContainerRequest",
"CreateSandboxRequest",
"DestroySandboxRequest",
#"ExecProcessRequest",
"GetMetricsRequest",
"GetOOMEventRequest",
"GuestDetailsRequest",
"ListInterfacesRequest",
"ListRoutesRequest",
"MemHotplugByProbeRequest",
"OnlineCPUMemRequest",
"PauseContainerRequest",
"PullImageRequest",
"ReadStreamRequest",
"RemoveContainerRequest",
#"ReseedRandomDevRequest",
"ResizeVolumeRequest",
"ResumeContainerRequest",
"SetGuestDateTimeRequest",
"SignalProcessRequest",
"StartContainerRequest",
"StartTracingRequest",
"StatsContainerRequest",
"StopTracingRequest",
"TtyWinResizeRequest",
"UpdateContainerRequest",
"UpdateInterfaceRequest",
"UpdateRoutesRequest",
"VolumeStatsRequest",
"WaitProcessRequest",
"WriteStreamRequest"
]

View File

@@ -45,9 +45,6 @@ spec:
- name: containerdsocket
mountPath: /run/containerd/containerd.sock
readOnly: true
- name: sbs
mountPath: /run/vc/sbs/
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: containerdtask
@@ -56,6 +53,3 @@ spec:
- name: containerdsocket
hostPath:
path: /run/containerd/containerd.sock
- name: sbs
hostPath:
path: /run/vc/sbs/

View File

@@ -1,479 +0,0 @@
# How to build, run and test Kata CCv0
## Introduction and Background
In order to make building (locally) and demoing the Kata Containers `CCv0` code base as simple as possible, I've
shared a script [`ccv0.sh`](./ccv0.sh). This script was originally my attempt to automate the steps of the
[Developer Guide](https://github.com/kata-containers/kata-containers/blob/main/docs/Developer-Guide.md) so that I could run
different sections of it repeatedly and reliably while making changes to different parts of the
Kata code base. I then tried to weave in some of the [`tests/.ci`](https://github.com/kata-containers/tests/tree/main/.ci)
scripts in order to have less duplicated code.
As we progress on the confidential containers journey, I hope to add more features to demonstrate the functionality
we have working.
*Disclaimer: This script has mostly just been used and tested by me ([@stevenhorsman](https://github.com/stevenhorsman)),*
*so there might be issues with it. I'm happy to try and help solve these if possible, but this shouldn't be considered a*
*fully supported process by the Kata Containers community.*
### Basic script set-up and optional environment variables
In order to build, configure and demo the CCv0 functionality, these are the set-up steps I take:
- Provision a new VM
- *I chose an Ubuntu 20.04 8GB VM for this as I had one available. There are some dependencies on apt-get-installed*
*packages, so these will need re-working to be compatible with other platforms.*
- Copy the script over to your VM *(I put it in the home directory)* and ensure it has execute permission by running
```bash
$ chmod u+x ccv0.sh
```
- Optionally set up some environment variables
- By default the script checks out the `CCv0` branches of the `kata-containers/kata-containers` and
`kata-containers/tests` repositories, but it is designed to allow testing of personal forks and branches as well.
If you want to build and run these you can export the `katacontainers_repo`, `katacontainers_branch`, `tests_repo`
and `tests_branch` variables e.g.
```bash
$ export katacontainers_repo=github.com/stevenhorsman/kata-containers
$ export katacontainers_branch=stevenh/agent-pull-image-endpoint
$ export tests_repo=github.com/stevenhorsman/tests
$ export tests_branch=stevenh/add-ccv0-changes-to-build
```
before running the script.
- By default the build and configuration are using `QEMU` as the hypervisor. In order to use `Cloud Hypervisor` instead
set:
```
$ export KATA_HYPERVISOR="cloud-hypervisor"
```
before running the build.
- At this point you can provision a Kata confidential containers pod and container with either
[`crictl`](#using-crictl-for-end-to-end-provisioning-of-a-kata-confidential-containers-pod-with-an-unencrypted-image),
or [Kubernetes](#using-kubernetes-for-end-to-end-provisioning-of-a-kata-confidential-containers-pod-with-an-unencrypted-image)
and then test and use it.
### Using crictl for end-to-end provisioning of a Kata confidential containers pod with an unencrypted image
- Run the full build process with Kubernetes turned off, so its configuration doesn't interfere with `crictl` using:
```bash
$ export KUBERNETES="no"
$ export KATA_HYPERVISOR="qemu"
$ ~/ccv0.sh -d build_and_install_all
```
> **Note**: Much of this script has to be run as `sudo`, so you are likely to get prompted for your password.
- *I run this script sourced just so that the required installed components are accessible on the `PATH` to the rest*
*of the process without having to reload the session.*
- The steps that `build_and_install_all` takes are:
- Check out the git repos for the `tests` and `kata-containers` repos as specified by the environment variables
(defaulting to the `CCv0` branches if they are not supplied)
- Use the `tests/.ci` scripts to install the build dependencies
- Build and install the Kata runtime
- Configure Kata to use containerd, with the debug and confidential containers features enabled (including
console access to the Kata guest shell, which should only be done in development)
- Create, build and install a rootfs for the Kata hypervisor to use. For 'CCv0' this is currently based on Ubuntu
20.04 and has extra packages like `umoci` added.
- Build the Kata guest kernel
- Install the hypervisor (the `KATA_HYPERVISOR` environment variable selects between `qemu` and `cloud-hypervisor`)
> **Note**: Depending on where your VMs are hosted and how IPs are shared, you might get an error from docker
matching `ERROR: toomanyrequests: Too Many Requests`. To get past
this, log in to Docker Hub and pull the images used with:
> ```bash
> $ sudo docker login
> $ sudo docker pull ubuntu
> ```
> then re-run the command.
- The first time this runs it may take a while, but subsequent runs will be quicker as more components are already
installed; the runs can be cut down further by not running all the above steps
([see "Additional script usage" below](#additional-script-usage))
- Create a new Kata sandbox pod using `crictl` with:
```bash
$ ~/ccv0.sh crictl_create_cc_pod
```
- This creates a pod configuration file, creates the pod from it using
`sudo crictl runp -r kata ~/pod-config.yaml`, and runs `sudo crictl pods` to show the pod
- Create a new Kata confidential container with:
```bash
$ ~/ccv0.sh crictl_create_cc_container
```
- This creates a container (based on `busybox:1.33.1`) in the Kata cc sandbox and prints a list of containers.
The container will have been created from an image pulled in the Kata pod sandbox/guest, not on the host machine.
At this point you should have a `crictl` pod and container that is using the Kata confidential containers runtime.
You can [validate that the container image was pulled on the guest](#validate-that-the-container-image-was-pulled-on-the-guest)
or [using the Kata pod sandbox for testing with `agent-ctl` or `ctr shim`](#using-a-kata-pod-sandbox-for-testing-with-agent-ctl-or-ctr-shim)
#### Clean up the `crictl` pod sandbox and container
- When the testing is complete you can delete the container and pod by running:
```bash
$ ~/ccv0.sh crictl_delete_cc
```
### Using Kubernetes for end-to-end provisioning of a Kata confidential containers pod with an unencrypted image
- Run the full build process with the Kubernetes environment variable set to `"yes"`, so that the Kubernetes cluster is
configured and created using the VM as a single-node cluster:
```bash
$ export KUBERNETES="yes"
$ ~/ccv0.sh build_and_install_all
```
> **Note**: Depending on where your VMs are hosted and how IPs are shared, you might get an error from docker
matching `ERROR: toomanyrequests: Too Many Requests`. To get past
this, log in to Docker Hub and pull the images used with:
> ```bash
> $ sudo docker login
> $ sudo docker pull registry:2
> $ sudo docker pull ubuntu:20.04
> ```
> then re-run the command.
- Check that your Kubernetes cluster has been correctly set up by running:
```bash
$ kubectl get nodes
```
and checking that you see a single node e.g.
```text
NAME STATUS ROLES AGE VERSION
stevenh-ccv0-k8s1.fyre.ibm.com Ready control-plane,master 43s v1.22.0
```
- Create a Kata confidential containers pod by running:
```bash
$ ~/ccv0.sh kubernetes_create_cc_pod
```
- Wait a few seconds for the pod to start, then check that the pod's status is `Running` with
```bash
$ kubectl get pods
```
which should show something like:
```text
NAME READY STATUS RESTARTS AGE
busybox-cc 1/1 Running 0 54s
```
- At this point you should have a Kubernetes pod and container running that is using the Kata
confidential containers runtime.
You can [validate that the container image was pulled on the guest](#validate-that-the-container-image-was-pulled-on-the-guest)
or [using the Kata pod sandbox for testing with `agent-ctl` or `ctr shim`](#using-a-kata-pod-sandbox-for-testing-with-agent-ctl-or-ctr-shim)
#### Clean up the Kubernetes pod sandbox and container
- When the testing is complete you can delete the container and pod by running:
```bash
$ ~/ccv0.sh kubernetes_delete_cc_pod
```
### Validate that the container image was pulled on the guest
There are a couple of ways to check that the container image pull was offloaded to the guest: checking the guest's
file system for the unpacked bundle, and checking the host's directories to ensure the image wasn't also pulled
there.
- To check the guest's file system:
- Open a shell into the Kata guest with:
```bash
$ ~/ccv0.sh open_kata_shell
```
- List the files in the directory that the container image bundle should have been unpacked to with:
```bash
$ ls -ltr /run/kata-containers/confidential-containers_signed/
```
- This should give something like
```
total 72
-rw-r--r-- 1 root root 2977 Jan 20 10:03 config.json
-rw-r--r-- 1 root root 372 Jan 20 10:03 umoci.json
-rw-r--r-- 1 root root 63584 Jan 20 10:03 sha256_be9faa75035c20288cde7d2cdeb6cd1f5f4dbcd845d3f86f7feab61c4eff9eb5.mtree
drwxr-xr-x 12 root root 240 Jan 20 10:03 rootfs
```
which shows that the image has been pulled and unpacked into an OCI bundle on the guest.
- Leave the Kata guest shell by running:
```bash
$ exit
```
- To verify that the image wasn't pulled on the host system, we can look at the shared sandbox on the host, where we
should see only a single bundle, for the pause container, as the `busybox`-based container image should have been
pulled on the guest:
- Find all the `rootfs` directories under the pod's shared directory with:
```bash
$ pod_id=$(ps -ef | grep containerd-shim-kata-v2 | egrep -o "id [^,][^,].* " | awk '{print $2}')
$ sudo find /run/kata-containers/shared/sandboxes/${pod_id}/shared -name rootfs
```
which should only show a single `rootfs` directory if the container image was pulled on the guest, not the host
- Looking at that `rootfs` directory with
```bash
$ sudo ls -ltr $(sudo find /run/kata-containers/shared/sandboxes/${pod_id}/shared -name rootfs)
```
shows something similar to
```
total 668
-rwxr-xr-x 1 root root 682696 Aug 25 13:58 pause
drwxr-xr-x 2 root root 6 Jan 20 02:01 proc
drwxr-xr-x 2 root root 6 Jan 20 02:01 dev
drwxr-xr-x 2 root root 6 Jan 20 02:01 sys
drwxr-xr-x 2 root root 25 Jan 20 02:01 etc
```
which is clearly the pause container, indicating that the `busybox`-based container image is not exposed to the host.
### Using a Kata pod sandbox for testing with `agent-ctl` or `ctr shim`
Once you have a Kata pod sandbox created as described above, either using
[`crictl`](#using-crictl-for-end-to-end-provisioning-of-a-kata-confidential-containers-pod-with-an-unencrypted-image) or [Kubernetes](#using-kubernetes-for-end-to-end-provisioning-of-a-kata-confidential-containers-pod-with-an-unencrypted-image),
you can use it to test specific components of the Kata confidential
containers architecture. This can be useful for development and debugging, to isolate and test features
that aren't broadly supported end-to-end. Here are some examples:
- Run the pull-image-on-guest command against the Kata agent, via the shim (`containerd-shim-kata-v2`).
This can be achieved using the [containerd](https://github.com/containerd/containerd) CLI tool, `ctr`, which can
interact with the shim directly. The command takes the form
`ctr --namespace k8s.io shim --id <sandbox-id> pull-image <image> <new-container-id>` and can be run directly, or through
the `ccv0.sh` script, which automatically fills in the variables:
- Optionally, set up some environment variables to set the image and credentials used:
- By default the shim pull image test in `ccv0.sh` will use the `busybox:1.33.1` based test image
`quay.io/kata-containers/confidential-containers:signed` which requires no authentication. To use a different
image, set the `PULL_IMAGE` environment variable e.g.
```bash
$ export PULL_IMAGE="docker.io/library/busybox:latest"
```
Currently the containerd shim pull image
code doesn't support container registries that require authentication; if this is required, see the
steps below to run the pull image command against the agent directly.
- Run the pull image agent endpoint with:
```bash
$ ~/ccv0.sh shim_pull_image
```
which prints the `ctr shim` command it runs, for reference
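For reference, a hypothetical fully-expanded invocation, reusing the sandbox ID lookup shown elsewhere in this
guide; the trailing container ID is an arbitrary placeholder:
```bash
# The sandbox ID lookup mirrors the one used in the host validation steps above.
$ pod_id=$(ps -ef | grep containerd-shim-kata-v2 | egrep -o "id [^,][^,].* " | awk '{print $2}')
$ sudo ctr --namespace k8s.io shim --id "${pod_id}" \
    pull-image quay.io/kata-containers/confidential-containers:signed 01234556789
```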
- Alternatively you can issue the command directly to the `kata-agent` pull image endpoint, which also supports
credentials in order to pull from an authenticated registry:
- Optionally set up some environment variables to set the image and credentials used:
- Set the `PULL_IMAGE` environment variable e.g. `export PULL_IMAGE="docker.io/library/busybox:latest"`
if a specific container image is required.
- If the container registry for the image requires authentication then this can be set with an environment
variable `SOURCE_CREDS`. For example to use Docker Hub (`docker.io`) as an authenticated user first run
`export SOURCE_CREDS="<dockerhub username>:<dockerhub api key>"`
> **Note**: The credentials support on the agent request is a tactical solution for the short-term
> proof of concept, to allow more images to be pulled and tested. Once we have support for getting
> keys into the Kata guest image using the attestation-agent and/or KBS, we expect container registry
> credentials to be looked up using that mechanism.
- Run the pull image agent endpoint with
```bash
$ ~/ccv0.sh agent_pull_image
```
and you should see output which includes `Command PullImage (1 of 1) returned (Ok(()), false)` to indicate
that the `PullImage` request was successful e.g.
```
Finished release [optimized] target(s) in 0.21s
{"msg":"announce","level":"INFO","ts":"2021-09-15T08:40:14.189360410-07:00","subsystem":"rpc","name":"kata-agent-ctl","pid":"830920","version":"0.1.0","source":"kata-agent-ctl","config":"Config { server_address: \"vsock://1970354082:1024\", bundle_dir: \"/tmp/bundle\", timeout_nano: 0, interactive: false, ignore_errors: false }"}
{"msg":"client setup complete","level":"INFO","ts":"2021-09-15T08:40:14.193639057-07:00","pid":"830920","source":"kata-agent-ctl","name":"kata-agent-ctl","subsystem":"rpc","version":"0.1.0","server-address":"vsock://1970354082:1024"}
{"msg":"Run command PullImage (1 of 1)","level":"INFO","ts":"2021-09-15T08:40:14.196643765-07:00","pid":"830920","source":"kata-agent-ctl","subsystem":"rpc","name":"kata-agent-ctl","version":"0.1.0"}
{"msg":"response received","level":"INFO","ts":"2021-09-15T08:40:43.828200633-07:00","source":"kata-agent-ctl","name":"kata-agent-ctl","subsystem":"rpc","version":"0.1.0","pid":"830920","response":""}
{"msg":"Command PullImage (1 of 1) returned (Ok(()), false)","level":"INFO","ts":"2021-09-15T08:40:43.828261708-07:00","subsystem":"rpc","pid":"830920","source":"kata-agent-ctl","version":"0.1.0","name":"kata-agent-ctl"}
```
> **Note**: The first time that `~/ccv0.sh agent_pull_image` is run, the `agent-ctl` tool will be built,
> which may take a few minutes.
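Putting these pieces together, a hypothetical authenticated pull via the agent endpoint might look like the
following (substitute your own credentials):
```bash
# Both variables are optional; without SOURCE_CREDS the default test image is used.
$ export PULL_IMAGE="docker.io/library/busybox:latest"
$ export SOURCE_CREDS="<dockerhub username>:<dockerhub api key>"
$ ~/ccv0.sh agent_pull_image
```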
- To validate that the image pull was successful, you can open a shell into the Kata guest with:
```bash
$ ~/ccv0.sh open_kata_shell
```
- Check the `/run/kata-containers/` directory to verify that the container image bundle has been created in a directory
named either after the container id (e.g. `01234556789`) or after the container image name, e.g.
```bash
$ ls -ltr /run/kata-containers/confidential-containers_signed/
```
which should show something like
```
total 72
drwxr-xr-x 10 root root 200 Jan 1 1970 rootfs
-rw-r--r-- 1 root root 2977 Jan 20 16:45 config.json
-rw-r--r-- 1 root root 372 Jan 20 16:45 umoci.json
-rw-r--r-- 1 root root 63584 Jan 20 16:45 sha256_be9faa75035c20288cde7d2cdeb6cd1f5f4dbcd845d3f86f7feab61c4eff9eb5.mtree
```
- Leave the Kata shell by running:
```bash
$ exit
```
## Verifying signed images
For this sample demo, we use local attestation to pass through the required
configuration for container image signature verification. Because of this, the ability to verify images is limited
to a pre-created selection of test images in our test
repository [`quay.io/kata-containers/confidential-containers`](https://quay.io/repository/kata-containers/confidential-containers?tab=tags).
For images not in this test repository (pulled from what is called an *unprotected* registry below), we fall back to
the behaviour of not enforcing signatures. More documentation on how to customise this to match your own containers
through local or remote attestation will be available in the future.
In our test repository there are three tagged images:
| Test Image | Base Image used | Signature status | GPG key status |
| --- | --- | --- | --- |
| `quay.io/kata-containers/confidential-containers:signed` | `busybox:1.33.1` | [signature](https://github.com/kata-containers/tests/tree/CCv0/integration/confidential/fixtures/quay_verification/signatures.tar) embedded in kata rootfs | [public key](https://github.com/kata-containers/tests/tree/CCv0/integration/confidential/fixtures/quay_verification/public.gpg) embedded in kata rootfs |
| `quay.io/kata-containers/confidential-containers:unsigned` | `busybox:1.33.1` | not signed | not signed |
| `quay.io/kata-containers/confidential-containers:other_signed` | `nginx:1.21.3` | [signature](https://github.com/kata-containers/tests/tree/CCv0/integration/confidential/fixtures/quay_verification/signatures.tar) embedded in kata rootfs | GPG key not kept |
Using a standard unsigned `busybox` image that can be pulled from another, *unprotected*, `quay.io` repository, we can
test a few scenarios.
In this sample, with local attestation, we pass the public GPG key and signature files, along with the [`offline_fs_kbc`
configuration](https://github.com/confidential-containers/attestation-agent/blob/main/src/kbc_modules/offline_fs_kbc/README.md),
into the guest image. This configuration specifies that any container image from `quay.io/kata-containers`
must be signed with the embedded GPG key, and the agent configuration needs updating to enable this.
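A policy of this shape is commonly expressed in the containers-image "simple signing" policy format; below is a
minimal sketch of what such a policy might look like. Whether the script uses exactly this format, and the file and
key paths inside the guest, are assumptions:
```bash
# A sketch only: the key path and where the policy lives in the guest image
# are assumptions, not the exact files copy_signature_files_to_guest installs.
cat <<'EOF' > policy.json
{
    "default": [{"type": "insecureAcceptAnything"}],
    "transports": {
        "docker": {
            "quay.io/kata-containers": [
                {
                    "type": "signedBy",
                    "keyType": "GPGKeys",
                    "keyPath": "/run/image-security/public.gpg"
                }
            ]
        }
    }
}
EOF
```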
With this policy set, we can test image verification in different scenarios by attempting
to create containers from these images using `crictl`:
- If you don't already have the Kata Containers CC code built and configured for `crictl`, then follow the
[instructions above](#using-crictl-for-end-to-end-provisioning-of-a-kata-confidential-containers-pod-with-an-unencrypted-image)
up to the `~/ccv0.sh crictl_create_cc_pod` command.
- To enable this in the guest image, set up the required configuration, policy and signature files by running
`~/ccv0.sh copy_signature_files_to_guest`, then run `~/ccv0.sh crictl_create_cc_pod`, which will delete and recreate
your pod, adding in the new files.
- To test that the fallback behaviour works with an unsigned image from an *unprotected* registry, we can pull the
`busybox` image by running:
```bash
$ export CONTAINER_CONFIG_FILE=container-config_unsigned-unprotected.yaml
$ ~/ccv0.sh crictl_create_cc_container
```
- This finishes by showing the running container, e.g.
```text
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
98c70fefe997a quay.io/prometheus/busybox:latest Less than a second ago Running prometheus-busybox-signed 0 70119e0539238
```
- To test that an unsigned image from our *protected* test container registry is rejected, we can run:
```bash
$ export CONTAINER_CONFIG_FILE=container-config_unsigned-protected.yaml
$ ~/ccv0.sh crictl_create_cc_container
```
- This correctly results in an error message from `crictl`:
`PullImage from image service failed" err="rpc error: code = Internal desc = Security validate failed: Validate image failed: The signatures do not satisfied! Reject reason: [Match reference failed.]" image="quay.io/kata-containers/confidential-containers:unsigned"`
- To test that the signed image from our *protected* test container registry is accepted, we can run:
```bash
$ export CONTAINER_CONFIG_FILE=container-config.yaml
$ ~/ccv0.sh crictl_create_cc_container
```
- This finishes by showing a new `kata-cc-busybox-signed` running container, e.g.
```text
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
b4d85c2132ed9 quay.io/kata-containers/confidential-containers:signed Less than a second ago Running kata-cc-busybox-signed 0 70119e0539238
...
```
- Finally, to check that an image with a valid signature but an invalid GPG key (the key being the trusted piece of
information we ultimately want to protect with the attestation agent) is rejected, we can run:
```bash
$ export CONTAINER_CONFIG_FILE=container-config_signed-protected-other.yaml
$ ~/ccv0.sh crictl_create_cc_container
```
- Again this results in an error message from `crictl`:
`"PullImage from image service failed" err="rpc error: code = Internal desc = Security validate failed: Validate image failed: The signatures do not satisfied! Reject reason: [signature verify failed! There is no pubkey can verify the signature!]" image="quay.io/kata-containers/confidential-containers:other_signed"`
### Using Kubernetes to create a Kata confidential containers pod from the encrypted ssh demo sample image
The [ssh-demo](https://github.com/confidential-containers/documentation/tree/main/demos/ssh-demo) explains how to
demonstrate creating a Kata confidential containers pod from an encrypted image with the runtime created by the
[confidential-containers operator](https://github.com/confidential-containers/documentation/blob/main/demos/operator-demo).
To be fully confidential, this should be run on a Trusted Execution Environment, but it can be tested on generic
hardware as well.
If you wish to build the Kata confidential containers runtime to do this yourself, you can use the following
steps:
- Run the full build process with the `KUBERNETES` environment variable set to `"yes"`, so that a single-node
Kubernetes cluster is configured and created on the VM, and with `AA_KBC` set to `offline_fs_kbc`:
```bash
$ export KUBERNETES="yes"
$ export AA_KBC=offline_fs_kbc
$ ~/ccv0.sh build_and_install_all
```
- The `AA_KBC=offline_fs_kbc` mode will ensure that, when creating the rootfs of the Kata guest, the
[attestation-agent](https://github.com/confidential-containers/attestation-agent) will be added along with the
[sample offline KBC](https://github.com/confidential-containers/documentation/blob/main/demos/ssh-demo/aa-offline_fs_kbc-keys.json)
and an agent configuration file.
> **Note**: Depending on where your VMs are hosted and how IPs are shared, you might get an error from Docker
> matching `ERROR: toomanyrequests: Too Many Requests`. To get past
> this, log in to Docker Hub and pull the images used with:
> ```bash
> $ sudo docker login
> $ sudo docker pull registry:2
> $ sudo docker pull ubuntu:20.04
> ```
> then re-run the command.
- Check that your Kubernetes cluster has been correctly set up by running:
```bash
$ kubectl get nodes
```
and checking that you see a single node e.g.
```text
NAME STATUS ROLES AGE VERSION
stevenh-ccv0-k8s1.fyre.ibm.com Ready control-plane,master 43s v1.22.0
```
- Create a sample Kata confidential containers ssh pod by running:
```bash
$ ~/ccv0.sh kubernetes_create_ssh_demo_pod
```
- At this point you should have a Kubernetes pod running under the Kata confidential containers runtime that has pulled
the [sample image](https://hub.docker.com/r/katadocker/ccv0-ssh), which was encrypted with the key file we included
in the rootfs.
During pod deployment the image was pulled and then decrypted using the key file on the Kata guest, without the image
ever being available to the host.
- To validate that the container is working, you can connect to it via SSH by running:
```bash
$ ~/ccv0.sh connect_to_ssh_demo_pod
```
- During this connection the host key fingerprint is shown and should match:
`ED25519 key fingerprint is SHA256:wK7uOpqpYQczcgV00fGCh+X97sJL3f6G1Ku4rvlwtR0.`
- When you are finished connecting, run:
```bash
$ exit
```
- To delete the sample SSH demo pod run:
```bash
$ ~/ccv0.sh kubernetes_delete_ssh_demo_pod
```
## Additional script usage
As well as building all of `kata-containers` from scratch as described above, the script can be used to re-build
individual parts of it by passing different parameters. For example, after the first build you will often
not need to re-install the dependencies, the hypervisor or the guest kernel, but just test code changes made to the
runtime and agent. This can be done by running `~/ccv0.sh rebuild_and_install_kata`. (*Note: this does a hard checkout*
*from git, so if your changes are only made locally it is better to run the individual steps, e.g.*
`~/ccv0.sh build_kata_runtime && ~/ccv0.sh build_and_add_agent_to_rootfs && ~/ccv0.sh build_and_install_rootfs`.)
There are commands for many of the build, setup and test steps; the full list can be seen by running
`~/ccv0.sh help`:
```
$ ~/ccv0.sh help
Overview:
Build and test kata containers from source
Optionally set kata-containers and tests repo and branch as exported variables before running
e.g. export katacontainers_repo=github.com/stevenhorsman/kata-containers && export katacontainers_branch=kata-ci-from-fork && export tests_repo=github.com/stevenhorsman/tests && export tests_branch=kata-ci-from-fork && ~/ccv0.sh build_and_install_all
Usage:
ccv0.sh [options] <command>
Commands:
- help: Display this help
- all: Build and install everything, test kata with containerd and capture the logs
- build_and_install_all: Build and install everything
- initialize: Install dependencies and check out kata-containers source
- rebuild_and_install_kata: Rebuild the kata runtime and agent and build and install the image
- build_kata_runtime: Build and install the kata runtime
- configure: Configure Kata to use rootfs and enable debug
- create_rootfs: Create a local rootfs
- build_and_add_agent_to_rootfs:Builds the kata-agent and adds it to the rootfs
- build_and_install_rootfs: Builds and installs the rootfs image
- install_guest_kernel: Setup, build and install the guest kernel
- build_cloud_hypervisor Checkout, patch, build and install Cloud Hypervisor
- build_qemu: Checkout, patch, build and install QEMU
- init_kubernetes: initialize a Kubernetes cluster on this system
- crictl_create_cc_pod Use crictl to create a new kata cc pod
- crictl_create_cc_container Use crictl to create a new busybox container in the kata cc pod
- crictl_delete_cc Use crictl to delete the kata cc pod sandbox and container in it
- kubernetes_create_cc_pod: Create a Kata CC runtime busybox-based pod in Kubernetes
- kubernetes_delete_cc_pod: Delete the Kata CC runtime busybox-based pod in Kubernetes
- open_kata_shell: Open a shell into the kata runtime
- agent_pull_image: Run PullImage command against the agent with agent-ctl
- shim_pull_image: Run PullImage command against the shim with ctr
- agent_create_container: Run CreateContainer command against the agent with agent-ctl
- test: Test using kata with containerd
- test_capture_logs: Test using kata with containerd and capture the logs in the user's home directory
Options:
-d: Enable debug
-h: Display this help
```

@@ -1,44 +0,0 @@
# Generating a Kata Containers payload for the Confidential Containers Operator
The [Confidential Containers
Operator](https://github.com/confidential-containers/operator) consumes a Kata
Containers payload generated from the `CCv0` branch; this document explains how to build such a
payload.
## Requirements
* `make` installed on the machine
* Docker installed on the machine
* `sudo` access to the machine
## Process
* Clone [Kata Containers](https://github.com/kata-containers/kata-containers)
```sh
git clone --branch CCv0 https://github.com/kata-containers/kata-containers
```
* In case you've already cloned the repo, make sure to switch to the `CCv0` branch
```sh
git checkout CCv0
```
* Ensure your tree is clean and in sync with upstream `CCv0`
```sh
git clean -xfd
git reset --hard <upstream>/CCv0
```
* Make sure you're authenticated to `quay.io`
```sh
sudo docker login quay.io
```
* From the top repo directory, run:
```sh
sudo make cc-payload
```
* Make sure the image was uploaded to the [Confidential Containers
runtime-payload
registry](https://quay.io/repository/confidential-containers/runtime-payload?tab=tags)
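One hypothetical way to check this from the command line, assuming `skopeo` is installed:
```sh
# List the tags pushed to the runtime-payload repository
skopeo list-tags docker://quay.io/confidential-containers/runtime-payload
```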
## Notes
Make sure to run it on a machine that's not the one you're hacking on, prepare a
cup of tea, and get back to it an hour later (at least).

@@ -4,7 +4,7 @@
This document describes how to import Kata Containers logs into [Fluentd](https://www.fluentd.org/),
typically for importing into an
Elastic/Fluentd/Kibana([EFK](https://github.com/kubernetes-sigs/instrumentation-addons/tree/master/fluentd-elasticsearch#running-efk-stack-in-production))
Elastic/Fluentd/Kibana([EFK](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch#running-efk-stack-in-production))
or Elastic/Logstash/Kibana([ELK](https://www.elastic.co/elastic-stack)) stack.
The majority of this document focusses on CRI-O based (classic) Kata runtime. Much of that information
@@ -68,7 +68,7 @@ the Kata logs import to the EFK stack.
> stack they are able to utilise in order to modify and test as necessary.
Minikube by default
[configures](https://github.com/kubernetes/minikube/blob/master/deploy/iso/minikube-iso/board/minikube/x86_64/rootfs-overlay/etc/systemd/journald.conf)
[configures](https://github.com/kubernetes/minikube/blob/master/deploy/iso/minikube-iso/board/coreos/minikube/rootfs-overlay/etc/systemd/journald.conf)
the `systemd-journald` with the
[`Storage=volatile`](https://www.freedesktop.org/software/systemd/man/journald.conf.html) option,
which results in the journal being stored in `/run/log/journal`. Unfortunately, the Minikube EFK
@@ -163,7 +163,7 @@ sub-filter on, for instance, the `SYSLOG_IDENTIFIER` to differentiate the Kata c
on the `PRIORITY` to filter out critical issues etc.
Kata generates a significant amount of Kata specific information, which can be seen as
[`logfmt`](../../src/tools/log-parser/README.md#logfile-requirements).
[`logfmt`](https://github.com/kata-containers/tests/tree/main/cmd/log-parser#logfile-requirements).
data contained in the `MESSAGE` field. Imported as-is, there is no easy way to filter on that data
in Kibana:
@@ -257,14 +257,14 @@ go directly to a full Kata specific JSON format logfile test.
Kata runtime has the ability to generate JSON logs directly, rather than its default `logfmt` format. Passing
the `--log-format=json` argument to the Kata runtime enables this. The easiest way to pass in this extra
parameter from a [Kata deploy](../../tools/packaging/kata-deploy) installation
parameter from a [Kata deploy](https://github.com/kata-containers/kata-containers/tree/main/tools/packaging/kata-deploy) installation
is to edit the `/opt/kata/bin/kata-qemu` shell script.
At the same time, we will add the `--log=/var/log/kata-runtime.log` argument to store the Kata logs in their
own file (rather than into the system journal).
```bash
#!/usr/bin/env bash
#!/bin/bash
/opt/kata/bin/kata-runtime --config "/opt/kata/share/defaults/kata-containers/configuration-qemu.toml" --log-format=json --log=/var/log/kata-runtime.log $@
```

@@ -1,141 +0,0 @@
# How to run Docker in Docker with Kata Containers
This document describes the why and how behind running Docker in a Kata Container.
> **Note:** While in other environments this might be described as "Docker in Docker", the new architecture of Kata 2.x means [Docker can no longer be used to create containers using a Kata Containers runtime](https://github.com/kata-containers/kata-containers/issues/722).
## Requirements
- A working Kata Containers installation
## Install and configure Kata Containers
Follow the [Kata Containers installation guide](../install/README.md) to Install Kata Containers on your Kubernetes cluster.
## Background
Docker in Docker ("DinD") is the colloquial name for the ability to run `docker` from inside a container.
You can learn more about Docker-in-Docker at the following links:
- [The original announcement of DinD](https://www.docker.com/blog/docker-can-now-run-within-docker/)
- [`docker` image Docker Hub page](https://hub.docker.com/_/docker/) (this page lists the `-dind` releases)
While DinD normally refers to running `docker` from inside a Docker container,
Kata Containers 2.x works only with [supported container managers][kata-2.x-supported-runtimes] (such as [`containerd`](../install/container-manager/containerd/containerd-install.md)).
Running `docker` in a Kata Container therefore means creating Docker containers from inside a container managed by `containerd` (or another supported container manager), as illustrated below:
```
container manager -> Kata Containers shim -> Docker Daemon -> Docker container
(containerd) (containerd-shim-kata-v2) (dockerd) (busybox sh)
```
[OverlayFS][OverlayFS] is the preferred storage driver for most container runtimes on Linux ([including Docker](https://docs.docker.com/storage/storagedriver/select-storage-driver)).
> **Note:** While in the past Kata Containers did not contain the [`overlay` kernel module (aka OverlayFS)][OverlayFS], the kernel modules have been included since the [Kata Containers v2.0.0 release][v2.0.0].
[OverlayFS]: https://www.kernel.org/doc/html/latest/filesystems/overlayfs.html
[v2.0.0]: https://github.com/kata-containers/kata-containers/releases/tag/2.0.0
[kata-2.x-supported-runtimes]: ../install/container-manager/containerd/containerd-install.md
## Why Docker in Kata Containers 2.x requires special measures
Running Docker containers inside Kata Containers requires care because `VOLUME`s specified in `Dockerfile`s run by Kata Containers are given the `kataShared` mount type by default, which applies to the root directory `/`:
```console
/ # mount
kataShared on / type virtiofs (rw,relatime,dax)
```
`kataShared` mount types are powered by [`virtio-fs`](https://virtio-fs.gitlab.io/), a marked improvement over `virtio-9p`, thanks to [PR #1016](https://github.com/kata-containers/runtime/pull/1016). While `virtio-fs` is normally an excellent choice, for DinD workloads it causes an issue: [it *cannot* be used as an "upper layer" of `overlayfs` without a custom patch](http://lists.katacontainers.io/pipermail/kata-dev/2020-January/001216.html).
As `/var/lib/docker` is a `VOLUME` specified by DinD (i.e. the `docker` images tagged `*-dind`/`*-dind-rootless`), `docker` will fail to start (or worse, silently pick a slower storage driver like `vfs`) when started in a Kata Container. Special measures must be taken when running DinD-powered workloads in Kata Containers.
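A quick way to confirm you are hitting this situation from inside the DinD container (a suggestion, not part of the
original community write-ups):
```bash
# If /var/lib/docker sits on virtiofs, overlay2 cannot be used and dockerd
# may silently fall back to a slower driver such as vfs.
df -PT /var/lib/docker | awk 'NR==2 {print $2}'   # prints the filesystem type
docker info --format '{{.Driver}}'                # prints the storage driver in use
```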
## Workarounds/Solutions
Thanks to various community contributions (see [issue references below](#references)), the following options, with various trade-offs, have been uncovered:
### Use a memory backed volume
For small workloads (small container images, without much generated filesystem load), a memory-backed volume is sufficient. Kubernetes supports a variant of [the `EmptyDir` volume](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) that allows for memory-backed storage via the `medium: Memory` option. An example of a `Pod` using such a setup [was contributed](https://github.com/kata-containers/runtime/issues/1429#issuecomment-477385283), and is reproduced below:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: dind
spec:
runtimeClassName: kata
containers:
- name: dind
securityContext:
privileged: true
image: docker:20.10-dind
args: ["--storage-driver=overlay2"]
resources:
limits:
memory: "3G"
volumeMounts:
- mountPath: /var/run/
name: dockersock
- mountPath: /var/lib/docker
name: docker
volumes:
- name: dockersock
emptyDir: {}
- name: docker
emptyDir:
medium: Memory
```
Inside the container you can view the mount:
```console
/ # mount | grep lib\/docker
tmpfs on /var/lib/docker type tmpfs (rw,relatime)
```
As is mentioned in the comment accompanying this code, using volatile memory to back container storage is risky and can be wasteful on machines that do not have a lot of RAM.
### Use a loop mounted disk
Using a loop mounted disk that is provisioned shortly before starting the container workload is another approach that yields good performance.
Contributors provided [an example in issue #1888](https://github.com/kata-containers/runtime/issues/1888#issuecomment-739057384), which is reproduced in part below:
```yaml
spec:
containers:
- name: docker
image: docker:20.10-dind
command: ["sh", "-c"]
args:
- if [[ $(df -PT /var/lib/docker | awk 'NR==2 {print $2}') == virtiofs ]]; then
apk add e2fsprogs &&
truncate -s 20G /tmp/disk.img &&
mkfs.ext4 /tmp/disk.img &&
mount /tmp/disk.img /var/lib/docker; fi &&
dockerd-entrypoint.sh;
securityContext:
privileged: true
```
Note that loop mounted disks are often sparse, meaning they *do not* take up the full amount of space that has been provisioned. This solution seems to produce the best performance and flexibility, at the expense of increased complexity and additional required setup.
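To see the sparseness in practice (a suggestion, not from the original issue), compare the apparent size of the
backing file with the space it actually consumes:
```bash
ls -lsh /tmp/disk.img                 # first column shows the allocated size
du -h --apparent-size /tmp/disk.img   # provisioned (apparent) size, e.g. 20G
du -h /tmp/disk.img                   # actual disk usage, typically far smaller
```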
### Build a custom kernel
It's possible to [modify the kernel](https://github.com/kata-containers/runtime/issues/1888#issuecomment-616872558) (in addition to applying the earlier mentioned mailing list patch) to support using `virtio-fs` as an upper layer. Note that if you modify your kernel and use `virtio-fs` you may require [additional changes](https://github.com/kata-containers/runtime/issues/1888#issuecomment-739057384) for decent performance and to address other issues.
> **NOTE:** A future kernel release may rectify the usability and performance issues of using `virtio-fs` as an OverlayFS upper layer.
## References
The solutions proposed in this document are an amalgamation of thoughtful contributions from the Kata Containers community.
Find links to issues & related discussion and the fruits therein below:
- [How to run Docker in Docker with Kata Containers (#2474)](https://github.com/kata-containers/kata-containers/issues/2474)
- [Does Kata-container support AUFS/OverlayFS? (#2493)](https://github.com/kata-containers/runtime/issues/2493)
- [Unable to start docker in docker with virtio-fs (#1888)](https://github.com/kata-containers/runtime/issues/1888)
- [Not using native diff for overlay2 (#1429)](https://github.com/kata-containers/runtime/issues/1429)

@@ -1,159 +0,0 @@
# Kata Containers with AMD SEV-SNP VMs
## Disclaimer
This guide is designed for developers and, like the Developer Guide, is not intended for production systems or end users. It is advisable to only follow this guide on non-critical development systems.
## Prerequisites
To run Kata Containers in SNP-VMs, the following software stack is used.
![Kubernetes integration with shimv2](./images/SNP-stack.svg)
The host BIOS and kernel must be capable of supporting AMD SEV-SNP and configured accordingly. For Kata Containers, the host kernel with branch [`sev-snp-iommu-avic_5.19-rc6_v3`](https://github.com/AMDESE/linux/tree/sev-snp-iommu-avic_5.19-rc6_v3) and commit [`3a88547`](https://github.com/AMDESE/linux/commit/3a885471cf89156ea555341f3b737ad2a8d9d3d0) is known to work in conjunction with SEV Firmware version 1.51.3 (0xh\_1.33.03) available on AMD's [SEV developer website](https://developer.amd.com/sev/). See [AMD's guide](https://github.com/AMDESE/AMDSEV/tree/sev-snp-devel) to configure the host accordingly. Verify that you are able to run SEV-SNP encrypted VMs first. The guest components required for Kata Containers are built as described below.
**Tip**: It is easiest to first have Kata Containers running on your system and then modify it to run containers in SNP-VMs, by following the [Developer Guide](../Developer-Guide.md#warning) and then the steps below. Nonetheless, you can also just follow this guide from the start.
## How to build
Follow all of the below steps to install Kata Containers with SNP support from scratch. These steps mostly follow the Developer Guide, with modifications to support SNP.
__Steps from the Developer Guide:__
- Get all the [required components](../Developer-Guide.md#requirements-to-build-individual-components) for building the kata-runtime
- [Build and install the kata-runtime](../Developer-Guide.md#build-and-install-the-kata-containers-runtime)
- [Build a custom agent](../Developer-Guide.md#build-a-custom-kata-agent---optional)
- [Create an initrd image](../Developer-Guide.md#create-an-initrd-image---optional) by first building a rootfs, then building the initrd based on the rootfs using the custom agent, and installing it; `ubuntu` works as the distribution of choice
- Get the [required components](../../tools/packaging/kernel/README.md#requirements) to build a custom kernel
__SNP-specific steps:__
- Build the SNP-specific kernel as shown below (see this [guide](../../tools/packaging/kernel/README.md#build-kata-containers-kernel) for more information)
```bash
$ pushd kata-containers/tools/packaging/kernel/
$ ./build-kernel.sh -a x86_64 -x snp setup
$ ./build-kernel.sh -a x86_64 -x snp build
$ sudo -E PATH="${PATH}" ./build-kernel.sh -x snp install
$ popd
```
- Build a current OVMF capable of SEV-SNP:
```bash
$ pushd kata-containers/tools/packaging/static-build/ovmf
$ ./build.sh
$ tar -xvf edk2-x86_64.tar.gz
$ popd
```
- Build a custom QEMU
```bash
$ source kata-containers/tools/packaging/scripts/lib.sh
$ qemu_url="$(get_from_kata_deps "assets.hypervisor.qemu.snp.url")"
$ qemu_branch="$(get_from_kata_deps "assets.hypervisor.qemu.snp.branch")"
$ qemu_commit="$(get_from_kata_deps "assets.hypervisor.qemu.snp.commit")"
$ git clone -b "${qemu_branch}" "${qemu_url}"
$ pushd qemu
$ git checkout "${qemu_commit}"
$ ./configure --enable-virtfs --target-list=x86_64-softmmu --enable-debug
$ make -j "$(nproc)"
$ popd
```
### Kata Containers Configuration for SNP
The configuration file located at `/etc/kata-containers/configuration.toml` must be adapted as follows to support SNP-VMs:
- Use the SNP-specific kernel for the guest VM (change path)
```toml
kernel = "/usr/share/kata-containers/vmlinuz-snp.container"
```
- Enable the use of an initrd (uncomment)
```toml
initrd = "/usr/share/kata-containers/kata-containers-initrd.img"
```
- Disable the use of a rootfs (comment out)
```toml
# image = "/usr/share/kata-containers/kata-containers.img"
```
- Use the custom QEMU capable of SNP (change path)
```toml
path = "/path/to/qemu/build/qemu-system-x86_64"
```
- Use the `virtio-9p` device, since `virtio-fs` is unsupported due to bugs/shortcomings in QEMU version [`snp-v3`](https://github.com/AMDESE/qemu/tree/snp-v3) for SEV and SEV-SNP (change value)
```toml
shared_fs = "virtio-9p"
```
- Disable `virtiofsd` since it is no longer required (comment out)
```toml
# virtio_fs_daemon = "/usr/libexec/virtiofsd"
```
- Disable NVDIMM (uncomment)
```toml
disable_image_nvdimm = true
```
- Disable shared memory (uncomment)
```toml
file_mem_backend = ""
```
- Enable confidential guests (uncomment)
```toml
confidential_guest = true
```
- Enable SNP-VMs (uncomment)
```toml
sev_snp_guest = true
```
- Configure an OVMF (add path)
```toml
firmware = "/path/to/kata-containers/tools/packaging/static-build/ovmf/opt/kata/share/ovmf/OVMF.fd"
```
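After editing the configuration, one way to sanity-check that the runtime parses it (a suggestion; the `--config`
flag mirrors its use elsewhere in these docs):
```bash
# Dump the effective runtime environment to confirm the edited settings load
$ kata-runtime --config /etc/kata-containers/configuration.toml kata-env
```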
## Test Kata Containers with Containerd
With Kata Containers configured to support SNP-VMs, we use containerd to test and deploy containers in these VMs.
### Install Containerd
If not already present, follow [this guide](./containerd-kata.md#install) to install containerd and its related components, including `CNI` and the `cri-tools` (skip Kata Containers since we already installed it).
### Containerd Configuration
Follow [this guide](./containerd-kata.md#configuration) to configure containerd to use Kata Containers
## Run Kata Containers in SNP-VMs
Run the below commands to start a container. See [this guide](./containerd-kata.md#run) for more information
```bash
$ sudo ctr image pull docker.io/library/busybox:latest
$ sudo ctr run --cni --runtime io.containerd.run.kata.v2 -t --rm docker.io/library/busybox:latest hello sh
```
### Check for active SNP
Inside the running container, run the following commands to check if SNP is active. It should look something like this:
```
/ # dmesg | grep -i sev
[ 0.299242] Memory Encryption Features active: AMD SEV SEV-ES SEV-SNP
[ 0.472286] SEV: Using SNP CPUID table, 31 entries present.
[ 0.514574] SEV: SNP guest platform device initialized.
[ 0.885425] sev-guest sev-guest: Initialized SEV guest driver (using vmpck_id 0)
```
### Obtain an SNP Attestation Report
To obtain an attestation report inside the container, the `/dev/sev-guest` device must first be configured. As of now, the VM does not perform this step; however, it can be performed inside the container, either in the terminal or in code.
Example for shell:
```
/ # SNP_MAJOR=$(cat /sys/devices/virtual/misc/sev-guest/dev | awk -F: '{print $1}')
/ # SNP_MINOR=$(cat /sys/devices/virtual/misc/sev-guest/dev | awk -F: '{print $2}')
/ # mknod -m 600 /dev/sev-guest c "${SNP_MAJOR}" "${SNP_MINOR}"
```
## Known Issues
- Support for cgroups v2 is still [work in progress](https://github.com/kata-containers/kata-containers/issues/927). If issues occur due to cgroups v2 becoming the default in newer systems, one possible solution is to downgrade cgroups to v1:
```bash
sudo sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"/\1 systemd.unified_cgroup_hierarchy=0"/' /etc/default/grub
sudo update-grub
sudo reboot
```
- If both SEV and SEV-SNP are supported by the host, Kata Containers uses SEV-SNP by default. You can verify which features are enabled by checking `/sys/module/kvm_amd/parameters/sev` and `sev_snp`. This means that Kata Containers cannot run both SEV-SNP-VMs and SEV-VMs at the same time. If SEV is to be used by Kata Containers instead, reload the `kvm_amd` kernel module without SNP support; note that this disables SNP support for the entire platform.
```bash
sudo rmmod kvm_amd && sudo modprobe kvm_amd sev_snp=0
```
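For reference, the feature flags mentioned above can be read directly (the exact values reported, e.g. `Y`/`N` or
`1`/`0`, depend on the kernel version):
```bash
cat /sys/module/kvm_amd/parameters/sev      # SEV enabled?
cat /sys/module/kvm_amd/parameters/sev_snp  # SNP enabled?
```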

@@ -19,7 +19,7 @@ Also you should ensure that `kubectl` working correctly.
> **Note**: More information about Kubernetes integrations:
> - [Run Kata Containers with Kubernetes](run-kata-with-k8s.md)
> - [How to use Kata Containers and Containerd](containerd-kata.md)
> - [How to use Kata Containers and containerd with Kubernetes](how-to-use-k8s-with-containerd-and-kata.md)
> - [How to use Kata Containers and CRI (containerd plugin) with Kubernetes](how-to-use-k8s-with-cri-containerd-and-kata.md)
## Configure Prometheus
