Compare commits


83 Commits

Author SHA1 Message Date
Eric Ernst
9b969bb7da packaging: fix image build script
Relative paths are error-prone; fix the error.

Signed-off-by: Eric Ernst <eric.g.ernst@gmail.com>
2020-10-06 17:57:28 -07:00
Eric Ernst
fb2f3cfce2 release: Kata Containers 2.0.0-rc1
ae6ccbe8 rust-agent: Update README
3faef791 docs: drop docker installation guide
f3466b87 docs: fix static check errors in docs/install/README.md
89ec614d docs: update architecture.md
1ed73179 qemu: upgrade qemu version to 5.1.0 for arm64.
cb79dddf agent: Fix OCI Windows network shared container name typo
c50aee9d github: Remove issue template and use central one
2a4c3e6a docs: fix broken links
9e2a314e Packaging: release notes script uses wrong kernel path URLs
aed20f43 rust-agent: Replaces improper use of match for non-constant patterns
868d0248 devices: fix go test warning in manager_test.go
14164392 action: Allow long lines if non-alphabetic
2ece152c agent: remove unreachable code
033925f9 agent: Change do_exec return type to ! because it will never return
c90fff82 agent: propagate the internal detail errors to users
c0ea9102 packaging: Stop providing OBS packages
ca54edef install: Add contacts to the distribution packages
b5ece037 install: Update information about Community Packages
378e429d install: Update SUSE information
567f8587 install: Update openSUSE information
18f32d13 install: Update RHEL information
8280523c install: Update Fedora information
578db2fc install: Update CentOS information
781d6eca ci: fix clone_tests_repo function
c18c5e2c agent: Set LIBC=gnu for ppc64le arch by default
a378ba53 fc: integrate Firecracker's metrics
9991f4b5 static-build/qemu-virtiofs: Refactor apply virtiofs patches
4a0fd6c2 packaging/qemu: Add common code to apply patches
37acc030 static-build/qemu-virtiofs: Fix to apply QEMU patches
6c275c92 runtime: fix TestNewConsole UT failure
0479a4cb travis: skip static checker for ppc64
b3e52844 runtime: fix golint errors
d36d3486 agent: fix cargo fmt
e1094d7f ci: always checkout 2.0-dev of test repository
c8ba30f9 docs: fix static check errors
eaa5c433 runtime: fix make check
07caa2f2 gitignore: ignore agent service file
f34e2e66 agent: fix UT failures due to chdir
442e5906 agent: Only allow proc mount if it is procfs
f2850668 rustjail: make the mount error info much more clear
73414554 runtime: add enable_debug_console configuration item for agent
0b62f5a9 runtime: add debug console service
c23a401e runtime: Call s.newStore.Destroy if globalSandboxList.addSandbox
80879197 shimv2: add a comment in checkAndMount()
b6066cbc osbuilder: specify default toolchain version in rust-init.
1290d007 runtime: Update cloud-hypervisor client pkg to version v0.10.0
afeece42 agent/oci: Don't use deprecated Error::description() method
a4075f0f runtime: Fix linter errors in release files
01df3c1d packaging: Build from source if the clh release binary is missing
bacd41bb runtime: add podman configuration to data collection script
d9746f31 ci: use export command to export envs instead of env config item
ca2a1176 ci: use Travis cache to reduce build time
67af593a agent: update cgroups crate
cabc60f3 docs: Update the reference path of kata-deploy in the packaging
a5859197 runtime: make kata-check check for newer release
08d194b8 how-to: add privileged_without_host_devices to containerd guide
89ade8f3 travis: enable RUST_BACKTRACE
4b30001d agent/rustjail: add more unit tests
232c8213 agent/rustjail: remove makedev function
74bcd510 agent/rustjail: add unit tests for ms_move_rootfs and mask_path
a36f93c9 agent/rustjail: implement functions to chroot
fe0f2198 agent/rustjail: add unit test for pivot_rootfs
5770c2a2 agent/rustjail: implement functions to pivot_root
838b1794 agent/rustjail: add unit test for mount_cgroups
1a60c1de agent/rustjail: add unit test for init_rootfs
77ecfed2 agent/rustjail/mount: don't use unwrap
fa7079bc agent/rustjail: add tempfile crate as dependency
c23bac5c rustjail: implement functions to mount and umount files
e99f3e79 docs: Fix the kata-pkgsync tool's docs script path
d05a7cda docs: fix k8s containerd howto links
f6877fa4 docs: fix up developer guide for 2.0
6d326f21 gitignore: ignore agent version.rs
407cb9a3 agent: fix agent panic running as init
38eb1df4 packaging: use local version file for kata 2.0 in Makefile
313dfee3 docs: fix release process doc
0c4e7b21 packaging: fix release notes

Signed-off-by: Eric Ernst <eric_ernst@apple.com>
2020-10-06 17:54:13 -07:00
Eric Ernst
f32a741c76 actions: add kata deploy test
Pull over kata-deploy-test from the 1.x packaging repository. This is
intended to be used for testing any changes to the kata-deploy
scripting, and does not exercise any new source code changes.

Signed-off-by: Eric Ernst <eric.g.ernst@gmail.com>
2020-10-06 17:54:13 -07:00
Eric Ernst
512e79f61a packaging: cleaning, updating based on new filepaths
Update scripts to take into account some files being moved, and some
general cleanup.

Fixes: #866

Signed-off-by: Eric Ernst <eric.g.ernst@gmail.com>
2020-10-06 17:54:13 -07:00
Eric Ernst
aa70080423 packaging: remove obs-packaging
No longer required -- let's remove them.

Signed-off-by: Eric Ernst <eric.g.ernst@gmail.com>
2020-10-06 17:54:13 -07:00
Eric Ernst
34015bae12 packaging: pull versions, build-image out from obs dir
These are still required; let's pull them out.

Signed-off-by: Eric Ernst <eric.g.ernst@gmail.com>
2020-10-06 17:54:13 -07:00
Eric Ernst
93b60a8327 packaging: Revert "packaging: Stop providing OBS packages"
This reverts commit c0ea910273.

Two scripts are still required for release and testing, which should
have never been under obs-packaging dir in the first place.  Let's
revert, move the scripts / update references to it, and then we can
remove the remaining obs-packaging/ tooling.

Signed-off-by: Eric Ernst <eric.g.ernst@gmail.com>
2020-10-06 17:54:13 -07:00
Yang Bo
aa9951f2cd rust-agent: Update README
The rust agent has not used grpc as a submodule for a while; update the
README to reflect the change.

Fixes: #196
Signed-off-by: Yang Bo <bo@hyper.sh>
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2020-10-06 17:54:13 -07:00
Peng Tao
9d8c72998b docs: drop docker installation guide
We have removed cli support, which means docker support is dropped for
now. Also, it doesn't make sense to have so many duplicated instructions
for each distribution when we can simply refer to the official docker
guide on how to install docker.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2020-10-06 17:54:13 -07:00
Peng Tao
033ed13202 docs: fix static check errors in docs/install/README.md
It was merged while the static checker was disabled.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2020-10-06 17:54:13 -07:00
Peng Tao
c058d04b94 docs: update architecture.md
To match the current architecture of Kata Containers 2.0.

Fixes: #831
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2020-10-06 17:54:13 -07:00
Edmond AK Dantes
9d2bb0c452 qemu: upgrade qemu version to 5.1.0 for arm64.
The qemu version used on arm is quite old, and some new features have
merged into current qemu, so it's time to upgrade. As obs-packaging has
been removed, the qemu patches are placed under qemu/patch/5.1.x.
As vxfs has been deprecated in qemu 5.1, it will no longer exist in
configure-hypervisor.sh when the qemu version is larger than 5.0.

Fixes: #816
Signed-off-by: Edmond AK Dantes <edmond.dantes.ak47@outlook.com>
2020-10-06 17:54:13 -07:00
James O. D. Hunt
627d062fb2 agent: Fix OCI Windows network shared container name typo
Correct the typo which would break the Windows-specific OCI network
shared container name feature.

See:

- https://github.com/opencontainers/runtime-spec/blob/master/config-windows.md#network

Fixes: #685.

Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
2020-10-06 17:54:13 -07:00
James O. D. Hunt
96afe62576 github: Remove issue template and use central one
Remove the GitHub issue template from this repository. We already have a
central set of templates [1] that are being used so the template in this
repository is redundant.

[1] - https://github.com/kata-containers/.github/tree/master/.github/ISSUE_TEMPLATE/

Fixes: #728.

Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
2020-10-06 17:54:13 -07:00
Julio Montes
d946016eb7 docs: fix broken links
Some sections and files were removed in a previous commit,
remove all reference to such sections and files to fix the
check-markdown test.

fixes #826

Signed-off-by: Julio Montes <julio.montes@intel.com>
2020-10-06 17:54:13 -07:00
Ychau Wang
37f1a77a6a Packaging: release notes script uses wrong kernel path URLs
The 2.0 packaging runtime-release-notes.sh script uses 1.x packaging
kernel URLs. Fix these URLs to point at the 2.0 branch packaging URLs.

Fixes: #829

Signed-off-by: Ychau Wang <wangyongchao.bj@inspur.com>
2020-10-06 17:54:13 -07:00
Christophe de Dinechin
450a81cc54 rust-agent: Replaces improper use of match for non-constant patterns
The code used `match` as a switch with variable patterns `ev_fd` and
`cf_fd`, but the way Rust interprets the code is that the first
pattern matches all values. The code does not perform as expected.

This addresses the following warning:

   warning: unreachable pattern
      --> rustjail/src/cgroups/notifier.rs:114:21
       |
   107 |                     ev_fd => {
       |                     ----- matches any value
   ...
   114 |                     cg_fd => {
       |                     ^^^^^ unreachable pattern
       |
       = note: `#[warn(unreachable_patterns)]` on by default

Fixes: #750
Fixes: #793

Signed-off-by: Christophe de Dinechin <dinechin@redhat.com>
2020-10-06 17:54:13 -07:00
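The warning quoted above stems from a general Rust pitfall: a bare identifier in a `match` arm is a *fresh binding* that matches anything, not a comparison against an existing variable. A minimal sketch of the bug and the straightforward fix (the `classify` helper and its fd arguments are hypothetical, not the agent's actual code):

```rust
// Classify a file descriptor against two known fds.
fn classify(fd: i32, ev_fd: i32, cg_fd: i32) -> &'static str {
    // Wrong version: `ev_fd` below would be a *new binding*, not a
    // comparison, so the first arm matches every value and the second
    // arm is unreachable (exactly the compiler warning in the commit):
    //
    // match fd {
    //     ev_fd => "event",  // matches any value
    //     cg_fd => "cgroup", // unreachable pattern
    //     _ => "other",
    // }
    //
    // Correct version: compare explicitly (match guards also work).
    if fd == ev_fd {
        "event"
    } else if fd == cg_fd {
        "cgroup"
    } else {
        "other"
    }
}

fn main() {
    assert_eq!(classify(3, 3, 4), "event");
    assert_eq!(classify(4, 3, 4), "cgroup");
    assert_eq!(classify(9, 3, 4), "other");
}
```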
zhanghj
c09f02e6f6 devices: fix go test warning in manager_test.go
Create the "class" and "config" files in the temporary device BDF dir,
and remove the dir created by ioutil.TempDir() when the test finishes.

fixes: #746

Signed-off-by: zhanghj <zhanghj.lc@inspur.com>
2020-10-06 17:54:13 -07:00
James O. D. Hunt
58c7469110 action: Allow long lines if non-alphabetic
Overly long commit lines are annoying. But sometimes,
we need to be able to force the use of long lines
(for example to reference a URL).

Ironically, I can't refer to the URL that explains this
because of ... the long line check! Hence:

```sh
$ cat <<EOT | tr -d '\n'; echo
See: https://github.com/kata-containers/tests/tree/master/
cmd/checkcommits#handling-long-lines
EOT
```

Maximum body length updated to 150 bytes for parity with:

https://github.com/kata-containers/tests/pull/2848

Fixes: #687.

Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
2020-10-06 17:54:13 -07:00
Tim Zhang
c36ea0968d agent: remove unreachable code
The code at the end of init_child is unreachable and needs to be removed.
The code after do_exec is likewise unreachable.

Signed-off-by: Tim Zhang <tim@hyper.sh>
2020-10-06 17:54:13 -07:00
Tim Zhang
ba197302e2 agent: Change do_exec return type to ! because it will never return
The `!` type tells the compiler the function never returns, so any code
after a call to it is flagged as unreachable.

Fixes #819

Signed-off-by: Tim Zhang <tim@hyper.sh>
2020-10-06 17:54:13 -07:00
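The never type `!` can be sketched as follows; `die` and `run` are hypothetical stand-ins for `do_exec` and its caller, with a panic in place of the real exec/exit so the example stays runnable:

```rust
// A diverging function: its return type `!` promises it never returns.
// (The real do_exec would end in execvp()/exit(); we panic instead.)
fn die(msg: &str) -> ! {
    panic!("{}", msg)
}

// Because `!` coerces to any type, a diverging arm type-checks inside a
// match whose other arms return String.
fn run(cmd: Option<&str>) -> String {
    match cmd {
        Some(c) => format!("exec {}", c),
        None => die("no command"), // never returns; arm still type-checks
    }
}

fn main() {
    assert_eq!(run(Some("/bin/sh")), "exec /bin/sh");
    // Observe the divergence without killing the process.
    let r = std::panic::catch_unwind(|| run(None));
    assert!(r.is_err());
}
```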
fupan.lfp
725ad067c1 agent: propagate the internal detail errors to users
We should propagate the detailed errors to users when
the rpc call fails.

Fixes: #824

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-10-06 17:54:13 -07:00
Fabiano Fidêncio
9858c23c59 packaging: Stop providing OBS packages
The community has discussed and decided in favour of promoting
kata-deploy as the way of distributing and using kata on distros that
don't officially maintain the project.

Fixes: #623
Fixes: https://github.com/kata-containers/packaging/issues/1120

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2020-10-06 17:54:13 -07:00
Fabiano Fidêncio
fc8f1ff03c install: Add contacts to the distribution packages
Let's add a new column to the Official packages table, and let the
maintainers of the official distro packages jump in and add their
names there.

This will help us ping the right people and redirect to them possible
issues reported against the official packages.

Fixes: #623

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2020-10-06 17:54:13 -07:00
Fabiano Fidêncio
f7b4f76082 install: Update information about Community Packages
Kata Containers will stop distributing the community packages in favour
of kata-deploy.

Fixes: #623

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2020-10-06 17:54:13 -07:00
Fabiano Fidêncio
4fd66fa689 install: Update SUSE information
Following up a conversation with Ralf Haferkamp, we can safely drop the
instructions for using Kata Containers on SLES 12 SP3 in favour of using
the official builds provided for SLE 15 SP1, and SLE 15 SP2.

Fixes: #623

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2020-10-06 17:54:13 -07:00
Fabiano Fidêncio
e6ff42b8ad install: Update openSUSE information
Let's update the openSUSE Installation Guide to reflect the current
information on how to install kata packages provided by the distro
itself.

The official packages are present on Leap 15.2 and Tumbleweed, and can
be just installed. Leap 15.1 is slightly different, as the .repo file
has to be added before the packages can be installed.

Leap 15.0 has been removed as it already reached its EOL.

Fixes: #623

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2020-10-06 17:54:13 -07:00
Fabiano Fidêncio
6710d87c6a install: Update RHEL information
Although the community packages are present for RHEL, everything about
them is extremely unsupported on the Red Hat side.

Knowing this, we'd do better to simply not mention those and, if users
really want to try kata-containers on RHEL, they can simply follow the
CentOS installation guide.

In the future, if the Fedora packages make their way to RHEL, we can add
the information here. However, rather than recommending something
unsupported, we'd do better to recommend kata-deploy instead.

Fixes: #623

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2020-10-06 17:54:13 -07:00
Fabiano Fidêncio
178b79f122 install: Update Fedora information
Let's update the Fedora Installation Guide to reflect the current
information on how to install kata packages provided by the distro
itself.

These are official packages and we, as Fedora members, recommend using
kata-containers on Fedora 32 and onwards, as from this version
everything works out of the box. Also, Fedora 31 will reach its EOL as
soon as Fedora 33 is out, which should happen in October.

Fixes: #623

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2020-10-06 17:54:13 -07:00
Fabiano Fidêncio
bc545c6549 install: Update CentOS information
Let's update the CentOS Installation Guide to reflect the current
information on how to install kata packages provided by the
Virtualization Special Interest Group.

These are not official CentOS packages, as those are not coming from Red
Hat Enterprise Linux. These are the same packages we have on Fedora and
we have decided to keep them up-to-date and synced on both Fedora and
CentOS, so people can give Kata Containers a try on CentOS as well.

The nature of these packages makes me think that those are "as official
as they can be", so that's the reason I've decided to add the
instructions to the "official" table.

Together with the change in the Installation Guide, let's also update
the README and reflect the fact we **strongly recommend** using CentOS
8, with the packages provided by the Virtualization Special Interest
Group, instead of using the CentOS 7 with packages built on OBS.

Fixes: #623

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2020-10-06 17:54:13 -07:00
Salvador Fuentes
585481990a ci: fix clone_tests_repo function
We should not check out the 2.0-dev branch in the clone_tests_repo
function when running in the Jenkins CI, as doing so discards changes
from the tests repo.

Fixes: #818.

Signed-off-by: Salvador Fuentes <salvador.fuentes@intel.com>
2020-10-06 17:54:13 -07:00
Pradipta Kr. Banerjee
0057f86cfa agent: Set LIBC=gnu for ppc64le arch by default
Fixes: #812

Signed-off-by: Pradipta Kr. Banerjee <pradipta.banerjee@gmail.com>
2020-10-06 17:54:13 -07:00
bin liu
fa0401793f fc: integrate Firecracker's metrics
Firecracker exposes metrics through a fifo file
using a JSON format. This PR parses
Firecracker's metrics and converts them to Prometheus metrics.

Fixes: #472

Signed-off-by: bin liu <bin@hyper.sh>
2020-10-06 17:54:13 -07:00
Wainer dos Santos Moschetta
60b7265961 static-build/qemu-virtiofs: Refactor apply virtiofs patches
In static-build/qemu-virtiofs/Dockerfile the code which
applies the virtiofs-specific patches is spread across several
RUN instructions. Refactor this code so that it runs in a
single RUN instruction and produces a single overlay image.

Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
2020-10-06 17:54:13 -07:00
Wainer dos Santos Moschetta
57b53dbae8 packaging/qemu: Add common code to apply patches
The qemu and qemu-virtiofs Dockerfiles repeat the code to apply
patches based on the QEMU stable branch being built. Instead, this adds
a common script (qemu/apply_patches.sh) and has the respective
Dockerfiles call it.

Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
2020-10-06 17:54:13 -07:00
Wainer dos Santos Moschetta
ddf1a545d1 static-build/qemu-virtiofs: Fix to apply QEMU patches
Fix a bug in the qemu-virtiofs Dockerfile which ended up not applying
the QEMU patches.

Fixes #786

Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
2020-10-06 17:54:13 -07:00
Peng Tao
cbdf6400ae runtime: fix TestNewConsole UT failure
It needs root.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2020-10-06 17:54:13 -07:00
Peng Tao
ceeecf9c66 travis: skip static checker for ppc64
As we have already run it on x64.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2020-10-06 17:54:13 -07:00
Peng Tao
7c53baea8a runtime: fix golint errors
Need to run gofmt -s on them.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2020-10-06 17:54:13 -07:00
Peng Tao
b549d354bf agent: fix cargo fmt
Otherwise travis fails.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2020-10-06 17:54:13 -07:00
Peng Tao
9f3113e1f6 ci: always checkout 2.0-dev of test repository
We use 2.0-dev in the tests repository now. Always make sure
we use the right branch.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2020-10-06 17:54:13 -07:00
Peng Tao
ef94742320 docs: fix static check errors
Somehow we have not been running static checks for a long time,
which ended up leaving a lot of errors.

* Ensure debug options are valid is dropped
* fix snap links
* drop extra CONTRIBUTING.md
* reference kata-pkgsync
* move CODEOWNERS to proper place
* remove extra CODE_OF_CONDUCT.md.
* fix spell checker error on Developer-Guide.md

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2020-10-06 17:54:13 -07:00
Peng Tao
d71764985d runtime: fix make check
Need to use the correct script path.

Fixes: #802
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2020-10-06 17:54:13 -07:00
Peng Tao
0fc04a269d gitignore: ignore agent service file
As it is auto-generated.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2020-10-06 17:54:13 -07:00
Peng Tao
8d7ac5f01c agent: fix UT failures due to chdir
Current working directory is a process level resource. We cannot call
chdir in parallel from multiple threads, which would cause cwd confusion
and result in UT failures.

The agent code itself is correct that chdir is only called from spawned
child init process. There is one exception: it is also called
in do_create_container(), but it is safe to assume that containers are
never created in parallel (at least for now).

Fixes: #782
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2020-10-06 17:54:13 -07:00
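A common way to make such tests safe (an illustrative sketch of the problem, not the agent's actual fix) is to serialize every chdir behind a process-wide lock and restore the previous directory afterwards, since the working directory is shared by all threads:

```rust
use std::env;
use std::io;
use std::path::{Path, PathBuf};
use std::sync::Mutex;

// cwd is a process-level resource: any thread that chdirs changes it
// for every other thread. Tests that chdir must therefore serialize.
static CWD_LOCK: Mutex<()> = Mutex::new(());

// Run `f` with the working directory temporarily set to `dir`,
// restoring the old directory before returning.
fn with_dir<T>(dir: &Path, f: impl FnOnce() -> T) -> io::Result<T> {
    let _guard = CWD_LOCK.lock().unwrap();
    let old: PathBuf = env::current_dir()?;
    env::set_current_dir(dir)?;
    let out = f();
    env::set_current_dir(old)?;
    Ok(out)
}

fn main() -> io::Result<()> {
    let before = env::current_dir()?;
    let seen = with_dir(Path::new("/"), || env::current_dir().unwrap())?;
    assert_eq!(seen, Path::new("/"));
    // The original directory is restored afterwards.
    assert_eq!(env::current_dir()?, before);
    Ok(())
}
```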
fupan.lfp
612acbe319 agent: Only allow proc mount if it is procfs
Only allow a whitelist of files to be bind mounted under /proc,
and prevent other malicious mounts onto procfs.

Fixes: #807

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-10-06 17:54:13 -07:00
fupan.lfp
f3a487cd41 rustjail: make the mount error info much more clear
Make the error info for an invalid mount destination much
clearer.

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-10-06 17:54:13 -07:00
bin liu
3a559521d1 runtime: add enable_debug_console configuration item for agent
Set enable_debug_console=true in Kata's configuration file, and the
runtime will pass `agent.debug_console`
and `agent.debug_console_vport=1026` to the agent.

Fixes: #245

Signed-off-by: bin liu <bin@hyper.sh>
2020-10-06 17:54:13 -07:00
bin liu
567daf5a42 runtime: add debug console service
Add `kata-runtime exec` to enter the guest OS
through a shell started by the agent.

Fixes: #245

Signed-off-by: bin liu <bin@hyper.sh>
2020-10-06 17:54:13 -07:00
Shukui Yang
c7d913f436 runtime: Call s.newStore.Destroy if globalSandboxList.addSandbox
Fixes: #696

Signed-off-by: Shukui Yang <keloyangsk@gmail.com>
2020-10-06 17:54:13 -07:00
Qian Cai
7bd410c725 shimv2: add a comment in checkAndMount()
In checkAndMount(), it is not clear why, when IsBlockDevice() passes and
DisableBlockDeviceUse == false, we only return "false, nil" instead
of "false, err". Add a comment to make it a bit more readable.

Fixes: #732
Signed-off-by: Qian Cai <cai@redhat.com>
2020-10-06 17:54:13 -07:00
zhanghj
7fbc789855 osbuilder: specify default toolchain version in rust-init.
Specify default toolchain version in rust-init.

Fixes: #799

Signed-off-by: zhanghj <zhanghj.lc@inspur.com>
2020-10-06 17:54:13 -07:00
Bo Chen
7fc41a771a runtime: Update cloud-hypervisor client pkg to version v0.10.0
The latest release of cloud-hypervisor v0.10.0 contains the following
updates: 1) `virtio-block` Support for Multiple Descriptors; 2) Memory
Zones; 3) `Seccomp` Sandbox Improvements; 4) Preliminary KVM HyperV
Emulation Control; 5) various bug fixes and refactoring.

Note that this patch updates the client code of clh's HTTP API in kata,
while the 'versions.yaml' file was updated in an earlier PR.

Fixes: #789

Signed-off-by: Bo Chen <chen.bo@intel.com>
2020-10-06 17:54:13 -07:00
David Gibson
a31d82fec2 agent/oci: Don't use deprecated Error::description() method
We shouldn't use it, and we don't need to implement it.

fixes #791

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2020-10-06 17:54:13 -07:00
James O. D. Hunt
9ef4c80340 runtime: Fix linter errors in release files
Fix the linter errors caught in the `runtime` repo's `master` branch [1],
but not in the `2.0-dev` branch [2]. See [3] for further details.

[1] - https://github.com/kata-containers/runtime/pull/2976
[2] - https://github.com/kata-containers/kata-containers/pull/735
[3] - https://github.com/kata-containers/tests/issues/2870

Fixes: #783.

Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
2020-10-06 17:54:13 -07:00
Bo Chen
6a4e413758 packaging: Build from source if the clh release binary is missing
This patch adds a fall-back code path that builds the cloud-hypervisor
static binary from source when downloading the cloud-hypervisor binary
fails. This is useful when we experience network issues, and also
for upgrading clh to a non-released version.

Together with the changes in the tests repo
(https://github.com/kata-containers/tests/pull/2862), the Jenkins config
file is also updated with new Execute shell script for the clh CI in the
kata-containers repo. Those two changes fix the regression on clh CI
here. Please check details in the issue below.

Fixes: #781
Fixes: https://github.com/kata-containers/tests/issues/2858

Signed-off-by: Bo Chen <chen.bo@intel.com>
2020-10-06 17:54:13 -07:00
Francesco Giudici
678d4d189d runtime: add podman configuration to data collection script
Be more verbose about podman configuration in the output of the data
collection script: get the system configuration as seen by podman and
dump the configuration files when present.

Fixes: #243
Signed-off-by: Francesco Giudici <fgiudici@redhat.com>
2020-10-06 17:54:13 -07:00
bin liu
718f718764 ci: use export command to export envs instead of env config item
The `env` config item is used as a Matrix Expansion key, so envs set
there spawn individual build jobs; use the export command instead.

Signed-off-by: bin liu <bin@hyper.sh>
2020-10-06 17:54:13 -07:00
bin liu
d860ded3f0 ci: use Travis cache to reduce build time
This PR includes these changes:
- use Rust installed by Travis
- install x86_64-unknown-linux-musl
- install rustfmt
- use Travis cache
- delete ci/install_vc.sh

Fixes: #748

Signed-off-by: bin liu <bin@hyper.sh>
2020-10-06 17:54:13 -07:00
fupan.lfp
a141da8a20 agent: update cgroups crate
Update the cgroups crate to fix the build issue
on AArch64.

Fixes: #770

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2020-10-06 17:54:13 -07:00
Ychau Wang
aaaaee7a4b docs: Update the reference path of kata-deploy in the packaging
Use the relative path of kata-deploy to replace the 1.x packaging URL in
the kata-deploy/README.md file. This fixes the path issue produced by
creating the new branch.

Fixes: #777

Signed-off-by: Ychau Wang <wangyongchao.bj@inspur.com>
2020-10-06 17:54:13 -07:00
James O. D. Hunt
21efaf1fca runtime: make kata-check check for newer release
Update `kata-check` to see if there is a newer version available for
download. Useful for users installing static packages (without a package
manager).

Fixes: #734.

Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
2020-10-06 17:54:13 -07:00
Peng Tao
2056623e13 how-to: add privileged_without_host_devices to containerd guide
It should be set by default for Kata containers working with containerd.

Fixes: #775
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2020-10-06 17:54:13 -07:00
Julio Montes
34126ee704 travis: enable RUST_BACKTRACE
RUST_BACKTRACE=1 will help us a lot to debug unit tests when
a test is failing

Signed-off-by: Julio Montes <julio.montes@intel.com>
2020-10-06 17:54:13 -07:00
Julio Montes
980a338454 agent/rustjail: add more unit tests
Add unit tests for finish_root, read_only_path and mknod_dev
increasing code coverage of mount.rs

fixes #284

Signed-off-by: Julio Montes <julio.montes@intel.com>
2020-10-06 17:54:13 -07:00
Julio Montes
e14f766895 agent/rustjail: remove makedev function
remove `makedev` function, use `nix`'s implementation instead

Signed-off-by: Julio Montes <julio.montes@intel.com>
2020-10-06 17:54:13 -07:00
Julio Montes
2e0731f479 agent/rustjail: add unit tests for ms_move_rootfs and mask_path
Increase code coverage of mount.rs

Signed-off-by: Julio Montes <julio.montes@intel.com>
2020-10-06 17:54:13 -07:00
Julio Montes
addf62087c agent/rustjail: implement functions to chroot
Use conditional compilation (#[cfg]) to change chroot behaviour
at compilation time. For example, the function will simply return
`Ok(())` when the unit tests are being compiled; otherwise the real
chroot operation is performed.

Signed-off-by: Julio Montes <julio.montes@intel.com>
2020-10-06 17:54:13 -07:00
Julio Montes
c24b68dc4f agent/rustjail: add unit test for pivot_rootfs
Add unit test for pivot_rootfs increasing the code coverage of
mount.rs

Signed-off-by: Julio Montes <julio.montes@intel.com>
2020-10-06 17:54:13 -07:00
Julio Montes
24677d7484 agent/rustjail: implement functions to pivot_root
Use conditional compilation (#[cfg]) to change pivot_root behaviour
at compilation time. For example, the function will simply return
`Ok(())` when the unit tests are being compiled; otherwise the real
pivot_root operation is performed.

Signed-off-by: Julio Montes <julio.montes@intel.com>
2020-10-06 17:54:13 -07:00
Julio Montes
9e74c28158 agent/rustjail: add unit test for mount_cgroups
Add a unit test for `mount_cgroups` increasing the code coverage
of mount.rs from 44% to 52%

Signed-off-by: Julio Montes <julio.montes@intel.com>
2020-10-06 17:54:13 -07:00
Julio Montes
b7aae33cc1 agent/rustjail: add unit test for init_rootfs
Add a unit test for `init_rootfs` increasing the code coverage
of mount.rs from 0% to 44%.

Signed-off-by: Julio Montes <julio.montes@intel.com>
2020-10-06 17:54:13 -07:00
Julio Montes
6d9d58278e agent/rustjail/mount: don't use unwrap
Don't use unwrap in `init_rootfs`; return an Error instead, so
we can write unit tests that don't panic.

Signed-off-by: Julio Montes <julio.montes@intel.com>
2020-10-06 17:54:13 -07:00
Julio Montes
1bc6fbda8c agent/rustjail: add tempfile crate as dependency
Add the tempfile crate as a dependency; it will be used in the following
commits to create temporary directories for unit testing.

Signed-off-by: Julio Montes <julio.montes@intel.com>
2020-10-06 17:54:13 -07:00
Julio Montes
d39f5a85e6 rustjail: implement functions to mount and umount files
Use conditional compilation (#[cfg]) to change mount and umount
behaviour at compilation time. For example, the functions will simply
return `Ok(())` when the unit tests are being compiled; otherwise the
real mount and umount operations are performed.

Signed-off-by: Julio Montes <julio.montes@intel.com>
2020-10-06 17:54:13 -07:00
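The #[cfg] pattern these rustjail commits describe can be sketched like this; `enter_dir` is a hypothetical wrapper, with a harmless chdir standing in for the privileged chroot/pivot_root/mount syscalls the real code performs:

```rust
use std::io;

// Normal builds get the real implementation. (Here a harmless
// set_current_dir stands in for the privileged syscall; the actual
// rustjail code wraps chroot/pivot_root/mount, which need root.)
#[cfg(not(test))]
fn enter_dir(path: &str) -> io::Result<()> {
    std::env::set_current_dir(path)
}

// Under `cargo test`, cfg(test) is set and this no-op is compiled
// instead, so unit tests exercising callers never need root and
// simply see Ok(()).
#[cfg(test)]
fn enter_dir(_path: &str) -> io::Result<()> {
    Ok(())
}

fn main() {
    // In a normal build this performs the "real" operation.
    assert!(enter_dir("/").is_ok());
}
```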
Ychau Wang
d90a0eefbe docs: Fix the kata-pkgsync tool's docs script path
Fix the kata-pkgsync tool's docs: change the download path of the
packaging tool for the 2.0 release.

Fixes: #773

Signed-off-by: Ychau Wang <wangyongchao.bj@inspur.com>
2020-10-06 17:54:13 -07:00
Peng Tao
2618c014a0 docs: fix k8s containerd howto links
It should point to the internal versions.yaml file.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2020-10-06 17:54:13 -07:00
Peng Tao
5c4878f37e docs: fix up developer guide for 2.0
1. Until we restore docker/moby support, we should use crictl as
the developer example.
2. Most of the hyperlinks should point to kata-containers repository.
3. There is no more standalone mode.

Fixes: #767
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2020-10-06 17:54:13 -07:00
Peng Tao
bd6b169e98 gitignore: ignore agent version.rs
It is auto-generated.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2020-10-06 17:54:13 -07:00
Peng Tao
5770336572 agent: fix agent panic running as init
We should mount procfs before trying to parse the kernel command line.

Fixes: #771
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2020-10-06 17:54:13 -07:00
zhanghj
45daec7b37 packaging: use local version file for kata 2.0 in Makefile
Use local version file instead of downloading from upstream repo.

Fixes: #756

Signed-off-by: zhanghj <zhanghj.lc@inspur.com>
2020-10-06 17:54:13 -07:00
Peng Tao
ed5a7dc022 docs: fix release process doc
We no longer build OBS packages. And we use
kata-containers/tools/packaging/release to do release.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2020-10-06 17:54:13 -07:00
Peng Tao
6fc7c77721 packaging: fix release notes
Should mention the 2.0 branch docs.

Fixes: #763
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2020-10-06 17:54:13 -07:00
192 changed files with 4931 additions and 7473 deletions


@@ -1,17 +0,0 @@
# Description of problem
(replace this text with the list of steps you followed)
# Expected result
(replace this text with an explanation of what you thought would happen)
# Actual result
(replace this text with details of what actually happened)
---
(replace this text with the output of the `kata-collect-data.sh` script, after
you have reviewed its content to ensure it does not contain any private
information).


@@ -48,7 +48,25 @@ jobs:
uses: tim-actions/commit-message-checker-with-regex@v0.3.1
with:
commits: ${{ steps.get-pr-commits.outputs.commits }}
pattern: '^.+(\n.{0,72})*$|^.+\n\s*[^a-zA-Z\s\n]|^.+\n\S+$'
# Notes:
#
# - The subject line is not enforced here (see other check), but has
# to be specified at the start of the regex as the action is passed
# the entire commit message.
#
# - Body lines *can* be longer than the maximum if they start
# with a non-alphabetic character.
#
# This allows stack traces, log files snippets, emails, long URLs,
# etc to be specified. Some of these naturally "work" as they start
# with numeric timestamps or addresses. Emails can but quoted using
# the normal ">" character, markdown bullets ("-", "*") are also
# useful for lists of URLs, but it is always possible to override
# the check by simply space indenting the content you need to add.
#
# - A SoB comment can be any length (as it is unreasonable to penalise
# people with long names/email addresses :)
pattern: '^.+(\n([a-zA-Z].{0,149}|[^a-zA-Z\n].*|Signed-off-by:.*|))+$'
error: 'Body line too long (max 72)'
post_error: ${{ env.error_msg }}
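The body-line rule above can be sanity-checked locally before pushing. The sketch below uses GNU `grep -Pz` so the whole commit message is matched as a single buffer; the JavaScript regex engine the action uses may differ in edge cases:

```shell
#!/bin/bash
# Pattern from the workflow: after the subject, each body line must stay
# within the length encoded in the pattern (150 characters as written) if
# it starts with a letter, or start with a non-alphabetic character
# (logs, URLs, "> " quotes, indentation), or be a Signed-off-by trailer,
# or be empty.
pattern='^.+(\n([a-zA-Z].{0,149}|[^a-zA-Z\n].*|Signed-off-by:.*|))+$'

check() {
    printf '%s' "$1" | grep -qPz "$pattern" && echo PASS || echo FAIL
}

ok=$'subject\n\nA short body line.\nSigned-off-by: Jane Doe <jane@example.com>'
bad="subject"$'\n'"$(printf 'a%.0s' {1..151})"   # 151-char alphabetic body line

check "$ok"    # PASS
check "$bad"   # FAIL
```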

View File

@@ -18,9 +18,9 @@ main() {
fi
tag=$(echo $GITHUB_REF | cut -d/ -f3-)
pushd $GITHUB_WORKSPACE/tools/packaging/obs-packaging
pushd $GITHUB_WORKSPACE/tools/packaging
git checkout $tag
./gen_versions_txt.sh $tag
./scripts/gen_versions_txt.sh $tag
popd
pushd $GITHUB_WORKSPACE/tools/packaging/release

.github/workflows/kata-deploy-test.yaml

@@ -0,0 +1,53 @@
on: issue_comment
name: test-kata-deploy
jobs:
check_comments:
runs-on: ubuntu-latest
steps:
- name: Check for Command
id: command
uses: kata-containers/slash-command-action@v1
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
command: "test-kata-deploy"
reaction: "true"
reaction-type: "eyes"
allow-edits: "false"
permission-level: admin
- name: verify command arg is kata-deploy
run: |
echo "The command was '${{ steps.command.outputs.command-name }}' with arguments '${{ steps.command.outputs.command-arguments }}'"
create-and-test-container:
needs: check_comments
runs-on: ubuntu-latest
steps:
- name: get-PR-ref
id: get-PR-ref
run: |
ref=$(cat $GITHUB_EVENT_PATH | jq -r '.issue.pull_request.url' | sed 's#^.*\/pulls#refs\/pull#' | sed 's#$#\/merge#')
echo "reference for PR: " ${ref}
echo "##[set-output name=pr-ref;]${ref}"
- uses: actions/checkout@v2-beta
with:
ref: ${{ steps.get-PR-ref.outputs.pr-ref }}
- name: build-container-image
id: build-container-image
run: |
PR_SHA=$(git log --format=format:%H -n1)
VERSION=$(curl https://raw.githubusercontent.com/kata-containers/kata-containers/2.0-dev/VERSION)
ARTIFACT_URL="https://github.com/kata-containers/kata-containers/releases/download/${VERSION}/kata-static-${VERSION}-x86_64.tar.xz"
wget "${ARTIFACT_URL}" -O ./kata-deploy/kata-static.tar.xz
docker build --build-arg KATA_ARTIFACTS=kata-static.tar.xz -t katadocker/kata-deploy-ci:${PR_SHA} ./kata-deploy
docker login -u ${{ secrets.DOCKER_USERNAME }} -p ${{ secrets.DOCKER_PASSWORD }}
docker push katadocker/kata-deploy-ci:$PR_SHA
echo "##[set-output name=pr-sha;]${PR_SHA}"
- name: test-kata-deploy-ci-in-aks
uses: ./kata-deploy/action
with:
packaging-sha: ${{ steps.build-container-image.outputs.pr-sha }}
env:
PKG_SHA: ${{ steps.build-container-image.outputs.pr-sha }}
AZ_APPID: ${{ secrets.AZ_APPID }}
AZ_PASSWORD: ${{ secrets.AZ_PASSWORD }}
AZ_SUBSCRIPTION_ID: ${{ secrets.AZ_SUBSCRIPTION_ID }}
AZ_TENANT_ID: ${{ secrets.AZ_TENANT_ID }}
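The `get-PR-ref` step above rewrites the pull request API URL from the `issue_comment` payload into a checkout ref. The `sed` pipeline can be exercised in isolation (the PR number below is made up for illustration):

```shell
#!/bin/bash
# A pull_request URL as it appears in an issue_comment event payload;
# PR number 123 is a hypothetical example.
url="https://api.github.com/repos/kata-containers/kata-containers/pulls/123"

# Same pipeline as the workflow step: replace everything up to "/pulls"
# with "refs/pull", then append "/merge".
ref=$(echo "$url" | sed 's#^.*\/pulls#refs\/pull#' | sed 's#$#\/merge#')

echo "$ref"   # refs/pull/123/merge
```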

.gitignore

@@ -3,3 +3,5 @@
**/*.rej
**/target
**/.vscode
src/agent/src/version.rs
src/agent/kata-agent.service


@@ -5,28 +5,43 @@
dist: bionic
os: linux
language: go
go: 1.14.4
env: target_branch=$TRAVIS_BRANCH
# set cache directories manually, because
# we are using a non-standard directory structure:
# the cargo root is in src/agent
#
# If needed, caches can be cleared
# by ways documented in
# https://docs.travis-ci.com/user/caching#clearing-caches
language: rust
rust:
- 1.44.1
cache:
cargo: true
directories:
- src/agent/target
before_install:
- git remote set-branches --add origin "${TRAVIS_BRANCH}"
- git fetch
- export RUST_BACKTRACE=1
- export target_branch=$TRAVIS_BRANCH
- "ci/setup.sh"
# we use install to run check agent
# so that it is easy to skip for non-amd64 platform
install:
- "ci/install_rust.sh"
- export PATH=$PATH:"$HOME/.cargo/bin"
- export RUST_AGENT=yes
- rustup target add x86_64-unknown-linux-musl
- sudo ln -sf /usr/bin/g++ /bin/musl-g++
- rustup component add rustfmt
- make -C ${TRAVIS_BUILD_DIR}/src/agent
- make -C ${TRAVIS_BUILD_DIR}/src/agent check
- sudo -E PATH=$PATH make -C ${TRAVIS_BUILD_DIR}/src/agent check
before_script:
- "ci/install_go.sh"
- "ci/install_vc.sh"
- make -C ${TRAVIS_BUILD_DIR}/src/runtime
- make -C ${TRAVIS_BUILD_DIR}/src/runtime test
- sudo -E PATH=$PATH GOPATH=$GOPATH make -C ${TRAVIS_BUILD_DIR}/src/runtime test
@@ -41,6 +56,7 @@ jobs:
- name: ppc64le test
os: linux-ppc64le
install: skip
script: skip
allow_failures:
- name: ppc64le test
fast_finish: true


@@ -1,4 +1,4 @@
# Copyright 2019 Intel Corporation.
# Copyright (c) 2019 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -10,4 +10,3 @@
# used. See https://help.github.com/articles/about-code-owners/
*.md @kata-containers/documentation


@@ -1 +1 @@
2.0.0-rc0
2.0.0-rc1

ci/go-no-os-exit.sh

@@ -0,0 +1,30 @@
#!/bin/bash
# Copyright (c) 2018 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
# Check there are no os.Exit() calls creeping into the code
# We don't use that exit path in the Kata codebase.
# Allow the path to check to be over-ridden.
# Default to the current directory.
go_packages=${1:-.}
echo "Checking for no os.Exit() calls for package [${go_packages}]"
candidates=`go list -f '{{.Dir}}/*.go' $go_packages`
for f in $candidates; do
filename=`basename $f`
# skip all go test files
[[ $filename == *_test.go ]] && continue
# skip exit.go where, the only file we should call os.Exit() from.
[[ $filename == "exit.go" ]] && continue
files="$f $files"
done
[ -z "$files" ] && echo "No files to check, skipping" && exit 0
if egrep -n '\<os\.Exit\>' $files; then
echo "Direct calls to os.Exit() are forbidden, please use exit() so atexit() works"
exit 1
fi
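The check can be demonstrated against a throwaway file that violates the rule (the temporary directory and file names are illustrative only):

```shell
#!/bin/bash
# Create a scratch Go file that calls os.Exit() directly.
demo_dir=$(mktemp -d)
cat > "$demo_dir/main.go" <<'EOF'
package main

import "os"

func main() { os.Exit(1) }
EOF

# Same word-boundary match used by ci/go-no-os-exit.sh: it prints the
# offending line, and we report the failure as the script does.
if egrep -n '\<os\.Exit\>' "$demo_dir/main.go"; then
    echo "Direct calls to os.Exit() are forbidden, please use exit()"
fi

rm -rf "$demo_dir"
```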


@@ -5,26 +5,26 @@
export tests_repo="${tests_repo:-github.com/kata-containers/tests}"
export tests_repo_dir="$GOPATH/src/$tests_repo"
export branch="${branch:-2.0-dev}"
clone_tests_repo()
{
# KATA_CI_NO_NETWORK is (has to be) ignored if there is
# no existing clone.
if [ -d "$tests_repo_dir" -a -n "$KATA_CI_NO_NETWORK" ]
if [ -d "$tests_repo_dir" -a -n "$CI" ]
then
return
fi
go get -d -u "$tests_repo" || true
if [ -n "${TRAVIS_BRANCH:-}" ]; then
( cd "${tests_repo_dir}" && git checkout "${TRAVIS_BRANCH}" )
fi
pushd "${tests_repo_dir}" && git checkout "${branch}" && popd
}
run_static_checks()
{
clone_tests_repo
# Make sure we have the target branch
git remote set-branches --add origin "${branch}"
git fetch -a
bash "$tests_repo_dir/.ci/static-checks.sh" "github.com/kata-containers/kata-containers"
}


@@ -1,3 +0,0 @@
## Kata Containers Documentation Code of Conduct
Kata Containers follows the [OpenStack Foundation Code of Conduct](https://www.openstack.org/legal/community-code-of-conduct/).


@@ -1,5 +0,0 @@
# Contributing
## This repo is part of [Kata Containers](https://katacontainers.io)
For details on how to contribute to the Kata Containers project, please see the main [contributing document](https://github.com/kata-containers/community/blob/master/CONTRIBUTING.md).


@@ -13,7 +13,6 @@
* [journald rate limiting](#journald-rate-limiting)
* [`systemd-journald` suppressing messages](#systemd-journald-suppressing-messages)
* [Disabling `systemd-journald` rate limiting](#disabling-systemd-journald-rate-limiting)
* [Build and install Kata shim](#build-and-install-kata-shim)
* [Create and install rootfs and initrd image](#create-and-install-rootfs-and-initrd-image)
* [Build a custom Kata agent - OPTIONAL](#build-a-custom-kata-agent---optional)
* [Get the osbuilder](#get-the-osbuilder)
@@ -30,26 +29,24 @@
* [Install a hypervisor](#install-a-hypervisor)
* [Build a custom QEMU](#build-a-custom-qemu)
* [Build a custom QEMU for aarch64/arm64 - REQUIRED](#build-a-custom-qemu-for-aarch64arm64---required)
* [Run Kata Containers with Docker](#run-kata-containers-with-docker)
* [Update the Docker systemd unit file](#update-the-docker-systemd-unit-file)
* [Create a container using Kata](#create-a-container-using-kata)
* [Run Kata Containers with Containerd](#run-kata-containers-with-containerd)
* [Run Kata Containers with Kubernetes](#run-kata-containers-with-kubernetes)
* [Troubleshoot Kata Containers](#troubleshoot-kata-containers)
* [Appendices](#appendices)
* [Checking Docker default runtime](#checking-docker-default-runtime)
* [Set up a debug console](#set-up-a-debug-console)
* [Create a custom image containing a shell](#create-a-custom-image-containing-a-shell)
* [Create a debug systemd service](#create-a-debug-systemd-service)
* [Build the debug image](#build-the-debug-image)
* [Configure runtime for custom debug image](#configure-runtime-for-custom-debug-image)
* [Ensure debug options are valid](#ensure-debug-options-are-valid)
* [Create a container](#create-a-container)
* [Connect to the virtual machine using the debug console](#connect-to-the-virtual-machine-using-the-debug-console)
* [Obtain details of the image](#obtain-details-of-the-image)
* [Simple debug console setup](#simple-debug-console-setup)
* [Enable agent debug console](#enable-agent-debug-console)
* [Start `kata-monitor`](#start-kata-monitor)
* [Connect to debug console](#connect-to-debug-console)
* [Traditional debug console setup](#traditional-debug-console-setup)
* [Create a custom image containing a shell](#create-a-custom-image-containing-a-shell)
* [Create a debug systemd service](#create-a-debug-systemd-service)
* [Build the debug image](#build-the-debug-image)
* [Configure runtime for custom debug image](#configure-runtime-for-custom-debug-image)
* [Create a container](#create-a-container)
* [Connect to the virtual machine using the debug console](#connect-to-the-virtual-machine-using-the-debug-console)
* [Capturing kernel boot logs](#capturing-kernel-boot-logs)
* [Running standalone](#running-standalone)
* [Create an OCI bundle](#create-an-oci-bundle)
* [Launch the runtime to create a container](#launch-the-runtime-to-create-a-container)
# Warning
@@ -66,7 +63,7 @@ The recommended way to create a development environment is to first
to create a working system.
The installation guide instructions will install all required Kata Containers
components, plus Docker*, the hypervisor, and the Kata Containers image and
components, plus *Docker*, the hypervisor, and the Kata Containers image and
guest kernel.
# Requirements to build individual components
@@ -76,7 +73,7 @@ You need to install the following to build Kata Containers components:
- [golang](https://golang.org/dl)
To view the versions of go known to work, see the `golang` entry in the
[versions database](https://github.com/kata-containers/runtime/blob/master/versions.yaml).
[versions database](../versions.yaml).
- `make`.
- `gcc` (required for building the shim and runtime).
@@ -84,14 +81,14 @@ You need to install the following to build Kata Containers components:
# Build and install the Kata Containers runtime
```
$ go get -d -u github.com/kata-containers/runtime
$ cd $GOPATH/src/github.com/kata-containers/runtime
$ go get -d -u github.com/kata-containers/kata-containers
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/src/runtime
$ make && sudo -E PATH=$PATH make install
```
The build will create the following:
- runtime binary: `/usr/local/bin/kata-runtime`
- runtime binary: `/usr/local/bin/kata-runtime` and `/usr/local/bin/containerd-shim-kata-v2`
- configuration file: `/usr/share/defaults/kata-containers/configuration.toml`
# Check hardware requirements
@@ -242,13 +239,6 @@ Restart `systemd-journald` for the changes to take effect:
$ sudo systemctl restart systemd-journald
```
# Build and install Kata shim
```
$ go get -d -u github.com/kata-containers/shim
$ cd $GOPATH/src/github.com/kata-containers/shim && make && sudo make install
```
# Create and install rootfs and initrd image
## Build a custom Kata agent - OPTIONAL
@@ -258,14 +248,15 @@ $ cd $GOPATH/src/github.com/kata-containers/shim && make && sudo make install
> - You should only do this step if you are testing with the latest version of the agent.
```
$ go get -d -u github.com/kata-containers/agent
$ cd $GOPATH/src/github.com/kata-containers/agent && make
$ go get -d -u github.com/kata-containers/kata-containers
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/src/agent && make
```
## Get the osbuilder
```
$ go get -d -u github.com/kata-containers/osbuilder
$ go get -d -u github.com/kata-containers/kata-containers
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/osbuilder
```
## Create a rootfs image
@@ -276,16 +267,16 @@ able to run the `rootfs.sh` script with `USE_DOCKER=true` as expected in
the following example.
```
$ export ROOTFS_DIR=${GOPATH}/src/github.com/kata-containers/osbuilder/rootfs-builder/rootfs
$ export ROOTFS_DIR=${GOPATH}/src/github.com/kata-containers/kata-containers/tools/osbuilder/rootfs-builder/rootfs
$ sudo rm -rf ${ROOTFS_DIR}
$ cd $GOPATH/src/github.com/kata-containers/osbuilder/rootfs-builder
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/osbuilder/rootfs-builder
$ script -fec 'sudo -E GOPATH=$GOPATH USE_DOCKER=true SECCOMP=no ./rootfs.sh ${distro}'
```
You MUST choose one of `alpine`, `centos`, `clearlinux`, `debian`, `euleros`, `fedora`, `suse`, and `ubuntu` for `${distro}`. By default `seccomp` packages are not included in the rootfs image. Set `SECCOMP` to `yes` to include them.
> **Note:**
>
> - Check the [compatibility matrix](https://github.com/kata-containers/osbuilder#platform-distro-compatibility-matrix) before creating rootfs.
> - Check the [compatibility matrix](../tools/osbuilder/README.md#platform-distro-compatibility-matrix) before creating rootfs.
> - You must ensure that the *default Docker runtime* is `runc` to make use of
> the `USE_DOCKER` variable. If that is not the case, remove the variable
> from the previous command. See [Checking Docker default runtime](#checking-docker-default-runtime).
@@ -305,7 +296,7 @@ $ sudo install -o root -g root -m 0440 ../../agent/kata-containers.target ${ROOT
### Build a rootfs image
```
$ cd $GOPATH/src/github.com/kata-containers/osbuilder/image-builder
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/osbuilder/image-builder
$ script -fec 'sudo -E USE_DOCKER=true ./image_builder.sh ${ROOTFS_DIR}'
```
@@ -332,9 +323,9 @@ $ (cd /usr/share/kata-containers && sudo ln -sf "$image" kata-containers.img)
## Create an initrd image - OPTIONAL
### Create a local rootfs for initrd image
```
$ export ROOTFS_DIR="${GOPATH}/src/github.com/kata-containers/osbuilder/rootfs-builder/rootfs"
$ export ROOTFS_DIR="${GOPATH}/src/github.com/kata-containers/kata-containers/tools/osbuilder/rootfs-builder/rootfs"
$ sudo rm -rf ${ROOTFS_DIR}
$ cd $GOPATH/src/github.com/kata-containers/osbuilder/rootfs-builder
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/osbuilder/rootfs-builder
$ script -fec 'sudo -E GOPATH=$GOPATH AGENT_INIT=yes USE_DOCKER=true SECCOMP=no ./rootfs.sh ${distro}'
```
`AGENT_INIT` controls if the guest image uses the Kata agent as the guest `init` process. When you create an initrd image,
@@ -344,7 +335,7 @@ You MUST choose one of `alpine`, `centos`, `clearlinux`, `euleros`, and `fedora`
> **Note:**
>
> - Check the [compatibility matrix](https://github.com/kata-containers/osbuilder#platform-distro-compatibility-matrix) before creating rootfs.
> - Check the [compatibility matrix](../tools/osbuilder/README.md#platform-distro-compatibility-matrix) before creating rootfs.
Optionally, add your custom agent binary to the rootfs with the following:
```
@@ -354,7 +345,7 @@ $ sudo install -o root -g root -m 0550 -T ../../agent/kata-agent ${ROOTFS_DIR}/s
### Build an initrd image
```
$ cd $GOPATH/src/github.com/kata-containers/osbuilder/initrd-builder
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/osbuilder/initrd-builder
$ script -fec 'sudo -E AGENT_INIT=yes USE_DOCKER=true ./initrd_builder.sh ${ROOTFS_DIR}'
```
@@ -382,7 +373,7 @@ Your QEMU directory needs to be prepared with source code. Alternatively, you can
```
$ go get -d github.com/kata-containers/qemu
$ qemu_branch=$(grep qemu-lite- ${GOPATH}/src/github.com/kata-containers/runtime/versions.yaml | cut -d '"' -f2)
$ qemu_branch=$(grep qemu-lite- ${GOPATH}/src/github.com/kata-containers/kata-containers/versions.yaml | cut -d '"' -f2)
$ cd ${GOPATH}/src/github.com/kata-containers/qemu
$ git checkout -b $qemu_branch remotes/origin/$qemu_branch
$ your_qemu_directory=${GOPATH}/src/github.com/kata-containers/qemu
@@ -391,9 +382,9 @@ $ your_qemu_directory=${GOPATH}/src/github.com/kata-containers/qemu
To build a version of QEMU using the same options as the default `qemu-lite` version, you can use the `configure-hypervisor.sh` script:
```
$ go get -d github.com/kata-containers/packaging
$ go get -d github.com/kata-containers/kata-containers/tools/packaging
$ cd $your_qemu_directory
$ ${GOPATH}/src/github.com/kata-containers/packaging/scripts/configure-hypervisor.sh qemu > kata.cfg
$ ${GOPATH}/src/github.com/kata-containers/kata-containers/tools/packaging/scripts/configure-hypervisor.sh qemu > kata.cfg
$ eval ./configure "$(cat kata.cfg)"
$ make -j $(nproc)
$ sudo -E make install
@@ -412,27 +403,11 @@ $ go get -d github.com/kata-containers/tests
$ script -fec 'sudo -E ${GOPATH}/src/github.com/kata-containers/tests/.ci/install_qemu.sh'
```
# Run Kata Containers with Docker
## Update the Docker systemd unit file
```
$ dockerUnit=$(systemctl show -p FragmentPath docker.service | cut -d "=" -f 2)
$ unitFile=${dockerUnit:-/etc/systemd/system/docker.service.d/kata-containers.conf}
$ test -e "$unitFile" || { sudo mkdir -p "$(dirname $unitFile)"; echo -e "[Service]\nType=simple\nExecStart=\nExecStart=/usr/bin/dockerd -D --default-runtime runc" | sudo tee "$unitFile"; }
$ grep -q "kata-runtime=" $unitFile || sudo sed -i 's!^\(ExecStart=[^$].*$\)!\1 --add-runtime kata-runtime=/usr/local/bin/kata-runtime!g' "$unitFile"
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
```
## Create a container using Kata
```
$ sudo docker run -ti --runtime kata-runtime busybox sh
```
# Run Kata Containers with Containerd
Refer to the [How to use Kata Containers and Containerd](how-to/containerd-kata.md) how-to guide.
# Run Kata Containers with Kubernetes
Refer to to the [Run Kata Containers with Kubernetes](how-to/run-kata-with-k8s.md) how-to guide.
Refer to the [Run Kata Containers with Kubernetes](how-to/run-kata-with-k8s.md) how-to guide.
# Troubleshoot Kata Containers
@@ -452,18 +427,6 @@ To perform analysis on Kata logs, use the
[`kata-log-parser`](https://github.com/kata-containers/tests/tree/master/cmd/log-parser)
tool, which can convert the logs into other formats (e.g. JSON, TOML, XML, and YAML).
To obtain a full backtrace for the agent, proxy, runtime, or shim send the
`SIGUSR1` signal to the process ID of the component. The component will send a
backtrace to the system log on the host system and continue to run without
interruption.
For example, to obtain a backtrace for `kata-proxy`:
```
$ sudo kill -USR1 $kata_proxy_pid
$ sudo journalctl -t kata-proxy
```
See [Set up a debug console](#set-up-a-debug-console).
# Appendices
@@ -473,9 +436,56 @@ See [Set up a debug console](#set-up-a-debug-console).
```
$ sudo docker info 2>/dev/null | grep -i "default runtime" | cut -d: -f2- | grep -q runc && echo "SUCCESS" || echo "ERROR: Incorrect default Docker runtime"
```
## Set up a debug console
Kata Containers provides two ways to connect to the guest. One uses the traditional login service, which needs additional work. In contrast, the simple debug console is easy to set up.
### Simple debug console setup
Kata Containers 2.0 supports a shell simulated *console* for quick debugging. This approach uses VSOCK to
connect to the shell running inside the guest, which the agent starts. This method only requires the guest image to
contain either `/bin/sh` or `/bin/bash`.
#### Enable agent debug console
Enable `debug_console_enabled` in the `configuration.toml` configuration file:
```
[agent.kata]
debug_console_enabled = true
```
This will pass `agent.debug_console agent.debug_console_vport=1026` to the agent as kernel parameters, and sandboxes created with these parameters will start a shell in the guest when a new connection is accepted over VSOCK.
#### Start `kata-monitor`
The `kata-runtime exec` command needs `kata-monitor` to obtain the sandbox's `vsock` address to connect to, so start `kata-monitor` first:
```
$ sudo kata-monitor
```
`kata-monitor` will serve at `localhost:8090` by default.
#### Connect to debug console
Use the `kata-runtime exec` command to connect to the debug console:
```
$ kata-runtime exec 1a9ab65be63b8b03dfd0c75036d27f0ed09eab38abb45337fea83acd3cd7bacd
bash-4.2# id
uid=0(root) gid=0(root) groups=0(root)
bash-4.2# pwd
/
bash-4.2# exit
exit
```
If you want to access the guest OS in the traditional way, see [Traditional debug console setup](#traditional-debug-console-setup).
### Traditional debug console setup
By default you cannot log in to a virtual machine, since this can be sensitive
from a security perspective. Also, allowing logins would require additional
packages in the rootfs, which would increase the size of the image used to
@@ -486,12 +496,12 @@ the following steps (using rootfs or initrd image).
> **Note:** The following debug console instructions assume a systemd-based guest
> O/S image. This means you must create a rootfs for a distro that supports systemd.
> Currently, all distros supported by [osbuilder](https://github.com/kata-containers/osbuilder) support systemd
> Currently, all distros supported by [osbuilder](../tools/osbuilder) support systemd
> except for Alpine Linux.
>
> Look for `INIT_PROCESS=systemd` in the `config.sh` osbuilder rootfs config file
> to verify an osbuilder distro supports systemd for the distro you want to build rootfs for.
> For an example, see the [Clear Linux config.sh file](https://github.com/kata-containers/osbuilder/blob/master/rootfs-builder/clearlinux/config.sh).
> For an example, see the [Clear Linux config.sh file](../tools/osbuilder/rootfs-builder/clearlinux/config.sh).
>
> For a non-systemd-based distro, create an equivalent system
> service using that distro's init system syntax. Alternatively, you can build a distro
@@ -501,7 +511,7 @@ the following steps (using rootfs or initrd image).
>
> Once these steps are taken you can connect to the virtual machine using the [debug console](Developer-Guide.md#connect-to-the-virtual-machine-using-the-debug-console).
### Create a custom image containing a shell
#### Create a custom image containing a shell
To login to a virtual machine, you must
[create a custom rootfs](#create-a-rootfs-image) or [custom initrd](#create-an-initrd-image---optional)
@@ -511,12 +521,12 @@ an additional `coreutils` package.
For example using CentOS:
```
$ cd $GOPATH/src/github.com/kata-containers/osbuilder/rootfs-builder
$ export ROOTFS_DIR=${GOPATH}/src/github.com/kata-containers/osbuilder/rootfs-builder/rootfs
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/osbuilder/rootfs-builder
$ export ROOTFS_DIR=${GOPATH}/src/github.com/kata-containers/kata-containers/tools/osbuilder/rootfs-builder/rootfs
$ script -fec 'sudo -E GOPATH=$GOPATH USE_DOCKER=true EXTRA_PKGS="bash coreutils" ./rootfs.sh centos'
```
### Create a debug systemd service
#### Create a debug systemd service
Create the service file that starts the shell in the rootfs directory:
@@ -545,12 +555,12 @@ Add a dependency to start the debug console:
$ sudo sed -i '$a Requires=kata-debug.service' ${ROOTFS_DIR}/lib/systemd/system/kata-containers.target
```
### Build the debug image
#### Build the debug image
Follow the instructions in the [Build a rootfs image](#build-a-rootfs-image)
section when using rootfs, or when using initrd, complete the steps in the [Build an initrd image](#build-an-initrd-image) section.
### Configure runtime for custom debug image
#### Configure runtime for custom debug image
Install the image:
@@ -575,31 +585,18 @@ $ (cd /usr/share/kata-containers && sudo ln -sf "$name" kata-containers.img)
**Note**: You should take care to undo this change after you finish debugging
to prevent all subsequently created containers from using the debug image.
### Ensure debug options are valid
#### Create a container
For the debug console to work, you **must** ensure that proxy debug is
**disabled** in the configuration file. If proxy debug is enabled, you will
not see any output when you connect to the virtual machine:
Create a container as normal. For example using `crictl`:
```
$ sudo mkdir -p /etc/kata-containers/
$ sudo install -o root -g root -m 0640 /usr/share/defaults/kata-containers/configuration.toml /etc/kata-containers
$ sudo awk '{if (/^\[proxy\.kata\]/) {got=1}; if (got == 1 && /^.*enable_debug/) {print "#enable_debug = true"; got=0; next; } else {print}}' /etc/kata-containers/configuration.toml > /tmp/configuration.toml
$ sudo install -o root -g root -m 0640 /tmp/configuration.toml /etc/kata-containers/
$ sudo crictl run -r kata container.yaml pod.yaml
```
### Create a container
Create a container as normal. For example using Docker:
#### Connect to the virtual machine using the debug console
```
$ sudo docker run -ti busybox sh
```
### Connect to the virtual machine using the debug console
```
$ id=$(sudo docker ps -q --no-trunc)
$ id=$(sudo crictl pods --no-trunc -q)
$ console="/var/run/vc/vm/${id}/console.sock"
$ sudo socat "stdin,raw,echo=0,escape=0x11" "unix-connect:${console}"
```
@@ -609,10 +606,10 @@ $ sudo socat "stdin,raw,echo=0,escape=0x11" "unix-connect:${console}"
To disconnect from the virtual machine, type `CONTROL+q` (hold down the
`CONTROL` key and press `q`).
### Obtain details of the image
## Obtain details of the image
If the image is created using
[osbuilder](https://github.com/kata-containers/osbuilder), the following YAML
[osbuilder](../tools/osbuilder), the following YAML
file exists and contains details of the image and how it was created:
```
@@ -629,54 +626,22 @@ command inside the container to view the kernel boot logs.
If however you are unable to `exec` into the container, you can enable some debug
options to have the kernel boot messages logged into the system journal.
Which debug options you enable depends on if you are using the hypervisor `vsock` mode
or not, as defined by the `use_vsock` setting in the `[hypervisor.qemu]` section of
the configuration file. The following details the settings:
- For `use_vsock = false`:
- Set `enable_debug = true` in both the `[hypervisor.qemu]` and `[proxy.kata]` sections
- For `use_vsock = true`:
- Set `enable_debug = true` in both the `[hypervisor.qemu]` and `[shim.kata]` sections
- Set `enable_debug = true` in the `[hypervisor.qemu]` and `[runtime]` sections
For generic information on enabling debug in the configuration file, see the
[Enable full debug](#enable-full-debug) section.
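Concretely, enabling both sections in the configuration file looks roughly like this (a sketch showing only the `enable_debug` keys; all other settings keep their defaults):

```toml
[hypervisor.qemu]
enable_debug = true

[runtime]
enable_debug = true
```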
The kernel boot messages will appear in the `kata-proxy` or `kata-shim` log appropriately,
The kernel boot messages will appear in the `containerd` or `CRI-O` log appropriately,
such as:
```bash
$ sudo journalctl -t kata-proxy
$ sudo journalctl -t containerd
-- Logs begin at Thu 2020-02-13 16:20:40 UTC, end at Thu 2020-02-13 16:30:23 UTC. --
...
Feb 13 16:20:56 minikube kata-proxy[17371]: time="2020-02-13T16:20:56.608714324Z" level=info msg="[ 1.418768] brd: module loaded\n" name=kata-proxy pid=17371 sandbox=a13ffb2b9b5a66f7787bdae9a427fa954a4d21ec4031d0179eee2573986a8a6e source=agent
Feb 13 16:20:56 minikube kata-proxy[17371]: time="2020-02-13T16:20:56.628493231Z" level=info msg="[ 1.438612] loop: module loaded\n" name=kata-proxy pid=17371 sandbox=a13ffb2b9b5a66f7787bdae9a427fa954a4d21ec4031d0179eee2573986a8a6e source=agent
Feb 13 16:20:56 minikube kata-proxy[17371]: time="2020-02-13T16:20:56.67707956Z" level=info msg="[ 1.487165] pmem0: p1\n" name=kata-proxy pid=17371 sandbox=a13ffb2b9b5a66f7787bdae9a427fa954a4d21ec4031d0179eee2573986a8a6e source=agent
time="2020-09-15T14:56:23.095113803+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/ab9f633385d4987828d342e47554fc6442445b32039023eeddaa971c1bb56791/console.sock pid=107642 sandbox=ab9f633385d4987828d342e47554fc6442445b32039023eeddaa971c1bb56791 source=virtcontainers subsystem=sandbox vmconsole="[ 0.395399] brd: module loaded"
time="2020-09-15T14:56:23.102633107+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/ab9f633385d4987828d342e47554fc6442445b32039023eeddaa971c1bb56791/console.sock pid=107642 sandbox=ab9f633385d4987828d342e47554fc6442445b32039023eeddaa971c1bb56791 source=virtcontainers subsystem=sandbox vmconsole="[ 0.402845] random: fast init done"
time="2020-09-15T14:56:23.103125469+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/ab9f633385d4987828d342e47554fc6442445b32039023eeddaa971c1bb56791/console.sock pid=107642 sandbox=ab9f633385d4987828d342e47554fc6442445b32039023eeddaa971c1bb56791 source=virtcontainers subsystem=sandbox vmconsole="[ 0.403544] random: crng init done"
time="2020-09-15T14:56:23.105268162+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/ab9f633385d4987828d342e47554fc6442445b32039023eeddaa971c1bb56791/console.sock pid=107642 sandbox=ab9f633385d4987828d342e47554fc6442445b32039023eeddaa971c1bb56791 source=virtcontainers subsystem=sandbox vmconsole="[ 0.405599] loop: module loaded"
time="2020-09-15T14:56:23.121121598+08:00" level=debug msg="reading guest console" console-protocol=unix console-url=/run/vc/vm/ab9f633385d4987828d342e47554fc6442445b32039023eeddaa971c1bb56791/console.sock pid=107642 sandbox=ab9f633385d4987828d342e47554fc6442445b32039023eeddaa971c1bb56791 source=virtcontainers subsystem=sandbox vmconsole="[ 0.421324] memmap_init_zone_device initialised 32768 pages in 12ms"
...
```
## Running standalone
It is possible to start the runtime without a container manager. This is
mostly useful for testing and debugging purposes.
### Create an OCI bundle
To build an
[OCI bundle](https://github.com/opencontainers/runtime-spec/blob/master/bundle.md),
required by the runtime:
```
$ bundle="/tmp/bundle"
$ rootfs="$bundle/rootfs"
$ mkdir -p "$rootfs" && (cd "$bundle" && kata-runtime spec)
$ sudo docker export $(sudo docker create busybox) | tar -C "$rootfs" -xvf -
```
### Launch the runtime to create a container
Run the runtime standalone by providing it with the path to the
previously-created [OCI bundle](#create-an-oci-bundle):
```
$ sudo kata-runtime --log=/dev/stdout run --bundle "$bundle" foo
```


@@ -54,7 +54,7 @@ Documents that help to understand and contribute to Kata Containers.
* [Developer Guide](Developer-Guide.md): Setup the Kata Containers developing environments
* [How to contribute to Kata Containers](https://github.com/kata-containers/community/blob/master/CONTRIBUTING.md)
* [Code of Conduct](CODE_OF_CONDUCT.md)
* [Code of Conduct](../CODE_OF_CONDUCT.md)
### Code Licensing


@@ -10,7 +10,6 @@
- [Merge all bump version Pull requests](#merge-all-bump-version-pull-requests)
- [Tag all Kata repositories](#tag-all-kata-repositories)
- [Check Git-hub Actions](#check-git-hub-actions)
- [Create OBS Packages](#create-obs-packages)
- [Create release notes](#create-release-notes)
- [Announce the release](#announce-the-release)
<!-- TOC END -->
@@ -42,7 +41,7 @@
Alternatively, you can also bump the repositories using a script in the Kata packaging repo
```
$ cd ${GOPATH}/src/github.com/kata-containers/packaging/release
$ cd ${GOPATH}/src/github.com/kata-containers/kata-containers/tools/packaging/release
$ export NEW_VERSION=<the-new-kata-version>
$ export BRANCH=<the-branch-you-want-to-bump>
$ ./update-repository-version.sh -p "$NEW_VERSION" "$BRANCH"
@@ -59,7 +58,7 @@
Once all the pull requests to bump versions in all Kata repositories are merged,
tag all the repositories as shown below.
```
$ cd ${GOPATH}/src/github.com/kata-containers/packaging/release
$ cd ${GOPATH}/src/github.com/kata-containers/kata-containers/tools/packaging/release
$ git checkout <kata-branch-to-release>
$ git pull
$ ./tag_repos.sh -p -b "$BRANCH" tag
@@ -71,33 +70,6 @@
Check the [actions status page](https://github.com/kata-containers/kata-containers/actions) to verify all steps in the actions workflow have completed successfully. On success, a static tarball containing Kata release artifacts will be uploaded to the [Release page](https://github.com/kata-containers/kata-containers/releases).
### Create OBS Packages
- We have set up an [Azure Pipelines](https://azure.microsoft.com/en-us/services/devops/pipelines/) job
to trigger generation of Kata packages in [OBS](https://build.opensuse.org/).
Go to the [Azure Pipelines job that creates OBS packages](https://dev.azure.com/kata-containers/release-process/_release?_a=releases&view=mine&definitionId=1).
- Click on "Create release" (blue button, at top right corner).
It should prompt you for variables to be passed to the release job. They should look like:
```
BRANCH="the-kata-branch-that-is-release"
BUILD_HEAD=false
OBS_BRANCH="the-kata-branch-that-is-release"
```
Note: If the release is an `Alpha`, `Beta`, or `RC` release (that is, part of a `master` release), please use `OBS_BRANCH=master`.
The above step shall create OBS packages for Kata for various distributions that Kata supports and test them as well.
- Verify that the packages have built successfully by checking the [Kata OBS project page](https://build.opensuse.org/project/subprojects/home:katacontainers).
- Make sure packages work correctly. This can be done manually or via the [package testing pipeline](http://jenkins.katacontainers.io/job/package-release-testing).
You have to make sure the packages are already published by OBS before this step.
It should prompt you for variables to be passed to the pipeline:
```
BRANCH="<kata-branch-to-release>"
NEW_VERSION=<the-version-you-expect-to-be-packaged|latest>
```
Note: `latest` will verify that a package provides the latest Kata tag in that branch.
### Create release notes
We have a script in place in the packaging repository to create release notes that include a short-log of the commits across Kata components.
@@ -105,12 +77,12 @@
Run the script as shown below:
```
$ cd ${GOPATH}/src/github.com/kata-containers/packaging/release
$ cd ${GOPATH}/src/github.com/kata-containers/kata-containers/tools/packaging/release
# Note: OLD_VERSION is where the script should start to get changes.
$ ./runtime-release-notes.sh ${OLD_VERSION} ${NEW_VERSION} > notes.md
# Edit the `notes.md` file to review and make any changes to the release notes.
# Add the release notes in GitHub runtime.
$ hub -C "${GOPATH}/src/github.com/kata-containers/runtime" release edit -F notes.md "${NEW_VERSION}"
$ hub release edit -F notes.md "${NEW_VERSION}"
```
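The heart of such a release-notes script is a short-log of the commits between two tags. A minimal, self-contained sketch of that idea follows; `release_shortlog` is a hypothetical helper demonstrated on a throwaway repository, and the real `runtime-release-notes.sh` does more (such as formatting the notes as Markdown):

```shell
# Print a short-log of the commits that landed between two git tags;
# this is the core of what a release-notes script collects.
release_shortlog() {
    git log --no-merges --pretty=format:'%s' "$1..$2"
}

# Demonstrate on a throwaway repository with two tagged "releases".
demo=$(mktemp -d)
cd "$demo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git tag 1.0.0
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "agent: fix typo"
git tag 1.1.0

release_shortlog 1.0.0 1.1.0   # prints: agent: fix typo
```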
### Announce the release



@@ -1,72 +1,57 @@
# Kata Containers Architecture
* [Overview](#overview)
* [Virtualization](#virtualization)
* [Guest assets](#guest-assets)
* [Guest kernel](#guest-kernel)
* [Guest Image](#guest-image)
* [Root filesystem image](#root-filesystem-image)
* [Initrd image](#initrd-image)
* [Agent](#agent)
* [Runtime](#runtime)
* [Configuration](#configuration)
* [Significant OCI commands](#significant-oci-commands)
* [create](#create)
* [start](#start)
* [exec](#exec)
* [kill](#kill)
* [delete](#delete)
* [Networking](#networking)
* [Storage](#storage)
* [Kubernetes Support](#kubernetes-support)
* [Problem Statement](#problem-statement)
* [Containerd](#containerd)
* [CRI-O](#cri-o)
* [OCI Annotations](#oci-annotations)
* [Mixing VM based and namespace based runtimes](#mixing-vm-based-and-namespace-based-runtimes)
* [Appendices](#appendices)
* [DAX](#dax)
- [Kata Containers Architecture](#kata-containers-architecture)
- [Overview](#overview)
- [Virtualization](#virtualization)
- [Guest assets](#guest-assets)
- [Guest kernel](#guest-kernel)
- [Guest image](#guest-image)
- [Root filesystem image](#root-filesystem-image)
- [Initrd image](#initrd-image)
- [Agent](#agent)
- [Runtime](#runtime)
- [Configuration](#configuration)
- [Networking](#networking)
- [CNM](#cnm)
- [Network Hotplug](#network-hotplug)
- [Storage](#storage)
- [Kubernetes support](#kubernetes-support)
- [OCI annotations](#oci-annotations)
- [Mixing VM based and namespace based runtimes](#mixing-vm-based-and-namespace-based-runtimes)
- [Appendices](#appendices)
- [DAX](#dax)
## Overview
This is an architectural overview of Kata Containers, based on the 1.5.0 release.
This is an architectural overview of Kata Containers, based on the 2.0 release.
The two primary deliverables of the Kata Containers project are a container runtime
and a CRI friendly shim. There is also a CRI friendly library API behind them.
The primary deliverable of the Kata Containers project is a CRI friendly shim. There is also a CRI friendly library API behind it.
The [Kata Containers runtime (`kata-runtime`)](../../src/runtime)
The [Kata Containers runtime](../../src/runtime)
is compatible with the [OCI](https://github.com/opencontainers) [runtime specification](https://github.com/opencontainers/runtime-spec)
and therefore works seamlessly with the
[Docker\* Engine](https://www.docker.com/products/docker-engine) pluggable runtime
architecture. It also supports the [Kubernetes\* Container Runtime Interface (CRI)](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md)
and therefore works seamlessly with the [Kubernetes\* Container Runtime Interface (CRI)](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md)
through the [CRI-O\*](https://github.com/kubernetes-incubator/cri-o) and
[Containerd CRI Plugin\*](https://github.com/containerd/cri) implementation. In other words, you can transparently
select between the [default Docker and CRI shim runtime (runc)](https://github.com/opencontainers/runc)
and `kata-runtime`.
[Containerd\*](https://github.com/containerd/containerd) implementation.
`kata-runtime` creates a QEMU\*/KVM virtual machine for each container or pod,
the Docker engine or `kubelet` (Kubernetes) creates respectively.
![Docker and Kata Containers](arch-images/docker-kata.png)
Kata Containers creates a QEMU\*/KVM virtual machine for each pod that `kubelet` (Kubernetes) creates.
The [`containerd-shim-kata-v2` (shown as `shimv2` from this point onwards)](../../src/runtime/containerd-shim-v2)
is another Kata Containers entrypoint, which
is the Kata Containers entrypoint, which
implements the [Containerd Runtime V2 (Shim API)](https://github.com/containerd/containerd/tree/master/runtime/v2) for Kata.
With `shimv2`, Kubernetes can launch Pod and OCI compatible containers with one shim (the `shimv2`) per Pod instead
of `2N+1` shims (a `containerd-shim` and a `kata-shim` for each container and the Pod sandbox itself), and no standalone
`kata-proxy` process even if no VSOCK is available.
Before `shimv2` (as in [Kata Containers 1.x releases](https://github.com/kata-containers/runtime/releases)), a `containerd-shim` and a [`kata-shim`](https://github.com/kata-containers/shim) were needed for each container and for the Pod sandbox itself, plus an optional [`kata-proxy`](https://github.com/kata-containers/proxy) when VSOCK was not available. With `shimv2`, Kubernetes can launch Pod and OCI compatible containers with a single shim (the `shimv2`) per Pod instead of `2N+1` shims, with no standalone `kata-proxy` process even when VSOCK is unavailable.
![Kubernetes integration with shimv2](arch-images/shimv2.svg)
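To make the process-count difference concrete, here is the arithmetic for a hypothetical pod of three containers:

```shell
# Host-side shim processes needed per pod, before and after shimv2.
n=3                       # containers in the pod (illustrative)
legacy=$((2 * n + 1))     # 2N+1 helper processes in Kata 1.x:
                          # a containerd-shim and a kata-shim per
                          # container, plus one for the sandbox
echo "Kata 1.x: $legacy shim processes (plus kata-proxy when VSOCK is unavailable)"
echo "Kata 2.x: 1 shimv2 process per pod"
```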
The container process is then spawned by
[agent](../../src/agent), an agent process running
as a daemon inside the virtual machine. `kata-agent` runs a gRPC server in
[`kata-agent`](../../src/agent), an agent process running
as a daemon inside the virtual machine. `kata-agent` runs a [`ttRPC`](https://github.com/containerd/ttrpc-rust) server in
the guest using a VIRTIO serial or VSOCK interface which QEMU exposes as a socket
file on the host. `kata-runtime` uses a gRPC protocol to communicate with
file on the host. `shimv2` uses a `ttRPC` protocol to communicate with
the agent. This protocol allows the runtime to send container management
commands to the agent. The protocol is also used to carry the I/O streams (stdout,
stderr, stdin) between the containers and the manage engines (e.g. Docker Engine).
stderr, stdin) between the containers and the management engines (e.g. CRI-O or containerd).
For any given container, both the init process and all potentially executed
commands within that container, together with their related I/O streams, need
@@ -111,7 +96,7 @@ The only services running in the context of the mini O/S are the init daemon
is created using libcontainer, creating a container in the same manner that is done
by `runc`.
For example, when `docker run -ti ubuntu date` is run:
For example, when `ctr run -ti ubuntu date` is run:
- The hypervisor will boot the mini-OS image using the guest kernel.
- `systemd`, running inside the mini-OS context, will launch the `kata-agent` in
@@ -130,170 +115,37 @@ The only service running in the context of the initrd is the [Agent](#agent) as
## Agent
[`kata-agent`](../../src/agent) is a process running in the guest as a supervisor for managing containers and processes running within those containers.
The `kata-agent` execution unit is the sandbox. A `kata-agent` sandbox is a container sandbox defined by a set of namespaces (NS, UTS, IPC and PID). `kata-runtime` can
For the 2.0 release, the `kata-agent` has been rewritten in the [Rust programming language](https://www.rust-lang.org/) to minimize its memory footprint while keeping the memory safety of the original Go version of [`kata-agent` used in Kata Containers 1.x](https://github.com/kata-containers/agent). The memory footprint reduction is significant, from tens of megabytes down to less than 100 kilobytes, enabling Kata Containers in more use cases such as function computing and edge computing.
The `kata-agent` execution unit is the sandbox. A `kata-agent` sandbox is a container sandbox defined by a set of namespaces (NS, UTS, IPC and PID). `shimv2` can
run several containers per VM to support container engines that require multiple
containers running inside a pod. In the case of docker, `kata-runtime` creates a
single container per pod.
containers running inside a pod.
`kata-agent` communicates with the other Kata components over ttRPC.
### Agent gRPC protocol
placeholder
`kata-agent` communicates with the other Kata components over `ttRPC`.
## Runtime
`kata-runtime` is an OCI compatible container runtime and is responsible for handling
all commands specified by
[the OCI runtime specification](https://github.com/opencontainers/runtime-spec)
and launching `kata-shim` instances.
`containerd-shim-kata-v2` is a [containerd runtime shimv2](https://github.com/containerd/containerd/blob/v1.4.1/runtime/v2/README.md) implementation and is responsible for handling the `runtime v2 shim APIs`, which are similar to [the OCI runtime specification](https://github.com/opencontainers/runtime-spec) but simplify the architecture by loading the runtime once and making RPC calls to handle the various container lifecycle commands. This refinement is an improvement on the OCI specification, which requires the container manager to call the runtime binary multiple times, at least once for each lifecycle command.
`kata-runtime` heavily utilizes the
[virtcontainers project](https://github.com/containers/virtcontainers), which
provides a generic, runtime-specification agnostic, hardware-virtualized containers
library.
`containerd-shim-kata-v2` heavily utilizes the
[virtcontainers package](../../src/runtime/virtcontainers/), which provides a generic, runtime-specification agnostic, hardware-virtualized containers library.
### Configuration
The runtime uses a TOML format configuration file called `configuration.toml`. By default this file is installed in the `/usr/share/defaults/kata-containers` directory and contains various settings such as the paths to the hypervisor, the guest kernel and the mini-OS image.
The actual configuration file paths can be determined by running:
```
$ kata-runtime --kata-show-default-config-paths
```
Most users will not need to modify the configuration file.
The file is well commented and provides a few "knobs" that can be used to modify the behavior of the runtime and your chosen hypervisor.
The configuration file is also used to enable runtime [debug output](../Developer-Guide.md#enable-full-debug).
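For illustration, a minimal excerpt of what such a `configuration.toml` can look like; the paths and values below are examples, not defaults to copy:

```toml
# Hypothetical excerpt of /usr/share/defaults/kata-containers/configuration.toml
[hypervisor.qemu]
path = "/usr/bin/qemu-system-x86_64"                     # hypervisor binary
kernel = "/usr/share/kata-containers/vmlinuz.container"  # guest kernel
image = "/usr/share/kata-containers/kata-containers.img" # mini-OS image
enable_debug = true                                      # hypervisor debug output

[runtime]
enable_debug = true                                      # runtime debug output
```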
### Significant OCI commands
Here we describe how `kata-runtime` handles the most important OCI commands.
#### `create`
When handling the OCI
[`create`](https://github.com/kata-containers/runtime/blob/master/cli/create.go)
command, `kata-runtime` goes through the following steps:
1. Create the network namespace where we will spawn VM and shims processes.
2. Call into the pre-start hooks. One of them should be responsible for creating
the `veth` network pair between the host network namespace and the network namespace
freshly created.
3. Scan the network from the new network namespace, and create a MACVTAP connection
between the `veth` interface and a `tap` interface into the VM.
4. Start the VM inside the network namespace by providing the `tap` interface
previously created.
5. Wait for the VM to be ready.
6. Start `kata-proxy`, which will connect to the created VM. The `kata-proxy` process
will take care of proxying all communications with the VM. Kata has a single proxy
per VM.
7. Communicate with `kata-agent` (through the proxy) to configure the sandbox
inside the VM.
8. Communicate with `kata-agent` to create the container, relying on the OCI
configuration file `config.json` initially provided to `kata-runtime`. This
spawns the container process inside the VM, leveraging the `libcontainer` package.
9. Start `kata-shim`, which will connect to the gRPC server socket provided by the `kata-proxy`. `kata-shim` will spawn a few Go routines to parallelize blocking calls `ReadStdout()` , `ReadStderr()` and `WaitProcess()`. Both `ReadStdout()` and `ReadStderr()` are run through infinite loops since `kata-shim` wants the output of those until the container process terminates. `WaitProcess()` is a unique call which returns the exit code of the container process when it terminates inside the VM. Note that `kata-shim` is started inside the network namespace, to allow upper layers to determine which network namespace has been created and by checking the `kata-shim` process. It also creates a new PID namespace by entering into it. This ensures that all `kata-shim` processes belonging to the same container will get killed when the `kata-shim` representing the container process terminates.
At this point the container process is running inside of the VM, and it is represented
on the host system by the `kata-shim` process.
![`kata-oci-create`](arch-images/kata-oci-create.svg)
#### `start`
With traditional containers, [`start`](https://github.com/kata-containers/runtime/blob/master/cli/start.go) launches a container process in its own set of namespaces. With Kata Containers, the main task of `kata-runtime` is to ask [`kata-agent`](#agent) to start the container workload inside the virtual machine. `kata-runtime` will run through the following steps:
1. Communicate with `kata-agent` (through the proxy) to start the container workload
inside the VM. If, for example, the command to execute inside of the container is `top`,
the `kata-shim`'s `ReadStdOut()` will start returning text output for top, and
`WaitProcess()` will continue to block as long as the `top` process runs.
2. Call into the post-start hooks. Usually, this is a no-op since nothing is provided
(this needs clarification)
![`kata-oci-start`](arch-images/kata-oci-start.svg)
#### `exec`
OCI [`exec`](https://github.com/kata-containers/runtime/blob/master/cli/exec.go) allows you to run an additional command within an already running
container. In Kata Containers, this is handled as follows:
1. A request is sent to the `kata agent` (through the proxy) to start a new process
inside an existing container running within the VM.
2. A new `kata-shim` is created within the same network and PID namespaces as the
original `kata-shim` representing the container process. This new `kata-shim` is
used for the new exec process.
Now the process started with `exec` is running within the VM, sharing `uts`, `pid`, `mnt` and `ipc` namespaces with the container process.
![`kata-oci-exec`](arch-images/kata-oci-exec.svg)
#### `kill`
When sending the OCI [`kill`](https://github.com/kata-containers/runtime/blob/master/cli/kill.go) command, the container runtime should send a
[UNIX signal](https://en.wikipedia.org/wiki/Unix_signal) to the container process.
A `kill` sending a termination signal such as `SIGKILL` or `SIGTERM` is expected
to terminate the container process. In the context of a traditional container,
this means stopping the container. For `kata-runtime`, this translates to stopping
the container and the VM associated with it.
1. Send a request to kill the container process to the `kata-agent` (through the proxy).
2. Wait for `kata-shim` process to exit.
3. Force kill the container process if `kata-shim` process didn't return after a
timeout. This is done by communicating with `kata-agent` (connecting the proxy),
sending `SIGKILL` signal to the container process inside the VM.
4. Wait for `kata-shim` process to exit, and return an error if we reach the
timeout again.
5. Communicate with `kata-agent` (through the proxy) to remove the container
configuration from the VM.
6. Communicate with `kata-agent` (through the proxy) to destroy the sandbox
configuration from the VM.
7. Stop the VM.
8. Remove all network configurations inside the network namespace and delete the
namespace.
9. Execute post-stop hooks.
If `kill` was invoked with a non-termination signal, this simply signals the container process. Otherwise, everything has been torn down, and the VM has been removed.
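The signal-then-escalate logic in the numbered steps above is the classic pattern below, sketched here against an ordinary host process rather than a Kata VM:

```shell
# Send SIGTERM, wait up to a timeout for the process to exit,
# then escalate to SIGKILL, as in steps 1-4 above.
terminate_with_timeout() {
    pid=$1 timeout=$2
    kill -TERM "$pid" 2>/dev/null
    waited=0
    while [ "$waited" -lt "$timeout" ] && kill -0 "$pid" 2>/dev/null; do
        sleep 1
        waited=$((waited + 1))
    done
    # Escalate if the process is still alive after the timeout.
    if kill -0 "$pid" 2>/dev/null; then
        kill -KILL "$pid"
    fi
}

sleep 300 &                    # stand-in for the container process
victim=$!
terminate_with_timeout "$victim" 5
wait "$victim" 2>/dev/null     # reap; exit status reflects the signal
echo "process terminated"
```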
#### `delete`
[`delete`](https://github.com/kata-containers/runtime/blob/master/cli/delete.go) removes all internal resources related to a container. A running container
cannot be deleted unless the OCI runtime is explicitly being asked to, by using
`--force` flag.
If the sandbox is not stopped, but the particular container process returned on
its own already, the `kata-runtime` will first go through most of the steps a `kill`
would go through for a termination signal. After this process, or if the `sandboxID` was already stopped to begin with, then `kata-runtime` will:
1. Remove container resources. Every file kept under `/var/{lib,run}/virtcontainers/sandboxes/<sandboxID>/<containerID>`.
2. Remove sandbox resources. Every file kept under `/var/{lib,run}/virtcontainers/sandboxes/<sandboxID>`.
At this point, everything related to the container should have been removed from the host system, and no related process should be running.
#### `state`
[`state`](https://github.com/kata-containers/runtime/blob/master/cli/state.go)
returns the status of the container. For `kata-runtime`, this means being
able to detect if the container is still running by looking at the state of `kata-shim`
process representing this container process.
1. Ask the container status by checking information stored on disk. (clarification needed)
2. Check `kata-shim` process representing the container.
3. In case the container status on disk was supposed to be `ready` or `running`,
and the `kata-shim` process no longer exists, this involves the detection of a
stopped container. This means that before returning the container status,
the container has to be properly stopped. Here are the steps involved in this detection:
1. Wait for `kata-shim` process to exit.
2. Force kill the container process if `kata-shim` process didn't return after a timeout. This is done by communicating with `kata-agent` (connecting the proxy), sending `SIGKILL` signal to the container process inside the VM.
3. Wait for `kata-shim` process to exit, and return an error if we reach the timeout again.
4. Communicate with `kata-agent` (connecting the proxy) to remove the container configuration from the VM.
4. Return container status.
## Networking
Containers will typically live in their own, possibly shared, networking namespace.
@@ -310,7 +162,7 @@ cannot handle `veth` interfaces. Typically, `TAP` interfaces are created for VM
connectivity.
To overcome incompatibility between typical container engines expectations
and virtual machines, `kata-runtime` networking transparently connects `veth`
and virtual machines, Kata Containers networking transparently connects `veth`
interfaces with `TAP` ones using MACVTAP:
![Kata Containers networking](arch-images/network.png)
@@ -375,35 +227,14 @@ The following diagram illustrates the Kata Containers network hotplug workflow.
![Network Hotplug](arch-images/kata-containers-network-hotplug.png)
## Storage
Container workloads are shared with the virtualized environment through [9pfs](https://www.kernel.org/doc/Documentation/filesystems/9p.txt).
The devicemapper storage driver is a special case. The driver uses dedicated block
devices rather than formatted filesystems, and operates at the block level rather
than the file level. This knowledge is used to directly use the underlying block
device instead of the overlay file system for the container root file system. The
block device maps to the top read-write layer for the overlay. This approach gives
much better I/O performance compared to using 9pfs to share the container file system.
Container workloads are shared with the virtualized environment through [virtio-fs](https://virtio-fs.gitlab.io/).
The approach above does introduce a limitation in terms of dynamic file copy
in/out of the container using the `docker cp` operations. The copy operation from
host to container accesses the mounted file system on the host-side. This is
not expected to work and may lead to inconsistencies as the block device will
be simultaneously written to from two different mounts. The copy operation from
container to host will work, provided the user calls `sync(1)` from within the
container prior to the copy to make sure any outstanding cached data is written
to the block device.
The [devicemapper `snapshotter`](https://github.com/containerd/containerd/tree/master/snapshots/devmapper) is a special case. The `snapshotter` uses dedicated block devices rather than formatted filesystems, and operates at the block level rather than the file level. This knowledge is used to directly use the underlying block device instead of the overlay file system for the container root file system. The block device maps to the top read-write layer for the overlay. This approach gives much better I/O performance compared to using `virtio-fs` to share the container file system.
```
docker cp [OPTIONS] CONTAINER:SRC_PATH HOST:DEST_PATH
docker cp [OPTIONS] HOST:SRC_PATH CONTAINER:DEST_PATH
```
Kata Containers has the ability to hotplug and remove block devices, which makes it possible to use block devices for containers started after the VM has been launched.
Users can check to see if the container uses the devicemapper block device as its rootfs by calling `mount(8)` within the container. If the devicemapper block device
is used, `/` will be mounted on `/dev/vda`. Users can disable direct mounting of the underlying block device through the runtime configuration.
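The check described above boils down to inspecting the mount table. A small sketch, run here against sample `mount(8)`-style lines rather than a live Kata container (`rootfs_is_block_device` is a hypothetical helper):

```shell
# Return success if the given mount(8)-style line shows / mounted from
# /dev/vda, i.e. the container rootfs is backed directly by the block device.
rootfs_is_block_device() {
    echo "$1" | awk '$3 == "/" && $1 == "/dev/vda" { found = 1 } END { exit !found }'
}

# Sample lines, as mount(8) might print them inside a container:
devmapper_rootfs="/dev/vda on / type ext4 (rw,relatime)"
virtiofs_rootfs="kataShared on / type virtiofs (rw,relatime)"

rootfs_is_block_device "$devmapper_rootfs" && echo "devicemapper-backed rootfs"
rootfs_is_block_device "$virtiofs_rootfs" || echo "shared-filesystem rootfs"
```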
## Kubernetes support
@@ -424,44 +255,13 @@ lifecycle management from container execution through the dedicated
In other words, a Kubelet is a CRI client and expects a CRI implementation to
handle the server side of the interface.
[CRI-O\*](https://github.com/kubernetes-incubator/cri-o) and [Containerd CRI Plugin\*](https://github.com/containerd/cri) are CRI implementations that rely on [OCI](https://github.com/opencontainers/runtime-spec)
[CRI-O\*](https://github.com/kubernetes-incubator/cri-o) and [Containerd\*](https://github.com/containerd/containerd/) are CRI implementations that rely on [OCI](https://github.com/opencontainers/runtime-spec)
compatible runtimes for managing container instances.
Kata Containers is an officially supported CRI-O and Containerd CRI Plugin runtime. It is OCI compatible and therefore aligns with project's architecture and requirements.
However, due to the fact that Kubernetes execution units are sets of containers (also
known as pods) rather than single containers, the Kata Containers runtime needs to
get extra information to seamlessly integrate with Kubernetes.
### Problem statement
The Kubernetes\* execution unit is a pod that has specifications detailing constraints
such as namespaces, groups, hardware resources, security contexts, *etc.* shared by all
the containers within that pod.
By default the Kubelet will send a container creation request to its CRI runtime for
each pod and container creation. Without additional metadata from the CRI runtime,
the Kata Containers runtime will thus create one virtual machine for each pod and for
each containers within a pod. However the task of providing the Kubernetes pod semantics
when creating one virtual machine for each container within the same pod is complex given
the resources of these virtual machines (such as networking or PID) need to be shared.
The challenge with Kata Containers when working as a Kubernetes\* runtime is thus to know
when to create a full virtual machine (for pods) and when to create a new container inside
a previously created virtual machine. In both cases it will get called with very similar
arguments, so it needs the help of the Kubernetes CRI runtime to be able to distinguish a
pod creation request from a container one.
### Containerd
As of Kata Containers 1.5, using `shimv2` with containerd 1.2.0 or above is the preferred
way to run Kata Containers with Kubernetes ([see the howto](../how-to/how-to-use-k8s-with-cri-containerd-and-kata.md#configure-containerd-to-use-kata-containers)).
The CRI-O will catch up soon ([`kubernetes-sigs/cri-o#2024`](https://github.com/kubernetes-sigs/cri-o/issues/2024)).
Refer to the following how-to guides:
Kata Containers is an officially supported CRI-O and Containerd runtime. Refer to the following guides on how to set up Kata Containers with Kubernetes:
- [How to use Kata Containers and Containerd](../how-to/containerd-kata.md)
- [How to use Kata Containers and CRI (containerd plugin) with Kubernetes](../how-to/how-to-use-k8s-with-cri-containerd-and-kata.md)
### CRI-O
- [Run Kata Containers with Kubernetes](../how-to/run-kata-with-k8s.md)
#### OCI annotations
@@ -506,36 +306,10 @@ with a Kubernetes pod:
#### Mixing VM based and namespace based runtimes
> **Note:** Since Kubernetes 1.12, the [`Kubernetes RuntimeClass`](../how-to/containerd-kata.md#kubernetes-runtimeclass)
> **Note:** Since Kubernetes 1.12, the [`Kubernetes RuntimeClass`](https://kubernetes.io/docs/concepts/containers/runtime-class/)
> has been supported and the user can specify a runtime without the non-standardized annotations.
One interesting evolution of the CRI-O support for `kata-runtime` is the ability
to run virtual machine based pods alongside namespace ones. With CRI-O and Kata
Containers, one can introduce the concept of workload trust inside a Kubernetes
cluster.
A cluster operator can now tag (through Kubernetes annotations) container workloads
as `trusted` or `untrusted`. The former labels known to be safe workloads while
the latter describes potentially malicious or misbehaving workloads that need the
highest degree of isolation. In a software development context, an example of a `trusted` workload would be a containerized continuous integration engine whereas all
developers applications would be `untrusted` by default. Developers workloads can
be buggy, unstable or even include malicious code and thus from a security perspective
it makes sense to tag them as `untrusted`. A CRI-O and Kata Containers based
Kubernetes cluster handles this use case transparently as long as the deployed
containers are properly tagged. All `untrusted` containers will be handled by Kata Containers and thus run in a hardware virtualized secure sandbox while `runc`, for
example, could handle the `trusted` ones.
CRI-O's default behavior is to trust all pods, except when they're annotated with
`io.kubernetes.cri-o.TrustedSandbox` set to `false`. The default CRI-O trust level
is set through its `configuration.toml` configuration file. Generally speaking,
the CRI-O runtime selection between its trusted runtime (typically `runc`) and its untrusted one (`kata-runtime`) is a function of the pod `Privileged` setting, the `io.kubernetes.cri-o.TrustedSandbox` annotation value, and the default CRI-O trust
level. When a pod is `Privileged`, the runtime will always be `runc`. However, when
a pod is **not** `Privileged` the runtime selection is done as follows:
| | `io.kubernetes.cri-o.TrustedSandbox` not set | `io.kubernetes.cri-o.TrustedSandbox` = `true` | `io.kubernetes.cri-o.TrustedSandbox` = `false` |
| :--- | :---: | :---: | :---: |
| Default CRI-O trust level: `trusted` | runc | runc | Kata Containers |
| Default CRI-O trust level: `untrusted` | Kata Containers | Kata Containers | Kata Containers |
With `RuntimeClass`, users can define Kata Containers as a `RuntimeClass` and then explicitly specify that a pod be created as a Kata Containers pod. For details, please refer to [How to use Kata Containers and Containerd](../../docs/how-to/containerd-kata.md).
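For reference, a `RuntimeClass` and a pod selecting it look roughly like this; the handler name must match the runtime configured in containerd, and `kata` is the conventional choice (names here are illustrative):

```yaml
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: kata
handler: kata              # must match the containerd runtime configuration
---
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-workload
spec:
  runtimeClassName: kata   # run this pod inside a Kata VM
  containers:
  - name: app
    image: nginx
```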
# Appendices


@@ -220,6 +220,566 @@ components:
fixed: false
values: []
since: 2.0.0
- prefix: kata_firecracker
title: Firecracker vmm metrics
desc: Metrics for Firecracker vmm
metrics:
- name: kata_firecracker_api_server
type: GAUGE
unit: ""
help: Metrics related to the internal API server.
labels:
- name: item
desc: ""
manually_edit: false
fixed: true
values:
- value: process_startup_time_cpu_us
desc: ""
- value: process_startup_time_us
desc: ""
- value: sync_response_fails
desc: ""
- value: sync_vmm_send_timeout_count
desc: ""
- name: sandbox_id
desc: ""
manually_edit: false
fixed: false
values: []
since: 2.0.0
- name: kata_firecracker_block
type: GAUGE
unit: ""
help: Block Device associated metrics.
labels:
- name: item
desc: ""
manually_edit: false
fixed: true
values:
- value: activate_fails
desc: ""
- value: cfg_fails
desc: ""
- value: event_fails
desc: ""
- value: execute_fails
desc: ""
- value: flush_count
desc: ""
- value: invalid_reqs_count
desc: ""
- value: no_avail_buffer
desc: ""
- value: queue_event_count
desc: ""
- value: rate_limiter_event_count
desc: ""
- value: rate_limiter_throttled_events
desc: ""
- value: read_bytes
desc: ""
- value: read_count
desc: ""
- value: update_count
desc: ""
- value: update_fails
desc: ""
- value: write_bytes
desc: ""
- value: write_count
desc: ""
- name: sandbox_id
desc: ""
manually_edit: false
fixed: false
values: []
since: 2.0.0
- name: kata_firecracker_get_api_requests
type: GAUGE
unit: ""
help: Metrics specific to GET API Requests for counting user triggered actions and/or failures.
labels:
- name: item
desc: ""
manually_edit: false
fixed: true
values:
- value: instance_info_count
desc: ""
- value: instance_info_fails
desc: ""
- value: machine_cfg_count
desc: ""
- value: machine_cfg_fails
desc: ""
- name: sandbox_id
desc: ""
manually_edit: false
fixed: false
values: []
since: 2.0.0
- name: kata_firecracker_i8042
type: GAUGE
unit: ""
help: Metrics specific to the i8042 device.
labels:
- name: item
desc: ""
manually_edit: false
fixed: true
values:
- value: error_count
desc: ""
- value: missed_read_count
desc: ""
- value: missed_write_count
desc: ""
- value: read_count
desc: ""
- value: reset_count
desc: ""
- value: write_count
desc: ""
- name: sandbox_id
desc: ""
manually_edit: false
fixed: false
values: []
since: 2.0.0
- name: kata_firecracker_latencies_us
type: GAUGE
unit: ""
help: Performance metrics related for the moment only to snapshots.
labels:
- name: item
desc: ""
manually_edit: false
fixed: true
values:
- value: diff_create_snapshot
desc: ""
- value: full_create_snapshot
desc: ""
- value: load_snapshot
desc: ""
- value: pause_vm
desc: ""
- value: resume_vm
desc: ""
- value: vmm_diff_create_snapshot
desc: ""
- value: vmm_full_create_snapshot
desc: ""
- value: vmm_load_snapshot
desc: ""
- value: vmm_pause_vm
desc: ""
- value: vmm_resume_vm
desc: ""
- name: sandbox_id
desc: ""
manually_edit: false
fixed: false
values: []
since: 2.0.0
- name: kata_firecracker_logger
type: GAUGE
unit: ""
help: Metrics for the logging subsystem.
labels:
- name: item
desc: ""
manually_edit: false
fixed: true
values:
- value: log_fails
desc: ""
- value: metrics_fails
desc: ""
- value: missed_log_count
desc: ""
- value: missed_metrics_count
desc: ""
- name: sandbox_id
desc: ""
manually_edit: false
fixed: false
values: []
since: 2.0.0
- name: kata_firecracker_mmds
type: GAUGE
unit: ""
help: Metrics for the MMDS functionality.
labels:
- name: item
desc: ""
manually_edit: false
fixed: true
values:
- value: connections_created
desc: ""
- value: connections_destroyed
desc: ""
- value: rx_accepted
desc: ""
- value: rx_accepted_err
desc: ""
- value: rx_accepted_unusual
desc: ""
- value: rx_bad_eth
desc: ""
- value: rx_count
desc: ""
- value: tx_bytes
desc: ""
- value: tx_count
desc: ""
- value: tx_errors
desc: ""
- value: tx_frames
desc: ""
- name: sandbox_id
desc: ""
manually_edit: false
fixed: false
values: []
since: 2.0.0
- name: kata_firecracker_net
type: GAUGE
unit: ""
help: Network-related metrics.
labels:
- name: item
desc: ""
manually_edit: false
fixed: true
values:
- value: activate_fails
desc: ""
- value: cfg_fails
desc: ""
- value: event_fails
desc: ""
- value: mac_address_updates
desc: ""
- value: no_rx_avail_buffer
desc: ""
- value: no_tx_avail_buffer
desc: ""
- value: rx_bytes_count
desc: ""
- value: rx_count
desc: ""
- value: rx_event_rate_limiter_count
desc: ""
- value: rx_fails
desc: ""
- value: rx_packets_count
desc: ""
- value: rx_partial_writes
desc: ""
- value: rx_queue_event_count
desc: ""
- value: rx_rate_limiter_throttled
desc: ""
- value: rx_tap_event_count
desc: ""
- value: tap_read_fails
desc: ""
- value: tap_write_fails
desc: ""
- value: tx_bytes_count
desc: ""
- value: tx_count
desc: ""
- value: tx_fails
desc: ""
- value: tx_malformed_frames
desc: ""
- value: tx_packets_count
desc: ""
- value: tx_partial_reads
desc: ""
- value: tx_queue_event_count
desc: ""
- value: tx_rate_limiter_event_count
desc: ""
- value: tx_rate_limiter_throttled
desc: ""
- value: tx_spoofed_mac_count
desc: ""
- name: sandbox_id
desc: ""
manually_edit: false
fixed: false
values: []
since: 2.0.0
- name: kata_firecracker_patch_api_requests
type: GAUGE
unit: ""
help: Metrics specific to PATCH API Requests for counting user triggered actions and/or failures.
labels:
- name: item
desc: ""
manually_edit: false
fixed: true
values:
- value: drive_count
desc: ""
- value: drive_fails
desc: ""
- value: machine_cfg_count
desc: ""
- value: machine_cfg_fails
desc: ""
- value: network_count
desc: ""
- value: network_fails
desc: ""
- name: sandbox_id
desc: ""
manually_edit: false
fixed: false
values: []
since: 2.0.0
- name: kata_firecracker_put_api_requests
type: GAUGE
unit: ""
help: Metrics specific to PUT API Requests for counting user triggered actions and/or failures.
labels:
- name: item
desc: ""
manually_edit: false
fixed: true
values:
- value: actions_count
desc: ""
- value: actions_fails
desc: ""
- value: boot_source_count
desc: ""
- value: boot_source_fails
desc: ""
- value: drive_count
desc: ""
- value: drive_fails
desc: ""
- value: logger_count
desc: ""
- value: logger_fails
desc: ""
- value: machine_cfg_count
desc: ""
- value: machine_cfg_fails
desc: ""
- value: metrics_count
desc: ""
- value: metrics_fails
desc: ""
- value: network_count
desc: ""
- value: network_fails
desc: ""
- name: sandbox_id
desc: ""
manually_edit: false
fixed: false
values: []
since: 2.0.0
- name: kata_firecracker_rtc
type: GAUGE
unit: ""
help: Metrics specific to the RTC device.
labels:
- name: item
desc: ""
manually_edit: false
fixed: true
values:
- value: error_count
desc: ""
- value: missed_read_count
desc: ""
- value: missed_write_count
desc: ""
- name: sandbox_id
desc: ""
manually_edit: false
fixed: false
values: []
since: 2.0.0
- name: kata_firecracker_seccomp
type: GAUGE
unit: ""
help: Metrics for the seccomp filtering.
labels:
- name: item
desc: ""
manually_edit: false
fixed: true
values:
- value: num_faults
desc: ""
- name: sandbox_id
desc: ""
manually_edit: false
fixed: false
values: []
since: 2.0.0
- name: kata_firecracker_signals
type: GAUGE
unit: ""
help: Metrics related to signals.
labels:
- name: item
desc: ""
manually_edit: false
fixed: true
values:
- value: sigbus
desc: ""
- value: sigsegv
desc: ""
- name: sandbox_id
desc: ""
manually_edit: false
fixed: false
values: []
since: 2.0.0
- name: kata_firecracker_uart
type: GAUGE
unit: ""
help: Metrics specific to the UART device.
labels:
- name: item
desc: ""
manually_edit: false
fixed: true
values:
- value: error_count
desc: ""
- value: flush_count
desc: ""
- value: missed_read_count
desc: ""
- value: missed_write_count
desc: ""
- value: read_count
desc: ""
- value: write_count
desc: ""
- name: sandbox_id
desc: ""
manually_edit: false
fixed: false
values: []
since: 2.0.0
- name: kata_firecracker_vcpu
type: GAUGE
unit: ""
help: Metrics specific to VCPUs' mode of functioning.
labels:
- name: item
desc: ""
manually_edit: false
fixed: true
values:
- value: exit_io_in
desc: ""
- value: exit_io_out
desc: ""
- value: exit_mmio_read
desc: ""
- value: exit_mmio_write
desc: ""
- value: failures
desc: ""
- value: filter_cpuid
desc: ""
- name: sandbox_id
desc: ""
manually_edit: false
fixed: false
values: []
since: 2.0.0
- name: kata_firecracker_vmm
type: GAUGE
unit: ""
help: Metrics specific to the machine manager as a whole.
labels:
- name: item
desc: ""
manually_edit: false
fixed: true
values:
- value: device_events
desc: ""
- value: panic_count
desc: ""
- name: sandbox_id
desc: ""
manually_edit: false
fixed: false
values: []
since: 2.0.0
- name: kata_firecracker_vsock
type: GAUGE
unit: ""
help: Vsock-related metrics.
labels:
- name: item
desc: ""
manually_edit: false
fixed: true
values:
- value: activate_fails
desc: ""
- value: cfg_fails
desc: ""
- value: conn_event_fails
desc: ""
- value: conns_added
desc: ""
- value: conns_killed
desc: ""
- value: conns_removed
desc: ""
- value: ev_queue_event_fails
desc: ""
- value: killq_resync
desc: ""
- value: muxer_event_fails
desc: ""
- value: rx_bytes_count
desc: ""
- value: rx_packets_count
desc: ""
- value: rx_queue_event_count
desc: ""
- value: rx_queue_event_fails
desc: ""
- value: rx_read_fails
desc: ""
- value: tx_bytes_count
desc: ""
- value: tx_flush_fails
desc: ""
- value: tx_packets_count
desc: ""
- value: tx_queue_event_count
desc: ""
- value: tx_queue_event_fails
desc: ""
- value: tx_write_fails
desc: ""
- name: sandbox_id
desc: ""
manually_edit: false
fixed: false
values: []
since: 2.0.0
- prefix: kata_guest
title: Kata guest OS metrics
desc: Guest OS's metrics in hypervisor.


@@ -9,6 +9,7 @@
* [Metrics list](#metrics-list)
* [Metric types](#metric-types)
* [Kata agent metrics](#kata-agent-metrics)
* [Firecracker metrics](#firecracker-metrics)
* [Kata guest OS metrics](#kata-guest-os-metrics)
* [Hypervisor metrics](#hypervisor-metrics)
* [Kata monitor metrics](#kata-monitor-metrics)
@@ -152,6 +153,7 @@ Metrics is categorized by component where metrics are collected from and for.
* [Metric types](#metric-types)
* [Kata agent metrics](#kata-agent-metrics)
* [Firecracker metrics](#firecracker-metrics)
* [Kata guest OS metrics](#kata-guest-os-metrics)
* [Hypervisor metrics](#hypervisor-metrics)
* [Kata monitor metrics](#kata-monitor-metrics)
@@ -198,6 +200,30 @@ Agent's metrics contains metrics about agent process.
| `kata_agent_total_time`: <br> Agent process total time | `GAUGE` | | <ul><li>`sandbox_id`</li></ul> | 2.0.0 |
| `kata_agent_total_vm`: <br> Agent process total `vm` size | `GAUGE` | | <ul><li>`sandbox_id`</li></ul> | 2.0.0 |
### Firecracker metrics
Metrics for Firecracker vmm.
| Metric name | Type | Units | Labels | Introduced in Kata version |
|---|---|---|---|---|
| `kata_firecracker_api_server`: <br> Metrics related to the internal API server. | `GAUGE` | | <ul><li>`item`<ul><li>`process_startup_time_cpu_us`</li><li>`process_startup_time_us`</li><li>`sync_response_fails`</li><li>`sync_vmm_send_timeout_count`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 |
| `kata_firecracker_block`: <br> Block Device associated metrics. | `GAUGE` | | <ul><li>`item`<ul><li>`activate_fails`</li><li>`cfg_fails`</li><li>`event_fails`</li><li>`execute_fails`</li><li>`flush_count`</li><li>`invalid_reqs_count`</li><li>`no_avail_buffer`</li><li>`queue_event_count`</li><li>`rate_limiter_event_count`</li><li>`rate_limiter_throttled_events`</li><li>`read_bytes`</li><li>`read_count`</li><li>`update_count`</li><li>`update_fails`</li><li>`write_bytes`</li><li>`write_count`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 |
| `kata_firecracker_get_api_requests`: <br> Metrics specific to GET API Requests for counting user triggered actions and/or failures. | `GAUGE` | | <ul><li>`item`<ul><li>`instance_info_count`</li><li>`instance_info_fails`</li><li>`machine_cfg_count`</li><li>`machine_cfg_fails`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 |
| `kata_firecracker_i8042`: <br> Metrics specific to the i8042 device. | `GAUGE` | | <ul><li>`item`<ul><li>`error_count`</li><li>`missed_read_count`</li><li>`missed_write_count`</li><li>`read_count`</li><li>`reset_count`</li><li>`write_count`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 |
| `kata_firecracker_latencies_us`: <br> Performance metrics related for the moment only to snapshots. | `GAUGE` | | <ul><li>`item`<ul><li>`diff_create_snapshot`</li><li>`full_create_snapshot`</li><li>`load_snapshot`</li><li>`pause_vm`</li><li>`resume_vm`</li><li>`vmm_diff_create_snapshot`</li><li>`vmm_full_create_snapshot`</li><li>`vmm_load_snapshot`</li><li>`vmm_pause_vm`</li><li>`vmm_resume_vm`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 |
| `kata_firecracker_logger`: <br> Metrics for the logging subsystem. | `GAUGE` | | <ul><li>`item`<ul><li>`log_fails`</li><li>`metrics_fails`</li><li>`missed_log_count`</li><li>`missed_metrics_count`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 |
| `kata_firecracker_mmds`: <br> Metrics for the MMDS functionality. | `GAUGE` | | <ul><li>`item`<ul><li>`connections_created`</li><li>`connections_destroyed`</li><li>`rx_accepted`</li><li>`rx_accepted_err`</li><li>`rx_accepted_unusual`</li><li>`rx_bad_eth`</li><li>`rx_count`</li><li>`tx_bytes`</li><li>`tx_count`</li><li>`tx_errors`</li><li>`tx_frames`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 |
| `kata_firecracker_net`: <br> Network-related metrics. | `GAUGE` | | <ul><li>`item`<ul><li>`activate_fails`</li><li>`cfg_fails`</li><li>`event_fails`</li><li>`mac_address_updates`</li><li>`no_rx_avail_buffer`</li><li>`no_tx_avail_buffer`</li><li>`rx_bytes_count`</li><li>`rx_count`</li><li>`rx_event_rate_limiter_count`</li><li>`rx_fails`</li><li>`rx_packets_count`</li><li>`rx_partial_writes`</li><li>`rx_queue_event_count`</li><li>`rx_rate_limiter_throttled`</li><li>`rx_tap_event_count`</li><li>`tap_read_fails`</li><li>`tap_write_fails`</li><li>`tx_bytes_count`</li><li>`tx_count`</li><li>`tx_fails`</li><li>`tx_malformed_frames`</li><li>`tx_packets_count`</li><li>`tx_partial_reads`</li><li>`tx_queue_event_count`</li><li>`tx_rate_limiter_event_count`</li><li>`tx_rate_limiter_throttled`</li><li>`tx_spoofed_mac_count`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 |
| `kata_firecracker_patch_api_requests`: <br> Metrics specific to PATCH API Requests for counting user triggered actions and/or failures. | `GAUGE` | | <ul><li>`item`<ul><li>`drive_count`</li><li>`drive_fails`</li><li>`machine_cfg_count`</li><li>`machine_cfg_fails`</li><li>`network_count`</li><li>`network_fails`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 |
| `kata_firecracker_put_api_requests`: <br> Metrics specific to PUT API Requests for counting user triggered actions and/or failures. | `GAUGE` | | <ul><li>`item`<ul><li>`actions_count`</li><li>`actions_fails`</li><li>`boot_source_count`</li><li>`boot_source_fails`</li><li>`drive_count`</li><li>`drive_fails`</li><li>`logger_count`</li><li>`logger_fails`</li><li>`machine_cfg_count`</li><li>`machine_cfg_fails`</li><li>`metrics_count`</li><li>`metrics_fails`</li><li>`network_count`</li><li>`network_fails`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 |
| `kata_firecracker_rtc`: <br> Metrics specific to the RTC device. | `GAUGE` | | <ul><li>`item`<ul><li>`error_count`</li><li>`missed_read_count`</li><li>`missed_write_count`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 |
| `kata_firecracker_seccomp`: <br> Metrics for the seccomp filtering. | `GAUGE` | | <ul><li>`item`<ul><li>`num_faults`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 |
| `kata_firecracker_signals`: <br> Metrics related to signals. | `GAUGE` | | <ul><li>`item`<ul><li>`sigbus`</li><li>`sigsegv`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 |
| `kata_firecracker_uart`: <br> Metrics specific to the UART device. | `GAUGE` | | <ul><li>`item`<ul><li>`error_count`</li><li>`flush_count`</li><li>`missed_read_count`</li><li>`missed_write_count`</li><li>`read_count`</li><li>`write_count`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 |
| `kata_firecracker_vcpu`: <br> Metrics specific to VCPUs' mode of functioning. | `GAUGE` | | <ul><li>`item`<ul><li>`exit_io_in`</li><li>`exit_io_out`</li><li>`exit_mmio_read`</li><li>`exit_mmio_write`</li><li>`failures`</li><li>`filter_cpuid`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 |
| `kata_firecracker_vmm`: <br> Metrics specific to the machine manager as a whole. | `GAUGE` | | <ul><li>`item`<ul><li>`device_events`</li><li>`panic_count`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 |
| `kata_firecracker_vsock`: <br> Vsock-related metrics. | `GAUGE` | | <ul><li>`item`<ul><li>`activate_fails`</li><li>`cfg_fails`</li><li>`conn_event_fails`</li><li>`conns_added`</li><li>`conns_killed`</li><li>`conns_removed`</li><li>`ev_queue_event_fails`</li><li>`killq_resync`</li><li>`muxer_event_fails`</li><li>`rx_bytes_count`</li><li>`rx_packets_count`</li><li>`rx_queue_event_count`</li><li>`rx_queue_event_fails`</li><li>`rx_read_fails`</li><li>`tx_bytes_count`</li><li>`tx_flush_fails`</li><li>`tx_packets_count`</li><li>`tx_queue_event_count`</li><li>`tx_queue_event_fails`</li><li>`tx_write_fails`</li></ul></li><li>`sandbox_id`</li></ul> | 2.0.0 |
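In the Prometheus exposition format, each counter above appears as one gauge sample per `item` value, tagged with the sandbox it came from. For example (the values and the `sandbox_id` are invented for illustration):

```shell
# Illustrative scrape output for kata_firecracker_block; the numbers and
# the sandbox_id value are made up for the example.
metrics='kata_firecracker_block{item="read_count",sandbox_id="sb0"} 42
kata_firecracker_block{item="write_count",sandbox_id="sb0"} 17'
echo "$metrics"
```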
### Kata guest OS metrics
Guest OS's metrics in hypervisor.


@@ -193,13 +193,16 @@ From Containerd v1.2.4 and Kata v1.6.0, there is a new runtime option supported,
```toml
[plugins.cri.containerd.runtimes.kata]
runtime_type = "io.containerd.kata.v2"
privileged_without_host_devices = true
[plugins.cri.containerd.runtimes.kata.options]
ConfigPath = "/etc/kata-containers/config.toml"
```
`privileged_without_host_devices` tells containerd that a privileged Kata container should not have direct access to all host devices. If unset, containerd passes all host devices to the Kata container, which may cause security issues.
The `ConfigPath` option is optional. If you do not specify it, shimv2 first tries to get the configuration file path from the environment variable `KATA_CONF_FILE`. If neither is set, shimv2 uses the default Kata configuration file paths (`/etc/kata-containers/configuration.toml` and `/usr/share/defaults/kata-containers/configuration.toml`).
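The lookup order just described can be sketched as a small shell function. This is illustrative only, not part of any Kata tool; the paths are the defaults named above.

```shell
# Sketch of shimv2's configuration lookup order (illustrative only):
# explicit ConfigPath, then KATA_CONF_FILE, then the default paths.
kata_config_path() {
    config_path="$1"   # would come from containerd's ConfigPath option
    if [ -n "$config_path" ]; then echo "$config_path"; return; fi
    if [ -n "$KATA_CONF_FILE" ]; then echo "$KATA_CONF_FILE"; return; fi
    for f in /etc/kata-containers/configuration.toml \
             /usr/share/defaults/kata-containers/configuration.toml; do
        if [ -f "$f" ]; then echo "$f"; return; fi
    done
}
```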
If you use Containerd older than v1.2.4 or a version of Kata older than v1.6.0 and also want to specify a configuration file, you can use the following workaround, since shimv2 accepts the environment variable `KATA_CONF_FILE` for the configuration file path. Then, you can create a
shell script with the following:
```bash


@@ -29,7 +29,7 @@ to launch Kata Containers. For the previous version of Kata Containers, the Pods
> **Note:** For information about the supported versions of these components,
> see the Kata Containers
> [`versions.yaml`](https://github.com/kata-containers/runtime/blob/master/versions.yaml)
> [`versions.yaml`](../../versions.yaml)
> file.
## Install and configure containerd


@@ -1,16 +1,16 @@
# Kata Containers installation user guides
* [Prerequisites](#prerequisites)
* [Packaged installation methods](#packaged-installation-methods)
* [Supported Distributions](#supported-distributions)
* [Official packages](#official-packages)
* [Automatic Installation](#automatic-installation)
* [Snap Installation](#snap-installation)
* [Scripted Installation](#scripted-installation)
* [Manual Installation](#manual-installation)
* [Build from source installation](#build-from-source-installation)
* [Installing on a Cloud Service Platform](#installing-on-a-cloud-service-platform)
* [Further information](#further-information)
- [Kata Containers installation user guides](#kata-containers-installation-user-guides)
- [Prerequisites](#prerequisites)
- [Packaged installation methods](#packaged-installation-methods)
- [Official packages](#official-packages)
- [Automatic Installation](#automatic-installation)
- [Snap Installation](#snap-installation)
- [Scripted Installation](#scripted-installation)
- [Manual Installation](#manual-installation)
- [Build from source installation](#build-from-source-installation)
- [Installing on a Cloud Service Platform](#installing-on-a-cloud-service-platform)
- [Further information](#further-information)
The following is an overview of the different installation methods available. All of these methods equally result
in a system configured to run Kata Containers.
@@ -29,33 +29,22 @@ to see if your system is capable of running Kata Containers.
| Installation method | Description | Distributions supported |
|------------------------------------------------------|-----------------------------------------------------------------------------------------|--------------------------------------|
| [Automatic](#automatic-installation) |Run a single command to install a full system |[see table](#supported-distributions) |
| [Automatic](#automatic-installation) |Run a single command to install a full system | |
| [Using snap](#snap-installation) |Easy to install and automatic updates |any distro that supports snapd |
| [Using official distro packages](#official-packages) |Kata packages provided by Linux distributions official repositories |[see table](#supported-distributions) |
| [Scripted](#scripted-installation) |Generates an installation script which will result in a working system when executed |[see table](#supported-distributions) |
| [Manual](#manual-installation) |Allows the user to read a brief document and execute the specified commands step-by-step |[see table](#supported-distributions) |
| [Using official distro packages](#official-packages) |Kata packages provided by Linux distributions official repositories | |
| [Scripted](#scripted-installation) |Generates an installation script which will result in a working system when executed | |
| [Manual](#manual-installation) |Allows the user to read a brief document and execute the specified commands step-by-step | |
### Supported Distributions
Kata is packaged by the Kata community for:
|Distribution (link to installation guide) | Versions |
|-----------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------|
|[CentOS](centos-installation-guide.md) | 7 |
|[Debian](debian-installation-guide.md) | 9, 10 |
|[Fedora](fedora-installation-guide.md) | 28, 29, 30 |
|[openSUSE](opensuse-installation-guide.md) | [Leap](opensuse-leap-installation-guide.md) (15, 15.1)<br>[Tumbleweed](opensuse-tumbleweed-installation-guide.md) |
|[Red Hat Enterprise Linux (RHEL)](rhel-installation-guide.md) | 7 |
|[SUSE Linux Enterprise Server (SLES)](sles-installation-guide.md)| SLES 12 SP3 |
|[Ubuntu](ubuntu-installation-guide.md) | 16.04, 18.04 |
#### Official packages
### Official packages
Kata packages are provided by official distribution repositories for:
|Distribution (link to packages) | Versions |
|-----------------------------------------------------------------|------------|
|[openSUSE](https://software.opensuse.org/package/katacontainers) | Tumbleweed |
| Distribution (link to packages) | Versions | Contacts |
| -------------------------------------------------------- | ------------------------------------------------------------------------------ | -------- |
| [CentOS](centos-installation-guide.md) | 8 | |
| [Fedora](fedora-installation-guide.md) | 32, Rawhide | |
| [SUSE Linux Enterprise (SLE)](sle-installation-guide.md) | SLE 15 SP1, 15 SP2 | |
| [openSUSE](opensuse-installation-guide.md) | [Leap 15.1](opensuse-leap-15.1-installation-guide.md)<br>Leap 15.2, Tumbleweed | |
### Automatic Installation
@@ -72,11 +61,11 @@ Kata packages are provided by official distribution repositories for:
[Use `kata-doc-to-script`](installing-with-kata-doc-to-script.md) to generate installation scripts that can be reviewed before they are executed.
### Manual Installation
Manual installation instructions are available for [these distributions](#supported-distributions) and document how to:
Manual installation instructions are available for [these distributions](#packaged-installation-methods) and document how to:
1. Add the Kata Containers repository to your distro package manager, and import the packages signing key.
2. Install the Kata Containers packages.
3. Install a supported container manager.
4. Configure the container manager to use `kata-runtime` as the default OCI runtime. Or, for Kata Containers 1.5.0 or above, configure the
4. Configure the container manager to use Kata Containers as the default OCI runtime. Or, for Kata Containers 1.5.0 or above, configure the
`io.containerd.kata.v2` to be the runtime shim (see [containerd runtime v2 (shim API)](https://github.com/containerd/containerd/tree/master/runtime/v2)
and [How to use Kata Containers and CRI (containerd plugin) with Kubernetes](../how-to/how-to-use-k8s-with-cri-containerd-and-kata.md)).


@@ -15,4 +15,4 @@ Create a new virtual machine with:
## Set up with distribution specific quick start
Follow distribution specific [install guides](../install/README.md#supported-distributions).
Follow distribution specific [install guides](../install/README.md#packaged-installation-methods).


@@ -4,14 +4,25 @@
```bash
$ source /etc/os-release
$ sudo yum -y install yum-utils
$ ARCH=$(arch)
$ BRANCH="${BRANCH:-master}"
$ sudo -E yum-config-manager --add-repo "http://download.opensuse.org/repositories/home:/katacontainers:/releases:/${ARCH}:/${BRANCH}/CentOS_${VERSION_ID}/home:katacontainers:releases:${ARCH}:${BRANCH}.repo"
$ sudo -E yum -y install kata-runtime kata-proxy kata-shim
$ cat <<EOF | sudo -E tee /etc/yum.repos.d/advanced-virt.repo
[advanced-virt]
name=Advanced Virtualization
baseurl=http://mirror.centos.org/\$contentdir/\$releasever/virt/\$basearch/advanced-virtualization
enabled=1
gpgcheck=1
skip_if_unavailable=1
EOF
$ cat <<EOF | sudo -E tee /etc/yum.repos.d/kata-containers.repo
[kata-containers]
name=Kata Containers
baseurl=http://mirror.centos.org/\$contentdir/\$releasever/virt/\$basearch/kata-containers
enabled=1
gpgcheck=1
skip_if_unavailable=1
EOF
$ sudo -E dnf module disable -y virt:rhel
$ sudo -E dnf install -y kata-runtime
```
2. Decide which container manager to use and select the corresponding link that follows:
- [Docker](docker/centos-docker-install.md)
- [Kubernetes](../Developer-Guide.md#run-kata-containers-with-kubernetes)


@@ -1,22 +0,0 @@
# Install Kata Containers on Debian
1. Install the Kata Containers components with the following commands:
```bash
$ export DEBIAN_FRONTEND=noninteractive
$ ARCH=$(arch)
$ BRANCH="${BRANCH:-master}"
$ source /etc/os-release
$ [ "$ID" = debian ] && [ -z "$VERSION_ID" ] && echo >&2 "ERROR: Debian unstable not supported.
You can try stable packages here:
http://download.opensuse.org/repositories/home:/katacontainers:/releases:/${ARCH}:/${BRANCH}" && exit 1
$ sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/katacontainers:/releases:/${ARCH}:/${BRANCH}/Debian_${VERSION_ID}/ /' > /etc/apt/sources.list.d/kata-containers.list"
$ curl -sL http://download.opensuse.org/repositories/home:/katacontainers:/releases:/${ARCH}:/${BRANCH}/Debian_${VERSION_ID}/Release.key | sudo apt-key add -
$ sudo -E apt-get update
$ sudo -E apt-get -y install kata-runtime kata-proxy kata-shim
```
2. Decide which container manager to use and select the corresponding link that follows:
- [Docker](docker/debian-docker-install.md)
- [Kubernetes](../Developer-Guide.md#run-kata-containers-with-kubernetes)


@@ -1,75 +0,0 @@
# Install Docker for Kata Containers on CentOS
> **Note:**
>
> - This guide assumes you have
> [already installed the Kata Containers packages](../centos-installation-guide.md).
1. Install the latest version of Docker with the following commands:
> **Notes:**
>
> - This step is only required if Docker is not installed on the system.
> - Docker version 18.09 [removed devicemapper support](https://github.com/kata-containers/documentation/issues/373).
> If you wish to use a block based backend, see the options listed on https://github.com/kata-containers/documentation/issues/407.
```bash
$ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ sudo yum -y install docker-ce
```
For more information on installing Docker please refer to the
[Docker Guide](https://docs.docker.com/engine/installation/linux/centos).
2. Configure Docker to use Kata Containers by default with **ONE** of the following methods:
1. systemd (this is the default and is applied automatically if you select the
[automatic installation](../../install/README.md#automatic-installation) option)
```bash
$ sudo mkdir -p /etc/systemd/system/docker.service.d/
$ cat <<EOF | sudo tee /etc/systemd/system/docker.service.d/kata-containers.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -D --add-runtime kata-runtime=/usr/bin/kata-runtime --default-runtime=kata-runtime
EOF
```
2. Docker `daemon.json`
Create the Docker configuration folder.
```bash
$ sudo mkdir -p /etc/docker
```
Add the following definitions to `/etc/docker/daemon.json`:
```json
{
"default-runtime": "kata-runtime",
"runtimes": {
"kata-runtime": {
"path": "/usr/bin/kata-runtime"
}
}
}
```
3. Restart the Docker systemd service with the following commands:
```bash
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
```
4. Run Kata Containers
You are now ready to run Kata Containers:
```bash
$ sudo docker run busybox uname -a
```
The previous command shows details of the kernel version running inside the
container, which is different from the host kernel version.

View File

@@ -1,103 +0,0 @@
# Install Docker for Kata Containers on Debian
> **Note:**
>
> - This guide assumes you have
> [already installed the Kata Containers packages](../debian-installation-guide.md).
> - This guide allows for installation with `systemd` or `sysVinit` init systems.
1. Install the latest version of Docker with the following commands:
> **Notes:**
>
> - This step is only required if Docker is not installed on the system.
> - Docker version 18.09 [removed devicemapper support](https://github.com/kata-containers/documentation/issues/373).
> If you wish to use a block based backend, see the options listed on https://github.com/kata-containers/documentation/issues/407.
```bash
$ sudo apt-get -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common
$ curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable"
$ sudo apt-get update
$ sudo -E apt-get -y install docker-ce
```
For more information on installing Docker please refer to the
[Docker Guide](https://docs.docker.com/engine/installation/linux/debian).
2. Configure Docker to use Kata Containers by default with **ONE** of the following methods:
a. `sysVinit`
- with `sysVinit`, the Docker configuration is stored in `/etc/default/docker`; edit the options similar to the following:
```sh
$ sudo sh -c "echo '# specify docker runtime for kata-containers
DOCKER_OPTS=\"-D --add-runtime kata-runtime=/usr/bin/kata-runtime --default-runtime=kata-runtime\"' >> /etc/default/docker"
```
b. systemd (this is the default and is applied automatically if you select the
[automatic installation](../../install/README.md#automatic-installation) option)
```bash
$ sudo mkdir -p /etc/systemd/system/docker.service.d/
$ cat <<EOF | sudo tee /etc/systemd/system/docker.service.d/kata-containers.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -D --add-runtime kata-runtime=/usr/bin/kata-runtime --default-runtime=kata-runtime
EOF
```
c. Docker `daemon.json`
Create the Docker configuration folder.
```bash
$ sudo mkdir -p /etc/docker
```
Add the following definitions to `/etc/docker/daemon.json`:
```json
{
"default-runtime": "kata-runtime",
"runtimes": {
"kata-runtime": {
"path": "/usr/bin/kata-runtime"
}
}
}
```
3. Restart the Docker service with one of the following (depending on your init system):
a. `sysVinit`
```sh
$ sudo /etc/init.d/docker stop
$ sudo /etc/init.d/docker start
```
To watch for errors:
```sh
$ tail -f /var/log/docker.log
```
b. systemd
```bash
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
```
4. Run Kata Containers
You are now ready to run Kata Containers:
```bash
$ sudo docker run busybox uname -a
```
The previous command shows details of the kernel version running inside the
container, which is different from the host kernel version.


@@ -1,77 +0,0 @@
# Install Docker for Kata Containers on Fedora
> **Note:**
>
> - This guide assumes you have
> [already installed the Kata Containers packages](../fedora-installation-guide.md).
1. Install the latest version of Docker with the following commands:
> **Notes:**
>
> - This step is only required if Docker is not installed on the system.
> - Docker version 18.09 [removed devicemapper support](https://github.com/kata-containers/documentation/issues/373).
> If you wish to use a block based backend, see the options listed on https://github.com/kata-containers/documentation/issues/407.
```bash
$ source /etc/os-release
$ sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
$ sudo dnf makecache
$ sudo dnf -y install docker-ce
```
For more information on installing Docker please refer to the
[Docker Guide](https://docs.docker.com/engine/installation/linux/fedora).
2. Configure Docker to use Kata Containers by default with **ONE** of the following methods:
1. systemd (this is the default and is applied automatically if you select the
[automatic installation](../../install/README.md#automatic-installation) option)
```bash
$ sudo mkdir -p /etc/systemd/system/docker.service.d/
$ cat <<EOF | sudo tee /etc/systemd/system/docker.service.d/kata-containers.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -D --add-runtime kata-runtime=/usr/bin/kata-runtime --default-runtime=kata-runtime
EOF
```
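The drop-in above contains an empty `ExecStart=` line on purpose: for a non-oneshot service, systemd rejects a second `ExecStart=` unless the first one clears the value inherited from `docker.service`. The same heredoc pattern can be tried safely against a scratch directory (a sketch; the real target directory is `/etc/systemd/system/docker.service.d/`):

```bash
# Create the drop-in in a scratch directory rather than the real
# /etc/systemd/system/docker.service.d/ path.
dropin_dir=$(mktemp -d)
cat <<EOF > "${dropin_dir}/kata-containers.conf"
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -D --add-runtime kata-runtime=/usr/bin/kata-runtime --default-runtime=kata-runtime
EOF
# The empty ExecStart= clears the inherited command; both lines are needed.
grep -c '^ExecStart' "${dropin_dir}/kata-containers.conf"
```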
2. Docker `daemon.json`
Create the Docker configuration folder:
```
$ sudo mkdir -p /etc/docker
```
Add the following definitions to `/etc/docker/daemon.json`:
```json
{
"default-runtime": "kata-runtime",
"runtimes": {
"kata-runtime": {
"path": "/usr/bin/kata-runtime"
}
}
}
```
3. Restart the Docker systemd service with the following commands:
```bash
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
```
4. Run Kata Containers
You are now ready to run Kata Containers:
```bash
$ sudo docker run busybox uname -a
```
The previous command shows details of the kernel version running inside the
container, which is different to the host kernel version.


@@ -1,75 +0,0 @@
# Install Docker for Kata Containers on openSUSE
> **Note:**
>
> - This guide assumes you have
> [already installed the Kata Containers packages](../opensuse-installation-guide.md).
1. Install the latest version of Docker with the following commands:
> **Notes:**
>
> - This step is only required if Docker is not installed on the system.
> - Docker version 18.09 [removed devicemapper support](https://github.com/kata-containers/documentation/issues/373).
> If you wish to use a block based backend, see the options listed on https://github.com/kata-containers/documentation/issues/407.
```bash
$ sudo zypper -n install docker
```
For more information on installing Docker, please refer to the
[Docker Guide](https://software.opensuse.org/package/docker).
2. Configure Docker to use Kata Containers by default with **ONE** of the following methods:
1. Specify the runtime options in `/etc/sysconfig/docker` (this is the default and is applied automatically if you select the
[automatic installation](../../install/README.md#automatic-installation) option)
```bash
$ DOCKER_SYSCONFIG=/etc/sysconfig/docker
# Add kata-runtime to the list of available runtimes, if not already listed
$ grep -qE "^ *DOCKER_OPTS=.+--add-runtime[= ] *kata-runtime" $DOCKER_SYSCONFIG || sudo -E sed -i -E "s|^( *DOCKER_OPTS=.+)\" *$|\1 --add-runtime kata-runtime=/usr/bin/kata-runtime\"|g" $DOCKER_SYSCONFIG
# If a current default runtime is specified, overwrite it with kata-runtime
$ sudo -E sed -i -E "s|^( *DOCKER_OPTS=.+--default-runtime[= ] *)[^ \"]+(.*\"$)|\1kata-runtime\2|g" $DOCKER_SYSCONFIG
# Add kata-runtime as default runtime, if no default runtime is specified
$ grep -qE "^ *DOCKER_OPTS=.+--default-runtime" $DOCKER_SYSCONFIG || sudo -E sed -i -E "s|^( *DOCKER_OPTS=.+)(\"$)|\1 --default-runtime=kata-runtime\2|g" $DOCKER_SYSCONFIG
```
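The `sed` expressions above are idempotent: the `grep -q … ||` guard skips the edit when the option is already present. Their effect can be previewed on a sample file (a sketch with a made-up `DOCKER_OPTS` line; the real target is `/etc/sysconfig/docker`):

```bash
# Apply two of the guide's substitutions to a sample sysconfig file.
sample=$(mktemp)
echo 'DOCKER_OPTS="--log-level=warn"' > "$sample"
# Append --add-runtime inside the closing quote, if not already listed.
grep -qE '^ *DOCKER_OPTS=.+--add-runtime[= ] *kata-runtime' "$sample" || \
  sed -i -E "s|^( *DOCKER_OPTS=.+)\" *$|\1 --add-runtime kata-runtime=/usr/bin/kata-runtime\"|g" "$sample"
# Add a default runtime, if no default runtime is specified.
grep -qE '^ *DOCKER_OPTS=.+--default-runtime' "$sample" || \
  sed -i -E "s|^( *DOCKER_OPTS=.+)(\"$)|\1 --default-runtime=kata-runtime\2|g" "$sample"
cat "$sample"
```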
2. Docker `daemon.json`
Create the Docker configuration folder:
```
$ sudo mkdir -p /etc/docker
```
Add the following definitions to `/etc/docker/daemon.json`:
```json
{
"default-runtime": "kata-runtime",
"runtimes": {
"kata-runtime": {
"path": "/usr/bin/kata-runtime"
}
}
}
```
3. Restart the Docker systemd service with the following commands:
```bash
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
```
4. Run Kata Containers
You are now ready to run Kata Containers:
```bash
$ sudo docker run busybox uname -a
```
The previous command shows details of the kernel version running inside the
container, which is different to the host kernel version.


@@ -1,14 +0,0 @@
# Install Docker for Kata Containers on openSUSE Leap
Follow the instructions in the generic [openSUSE Docker install guide](opensuse-docker-install.md).
<!--
You can ignore the content of this comment.
(test code run by test-install-docs.sh to validate the code blocks in this document)
```bash
$ echo "NOTE: this document is just a link to the generic openSUSE install guide located at:
https://raw.githubusercontent.com/kata-containers/documentation/master/install/docker/opensuse-docker-install.md
Please download this file and run kata-doc-to-script.sh again."
```
-->


@@ -1,14 +0,0 @@
# Install Docker for Kata Containers on openSUSE Tumbleweed
Follow the instructions in the generic [openSUSE Docker install guide](opensuse-docker-install.md).
<!--
You can ignore the content of this comment.
(test code run by test-install-docs.sh to validate the code blocks in this document)
```bash
$ echo "NOTE: this document is just a link to the generic openSUSE install guide located at:
https://raw.githubusercontent.com/kata-containers/documentation/master/install/docker/opensuse-docker-install.md
Please download this file and run kata-doc-to-script.sh again."
```
-->


@@ -1,76 +0,0 @@
# Install Docker for Kata Containers on RHEL
> **Note:**
>
> - This guide assumes you have
> [already installed the Kata Containers packages](../rhel-installation-guide.md).
1. Install the latest version of Docker with the following commands:
> **Notes:**
>
> - This step is only required if Docker is not installed on the system.
> - Docker version 18.09 [removed devicemapper support](https://github.com/kata-containers/documentation/issues/373).
> If you wish to use a block based backend, see the options listed on https://github.com/kata-containers/documentation/issues/407.
```bash
$ export rhel_devtoolset_version="7"
$ sudo subscription-manager repos --enable=rhel-${rhel_devtoolset_version}-server-extras-rpms
$ sudo yum -y install docker && systemctl enable --now docker
```
For more information on installing Docker, please refer to the
[Docker Guide](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html-single/getting_started_with_containers/#getting_docker_in_rhel_7).
2. Configure Docker to use Kata Containers by default with **ONE** of the following methods:
1. systemd (this is the default and is applied automatically if you select the
[automatic installation](../../install/README.md#automatic-installation) option)
```bash
$ sudo mkdir -p /etc/systemd/system/docker.service.d/
$ cat <<EOF | sudo tee /etc/systemd/system/docker.service.d/kata-containers.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -D --add-runtime kata-runtime=/usr/bin/kata-runtime --default-runtime=kata-runtime
EOF
```
2. Docker `daemon.json`
Create the Docker configuration folder:
```
$ sudo mkdir -p /etc/docker
```
Add the following definitions to `/etc/docker/daemon.json`:
```json
{
"default-runtime": "kata-runtime",
"runtimes": {
"kata-runtime": {
"path": "/usr/bin/kata-runtime"
}
}
}
```
3. Restart the Docker systemd service with the following commands:
```bash
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
```
4. Run Kata Containers
You are now ready to run Kata Containers:
```bash
$ sudo docker run busybox uname -a
```
The previous command shows details of the kernel version running inside the
container, which is different to the host kernel version.


@@ -1,74 +0,0 @@
# Install Docker for Kata Containers on SLES
> **Note:**
>
> - This guide assumes you have
> [already installed the Kata Containers packages](../sles-installation-guide.md).
1. Install the latest version of Docker with the following commands:
> **Notes:**
>
> - This step is only required if Docker is not installed on the system.
> - Docker version 18.09 [removed devicemapper support](https://github.com/kata-containers/documentation/issues/373).
> If you wish to use a block based backend, see the options listed on https://github.com/kata-containers/documentation/issues/407.
```bash
$ sudo zypper -n install docker
```
For more information on installing Docker, please refer to the
[Docker Guide](https://www.suse.com/documentation/sles-12/singlehtml/book_sles_docker/book_sles_docker.html).
2. Configure Docker to use Kata Containers by default with **ONE** of the following methods:
1. systemd (this is the default and is applied automatically if you select the
[automatic installation](../../install/README.md#automatic-installation) option)
```bash
$ sudo mkdir -p /etc/systemd/system/docker.service.d/
$ cat <<EOF | sudo tee /etc/systemd/system/docker.service.d/kata-containers.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -D --add-runtime kata-runtime=/usr/bin/kata-runtime --default-runtime=kata-runtime
EOF
```
2. Docker `daemon.json`
Create the Docker configuration folder:
```
$ sudo mkdir -p /etc/docker
```
Add the following definitions to `/etc/docker/daemon.json`:
```json
{
"default-runtime": "kata-runtime",
"runtimes": {
"kata-runtime": {
"path": "/usr/bin/kata-runtime"
}
}
}
```
3. Restart the Docker systemd service with the following commands:
```bash
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
```
4. Run Kata Containers
You are now ready to run Kata Containers:
```bash
$ sudo docker run busybox uname -a
```
The previous command shows details of the kernel version running inside the
container, which is different to the host kernel version.


@@ -1,79 +0,0 @@
# Install Docker for Kata Containers on Ubuntu
> **Note:**
>
> - This guide assumes you have
> [already installed the Kata Containers packages](../ubuntu-installation-guide.md).
1. Install the latest version of Docker with the following commands:
> **Notes:**
>
> - This step is only required if Docker is not installed on the system.
> - Docker version 18.09 [removed devicemapper support](https://github.com/kata-containers/documentation/issues/373).
> If you wish to use a block based backend, see the options listed on https://github.com/kata-containers/documentation/issues/407.
```bash
$ sudo -E apt-get -y install apt-transport-https ca-certificates software-properties-common
$ curl -sL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ arch=$(dpkg --print-architecture)
$ sudo -E add-apt-repository "deb [arch=${arch}] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ sudo -E apt-get update
$ sudo -E apt-get -y install docker-ce
```
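The repository line passed to `add-apt-repository` above is composed from the machine architecture and the release codename. A sketch showing the composed line with example values (on a live system, `arch` comes from `dpkg --print-architecture` and the codename from `lsb_release -cs`):

```bash
# Example values; real ones come from dpkg --print-architecture
# and lsb_release -cs respectively.
arch=amd64
codename=bionic
echo "deb [arch=${arch}] https://download.docker.com/linux/ubuntu ${codename} stable"
```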
For more information on installing Docker, please refer to the
[Docker Guide](https://docs.docker.com/engine/installation/linux/ubuntu).
2. Configure Docker to use Kata Containers by default with **ONE** of the following methods:
1. systemd (this is the default and is applied automatically if you select the
[automatic installation](../../install/README.md#automatic-installation) option)
```bash
$ sudo mkdir -p /etc/systemd/system/docker.service.d/
$ cat <<EOF | sudo tee /etc/systemd/system/docker.service.d/kata-containers.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -D --add-runtime kata-runtime=/usr/bin/kata-runtime --default-runtime=kata-runtime
EOF
```
2. Docker `daemon.json`
Create the Docker configuration folder:
```
$ sudo mkdir -p /etc/docker
```
Add the following definitions to `/etc/docker/daemon.json`:
```json
{
"default-runtime": "kata-runtime",
"runtimes": {
"kata-runtime": {
"path": "/usr/bin/kata-runtime"
}
}
}
```
3. Restart the Docker systemd service with the following commands:
```bash
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
```
4. Run Kata Containers
You are now ready to run Kata Containers:
```bash
$ sudo docker run busybox uname -a
```
The previous command shows details of the kernel version running inside the
container, which is different to the host kernel version.


@@ -3,15 +3,8 @@
1. Install the Kata Containers components with the following commands:
```bash
$ source /etc/os-release
$ ARCH=$(arch)
$ BRANCH="${BRANCH:-master}"
$ sudo dnf -y install dnf-plugins-core
$ sudo -E dnf config-manager --add-repo "http://download.opensuse.org/repositories/home:/katacontainers:/releases:/${ARCH}:/${BRANCH}/Fedora_${VERSION_ID}/home:katacontainers:releases:${ARCH}:${BRANCH}.repo"
$ sudo -E dnf -y install kata-runtime kata-proxy kata-shim
$ sudo -E dnf -y install kata-runtime
```
2. Decide which container manager to use and select the corresponding link that follows:
- [Docker](docker/fedora-docker-install.md)
- [Kubernetes](../Developer-Guide.md#run-kata-containers-with-kubernetes)


@@ -5,7 +5,7 @@
* [Docker Installation and Setup](#docker-installation-and-setup)
## Introduction
Use [these installation instructions](README.md#supported-distributions) together with
Use [these installation instructions](README.md#packaged-installation-methods) together with
[`kata-doc-to-script`](https://github.com/kata-containers/tests/blob/master/.ci/kata-doc-to-script.sh)
to generate installation bash scripts.


@@ -6,7 +6,7 @@
* [Further Information](#further-information)
## Introduction
`kata-manager` automates the Kata Containers installation procedure documented for [these Linux distributions](README.md#supported-distributions).
`kata-manager` automates the Kata Containers installation procedure documented for [these Linux distributions](README.md#packaged-installation-methods).
> **Note**:
> - `kata-manager` requires `curl` and `sudo` installed on your system.


@@ -3,21 +3,8 @@
1. Install the Kata Containers components with the following commands:
```bash
$ source /etc/os-release
$ DISTRO_REPO=$(sed "s/ /_/g" <<< "$NAME")
$ [ -n "$VERSION" ] && DISTRO_REPO+="_${VERSION}"
$ DISTRO_REPO=$(echo $DISTRO_REPO | tr -d ' ')
$ ARCH=$(arch)
$ BRANCH="${BRANCH:-master}"
$ REPO_ALIAS="kata-${BRANCH}"
$ PUBKEY="/tmp/rpm-signkey.pub"
$ curl -SsL -o "$PUBKEY" "https://raw.githubusercontent.com/kata-containers/tests/master/data/rpm-signkey.pub"
$ sudo -E rpm --import "$PUBKEY"
$ zypper lr "$REPO_ALIAS" && sudo -E zypper -n removerepo "$REPO_ALIAS"
$ sudo -E zypper addrepo --refresh "http://download.opensuse.org/repositories/home:/katacontainers:/releases:/${ARCH}:/${BRANCH}/${DISTRO_REPO}/" "$REPO_ALIAS"
$ sudo -E zypper -n install kata-runtime
$ sudo -E zypper -n install katacontainers
```
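The `DISTRO_REPO` value above is derived from `/etc/os-release` fields, so the transformation can be checked in isolation with sample values (a sketch; real values come from `source /etc/os-release`):

```bash
# Sample /etc/os-release values; real ones come from `source /etc/os-release`.
NAME="openSUSE Leap"
VERSION="15.1"
# Replace spaces with underscores and append the version, as the guide does.
DISTRO_REPO=$(sed "s/ /_/g" <<< "$NAME")
[ -n "$VERSION" ] && DISTRO_REPO+="_${VERSION}"
DISTRO_REPO=$(echo $DISTRO_REPO | tr -d ' ')
echo "$DISTRO_REPO"
```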
2. Decide which container manager to use and select the corresponding link that follows:
- [Docker](docker/opensuse-docker-install.md)
- [Kubernetes](../Developer-Guide.md#run-kata-containers-with-kubernetes)


@@ -0,0 +1,11 @@
# Install Kata Containers on openSUSE Leap 15.1
1. Install the Kata Containers components with the following commands:
```bash
$ sudo -E zypper addrepo --refresh "https://download.opensuse.org/repositories/devel:/kubic/openSUSE_Leap_15.1/devel:kubic.repo"
$ sudo -E zypper -n --gpg-auto-import-keys install katacontainers
```
2. Decide which container manager to use and select the corresponding link that follows:
- [Kubernetes](../Developer-Guide.md#run-kata-containers-with-kubernetes)


@@ -1,19 +0,0 @@
# Install Kata Containers on openSUSE Leap
1. Install Kata Containers on openSUSE by following the instructions in the
[openSUSE install guide](opensuse-installation-guide.md).
<!--
You can ignore the content of this comment.
(test code run by test-install-docs.sh to validate the code blocks in this document)
```bash
$ echo "NOTE: this document is just a link to the generic openSUSE install guide located at:
https://raw.githubusercontent.com/kata-containers/documentation/master/install/opensuse-installation-guide.md
Please download this file and run kata-doc-to-script.sh again."
```
-->
2. Decide which container manager to use and select the corresponding link that follows:
- [Docker](docker/opensuse-leap-docker-install.md)
- [Kubernetes](../Developer-Guide.md#run-kata-containers-with-kubernetes)


@@ -1,19 +0,0 @@
# Install Kata Containers on openSUSE Tumbleweed
1. Install Kata Containers on openSUSE by following the instructions in the
[openSUSE install guide](opensuse-installation-guide.md).
<!--
You can ignore the content of this comment.
(test code run by test-install-docs.sh to validate the code blocks in this document)
```bash
$ echo "NOTE: this document is just a link to the generic openSUSE install guide located at:
https://raw.githubusercontent.com/kata-containers/documentation/master/install/opensuse-installation-guide.md
Please download this file and run kata-doc-to-script.sh again."
```
-->
2. Decide which container manager to use and select the corresponding link that follows:
- [Docker](docker/opensuse-tumbleweed-docker-install.md)
- [Kubernetes](../Developer-Guide.md#run-kata-containers-with-kubernetes)


@@ -1,16 +0,0 @@
# Install Kata Containers on RHEL
1. Install the Kata Containers components with the following commands:
```bash
$ source /etc/os-release
$ ARCH=$(arch)
$ BRANCH="${BRANCH:-master}"
$ sudo -E yum-config-manager --add-repo "http://download.opensuse.org/repositories/home:/katacontainers:/releases:/${ARCH}:/${BRANCH}/RHEL_${VERSION_ID}/home:katacontainers:releases:${ARCH}:${BRANCH}.repo"
$ sudo -E yum -y install kata-runtime kata-proxy kata-shim
```
2. Decide which container manager to use and select the corresponding link that follows:
- [Docker](docker/rhel-docker-install.md)
- [Kubernetes](../Developer-Guide.md#run-kata-containers-with-kubernetes)


@@ -0,0 +1,13 @@
# Install Kata Containers on SLE
1. Install the Kata Containers components with the following commands:
```bash
$ source /etc/os-release
$ DISTRO_VERSION=$(sed "s/-/_/g" <<< "$VERSION")
$ sudo -E zypper addrepo --refresh "https://download.opensuse.org/repositories/devel:/kubic/SLE_${DISTRO_VERSION}_Backports/devel:kubic.repo"
$ sudo -E zypper -n --gpg-auto-import-keys install katacontainers
```
2. Decide which container manager to use and select the corresponding link that follows:
- [Kubernetes](../Developer-Guide.md#run-kata-containers-with-kubernetes)


@@ -1,15 +0,0 @@
# Install Kata Containers on SLES
1. Install the Kata Containers components with the following commands:
```bash
$ ARCH=$(arch)
$ BRANCH="${BRANCH:-master}"
$ sudo -E zypper addrepo "http://download.opensuse.org/repositories/home:/katacontainers:/releases:/${ARCH}:/${BRANCH}/SLE_15_SP1/home:katacontainers:releases:${ARCH}:${BRANCH}.repo"
$ sudo -E zypper -n --no-gpg-checks install kata-runtime kata-proxy kata-shim
```
2. Decide which container manager to use and select the corresponding link that follows:
- [Docker](docker/sles-docker-install.md)
- [Kubernetes](../Developer-Guide.md#run-kata-containers-with-kubernetes)


@@ -12,6 +12,4 @@
```
2. Decide which container manager to use and select the corresponding link that follows:
- [Docker](docker/ubuntu-docker-install.md)
- [Kubernetes](../Developer-Guide.md#run-kata-containers-with-kubernetes)


@@ -13,4 +13,4 @@ with v2). The recommended machine type for container workloads is `v2-highcpu`
## Set up with distribution specific quick start
Follow distribution specific [install guides](../install/README.md#supported-distributions).
Follow distribution specific [install guides](../install/README.md#packaged-installation-methods).


@@ -84,12 +84,12 @@ then a new configuration file can be [created](#configure-kata-containers)
and [configured][7].
[1]: https://docs.snapcraft.io/snaps/intro
[2]: ../../../docs/design/architecture.md#root-filesystem-image
[2]: ../docs/design/architecture.md#root-filesystem-image
[3]: https://docs.snapcraft.io/reference/confinement#classic
[4]: https://github.com/kata-containers/runtime#configuration
[5]: https://docs.docker.com/engine/reference/commandline/dockerd
[6]: ../../../docs/install/docker/ubuntu-docker-install.md
[7]: ../../../docs/Developer-Guide.md#configure-to-use-initrd-or-rootfs-image
[6]: ../docs/install/docker/ubuntu-docker-install.md
[7]: ../docs/Developer-Guide.md#configure-to-use-initrd-or-rootfs-image
[8]: https://snapcraft.io/kata-containers
[9]: ../../../docs/Developer-Guide.md#run-kata-containers-with-docker
[10]: ../../../docs/Developer-Guide.md#run-kata-containers-with-kubernetes
[9]: ../docs/Developer-Guide.md#run-kata-containers-with-docker
[10]: ../docs/Developer-Guide.md#run-kata-containers-with-kubernetes

src/agent/Cargo.lock generated

@@ -100,7 +100,7 @@ checksum = "4785bdd1c96b2a846b2bd7cc02e86b6b3dbf14e7e53446c4f54c92a361040822"
[[package]]
name = "cgroups"
version = "0.1.1-alpha.0"
source = "git+https://github.com/kata-containers/cgroups-rs?tag=0.1.1#3852d7c1805499cd6a0e37ec400d81a7085d91a7"
source = "git+https://github.com/kata-containers/cgroups-rs?branch=stable-0.1.1#8717524f2c95aacd30768b6f0f7d7f2fddef5cac"
dependencies = [
"libc",
"log",
@@ -119,6 +119,15 @@ dependencies = [
"time",
]
[[package]]
name = "cloudabi"
version = "0.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ddfc5b9aa5d4507acaf872de71051dfd0e309860e88966e1051e462a077aac4f"
dependencies = [
"bitflags",
]
[[package]]
name = "constant_time_eq"
version = "0.1.5"
@@ -305,6 +314,15 @@ version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3286f09f7d4926fc486334f28d8d2e6ebe4f7f9994494b6dab27ddfad2c9b11b"
[[package]]
name = "lock_api"
version = "0.3.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c4da24a77a3d8a6d4862d95f72e6fdb9c09a643ecdb402d754004a557f2bec75"
dependencies = [
"scopeguard",
]
[[package]]
name = "log"
version = "0.4.8"
@@ -416,6 +434,30 @@ dependencies = [
"serde_json",
]
[[package]]
name = "parking_lot"
version = "0.10.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d3a704eb390aafdc107b0e392f56a82b668e3a71366993b5340f5833fd62505e"
dependencies = [
"lock_api",
"parking_lot_core",
]
[[package]]
name = "parking_lot_core"
version = "0.7.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d58c7c768d4ba344e3e8d72518ac13e259d7c7ade24167003b8488e10b6740a3"
dependencies = [
"cfg-if",
"cloudabi",
"libc",
"redox_syscall",
"smallvec",
"winapi",
]
[[package]]
name = "path-absolutize"
version = "1.2.1"
@@ -448,7 +490,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "059a34f111a9dee2ce1ac2826a68b24601c4298cfeb1a587c3cb493d5ab46f52"
dependencies = [
"libc",
"nix 0.17.0",
"nix 0.18.0",
]
[[package]]
@@ -659,8 +701,10 @@ dependencies = [
"serde",
"serde_derive",
"serde_json",
"serial_test",
"slog",
"slog-scope",
"tempfile",
]
[[package]]
@@ -712,6 +756,28 @@ dependencies = [
"serde",
]
[[package]]
name = "serial_test"
version = "0.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1b15f74add9a9d4a3eb2bf739c9a427d266d3895b53d992c3a7c234fec2ff1f1"
dependencies = [
"lazy_static",
"parking_lot",
"serial_test_derive",
]
[[package]]
name = "serial_test_derive"
version = "0.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "65f59259be9fc1bf677d06cc1456e97756004a1a5a577480f71430bd7c17ba33"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "signal-hook"
version = "0.1.15"
@@ -779,6 +845,12 @@ dependencies = [
"slog",
]
[[package]]
name = "smallvec"
version = "1.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fbee7696b84bbf3d89a1c2eccff0850e3047ed46bfcd2e92c29a2d074d57e252"
[[package]]
name = "spin"
version = "0.5.2"


@@ -32,7 +32,7 @@ tempfile = "3.1.0"
prometheus = { version = "0.9.0", features = ["process"] }
procfs = "0.7.9"
anyhow = "1.0.32"
cgroups = { git = "https://github.com/kata-containers/cgroups-rs", tag = "0.1.1"}
cgroups = { git = "https://github.com/kata-containers/cgroups-rs", branch = "stable-0.1.1"}
[workspace]
members = [


@@ -42,6 +42,8 @@ endif
ifeq ($(ARCH), ppc64le)
override ARCH = powerpc64le
override LIBC = gnu
$(warning "WARNING: powerpc64le-unknown-linux-musl target is unavailable")
endif
TRIPLE = $(ARCH)-unknown-linux-$(LIBC)


@@ -1,10 +1,10 @@
# Kata Agent in Rust
This is a rust version of the [`kata-agent`](https://github.com/kata-containers/kata-agent).
This is a rust version of the [`kata-agent`](https://github.com/kata-containers/agent).
In Denver PTG, [we discussed about re-writing agent in rust](https://etherpad.openstack.org/p/katacontainers-2019-ptg-denver-agenda):
> In general, we all think about re-write agent in rust to reduce the footprint of agent. Moreover, Eric mentioned the possibility to stop using gRPC, which may have some impact on footprint. We may begin to do some PoC to show how much we could save by re-writing agent in rust.
> In general, we all think about re-write agent in rust to reduce the footprint of agent. Moreover, Eric mentioned the possibility to stop using gRPC, which may have some impact on footprint. We may begin to do some POC to show how much we could save by re-writing agent in rust.
After that, we drafted the initial code here, and any contributions are welcome.
@@ -18,7 +18,7 @@ After that, we drafted the initial code here, and any contributions are welcome.
| exec/list process | :white_check_mark: |
| I/O stream | :white_check_mark: |
| Cgroups | :white_check_mark: |
| Capabilities, rlimit, readonly path, masked path, users | :white_check_mark: |
| Capabilities, `rlimit`, readonly path, masked path, users | :white_check_mark: |
| container stats (`stats_container`) | :white_check_mark: |
| Hooks | :white_check_mark: |
| **Agent Features & APIs** |
@@ -28,7 +28,7 @@ After that, we drafted the initial code here, and any contributions are welcome.
| network, interface/routes (`update_container`) | :white_check_mark: |
| File transfer API (`copy_file`) | :white_check_mark: |
| Device APIs (`reseed_random_device`, `online_cpu_memory`, `mem_hotplug_probe`, `set_guest_date_time`) | :white_check_mark: |
| vsock support | :white_check_mark: |
| VSOCK support | :white_check_mark: |
| virtio-serial support | :heavy_multiplication_x: |
| OCI Spec validator | :white_check_mark: |
| **Infrastructures**|
@@ -39,25 +39,24 @@ After that, we drafted the initial code here, and any contributions are welcome.
## Getting Started
### Build from Source
The rust-agent need to be built with rust nightly, and static linked with musl.
The rust-agent needs to be built with Rust newer than 1.37, and statically linked with `musl`.
```bash
rustup target add x86_64-unknown-linux-musl
git submodule update --init --recursive
sudo ln -s /usr/bin/g++ /bin/musl-g++
cargo build --target x86_64-unknown-linux-musl --release
```
## Run Kata CI with rust-agent
* Firstly, install kata as noted by ["how to install Kata"](../../docs/install/README.md)
* Secondly, build your own kata initrd/image following the steps in ["how to build your own initrd/image"](../../docs/Developer-Guide.md#create-and-install-rootfs-and-initrd-image).
* Firstly, install Kata as noted by ["how to install Kata"](../../docs/install/README.md)
* Secondly, build your own Kata initrd/image following the steps in ["how to build your own initrd/image"](../../docs/Developer-Guide.md#create-and-install-rootfs-and-initrd-image).
Note: please use your Rust agent instead of the Go agent when building your initrd/image.
* Clone the kata ci test cases from: https://github.com/kata-containers/tests.git, and then run the cri test with:
* Clone the Kata CI test cases from: https://github.com/kata-containers/tests.git, and then run the CRI test with:
```bash
$ sudo -E PATH=$PATH GOPATH=$GOPATH integration/containerd/shimv2/shimv2-tests.sh
```
## Mini Benchmark
The memory of 'RssAnon' consumed by the go-agent and rust-agent as below:
The `RssAnon` memory consumed by the go-agent and the rust-agent is shown below:
go-agent: about 11M
rust-agent: about 1.1M


@@ -653,7 +653,7 @@ pub struct WindowsNetwork {
#[serde(
default,
skip_serializing_if = "String::is_empty",
rename = "nwtworkSharedContainerName"
rename = "networkSharedContainerName"
)]
pub network_shared_container_name: String,
}


@@ -29,13 +29,6 @@ impl Display for Error {
}
impl error::Error for Error {
fn description(&self) -> &str {
match *self {
Error::Io(ref e) => e.description(),
Error::Json(ref e) => e.description(),
}
}
fn cause(&self) -> Option<&dyn error::Error> {
match *self {
Error::Io(ref e) => Some(e),


@@ -6,11 +6,11 @@
pub mod agent;
pub mod agent_ttrpc;
pub mod empty;
pub mod health;
pub mod health_ttrpc;
pub mod oci;
pub mod types;
pub mod empty;
#[cfg(test)]
mod tests {


@@ -24,4 +24,8 @@ regex = "1.1"
path-absolutize = "1.2.0"
dirs = "3.0.1"
anyhow = "1.0.32"
cgroups = { git = "https://github.com/kata-containers/cgroups-rs", tag = "0.1.1"}
cgroups = { git = "https://github.com/kata-containers/cgroups-rs", branch = "stable-0.1.1"}
tempfile = "3.1.0"
[dev-dependencies]
serial_test = "0.5.0"


@@ -103,19 +103,16 @@ fn register_memory_event_v2(
}
info!(sl!(), "event.wd: {:?}", event.wd);
match event.wd {
ev_fd => {
let oom = get_value_from_cgroup(&event_control_path, "oom_kill");
if oom.unwrap_or(0) > 0 {
sender.send(containere_id.clone()).unwrap();
return;
}
if event.wd == ev_fd {
let oom = get_value_from_cgroup(&event_control_path, "oom_kill");
if oom.unwrap_or(0) > 0 {
sender.send(containere_id.clone()).unwrap();
return;
}
cg_fd => {
let pids = get_value_from_cgroup(&cgroup_event_control_path, "populated");
if pids.unwrap_or(-1) == 0 {
return;
}
} else if event.wd == cg_fd {
let pids = get_value_from_cgroup(&cgroup_event_control_path, "populated");
if pids.unwrap_or(-1) == 0 {
return;
}
}
}


@@ -340,8 +340,6 @@ pub fn init_child() {
return;
}
}
std::process::exit(-1);
}
fn do_init_child(cwfd: RawFd) -> Result<()> {
@@ -647,8 +645,6 @@ fn do_init_child(cwfd: RawFd) -> Result<()> {
}
do_exec(&args);
Err(anyhow!("fail to create container"))
}
impl BaseContainer for LinuxContainer {
@@ -1022,7 +1018,7 @@ where
})
}
fn do_exec(args: &[String]) -> Result<()> {
fn do_exec(args: &[String]) -> ! {
let path = &args[0];
let p = CString::new(path.to_string()).unwrap();
let sa: Vec<CString> = args
@@ -1041,8 +1037,8 @@ fn do_exec(args: &[String]) -> Result<()> {
_ => std::process::exit(-2),
}
}
// should never reach here
Ok(())
unreachable!()
}
fn update_namespaces(logger: &Logger, spec: &mut Spec, init_pid: RawFd) -> Result<()> {


@@ -13,6 +13,9 @@
#![allow(non_upper_case_globals)]
// #![allow(unused_comparisons)]
#[macro_use]
#[cfg(test)]
extern crate serial_test;
#[macro_use]
extern crate serde;
extern crate serde_json;
#[macro_use]
@@ -580,4 +583,15 @@ mod tests {
fn it_works() {
assert_eq!(2 + 2, 4);
}
#[allow(unused_macros)]
#[macro_export]
macro_rules! skip_if_not_root {
() => {
if !nix::unistd::Uid::effective().is_root() {
println!("INFO: skipping {} which needs root", module_path!());
return;
}
};
}
}


@@ -14,6 +14,7 @@ use nix::NixPath;
use oci::{LinuxDevice, Mount, Spec};
use std::collections::{HashMap, HashSet};
use std::fs::{self, OpenOptions};
use std::mem::MaybeUninit;
use std::os::unix;
use std::os::unix::io::RawFd;
use std::path::{Path, PathBuf};
@@ -48,6 +49,13 @@ pub struct Info {
}
const MOUNTINFOFORMAT: &'static str = "{d} {d} {d}:{d} {} {} {} {}";
const PROC_PATH: &str = "/proc";
// libc does not define this constant for musl, so redefine it here.
#[cfg(all(target_os = "linux", target_env = "gnu"))]
const PROC_SUPER_MAGIC: libc::c_long = 0x00009fa0;
#[cfg(all(target_os = "linux", target_env = "musl"))]
const PROC_SUPER_MAGIC: libc::c_ulong = 0x00009fa0;
lazy_static! {
static ref PROPAGATION: HashMap<&'static str, MsFlags> = {
@@ -102,6 +110,31 @@ lazy_static! {
};
}
#[inline(always)]
fn mount<P1: ?Sized + NixPath, P2: ?Sized + NixPath, P3: ?Sized + NixPath, P4: ?Sized + NixPath>(
source: Option<&P1>,
target: &P2,
fstype: Option<&P3>,
flags: MsFlags,
data: Option<&P4>,
) -> std::result::Result<(), nix::Error> {
#[cfg(not(test))]
return mount::mount(source, target, fstype, flags, data);
#[cfg(test)]
return Ok(());
}
#[inline(always)]
fn umount2<P: ?Sized + NixPath>(
target: &P,
flags: MntFlags,
) -> std::result::Result<(), nix::Error> {
#[cfg(not(test))]
return mount::umount2(target, flags);
#[cfg(test)]
return Ok(());
}
pub fn init_rootfs(
cfd_log: RawFd,
spec: &Spec,
@@ -113,22 +146,34 @@ pub fn init_rootfs(
lazy_static::initialize(&PROPAGATION);
lazy_static::initialize(&LINUXDEVICETYPE);
let linux = spec.linux.as_ref().unwrap();
let linux = &spec
.linux
.as_ref()
.ok_or::<Error>(anyhow!("Could not get linux configuration from spec"))?;
let mut flags = MsFlags::MS_REC;
match PROPAGATION.get(&linux.rootfs_propagation.as_str()) {
Some(fl) => flags |= *fl,
None => flags |= MsFlags::MS_SLAVE,
}
let rootfs = spec.root.as_ref().unwrap().path.as_str();
let root = fs::canonicalize(rootfs)?;
let rootfs = root.to_str().unwrap();
let root = spec
.root
.as_ref()
.ok_or(anyhow!("Could not get rootfs path from spec"))
.and_then(|r| {
fs::canonicalize(r.path.as_str()).context("Could not canonicalize rootfs path")
})?;
mount::mount(None::<&str>, "/", None::<&str>, flags, None::<&str>)?;
let rootfs = (*root)
.to_str()
.ok_or(anyhow!("Could not convert rootfs path to string"))?;
mount(None::<&str>, "/", None::<&str>, flags, None::<&str>)?;
rootfs_parent_mount_private(rootfs)?;
mount::mount(
mount(
Some(rootfs),
rootfs,
None::<&str>,
@@ -139,8 +184,12 @@ pub fn init_rootfs(
for m in &spec.mounts {
let (mut flags, data) = parse_mount(&m);
if !m.destination.starts_with("/") || m.destination.contains("..") {
return Err(anyhow!(nix::Error::Sys(Errno::EINVAL)));
return Err(anyhow!(
"the mount destination {} is invalid",
m.destination
));
}
if m.r#type == "cgroup" {
mount_cgroups(cfd_log, &m, rootfs, flags, &data, cpath, mounts)?;
} else {
@@ -148,6 +197,10 @@ pub fn init_rootfs(
flags &= !MsFlags::MS_RDONLY;
}
if m.r#type == "bind" {
check_proc_mount(m)?;
}
mount_from(cfd_log, &m, &rootfs, flags, &data, "")?;
// a bind mount won't change mount options; we need a remount to make the
// mount options effective.
@@ -157,7 +210,7 @@ pub fn init_rootfs(
for o in &m.options {
if let Some(fl) = PROPAGATION.get(o.as_str()) {
let dest = format!("{}{}", &rootfs, &m.destination);
mount::mount(None::<&str>, dest.as_str(), None::<&str>, *fl, None::<&str>)?;
mount(None::<&str>, dest.as_str(), None::<&str>, *fl, None::<&str>)?;
}
}
}
@@ -176,6 +229,59 @@ pub fn init_rootfs(
Ok(())
}
fn check_proc_mount(m: &Mount) -> Result<()> {
// Allow list of paths that are subdirectories of otherwise-invalid destinations.
// These entries can be bind mounted from files emulated by FUSE,
// so commands like top and free display stats inside the container.
let valid_destinations = [
"/proc/cpuinfo",
"/proc/diskstats",
"/proc/meminfo",
"/proc/stat",
"/proc/swaps",
"/proc/uptime",
"/proc/loadavg",
"/proc/net/dev",
];
for i in valid_destinations.iter() {
if m.destination.as_str() == *i {
return Ok(());
}
}
if m.destination == PROC_PATH {
// only allow a mount on top of proc if its source is "proc"
unsafe {
let mut stats = MaybeUninit::<libc::statfs>::uninit();
if let Ok(_) = m
.source
.with_nix_path(|path| libc::statfs(path.as_ptr(), stats.as_mut_ptr()))
{
if stats.assume_init().f_type == PROC_SUPER_MAGIC {
return Ok(());
}
} else {
return Ok(());
}
return Err(anyhow!(format!(
"{} cannot be mounted to {} because it is not of type proc",
m.source, m.destination
)));
}
}
if m.destination.starts_with(PROC_PATH) {
return Err(anyhow!(format!(
"{} cannot be mounted because it is inside /proc",
m.destination
)));
}
return Ok(());
}
fn mount_cgroups_v2(cfd_log: RawFd, m: &Mount, rootfs: &str, flags: MsFlags) -> Result<()> {
let olddir = unistd::getcwd()?;
unistd::chdir(rootfs)?;
@@ -196,7 +302,7 @@ fn mount_cgroups_v2(cfd_log: RawFd, m: &Mount, rootfs: &str, flags: MsFlags) ->
if flags.contains(MsFlags::MS_RDONLY) {
let dest = format!("{}{}", rootfs, m.destination.as_str());
mount::mount(
mount(
Some(dest.as_str()),
dest.as_str(),
None::<&str>,
@@ -303,7 +409,7 @@ fn mount_cgroups(
if flags.contains(MsFlags::MS_RDONLY) {
let dest = format!("{}{}", rootfs, m.destination.as_str());
mount::mount(
mount(
Some(dest.as_str()),
dest.as_str(),
None::<&str>,
@@ -315,6 +421,16 @@ fn mount_cgroups(
Ok(())
}
fn pivot_root<P1: ?Sized + NixPath, P2: ?Sized + NixPath>(
new_root: &P1,
put_old: &P2,
) -> anyhow::Result<(), nix::Error> {
#[cfg(not(test))]
return unistd::pivot_root(new_root, put_old);
#[cfg(test)]
return Ok(());
}
pub fn pivot_rootfs<P: ?Sized + NixPath + std::fmt::Debug>(path: &P) -> Result<()> {
let oldroot = fcntl::open("/", OFlag::O_DIRECTORY | OFlag::O_RDONLY, Mode::empty())?;
defer!(unistd::close(oldroot).unwrap());
@@ -323,7 +439,7 @@ pub fn pivot_rootfs<P: ?Sized + NixPath + std::fmt::Debug>(path: &P) -> Result<(
// Change to the new root so that the pivot_root actually acts on it.
unistd::fchdir(newroot)?;
unistd::pivot_root(".", ".").context(format!("failed to pivot_root on {:?}", path))?;
pivot_root(".", ".").context(format!("failed to pivot_root on {:?}", path))?;
// Currently our "." is oldroot (according to the current kernel code).
// However, purely for safety, we will fchdir(oldroot) since there isn't
@@ -336,7 +452,7 @@ pub fn pivot_rootfs<P: ?Sized + NixPath + std::fmt::Debug>(path: &P) -> Result<(
// to races where we still have a reference to a mount while a process in
// the host namespace is trying to operate on something it thinks has no
// mounts (devicemapper in particular).
mount::mount(
mount(
Some("none"),
".",
Some(""),
@@ -345,7 +461,7 @@ pub fn pivot_rootfs<P: ?Sized + NixPath + std::fmt::Debug>(path: &P) -> Result<(
)?;
// Perform the unmount. MNT_DETACH allows us to unmount /proc/self/cwd.
mount::umount2(".", MntFlags::MNT_DETACH).context("failed to do umount2")?;
umount2(".", MntFlags::MNT_DETACH).context("failed to do umount2")?;
// Switch back to our shiny new root.
unistd::chdir("/")?;
@@ -368,7 +484,7 @@ fn rootfs_parent_mount_private(path: &str) -> Result<()> {
}
if options.contains("shared:") {
mount::mount(
mount(
None::<&str>,
mount_point.as_str(),
None::<&str>,
@@ -436,6 +552,14 @@ fn parse_mount_table() -> Result<Vec<Info>> {
Ok(infos)
}
#[inline(always)]
fn chroot<P: ?Sized + NixPath>(path: &P) -> Result<(), nix::Error> {
#[cfg(not(test))]
return unistd::chroot(path);
#[cfg(test)]
return Ok(());
}
pub fn ms_move_root(rootfs: &str) -> Result<bool> {
unistd::chdir(rootfs)?;
let mount_infos = parse_mount_table()?;
@@ -463,14 +587,14 @@ pub fn ms_move_root(rootfs: &str) -> Result<bool> {
}
// Be sure umount events are not propagated to the host.
mount::mount(
mount(
None::<&str>,
abs_mount_point,
None::<&str>,
MsFlags::MS_SLAVE | MsFlags::MS_REC,
None::<&str>,
)?;
match mount::umount2(abs_mount_point, MntFlags::MNT_DETACH) {
match umount2(abs_mount_point, MntFlags::MNT_DETACH) {
Ok(_) => (),
Err(e) => {
if e.ne(&nix::Error::from(Errno::EINVAL)) && e.ne(&nix::Error::from(Errno::EPERM)) {
@@ -479,7 +603,7 @@ pub fn ms_move_root(rootfs: &str) -> Result<bool> {
// If we have not privileges for umounting (e.g. rootless), then
// cover the path.
mount::mount(
mount(
Some("tmpfs"),
abs_mount_point,
Some("tmpfs"),
@@ -490,14 +614,14 @@ pub fn ms_move_root(rootfs: &str) -> Result<bool> {
}
}
mount::mount(
mount(
Some(abs_root),
"/",
None::<&str>,
MsFlags::MS_MOVE,
None::<&str>,
)?;
unistd::chroot(".")?;
chroot(".")?;
unistd::chdir("/")?;
Ok(true)
@@ -584,7 +708,7 @@ fn mount_from(
}
}
match mount::mount(
match mount(
Some(src.as_str()),
dest.as_str(),
Some(m.r#type.as_str()),
@@ -608,7 +732,7 @@ fn mount_from(
| MsFlags::MS_SLAVE),
)
{
match mount::mount(
match mount(
Some(dest.as_str()),
dest.as_str(),
None::<&str>,
@@ -669,10 +793,6 @@ fn ensure_ptmx() -> Result<()> {
Ok(())
}
fn makedev(major: u64, minor: u64) -> u64 {
(minor & 0xff) | ((major & 0xfff) << 8) | ((minor & !0xff) << 12) | ((major & !0xfff) << 32)
}
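The local `makedev` removed here hand-packed Linux `dev_t` values in the glibc 64-bit layout, the same encoding that `nix::sys::stat::makedev` produces. A quick sanity check of that bit layout, with the formula copied from the deleted helper:

```rust
// glibc dev_t layout: low 8 bits of minor at bits 0-7, low 12 bits of
// major at bits 8-19, high minor bits from bit 20, high major bits from bit 32.
fn makedev(major: u64, minor: u64) -> u64 {
    (minor & 0xff) | ((major & 0xfff) << 8) | ((minor & !0xff) << 12) | ((major & !0xfff) << 32)
}
```

For example, `/dev/null` (major 1, minor 3) encodes to `0x103`.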
lazy_static! {
static ref LINUXDEVICETYPE: HashMap<&'static str, SFlag> = {
let mut m = HashMap::new();
@@ -693,7 +813,7 @@ fn mknod_dev(dev: &LinuxDevice) -> Result<()> {
&dev.path[1..],
*f,
Mode::from_bits_truncate(dev.file_mode.unwrap_or(0)),
makedev(dev.major as u64, dev.minor as u64),
nix::sys::stat::makedev(dev.major as u64, dev.minor as u64),
)?;
unistd::chown(
@@ -714,7 +834,7 @@ fn bind_dev(dev: &LinuxDevice) -> Result<()> {
unistd::close(fd)?;
mount::mount(
mount(
Some(&*dev.path),
&dev.path[1..],
None::<&str>,
@@ -744,7 +864,7 @@ pub fn finish_rootfs(cfd_log: RawFd, spec: &Spec) -> Result<()> {
if m.destination == "/dev" {
let (flags, _) = parse_mount(m);
if flags.contains(MsFlags::MS_RDONLY) {
mount::mount(
mount(
Some("/dev"),
"/dev",
None::<&str>,
@@ -758,7 +878,7 @@ pub fn finish_rootfs(cfd_log: RawFd, spec: &Spec) -> Result<()> {
if spec.root.as_ref().unwrap().readonly {
let flags = MsFlags::MS_BIND | MsFlags::MS_RDONLY | MsFlags::MS_NODEV | MsFlags::MS_REMOUNT;
mount::mount(Some("/"), "/", None::<&str>, flags, None::<&str>)?;
mount(Some("/"), "/", None::<&str>, flags, None::<&str>)?;
}
stat::umask(Mode::from_bits_truncate(0o022));
unistd::chdir(&olddir)?;
@@ -773,7 +893,7 @@ fn mask_path(path: &str) -> Result<()> {
//info!("{}", path);
match mount::mount(
match mount(
Some("/dev/null"),
path,
None::<&str>,
@@ -805,7 +925,7 @@ fn readonly_path(path: &str) -> Result<()> {
//info!("{}", path);
match mount::mount(
match mount(
Some(&path[1..]),
path,
None::<&str>,
@@ -829,7 +949,7 @@ fn readonly_path(path: &str) -> Result<()> {
Ok(_) => {}
}
mount::mount(
mount(
Some(&path[1..]),
&path[1..],
None::<&str>,
@@ -839,3 +959,272 @@ fn readonly_path(path: &str) -> Result<()> {
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
use crate::skip_if_not_root;
use std::os::unix::io::AsRawFd;
use tempfile::tempdir;
#[test]
#[serial(chdir)]
fn test_init_rootfs() {
let stdout_fd = std::io::stdout().as_raw_fd();
let mut spec = oci::Spec::default();
let cpath = HashMap::new();
let mounts = HashMap::new();
// there is no spec.linux, should fail
let ret = init_rootfs(stdout_fd, &spec, &cpath, &mounts, true);
assert!(
ret.is_err(),
"Should fail: there is no spec.linux. Got: {:?}",
ret
);
// there is no spec.Root, should fail
spec.linux = Some(oci::Linux::default());
let ret = init_rootfs(stdout_fd, &spec, &cpath, &mounts, true);
assert!(
ret.is_err(),
"should fail: there is no spec.Root. Got: {:?}",
ret
);
let rootfs = tempdir().unwrap();
let ret = fs::create_dir(rootfs.path().join("dev"));
assert!(ret.is_ok(), "Got: {:?}", ret);
spec.root = Some(oci::Root {
path: rootfs.path().to_str().unwrap().to_string(),
readonly: false,
});
// there is no spec.mounts, but should pass
let ret = init_rootfs(stdout_fd, &spec, &cpath, &mounts, true);
assert!(ret.is_ok(), "Should pass. Got: {:?}", ret);
let ret = fs::remove_dir_all(rootfs.path().join("dev"));
let ret = fs::create_dir(rootfs.path().join("dev"));
// Adding bad mount point to spec.mounts
spec.mounts.push(oci::Mount {
destination: "error".into(),
r#type: "bind".into(),
source: "error".into(),
options: vec!["shared".into(), "rw".into(), "dev".into()],
});
// destination doesn't start with /, should fail
let ret = init_rootfs(stdout_fd, &spec, &cpath, &mounts, true);
assert!(
ret.is_err(),
"Should fail: destination doesn't start with '/'. Got: {:?}",
ret
);
spec.mounts.pop();
let ret = fs::remove_dir_all(rootfs.path().join("dev"));
let ret = fs::create_dir(rootfs.path().join("dev"));
// mounting a cgroup
spec.mounts.push(oci::Mount {
destination: "/cgroup".into(),
r#type: "cgroup".into(),
source: "/cgroup".into(),
options: vec!["shared".into()],
});
let ret = init_rootfs(stdout_fd, &spec, &cpath, &mounts, true);
assert!(ret.is_ok(), "Should pass. Got: {:?}", ret);
spec.mounts.pop();
let ret = fs::remove_dir_all(rootfs.path().join("dev"));
let ret = fs::create_dir(rootfs.path().join("dev"));
// mounting /dev
spec.mounts.push(oci::Mount {
destination: "/dev".into(),
r#type: "bind".into(),
source: "/dev".into(),
options: vec!["shared".into()],
});
let ret = init_rootfs(stdout_fd, &spec, &cpath, &mounts, true);
assert!(ret.is_ok(), "Should pass. Got: {:?}", ret);
}
#[test]
#[serial(chdir)]
fn test_mount_cgroups() {
let stdout_fd = std::io::stdout().as_raw_fd();
let mount = oci::Mount {
destination: "/cgroups".to_string(),
r#type: "cgroup".to_string(),
source: "/cgroups".to_string(),
options: vec!["shared".to_string()],
};
let tempdir = tempdir().unwrap();
let rootfs = tempdir.path().to_str().unwrap().to_string();
let flags = MsFlags::MS_RDONLY;
let mut cpath = HashMap::new();
let mut cgroup_mounts = HashMap::new();
cpath.insert("cpu".to_string(), "cpu".to_string());
cpath.insert("memory".to_string(), "memory".to_string());
cgroup_mounts.insert("default".to_string(), "default".to_string());
cgroup_mounts.insert("cpu".to_string(), "cpu".to_string());
cgroup_mounts.insert("memory".to_string(), "memory".to_string());
let ret = fs::create_dir_all(tempdir.path().join("cgroups"));
assert!(ret.is_ok(), "Should pass. Got {:?}", ret);
let ret = fs::create_dir_all(tempdir.path().join("cpu"));
assert!(ret.is_ok(), "Should pass. Got {:?}", ret);
let ret = fs::create_dir_all(tempdir.path().join("memory"));
assert!(ret.is_ok(), "Should pass. Got {:?}", ret);
let ret = mount_cgroups(
stdout_fd,
&mount,
&rootfs,
flags,
"",
&cpath,
&cgroup_mounts,
);
assert!(ret.is_ok(), "Should pass. Got: {:?}", ret);
}
#[test]
#[serial(chdir)]
fn test_pivot_root() {
let ret = pivot_rootfs("/tmp");
assert!(ret.is_ok(), "Should pass. Got: {:?}", ret);
}
#[test]
#[serial(chdir)]
fn test_ms_move_rootfs() {
let ret = ms_move_root("/abc");
assert!(
ret.is_err(),
"Should fail. path doesn't exist. Got: {:?}",
ret
);
let ret = ms_move_root("/tmp");
assert!(ret.is_ok(), "Should pass. Got: {:?}", ret);
}
#[test]
fn test_mask_path() {
let ret = mask_path("abc");
assert!(
ret.is_err(),
"Should fail: path doesn't start with '/'. Got: {:?}",
ret
);
let ret = mask_path("abc/../");
assert!(
ret.is_err(),
"Should fail: path contains '..'. Got: {:?}",
ret
);
let ret = mask_path("/tmp");
assert!(ret.is_ok(), "Should pass. Got: {:?}", ret);
}
#[test]
#[serial(chdir)]
fn test_finish_rootfs() {
let stdout_fd = std::io::stdout().as_raw_fd();
let mut spec = oci::Spec::default();
spec.linux = Some(oci::Linux::default());
spec.linux.as_mut().unwrap().masked_paths = vec!["/tmp".to_string()];
spec.linux.as_mut().unwrap().readonly_paths = vec!["/tmp".to_string()];
spec.root = Some(oci::Root {
path: "/tmp".to_string(),
readonly: true,
});
spec.mounts = vec![oci::Mount {
destination: "/dev".to_string(),
r#type: "bind".to_string(),
source: "/dev".to_string(),
options: vec!["ro".to_string(), "shared".to_string()],
}];
let ret = finish_rootfs(stdout_fd, &spec);
assert!(ret.is_ok(), "Should pass. Got: {:?}", ret);
}
#[test]
fn test_readonly_path() {
let ret = readonly_path("abc");
assert!(ret.is_err(), "Should fail. Got: {:?}", ret);
let ret = readonly_path("../../");
assert!(ret.is_err(), "Should fail. Got: {:?}", ret);
let ret = readonly_path("/tmp");
assert!(ret.is_ok(), "Should pass. Got: {:?}", ret);
}
#[test]
#[serial(chdir)]
fn test_mknod_dev() {
skip_if_not_root!();
let tempdir = tempdir().unwrap();
let olddir = unistd::getcwd().unwrap();
defer!(unistd::chdir(&olddir););
unistd::chdir(tempdir.path());
let dev = oci::LinuxDevice {
path: "/fifo".to_string(),
r#type: "c".to_string(),
major: 0,
minor: 0,
file_mode: Some(0o660),
uid: Some(unistd::getuid().as_raw()),
gid: Some(unistd::getgid().as_raw()),
};
let ret = mknod_dev(&dev);
assert!(ret.is_ok(), "Should pass. Got: {:?}", ret);
let ret = stat::stat("fifo");
assert!(ret.is_ok(), "Should pass. Got: {:?}", ret);
}
#[test]
fn test_check_proc_mount() {
let mount = oci::Mount {
destination: "/proc".to_string(),
r#type: "bind".to_string(),
source: "/test".to_string(),
options: vec!["shared".to_string()],
};
assert!(check_proc_mount(&mount).is_err());
let mount = oci::Mount {
destination: "/proc/cpuinfo".to_string(),
r#type: "bind".to_string(),
source: "/test".to_string(),
options: vec!["shared".to_string()],
};
assert!(check_proc_mount(&mount).is_ok());
let mount = oci::Mount {
destination: "/proc/test".to_string(),
r#type: "bind".to_string(),
source: "/test".to_string(),
options: vec!["shared".to_string()],
};
assert!(check_proc_mount(&mount).is_err());
}
}


@@ -31,16 +31,20 @@ extern crate netlink;
use crate::netlink::{RtnlHandle, NETLINK_ROUTE};
use anyhow::{anyhow, Context, Result};
use nix::fcntl::{self, OFlag};
use nix::fcntl::{FcntlArg, FdFlag};
use nix::libc::{STDERR_FILENO, STDIN_FILENO, STDOUT_FILENO};
use nix::pty;
use nix::sys::select::{select, FdSet};
use nix::sys::socket::{self, AddressFamily, SockAddr, SockFlag, SockType};
use nix::sys::wait::{self, WaitStatus};
use nix::unistd;
use nix::unistd::dup;
use nix::unistd::{self, close, dup, dup2, fork, setsid, ForkResult};
use prctl::set_child_subreaper;
use signal_hook::{iterator::Signals, SIGCHLD};
use std::collections::HashMap;
use std::env;
use std::ffi::OsStr;
use std::ffi::{CStr, CString, OsStr};
use std::fs::{self, File};
use std::io::{Read, Write};
use std::os::unix::ffi::OsStrExt;
use std::os::unix::fs as unixfs;
use std::os::unix::io::AsRawFd;
@@ -75,6 +79,8 @@ const NAME: &str = "kata-agent";
const KERNEL_CMDLINE_FILE: &str = "/proc/cmdline";
const CONSOLE_PATH: &str = "/dev/console";
const DEFAULT_BUF_SIZE: usize = 8 * 1024;
lazy_static! {
static ref GLOBAL_DEVICE_WATCHER: Arc<Mutex<HashMap<String, Sender<String>>>> =
Arc::new(Mutex::new(HashMap::new()));
@@ -126,15 +132,6 @@ fn main() -> Result<()> {
let writer = unsafe { File::from_raw_fd(wfd) };
let agentConfig = AGENT_CONFIG.clone();
// once the cmdline is parsed and the config is set, release the write
// lock as soon as possible so that other threads can acquire the read
// lock.
{
let mut config = agentConfig.write().unwrap();
config.parse_cmdline(KERNEL_CMDLINE_FILE)?;
}
let config = agentConfig.read().unwrap();
let init_mode = unistd::getpid() == Pid::from_raw(1);
if init_mode {
@@ -148,8 +145,25 @@ fn main() -> Result<()> {
// since before the base mounts are done, it cannot access "/proc/cmdline"
// to get the customized debug level.
let logger = logging::create_logger(NAME, "agent", slog::Level::Debug, writer);
// Must mount proc fs before parsing kernel command line
general_mount(&logger).map_err(|e| {
error!(logger, "fail general mount: {}", e);
e
})?;
let mut config = agentConfig.write().unwrap();
config.parse_cmdline(KERNEL_CMDLINE_FILE)?;
init_agent_as_init(&logger, config.unified_cgroup_hierarchy)?;
} else {
// once the cmdline is parsed and the config is set, release the write
// lock as soon as possible so that other threads can acquire the read
// lock.
let mut config = agentConfig.write().unwrap();
config.parse_cmdline(KERNEL_CMDLINE_FILE)?;
}
let config = agentConfig.read().unwrap();
let log_vport = config.log_vport as u32;
let log_handle = thread::spawn(move || -> Result<()> {
@@ -205,7 +219,7 @@ fn start_sandbox(logger: &Logger, config: &agentConfig, init_mode: bool) -> Resu
let handle = builder.spawn(move || {
let shells = shells.lock().unwrap();
let result = setup_debug_console(shells.to_vec(), debug_console_vport);
let result = setup_debug_console(&thread_logger, shells.to_vec(), debug_console_vport);
if result.is_err() {
// Report error, but don't fail
warn!(thread_logger, "failed to setup debug console";
@@ -335,8 +349,13 @@ fn setup_signal_handler(logger: &Logger, sandbox: Arc<Mutex<Sandbox>>) -> Result
// init_agent_as_init will do the initializations such as setting up the rootfs
// when this agent has been run as the init process.
fn init_agent_as_init(logger: &Logger, unified_cgroup_hierarchy: bool) -> Result<()> {
general_mount(logger)?;
cgroups_mount(logger, unified_cgroup_hierarchy)?;
cgroups_mount(logger, unified_cgroup_hierarchy).map_err(|e| {
error!(
logger,
"fail cgroups mount, unified_cgroup_hierarchy {}: {}", unified_cgroup_hierarchy, e
);
e
})?;
fs::remove_file(Path::new("/dev/ptmx"))?;
unixfs::symlink(Path::new("/dev/pts/ptmx"), Path::new("/dev/ptmx"))?;
@@ -393,9 +412,9 @@ use crate::config::agentConfig;
use nix::sys::stat::Mode;
use std::os::unix::io::{FromRawFd, RawFd};
use std::path::PathBuf;
use std::process::{exit, Command, Stdio};
use std::process::exit;
fn setup_debug_console(shells: Vec<String>, port: u32) -> Result<()> {
fn setup_debug_console(logger: &Logger, shells: Vec<String>, port: u32) -> Result<()> {
let mut shell: &str = "";
for sh in shells.iter() {
let binary = PathBuf::from(sh);
@@ -409,7 +428,7 @@ fn setup_debug_console(shells: Vec<String>, port: u32) -> Result<()> {
return Err(anyhow!("no shell found to launch debug console"));
}
let f: RawFd = if port > 0 {
if port > 0 {
let listenfd = socket::socket(
AddressFamily::Vsock,
SockType::Stream,
@@ -419,29 +438,201 @@ fn setup_debug_console(shells: Vec<String>, port: u32) -> Result<()> {
let addr = SockAddr::new_vsock(libc::VMADDR_CID_ANY, port);
socket::bind(listenfd, &addr)?;
socket::listen(listenfd, 1)?;
socket::accept4(listenfd, SockFlag::SOCK_CLOEXEC)?
loop {
let f: RawFd = socket::accept4(listenfd, SockFlag::SOCK_CLOEXEC)?;
match run_debug_console_shell(logger, shell, f) {
Ok(_) => {
info!(logger, "run_debug_console_shell session finished");
}
Err(err) => {
error!(logger, "run_debug_console_shell failed: {:?}", err);
}
}
}
} else {
let mut flags = OFlag::empty();
flags.insert(OFlag::O_RDWR);
flags.insert(OFlag::O_CLOEXEC);
fcntl::open(CONSOLE_PATH, flags, Mode::empty())?
loop {
let f: RawFd = fcntl::open(CONSOLE_PATH, flags, Mode::empty())?;
match run_debug_console_shell(logger, shell, f) {
Ok(_) => {
info!(logger, "run_debug_console_shell session finished");
}
Err(err) => {
error!(logger, "run_debug_console_shell failed: {:?}", err);
}
}
}
};
}
fn io_copy<R: ?Sized, W: ?Sized>(reader: &mut R, writer: &mut W) -> io::Result<u64>
where
R: Read,
W: Write,
{
let mut buf = [0; DEFAULT_BUF_SIZE];
let buf_len;
match reader.read(&mut buf) {
Ok(0) => return Ok(0),
Ok(len) => buf_len = len,
Err(err) => return Err(err),
};
let cmd = Command::new(shell)
.arg("-i")
.stdin(unsafe { Stdio::from_raw_fd(f) })
.stdout(unsafe { Stdio::from_raw_fd(f) })
.stderr(unsafe { Stdio::from_raw_fd(f) })
.spawn();
// write and return
match writer.write_all(&buf[..buf_len]) {
Ok(_) => return Ok(buf_len as u64),
Err(err) => return Err(err),
}
}
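`io_copy` above intentionally performs a single read and a single write per call, so the surrounding `select` loop regains control after every chunk. The same single-shot behaviour can be sketched against in-memory streams:

```rust
use std::io::{Read, Write};

// One read, one write_all: Ok(0) signals EOF, otherwise the bytes copied.
fn copy_once<R: Read, W: Write>(reader: &mut R, writer: &mut W) -> std::io::Result<u64> {
    let mut buf = [0u8; 8192];
    let n = reader.read(&mut buf)?;
    if n == 0 {
        return Ok(0);
    }
    writer.write_all(&buf[..n])?;
    Ok(n as u64)
}
```

Unlike `std::io::copy`, which loops until EOF, this variant returns after one chunk, which is what lets the caller interleave copies in both directions.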
let mut cmd = match cmd {
Ok(c) => c,
Err(_) => return Err(anyhow!("failed to spawn shell")),
};
fn run_debug_console_shell(logger: &Logger, shell: &str, socket_fd: RawFd) -> Result<()> {
let pseduo = pty::openpty(None, None)?;
let _ = fcntl::fcntl(pseduo.master, FcntlArg::F_SETFD(FdFlag::FD_CLOEXEC));
let _ = fcntl::fcntl(pseduo.slave, FcntlArg::F_SETFD(FdFlag::FD_CLOEXEC));
cmd.wait()?;
let slave_fd = pseduo.slave;
return Ok(());
match fork() {
Ok(ForkResult::Child) => {
// create new session with child as session leader
setsid()?;
// dup stdin, stdout, stderr to let child act as a terminal
dup2(slave_fd, STDIN_FILENO)?;
dup2(slave_fd, STDOUT_FILENO)?;
dup2(slave_fd, STDERR_FILENO)?;
// set tty
unsafe {
libc::ioctl(0, libc::TIOCSCTTY);
}
let cmd = CString::new(shell).unwrap();
let args: Vec<&CStr> = vec![];
// run shell
if let Err(e) = unistd::execvp(cmd.as_c_str(), args.as_slice()) {
match e {
nix::Error::Sys(errno) => {
std::process::exit(errno as i32);
}
_ => std::process::exit(-2),
}
}
}
Ok(ForkResult::Parent { child: child_pid }) => {
info!(logger, "get debug shell pid {:?}", child_pid);
let (rfd, wfd) = unistd::pipe2(OFlag::O_CLOEXEC)?;
let master_fd = pseduo.master;
let debug_shell_logger = logger.clone();
// channel used to synchronize the IO thread with the main process
let (tx, rx) = mpsc::channel::<i32>();
// start a thread to copy IO between the socket and the pty master
thread::spawn(move || {
let mut master_reader = unsafe { File::from_raw_fd(master_fd) };
let mut master_writer = unsafe { File::from_raw_fd(master_fd) };
let mut socket_reader = unsafe { File::from_raw_fd(socket_fd) };
let mut socket_writer = unsafe { File::from_raw_fd(socket_fd) };
loop {
let mut fd_set = FdSet::new();
fd_set.insert(rfd);
fd_set.insert(master_fd);
fd_set.insert(socket_fd);
match select(
Some(fd_set.highest().unwrap() + 1),
&mut fd_set,
None,
None,
None,
) {
Ok(_) => (),
Err(e) => {
if e == nix::Error::from(nix::errno::Errno::EINTR) {
continue;
} else {
error!(debug_shell_logger, "select error {:?}", e);
tx.send(1).unwrap();
break;
}
}
}
if fd_set.contains(rfd) {
info!(
debug_shell_logger,
"debug shell process {} exited", child_pid
);
tx.send(1).unwrap();
break;
}
if fd_set.contains(master_fd) {
match io_copy(&mut master_reader, &mut socket_writer) {
Ok(0) => {
debug!(debug_shell_logger, "master fd closed");
tx.send(1).unwrap();
break;
}
Ok(_) => {}
Err(ref e) if e.kind() == std::io::ErrorKind::Interrupted => continue,
Err(e) => {
error!(debug_shell_logger, "read master fd error {:?}", e);
tx.send(1).unwrap();
break;
}
}
}
if fd_set.contains(socket_fd) {
match io_copy(&mut socket_reader, &mut master_writer) {
Ok(0) => {
debug!(debug_shell_logger, "socket fd closed");
tx.send(1).unwrap();
break;
}
Ok(_) => {}
Err(ref e) if e.kind() == std::io::ErrorKind::Interrupted => continue,
Err(e) => {
error!(debug_shell_logger, "read socket fd error {:?}", e);
tx.send(1).unwrap();
break;
}
}
}
}
});
let wait_status = wait::waitpid(child_pid, None);
info!(logger, "debug console process exit code: {:?}", wait_status);
info!(logger, "notify debug monitor thread to exit");
// close pipe to exit select loop
let _ = close(wfd);
// wait for thread exit.
let _ = rx.recv().unwrap();
info!(logger, "debug monitor thread has exited");
// close files
let _ = close(rfd);
let _ = close(master_fd);
let _ = close(slave_fd);
}
Err(err) => {
return Err(anyhow!("fork error: {:?}", err));
}
}
Ok(())
}
#[cfg(test)]
@@ -459,8 +650,9 @@ mod tests {
let shells_ref = SHELLS.clone();
let mut shells = shells_ref.lock().unwrap();
shells.clear();
let logger = slog_scope::logger();
let result = setup_debug_console(shells.to_vec(), 0);
let result = setup_debug_console(&logger, shells.to_vec(), 0);
assert!(result.is_err());
assert_eq!(
@@ -485,8 +677,9 @@ mod tests {
.to_string();
shells.push(shell);
let logger = slog_scope::logger();
let result = setup_debug_console(shells.to_vec(), 0);
let result = setup_debug_console(&logger, shells.to_vec(), 0);
assert!(result.is_err());
assert_eq!(


@@ -891,10 +891,10 @@ impl protocols::agent_ttrpc::AgentService for agentService {
let p = match find_process(&mut sandbox, cid.as_str(), eid.as_str(), false) {
Ok(v) => v,
Err(_) => {
Err(e) => {
return Err(ttrpc::Error::RpcStatus(ttrpc::get_status(
ttrpc::Code::INVALID_ARGUMENT,
"invalid argument".to_string(),
format!("invalid argument: {:?}", e),
)));
}
};
@@ -923,10 +923,10 @@ impl protocols::agent_ttrpc::AgentService for agentService {
let mut sandbox = s.lock().unwrap();
let p = match find_process(&mut sandbox, cid.as_str(), eid.as_str(), false) {
Ok(v) => v,
Err(_e) => {
Err(e) => {
return Err(ttrpc::Error::RpcStatus(ttrpc::get_status(
ttrpc::Code::UNAVAILABLE,
"cannot find the process".to_string(),
format!("cannot find the process: {:?}", e),
)));
}
};
@@ -948,10 +948,10 @@ impl protocols::agent_ttrpc::AgentService for agentService {
};
let err = libc::ioctl(fd, TIOCSWINSZ, &win);
if let Err(_) = Errno::result(err).map(drop) {
if let Err(e) = Errno::result(err).map(drop) {
return Err(ttrpc::Error::RpcStatus(ttrpc::get_status(
ttrpc::Code::INTERNAL,
"ioctl error".to_string(),
format!("ioctl error: {:?}", e),
)));
}
}
@@ -976,10 +976,10 @@ impl protocols::agent_ttrpc::AgentService for agentService {
let iface = match rtnl.update_interface(interface.as_ref().unwrap()) {
Ok(v) => v,
Err(_) => {
Err(e) => {
return Err(ttrpc::Error::RpcStatus(ttrpc::get_status(
ttrpc::Code::INTERNAL,
"update interface".to_string(),
format!("update interface: {:?}", e),
)));
}
};
@@ -1005,10 +1005,10 @@ impl protocols::agent_ttrpc::AgentService for agentService {
// get current routes to return when error out
let crs = match rtnl.list_routes() {
Ok(routes) => routes,
Err(_) => {
Err(e) => {
return Err(ttrpc::Error::RpcStatus(ttrpc::get_status(
ttrpc::Code::INTERNAL,
"update routes".to_string(),
format!("update routes: {:?}", e),
)));
}
};
@@ -1037,10 +1037,10 @@ impl protocols::agent_ttrpc::AgentService for agentService {
let rtnl = sandbox.rtnl.as_mut().unwrap();
let v = match rtnl.list_interfaces() {
Ok(value) => value,
Err(_) => {
Err(e) => {
return Err(ttrpc::Error::RpcStatus(ttrpc::get_status(
ttrpc::Code::INTERNAL,
"list interface".to_string(),
format!("list interface: {:?}", e),
)));
}
};
@@ -1066,10 +1066,10 @@ impl protocols::agent_ttrpc::AgentService for agentService {
let v = match rtnl.list_routes() {
Ok(value) => value,
Err(_) => {
Err(e) => {
return Err(ttrpc::Error::RpcStatus(ttrpc::get_status(
ttrpc::Code::INTERNAL,
"list routes".to_string(),
format!("list routes: {:?}", e),
)));
}
};


@@ -1,5 +0,0 @@
# Contributing
## This repo is part of [Kata Containers](https://katacontainers.io)
For details on how to contribute to the Kata Containers project, please see the main [contributing document](https://github.com/kata-containers/community/blob/master/CONTRIBUTING.md).


@@ -43,7 +43,8 @@ include $(ARCH_FILE)
PROJECT_TYPE = kata
PROJECT_NAME = Kata Containers
PROJECT_TAG = kata-containers
PROJECT_URL = https://github.com/kata-containers
PROJECT_ORG = $(PROJECT_TAG)
PROJECT_URL = https://github.com/$(PROJECT_ORG)
PROJECT_BUG_URL = $(PROJECT_URL)/kata-containers/issues/new
# list of scripts to install
@@ -581,7 +582,6 @@ $(MONITOR_OUTPUT): $(SOURCES) $(GENERATED_FILES) $(MAKEFILE_LIST)
.PHONY: \
check \
check-go-static \
check-go-test \
coverage \
default \
install \
@@ -628,6 +628,7 @@ $(GENERATED_FILES): %: %.in $(MAKEFILE_LIST) VERSION .git-commit
-e "s|@PKGRUNDIR@|$(PKGRUNDIR)|g" \
-e "s|@NETMONPATH@|$(NETMONPATH)|g" \
-e "s|@PROJECT_BUG_URL@|$(PROJECT_BUG_URL)|g" \
-e "s|@PROJECT_ORG@|$(PROJECT_ORG)|g" \
-e "s|@PROJECT_URL@|$(PROJECT_URL)|g" \
-e "s|@PROJECT_NAME@|$(PROJECT_NAME)|g" \
-e "s|@PROJECT_TAG@|$(PROJECT_TAG)|g" \
@@ -686,9 +687,9 @@ go-test: $(GENERATED_FILES)
go test -v -mod=vendor ./...
check-go-static:
$(QUIET_CHECK).ci/static-checks.sh
$(QUIET_CHECK).ci/go-no-os-exit.sh ./cli
$(QUIET_CHECK).ci/go-no-os-exit.sh ./virtcontainers
$(QUIET_CHECK)../../ci/static-checks.sh
$(QUIET_CHECK)../../ci/go-no-os-exit.sh ./cli
$(QUIET_CHECK)../../ci/go-no-os-exit.sh ./virtcontainers
coverage:
go test -v -mod=vendor -covermode=atomic -coverprofile=coverage.txt ./...


@@ -25,6 +25,9 @@ const projectPrefix = "@PROJECT_TYPE@"
// original URL for this project
const projectURL = "@PROJECT_URL@"
// Project URL's organisation name
const projectORG = "@PROJECT_ORG@"
const defaultRootDirectory = "@PKGRUNDIR@"
// commit is the git commit the runtime is compiled from.


@@ -127,6 +127,13 @@ block_device_driver = "@DEFBLOCKSTORAGEDRIVER_ACRN@"
#trace_mode = "dynamic"
#trace_type = "isolated"
# Enable debug console.
# If enabled, the user can connect to the guest OS running inside the
# hypervisor through the "kata-runtime exec <sandbox-id>" command.
#debug_console_enabled = true
[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of some additional


@@ -125,6 +125,12 @@ block_device_driver = "virtio-blk"
#trace_mode = "dynamic"
#trace_type = "isolated"
# Enable debug console.
# If enabled, the user can connect to the guest OS running inside the
# hypervisor through the "kata-runtime exec <sandbox-id>" command.
#debug_console_enabled = true
[netmon]
# If enabled, the network monitoring process gets started when the


@@ -256,6 +256,13 @@ block_device_driver = "@DEFBLOCKSTORAGEDRIVER_FC@"
#
kernel_modules=[]
# Enable debug console.
# If enabled, the user can connect to the guest OS running inside the
# hypervisor through the "kata-runtime exec <sandbox-id>" command.
#debug_console_enabled = true
[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of some additional


@@ -352,6 +352,12 @@ vhost_user_store_path = "@DEFVHOSTUSERSTOREPATH@"
#
kernel_modules=[]
# Enable debug console.
# If enabled, the user can connect to the guest OS running inside the
# hypervisor through the "kata-runtime exec <sandbox-id>" command.
#debug_console_enabled = true
[netmon]
# If enabled, the network monitoring process gets started when the


@@ -375,6 +375,12 @@ vhost_user_store_path = "@DEFVHOSTUSERSTOREPATH@"
#
kernel_modules=[]
# Enable debug console.
# If enabled, the user can connect to the guest OS running inside the
# hypervisor through the "kata-runtime exec <sandbox-id>" command.
#debug_console_enabled = true
[netmon]
# If enabled, the network monitoring process gets started when the


@@ -10,6 +10,7 @@ import (
"os"
"testing"
ktu "github.com/kata-containers/kata-containers/src/runtime/pkg/katatestutils"
"github.com/stretchr/testify/assert"
)
@@ -22,6 +23,9 @@ func TestConsoleFromFile(t *testing.T) {
}
func TestNewConsole(t *testing.T) {
if tc.NotValid(ktu.NeedRoot()) {
t.Skip(testDisabledAsNonRoot)
}
assert := assert.New(t)
console, err := newConsole()
@@ -34,6 +38,9 @@ func TestNewConsole(t *testing.T) {
}
func TestIsTerminal(t *testing.T) {
if tc.NotValid(ktu.NeedRoot()) {
t.Skip(testDisabledAsNonRoot)
}
assert := assert.New(t)
var fd uintptr = 4


@@ -71,6 +71,9 @@ const (
genericCPUFlagsTag = "flags" // nolint: varcheck, unused, deadcode
genericCPUVendorField = "vendor_id" // nolint: varcheck, unused, deadcode
genericCPUModelField = "model name" // nolint: varcheck, unused, deadcode
// If set, do not perform any network checks
noNetworkEnvVar = "KATA_CHECK_NO_NETWORK"
)
// variables rather than consts to allow tests to modify them
@@ -307,14 +310,71 @@ var kataCheckCLICommand = cli.Command{
Usage: "tests if system can run " + project,
Flags: []cli.Flag{
cli.BoolFlag{
Name: "verbose, v",
Usage: "display the list of checks performed",
Name: "check-version-only",
Usage: "Only compare the current and latest available versions (requires network, non-root only)",
},
cli.BoolFlag{
Name: "include-all-releases",
Usage: "Don't filter out pre-release versions",
},
cli.BoolFlag{
Name: "no-network-checks, n",
Usage: "Do not run any checks using the network",
},
cli.BoolFlag{
Name: "only-list-releases",
Usage: "Only list newer available releases (non-root only)",
},
cli.BoolFlag{
Name: "strict, s",
Usage: "perform strict checking",
},
cli.BoolFlag{
Name: "verbose, v",
Usage: "display the list of checks performed",
},
},
Description: fmt.Sprintf(`tests if system can run %s and version is current.
ENVIRONMENT VARIABLES:
- %s: If set to any value, act as if "--no-network-checks" was specified.
EXAMPLES:
- Perform basic checks:
$ %s %s
- Local basic checks only:
$ %s %s --no-network-checks
- Perform further checks:
$ sudo %s %s
- Just check if a newer version is available:
$ %s %s --check-version-only
- List available releases (shows output in format "version;release-date;url"):
$ %s %s --only-list-releases
- List all available releases (includes pre-release versions):
$ %s %s --only-list-releases --include-all-releases
`,
project,
noNetworkEnvVar,
name, checkCmd,
name, checkCmd,
name, checkCmd,
name, checkCmd,
name, checkCmd,
name, checkCmd,
),
Action: func(context *cli.Context) error {
verbose := context.Bool("verbose")
@@ -329,6 +389,28 @@ var kataCheckCLICommand = cli.Command{
span, _ := katautils.Trace(ctx, "kata-check")
defer span.Finish()
if !context.Bool("no-network-checks") && os.Getenv(noNetworkEnvVar) == "" {
cmd := RelCmdCheck
if context.Bool("only-list-releases") {
cmd = RelCmdList
}
if os.Geteuid() == 0 {
kataLog.Warn("Not running network checks as super user")
} else {
err = HandleReleaseVersions(cmd, version, context.Bool("include-all-releases"))
if err != nil {
return err
}
}
}
if context.Bool("check-version-only") || context.Bool("only-list-releases") {
return nil
}
runtimeConfig, ok := context.App.Metadata["runtimeConfig"].(oci.RuntimeConfig)
if !ok {
return errors.New("kata-check: cannot determine runtime config")


@@ -0,0 +1,218 @@
// Copyright (c) 2017-2019 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
//
package main
import (
"fmt"
"io"
"io/ioutil"
"net"
"net/http"
"net/url"
"os"
"strings"
"sync"
"time"
"github.com/containerd/console"
"github.com/kata-containers/kata-containers/src/runtime/pkg/katautils"
clientUtils "github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/agent/protocols/client"
"github.com/pkg/errors"
"github.com/urfave/cli"
)
const (
// Buffer size used when copying I/O streams
bufSize = 1024 * 2
defaultTimeout = 3 * time.Second
subCommandName = "exec"
// command-line parameter names
paramKataMonitorAddr = "kata-monitor-addr"
paramDebugConsolePort = "kata-debug-port"
defaultKernelParamDebugConsoleVPortValue = 1026
defaultParamKataMonitorAddr = "http://localhost:8090"
)
var (
bufPool = sync.Pool{
New: func() interface{} {
buffer := make([]byte, bufSize)
return &buffer
},
}
)
var kataExecCLICommand = cli.Command{
Name: subCommandName,
Usage: "Enter the guest via the debug console",
Flags: []cli.Flag{
cli.StringFlag{
Name: paramKataMonitorAddr,
Usage: "Kata monitor listen address.",
},
cli.Uint64Flag{
Name: paramDebugConsolePort,
Usage: "Port that debug console is listening on.",
},
},
Action: func(context *cli.Context) error {
ctx, err := cliContextToContext(context)
if err != nil {
return err
}
span, _ := katautils.Trace(ctx, subCommandName)
defer span.Finish()
endPoint := context.String(paramKataMonitorAddr)
if endPoint == "" {
endPoint = defaultParamKataMonitorAddr
}
port := context.Uint64(paramDebugConsolePort)
if port == 0 {
port = defaultKernelParamDebugConsoleVPortValue
}
sandboxID := context.Args().Get(0)
if sandboxID == "" {
return fmt.Errorf("SandboxID not found")
}
conn, err := getConn(endPoint, sandboxID, port)
if err != nil {
return err
}
defer conn.Close()
con := console.Current()
defer con.Reset()
if err := con.SetRaw(); err != nil {
return err
}
iostream := &iostream{
conn: conn,
exitch: make(chan struct{}),
closed: false,
}
ioCopy(iostream, con)
<-iostream.exitch
return nil
},
}
func ioCopy(stream *iostream, con console.Console) {
var wg sync.WaitGroup
// stdin
go func() {
p := bufPool.Get().(*[]byte)
defer bufPool.Put(p)
io.CopyBuffer(stream, con, *p)
}()
// stdout
wg.Add(1)
go func() {
p := bufPool.Get().(*[]byte)
defer bufPool.Put(p)
io.CopyBuffer(os.Stdout, stream, *p)
wg.Done()
}()
wg.Wait()
close(stream.exitch)
}
type iostream struct {
conn net.Conn
exitch chan struct{}
closed bool
}
func (s *iostream) Write(data []byte) (n int, err error) {
if s.closed {
return 0, errors.New("stream closed")
}
return s.conn.Write(data)
}
func (s *iostream) Close() error {
if s.closed {
return errors.New("stream closed")
}
err := s.conn.Close()
if err == nil {
s.closed = true
}
return err
}
func (s *iostream) Read(data []byte) (n int, err error) {
if s.closed {
return 0, errors.New("stream closed")
}
return s.conn.Read(data)
}
func getConn(endPoint, sandboxID string, port uint64) (net.Conn, error) {
shimURL := fmt.Sprintf("%s/agent-url?sandbox=%s", endPoint, sandboxID)
resp, err := http.Get(shimURL)
if err != nil {
return nil, err
}
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("Failed to get %s: %d", shimURL, resp.StatusCode)
}
defer resp.Body.Close()
data, err := ioutil.ReadAll(resp.Body)
if err != nil {
return nil, err
}
sock := strings.TrimSuffix(string(data), "\n")
addr, err := url.Parse(sock)
if err != nil {
return nil, err
}
// Validate the scheme and rewrite the agent port to the debug console port
switch addr.Scheme {
case clientUtils.VSockSocketScheme:
// vsock://31513974:1024
cidAndPort := strings.Split(addr.Host, ":")
if len(cidAndPort) != 2 {
return nil, fmt.Errorf("Invalid vsock scheme: %s", sock)
}
shimAddr := fmt.Sprintf("%s:%s:%d", clientUtils.VSockSocketScheme, cidAndPort[0], port)
return clientUtils.VsockDialer(shimAddr, defaultTimeout)
case clientUtils.HybridVSockScheme:
// addr: hvsock:///run/vc/firecracker/340b412c97bf1375cdda56bfa8f18c8a/root/kata.hvsock:1024
hvsocket := strings.Split(addr.Path, ":")
if len(hvsocket) != 2 {
return nil, fmt.Errorf("Invalid hybrid vsock scheme: %s", sock)
}
// hvsock:///run/vc/firecracker/340b412c97bf1375cdda56bfa8f18c8a/root/kata.hvsock
shimAddr := fmt.Sprintf("%s:%s:%d", clientUtils.HybridVSockScheme, hvsocket[0], port)
return clientUtils.HybridVSockDialer(shimAddr, defaultTimeout)
}
return nil, fmt.Errorf("scheme %s not supported", addr.Scheme)
}


@@ -36,6 +36,7 @@ func main() {
m := http.NewServeMux()
m.Handle("/metrics", http.HandlerFunc(km.ProcessMetricsRequest))
m.Handle("/sandboxes", http.HandlerFunc(km.ListSandboxes))
m.Handle("/agent-url", http.HandlerFunc(km.GetAgentURL))
// for debug shim process
m.Handle("/debug/vars", http.HandlerFunc(km.ExpvarHandler))


@@ -125,6 +125,7 @@ var runtimeCommands = []cli.Command{
// Kata Containers specific extensions
kataCheckCLICommand,
kataEnvCLICommand,
kataExecCLICommand,
factoryCLICommand,
}

src/runtime/cli/release.go (new file, 406 lines)

@@ -0,0 +1,406 @@
// Copyright (c) 2020 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
//
package main
import (
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"net/http"
"os"
"strings"
"github.com/blang/semver"
)
type ReleaseCmd int
type releaseDetails struct {
version semver.Version
date string
url string
filename string
}
const (
// A release URL is expected to be prefixed with this value
projectAPIURL = "https://api.github.com/repos/" + projectORG
releasesSuffix = "/releases"
downloadsSuffix = releasesSuffix + "/download"
// Kata 1.x
kata1xRepo = "runtime"
kataLegacyReleaseURL = projectAPIURL + "/" + kata1xRepo + releasesSuffix
kataLegacyDownloadURL = projectURL + "/" + kata1xRepo + downloadsSuffix
// Kata 2.x or newer
kata2xRepo = "kata-containers"
kataReleaseURL = projectAPIURL + "/" + kata2xRepo + releasesSuffix
kataDownloadURL = projectURL + "/" + kata2xRepo + downloadsSuffix
// Environment variable that can be used to override a release URL
ReleaseURLEnvVar = "KATA_RELEASE_URL"
RelCmdList ReleaseCmd = iota
RelCmdCheck ReleaseCmd = iota
msgNoReleases = "No releases available"
msgNoNewerRelease = "No newer release available"
errNoNetChecksAsRoot = "No network checks allowed running as super user"
)
func (c ReleaseCmd) Valid() bool {
switch c {
case RelCmdCheck, RelCmdList:
return true
default:
return false
}
}
func downloadURLIsValid(url string) error {
if url == "" {
return errors.New("URL cannot be blank")
}
if strings.HasPrefix(url, kataDownloadURL) ||
strings.HasPrefix(url, kataLegacyDownloadURL) {
return nil
}
return fmt.Errorf("Download URL %q is not valid", url)
}
func releaseURLIsValid(url string) error {
if url == "" {
return errors.New("URL cannot be blank")
}
if url == kataReleaseURL || url == kataLegacyReleaseURL {
return nil
}
return fmt.Errorf("Release URL %q is not valid", url)
}
func getReleaseURL(currentVersion semver.Version) (url string, err error) {
major := currentVersion.Major
if major == 0 {
return "", fmt.Errorf("invalid current version: %v", currentVersion)
} else if major == 1 {
url = kataLegacyReleaseURL
} else {
url = kataReleaseURL
}
if value := os.Getenv(ReleaseURLEnvVar); value != "" {
url = value
}
if err := releaseURLIsValid(url); err != nil {
return "", err
}
return url, nil
}
func ignoreRelease(release releaseDetails, includeAll bool) bool {
if includeAll {
return false
}
if len(release.version.Pre) > 0 {
// Pre-releases are ignored by default
return true
}
return false
}
// Returns a release version and release object from the specified map.
func makeRelease(release map[string]interface{}) (version string, details releaseDetails, err error) {
key := "tag_name"
version, ok := release[key].(string)
if !ok {
return "", details, fmt.Errorf("failed to find key %s in release data", key)
}
if version == "" {
return "", details, fmt.Errorf("release version cannot be blank")
}
releaseSemver, err := semver.Make(version)
if err != nil {
return "", details, fmt.Errorf("release %q has invalid semver version: %v", version, err)
}
key = "assets"
assetsArray, ok := release[key].([]interface{})
if !ok {
return "", details, fmt.Errorf("failed to find key %s in release version %q data", key, version)
}
if len(assetsArray) == 0 {
// GitHub auto-creates the source assets, but binaries have to
// be built and uploaded for a release.
return "", details, fmt.Errorf("no binary assets for release %q", version)
}
var createDate string
var filename string
var downloadURL string
assets := assetsArray[0]
key = "browser_download_url"
downloadURL, ok = assets.(map[string]interface{})[key].(string)
if !ok {
return "", details, fmt.Errorf("failed to find key %s in release version %q asset data", key, version)
}
if err := downloadURLIsValid(downloadURL); err != nil {
return "", details, err
}
key = "name"
filename, ok = assets.(map[string]interface{})[key].(string)
if !ok {
return "", details, fmt.Errorf("failed to find key %s in release version %q asset data", key, version)
}
if filename == "" {
return "", details, fmt.Errorf("Release %q asset missing filename", version)
}
key = "created_at"
createDate, ok = assets.(map[string]interface{})[key].(string)
if !ok {
return "", details, fmt.Errorf("failed to find key %s in release version %q asset data", key, version)
}
if createDate == "" {
return "", details, fmt.Errorf("Release %q asset missing creation date", version)
}
details = releaseDetails{
version: releaseSemver,
date: createDate,
url: downloadURL,
filename: filename,
}
return version, details, nil
}
func readReleases(releasesArray []map[string]interface{}, includeAll bool) (versions []semver.Version,
releases map[string]releaseDetails) {
releases = make(map[string]releaseDetails)
for _, release := range releasesArray {
version, details, err := makeRelease(release)
// Don't error if makeRelease() fails to construct a release.
// There are many reasons a release may not be considered
// valid, so just ignore the invalid ones.
if err != nil {
kataLog.WithField("version", version).WithError(err).Debug("ignoring invalid release version")
continue
}
if ignoreRelease(details, includeAll) {
continue
}
versions = append(versions, details.version)
releases[version] = details
}
semver.Sort(versions)
return versions, releases
}
// Note: Assumes versions is sorted in ascending order
func findNewestRelease(currentVersion semver.Version, versions []semver.Version) (bool, semver.Version, error) {
var candidates []semver.Version
if len(versions) == 0 {
return false, semver.Version{}, errors.New("no versions available")
}
for _, version := range versions {
if currentVersion.GTE(version) {
// Ignore older releases (and the current one!)
continue
}
candidates = append(candidates, version)
}
count := len(candidates)
if count == 0 {
return false, semver.Version{}, nil
}
return true, candidates[count-1], nil
}
func getReleases(releaseURL string, includeAll bool) ([]semver.Version, map[string]releaseDetails, error) {
kataLog.WithField("url", releaseURL).Info("Looking for releases")
if os.Geteuid() == 0 {
return nil, nil, errors.New(errNoNetChecksAsRoot)
}
client := &http.Client{}
resp, err := client.Get(releaseURL)
if err != nil {
return nil, nil, err
}
defer resp.Body.Close()
releasesArray := []map[string]interface{}{}
bytes, err := ioutil.ReadAll(resp.Body)
if err != nil {
return nil, nil, fmt.Errorf("failed to read release details: %v", err)
}
if err := json.Unmarshal(bytes, &releasesArray); err != nil {
return nil, nil, fmt.Errorf("failed to unpack release details: %v", err)
}
versions, releases := readReleases(releasesArray, includeAll)
return versions, releases, nil
}
func getNewReleaseType(current semver.Version, latest semver.Version) (string, error) {
if current.GT(latest) {
return "", fmt.Errorf("current version %s newer than latest %s", current, latest)
}
if current.EQ(latest) {
return "", fmt.Errorf("current version %s and latest are same", current)
}
var desc string
if latest.Major > current.Major {
if len(latest.Pre) > 0 {
desc = "major pre-release"
} else {
desc = "major"
}
} else if latest.Minor > current.Minor {
if len(latest.Pre) > 0 {
desc = "minor pre-release"
} else {
desc = "minor"
}
} else if latest.Patch > current.Patch {
if len(latest.Pre) > 0 {
desc = "patch pre-release"
} else {
desc = "patch"
}
} else if latest.Patch == current.Patch && len(latest.Pre) > 0 {
desc = "pre-release"
} else {
return "", fmt.Errorf("BUG: unhandled scenario: current version: %s, latest version: %v", current, latest)
}
return desc, nil
}
func showLatestRelease(output *os.File, current semver.Version, details releaseDetails) error {
latest := details.version
desc, err := getNewReleaseType(current, latest)
if err != nil {
return err
}
fmt.Fprintf(output, "Newer %s release available: %s (url: %v, date: %v)\n",
desc,
details.version, details.url, details.date)
return nil
}
func listReleases(output *os.File, current semver.Version, versions []semver.Version, releases map[string]releaseDetails) error {
for _, version := range versions {
details, ok := releases[version.String()]
if !ok {
return fmt.Errorf("Release %v has no details", version)
}
fmt.Fprintf(output, "%s;%s;%s\n", version, details.date, details.url)
}
return nil
}
func HandleReleaseVersions(cmd ReleaseCmd, currentVersion string, includeAll bool) error {
if !cmd.Valid() {
return fmt.Errorf("invalid release command: %v", cmd)
}
output := os.Stdout
currentSemver, err := semver.Make(currentVersion)
if err != nil {
return fmt.Errorf("BUG: Current version of %s (%s) has invalid SemVer version: %v", name, currentVersion, err)
}
releaseURL, err := getReleaseURL(currentSemver)
if err != nil {
return err
}
versions, releases, err := getReleases(releaseURL, includeAll)
if err != nil {
return err
}
if cmd == RelCmdList {
return listReleases(output, currentSemver, versions, releases)
}
if len(versions) == 0 {
fmt.Fprintf(output, "%s\n", msgNoReleases)
return nil
}
available, newest, err := findNewestRelease(currentSemver, versions)
if err != nil {
return err
}
if !available {
fmt.Fprintf(output, "%s\n", msgNoNewerRelease)
return nil
}
details, ok := releases[newest.String()]
if !ok {
return fmt.Errorf("Release %v has no details", newest)
}
return showLatestRelease(output, currentSemver, details)
}


@@ -0,0 +1,498 @@
// Copyright (c) 2020 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
//
package main
import (
"fmt"
"os"
"strings"
"testing"
"github.com/blang/semver"
"github.com/stretchr/testify/assert"
)
var currentSemver semver.Version
var expectedReleasesURL string
func init() {
var err error
currentSemver, err = semver.Make(version)
if err != nil {
panic(fmt.Sprintf("failed to create semver for testing: %v", err))
}
if currentSemver.Major == 1 {
expectedReleasesURL = kataLegacyReleaseURL
} else {
expectedReleasesURL = kataReleaseURL
}
}
func TestReleaseCmd(t *testing.T) {
assert := assert.New(t)
for i, value := range []ReleaseCmd{RelCmdCheck, RelCmdList} {
assert.True(value.Valid(), "test[%d]: %+v", i, value)
}
for i, value := range []int{-1, 2, 42, 255} {
invalid := ReleaseCmd(i)
assert.False(invalid.Valid(), "test[%d]: %+v", i, value)
}
}
func TestGetReleaseURL(t *testing.T) {
assert := assert.New(t)
const kata1xURL = "https://api.github.com/repos/kata-containers/runtime/releases"
const kata2xURL = "https://api.github.com/repos/kata-containers/kata-containers/releases"
type testData struct {
currentVersion string
expectError bool
expectedURL string
}
data := []testData{
{"0.0.0", true, ""},
{"1.0.0", false, kata1xURL},
{"1.9999.9999", false, kata1xURL},
{"2.0.0-alpha3", false, kata2xURL},
{"2.9999.9999", false, kata2xURL},
}
for i, d := range data {
msg := fmt.Sprintf("test[%d]: %+v", i, d)
ver, err := semver.Make(d.currentVersion)
msg = fmt.Sprintf("%s, version: %v, error: %v", msg, ver, err)
assert.NoError(err, msg)
url, err := getReleaseURL(ver)
if d.expectError {
assert.Error(err, msg)
} else {
assert.NoError(err, msg)
assert.Equal(url, d.expectedURL, msg)
assert.True(strings.HasPrefix(url, projectAPIURL), msg)
}
}
url, err := getReleaseURL(currentSemver)
assert.NoError(err)
assert.Equal(url, expectedReleasesURL)
assert.True(strings.HasPrefix(url, projectAPIURL))
}
func TestGetReleaseURLEnvVar(t *testing.T) {
assert := assert.New(t)
type testData struct {
envVarValue string
expectError bool
expectedURL string
}
data := []testData{
{"", false, expectedReleasesURL},
{"http://google.com", true, ""},
{"https://katacontainers.io", true, ""},
{"https://github.com/kata-containers/runtime/releases/latest", true, ""},
{"https://github.com/kata-containers/kata-containers/releases/latest", true, ""},
{expectedReleasesURL, false, expectedReleasesURL},
}
assert.Equal(os.Getenv("KATA_RELEASE_URL"), "")
defer os.Setenv("KATA_RELEASE_URL", "")
for i, d := range data {
msg := fmt.Sprintf("test[%d]: %+v", i, d)
err := os.Setenv("KATA_RELEASE_URL", d.envVarValue)
msg = fmt.Sprintf("%s, error: %v", msg, err)
assert.NoError(err, msg)
url, err := getReleaseURL(currentSemver)
if d.expectError {
assert.Errorf(err, msg)
} else {
assert.NoErrorf(err, msg)
assert.Equal(d.expectedURL, url, msg)
}
}
}
func TestMakeRelease(t *testing.T) {
assert := assert.New(t)
type testData struct {
release map[string]interface{}
expectError bool
expectedVersion string
expectedDetails releaseDetails
}
invalidRel1 := map[string]interface{}{"foo": 1}
invalidRel2 := map[string]interface{}{"foo": "bar"}
invalidRel3 := map[string]interface{}{"foo": true}
testDate := "2020-09-01T22:10:44Z"
testRelVersion := "1.2.3"
testFilename := "kata-static-1.12.0-alpha1-x86_64.tar.xz"
testURL := fmt.Sprintf("https://github.com/kata-containers/runtime/releases/download/%s/%s", testRelVersion, testFilename)
testSemver, err := semver.Make(testRelVersion)
assert.NoError(err)
invalidRelMissingVersion := map[string]interface{}{}
invalidRelInvalidVersion := map[string]interface{}{
"tag_name": "not.valid.semver",
}
invalidRelMissingAssets := map[string]interface{}{
"tag_name": testRelVersion,
"name": testFilename,
"assets": []interface{}{},
}
invalidAssetsMissingURL := []interface{}{
map[string]interface{}{
"name": testFilename,
"created_at": testDate,
},
}
invalidAssetsMissingFile := []interface{}{
map[string]interface{}{
"browser_download_url": testURL,
"created_at": testDate,
},
}
invalidAssetsMissingDate := []interface{}{
map[string]interface{}{
"name": testFilename,
"browser_download_url": testURL,
},
}
validAssets := []interface{}{
map[string]interface{}{
"browser_download_url": testURL,
"name": testFilename,
"created_at": testDate,
},
}
invalidRelAssetsMissingURL := map[string]interface{}{
"tag_name": testRelVersion,
"name": testFilename,
"assets": invalidAssetsMissingURL,
}
invalidRelAssetsMissingFile := map[string]interface{}{
"tag_name": testRelVersion,
"name": testFilename,
"assets": invalidAssetsMissingFile,
}
invalidRelAssetsMissingDate := map[string]interface{}{
"tag_name": testRelVersion,
"name": testFilename,
"assets": invalidAssetsMissingDate,
}
validRel := map[string]interface{}{
"tag_name": testRelVersion,
"name": testFilename,
"assets": validAssets,
}
validReleaseDetails := releaseDetails{
version: testSemver,
date: testDate,
url: testURL,
filename: testFilename,
}
data := []testData{
{invalidRel1, true, "", releaseDetails{}},
{invalidRel2, true, "", releaseDetails{}},
{invalidRel3, true, "", releaseDetails{}},
{invalidRelMissingVersion, true, "", releaseDetails{}},
{invalidRelInvalidVersion, true, "", releaseDetails{}},
{invalidRelMissingAssets, true, "", releaseDetails{}},
{invalidRelAssetsMissingURL, true, "", releaseDetails{}},
{invalidRelAssetsMissingFile, true, "", releaseDetails{}},
{invalidRelAssetsMissingDate, true, "", releaseDetails{}},
{validRel, false, testRelVersion, validReleaseDetails},
}
for i, d := range data {
msg := fmt.Sprintf("test[%d]: %+v", i, d)
version, details, err := makeRelease(d.release)
msg = fmt.Sprintf("%s, version: %v, details: %+v, error: %v", msg, version, details, err)
if d.expectError {
assert.Error(err, msg)
continue
}
assert.NoError(err, msg)
assert.Equal(d.expectedVersion, version, msg)
assert.Equal(d.expectedDetails, details, msg)
}
}
func TestReleaseURLIsValid(t *testing.T) {
assert := assert.New(t)
type testData struct {
url string
expectError bool
}
data := []testData{
{"", true},
{"foo", true},
{"foo bar", true},
{"https://google.com", true},
{projectAPIURL, true},
{kataLegacyReleaseURL, false},
{kataReleaseURL, false},
}
for i, d := range data {
msg := fmt.Sprintf("test[%d]: %+v", i, d)
err := releaseURLIsValid(d.url)
msg = fmt.Sprintf("%s, error: %v", msg, err)
if d.expectError {
assert.Error(err, msg)
} else {
assert.NoError(err, msg)
}
}
}
func TestDownloadURLIsValid(t *testing.T) {
assert := assert.New(t)
type testData struct {
url string
expectError bool
}
validKata1xDownload := "https://github.com/kata-containers/runtime/releases/download/1.12.0-alpha1/kata-static-1.12.0-alpha1-x86_64.tar.xz"
validKata2xDownload := "https://github.com/kata-containers/kata-containers/releases/download/2.0.0-alpha3/kata-static-2.0.0-alpha3-x86_64.tar.xz"
data := []testData{
{"", true},
{"foo", true},
{"foo bar", true},
{"https://google.com", true},
{projectURL, true},
{validKata1xDownload, false},
{validKata2xDownload, false},
}
for i, d := range data {
msg := fmt.Sprintf("test[%d]: %+v", i, d)
err := downloadURLIsValid(d.url)
msg = fmt.Sprintf("%s, error: %v", msg, err)
if d.expectError {
assert.Error(err, msg)
} else {
assert.NoError(err, msg)
}
}
}
func TestIgnoreRelease(t *testing.T) {
assert := assert.New(t)
type testData struct {
details releaseDetails
includeAll bool
expectIgnore bool
}
verWithoutPreRelease, err := semver.Make("2.0.0")
assert.NoError(err)
verWithPreRelease, err := semver.Make("2.0.0-alpha3")
assert.NoError(err)
relWithoutPreRelease := releaseDetails{
version: verWithoutPreRelease,
}
relWithPreRelease := releaseDetails{
version: verWithPreRelease,
}
data := []testData{
{relWithoutPreRelease, false, false},
{relWithoutPreRelease, true, false},
{relWithPreRelease, false, true},
{relWithPreRelease, true, false},
}
for i, d := range data {
msg := fmt.Sprintf("test[%d]: %+v", i, d)
ignore := ignoreRelease(d.details, d.includeAll)
if d.expectIgnore {
assert.True(ignore, msg)
} else {
assert.False(ignore, msg)
}
}
}
func TestGetReleases(t *testing.T) {
assert := assert.New(t)
url := "foo"
expectedErrMsg := "unsupported protocol scheme"
for _, includeAll := range []bool{true, false} {
euid := os.Geteuid()
msg := fmt.Sprintf("includeAll: %v, euid: %v", includeAll, euid)
_, _, err := getReleases(url, includeAll)
msg = fmt.Sprintf("%s, error: %v", msg, err)
assert.Error(err, msg)
if euid == 0 {
assert.Equal(err.Error(), errNoNetChecksAsRoot, msg)
} else {
assert.True(strings.Contains(err.Error(), expectedErrMsg), msg)
}
}
}
func TestFindNewestRelease(t *testing.T) {
assert := assert.New(t)
type testData struct {
versions []semver.Version
currentVer semver.Version
expectVersion semver.Version
expectError bool
expectAvailable bool
}
ver1, err := semver.Make("1.11.1")
assert.NoError(err)
ver2, err := semver.Make("1.11.3")
assert.NoError(err)
ver3, err := semver.Make("2.0.0")
assert.NoError(err)
data := []testData{
{[]semver.Version{}, semver.Version{}, semver.Version{}, true, false},
{[]semver.Version{}, ver1, semver.Version{}, true, false},
{[]semver.Version{ver1}, ver1, semver.Version{}, false, false},
{[]semver.Version{ver1}, ver2, semver.Version{}, false, false},
{[]semver.Version{ver2}, ver1, ver2, false, true},
{[]semver.Version{ver3}, ver1, ver3, false, true},
{[]semver.Version{ver2, ver3}, ver1, ver3, false, true},
{[]semver.Version{ver1, ver3}, ver2, ver3, false, true},
{[]semver.Version{ver1}, ver2, semver.Version{}, false, false},
}
for i, d := range data {
msg := fmt.Sprintf("test[%d]: %+v", i, d)
available, version, err := findNewestRelease(d.currentVer, d.versions)
msg = fmt.Sprintf("%s, available: %v, version: %v, error: %v", msg, available, version, err)
if d.expectError {
assert.Error(err, msg)
continue
}
assert.NoError(err, msg)
if !d.expectAvailable {
assert.False(available, msg)
continue
}
assert.Equal(d.expectVersion, version)
}
}
func TestGetNewReleaseType(t *testing.T) {
assert := assert.New(t)
type testData struct {
currentVer string
latestVer string
expectError bool
result string
}
data := []testData{
{"2.0.0-alpha3", "1.0.0", true, ""},
{"1.0.0", "1.0.0", true, ""},
{"2.0.0", "1.0.0", true, ""},
{"1.0.0", "2.0.0", false, "major"},
{"2.0.0-alpha3", "2.0.0-alpha4", false, "pre-release"},
{"1.0.0", "2.0.0-alpha3", false, "major pre-release"},
{"1.0.0", "1.1.2", false, "minor"},
{"1.0.0", "1.1.2-pre2", false, "minor pre-release"},
{"1.0.0", "1.1.2-foo", false, "minor pre-release"},
{"1.0.0", "1.0.3", false, "patch"},
{"1.0.0-beta29", "1.0.0-beta30", false, "pre-release"},
{"1.0.0", "1.0.3-alpha99.1b", false, "patch pre-release"},
}
for i, d := range data {
msg := fmt.Sprintf("test[%d]: %+v", i, d)
current, err := semver.Make(d.currentVer)
msg = fmt.Sprintf("%s, current: %v, error: %v", msg, current, err)
assert.NoError(err, msg)
latest, err := semver.Make(d.latestVer)
assert.NoError(err, msg)
desc, err := getNewReleaseType(current, latest)
if d.expectError {
assert.Error(err, msg)
continue
}
assert.NoError(err, msg)
assert.Equal(d.result, desc, msg)
}
}


@@ -179,6 +179,7 @@ func checkAndMount(s *service, r *taskAPI.CreateTaskRequest) (bool, error) {
if len(r.Rootfs) == 1 {
m := r.Rootfs[0]
// Plug the block backed rootfs directly instead of mounting it.
if katautils.IsBlockDevice(m.Source) && !s.config.HypervisorConfig.DisableBlockDeviceUse {
return false, nil
}


@@ -8,6 +8,7 @@ package containerdshim
import (
"context"
"expvar"
"fmt"
"io"
"net/http"
"net/http/pprof"
@@ -34,6 +35,18 @@ var (
shimMgtLog = shimLog.WithField("subsystem", "shim-management")
)
// agentURL writes the sandbox's agent URL to the HTTP response
func (s *service) agentURL(w http.ResponseWriter, r *http.Request) {
url, err := s.sandbox.GetAgentURL()
if err != nil {
w.WriteHeader(http.StatusInternalServerError)
w.Write([]byte(err.Error()))
return
}
fmt.Fprint(w, url)
}
// serveMetrics handle /metrics requests
func (s *service) serveMetrics(w http.ResponseWriter, r *http.Request) {
@@ -139,6 +152,7 @@ func (s *service) startManagementServer(ctx context.Context, ociSpec *specs.Spec
// bind handler
m := http.NewServeMux()
m.Handle("/metrics", http.HandlerFunc(s.serveMetrics))
m.Handle("/agent-url", http.HandlerFunc(s.agentURL))
s.mountPprofHandle(m, ociSpec)
// register shim metrics


@@ -68,10 +68,10 @@ func TestStatsSandbox(t *testing.T) {
StatsFunc: getSandboxCPUFunc(1000, 100000),
StatsContainerFunc: getStatsContainerCPUFunc(100, 200, 10000, 20000),
MockContainers: []*vcmock.Container{
&vcmock.Container{
{
MockID: "foo",
},
&vcmock.Container{
{
MockID: "bar",
},
},


@@ -466,6 +466,17 @@ show_container_mgr_details()
subheading "$title"
run_cmd_and_show_quoted_output "" "podman --version"
run_cmd_and_show_quoted_output "" "podman system info"
local cmd file
for file in {/etc,/usr/share}/containers/*.{conf,json}; do
if [ -e "$file" ]; then
cmd="cat $file"
run_cmd_and_show_quoted_output "" "$cmd"
fi
done
end_section
fi


@@ -10,7 +10,6 @@ import (
"compress/gzip"
"io"
"io/ioutil"
"net"
"net/http"
"path/filepath"
"sort"
@@ -236,33 +235,7 @@ func (km *KataMonitor) aggregateSandboxMetrics(encoder expfmt.Encoder) error {
// getSandboxMetrics will get sandbox's metrics from shim
func (km *KataMonitor) getSandboxMetrics(sandboxID, namespace string) ([]*dto.MetricFamily, error) {
socket, err := km.getMonitorAddress(sandboxID, namespace)
if err != nil {
return nil, err
}
transport := &http.Transport{
DisableKeepAlives: true,
Dial: func(proto, addr string) (conn net.Conn, err error) {
return net.Dial("unix", "\x00"+socket)
},
}
client := http.Client{
Timeout: 3 * time.Second,
Transport: transport,
}
resp, err := client.Get("http://shim/metrics")
if err != nil {
return nil, err
}
defer func() {
resp.Body.Close()
}()
body, err := ioutil.ReadAll(resp.Body)
body, err := km.doGet(sandboxID, namespace, defaultTimeout, "metrics")
if err != nil {
return nil, err
}


@@ -80,6 +80,28 @@ func (km *KataMonitor) initSandboxCache() error {
return nil
}
// GetAgentURL returns the agent URL of the given sandbox
func (km *KataMonitor) GetAgentURL(w http.ResponseWriter, r *http.Request) {
sandboxID, err := getSandboxIdFromReq(r)
if err != nil {
commonServeError(w, http.StatusBadRequest, err)
return
}
namespace, err := km.getSandboxNamespace(sandboxID)
if err != nil {
commonServeError(w, http.StatusBadRequest, err)
return
}
data, err := km.doGet(sandboxID, namespace, defaultTimeout, "agent-url")
if err != nil {
commonServeError(w, http.StatusBadRequest, err)
return
}
fmt.Fprintln(w, string(data))
}
// ListSandboxes lists all sandboxes running in Kata
func (km *KataMonitor) ListSandboxes(w http.ResponseWriter, r *http.Request) {
sandboxes := km.getSandboxList()


@@ -12,14 +12,6 @@ import (
"net/http"
)
func getSandboxIdFromReq(r *http.Request) (string, error) {
sandbox := r.URL.Query().Get("sandbox")
if sandbox != "" {
return sandbox, nil
}
return "", fmt.Errorf("sandbox not found in %+v", r.URL.Query())
}
func serveError(w http.ResponseWriter, status int, txt string) {
w.Header().Set("Content-Type", "text/plain; charset=utf-8")
w.Header().Set("X-Go-Pprof", "1")


@@ -0,0 +1,81 @@
// Copyright (c) 2020 Ant Financial
//
// SPDX-License-Identifier: Apache-2.0
//
package katamonitor
import (
"fmt"
"io/ioutil"
"net"
"net/http"
"time"
)
const (
defaultTimeout = 3 * time.Second
)
func commonServeError(w http.ResponseWriter, status int, err error) {
w.Header().Set("Content-Type", "text/plain; charset=utf-8")
w.WriteHeader(status)
if err != nil {
fmt.Fprintln(w, err.Error())
}
}
func getSandboxIdFromReq(r *http.Request) (string, error) {
sandbox := r.URL.Query().Get("sandbox")
if sandbox != "" {
return sandbox, nil
}
return "", fmt.Errorf("sandbox not found in %+v", r.URL.Query())
}
func (km *KataMonitor) buildShimClient(sandboxID, namespace string, timeout time.Duration) (*http.Client, error) {
socket, err := km.getMonitorAddress(sandboxID, namespace)
if err != nil {
return nil, err
}
transport := &http.Transport{
DisableKeepAlives: true,
Dial: func(proto, addr string) (conn net.Conn, err error) {
return net.Dial("unix", "\x00"+socket)
},
}
client := &http.Client{
Transport: transport,
}
if timeout > 0 {
client.Timeout = timeout
}
return client, nil
}
func (km *KataMonitor) doGet(sandboxID, namespace string, timeoutInSeconds time.Duration, urlPath string) ([]byte, error) {
client, err := km.buildShimClient(sandboxID, namespace, timeoutInSeconds)
if err != nil {
return nil, err
}
resp, err := client.Get(fmt.Sprintf("http://shim/%s", urlPath))
if err != nil {
return nil, err
}
defer func() {
resp.Body.Close()
}()
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return nil, err
}
return body, nil
}


@@ -132,11 +132,12 @@ type runtime struct {
}
type agent struct {
Debug bool `toml:"enable_debug"`
Tracing bool `toml:"enable_tracing"`
TraceMode string `toml:"trace_mode"`
TraceType string `toml:"trace_type"`
KernelModules []string `toml:"kernel_modules"`
Debug bool `toml:"enable_debug"`
Tracing bool `toml:"enable_tracing"`
TraceMode string `toml:"trace_mode"`
TraceType string `toml:"trace_type"`
KernelModules []string `toml:"kernel_modules"`
DebugConsoleEnabled bool `toml:"debug_console_enabled"`
}
type netmon struct {
@@ -441,6 +442,10 @@ func (h hypervisor) getIOMMUPlatform() bool {
return h.IOMMUPlatform
}
func (a agent) debugConsoleEnabled() bool {
return a.DebugConsoleEnabled
}
func (a agent) debug() bool {
return a.Debug
}
@@ -866,23 +871,15 @@ func updateRuntimeConfigHypervisor(configPath string, tomlConf tomlConfig, confi
}
func updateRuntimeConfigAgent(configPath string, tomlConf tomlConfig, config *oci.RuntimeConfig, builtIn bool) error {
if builtIn {
config.AgentConfig = vc.KataAgentConfig{
LongLiveConn: true,
Debug: config.AgentConfig.Debug,
KernelModules: config.AgentConfig.KernelModules,
}
return nil
}
for _, agent := range tomlConf.Agent {
config.AgentConfig = vc.KataAgentConfig{
Debug: agent.debug(),
Trace: agent.trace(),
TraceMode: agent.traceMode(),
TraceType: agent.traceType(),
KernelModules: agent.kernelModules(),
LongLiveConn: true,
Debug: agent.debug(),
Trace: agent.trace(),
TraceMode: agent.traceMode(),
TraceType: agent.traceType(),
KernelModules: agent.kernelModules(),
EnableDebugConsole: agent.debugConsoleEnabled(),
}
}
@@ -1026,12 +1023,10 @@ func initConfig() (config oci.RuntimeConfig, err error) {
return oci.RuntimeConfig{}, err
}
defaultAgentConfig := vc.KataAgentConfig{}
config = oci.RuntimeConfig{
HypervisorType: defaultHypervisor,
HypervisorConfig: GetDefaultHypervisorConfig(),
AgentConfig: defaultAgentConfig,
AgentConfig: vc.KataAgentConfig{},
}
return config, nil


@@ -167,7 +167,9 @@ func createAllRuntimeConfigFiles(dir, hypervisor string) (config testRuntimeConf
VirtioFSCache: defaultVirtioFSCacheMode,
}
agentConfig := vc.KataAgentConfig{}
agentConfig := vc.KataAgentConfig{
LongLiveConn: true,
}
netmonConfig := vc.NetmonConfig{
Path: netmonPath,
@@ -519,7 +521,8 @@ func TestMinimalRuntimeConfig(t *testing.T) {
# Runtime configuration file
[agent.kata]
debug_console_enabled=true
kernel_modules=["a", "b", "c"]
[netmon]
path = "` + netmonPath + `"
`
@@ -576,7 +579,11 @@ func TestMinimalRuntimeConfig(t *testing.T) {
VirtioFSCache: defaultVirtioFSCacheMode,
}
expectedAgentConfig := vc.KataAgentConfig{}
expectedAgentConfig := vc.KataAgentConfig{
LongLiveConn: true,
EnableDebugConsole: true,
KernelModules: []string{"a", "b", "c"},
}
expectedNetmonConfig := vc.NetmonConfig{
Path: netmonPath,


@@ -105,7 +105,7 @@ func TestAttachVFIODevice(t *testing.T) {
}
tmpDir, err := ioutil.TempDir("", "")
assert.Nil(t, err)
os.RemoveAll(tmpDir)
defer os.RemoveAll(tmpDir)
testFDIOGroup := "2"
testDeviceBDFPath := "0000:00:1c.0"
@@ -114,15 +114,27 @@ func TestAttachVFIODevice(t *testing.T) {
err = os.MkdirAll(devicesDir, dirMode)
assert.Nil(t, err)
deviceFile := filepath.Join(devicesDir, testDeviceBDFPath)
_, err = os.Create(deviceFile)
deviceBDFDir := filepath.Join(devicesDir, testDeviceBDFPath)
err = os.MkdirAll(deviceBDFDir, dirMode)
assert.Nil(t, err)
deviceClassFile := filepath.Join(deviceBDFDir, "class")
_, err = os.Create(deviceClassFile)
assert.Nil(t, err)
deviceConfigFile := filepath.Join(deviceBDFDir, "config")
_, err = os.Create(deviceConfigFile)
assert.Nil(t, err)
savedIOMMUPath := config.SysIOMMUPath
config.SysIOMMUPath = tmpDir
savedSysBusPciDevicesPath := config.SysBusPciDevicesPath
config.SysBusPciDevicesPath = devicesDir
defer func() {
config.SysIOMMUPath = savedIOMMUPath
config.SysBusPciDevicesPath = savedSysBusPciDevicesPath
}()
path := filepath.Join(vfioPath, testFDIOGroup)
@@ -220,7 +232,7 @@ func TestAttachVhostUserBlkDevice(t *testing.T) {
vhostUserStorePath: tmpDir,
}
assert.Nil(t, err)
os.RemoveAll(tmpDir)
defer os.RemoveAll(tmpDir)
vhostUserDevNodePath := filepath.Join(tmpDir, "/block/devices/")
vhostUserSockPath := filepath.Join(tmpDir, "/block/sockets/")


@@ -75,7 +75,7 @@ func TestIsVhostUserBlk(t *testing.T) {
isVhostUserBlk := isVhostUserBlk(
config.DeviceInfo{
DevType: d.devType,
Major: d.major,
Major: d.major,
})
assert.Equal(t, d.expected, isVhostUserBlk)
}
@@ -100,7 +100,7 @@ func TestIsVhostUserSCSI(t *testing.T) {
isVhostUserSCSI := isVhostUserSCSI(
config.DeviceInfo{
DevType: d.devType,
Major: d.major,
Major: d.major,
})
assert.Equal(t, d.expected, isVhostUserSCSI)
}


@@ -51,5 +51,5 @@ This will:
# Submitting changes
For details on the format and how to submit changes, refer to the
[Contributing](../../CONTRIBUTING.md) document.
[Contributing](../../../../CONTRIBUTING.md) document.


@@ -605,13 +605,13 @@ func (fc *firecracker) fcSetLogger() error {
}
// listen to log fifo file and transfer error info
jailedLogFifo, err := fc.fcListenToFifo(fcLogFifo)
jailedLogFifo, err := fc.fcListenToFifo(fcLogFifo, nil)
if err != nil {
return fmt.Errorf("Failed setting log: %s", err)
}
// listen to metrics file and transfer error info
jailedMetricsFifo, err := fc.fcListenToFifo(fcMetricsFifo)
jailedMetricsFifo, err := fc.fcListenToFifo(fcMetricsFifo, fc.updateMetrics)
if err != nil {
return fmt.Errorf("Failed setting log: %s", err)
}
@@ -625,7 +625,18 @@ func (fc *firecracker) fcSetLogger() error {
return err
}
func (fc *firecracker) fcListenToFifo(fifoName string) (string, error) {
func (fc *firecracker) updateMetrics(line string) {
var fm FirecrackerMetrics
if err := json.Unmarshal([]byte(line), &fm); err != nil {
fc.Logger().WithError(err).WithField("data", line).Error("failed to unmarshal fc metrics")
return
}
updateFirecrackerMetrics(&fm)
}
type fifoConsumer func(string)
func (fc *firecracker) fcListenToFifo(fifoName string, consumer fifoConsumer) (string, error) {
fcFifoPath := filepath.Join(fc.vmPath, fifoName)
fcFifo, err := fifo.OpenFifo(context.Background(), fcFifoPath, syscall.O_CREAT|syscall.O_RDONLY|syscall.O_NONBLOCK, 0)
if err != nil {
@@ -640,9 +651,13 @@ func (fc *firecracker) fcListenToFifo(fifoName string) (string, error) {
go func() {
scanner := bufio.NewScanner(fcFifo)
for scanner.Scan() {
fc.Logger().WithFields(logrus.Fields{
"fifoName": fifoName,
"contents": scanner.Text()}).Error("firecracker failed")
if consumer != nil {
consumer(scanner.Text())
} else {
fc.Logger().WithFields(logrus.Fields{
"fifoName": fifoName,
"contents": scanner.Text()}).Debug("read firecracker fifo")
}
}
if err := scanner.Err(); err != nil {
@@ -735,6 +750,9 @@ func (fc *firecracker) fcInitConfiguration() error {
}
}
// register firecracker specific metrics
registerFirecrackerMetrics()
return nil
}


@@ -0,0 +1,742 @@
// Copyright (c) 2020 Ant Group
//
// SPDX-License-Identifier: Apache-2.0
//
package virtcontainers
import (
"github.com/prometheus/client_golang/prometheus"
)
const fcMetricsNS = "kata_firecracker"
// Prometheus metrics exposed by Firecracker.
var (
apiServerMetrics = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Namespace: fcMetricsNS,
Name: "api_server",
Help: "Metrics related to the internal API server.",
},
[]string{"item"},
)
blockDeviceMetrics = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Namespace: fcMetricsNS,
Name: "block",
Help: "Block Device associated metrics.",
},
[]string{"item"},
)
getRequestsMetrics = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Namespace: fcMetricsNS,
Name: "get_api_requests",
Help: "Metrics specific to GET API Requests for counting user triggered actions and/or failures.",
},
[]string{"item"},
)
i8042DeviceMetrics = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Namespace: fcMetricsNS,
Name: "i8042",
Help: "Metrics specific to the i8042 device.",
},
[]string{"item"},
)
performanceMetrics = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Namespace: fcMetricsNS,
Name: "latencies_us",
Help: "Performance metrics related for the moment only to snapshots.",
},
[]string{"item"},
)
loggerSystemMetrics = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Namespace: fcMetricsNS,
Name: "logger",
Help: "Metrics for the logging subsystem.",
},
[]string{"item"},
)
mmdsMetrics = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Namespace: fcMetricsNS,
Name: "mmds",
Help: "Metrics for the MMDS functionality.",
},
[]string{"item"},
)
netDeviceMetrics = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Namespace: fcMetricsNS,
Name: "net",
Help: "Network-related metrics.",
},
[]string{"item"},
)
patchRequestsMetrics = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Namespace: fcMetricsNS,
Name: "patch_api_requests",
Help: "Metrics specific to PATCH API Requests for counting user triggered actions and/or failures.",
},
[]string{"item"},
)
putRequestsMetrics = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Namespace: fcMetricsNS,
Name: "put_api_requests",
Help: "Metrics specific to PUT API Requests for counting user triggered actions and/or failures.",
},
[]string{"item"},
)
rTCDeviceMetrics = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Namespace: fcMetricsNS,
Name: "rtc",
Help: "Metrics specific to the RTC device.",
},
[]string{"item"},
)
seccompMetrics = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Namespace: fcMetricsNS,
Name: "seccomp",
Help: "Metrics for the seccomp filtering.",
},
[]string{"item"},
)
vcpuMetrics = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Namespace: fcMetricsNS,
Name: "vcpu",
Help: "Metrics specific to VCPUs' mode of functioning.",
},
[]string{"item"},
)
vmmMetrics = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Namespace: fcMetricsNS,
Name: "vmm",
Help: "Metrics specific to the machine manager as a whole.",
},
[]string{"item"},
)
serialDeviceMetrics = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Namespace: fcMetricsNS,
Name: "uart",
Help: "Metrics specific to the UART device.",
},
[]string{"item"},
)
signalMetrics = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Namespace: fcMetricsNS,
Name: "signals",
Help: "Metrics related to signals.",
},
[]string{"item"},
)
vsockDeviceMetrics = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Namespace: fcMetricsNS,
Name: "vsock",
Help: "Vsock-related metrics.",
},
[]string{"item"},
)
)
// registerFirecrackerMetrics registers all metrics with prometheus.
func registerFirecrackerMetrics() {
prometheus.MustRegister(apiServerMetrics)
prometheus.MustRegister(blockDeviceMetrics)
prometheus.MustRegister(getRequestsMetrics)
prometheus.MustRegister(i8042DeviceMetrics)
prometheus.MustRegister(performanceMetrics)
prometheus.MustRegister(loggerSystemMetrics)
prometheus.MustRegister(mmdsMetrics)
prometheus.MustRegister(netDeviceMetrics)
prometheus.MustRegister(patchRequestsMetrics)
prometheus.MustRegister(putRequestsMetrics)
prometheus.MustRegister(rTCDeviceMetrics)
prometheus.MustRegister(seccompMetrics)
prometheus.MustRegister(vcpuMetrics)
prometheus.MustRegister(vmmMetrics)
prometheus.MustRegister(serialDeviceMetrics)
prometheus.MustRegister(signalMetrics)
prometheus.MustRegister(vsockDeviceMetrics)
}
// updateFirecrackerMetrics updates all metrics to the latest values.
func updateFirecrackerMetrics(fm *FirecrackerMetrics) {
// set metrics for ApiServerMetrics
apiServerMetrics.WithLabelValues("process_startup_time_us").Set(float64(fm.ApiServer.ProcessStartupTimeUs))
apiServerMetrics.WithLabelValues("process_startup_time_cpu_us").Set(float64(fm.ApiServer.ProcessStartupTimeCpuUs))
apiServerMetrics.WithLabelValues("sync_response_fails").Set(float64(fm.ApiServer.SyncResponseFails))
apiServerMetrics.WithLabelValues("sync_vmm_send_timeout_count").Set(float64(fm.ApiServer.SyncVmmSendTimeoutCount))
// set metrics for BlockDeviceMetrics
blockDeviceMetrics.WithLabelValues("activate_fails").Set(float64(fm.Block.ActivateFails))
blockDeviceMetrics.WithLabelValues("cfg_fails").Set(float64(fm.Block.CfgFails))
blockDeviceMetrics.WithLabelValues("no_avail_buffer").Set(float64(fm.Block.NoAvailBuffer))
blockDeviceMetrics.WithLabelValues("event_fails").Set(float64(fm.Block.EventFails))
blockDeviceMetrics.WithLabelValues("execute_fails").Set(float64(fm.Block.ExecuteFails))
blockDeviceMetrics.WithLabelValues("invalid_reqs_count").Set(float64(fm.Block.InvalidReqsCount))
blockDeviceMetrics.WithLabelValues("flush_count").Set(float64(fm.Block.FlushCount))
blockDeviceMetrics.WithLabelValues("queue_event_count").Set(float64(fm.Block.QueueEventCount))
blockDeviceMetrics.WithLabelValues("rate_limiter_event_count").Set(float64(fm.Block.RateLimiterEventCount))
blockDeviceMetrics.WithLabelValues("update_count").Set(float64(fm.Block.UpdateCount))
blockDeviceMetrics.WithLabelValues("update_fails").Set(float64(fm.Block.UpdateFails))
blockDeviceMetrics.WithLabelValues("read_bytes").Set(float64(fm.Block.ReadBytes))
blockDeviceMetrics.WithLabelValues("write_bytes").Set(float64(fm.Block.WriteBytes))
blockDeviceMetrics.WithLabelValues("read_count").Set(float64(fm.Block.ReadCount))
blockDeviceMetrics.WithLabelValues("write_count").Set(float64(fm.Block.WriteCount))
blockDeviceMetrics.WithLabelValues("rate_limiter_throttled_events").Set(float64(fm.Block.RateLimiterThrottledEvents))
// set metrics for GetRequestsMetrics
getRequestsMetrics.WithLabelValues("instance_info_count").Set(float64(fm.GetApiRequests.InstanceInfoCount))
getRequestsMetrics.WithLabelValues("instance_info_fails").Set(float64(fm.GetApiRequests.InstanceInfoFails))
getRequestsMetrics.WithLabelValues("machine_cfg_count").Set(float64(fm.GetApiRequests.MachineCfgCount))
getRequestsMetrics.WithLabelValues("machine_cfg_fails").Set(float64(fm.GetApiRequests.MachineCfgFails))
// set metrics for I8042DeviceMetrics
i8042DeviceMetrics.WithLabelValues("error_count").Set(float64(fm.I8042.ErrorCount))
i8042DeviceMetrics.WithLabelValues("missed_read_count").Set(float64(fm.I8042.MissedReadCount))
i8042DeviceMetrics.WithLabelValues("missed_write_count").Set(float64(fm.I8042.MissedWriteCount))
i8042DeviceMetrics.WithLabelValues("read_count").Set(float64(fm.I8042.ReadCount))
i8042DeviceMetrics.WithLabelValues("reset_count").Set(float64(fm.I8042.ResetCount))
i8042DeviceMetrics.WithLabelValues("write_count").Set(float64(fm.I8042.WriteCount))
// set metrics for PerformanceMetrics
performanceMetrics.WithLabelValues("full_create_snapshot").Set(float64(fm.LatenciesUs.FullCreateSnapshot))
performanceMetrics.WithLabelValues("diff_create_snapshot").Set(float64(fm.LatenciesUs.DiffCreateSnapshot))
performanceMetrics.WithLabelValues("load_snapshot").Set(float64(fm.LatenciesUs.LoadSnapshot))
performanceMetrics.WithLabelValues("pause_vm").Set(float64(fm.LatenciesUs.PauseVm))
performanceMetrics.WithLabelValues("resume_vm").Set(float64(fm.LatenciesUs.ResumeVm))
performanceMetrics.WithLabelValues("vmm_full_create_snapshot").Set(float64(fm.LatenciesUs.VmmFullCreateSnapshot))
performanceMetrics.WithLabelValues("vmm_diff_create_snapshot").Set(float64(fm.LatenciesUs.VmmDiffCreateSnapshot))
performanceMetrics.WithLabelValues("vmm_load_snapshot").Set(float64(fm.LatenciesUs.VmmLoadSnapshot))
performanceMetrics.WithLabelValues("vmm_pause_vm").Set(float64(fm.LatenciesUs.VmmPauseVm))
performanceMetrics.WithLabelValues("vmm_resume_vm").Set(float64(fm.LatenciesUs.VmmResumeVm))
// set metrics for LoggerSystemMetrics
loggerSystemMetrics.WithLabelValues("missed_metrics_count").Set(float64(fm.Logger.MissedMetricsCount))
loggerSystemMetrics.WithLabelValues("metrics_fails").Set(float64(fm.Logger.MetricsFails))
loggerSystemMetrics.WithLabelValues("missed_log_count").Set(float64(fm.Logger.MissedLogCount))
loggerSystemMetrics.WithLabelValues("log_fails").Set(float64(fm.Logger.LogFails))
// set metrics for MmdsMetrics
mmdsMetrics.WithLabelValues("rx_accepted").Set(float64(fm.Mmds.RxAccepted))
mmdsMetrics.WithLabelValues("rx_accepted_err").Set(float64(fm.Mmds.RxAcceptedErr))
mmdsMetrics.WithLabelValues("rx_accepted_unusual").Set(float64(fm.Mmds.RxAcceptedUnusual))
mmdsMetrics.WithLabelValues("rx_bad_eth").Set(float64(fm.Mmds.RxBadEth))
mmdsMetrics.WithLabelValues("rx_count").Set(float64(fm.Mmds.RxCount))
mmdsMetrics.WithLabelValues("tx_bytes").Set(float64(fm.Mmds.TxBytes))
mmdsMetrics.WithLabelValues("tx_count").Set(float64(fm.Mmds.TxCount))
mmdsMetrics.WithLabelValues("tx_errors").Set(float64(fm.Mmds.TxErrors))
mmdsMetrics.WithLabelValues("tx_frames").Set(float64(fm.Mmds.TxFrames))
mmdsMetrics.WithLabelValues("connections_created").Set(float64(fm.Mmds.ConnectionsCreated))
mmdsMetrics.WithLabelValues("connections_destroyed").Set(float64(fm.Mmds.ConnectionsDestroyed))
// set metrics for NetDeviceMetrics
netDeviceMetrics.WithLabelValues("activate_fails").Set(float64(fm.Net.ActivateFails))
netDeviceMetrics.WithLabelValues("cfg_fails").Set(float64(fm.Net.CfgFails))
netDeviceMetrics.WithLabelValues("mac_address_updates").Set(float64(fm.Net.MacAddressUpdates))
netDeviceMetrics.WithLabelValues("no_rx_avail_buffer").Set(float64(fm.Net.NoRxAvailBuffer))
netDeviceMetrics.WithLabelValues("no_tx_avail_buffer").Set(float64(fm.Net.NoTxAvailBuffer))
netDeviceMetrics.WithLabelValues("event_fails").Set(float64(fm.Net.EventFails))
netDeviceMetrics.WithLabelValues("rx_queue_event_count").Set(float64(fm.Net.RxQueueEventCount))
netDeviceMetrics.WithLabelValues("rx_event_rate_limiter_count").Set(float64(fm.Net.RxEventRateLimiterCount))
netDeviceMetrics.WithLabelValues("rx_partial_writes").Set(float64(fm.Net.RxPartialWrites))
netDeviceMetrics.WithLabelValues("rx_rate_limiter_throttled").Set(float64(fm.Net.RxRateLimiterThrottled))
netDeviceMetrics.WithLabelValues("rx_tap_event_count").Set(float64(fm.Net.RxTapEventCount))
netDeviceMetrics.WithLabelValues("rx_bytes_count").Set(float64(fm.Net.RxBytesCount))
netDeviceMetrics.WithLabelValues("rx_packets_count").Set(float64(fm.Net.RxPacketsCount))
netDeviceMetrics.WithLabelValues("rx_fails").Set(float64(fm.Net.RxFails))
netDeviceMetrics.WithLabelValues("rx_count").Set(float64(fm.Net.RxCount))
netDeviceMetrics.WithLabelValues("tap_read_fails").Set(float64(fm.Net.TapReadFails))
netDeviceMetrics.WithLabelValues("tap_write_fails").Set(float64(fm.Net.TapWriteFails))
netDeviceMetrics.WithLabelValues("tx_bytes_count").Set(float64(fm.Net.TxBytesCount))
netDeviceMetrics.WithLabelValues("tx_malformed_frames").Set(float64(fm.Net.TxMalformedFrames))
netDeviceMetrics.WithLabelValues("tx_fails").Set(float64(fm.Net.TxFails))
netDeviceMetrics.WithLabelValues("tx_count").Set(float64(fm.Net.TxCount))
netDeviceMetrics.WithLabelValues("tx_packets_count").Set(float64(fm.Net.TxPacketsCount))
netDeviceMetrics.WithLabelValues("tx_partial_reads").Set(float64(fm.Net.TxPartialReads))
netDeviceMetrics.WithLabelValues("tx_queue_event_count").Set(float64(fm.Net.TxQueueEventCount))
netDeviceMetrics.WithLabelValues("tx_rate_limiter_event_count").Set(float64(fm.Net.TxRateLimiterEventCount))
netDeviceMetrics.WithLabelValues("tx_rate_limiter_throttled").Set(float64(fm.Net.TxRateLimiterThrottled))
netDeviceMetrics.WithLabelValues("tx_spoofed_mac_count").Set(float64(fm.Net.TxSpoofedMacCount))
// set metrics for PatchRequestsMetrics
patchRequestsMetrics.WithLabelValues("drive_count").Set(float64(fm.PatchApiRequests.DriveCount))
patchRequestsMetrics.WithLabelValues("drive_fails").Set(float64(fm.PatchApiRequests.DriveFails))
patchRequestsMetrics.WithLabelValues("network_count").Set(float64(fm.PatchApiRequests.NetworkCount))
patchRequestsMetrics.WithLabelValues("network_fails").Set(float64(fm.PatchApiRequests.NetworkFails))
patchRequestsMetrics.WithLabelValues("machine_cfg_count").Set(float64(fm.PatchApiRequests.MachineCfgCount))
patchRequestsMetrics.WithLabelValues("machine_cfg_fails").Set(float64(fm.PatchApiRequests.MachineCfgFails))
// set metrics for PutRequestsMetrics
putRequestsMetrics.WithLabelValues("actions_count").Set(float64(fm.PutApiRequests.ActionsCount))
putRequestsMetrics.WithLabelValues("actions_fails").Set(float64(fm.PutApiRequests.ActionsFails))
putRequestsMetrics.WithLabelValues("boot_source_count").Set(float64(fm.PutApiRequests.BootSourceCount))
putRequestsMetrics.WithLabelValues("boot_source_fails").Set(float64(fm.PutApiRequests.BootSourceFails))
putRequestsMetrics.WithLabelValues("drive_count").Set(float64(fm.PutApiRequests.DriveCount))
putRequestsMetrics.WithLabelValues("drive_fails").Set(float64(fm.PutApiRequests.DriveFails))
putRequestsMetrics.WithLabelValues("logger_count").Set(float64(fm.PutApiRequests.LoggerCount))
putRequestsMetrics.WithLabelValues("logger_fails").Set(float64(fm.PutApiRequests.LoggerFails))
putRequestsMetrics.WithLabelValues("machine_cfg_count").Set(float64(fm.PutApiRequests.MachineCfgCount))
putRequestsMetrics.WithLabelValues("machine_cfg_fails").Set(float64(fm.PutApiRequests.MachineCfgFails))
putRequestsMetrics.WithLabelValues("metrics_count").Set(float64(fm.PutApiRequests.MetricsCount))
putRequestsMetrics.WithLabelValues("metrics_fails").Set(float64(fm.PutApiRequests.MetricsFails))
putRequestsMetrics.WithLabelValues("network_count").Set(float64(fm.PutApiRequests.NetworkCount))
putRequestsMetrics.WithLabelValues("network_fails").Set(float64(fm.PutApiRequests.NetworkFails))
// set metrics for RTCDeviceMetrics
rTCDeviceMetrics.WithLabelValues("error_count").Set(float64(fm.Rtc.ErrorCount))
rTCDeviceMetrics.WithLabelValues("missed_read_count").Set(float64(fm.Rtc.MissedReadCount))
rTCDeviceMetrics.WithLabelValues("missed_write_count").Set(float64(fm.Rtc.MissedWriteCount))
// set metrics for SeccompMetrics
seccompMetrics.WithLabelValues("num_faults").Set(float64(fm.Seccomp.NumFaults))
// set metrics for VcpuMetrics
vcpuMetrics.WithLabelValues("exit_io_in").Set(float64(fm.Vcpu.ExitIoIn))
vcpuMetrics.WithLabelValues("exit_io_out").Set(float64(fm.Vcpu.ExitIoOut))
vcpuMetrics.WithLabelValues("exit_mmio_read").Set(float64(fm.Vcpu.ExitMmioRead))
vcpuMetrics.WithLabelValues("exit_mmio_write").Set(float64(fm.Vcpu.ExitMmioWrite))
vcpuMetrics.WithLabelValues("failures").Set(float64(fm.Vcpu.Failures))
vcpuMetrics.WithLabelValues("filter_cpuid").Set(float64(fm.Vcpu.FilterCpuid))
// set metrics for VmmMetrics
vmmMetrics.WithLabelValues("device_events").Set(float64(fm.Vmm.DeviceEvents))
vmmMetrics.WithLabelValues("panic_count").Set(float64(fm.Vmm.PanicCount))
// set metrics for SerialDeviceMetrics
serialDeviceMetrics.WithLabelValues("error_count").Set(float64(fm.Uart.ErrorCount))
serialDeviceMetrics.WithLabelValues("flush_count").Set(float64(fm.Uart.FlushCount))
serialDeviceMetrics.WithLabelValues("missed_read_count").Set(float64(fm.Uart.MissedReadCount))
serialDeviceMetrics.WithLabelValues("missed_write_count").Set(float64(fm.Uart.MissedWriteCount))
serialDeviceMetrics.WithLabelValues("read_count").Set(float64(fm.Uart.ReadCount))
serialDeviceMetrics.WithLabelValues("write_count").Set(float64(fm.Uart.WriteCount))
// set metrics for SignalMetrics
signalMetrics.WithLabelValues("sigbus").Set(float64(fm.Signals.Sigbus))
signalMetrics.WithLabelValues("sigsegv").Set(float64(fm.Signals.Sigsegv))
// set metrics for VsockDeviceMetrics
vsockDeviceMetrics.WithLabelValues("activate_fails").Set(float64(fm.Vsock.ActivateFails))
vsockDeviceMetrics.WithLabelValues("cfg_fails").Set(float64(fm.Vsock.CfgFails))
vsockDeviceMetrics.WithLabelValues("rx_queue_event_fails").Set(float64(fm.Vsock.RxQueueEventFails))
vsockDeviceMetrics.WithLabelValues("tx_queue_event_fails").Set(float64(fm.Vsock.TxQueueEventFails))
vsockDeviceMetrics.WithLabelValues("ev_queue_event_fails").Set(float64(fm.Vsock.EvQueueEventFails))
vsockDeviceMetrics.WithLabelValues("muxer_event_fails").Set(float64(fm.Vsock.MuxerEventFails))
vsockDeviceMetrics.WithLabelValues("conn_event_fails").Set(float64(fm.Vsock.ConnEventFails))
vsockDeviceMetrics.WithLabelValues("rx_queue_event_count").Set(float64(fm.Vsock.RxQueueEventCount))
vsockDeviceMetrics.WithLabelValues("tx_queue_event_count").Set(float64(fm.Vsock.TxQueueEventCount))
vsockDeviceMetrics.WithLabelValues("rx_bytes_count").Set(float64(fm.Vsock.RxBytesCount))
vsockDeviceMetrics.WithLabelValues("tx_bytes_count").Set(float64(fm.Vsock.TxBytesCount))
vsockDeviceMetrics.WithLabelValues("rx_packets_count").Set(float64(fm.Vsock.RxPacketsCount))
vsockDeviceMetrics.WithLabelValues("tx_packets_count").Set(float64(fm.Vsock.TxPacketsCount))
vsockDeviceMetrics.WithLabelValues("conns_added").Set(float64(fm.Vsock.ConnsAdded))
vsockDeviceMetrics.WithLabelValues("conns_killed").Set(float64(fm.Vsock.ConnsKilled))
vsockDeviceMetrics.WithLabelValues("conns_removed").Set(float64(fm.Vsock.ConnsRemoved))
vsockDeviceMetrics.WithLabelValues("killq_resync").Set(float64(fm.Vsock.KillqResync))
vsockDeviceMetrics.WithLabelValues("tx_flush_fails").Set(float64(fm.Vsock.TxFlushFails))
vsockDeviceMetrics.WithLabelValues("tx_write_fails").Set(float64(fm.Vsock.TxWriteFails))
vsockDeviceMetrics.WithLabelValues("rx_read_fails").Set(float64(fm.Vsock.RxReadFails))
}
// Structure storing all metrics while enforcing serialization support on them.
type FirecrackerMetrics struct {
// API Server related metrics.
ApiServer ApiServerMetrics `json:"api_server"`
// A block device's related metrics.
Block BlockDeviceMetrics `json:"block"`
// Metrics related to API GET requests.
GetApiRequests GetRequestsMetrics `json:"get_api_requests"`
// Metrics related to the i8042 device.
I8042 I8042DeviceMetrics `json:"i8042"`
// Metrics related to performance measurements.
LatenciesUs PerformanceMetrics `json:"latencies_us"`
// Logging related metrics.
Logger LoggerSystemMetrics `json:"logger"`
// Metrics specific to MMDS functionality.
Mmds MmdsMetrics `json:"mmds"`
// A network device's related metrics.
Net NetDeviceMetrics `json:"net"`
// Metrics related to API PATCH requests.
PatchApiRequests PatchRequestsMetrics `json:"patch_api_requests"`
// Metrics related to API PUT requests.
PutApiRequests PutRequestsMetrics `json:"put_api_requests"`
// Metrics related to the RTC device.
Rtc RTCDeviceMetrics `json:"rtc"`
// Metrics related to seccomp filtering.
Seccomp SeccompMetrics `json:"seccomp"`
// Metrics related to a vcpu's functioning.
Vcpu VcpuMetrics `json:"vcpu"`
// Metrics related to the virtual machine manager.
Vmm VmmMetrics `json:"vmm"`
// Metrics related to the UART device.
Uart SerialDeviceMetrics `json:"uart"`
// Metrics related to signals.
Signals SignalMetrics `json:"signals"`
// Metrics related to virtio-vsockets.
Vsock VsockDeviceMetrics `json:"vsock"`
}
// API Server related metrics.
type ApiServerMetrics struct {
// Measures the process's startup time in microseconds.
ProcessStartupTimeUs uint64 `json:"process_startup_time_us"`
// Measures the cpu's startup time in microseconds.
ProcessStartupTimeCpuUs uint64 `json:"process_startup_time_cpu_us"`
// Number of failures on API requests triggered by internal errors.
SyncResponseFails uint64 `json:"sync_response_fails"`
// Number of timeouts during communication with the VMM.
SyncVmmSendTimeoutCount uint64 `json:"sync_vmm_send_timeout_count"`
}
// A block device's related metrics.
type BlockDeviceMetrics struct {
// Number of times when activate failed on a block device.
ActivateFails uint64 `json:"activate_fails"`
// Number of times when interacting with the space config of a block device failed.
CfgFails uint64 `json:"cfg_fails"`
// No available buffer for the block queue.
NoAvailBuffer uint64 `json:"no_avail_buffer"`
// Number of times when handling events on a block device failed.
EventFails uint64 `json:"event_fails"`
// Number of failures in executing a request on a block device.
ExecuteFails uint64 `json:"execute_fails"`
// Number of invalid requests received for this block device.
InvalidReqsCount uint64 `json:"invalid_reqs_count"`
// Number of flush operations triggered on this block device.
FlushCount uint64 `json:"flush_count"`
// Number of events triggered on the queue of this block device.
QueueEventCount uint64 `json:"queue_event_count"`
// Number of rate limiter related events.
RateLimiterEventCount uint64 `json:"rate_limiter_event_count"`
// Number of update operations triggered on this block device.
UpdateCount uint64 `json:"update_count"`
// Number of failures while doing updates on this block device.
UpdateFails uint64 `json:"update_fails"`
// Number of bytes read by this block device.
ReadBytes uint64 `json:"read_bytes"`
// Number of bytes written by this block device.
WriteBytes uint64 `json:"write_bytes"`
// Number of successful read operations.
ReadCount uint64 `json:"read_count"`
// Number of successful write operations.
WriteCount uint64 `json:"write_count"`
// Number of rate limiter throttling events.
RateLimiterThrottledEvents uint64 `json:"rate_limiter_throttled_events"`
}
// Metrics related to API GET requests.
type GetRequestsMetrics struct {
// Number of GETs for getting information on the instance.
InstanceInfoCount uint64 `json:"instance_info_count"`
// Number of failures when obtaining information on the current instance.
InstanceInfoFails uint64 `json:"instance_info_fails"`
// Number of GETs for getting status on attaching machine configuration.
MachineCfgCount uint64 `json:"machine_cfg_count"`
// Number of failures during GETs for getting information on the instance.
MachineCfgFails uint64 `json:"machine_cfg_fails"`
}
// Metrics related to the i8042 device.
type I8042DeviceMetrics struct {
// Errors triggered while using the i8042 device.
ErrorCount uint64 `json:"error_count"`
// Number of superfluous read intents on this i8042 device.
MissedReadCount uint64 `json:"missed_read_count"`
// Number of superfluous write intents on this i8042 device.
MissedWriteCount uint64 `json:"missed_write_count"`
// Bytes read by this device.
ReadCount uint64 `json:"read_count"`
// Number of resets done by this device.
ResetCount uint64 `json:"reset_count"`
// Bytes written by this device.
WriteCount uint64 `json:"write_count"`
}
// Metrics related to performance measurements.
type PerformanceMetrics struct {
// Measures the snapshot full create time, at the API (user) level, in microseconds.
FullCreateSnapshot uint64 `json:"full_create_snapshot"`
// Measures the snapshot diff create time, at the API (user) level, in microseconds.
DiffCreateSnapshot uint64 `json:"diff_create_snapshot"`
// Measures the snapshot load time, at the API (user) level, in microseconds.
LoadSnapshot uint64 `json:"load_snapshot"`
// Measures the microVM pausing duration, at the API (user) level, in microseconds.
PauseVm uint64 `json:"pause_vm"`
// Measures the microVM resuming duration, at the API (user) level, in microseconds.
ResumeVm uint64 `json:"resume_vm"`
// Measures the snapshot full create time, at the VMM level, in microseconds.
VmmFullCreateSnapshot uint64 `json:"vmm_full_create_snapshot"`
// Measures the snapshot diff create time, at the VMM level, in microseconds.
VmmDiffCreateSnapshot uint64 `json:"vmm_diff_create_snapshot"`
// Measures the snapshot load time, at the VMM level, in microseconds.
VmmLoadSnapshot uint64 `json:"vmm_load_snapshot"`
// Measures the microVM pausing duration, at the VMM level, in microseconds.
VmmPauseVm uint64 `json:"vmm_pause_vm"`
// Measures the microVM resuming duration, at the VMM level, in microseconds.
VmmResumeVm uint64 `json:"vmm_resume_vm"`
}
// Logging related metrics.
type LoggerSystemMetrics struct {
// Number of misses on flushing metrics.
MissedMetricsCount uint64 `json:"missed_metrics_count"`
// Number of errors during metrics handling.
MetricsFails uint64 `json:"metrics_fails"`
// Number of misses on logging human readable content.
MissedLogCount uint64 `json:"missed_log_count"`
// Number of errors while trying to log human readable content.
LogFails uint64 `json:"log_fails"`
}
// Metrics specific to MMDS functionality.
type MmdsMetrics struct {
// Number of frames rerouted to MMDS.
RxAccepted uint64 `json:"rx_accepted"`
// Number of errors while handling a frame through MMDS.
RxAcceptedErr uint64 `json:"rx_accepted_err"`
// Number of uncommon events encountered while processing packets through MMDS.
RxAcceptedUnusual uint64 `json:"rx_accepted_unusual"`
// The number of buffers which couldn't be parsed as valid Ethernet frames by the MMDS.
RxBadEth uint64 `json:"rx_bad_eth"`
// The total number of successful receive operations by the MMDS.
RxCount uint64 `json:"rx_count"`
// The total number of bytes sent by the MMDS.
TxBytes uint64 `json:"tx_bytes"`
// The total number of successful send operations by the MMDS.
TxCount uint64 `json:"tx_count"`
// The number of errors raised by the MMDS while attempting to send frames/packets/segments.
TxErrors uint64 `json:"tx_errors"`
// The number of frames sent by the MMDS.
TxFrames uint64 `json:"tx_frames"`
// The number of connections successfully accepted by the MMDS TCP handler.
ConnectionsCreated uint64 `json:"connections_created"`
// The number of connections cleaned up by the MMDS TCP handler.
ConnectionsDestroyed uint64 `json:"connections_destroyed"`
}
// A network device's related metrics.
type NetDeviceMetrics struct {
// Number of times when activate failed on a network device.
ActivateFails uint64 `json:"activate_fails"`
// Number of times when interacting with the space config of a network device failed.
CfgFails uint64 `json:"cfg_fails"`
// Number of MAC address updates on this network device.
MacAddressUpdates uint64 `json:"mac_address_updates"`
// No available buffer for the net device rx queue.
NoRxAvailBuffer uint64 `json:"no_rx_avail_buffer"`
// No available buffer for the net device tx queue.
NoTxAvailBuffer uint64 `json:"no_tx_avail_buffer"`
// Number of times when handling events on a network device failed.
EventFails uint64 `json:"event_fails"`
// Number of events associated with the receiving queue.
RxQueueEventCount uint64 `json:"rx_queue_event_count"`
// Number of events associated with the rate limiter installed on the receiving path.
RxEventRateLimiterCount uint64 `json:"rx_event_rate_limiter_count"`
// Number of RX partial writes to guest.
RxPartialWrites uint64 `json:"rx_partial_writes"`
// Number of RX rate limiter throttling events.
RxRateLimiterThrottled uint64 `json:"rx_rate_limiter_throttled"`
// Number of events received on the associated tap.
RxTapEventCount uint64 `json:"rx_tap_event_count"`
// Number of bytes received.
RxBytesCount uint64 `json:"rx_bytes_count"`
// Number of packets received.
RxPacketsCount uint64 `json:"rx_packets_count"`
// Number of errors while receiving data.
RxFails uint64 `json:"rx_fails"`
// Number of successful read operations while receiving data.
RxCount uint64 `json:"rx_count"`
// Number of times reading from TAP failed.
TapReadFails uint64 `json:"tap_read_fails"`
// Number of times writing to TAP failed.
TapWriteFails uint64 `json:"tap_write_fails"`
// Number of transmitted bytes.
TxBytesCount uint64 `json:"tx_bytes_count"`
// Number of malformed TX frames.
TxMalformedFrames uint64 `json:"tx_malformed_frames"`
// Number of errors while transmitting data.
TxFails uint64 `json:"tx_fails"`
// Number of successful write operations while transmitting data.
TxCount uint64 `json:"tx_count"`
// Number of transmitted packets.
TxPacketsCount uint64 `json:"tx_packets_count"`
// Number of TX partial reads from guest.
TxPartialReads uint64 `json:"tx_partial_reads"`
// Number of events associated with the transmitting queue.
TxQueueEventCount uint64 `json:"tx_queue_event_count"`
// Number of events associated with the rate limiter installed on the transmitting path.
TxRateLimiterEventCount uint64 `json:"tx_rate_limiter_event_count"`
// Number of TX rate limiter throttling events.
TxRateLimiterThrottled uint64 `json:"tx_rate_limiter_throttled"`
// Number of packets with a spoofed mac, sent by the guest.
TxSpoofedMacCount uint64 `json:"tx_spoofed_mac_count"`
}
// Metrics related to API PATCH requests.
type PatchRequestsMetrics struct {
// Number of tries to PATCH a block device.
DriveCount uint64 `json:"drive_count"`
// Number of failures in PATCHing a block device.
DriveFails uint64 `json:"drive_fails"`
// Number of tries to PATCH a net device.
NetworkCount uint64 `json:"network_count"`
// Number of failures in PATCHing a net device.
NetworkFails uint64 `json:"network_fails"`
// Number of PATCH requests for configuring the machine.
MachineCfgCount uint64 `json:"machine_cfg_count"`
// Number of failures in configuring the machine.
MachineCfgFails uint64 `json:"machine_cfg_fails"`
}
// Metrics related to API PUT requests.
type PutRequestsMetrics struct {
// Number of PUTs triggering an action on the VM.
ActionsCount uint64 `json:"actions_count"`
// Number of failures in triggering an action on the VM.
ActionsFails uint64 `json:"actions_fails"`
// Number of PUTs for attaching the boot source.
BootSourceCount uint64 `json:"boot_source_count"`
// Number of failures while attaching the boot source.
BootSourceFails uint64 `json:"boot_source_fails"`
// Number of PUTs triggering a block attach.
DriveCount uint64 `json:"drive_count"`
// Number of failures in attaching a block device.
DriveFails uint64 `json:"drive_fails"`
// Number of PUTs for initializing the logging system.
LoggerCount uint64 `json:"logger_count"`
// Number of failures in initializing the logging system.
LoggerFails uint64 `json:"logger_fails"`
// Number of PUTs for configuring the machine.
MachineCfgCount uint64 `json:"machine_cfg_count"`
// Number of failures in configuring the machine.
MachineCfgFails uint64 `json:"machine_cfg_fails"`
// Number of PUTs for initializing the metrics system.
MetricsCount uint64 `json:"metrics_count"`
// Number of failures in initializing the metrics system.
MetricsFails uint64 `json:"metrics_fails"`
// Number of PUTs for creating a new network interface.
NetworkCount uint64 `json:"network_count"`
// Number of failures in creating a new network interface.
NetworkFails uint64 `json:"network_fails"`
}
// Metrics related to the RTC device.
type RTCDeviceMetrics struct {
// Errors triggered while using the RTC device.
ErrorCount uint64 `json:"error_count"`
// Number of superfluous read intents on this RTC device.
MissedReadCount uint64 `json:"missed_read_count"`
// Number of superfluous write intents on this RTC device.
MissedWriteCount uint64 `json:"missed_write_count"`
}
// Metrics related to seccomp filtering.
type SeccompMetrics struct {
// Number of errors inside the seccomp filtering.
NumFaults uint64 `json:"num_faults"`
}
// Metrics related to a vcpu's functioning.
type VcpuMetrics struct {
// Number of KVM exits for handling input IO.
ExitIoIn uint64 `json:"exit_io_in"`
// Number of KVM exits for handling output IO.
ExitIoOut uint64 `json:"exit_io_out"`
// Number of KVM exits for handling MMIO reads.
ExitMmioRead uint64 `json:"exit_mmio_read"`
// Number of KVM exits for handling MMIO writes.
ExitMmioWrite uint64 `json:"exit_mmio_write"`
// Number of errors during this VCPU's run.
Failures uint64 `json:"failures"`
// Failures in configuring the CPUID.
FilterCpuid uint64 `json:"filter_cpuid"`
}
// Metrics related to the virtual machine manager.
type VmmMetrics struct {
// Number of device related events received for a VM.
DeviceEvents uint64 `json:"device_events"`
// Metric for signaling a panic has occurred.
PanicCount uint64 `json:"panic_count"`
}
// Metrics related to the UART device.
type SerialDeviceMetrics struct {
// Errors triggered while using the UART device.
ErrorCount uint64 `json:"error_count"`
// Number of flush operations.
FlushCount uint64 `json:"flush_count"`
// Number of read calls that did not trigger a read.
MissedReadCount uint64 `json:"missed_read_count"`
// Number of write calls that did not trigger a write.
MissedWriteCount uint64 `json:"missed_write_count"`
// Number of succeeded read calls.
ReadCount uint64 `json:"read_count"`
// Number of succeeded write calls.
WriteCount uint64 `json:"write_count"`
}
// Metrics related to signals.
type SignalMetrics struct {
// Number of times that SIGBUS was handled.
Sigbus uint64 `json:"sigbus"`
// Number of times that SIGSEGV was handled.
Sigsegv uint64 `json:"sigsegv"`
}
// Metrics related to virtio-vsockets.
type VsockDeviceMetrics struct {
// Number of times when activate failed on a vsock device.
ActivateFails uint64 `json:"activate_fails"`
// Number of times when interacting with the space config of a vsock device failed.
CfgFails uint64 `json:"cfg_fails"`
// Number of times when handling RX queue events on a vsock device failed.
RxQueueEventFails uint64 `json:"rx_queue_event_fails"`
// Number of times when handling TX queue events on a vsock device failed.
TxQueueEventFails uint64 `json:"tx_queue_event_fails"`
// Number of times when handling event queue events on a vsock device failed.
EvQueueEventFails uint64 `json:"ev_queue_event_fails"`
// Number of times when handling muxer events on a vsock device failed.
MuxerEventFails uint64 `json:"muxer_event_fails"`
// Number of times when handling connection events on a vsock device failed.
ConnEventFails uint64 `json:"conn_event_fails"`
// Number of events associated with the receiving queue.
RxQueueEventCount uint64 `json:"rx_queue_event_count"`
// Number of events associated with the transmitting queue.
TxQueueEventCount uint64 `json:"tx_queue_event_count"`
// Number of bytes received.
RxBytesCount uint64 `json:"rx_bytes_count"`
// Number of transmitted bytes.
TxBytesCount uint64 `json:"tx_bytes_count"`
// Number of packets received.
RxPacketsCount uint64 `json:"rx_packets_count"`
// Number of transmitted packets.
TxPacketsCount uint64 `json:"tx_packets_count"`
// Number of added connections.
ConnsAdded uint64 `json:"conns_added"`
// Number of killed connections.
ConnsKilled uint64 `json:"conns_killed"`
// Number of removed connections.
ConnsRemoved uint64 `json:"conns_removed"`
// How many times the killq has been resynced.
KillqResync uint64 `json:"killq_resync"`
// How many flush fails have been seen.
TxFlushFails uint64 `json:"tx_flush_fails"`
// How many write fails have been seen.
TxWriteFails uint64 `json:"tx_write_fails"`
// Number of times read() has failed.
RxReadFails uint64 `json:"rx_read_fails"`
}


@@ -75,6 +75,7 @@ type VCSandbox interface {
UpdateRuntimeMetrics() error
GetAgentMetrics() (string, error)
GetAgentURL() (string, error)
}
// VCContainer is the Container interface


@@ -52,6 +52,11 @@ const (
// path to vfio devices
vfioPath = "/dev/vfio/"
// enable debug console
kernelParamDebugConsole = "agent.debug_console"
kernelParamDebugConsoleVPort = "agent.debug_console_vport"
kernelParamDebugConsoleVPortValue = "1026"
)
var (
@@ -195,13 +200,14 @@ func ephemeralPath() string {
// KataAgentConfig is a structure storing information needed
// to reach the Kata Containers agent.
type KataAgentConfig struct {
LongLiveConn bool
Debug bool
Trace bool
ContainerPipeSize uint32
TraceMode string
TraceType string
KernelModules []string
LongLiveConn bool
Debug bool
Trace bool
EnableDebugConsole bool
ContainerPipeSize uint32
TraceMode string
TraceType string
KernelModules []string
}
// KataAgentState is the structure describing the data stored from this
@@ -294,6 +300,11 @@ func KataAgentKernelParams(config KataAgentConfig) []Param {
params = append(params, Param{Key: vcAnnotations.ContainerPipeSizeKernelParam, Value: containerPipeSize})
}
if config.EnableDebugConsole {
params = append(params, Param{Key: kernelParamDebugConsole, Value: ""})
params = append(params, Param{Key: kernelParamDebugConsoleVPort, Value: kernelParamDebugConsoleVPortValue})
}
return params
}
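The branch added above appends two agent kernel parameters when the debug console is enabled. This sketch re-creates just that logic in isolation — the `Param` type and `debugConsoleParams` helper are stand-ins, not the runtime's own code:

```go
package main

import "fmt"

// Param is a stand-in for the runtime's kernel-parameter pair.
type Param struct{ Key, Value string }

const (
	kernelParamDebugConsole           = "agent.debug_console"
	kernelParamDebugConsoleVPort      = "agent.debug_console_vport"
	kernelParamDebugConsoleVPortValue = "1026"
)

// debugConsoleParams mirrors the new branch in KataAgentKernelParams:
// when enabled, the debug-console flag and its vport are appended.
func debugConsoleParams(enable bool, params []Param) []Param {
	if enable {
		params = append(params,
			Param{Key: kernelParamDebugConsole},
			Param{Key: kernelParamDebugConsoleVPort, Value: kernelParamDebugConsoleVPortValue})
	}
	return params
}

func main() {
	for _, p := range debugConsoleParams(true, nil) {
		fmt.Println(p.Key, p.Value)
	}
}
```

The resulting guest command-line fragment is `agent.debug_console agent.debug_console_vport=1026`.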
@@ -1208,16 +1219,6 @@ func (k *kataAgent) buildContainerRootfs(sandbox *Sandbox, c *Container, rootPat
return nil, nil
}
func (k *kataAgent) hasAgentDebugConsole(sandbox *Sandbox) bool {
for _, p := range sandbox.config.HypervisorConfig.KernelParams {
if p.Key == "agent.debug_console" {
k.Logger().Info("agent has debug console")
return true
}
}
return false
}
func (k *kataAgent) createContainer(sandbox *Sandbox, c *Container) (p *Process, err error) {
span, _ := k.trace("createContainer")
defer span.Finish()


@@ -1,4 +1,4 @@
// Copyright 2017 HyperHQ Inc.
// Copyright (c) 2017 HyperHQ Inc.
//
// SPDX-License-Identifier: Apache-2.0
//
@@ -178,7 +178,7 @@ func parse(sock string) (string, *url.URL, error) {
func agentDialer(addr *url.URL) dialer {
switch addr.Scheme {
case VSockSocketScheme:
return vsockDialer
return VsockDialer
case HybridVSockScheme:
return HybridVSockDialer
case MockHybridVSockScheme:
@@ -278,7 +278,7 @@ func commonDialer(timeout time.Duration, dialFunc func() (net.Conn, error), time
return conn, nil
}
func vsockDialer(sock string, timeout time.Duration) (net.Conn, error) {
func VsockDialer(sock string, timeout time.Duration) (net.Conn, error) {
cid, port, err := parseGrpcVsockAddr(sock)
if err != nil {
return nil, err


@@ -245,7 +245,7 @@ const (
// The following example can be used to load two kernel modules with parameters
//
// annotations:
// io.kata-containers.config.agent.kernel_modules: "e1000e InterruptThrottleRate=3000,3000,3000 EEE=1; i915 enable_ppgtt=0"
// io.katacontainers.config.agent.kernel_modules: "e1000e InterruptThrottleRate=3000,3000,3000 EEE=1; i915 enable_ppgtt=0"
//
// The first word is considered as the module name and the rest as its parameters.
//
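The annotation format described above — modules separated by `;`, first word the module name, the rest its parameters — can be parsed with a few lines of Go. The `KernelModule` type and `parseKernelModules` helper here are illustrative sketches, not the runtime's implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// KernelModule holds one module entry split out of the annotation value.
type KernelModule struct {
	Name       string
	Parameters []string
}

// parseKernelModules splits the annotation value on ';' into module
// entries, then on whitespace: the first word is the module name and
// the remaining words are its parameters.
func parseKernelModules(annotation string) []KernelModule {
	var mods []KernelModule
	for _, entry := range strings.Split(annotation, ";") {
		fields := strings.Fields(entry)
		if len(fields) == 0 {
			continue
		}
		mods = append(mods, KernelModule{Name: fields[0], Parameters: fields[1:]})
	}
	return mods
}

func main() {
	v := "e1000e InterruptThrottleRate=3000,3000,3000 EEE=1; i915 enable_ppgtt=0"
	for _, m := range parseKernelModules(v) {
		fmt.Println(m.Name, m.Parameters)
	}
}
```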


@@ -17,7 +17,10 @@ docs/FsConfig.md
docs/InitramfsConfig.md
docs/KernelConfig.md
docs/MemoryConfig.md
docs/MemoryZoneConfig.md
docs/NetConfig.md
docs/NumaConfig.md
docs/NumaDistance.md
docs/PciDeviceInfo.md
docs/PmemConfig.md
docs/RestoreConfig.md
@@ -28,6 +31,7 @@ docs/VmConfig.md
docs/VmInfo.md
docs/VmRemoveDevice.md
docs/VmResize.md
docs/VmResizeZone.md
docs/VmSnapshotConfig.md
docs/VmmPingResponse.md
docs/VsockConfig.md
@@ -44,7 +48,10 @@ model_fs_config.go
model_initramfs_config.go
model_kernel_config.go
model_memory_config.go
model_memory_zone_config.go
model_net_config.go
model_numa_config.go
model_numa_distance.go
model_pci_device_info.go
model_pmem_config.go
model_restore_config.go
@@ -55,6 +62,7 @@ model_vm_config.go
model_vm_info.go
model_vm_remove_device.go
model_vm_resize.go
model_vm_resize_zone.go
model_vm_snapshot_config.go
model_vmm_ping_response.go
model_vsock_config.go


@@ -50,6 +50,7 @@ Class | Method | HTTP request | Description
*DefaultApi* | [**VmInfoGet**](docs/DefaultApi.md#vminfoget) | **Get** /vm.info | Returns general information about the cloud-hypervisor Virtual Machine (VM) instance.
*DefaultApi* | [**VmRemoveDevicePut**](docs/DefaultApi.md#vmremovedeviceput) | **Put** /vm.remove-device | Remove a device from the VM
*DefaultApi* | [**VmResizePut**](docs/DefaultApi.md#vmresizeput) | **Put** /vm.resize | Resize the VM
*DefaultApi* | [**VmResizeZonePut**](docs/DefaultApi.md#vmresizezoneput) | **Put** /vm.resize-zone | Resize a memory zone
*DefaultApi* | [**VmRestorePut**](docs/DefaultApi.md#vmrestoreput) | **Put** /vm.restore | Restore a VM from a snapshot.
*DefaultApi* | [**VmSnapshotPut**](docs/DefaultApi.md#vmsnapshotput) | **Put** /vm.snapshot | Returns a VM snapshot.
*DefaultApi* | [**VmmPingGet**](docs/DefaultApi.md#vmmpingget) | **Get** /vmm.ping | Ping the VMM to check for API server availability
@@ -67,7 +68,10 @@ Class | Method | HTTP request | Description
- [InitramfsConfig](docs/InitramfsConfig.md)
- [KernelConfig](docs/KernelConfig.md)
- [MemoryConfig](docs/MemoryConfig.md)
- [MemoryZoneConfig](docs/MemoryZoneConfig.md)
- [NetConfig](docs/NetConfig.md)
- [NumaConfig](docs/NumaConfig.md)
- [NumaDistance](docs/NumaDistance.md)
- [PciDeviceInfo](docs/PciDeviceInfo.md)
- [PmemConfig](docs/PmemConfig.md)
- [RestoreConfig](docs/RestoreConfig.md)
@@ -78,6 +82,7 @@ Class | Method | HTTP request | Description
- [VmInfo](docs/VmInfo.md)
- [VmRemoveDevice](docs/VmRemoveDevice.md)
- [VmResize](docs/VmResize.md)
- [VmResizeZone](docs/VmResizeZone.md)
- [VmSnapshotConfig](docs/VmSnapshotConfig.md)
- [VmmPingResponse](docs/VmmPingResponse.md)
- [VsockConfig](docs/VsockConfig.md)


@@ -138,6 +138,21 @@ paths:
"404":
description: The VM instance could not be resized because it is not created.
summary: Resize the VM
/vm.resize-zone:
put:
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/VmResizeZone'
description: The target size for the memory zone
required: true
responses:
"204":
description: The memory zone was successfully resized.
"500":
description: The memory zone could not be resized.
summary: Resize a memory zone
/vm.add-device:
put:
requestBody:
@@ -328,26 +343,46 @@ components:
shared: false
mergeable: false
balloon: false
file: file
size: 7
hotplugged_size: 3
zones:
- hugepages: false
shared: false
mergeable: false
file: file
size: 4
hotplugged_size: 1
host_numa_node: 7
id: id
hotplug_size: 1
- hugepages: false
shared: false
mergeable: false
file: file
size: 4
hotplugged_size: 1
host_numa_node: 7
id: id
hotplug_size: 1
hotplug_size: 9
hotplug_method: acpi
balloon_size: 2
disks:
- path: path
num_queues: 3
num_queues: 1
readonly: false
iommu: false
queue_size: 2
queue_size: 6
vhost_socket: vhost_socket
vhost_user: false
direct: false
poll_queue: true
id: id
- path: path
num_queues: 3
num_queues: 1
readonly: false
iommu: false
queue_size: 2
queue_size: 6
vhost_socket: vhost_socket
vhost_user: false
direct: false
@@ -370,25 +405,50 @@ components:
id: id
kernel:
path: path
numa:
- distances:
- distance: 6
destination: 3
- distance: 6
destination: 3
cpus:
- 6
- 6
memory_zones:
- memory_zones
- memory_zones
guest_numa_id: 9
- distances:
- distance: 6
destination: 3
- distance: 6
destination: 3
cpus:
- 6
- 6
memory_zones:
- memory_zones
- memory_zones
guest_numa_id: 9
rng:
iommu: false
src: /dev/urandom
sgx_epc:
- prefault: false
size: 1
size: 8
- prefault: false
size: 1
size: 8
fs:
- num_queues: 1
queue_size: 1
cache_size: 1
- num_queues: 4
queue_size: 5
cache_size: 9
dax: true
tag: tag
socket: socket
id: id
- num_queues: 1
queue_size: 1
cache_size: 1
- num_queues: 4
queue_size: 5
cache_size: 9
dax: true
tag: tag
socket: socket
@@ -401,13 +461,13 @@ components:
pmem:
- mergeable: false
file: file
size: 6
size: 9
iommu: false
id: id
discard_writes: false
- mergeable: false
file: file
size: 6
size: 9
iommu: false
id: id
discard_writes: false
@@ -422,9 +482,9 @@ components:
path: path
net:
- tap: tap
num_queues: 4
num_queues: 7
iommu: false
queue_size: 7
queue_size: 1
vhost_socket: vhost_socket
vhost_user: false
ip: 192.168.249.1
@@ -432,9 +492,9 @@ components:
mac: mac
mask: 255.255.255.0
- tap: tap
num_queues: 4
num_queues: 7
iommu: false
queue_size: 7
queue_size: 1
vhost_socket: vhost_socket
vhost_user: false
ip: 192.168.249.1
@@ -488,26 +548,46 @@ components:
shared: false
mergeable: false
balloon: false
file: file
size: 7
hotplugged_size: 3
zones:
- hugepages: false
shared: false
mergeable: false
file: file
size: 4
hotplugged_size: 1
host_numa_node: 7
id: id
hotplug_size: 1
- hugepages: false
shared: false
mergeable: false
file: file
size: 4
hotplugged_size: 1
host_numa_node: 7
id: id
hotplug_size: 1
hotplug_size: 9
hotplug_method: acpi
balloon_size: 2
disks:
- path: path
num_queues: 3
num_queues: 1
readonly: false
iommu: false
queue_size: 2
queue_size: 6
vhost_socket: vhost_socket
vhost_user: false
direct: false
poll_queue: true
id: id
- path: path
num_queues: 3
num_queues: 1
readonly: false
iommu: false
queue_size: 2
queue_size: 6
vhost_socket: vhost_socket
vhost_user: false
direct: false
@@ -530,25 +610,50 @@ components:
id: id
kernel:
path: path
numa:
- distances:
- distance: 6
destination: 3
- distance: 6
destination: 3
cpus:
- 6
- 6
memory_zones:
- memory_zones
- memory_zones
guest_numa_id: 9
- distances:
- distance: 6
destination: 3
- distance: 6
destination: 3
cpus:
- 6
- 6
memory_zones:
- memory_zones
- memory_zones
guest_numa_id: 9
rng:
iommu: false
src: /dev/urandom
sgx_epc:
- prefault: false
size: 1
size: 8
- prefault: false
size: 1
size: 8
fs:
- num_queues: 1
queue_size: 1
cache_size: 1
- num_queues: 4
queue_size: 5
cache_size: 9
dax: true
tag: tag
socket: socket
id: id
- num_queues: 1
queue_size: 1
cache_size: 1
- num_queues: 4
queue_size: 5
cache_size: 9
dax: true
tag: tag
socket: socket
@@ -561,13 +666,13 @@ components:
pmem:
- mergeable: false
file: file
size: 6
size: 9
iommu: false
id: id
discard_writes: false
- mergeable: false
file: file
size: 6
size: 9
iommu: false
id: id
discard_writes: false
@@ -582,9 +687,9 @@ components:
path: path
net:
- tap: tap
num_queues: 4
num_queues: 7
iommu: false
queue_size: 7
queue_size: 1
vhost_socket: vhost_socket
vhost_user: false
ip: 192.168.249.1
@@ -592,9 +697,9 @@ components:
mac: mac
mask: 255.255.255.0
- tap: tap
num_queues: 4
num_queues: 7
iommu: false
queue_size: 7
queue_size: 1
vhost_socket: vhost_socket
vhost_user: false
ip: 192.168.249.1
@@ -644,11 +749,14 @@ components:
items:
$ref: '#/components/schemas/SgxEpcConfig'
type: array
numa:
items:
$ref: '#/components/schemas/NumaConfig'
type: array
iommu:
default: false
type: boolean
required:
- cmdline
- kernel
type: object
CpuTopology:
@@ -691,16 +799,77 @@ components:
- boot_vcpus
- max_vcpus
type: object
MemoryZoneConfig:
example:
hugepages: false
shared: false
mergeable: false
file: file
size: 4
hotplugged_size: 1
host_numa_node: 7
id: id
hotplug_size: 1
properties:
id:
type: string
size:
format: int64
type: integer
file:
type: string
mergeable:
default: false
type: boolean
shared:
default: false
type: boolean
hugepages:
default: false
type: boolean
host_numa_node:
format: uint32
type: integer
hotplug_size:
format: int64
type: integer
hotplugged_size:
format: int64
type: integer
required:
- id
- size
type: object
MemoryConfig:
example:
hugepages: false
shared: false
mergeable: false
balloon: false
file: file
size: 7
hotplugged_size: 3
zones:
- hugepages: false
shared: false
mergeable: false
file: file
size: 4
hotplugged_size: 1
host_numa_node: 7
id: id
hotplug_size: 1
- hugepages: false
shared: false
mergeable: false
file: file
size: 4
hotplugged_size: 1
host_numa_node: 7
id: id
hotplug_size: 1
hotplug_size: 9
hotplug_method: acpi
balloon_size: 2
properties:
size:
format: int64
@@ -708,8 +877,9 @@ components:
hotplug_size:
format: int64
type: integer
file:
type: string
hotplugged_size:
format: int64
type: integer
mergeable:
default: false
type: boolean
@@ -725,6 +895,13 @@ components:
balloon:
default: false
type: boolean
balloon_size:
format: uint64
type: integer
zones:
items:
$ref: '#/components/schemas/MemoryZoneConfig'
type: array
required:
- size
type: object
@@ -759,10 +936,10 @@ components:
DiskConfig:
example:
path: path
num_queues: 3
num_queues: 1
readonly: false
iommu: false
queue_size: 2
queue_size: 6
vhost_socket: vhost_socket
vhost_user: false
direct: false
@@ -802,9 +979,9 @@ components:
NetConfig:
example:
tap: tap
num_queues: 4
num_queues: 7
iommu: false
queue_size: 7
queue_size: 1
vhost_socket: vhost_socket
vhost_user: false
ip: 192.168.249.1
@@ -856,9 +1033,9 @@ components:
type: object
FsConfig:
example:
num_queues: 1
queue_size: 1
cache_size: 1
num_queues: 4
queue_size: 5
cache_size: 9
dax: true
tag: tag
socket: socket
@@ -890,7 +1067,7 @@ components:
example:
mergeable: false
file: file
size: 6
size: 9
iommu: false
id: id
discard_writes: false
@@ -978,7 +1155,7 @@ components:
SgxEpcConfig:
example:
prefault: false
size: 1
size: 8
properties:
size:
format: uint64
@@ -989,6 +1166,55 @@ components:
required:
- size
type: object
NumaDistance:
example:
distance: 6
destination: 3
properties:
destination:
format: uint32
type: integer
distance:
format: uint8
type: integer
required:
- destination
- distance
type: object
NumaConfig:
example:
distances:
- distance: 6
destination: 3
- distance: 6
destination: 3
cpus:
- 6
- 6
memory_zones:
- memory_zones
- memory_zones
guest_numa_id: 9
properties:
guest_numa_id:
format: uint32
type: integer
cpus:
items:
format: uint8
type: integer
type: array
distances:
items:
$ref: '#/components/schemas/NumaDistance'
type: array
memory_zones:
items:
type: string
type: array
required:
- guest_numa_id
type: object
VmResize:
example:
desired_vcpus: 1
@@ -1007,6 +1233,18 @@ components:
format: int64
type: integer
type: object
VmResizeZone:
example:
id: id
desired_ram: 0
properties:
id:
type: string
desired_ram:
description: desired memory zone size in bytes
format: int64
type: integer
type: object
VmAddDevice:
example:
path: path


@@ -1292,6 +1292,73 @@ func (a *DefaultApiService) VmResizePut(ctx _context.Context, vmResize VmResize)
return localVarHTTPResponse, nil
}
/*
VmResizeZonePut Resize a memory zone
* @param ctx _context.Context - for authentication, logging, cancellation, deadlines, tracing, etc. Passed from http.Request or context.Background().
* @param vmResizeZone The target size for the memory zone
*/
func (a *DefaultApiService) VmResizeZonePut(ctx _context.Context, vmResizeZone VmResizeZone) (*_nethttp.Response, error) {
var (
localVarHTTPMethod = _nethttp.MethodPut
localVarPostBody interface{}
localVarFormFileName string
localVarFileName string
localVarFileBytes []byte
)
// create path and map variables
localVarPath := a.client.cfg.BasePath + "/vm.resize-zone"
localVarHeaderParams := make(map[string]string)
localVarQueryParams := _neturl.Values{}
localVarFormParams := _neturl.Values{}
// to determine the Content-Type header
localVarHTTPContentTypes := []string{"application/json"}
// set Content-Type header
localVarHTTPContentType := selectHeaderContentType(localVarHTTPContentTypes)
if localVarHTTPContentType != "" {
localVarHeaderParams["Content-Type"] = localVarHTTPContentType
}
// to determine the Accept header
localVarHTTPHeaderAccepts := []string{}
// set Accept header
localVarHTTPHeaderAccept := selectHeaderAccept(localVarHTTPHeaderAccepts)
if localVarHTTPHeaderAccept != "" {
localVarHeaderParams["Accept"] = localVarHTTPHeaderAccept
}
// body params
localVarPostBody = &vmResizeZone
r, err := a.client.prepareRequest(ctx, localVarPath, localVarHTTPMethod, localVarPostBody, localVarHeaderParams, localVarQueryParams, localVarFormParams, localVarFormFileName, localVarFileName, localVarFileBytes)
if err != nil {
return nil, err
}
localVarHTTPResponse, err := a.client.callAPI(r)
if err != nil || localVarHTTPResponse == nil {
return localVarHTTPResponse, err
}
localVarBody, err := _ioutil.ReadAll(localVarHTTPResponse.Body)
localVarHTTPResponse.Body.Close()
localVarHTTPResponse.Body = _ioutil.NopCloser(_bytes.NewBuffer(localVarBody))
if err != nil {
return localVarHTTPResponse, err
}
if localVarHTTPResponse.StatusCode >= 300 {
newErr := GenericOpenAPIError{
body: localVarBody,
error: localVarHTTPResponse.Status,
}
return localVarHTTPResponse, newErr
}
return localVarHTTPResponse, nil
}
/*
VmRestorePut Restore a VM from a snapshot.
* @param ctx _context.Context - for authentication, logging, cancellation, deadlines, tracing, etc. Passed from http.Request or context.Background().

Some files were not shown because too many files have changed in this diff.