Compare commits

...

252 Commits

Author SHA1 Message Date
Fabiano Fidêncio
0f82291926 Merge pull request #1856 from fidencio/2.1.0-branch-bump
# Kata Containers 2.1.0
2021-05-14 19:59:06 +02:00
Fabiano Fidêncio
5d3610e25f release: Kata Containers 2.1.0
- stable-2.1 | The last round of backports before releasing 2.1.0
- back port: image_build: align image size to 128M for arm64
- stable-2.1 | runtime: make dialing timeout configurable
- stable-2.1 | agent: avoid reaping the exit signal of execute_hook in the reaper
- stable-2.1 | Get sandbox metrics cli
- packaging/kata-cleanup: add k3s containerd volume
- stable-2.1: First round of backports
- [backport]runtime: use s.ctx instead ctx for checking cancellation
- [2.1.0] kernel: configs: Open CONFIG_VIRTIO_MEM in x86_64 Linux kernel
- [2.1.0] Fix issue of virtio-mem

9266c246 rustjail: separated the propagation flags from mount flags
7086f91e runtime: sandbox delete should succeed after verifying sandbox state
0a7befa6 docs: Fix spell-check errors found after new text is discovered
eff70d2e docs: Remove horizontal ruler markers that disable spell checks
260f59df image_build: align image size to 128M for arm64
c0bdba23 runtime: make dialing timeout configurable
1b3cf2fb kata-monitor: export get stats for sandbox
59b9e5d0 kata-runtime: add `metrics` command
828a3048 agent: avoid reaping the exit signal of execute_hook in the reaper
d3690952 runtime: shim: dedup client, socket addr code
7f7c794d runtime: Short the shim-monitor path
3f1b7c91 cli: delete tracing code for kata-runtime binary
68cad377 agent: Set fixed NOFILE limit value for kata-agent
7c9067cc docs: add per-Pod Kata configurations for enable_pprof
dba86ef3 ci/install_yq.sh: install_yq: Check version before return
3883e4e2 kernel: configs: Open CONFIG_VIRTIO_MEM in x86_64 Linux kernel
79831faf runtime: use s.ctx instead ctx for checking cancellation
3212c7ae packaging/kata-cleanup: add k3s containerd volume
7f7c3fc8 qemu.go: qemu: resizeMemory: Fix virtio-mem resize overflow issue
c9053ea3 qemu.go: qemu: setupVirtioMem: let sizeMB be multiple of 2Mib

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-05-14 16:05:01 +02:00
Fabiano Fidêncio
ed01ac3e0c Merge pull request #1853 from fidencio/wip/last-round-of-backports-for-2.1.0
stable-2.1 | The last round of backports before releasing 2.1.0
2021-05-14 14:35:25 +02:00
fupan.lfp
9266c2460a rustjail: separated the propagation flags from mount flags
Since the propagation flags can't be combined with the
standard mount flags, and they should be used with remount,
it's better to split them from the standard mount flags.

Fixes: #1699

Signed-off-by: fupan.lfp <fupan.lfp@antgroup.com>
(cherry picked from commit e5fe572f51)
2021-05-14 09:42:00 +02:00
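
A rough Go sketch of the split this commit describes (illustrative only; the
actual change lives in the Rust rustjail crate, and all names here are
hypothetical):

    package sketch

    import "golang.org/x/sys/unix"

    // Propagation flags must be applied in their own remount-style
    // mount(2) call rather than OR'ed into the standard mount flags.
    const propagationFlags = unix.MS_SHARED | unix.MS_PRIVATE |
            unix.MS_SLAVE | unix.MS_UNBINDABLE

    // splitMountFlags separates the propagation flags from the
    // standard mount flags.
    func splitMountFlags(flags uintptr) (std, prop uintptr) {
            return flags &^ propagationFlags, flags & propagationFlags
    }
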
Peng Tao
7086f91e1f runtime: sandbox delete should succeed after verifying sandbox state
Otherwise we might block delete and create orphan containers.

Fixes: #1039

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
Signed-off-by: Eric Ernst <eric_ernst@apple.com>
(cherry picked from commit 35151f1786)
2021-05-14 09:41:38 +02:00
Christophe de Dinechin
0a7befa645 docs: Fix spell-check errors found after new text is discovered
The spell-checker script has some bugs that caused large chunks of text to not
be spell-checked at all (see #1793). The previous commit worked around this bug,
which exposed another bug:

The following source text:

    are discussions about using VM save and restore to
    give [`criu`](https://github.com/checkpoint-restore/criu)-like
    functionality, which might provide a solution

yields the surprising error below:

    WARNING: Word 'givelike': did you mean one of the following?: give like, give-like, wavelike

Apparently, an extra space is removed, which is another issue with the
spell-checking script. This case is somewhat contrived because of the URL link,
so for now, I opted for a creative rewrite, inserting the word "a" knowing
that "alike" is a valid word ;-)

Fixes: #1793

Signed-off-by: Christophe de Dinechin <dinechin@redhat.com>
(cherry picked from commit 5fdf617e7f)
2021-05-14 09:41:38 +02:00
Christophe de Dinechin
eff70d2eea docs: Remove horizontal ruler markers that disable spell checks
There is a bug in the CI script checking spelling that causes it
to skip any text that follows a horizontal ruler.
(https://github.com/kata-containers/tests/issues/3448)

Solution: replace one horizontal ruler marker with another that
does not trip the spell-checking script.

Fixes: #1793

Signed-off-by: Christophe de Dinechin <dinechin@redhat.com>
(cherry picked from commit 42425456e7)
2021-05-14 09:38:50 +02:00
Fabiano Fidêncio
dd26aa5838 Merge pull request #1840 from jongwu/stable-2.1_image_align
back port: image_build: align image size to 128M for arm64
2021-05-13 10:37:50 +02:00
Jianyong Wu
260f59df38 image_build: align image size to 128M for arm64
There is a mismatch between QEMU and the kernel in the memory-alignment
check performed during memory hotplug. Both QEMU and the kernel check the
start address alignment, but QEMU requires 2M while the kernel requires
128M, which leads to a failure when hotplugging memory.

Currently, the kata image is an nvdimm device, which is plugged into the VM as
a dimm. If another dimm is plugged, it will reside on top of that nvdimm.
So, the start address of the second dimm may not pass the alignment
check in the kernel if the nvdimm size doesn't align with 128M.

There are 3 ways to address this issue, I think:
1. Fix the alignment size in the kernel according to qemu. I think people
in the linux kernel community will not accept it.
2. Do the alignment check in qemu and force the start address of hotplug
into alignment with 128M, which means there may be holes between memory blocks.
3. Obey the rule on the user end, which means fixing it in kata.

I think the second one is the best, but I can't do that for some reason.
Thus, the last one is the choice here.

Fixes: #1769
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
2021-05-13 10:09:25 +08:00
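
The fix amounts to rounding the image size up to the kernel's 128M hotplug
granularity. A minimal sketch of that arithmetic, written in Go for
illustration (the actual change is in the image build script):

    package sketch

    const memSectionSize = 128 << 20 // 128 MiB, the kernel's hotplug section size

    // alignUp rounds size up to the next multiple of align, which must
    // be a power of two (128 MiB is).
    func alignUp(size, align uint64) uint64 {
            return (size + align - 1) &^ (align - 1)
    }

    // e.g. alignUp(130<<20, memSectionSize) == 256<<20
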
Chelsea Mafrica
9a32a3e16d Merge pull request #1835 from snir911/backport_configure_timeout
stable-2.1 | runtime: make dialing timeout configurable
2021-05-12 13:14:37 -07:00
Fabiano Fidêncio
123f7d53cb Merge pull request #1830 from Tim-Zhang/fix-reap-for-stable-2.1
stable-2.1 | agent: avoid reaping the exit signal of execute_hook in the reaper
2021-05-12 20:26:42 +02:00
Fabiano Fidêncio
aa213fdc28 Merge pull request #1833 from fidencio/wip/stable-2.1-backport-of-1816
stable-2.1 | Get sandbox metrics cli
2021-05-12 19:34:20 +02:00
Snir Sheriber
c0bdba2350 runtime: make dialing timeout configurable
Allow setting the dialing timeout in configuration.toml;
the default is 30s.

Fixes: #1789
(cherry-picked 01b56d6cbf)
Signed-off-by: Snir Sheriber <ssheribe@redhat.com>
2021-05-12 14:17:34 +03:00
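
A hedged Go sketch of the behavior described (function and parameter names
are assumptions for illustration, not the actual runtime code):

    package sketch

    import (
            "net"
            "time"
    )

    // dialAgent dials the agent socket with a configurable timeout.
    // dialTimeoutSecs would come from configuration.toml; 0 means
    // "use the 30s default" mentioned in the commit message.
    func dialAgent(agentSocket string, dialTimeoutSecs uint32) (net.Conn, error) {
            timeout := 30 * time.Second
            if dialTimeoutSecs > 0 {
                    timeout = time.Duration(dialTimeoutSecs) * time.Second
            }
            return net.DialTimeout("unix", agentSocket, timeout)
    }
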
Eric Ernst
1b3cf2fb7d kata-monitor: export get stats for sandbox
Gathering stats for a given sandbox is pretty useful; let's export a
function from katamonitor pkg to do this.

Signed-off-by: Eric Ernst <eric_ernst@apple.com>
(cherry picked from commit 3787306107)
2021-05-12 11:44:58 +02:00
Eric Ernst
59b9e5d0f8 kata-runtime: add metrics command
For easier debugging, let's add a subcommand to kata-runtime for gathering
metrics associated with a given sandbox:

kata-runtime metrics --sandbox-id foobar

Fixes: #1815

Signed-off-by: Eric Ernst <eric_ernst@apple.com>
(cherry picked from commit 8068a4692f)
2021-05-12 11:44:53 +02:00
Tim Zhang
828a304883 agent: avoid reaping the exit signal of execute_hook in the reaper
Fixes: #1826

Signed-off-by: Tim Zhang <tim@hyper.sh>
2021-05-12 16:33:44 +08:00
Fabiano Fidêncio
70734dfa17 Merge pull request #1803 from nubificus/stable-2.1
packaging/kata-cleanup: add k3s containerd volume
2021-05-11 19:38:57 +02:00
Fabiano Fidêncio
f170df6201 Merge pull request #1821 from fidencio/wip/first-round-of-backports
stable-2.1: First round of backports
2021-05-11 08:52:18 +02:00
Eric Ernst
d3690952e6 runtime: shim: dedup client, socket addr code
(1) Add an accessor function, SocketAddress, to the shim-v2 code for
determining the shim's abstract domain socket address, given the sandbox
ID.

(2) In kata monitor, create a function, BuildShimClient, for obtaining the appropriate
http.Client for communicating with the shim's monitoring endpoint.

(3) Update the kata CLI and kata-monitor code to make use of these.

(4) Migrate some kata monitor methods to be functions, in order to ease
future reuse.

(5) Drop the unused namespace parameter from functions where it is no longer needed.

Signed-off-by: Eric Ernst <eric_ernst@apple.com>
(cherry picked from commit 3caed6f88d)
Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-05-10 15:35:53 +02:00
Fabiano Fidêncio
7f7c794da4 runtime: Short the shim-monitor path
Instead of having something like
"/containerd-shim/$namespace/$sandboxID/shim-monitor.sock", let's change
the approach to:
* create the file in a more neutral location "/run/vc", instead of
  "/containerd-shim";
* drop the namespace, as the sandboxID should be unique;
* remove ".sock" from the socket name.

This will result in a name that looks like:
"/run/vc/$sandboxID/shim-monitor"

Fixes: #497

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
(cherry picked from commit 4bc006c8a4)
Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-05-10 15:35:47 +02:00
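
Taken together with the previous commit, a simplified Go sketch of what the
two helpers might look like (hypothetical; the real code also deals with
abstract sockets and error handling):

    package sketch

    import (
            "context"
            "net"
            "net/http"
            "path/filepath"
            "time"
    )

    // SocketAddress returns the shim-monitor socket path for a sandbox:
    // namespace-free and rooted under /run/vc.
    func SocketAddress(sandboxID string) string {
            return filepath.Join("/run", "vc", sandboxID, "shim-monitor")
    }

    // BuildShimClient returns an http.Client that talks to the shim's
    // monitoring endpoint over that unix socket.
    func BuildShimClient(sandboxID string, timeout time.Duration) *http.Client {
            return &http.Client{
                    Timeout: timeout,
                    Transport: &http.Transport{
                            DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                                    var d net.Dialer
                                    return d.DialContext(ctx, "unix", SocketAddress(sandboxID))
                            },
                    },
            }
    }
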
bin
3f1b7c9127 cli: delete tracing code for kata-runtime binary
There are no pod/container operations in the kata-runtime binary, so
tracing in this package is meaningless.

Fixes: #1748

Signed-off-by: bin <bin@hyper.sh>
(cherry picked from commit 13c23fec11)
Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-05-10 15:35:36 +02:00
Snir Sheriber
68cad37720 agent: Set fixed NOFILE limit value for kata-agent
Some applications may fail if the NOFILE limit is set to unlimited.
Although in some environments this value is explicitly overridden,
let's set it to a saner value in case it isn't.

Fixes #1715
Signed-off-by: Snir Sheriber <ssheribe@redhat.com>
(cherry picked from commit a188577ebf)
Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-05-10 15:34:28 +02:00
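
The change itself is in the Rust agent; sketched here in Go for illustration
(the concrete limit value below is an assumption, not the one the agent uses):

    package sketch

    import "golang.org/x/sys/unix"

    // Pin RLIMIT_NOFILE to a fixed, sane value instead of "unlimited".
    const nofileLimit = 1048576 // hypothetical value

    func setNofileLimit() error {
            rl := unix.Rlimit{Cur: nofileLimit, Max: nofileLimit}
            return unix.Setrlimit(unix.RLIMIT_NOFILE, &rl)
    }
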
bin
7c9067cc9d docs: add per-Pod Kata configurations for enable_pprof
Enabling enable_pprof for individual pods is now supported,
but not documented.

This commit adds per-Pod Kata configuration for `enable_pprof`
to the file `docs/how-to/how-to-set-sandbox-config-kata.md`.

Fixes: #1744

Signed-off-by: bin <bin@hyper.sh>
(cherry picked from commit 95e54e3f48)
Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-05-10 15:34:05 +02:00
Hui Zhu
dba86ef31a ci/install_yq.sh: install_yq: Check version before return
Check the yq version before returning.

Fixes: #1776

Signed-off-by: Hui Zhu <teawater@antfin.com>
(cherry picked from commit d8896157df)
Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-05-10 15:33:33 +02:00
Tim Zhang
0e2df80bda Merge pull request #1814 from liubin/fix/1804-select-sandbox-ctx
[backport]runtime: use s.ctx instead ctx for checking cancellation
2021-05-07 19:43:14 +08:00
Bin Liu
8c4e187049 Merge pull request #1813 from teawater/open_vm
[2.1.0] kernel: configs: Open CONFIG_VIRTIO_MEM in x86_64 Linux kernel
2021-05-07 19:31:23 +08:00
Hui Zhu
3883e4e290 kernel: configs: Open CONFIG_VIRTIO_MEM in x86_64 Linux kernel
Enable CONFIG_VIRTIO_MEM in the x86_64 Linux kernel.

Fixes: #1798

Signed-off-by: Hui Zhu <teawater@antfin.com>
2021-05-07 15:19:06 +08:00
Fabiano Fidêncio
3bcdc26008 Merge pull request #1812 from teawater/fix_vm
[2.1.0] Fix issue of virtio-mem
2021-05-07 08:14:19 +02:00
bin
79831fafaf runtime: use s.ctx instead ctx for checking cancellation
s.ctx should be used for checking cancellation, and the
local ctx is used for tracing.

Fixes: #1804

Signed-off-by: bin <bin@hyper.sh>
2021-05-06 17:22:53 +08:00
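
The distinction, as a minimal Go sketch (simplified; not the actual
virtcontainers code):

    package sketch

    import "context"

    type sandbox struct {
            ctx context.Context // tracks the sandbox's lifetime
    }

    // checkCancelled must consult s.ctx: a per-request ctx only carries
    // the tracing span and may be cancelled for unrelated reasons.
    func (s *sandbox) checkCancelled() error {
            select {
            case <-s.ctx.Done():
                    return s.ctx.Err()
            default:
                    return nil
            }
    }
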
Orestis Lagkas Nikolos
3212c7ae00 packaging/kata-cleanup: add k3s containerd volume
kata-deploy cleanup expects to find the containerd configuration
in /etc/containerd/config.toml. In the case of k3s, mount the k3s
containerd config as a volume.

Original PR #1802

Fixes #1801

Signed-off-by: Orestis Lagkas Nikolos <olagkasn@nubificus.co.uk>
2021-05-06 03:36:38 -05:00
Hui Zhu
7f7c3fc8ec qemu.go: qemu: resizeMemory: Fix virtio-mem resize overflow issue
This commit changes sizeByte from uint32 to uint64 to fix an overflow issue.

Fixes: #1796

Signed-off-by: Hui Zhu <teawater@antfin.com>
2021-05-06 14:13:50 +08:00
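
A small runnable Go example of the overflow being fixed:

    package main

    import "fmt"

    func main() {
            var sizeMB uint32 = 5120 // a 5 GiB resize request

            overflowed := sizeMB << 20       // 32-bit math wraps past 4 GiB
            sizeByte := uint64(sizeMB) << 20 // widen first, as the fix does

            fmt.Println(overflowed, sizeByte) // 1073741824 5368709120
    }
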
Hui Zhu
c9053ea3fb qemu.go: qemu: setupVirtioMem: let sizeMB be multiple of 2Mib
Got:
FATA[0000] run pod sandbox: rpc error: code = Unknown desc = failed to
create containerd task: Add 189759MB virtio-mem-pci fail QMP command
failed: backend memory size must be multiple of 0x200000: unknown

This commit makes sizeMB a multiple of 2MiB to fix the issue.

Fixes: #1796

Signed-off-by: Hui Zhu <teawater@antfin.com>
2021-05-06 14:13:48 +08:00
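
Since the backend size is computed in MiB, keeping sizeMB even makes the byte
count a multiple of 0x200000, as QMP requires. A sketch (the rounding
direction here is an assumption):

    package sketch

    // evenMB rounds a size in MiB down to an even value so that the
    // resulting byte count is a multiple of 2 MiB (0x200000).
    func evenMB(sizeMB uint64) uint64 {
            return sizeMB &^ 1
    }
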
Fabiano Fidêncio
d0eda5ecfd Merge pull request #1786 from fidencio/2.1.0-rc0-branch-bump
# Kata Containers 2.1.0-rc0
2021-05-01 01:13:22 +02:00
Fabiano Fidêncio
799433d863 release: Kata Containers 2.1.0-rc0
- Update kata-deploy to use CRI-O drop-in files
- Update dependencies versions
- fix build kernel shell error when setup with `-f`
- virtcontainers: Fix virtio-fs on s390x
- Runtimeclass updates
- versions: Upgrade to cloud-hypervisor v15.0
- clh: return error if apiSocketPath failed
- runtime: fix dropped error
- agent: Update seccomp configuration for errnoRet and flags
- Fix the issue that sandbox size is not right after update
- docs: Document limitation regarding subpaths
- qemu: kill virtiofsd if failure to start VMM
- runtime/virtcontainers: Fix typo on qmp error msg
- cli: delete not used files
- runtime: delete not used function parameter builtIn
- add io.katacontainers.config.hypervisor.virtio_fs_extra_args handling
- Entropy source annotation
- runtime: Fix stdout/stderr output from container being truncated
- fix the issue of missing set fsGroup for EphemeralStorage
- qemu: Fix assertion failure on shutdown
- Assorted clippy fixes for Rust agent
- agent: use channel instead of pipe(2) to send exit signal of process
- Improve agent shutdown handling
- Enable virtio-fs on s390x
- block: Generate PCI path for virtio-blk devices on clh
- runtime: Disable trace for healthcheck
- agent/rustjail: Fix accidental damage from tokio conversion
- cli: Use genericGetExpectedHostDetails on s390x
- runtime/tests: Change "moo FAILURE" message
- Update the information about the release process
- remove ProcessListContainer API

2047f26f kata-deploy: Adapt CRI-O config to use drop-in files
8de2f914 kata-deploy: Rely on CRIO default's values for manage_ns_lifecycle
ea9936e0 versions: Bump runc to v1.0.0-rc93
9c333b2c versions: Bump CRI-O version to 1.21.x
e33f207b versions: Bump critools version to 1.21.0
8e5df723 versions: Bump kubernetes version to 1.21.0
d15f84c9 versions: Remove Docker entry
516f4ec0 versions: Remove OpenShift entry
be101ac1 versions: Remove CRI-O meta dependencies
1ca6bedf versions: Upgrade to cloud-hypervisor v15.0
906c0df4 kata-deploy: don't update worker pool nodes
3ee61776 virtcontainers: Enable virtio-fs on s390x
8385ff95 runtime: Re-vendor GoVMM
adba4532 virtcontainers: Revert "virtcontainers: Allow s390x appendVhostUserDevice"
ede078bc kata-deploy: aks-test: bump kubernetes/containerd
484af12b kata-deploy: update to handle new runtimeclass path
05c224c3 runtimeclass: add nodeSelector
ee7de8ab tools: fix build kernel shell error
7d5a4252 docs: Document limitation regarding subpaths
36776408 runtime/virtcontainers: Fix typo on qmp error msg
12a65d23 runtimeclass: drop stale runtimeclass definitions
0787ea80 cgroupsCreate: not set resources to c.config.Resources
831224aa Sandbox: Fix ContainerConfig ptr in CreateContainer and createContainers
a57c8ab1 qemu: kill virtiofsd if failure to start VMM
ff2b9e54 cli: delete not used files
0d0a520d clh: return error if apiSocketPath failed
fc6bb01a runtime: fix dropped error
30ff6ee8 runtime: handle io.katacontainers.config.hypervisor.virtio_fs_extra_args
677f0d99 runtime: delete not used function parameter builtIn
dcb9f403 config: Protect annotation for entropy_source
f4c26aad agent: fix the issue of missing set fsGroup for EphemeralStorage
628d55bf kata-agent: fix the issue of fsGroup missing
0405beb2 agent: Remove unused Default implementation for NamespaceType
7b83b7ec agent/uevent: Better initialize Uevent in test
b0190a40 agent: Use vec![] macro rather than init-then-push
1c43245e agent/device: Remove unneeded Result<> wrappers from uev matchers
e41cdb8b agent: Use str::is_empty() method in config::get_string_value()
2377c097 agent: Use CamelCase for NamespaceType values
75eca6d5 agent/rustjail: Clean up error path in execute_hook()s async task
6ce1e56d agent/rustjail: Remove an unnecessary PathBuf
3c4485ec agent/rustjail: Clean up some static definitions with vec! macro
eaec5a6c agent/oci: Change name case to make clippy happy
3f5fdae0 agent/rustjail: (trivial) Clean up comment on process_grpc_to_oci()
210f39a4 agent/rustjail: Simplify renaming imports
d4a54137 runtime: Fix stdout/stderr output from container being truncated
8ecf8e5c agent: use channel instead of pipe to send exit signal of process
81c5ff12 agent: Update seccomp configuration for errnoRet and flags
8a33bd4c qemu: Fix assertion failure on shutdown
7f609113 virtcontainers: Allow s390x appendVhostUserDevice
67ac4f45 runtime: update GoVMM for memory backend support
6577b01a agent/rustjail: Fix accidental damage from tokio conversion
de2631e7 utils: Make WaitLocalProcess safer
9256e590 shutdown: Don't sever console watcher too early
51ab8700 utils: Improve WaitLocalProcess
507ef636 utils: Add waitLocalProcess function
1d5098de agent/block: Generate PCI path for virtio-blk devices on clh
e7c97f0f runtime/tests: Change "moo FAILURE" message
8bc53498 docs: Simplify the repo bumping section
8a47b05a docs: Mention that an app token should be used with hub
d434c2e9 docs: OBS account is not required anymore
543f9da3 runtime: Disable trace for healthcheck
421439c6 API: remove ProcessListContainer/ListProcesses
1366f0fb cli: Use genericGetExpectedHostDetails on s390x

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-05-01 00:14:17 +02:00
Fabiano Fidêncio
ebca056ef8 Merge pull request #1782 from fidencio/wip/kata-deploy-update-crio-config
Update kata-deploy to use CRI-O drop-in files
2021-05-01 00:09:51 +02:00
Fabiano Fidêncio
239cc51199 Merge pull request #1689 from fidencio/wip/update-dependencies-versions
Update dependencies versions
2021-05-01 00:01:45 +02:00
Fabiano Fidêncio
2047f26fa3 kata-deploy: Adapt CRI-O config to use drop-in files
Using a drop-in file greatly simplifies the deployment and maintenance
of the CRI-O configuration, and all CRI-O versions meant to be used with
the currently supported versions of Kubernetes support the drop-in
configuration file.

Depends-on: github.com/kata-containers/kata-containers#1689
Fixes #1781

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-04-30 23:14:19 +02:00
Fabiano Fidêncio
8de2f914ab kata-deploy: Rely on CRIO default's values for manage_ns_lifecycle
manage_ns_lifecycle (previously known as manage_network_ns_lifecycle)
defaults to `true` for all CRI-O versions meant to be used with the
kubernetes versions that are still supported / haven't reached their EOL.

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-04-30 23:14:19 +02:00
Fabiano Fidêncio
d11d0796e1 Merge pull request #1766 from zyt312074545/fix_build_kernel_shell
fix build kernel shell error when setup with `-f`
2021-04-30 19:37:45 +02:00
Eric Ernst
1c0d3afd55 Merge pull request #1754 from Jakob-Naucke/fix-virtiofs-s390x
virtcontainers: Fix virtio-fs on s390x
2021-04-30 09:28:12 -07:00
Fabiano Fidêncio
04660b1af2 Merge pull request #1763 from egernst/runtimeclass-updates
Runtimeclass updates
2021-04-30 18:21:33 +02:00
Fabiano Fidêncio
2e0221125a Merge pull request #1780 from likebreath/0429/clh_v15.0
versions: Upgrade to cloud-hypervisor v15.0
2021-04-30 18:20:36 +02:00
Fabiano Fidêncio
29fdfcfebc Merge pull request #1725 from liubin/liubin/1724-not-return-if-get-api-socket-failed
clh: return error if apiSocketPath failed
2021-04-30 18:16:45 +02:00
Fabiano Fidêncio
dc23adcd50 Merge pull request #1743 from alrs/fix-runtime-err
runtime: fix dropped error
2021-04-30 18:15:22 +02:00
Fabiano Fidêncio
ea9936e004 versions: Bump runc to v1.0.0-rc93
Let's bump runc to its latest release.

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-04-30 11:43:18 +02:00
Fabiano Fidêncio
9c333b2c79 versions: Bump CRI-O version to 1.21.x
For CRI-O, let's rely on the "release-1.21" branch, as this is the
branch getting backports for the 1.21.x cycle.

Relying on the branch avoids our needs to keep bumping it every now and
then.

Fixes: #1688

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-04-30 11:42:42 +02:00
Fabiano Fidêncio
e33f207b7d versions: Bump critools version to 1.21.0
Let's bump the critools version to match the kubernetes version.

Fixes: #1686

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-04-30 11:42:42 +02:00
Fabiano Fidêncio
8e5df72302 versions: Bump kubernetes version to 1.21.0
1.21.0 is the latest k8s release.

Fixes: #1685

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-04-30 11:42:42 +02:00
Fabiano Fidêncio
d15f84c956 versions: Remove Docker entry
Since https://github.com/kata-containers/tests/pull/3272, quite some
time ago, we no longer depend on a specific version of docker.

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-04-30 11:42:42 +02:00
Fabiano Fidêncio
516f4ec06e versions: Remove OpenShift entry
Tests between Kata Containers and OpenShift are already being done via
the OpenShift CI.  This entry is only related to OpenShift 3.x,
which is no longer tested via our CI in any way.

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-04-30 11:42:42 +02:00
Fabiano Fidêncio
be101ac1ef versions: Remove CRI-O meta dependencies
CRI-O meta dependencies (crictl and openshift) are a leftover from the
OCP 3.x era.  We no longer need those, as Kata Containers is onboard
with the OpenShift CI, and we don't test OCP 3.x in any way nowadays.

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-04-30 11:42:42 +02:00
Fabiano Fidêncio
bd486f7bf3 Merge pull request #1720 from ManaSugi/update-seccomp-spec
agent: Update seccomp configuration for errnoRet and flags
2021-04-30 10:52:42 +02:00
Bo Chen
1ca6bedf3e versions: Upgrade to cloud-hypervisor v15.0
Quotes from the cloud-hypervisor release v15.0:

This release is the first in a new version numbering scheme to represent that
we believe Cloud Hypervisor is maturing and entering a period of stability.
With this new release we are beginning our new stability guarantees.

Other highlights from the latest release include: 1) Network device rate
limiting; 2) Support for runtime control of `virtio-net` guest offload;
3) `--api-socket` supports file descriptor parameter; 4) Bug fixes on
`virtio-pmem`, PCI BARs alignment, `virtio-net`, etc.; 5) Deprecation of
the "LinuxBoot" protocol for ELF and bzImage in the coming release.

Details can be found: https://github.com/cloud-hypervisor/cloud-hypervisor/releases/tag/v15.0

Note: The client code of cloud-hypervisor's OpenAPI is automatically
generated by `openapi-generator` [1-2]. As the API changes do not
impact usage in Kata, no additional changes in kata's runtime are
needed to work with the current version of cloud-hypervisor.

[1] https://github.com/OpenAPITools/openapi-generator
[2] https://github.com/kata-containers/kata-containers/blob/main/src/runtime/virtcontainers/pkg/cloud-hypervisor/README.md

Fixes: #1779

Signed-off-by: Bo Chen <chen.bo@intel.com>
2021-04-29 10:56:22 -07:00
Eric Ernst
906c0df405 kata-deploy: don't update worker pool nodes
Our cluster's life is shorter than the time it takes to update nodes; for
better stability of the kata-deploy test, let's not update the nodes.

Signed-off-by: Eric Ernst <eric_ernst@apple.com>
2021-04-29 09:24:51 -07:00
Jakob Naucke
3ee61776d6 virtcontainers: Enable virtio-fs on s390x
Allow and configure vhost-user-fs devices (virtio-fs) on s390x. As a
consequence, appendVhostUserDevice now takes a context, which affects
its signature for other architectures.

Fixes: #1753

Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-04-29 09:54:08 +02:00
Jakob Naucke
8385ff9554 runtime: Re-vendor GoVMM
for vhost-user-fs-ccw devno support

shortlog:
f0e9a35 Merge pull request #171 from Jakob-Naucke/fix-virtiofs-s390x
abd3c7e qemu: VhostUserDevice CCW device numbers
3eaeda7 qemu: Refactor vhostuserDev.QemuParams
7183b12 Merge pull request #166 from kata-containers/egernst-patch-1
092293f Merge pull request #169 from QiuMike/master
511cf58 Fix qemu commandline issue with empty romfile
8ba62b0 Merge pull request #164 from devimc/2021-03-30/tdxSupport
b3eac95 qmp: remove frequent, chatty log
3141894 qemu: add support for tdx-guest object

Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-04-29 09:53:54 +02:00
Jakob Naucke
adba4532a4 virtcontainers: Revert "virtcontainers: Allow s390x appendVhostUserDevice"
This reverts commit 7f60911333.
The patch allowed other vhost-user devices besides FS that are not
supported on s390x, and it failed to attach a CCW device number, which
made it impossible to use further devices after vhost-user-fs-ccw.

Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-04-29 09:43:33 +02:00
Eric Ernst
b20dff8027 Merge pull request #1759 from kata-containers/fix_update
Fix the issue that sandbox size is not right after update
2021-04-28 14:48:24 -07:00
Eric Ernst
ede078bc85 kata-deploy: aks-test: bump kubernetes/containerd
Bump to 1.20, latest aks-engine

Signed-off-by: Eric Ernst <eric_ernst@apple.com>
2021-04-28 10:41:28 -07:00
Eric Ernst
484af12b54 kata-deploy: update to handle new runtimeclass path
Runtimeclass paths changed. Update the kata-deploy action's test
accordingly.

Signed-off-by: Eric Ernst <eric_ernst@apple.com>
2021-04-28 10:41:28 -07:00
Eric Ernst
05c224c3d4 runtimeclass: add nodeSelector
To ensure we run on nodes which have Kata installed, let's add the
nodeSelector to the runtimeclass definition, and have it match the label
that we applied during installation of the kata artifacts.

Signed-off-by: Eric Ernst <eric_ernst@apple.com>
2021-04-28 10:41:28 -07:00
zyt312074545
ee7de8abcc tools: fix build kernel shell error
When the build-kernel script is set up with -f, the patches directory
path is not found because patches_path is empty; fix this error.

Fixes: #1768

Signed-off-by: zyt312074545 <zyt312074545@hotmail.com>
2021-04-28 12:54:18 +00:00
Fabiano Fidêncio
783f5aba68 Merge pull request #1733 from c3d/issue/1728-subpath-limitation
docs: Document limitation regarding subpaths
2021-04-28 08:27:58 +02:00
Eric Ernst
23a8179184 Merge pull request #1756 from egernst/leave-no-virtiofs-behind
qemu: kill virtiofsd if failure to start VMM
2021-04-27 17:16:33 -07:00
Fabiano Fidêncio
cd1c1ae239 Merge pull request #1765 from wainersm/qemu_1
runtime/virtcontainers: Fix typo on qmp error msg
2021-04-27 21:23:32 +02:00
Christophe de Dinechin
7d5a4252b6 docs: Document limitation regarding subpaths
Subpaths are not supported at the moment. Document that fact.

Fixes: #1728

Signed-off-by: Christophe de Dinechin <christophe@dinechin.org>
2021-04-27 18:53:45 +02:00
Wainer dos Santos Moschetta
3677640811 runtime/virtcontainers: Fix typo on qmp error msg
"negotiate" was misspelled on qemu's qmp error message.

Fixes #1764
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
2021-04-27 11:52:42 -04:00
Eric Ernst
12a65d2359 runtimeclass: drop stale runtimeclass definitions
- 1.13/1.14 are very old now; let's drop them
- move from k8s-1.18 to just a runtimeclasses directory
- update docs to reflect the new reality

Fixes: #1425

Signed-off-by: Eric Ernst <eric_ernst@apple.com>
2021-04-27 08:06:09 -07:00
Hui Zhu
0787ea8073 cgroupsCreate: not set resources to c.config.Resources
cgroupsCreate keeps only the CPU resources information, not the
others.
Setting it to c.config.Resources would clear most of the container's
resources.

This commit removes the assignment to handle the issue.

Fixes: #1758

Signed-off-by: Hui Zhu <teawater@antfin.com>
2021-04-27 16:44:30 +08:00
Hui Zhu
831224aa22 Sandbox: Fix ContainerConfig ptr in CreateContainer and createContainers
The pointer sent to newContainer in CreateContainer and
createContainers is not the pointer to the address in
s.config.Containers.

This commit fixes this issue.

Fixes: #1758

Signed-off-by: Hui Zhu <teawater@antfin.com>
2021-04-27 16:44:22 +08:00
Eric Ernst
a57c8ab1be qemu: kill virtiofsd if failure to start VMM
If the QEMU VMM fails to launch, we currently fail to kill virtiofsd,
resulting in leftover processes running on the host. Let's make sure we
kill these, and explicitly clean up the virtiofs socket on the
filesystem.

Ideally we'll migrate QEMU to utilize the same virtiofsd interface that
CLH uses, but let's fix this bug as a first step.

Fixes: #1755

Signed-off-by: Eric Ernst <eric_ernst@apple.com>
2021-04-26 21:07:20 -07:00
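
The shape of the fix, sketched in Go with hypothetical helpers:

    package sketch

    import "os"

    // startWithCleanup launches the VMM after virtiofsd, tearing
    // virtiofsd down again (and removing its socket) if the VMM fails.
    func startWithCleanup(startVirtiofsd, launchVMM func() error,
            stopVirtiofsd func(), socketPath string) error {
            if err := startVirtiofsd(); err != nil {
                    return err
            }
            if err := launchVMM(); err != nil {
                    stopVirtiofsd()       // don't leave a stray virtiofsd on the host
                    os.Remove(socketPath) // and clean up the virtiofs socket
                    return err
            }
            return nil
    }
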
Fabiano Fidêncio
fb30c58847 Merge pull request #1747 from liubin/fix/1746-deleted-not-used-files
cli: delete not used files
2021-04-26 09:57:19 +02:00
bin
ff2b9e5478 cli: delete not used files
Delete two files that are not used anymore:
- src/runtime/cli/console.go
- src/runtime/cli/console_test.go

Fixes: #1746

Signed-off-by: bin <bin@hyper.sh>
2021-04-25 17:46:56 +08:00
bin
0d0a520d42 clh: return error if apiSocketPath failed
If apiSocketPath fails, we should return the error, not nil.

Fixes: #1724

Signed-off-by: bin <bin@hyper.sh>
2021-04-25 10:25:42 +08:00
Lars Lehtonen
fc6bb01a7f runtime: fix dropped error
Fixes: #212

Signed-off-by: Lars Lehtonen <lars.lehtonen@gmail.com>
2021-04-24 14:18:50 -07:00
Chelsea Mafrica
8587e3a00b Merge pull request #1732 from liubin/fix/1731-delete-builtin-parameter
runtime: delete not used function parameter builtIn
2021-04-23 18:30:55 -07:00
Fabiano Fidêncio
fe2311cd4c Merge pull request #1739 from pmores/virtiofsd-extra-args-annotation-handling
add io.katacontainers.config.hypervisor.virtio_fs_extra_args handling
2021-04-23 23:22:01 +02:00
Pavel Mores
30ff6ee88b runtime: handle io.katacontainers.config.hypervisor.virtio_fs_extra_args
Users can specify extra arguments for virtiofsd in a pod spec using the
io.katacontainers.config.hypervisor.virtio_fs_extra_args annotation.
However, this annotation was so far ignored by the runtime.  This commit
fixes the issue by processing the annotation value (if present) and
translating it to the corresponding hypervisor configuration item.

Fixes #1523

Signed-off-by: Pavel Mores <pmores@redhat.com>
2021-04-23 21:09:28 +02:00
Fabiano Fidêncio
5eaf7a9982 Merge pull request #1049 from c3d/feature/1043-entropy-source-annotation
Entropy source annotation
2021-04-23 20:16:11 +02:00
bin
677f0d9904 runtime: delete not used function parameter builtIn
Parameter builtIn is not used in the updateRuntimeConfigAgent function;
delete it from the updateRuntimeConfigAgent and LoadConfiguration
function signatures.

Fixes: #1731

Signed-off-by: bin <bin@hyper.sh>
2021-04-23 17:42:42 +08:00
Fabiano Fidêncio
a4fffa1f22 Merge pull request #1714 from littlejawa/issue_1713
runtime: Fix stdout/stderr output from container being truncated
2021-04-22 23:00:47 +02:00
Fabiano Fidêncio
b41d9a99b4 Merge pull request #1703 from lifupan/main_fix
fix the issue of missing set fsGroup for EphemeralStorage
2021-04-22 20:29:36 +02:00
Christophe de Dinechin
dcb9f40394 config: Protect annotation for entropy_source
It would be undesirable to be given an annotation like "/dev/null".
Filter out bad annotation values.

Fixes: #1043

Suggested-by: James O. D. Hunt <james.o.hunt@intel.com>
Signed-off-by: Christophe de Dinechin <dinechin@redhat.com>
2021-04-22 15:26:40 +02:00
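
A sketch of such a filter in Go (the allow-list pattern below is an
assumption for illustration, not the actual rule):

    package sketch

    import (
            "fmt"
            "regexp"
    )

    // Only accept known-good entropy sources; reject values like /dev/null.
    var validEntropySource = regexp.MustCompile(`^/dev/(u?random|hwrng)$`)

    func checkEntropySourceAnnotation(value string) error {
            if !validEntropySource.MatchString(value) {
                    return fmt.Errorf("invalid entropy source %q", value)
            }
            return nil
    }
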
fupan.lfp
f4c26aad00 agent: fix the issue of missing set fsGroup for EphemeralStorage
For a k8s emptyDir volume, a specific fsGroup may
be set; the guest should get this fsGroup
from the runtime and set it properly on the EphemeralStorage
volume in the guest.

Fixes: #1580

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2021-04-22 21:09:02 +08:00
fupan.lfp
628d55bf4c kata-agent: fix the issue of fsGroup missing
For a k8s emptyDir volume, a specific fsGroup may
be set; the runtime should pass this fsGroup
for EphemeralStorage to the guest, where it is set properly on
the emptyDir volume.

Fixes: #1580

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2021-04-22 21:08:52 +08:00
Fabiano Fidêncio
14dca3fe1f Merge pull request #1718 from egernst/qemu-assert-fix
qemu: Fix assertion failure on shutdown
2021-04-22 12:57:25 +02:00
David Gibson
e91591fff2 Merge pull request #1701 from dgibson/clippy
Assorted clippy fixes for Rust agent
2021-04-22 20:36:49 +10:00
Bin Liu
db4fbac1d3 Merge pull request #1722 from Tim-Zhang/use-channle-for-process-exit
agent: use channel instead of pipe(2) to send exit signal of process
2021-04-22 15:27:36 +08:00
David Gibson
0405beb2d8 agent: Remove unused Default implementation for NamespaceType
Currently we implement the Default trait for NamespaceType.  It doesn't
really make sense to have a default for this type though - you really need
to know what type of namespace you're setting.  In fact the Default
implementation is never used, so we can just drop it.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-22 11:54:02 +10:00
David Gibson
7b83b7ec1f agent/uevent: Better initialize Uevent in test
We had some code that initialized a Uevent to the default value, then set
specific fields to various values.  This can be accomplished in the
initializer itself using the ..Default::default() syntax.  Making this
change stops clippy from complaining.

fixes #1611

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-22 11:53:57 +10:00
David Gibson
b0190a407f agent: Use vec![] macro rather than init-then-push
We have one place where we create an empty vector then immediately push
something into it.  We can do this in one step using the vec![] macro,
which stops clippy complaining.

fixes #1611

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-22 11:53:56 +10:00
David Gibson
1c43245e3e agent/device: Remove unneeded Result<> wrappers from uev matchers
The various types implementing the UeventMatcher trait have new() methods
which return a Result<>; however, none of them can actually fail.  This is
a leftover from their development, where some versions could fail to
initialize.  Remove the unnecessary wrappers to silence clippy.

fixes #1611

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-22 11:53:34 +10:00
David Gibson
e41cdb8b9f agent: Use str::is_empty() method in config::get_string_value()
An explicit check against "" is a bit less clear and makes clippy complain.

fixes #1611

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-22 11:53:29 +10:00
David Gibson
2377c0975c agent: Use CamelCase for NamespaceType values
Currently these are in all-caps, to match typical capitalization of IPC,
UTS and PID in the world at large.  However, this violates Rust's
capitalization conventions and makes clippy complain.

fixes #1611

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-22 11:53:24 +10:00
David Gibson
75eca6d56f agent/rustjail: Clean up error path in execute_hook()s async task
Clippy (in Rust 1.51 at least) has some complaints about this closure
inside execute_hook() because it uses explicit returns in some places
where it doesn't need them, because they're the last expression in the
function.

That isn't necessarily obvious from a glance, but we can make clippy happy
and also make things a little clearer: first we replace a somewhat verbose
'match' using Option::ok_or_else(), then rearrange the remaining code to
put all the error path first with an explicit return then the "happy" path
as the stright line exit with an implicit return.

fixes #1611

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-22 11:53:23 +10:00
David Gibson
6ce1e56d20 agent/rustjail: Remove an unnecessary PathBuf
PathBuf is an owned, mutable Path.  We don't need those properties in
get_value_from_cgroup() so we can use a Path instead.  This may be slightly
safer, and definitely stops clippy (version 1.51 at least) from
complaining.

fixes #1611

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-22 11:53:04 +10:00
David Gibson
3c4485ece3 agent/rustjail: Clean up some static definitions with vec! macro
DEFAULT_ALLOWED_DEVICES and DEFAULT_DEVICES are essentially global
constant lists.  They're implemented as lazy_static!-initialized Vec
values.

The code to initialize them creates an empty Vec then pushes values
onto it.  We can simplify this a bit by using the vec! macro.  This
might be slightly more efficient, and it definitely stops recent
clippy versions (e.g. 1.51) from complaining about it.

fixes #1611

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-22 11:52:59 +10:00
David Gibson
eaec5a6c06 agent/oci: Change name case to make clippy happy
Recent versions of clippy (e.g. in Rust 1.51) complain about a number
of names in the oci crate, which don't obey Rust's normal CamelCasing
conventions.

It's pretty clear that these don't obey the usual rules because they
are attempting to preserve conventional casing of existing acronyms
they incorporate ("VM", "POSIX", etc.).  However, it's been my
experience that matching the case and name conventions of your
environs is more important than matching case with external norms.

Therefore, this patch changes all the identifiers in the oci crate to
match Rust conventions.  Their users in the rustjail crate are updated
to match.

fixes #1611

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-22 11:52:54 +10:00
David Gibson
3f5fdae0d8 agent/rustjail: (trivial) Clean up comment on process_grpc_to_oci()
This comment appears to be connected specifically with this function, but
has some other items separating it for no particular reason.  It also has
a typo.  Correct both.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-22 11:52:45 +10:00
David Gibson
210f39a46f agent/rustjail: Simplify renaming imports
Functions in rustjail deal with both the local oci module's data structures
and the protocols::oci module's data structures.  Since these both cover the
OCI container config, they are quite similar and have many identically named
types.

To avoid conflicts, we import many things from those modules with altered
names.  However, the names we use (oci* and grpc*) don't fit the normal Rust
capitalization convention for types.

By renaming the import of the 'protocols::oci' module itself to
'grpc', we can actually get rid of the many renames by just qualifying at
each use site, with only a very small increase in verbosity.  As a bonus,
this gets rid of multiple 'use' items scattered through the file.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-22 11:52:42 +10:00
Julien Ropé
d4a5413774 runtime: Fix stdout/stderr output from container being truncated
Do not close the tty as part of the stdout redirection routine.
The close already happens a couple of lines below, after all routines
have finished.

Fixes #1713

Signed-off-by: Julien Ropé <jrope@redhat.com>
2021-04-21 17:09:09 +02:00
Tim Zhang
8ecf8e5c1f agent: use channel instead of pipe to send exit signal of process
This is not an IPC scenario, so pipe(2) is too heavyweight.

Since tokio was introduced we have tokio::sync::watch::channel.
The channel performs better and is easier to use.

Fixes: #1721

Signed-off-by: Tim Zhang <tim@hyper.sh>
2021-04-21 16:47:41 +08:00
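
The Rust change swaps the pipe for a tokio watch channel; the same idea
expressed in Go terms (illustrative only, with a stand-in for the reaper):

    package main

    import "fmt"

    // runProcess stands in for reaping the child and collecting its status.
    func runProcess() int { return 0 }

    func main() {
            // The exit status travels over an in-process channel instead of
            // a pipe(2) file-descriptor pair.
            exitCh := make(chan int, 1)
            go func() { exitCh <- runProcess() }()
            fmt.Println("exit status:", <-exitCh)
    }
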
Chelsea Mafrica
1c222c75ac Merge pull request #1697 from jodh-intel/improve-agent-shutdown-handling
Improve agent shutdown handling
2021-04-20 21:25:36 -07:00
Manabu Sugimoto
81c5ff1231 agent: Update seccomp configuration for errnoRet and flags
Update:
- Make the type of errnoRet in oci.proto a oneof
- Update seccomp_grpc_to_oci so that it can set errnoRet to EPERM if the
value is empty.
- Update oci.pb.go based on the above fixes
- Add the seccomp errnoRet and flags options to configs in rustjail

Fixes: #1719

Signed-off-by: Manabu Sugimoto <Manabu.Sugimoto@sony.com>
2021-04-21 12:16:58 +09:00
Eric Ernst
8a33bd4c19 qemu: Fix assertion failure on shutdown
Occasional coredumps and OOMs are observed on the shutdown path due to
improper BH handling during aio cleanup in QEMU. Thankfully this has
already been fixed upstream -- let's carry this patch.

Fixes: #1717

Signed-off-by: Eric Ernst <eric_ernst@apple.com>
2021-04-20 15:28:30 -07:00
Fabiano Fidêncio
4c177b5c40 Merge pull request #1599 from Jakob-Naucke/virtiofs-s390x
Enable virtio-fs on s390x
2021-04-20 21:07:15 +02:00
Carlos Venegas
cd27308755 Merge pull request #1432 from dgibson/bug1431
block: Generate PCI path for virtio-blk devices on clh
2021-04-20 12:00:09 -05:00
Fabiano Fidêncio
9df86d28a5 Merge pull request #1678 from cmaf/remove-spans-healthcheck
runtime: Disable trace for healthcheck
2021-04-20 18:38:47 +02:00
Jakob Naucke
7f60911333 virtcontainers: Allow s390x appendVhostUserDevice
Remove the prohibition of vhost-user devices on s390x, which are by now
supported (e.g. vhost-user-fs-ccw). As a consequence,
appendVhostUserDevice no longer needs an error in its signature.
This enables virtio-fs support on s390x.

Fixes: #1469

Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-04-20 12:20:32 +02:00
Jakob Naucke
67ac4f4585 runtime: update GoVMM for memory backend support
Update GoVMM to get memory backend support for non-DIMM setups. This is
necessary for virtio-fs on s390x.

Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-04-20 12:19:52 +02:00
Tim Zhang
34becbf289 Merge pull request #1705 from dgibson/bug1702
agent/rustjail: Fix accidental damage from tokio conversion
2021-04-19 17:45:41 +08:00
David Gibson
6577b01a5c agent/rustjail: Fix accidental damage from tokio conversion
register_memory_event_v2() includes a closure spawned as an async task
with tokio.  At the end of that closure, there's a test for a closed fd,
exiting if so.  But this is right at the end of the closure, when it was
about to exit anyway, so this does nothing.

This code was originally an explicit thread, converted to a tokio task
by 332fa4c "agent: switch to async runtime".  It looks like there was an
error during conversion, where this logic was accidentally moved out of the
while loop above, where it makes a lot more sense.

Put it back into the loop.

fixes #1702

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-19 16:54:43 +10:00
James O. D. Hunt
de2631e711 utils: Make WaitLocalProcess safer
Rather than relying on the system clock, use a channel timeout to avoid
problems if the system time changes.

Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
2021-04-15 15:46:42 +01:00
James O. D. Hunt
9256e590dc shutdown: Don't sever console watcher too early
Fixed logic used to handle static agent tracing.

For a standard (untraced) hypervisor shutdown, the runtime kills the VM
process once the workload has finished. But if static agent tracing is
enabled, the agent running inside the VM is responsible for the
shutdown. The existing code handled this scenario but did not wait for
the hypervisor process to end. The outcome of this being that the
console watcher thread was killed too early.

Although not a problem for an untraced system, if static agent tracing
was enabled, the logs from the hypervisor would be truncated, missing the
crucial final stages of the agent's shutdown sequence.

The fix necessitated adding a new parameter to the `stopSandbox()` API,
which if true requests the runtime hypervisor logic simply to wait for
the hypervisor process to exit rather than killing it.

Fixes: #1696.

Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
2021-04-15 15:22:00 +01:00
James O. D. Hunt
51ab870091 utils: Improve WaitLocalProcess
Previously, the hypervisors were sending a signal and then checking to
see if the process had died by sending the magic null signal (`0`). However,
that doesn't work as written: the logic assumed that sending the
null signal to a dead process would return `ESRCH`, but it
doesn't: you first need to `wait(2)` for the process before sending
that signal. This meant that all affected hypervisors would
appear to take `timeout` seconds to end, even though they had _already_
finished.

Now the hypervisor's true end time will be seen, as we wait for the
process before sending the null signal to ensure the process has
finished.

Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
2021-04-15 14:51:06 +01:00
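
A rough Go sketch of the resulting wait-then-signal pattern (hypothetical and
simplified; not the actual virtcontainers code):

    package sketch

    import (
            "os"
            "syscall"
            "time"
    )

    func waitLocalProcess(proc *os.Process, timeout time.Duration, sig syscall.Signal) error {
            _ = proc.Signal(sig) // ask the process to stop

            done := make(chan error, 1)
            go func() {
                    _, err := proc.Wait() // reap it; only then is kill(pid, 0) meaningful
                    done <- err
            }()

            select {
            case err := <-done:
                    return err // ended on its own: no timeout is paid
            case <-time.After(timeout): // channel timeout, not the system clock
                    _ = proc.Kill()
                    return <-done
            }
    }
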
James O. D. Hunt
507ef6369e utils: Add waitLocalProcess function
Refactored some of the hypervisors to remove the duplicated code used to
trigger a shutdown.

Also added some unit tests.

Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
2021-04-15 14:51:03 +01:00
Chelsea Mafrica
0f2fe4a418 Merge pull request #1565 from Jakob-Naucke/s390x-fix-cli-test
cli: Use genericGetExpectedHostDetails on s390x
2021-04-14 10:25:23 -07:00
Chelsea Mafrica
038cecaa3f Merge pull request #1684 from dgibson/moo
runtime/tests: Change "moo FAILURE" message
2021-04-13 09:08:46 -07:00
David Gibson
1d5098de70 agent/block: Generate PCI path for virtio-blk devices on clh
Currently the runtime and agent special-case virtio-blk devices under clh,
ostensibly because the PCI address information is not available in that
case.

In fact, cloud-hypervisor's VmAddDiskPut API does return a PciDeviceInfo,
which includes a PCI address.  That API is broken, because PCI addressing
depends on guest (firmware or OS) actions that the hypervisor won't know
about.  clh only gets away with this because it only uses a single PCI root
and never uses PCI bridges, in which case the guest addresses are
accurately predictable: they always have domain and bus zero.

Until https://github.com/kata-containers/kata-containers/pull/1190, Kata
couldn't handle PCI addressing unless there was exactly one bridge, which
might be why this was actually special-cased for clh.

With #1190 merged, we can handle more general PCI paths, and we can derive
a trivial (one element) PCI path from the information that the clh API
gives us.  We can use that to remove this special case.

fixes #1431

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-13 13:29:24 +10:00
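
A hypothetical Go sketch of deriving the trivial one-element PCI path from
the BDF in clh's PciDeviceInfo (names and format handling are assumptions):

    package sketch

    import (
            "fmt"
            "strings"
    )

    // With a single PCI root and no bridges, the slot from the returned
    // BDF ("domain:bus:slot.function") is itself a valid one-element PCI path.
    func pciPathFromBdf(bdf string) (string, error) {
            parts := strings.Split(bdf, ":") // e.g. "0000:00:06.0"
            if len(parts) != 3 {
                    return "", fmt.Errorf("malformed BDF %q", bdf)
            }
            slot := strings.SplitN(parts[2], ".", 2)[0]
            return slot, nil // e.g. "06"
    }
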
David Gibson
e7c97f0f5d runtime/tests: Change "moo FAILURE" message
Change the "moo FAILURE" message shown in a couple of the unit tests to
"moo message".  This means that searching for unrelated failures in the
test output by looking for "FAIL" won't show these messages as false
positives any more.

fixes #1683

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-13 13:25:03 +10:00
GabyCT
2e60a9d3d9 Merge pull request #1681 from fidencio/wip/improve-release-process-documentation
Update the information about the release process
2021-04-12 15:05:36 -05:00
Fabiano Fidêncio
8bc53498b4 docs: Simplify the repo bumping section
Instead of giving too many options, let's just mention the script and
rely on it entirely for the release.

This helps to simplify the document and have one well-established
process.

Fixes: #1680

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-04-12 11:47:18 +02:00
Fabiano Fidêncio
8a47b05a7c docs: Mention that an app token should be used with hub
During the 2.1.0-alpha2 / 2.0.3 release, I had a hard time trying to
perform anything related to hub, as an app token must be used instead
of the user password.  Thankfully Carlos pointed me in that direction,
but it'd be good to have it explicitly documented.

Fixes: #1680

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-04-12 11:47:15 +02:00
Fabiano Fidêncio
d434c2e9c6 docs: OBS account is not required anymore
Since we stopped building kata-containers packages as part of our
release process, there's no need to have an OBS account to be able to do
the release.

Fixes: #1680

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-04-12 11:47:09 +02:00
Fupan Li
17d33868c2 Merge pull request #1670 from liubin/1668-remove-ProcessListContainer-API
remove ProcessListContainer API
2021-04-12 10:22:37 +08:00
Chelsea Mafrica
543f9da3ba runtime: Disable trace for healthcheck
With tracing enabled, the grpc health check generates a large number of
spans, which creates too much data for tasks running longer than a few
minutes. To solve this, remove span creation from kata agent check() and
sendReq() where the majority of the spans come from. Leave contexts in
functions for subsequent calls that create spans.

Fixes #1395

Signed-off-by: Chelsea Mafrica <chelsea.e.mafrica@intel.com>
2021-04-09 15:47:00 -07:00
Fabiano Fidêncio
4e62d596db Merge pull request #1675 from fidencio/2.1.0-alpha2-branch-bump
# Kata Containers 2.1.0-alpha2
2021-04-09 20:14:19 +02:00
Fabiano Fidêncio
4f164b5246 release: Kata Containers 2.1.0-alpha2
- release: Do not git add kata-{deploy,cleanup}.yaml for the tests repo
- kata-deploy: add runtimeclass that includes pod overhead
- release: automatically bump the version of the kata-deploy images
- Refine uevent matching conditions
- docs: update dev-guide to include fixes from 1.x
- virtcontainers: replace newStore by store in Sandbox struct
- agent: log the mount point if it is already mounted
- tools/agent-ctl: Update Cargo.lock
- agent: Rework the debug console
- oci: Update seccomp configuration
- kernel: update experimental kernel to 5.10.x
- kata-deploy: Fix `test-kata.sh` and do some small cleanups / improvements in the kata-deploy script
- github: Fix slash-command-action usage
- rustjail: fix the issue of missing default home env
- Make uevent watching mechanism more flexible
- ci/openshift-ci: Prepare to build on CentOS 8
- docs: update configuration for passing annotations in containerd
- Revert "github: Remove kata-deploy-test action"
- runtime: increase dial timeout
- qemu experimental: Move to latest tree on virtio-fs-dev (qemu 6.0 + DAX patches).
- github: Remove kata-deploy-test action
- agent: s390x statfs constants
- kernel: upgrade kernel to 5.10.x for arm64.
- Don't do anything in Pipestream::shutdown
- Fix fsgroup
- agent: Remove many "panic message is not string literal" warnings
- osbuilder: Update QAT Dockerfile with new QAT driver version
- osbuilder: update dockerfiles to utilize IMAGE_REGISTRY
- Only keep one VERSION file
- Dechat deruntime
- runtime: Format auto-generated client code for cloud-hypervisor API
- runtime: use concrete KataAgentConfig instead of interface type
- versions: Update cloud-hypervisor to release v0.14.1
- runtime: import runtime/v2/runc/options to decode request from Docker
- virtcontainers/fc: Upgrade Firecracker to v0.23.1
- docs: Remove ubuntu installation guide
- docs: Update snap install guide
- docs: update how-to-use-k8s-with-cri-containerd-and-kata.md
- Update install docs for Fedora and CentOS
- action: fix missing qemu tag
- Remove installation guides for SLE and openSUSE
- kernel: Enable OVERLAY_FS_{METACOPY,XINO_AUTO}
- versions: kernel 5.10.x
- virtcontainers: Fix missing contexts in s390x
- runtime: makefile allow override DAX value

11897248 release: Do not git add kata-{deploy,cleanup}.yaml for the tests repo
2b5f79d6 release: automatically bump the version of the kata-deploy images
8682d6b7 docs: update dev-guide to include fixes from 1.x
f444adb5 kata-cleanup: Explicitly add tag to the container image
12582c2f kata-deploy: add runtimeclass that includes pod overhead
d75fe956 virtcontainers: replace newStore by store in Sandbox struct
342eb765 tools/agent-ctl: Update Cargo.lock
24b0703f agent: fix test for the debug console
79033257 agent: async the debug console
8ea2ce9a agent/device: Remove legacy uevent matching
5d007743 agent/device: Refine uevent matching for pmem devices
9017e110 agent: start to rework the debug console
a59e07c1 agent/define: Refine uevent matching for virtio-scsi devices
484a3647 agent/device: Rework uevent handling for virtio-blk devices
7873b7a1 github: Fix slash-command-action usage
eda8da1e github: Revert "github: Remove kata-deploy-test action"
a938d903 rustjail: fix the issue of missing default home env
b0e4618e docs: update configuration for passing annotations in containerd
d43098ec kata-deploy: Adapt regex for testing kata-deploy
107ceca6 kernel: update experimental kernel to 5.10.x
ca4dccf9 release: Get rid of "master"
c2197cbf release: Use sudo to install hub
49eec920 agent: log the tag and mount point if it is already mounted
16f732fc ci/lib: Use git to clone the tests repository
9281e567 ci/openshift-ci: Add build root dockerfile
1cce9300 github: Remove kata-deploy-test action
0828f9ba agent/uevent: Introduce wait_for_uevent() helper
16ed55e4 agent/device: Use consistent matching for past and future uevents
4b16681d agent/uevent: Put matcher object rather than "device address" in watch list
b8b32248 agent/uevent: Consolidate event matching logic
d2caff6c agent: Re-organize uevent processing
55ed2ddd agent: Store uevent watchers in Vec rather than HashMap
91e0ef5c agent/uevent: Report whole Uevents to device watchers
36420054 agent: Store whole Uevent in map, rather than just /dev name
06162025 agent/device: Move GLOBAL_DEVICE_WATCHER into Sandbox
11ae32e3 agent/device: Fix path matching for PCI devices
4f608804 agent/device: Update test_get_device_name()
ee6a590d agent: add test test_pipestream_shutdown
4a2d4370 agent: don't do anything in Pipestream::shutdown
e3e670c5 agent/device: Forward port test for get_device_name() from Kata 1.x
ed08980f agent: Remove many "panic message is not string literal" warnings
f365bdb7 versions: qemu-experimental: 6.0~rc 470dd6
6491b9d7 qemu: Add support to build static qemu for dev tree
13653e7b runtime: increase dial timeout
935460e5 osbuilder: update dockerfiles to utilize IMAGE_REGISTRY
010d57f4 osbuilder: Update QAT Dockerfile with new QAT driver version
adb866ad kata-deploy: Adapt to the correct tag name
60adc7f0 VERSION: Use the correct form
a4c125a8 trace: move gRPC requests from debug to trace
50fff977 trace: move trace span chatter to trace rather than info
28bd8c11 kernel: upgrade kernel to 5.10.x for arm64.
6fe48329 runtime: use concrete KataAgentConfig instead of interface type
64939425 mount: fix the issue of missing set fsGroup
88e58a4f agent: fix the issue of missing pass fsGroup
572aff53 build: Only keep one VERSION file
0c38d9ec runtime: Fix the format of the client code of cloud-hypervisor APIs
52cacf88 runtime: Format auto-generated client code for cloud-hypervisor API
84b62dc3 versions: Update cloud-hypervisor to release v0.14.1
4a38ff41 docs: Update snap install guide
ede1ab86 docs: Remove ubuntu installation guide
6255cc19 virtcontainers/fc: Upgrade Firecracker to v0.23.1
2c47277c docs: update how-to-use-k8s-with-cri-containerd-and-kata.md
317f55f8 docs: Update minimum version for Fedora
1ce29fc9 docs: Update CentOS install docs
3f90561b docs: Update Fedora install docs
8a1c6c3f action: fix missing qemu tag
a9ff9c87 docs: Remove openSUSE installation guide
2888ceb0 docs: Remove SLE installation guide
09d454ac runtime: import runtime/v2/runc/options to decode request from Docker
0b502d15 runtime: makefile allow override DAX value
a65519b9 versions: keep using kernel 5.4.x for ARM
31ced01e virtcontainers: Fix missing contexts in s390x
52a276fb agent: Fix type for PROC_SUPER_MAGIC on s390x
5b7c8b7d agent: Update cgroups-rs to 0.2.5
c035cdb3 versions: kernel 5.10.x
660b0473 oci: Update seccomp configuration
8c1e0d30 kernel: Enable OVERLAY_FS_{METACOPY,XINO_AUTO}

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-04-09 17:56:29 +02:00
Fabiano Fidêncio
73aa74b4bb Merge pull request #1673 from fidencio/wip/release-dont-git-add-kata-deploy-cleanup-on-tests-repo
release: Do not git add kata-{deploy,cleanup}.yaml for the tests repo
2021-04-09 16:21:14 +02:00
Fabiano Fidêncio
1189724822 release: Do not git add kata-{deploy,cleanup}.yaml for the tests repo
I was, mistakenly, `git add`ing those files unconditionally.

Fixes: #1672

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-04-09 14:38:13 +02:00
Fabiano Fidêncio
43a9d4e90a Merge pull request #1666 from egernst/rc-overhead
kata-deploy: add runtimeclass that includes pod overhead
2021-04-09 12:44:41 +02:00
bin
421439c633 API: remove ProcessListContainer/ListProcesses
This commit removes the ProcessListContainer API from VCSandbox
and ListProcesses from agent.proto.

Fixes: #1668

Signed-off-by: bin <bin@hyper.sh>
2021-04-09 17:34:25 +08:00
Peng Tao
efd5d6f1fe Merge pull request #1667 from fidencio/wip/automatically-bump-kata-deploy-image-version
release: automatically bump the version of the kata-deploy images
2021-04-09 16:06:07 +08:00
David Gibson
0e04d6299b Merge pull request #1642 from dgibson/ueventplus
Refine uevent matching conditions
2021-04-09 13:10:52 +10:00
Fabiano Fidêncio
2b5f79d685 release: automatically bump the version of the kata-deploy images
Let's teach `update-repository-version.sh` to automatically bump the
version of the kata-deploy images to be used within that release, when
running against the `kata-containers` repo.

Fixes: #1665

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-04-09 00:31:27 +02:00
GabyCT
368ab486f6 Merge pull request #1662 from egernst/docs-cleanup
docs: update dev-guide to include fixes from 1.x
2021-04-08 15:43:43 -05:00
Eric Ernst
2334b858a0 Merge pull request #1661 from liubin/1660-replace-newStore-by-store
virtcontainers: replace newStore by store in Sandbox struct
2021-04-08 13:17:44 -07:00
Eric Ernst
8682d6b7ea docs: update dev-guide to include fixes from 1.x
This addresses a few gaps with respect to fixes in 1.x docs:
  - Cleanup QEMU information in order to drop references to qemu-lite
  - Make sure we include directions for debug console in case of QEMU

Fixes: #574

Signed-off-by: Eric Ernst <eric.g.ernst@gmail.com>
2021-04-08 12:55:59 -07:00
Fabiano Fidêncio
f444adb51b kata-cleanup: Explicitly add tag to the container image
We have the tags explicitly set on kata-deploy, let's do the same for
kata-cleanup.

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-04-08 21:43:59 +02:00
Eric Ernst
12582c2f6d kata-deploy: add runtimeclass that includes pod overhead
The overhead values may not be perfect, but this is a start, and a good
reference.

Fixes: #580

Signed-off-by: Eric Ernst <eric.g.ernst@gmail.com>
2021-04-08 12:42:15 -07:00
bin
d75fe95685 virtcontainers: replace newStore by store in Sandbox struct
The property name confuses newcomers when reading the code.
Since in Kata Containers 2.0 there will only be one type of store,
it's safe to simply replace it by `store`.

Fixes: #1660

Signed-off-by: bin <bin@hyper.sh>
2021-04-08 23:59:16 +08:00
Eric Ernst
324b026a77 Merge pull request #1604 from wainersm/agent_mount-1
agent: log the mount point if it is already mounted
2021-04-08 08:26:12 -07:00
Eric Ernst
6d3053be4c Merge pull request #1656 from fidencio/wip/update-agent-ctl-cargo-lock
tools/agent-ctl: Update Cargo.lock
2021-04-08 08:25:17 -07:00
Fupan Li
521887db16 Merge pull request #1648 from Tim-Zhang/rework-debug-console
agent: Rework the debug console
2021-04-08 23:14:52 +08:00
Fabiano Fidêncio
342eb765c2 tools/agent-ctl: Update Cargo.lock
While performing a `make` in the top directory of the project, I've
noticed that agent-ctl's Cargo.lock file was not up-to-date.

Fixes: #1655

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-04-08 10:24:08 +02:00
Tim Zhang
24b0703fda agent: fix test for the debug console
Fix test for the debug console.

Signed-off-by: Tim Zhang <tim@hyper.sh>
2021-04-08 14:57:40 +08:00
Tim Zhang
790332575b agent: async the debug console
Make the debug console async in this commit.
This finishes the rework of the debug console.

Fixes: #1647

Signed-off-by: Tim Zhang <tim@hyper.sh>
2021-04-08 14:57:36 +08:00
David Gibson
8ea2ce9a31 agent/device: Remove legacy uevent matching
DevAddrMatcher existed purely as a transitional step as we refined the
uevent matching logic for each of the different device types we care about.
We've now done that, so it can be removed along with several related
pieces.

fixes #1628

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-08 12:30:18 +10:00
David Gibson
5d007743c1 agent/device: Refine uevent matching for pmem devices
Use the new uevent matching infrastructure to refine the matching for pmem
devices to something more pinned down to that device type.  While we're
there, fix a few anciliary problems with get_pmem_device_name():

- The name is poor - the *input* to this function is the expected device
  name, so the result isn't helpful, except that it needs to wait for the
  device to be ready in the guest.  Change it to wait_for_pmem_device() and
  explicitly check that the returned device name matches the one expected.
- Remove an incorrect comment in nvdimm_storage_handler() (the only caller)
  which appears to have been copied from the virtio-blk path, but then
  become stale.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-08 12:02:39 +10:00
James O. D. Hunt
9017e1100b agent: start to rework the debug console
It's the first commit of the rework.

Fixes: #1647

Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
2021-04-08 09:57:48 +08:00
David Gibson
a59e07c1f9 agent/define: Refine uevent matching for virtio-scsi devices
Current get_scsi_device_name() uses the legacy uevent matching which
isn't very precise.  This refines it to use a specific matcher
implementation.  While we're at it:

- No longer insist on the SCSI controller being under the PCI root.
  It generally will be, but there's no particular reason to require
  it.

The matcher still has a problem in that it won't work sensibly if
there are multiple SCSI busses in the guest.  Fixing that requires
changes on the runtime side as well, though, so it's beyond scope for
this change.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-08 11:13:00 +10:00
David Gibson
484a364729 agent/device: Rework uevent handling for virtio-blk devices
There are some problems with get_pci_device_name():

1) It's misnamed: in fact it is only used for handling virtio-blk PCI
   devices.  It's also only correct for virtio-blk devices: the event
   matching doesn't locate the "raw" PCI device, but rather the block
   device created by virtio-blk as a child of the PCI device itself.

2) The uevent matching is imprecise.  As all things using the legacy
   DevAddrMatcher, it matches on a bunch of conditions used across several
   different device types, not all of which make sense for virtio-blk pci
   devices specifically.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-08 11:13:00 +10:00
Eric Ernst
15c2d7ed30 Merge pull request #1400 from ManaSugi/update-oci-seccomp
oci: Update seccomp configuration
2021-04-07 15:18:19 -07:00
Carlos Venegas
ec36883fe3 Merge pull request #1638 from jcvenegas/2021-04-06/linux-5.10.x-qemu-experimental
kernel: update experimental kernel to 5.10.x
2021-04-07 16:02:44 -05:00
Wainer Moschetta
69dbcaa32b Merge pull request #1633 from fidencio/wip/fix-test-kata-deploy-script
kata-deploy: Fix `test-kata.sh` and do some small cleanups / improvements in the kata-deploy script
2021-04-07 16:13:17 -03:00
GabyCT
a374b007bd Merge pull request #1646 from fidencio/wip/fix-slash-command-action-usage
github: Fix slash-command-action usage
2021-04-07 10:17:04 -05:00
GabyCT
d922070c50 Merge pull request #1644 from lifupan/fix_env
rustjail: fix the issue of missing default home env
2021-04-07 10:16:07 -05:00
GabyCT
81bcded9a3 Merge pull request #1492 from dgibson/uevent
Make uevent watching mechanism more flexible
2021-04-07 10:15:33 -05:00
Fabiano Fidêncio
d2fda148fa Merge pull request #1637 from wainersm/openshift_ci_centos8
ci/openshift-ci: Prepare to build on CentOS 8
2021-04-07 15:37:41 +02:00
Fabiano Fidêncio
514ba369fd Merge pull request #1630 from liubin/1629-update-containerd-config
docs: update configuration for passing annotations in containerd
2021-04-07 15:36:21 +02:00
Fabiano Fidêncio
7873b7a1f9 github: Fix slash-command-action usage
The `/test-kata-deploy` command does **not** work, and the output returned
is:
```
Error: Comment didn't contain a valid slash command
```

So, why does this happen?

This is the regex used: `^\/([\w]+)\b *(.*)?$`. The important part is
`\/([\w]+)\b`, which captures the command name, with the rest being its
arguments. `\w` is the key here: it matches a-z, A-Z, 0-9, and the
underscore.

Our command is `/test-kata-deploy`, and `-` is not part of `\w`. Knowing
this, we need to update the command to something like:
`/test_kata_deploy`

Fixes: #1645

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-04-07 14:46:21 +02:00
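To make the `\w` behaviour concrete, here is a small model of the pattern
in Rust (a sketch assuming the `regex` crate; the action itself is not
written in Rust, so this only illustrates the pattern, not the action's
code):

```
use regex::Regex; // assumes regex = "1" in Cargo.toml

fn main() {
    // The action's pattern, with the command name in the first group.
    let re = Regex::new(r"^/(\w+)\b *(.*)?$").unwrap();

    // `-` is not in `\w`, so only "test" is captured as the command name;
    // "-kata-deploy" spills over into the arguments group.
    let caps = re.captures("/test-kata-deploy").unwrap();
    assert_eq!(&caps[1], "test");

    // `_` is in `\w`, so the renamed command is captured whole.
    let caps = re.captures("/test_kata_deploy").unwrap();
    assert_eq!(&caps[1], "test_kata_deploy");
}
```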
Fabiano Fidêncio
b09fda36bd Merge pull request #1641 from fidencio/wip/re-add-kata-deploy-test-action
Revert "github: Remove kata-deploy-test action"
2021-04-07 13:18:25 +02:00
Fabiano Fidêncio
eda8da1ec5 github: Revert "github: Remove kata-deploy-test action"
This partially reverts commit 1cce930071.

As mentioned in #1635, the malformed yaml wouldn't allow us to actually
test changes that were supposed to be tested by this action.

So, this is now reverted and adapted accordingly.

Main differences from what we had before:
* As it tests kata-deploy itself, not the statically built binaries,
  let's just use the binaries from 2.0.0 release;
* Adapt download and deploy location to the
  `kata-containers/kata-containers` repo, as the original action was
  based on 1.x repos;

Fixes: #1640

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-04-07 12:07:30 +02:00
fupan.lfp
a938d90310 rustjail: fix the issue of missing default home env
First get the "HOME" env from "/etc/passwd"; if there's no corresponding
uid entry in /etc/passwd, then set "/" as the home env.

Fixes: #1643

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2021-04-07 15:11:28 +08:00
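The fallback described above can be sketched as follows; `home_dir_for_uid`
is a hypothetical stand-in for the /etc/passwd lookup the commit describes,
not the actual rustjail code:

```
use std::fs;

// Hypothetical: scan /etc/passwd for the uid's entry and return the home
// directory field (name:passwd:uid:gid:gecos:home:shell).
fn home_dir_for_uid(uid: u32) -> Option<String> {
    let passwd = fs::read_to_string("/etc/passwd").ok()?;
    for line in passwd.lines() {
        let fields: Vec<&str> = line.split(':').collect();
        if fields.len() >= 6 && fields[2].parse() == Ok(uid) {
            return Some(fields[5].to_string());
        }
    }
    None
}

fn main() {
    // Fall back to "/" when /etc/passwd has no entry for the uid.
    let home = home_dir_for_uid(0).unwrap_or_else(|| "/".to_string());
    println!("HOME={}", home);
}
```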
bin
b0e4618e84 docs: update configuration for passing annotations in containerd
Use "io.containerd.kata.v2" instead of the deprecated "io.containerd.runc.v1".

Fixes: #1629

Signed-off-by: bin <bin@hyper.sh>
2021-04-07 10:16:02 +08:00
Fabiano Fidêncio
d43098ec21 kata-deploy: Adapt regex for testing kata-deploy
In commit 60f6315 we started adding the specific version of the image
to be used, in order to ensure people using our content from a tarball
would rely on the correct image.

However, later on, @bergwolf figured out it had some undesired side
effects, such as
https://github.com/kata-containers/kata-containers/runs/2235812941?check_suite_focus=true

What happens there is that the regular expression used to point the
image to a testing one doesn't take into consideration the $VERSION, and
that breaks the deployment.

Depends-on: github.com/kata-containers/kata-containers#1641
Fixes: #1632

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-04-06 23:41:24 +02:00
GabyCT
0b87fd436f Merge pull request #1544 from snir911/timeout
runtime: increase dial timeout
2021-04-06 16:10:51 -05:00
Carlos Venegas
84e453a643 Merge pull request #1625 from jcvenegas/2021-04-05/qemu-experimental-6.0
qemu experimental: Move to latest tree on virtio-fs-dev (qemu 6.0 + DAX patches).
2021-04-06 15:29:05 -05:00
Carlos Venegas
107ceca680 kernel: update experimental kernel to 5.10.x
Relevant changes for experimental :

42d3e2d04 virtiofs: calculate number of scatter-gather elements accurately
413daa1a3 fuse: connection remove fix
bf109c640 fuse: implement crossmounts
1866d779d fuse: Allow fuse_fill_super_common() for submounts
fcee216be fuse: split fuse_mount off of fuse_conn
8f622e949 fuse: drop fuse_conn parameter where possible
24754db27 fuse: store fuse_conn in fuse_req
c6ff213fe fuse: add submount support to <uapi/linux/fuse.h>
d78092e49 fuse: fix page dereference after free
9a752d18c virtiofs: add logic to free up a memory range
d0cfb9dcb virtiofs: maintain a list of busy elements
6ae330cad virtiofs: serialize truncate/punch_hole and dax fault path
9483e7d58 virtiofs: define dax address space operations
2a9a609a0 virtiofs: add DAX mmap support
c2d0ad00d virtiofs: implement dax read/write operations
ceec02d43 virtiofs: introduce setupmapping/removemapping commands
fd1a1dc6f virtiofs: implement FUSE_INIT map_alignment field
45f2348ec virtiofs: keep a list of free dax memory ranges
1dd539577 virtiofs: add a mount option to enable dax
22f3787e9 virtiofs: set up virtio_fs dax_device
f4fd4ae35 virtiofs: get rid of no_mount_options
b43b7e81e virtiofs: provide a helper function for virtqueue initialization

Fixes: #1639

Signed-off-by: Carlos Venegas <jos.c.venegas.munoz@intel.com>
2021-04-06 19:06:09 +00:00
Fabiano Fidêncio
ca4dccf980 release: Get rid of "master"
We don't use the "master" branch for anything in
`kata-containers/kata-containers`.

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-04-06 20:28:42 +02:00
Fabiano Fidêncio
c2197cbf2b release: Use sudo to install hub
This doesn't make much difference for the automated process we have in
place, but makes a whole lot of difference for those trying to have the
binaries deployed locally.

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-04-06 20:28:42 +02:00
Fabiano Fidêncio
dc373d0161 Merge pull request #1635 from fidencio/wip/remove-kata-deploy-test-action
github: Remove kata-deploy-test action
2021-04-06 20:27:54 +02:00
Wainer dos Santos Moschetta
49eec92038 agent: log the tag and mount point if it is already mounted
Commit 17e9a2cff5 introduced a guard for the case where the mount point is
already mounted. Instead of logging only the mount tag ("kataShared"), with
this change it will print both the tag and the mount point path.

Fixes: #1398
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
2021-04-06 14:14:59 -04:00
Wainer dos Santos Moschetta
16f732fc18 ci/lib: Use git to clone the tests repository
On clone_tests_repo() use git instead of `go get` to clone and/or
update the tests repository.

Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
2021-04-06 13:44:02 -04:00
Wainer dos Santos Moschetta
9281e56705 ci/openshift-ci: Add build root dockerfile
This adds the dockerfile which is used by the OpenShift CI operator to build
the build root image. Git is installed as it is required by the operator
to clone repositories. The sudo package is also installed because many scripts
rely on the command but it is not installed by tests/.ci/setup_env_centos.sh.

Fixes #1636

Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
2021-04-06 13:44:02 -04:00
Fabiano Fidêncio
1cce930071 github: Remove kata-deploy-test action
Currently the action is not running because it's broken, and it was
broken by 50fea9f.

Sadly, I cannot just test a fix on a PR as every single time we end up
triggering what's currently on main, rather than triggering the content
of the PR itself.

With this in mind, let me just remove the file and re-add it as part of
a new PR and, hopefully, have it tested in this way.

Sorry for the breakage, by the way.

Fixes: #1634

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-04-06 19:01:53 +02:00
GabyCT
aac852a0bc Merge pull request #1561 from Jakob-Naucke/s390x-statfs-constants
agent: s390x statfs constants
2021-04-06 11:11:40 -05:00
GabyCT
201ad249c2 Merge pull request #1597 from jongwu/PA
kernel: upgrade kernel to 5.10.x for arm64.
2021-04-06 09:44:16 -05:00
David Gibson
0828f9ba70 agent/uevent: Introduce wait_for_uevent() helper
get_device_name() contains logic to wait for a specific uevent, then
extract the /dev node name from it.  In future we're going to want similar
logic to wait on uevents, but using different match criteria, or getting
different information out.

To simplify this, add a wait_for_uevent() helper in the uevent module,
which takes an explicit UeventMatcher object and returns the whole uevent
found.

To make testing easier, we also extract the cut down uevent watcher from
test_get_device_name() into a new spawn_test_watcher() helper.  It's used
for both test_get_device_name() and a new test_wait_for_uevent() and will
be useful for more tests in future.

fixes #1484

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-06 21:14:52 +10:00
David Gibson
16ed55e440 agent/device: Use consistent matching for past and future uevents
get_device_name() looks at kernel uevents to work out the device name for
a given PCI (usually) address.  However, when we call it we can't know if
the uevent we're interested in has already happened (in which case it will
have been recorded in Sandbox::uevent_map) or yet to come, in which case
we need to register to watch it.

However, we currently match differently against past and future events.
For past events we simply look for a sysfs path including the address, but
for future events we use a complex bit of logic in the is_match() closure.
Change it to use the exact same matching logic in both cases.

fixes #1397

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-06 21:14:33 +10:00
David Gibson
4b16681d87 agent/uevent: Put matcher object rather than "device address" in watch list
Currently, Sandbox::uevent_watchers lists uevents to watch for by a
"device address" string.  This is not very clearly defined, and is
matched against events with a rather complex closure created in
Uevent::process_add().

That closure makes a bunch of fragile assumptions about what sort of
events we could ever be interested in.  In some ways it is too
restrictive (requires everything to be a block device), but in others
is not restrictive enough (allows things matching NVDIMM paths, even
if we're looking for a PCI block device).

To allow the clients more precise control over uevent matching, we
define a new UeventMatcher trait with a method to match uevents.  We
then have the watchers list include UeventMatcher trait objects which
are used directly by Uevent::process_add(), instead of constructing
our match directly from dev_addr.

For now we don't actually change the matching function, or even use
multiple different trait implementations, but we'll refine that in
future.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-06 21:14:18 +10:00
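A minimal sketch of the shape this gives us; the types are simplified
stand-ins, and the agent's real Uevent and UeventMatcher differ in detail:

```
// Simplified stand-ins for the agent's real types.
struct Uevent {
    action: String,
    devpath: String,
    devname: String,
}

trait UeventMatcher {
    fn is_match(&self, uevent: &Uevent) -> bool;
}

// One small matcher per device type replaces the single complex closure.
struct VirtioBlkPciMatcher {
    rel_path: String, // sysfs path of the device, relative to the PCI root
}

impl UeventMatcher for VirtioBlkPciMatcher {
    fn is_match(&self, uevent: &Uevent) -> bool {
        uevent.action == "add"
            && !uevent.devname.is_empty()
            && uevent.devpath.contains(&self.rel_path)
    }
}

fn main() {
    let matcher = VirtioBlkPciMatcher { rel_path: "/0000:00:02.0".into() };
    let event = Uevent {
        action: "add".into(),
        devpath: "/devices/pci0000:00/0000:00:02.0/virtio1/block/vda".into(),
        devname: "vda".into(),
    };
    assert!(matcher.is_match(&event));
}
```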
David Gibson
b8b322482c agent/uevent: Consolidate event matching logic
The event matching logic in Uevent::process_add() is split into two parts.
The first checks if we care about the event at all, the second checks
whether the event is relevant to a particular watcher.

However, we're going to be adding more types of watchers in future, which
will make the global filter too restrictive.  Fold the two bits of logic
together into a per-watcher filter function.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-06 20:59:43 +10:00
David Gibson
d2caff6c55 agent: Re-organize uevent processing
Uevent::process() is a bit oddly organized.  It treats the onlining of
hotplugged memory as the "default" case, although that's quite specific,
while treating the handling of hotplugged block devices more like a special
case, although that's pretty close to being very general.

Furthermore splitting Uevent::is_block_add_event() from
Uevent::handle_block_add_event() doesn't make a lot of sense, since their
logic is intimately related to each other.

Alter the code to be a bit more sensible: first split on the "action" type
since that's the most fundamental difference, then handle the memory
onlining special case, then the block device add (which will become a lot
more general in future changes).

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-06 20:59:20 +10:00
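A toy model of the reordered flow (assumed structure only; the real handler
works on full uevent data rather than bare strings):

```
// Split on the action first, then the memory-onlining special case, then
// the (increasingly general) device-add handling.
fn process(action: &str, subsystem: &str) {
    match action {
        "add" if subsystem == "memory" => {
            println!("online the hotplugged memory block");
        }
        "add" => {
            println!("record the uevent and notify any matching watcher");
        }
        _ => {
            println!("ignore: only add events are interesting here");
        }
    }
}

fn main() {
    process("add", "memory");
    process("add", "block");
    process("remove", "block");
}
```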
David Gibson
55ed2ddd07 agent: Store uevent watchers in Vec rather than HashMap
Sandbox::dev_watcher is a HashMap from a "device address" to a channel used
to notify get_device_name() that a suitable uevent has been found.
However, "device address" isn't well defined, having somewhat different
meanings for different device/event types.  We never actually look up this
HashMap by key, except to remove entries.

Not looking up by key suggests that a map is not the appropriate data
structure here.  Furthermore, HashMap imposes limitations on the types
which will prevent some future extensions we want.

So, replace the HashMap with a Vec<Option<>>.  We need the Option<> so that
we can remove entries by index (removing them from the Vec completely would
change the indices of other entries, possibly breaking concurrent work).

This does mean that the vector will keep growing as we watch for different
events during startup.  However, we don't expect the number of device
events we watch for during a run to be very large, so that shouldn't be
a problem.  We can optimize this later if it becomes a problem.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-06 20:59:19 +10:00
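A minimal illustration of the Vec<Option<>> choice; the string entries
stand in for the real matcher/channel pairs:

```
fn main() {
    let mut watchers: Vec<Option<&str>> = vec![Some("watcher-a"), Some("watcher-b")];

    // Remove entry 0 by index without shifting entry 1.
    let removed = watchers[0].take();
    assert_eq!(removed, Some("watcher-a"));
    assert_eq!(watchers[1], Some("watcher-b"));

    // The vector keeps growing as new watchers register, which is fine for
    // the small number of device events seen during startup.
    watchers.push(Some("watcher-c"));
    assert_eq!(watchers.len(), 3);
}
```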
David Gibson
91e0ef5c90 agent/uevent: Report whole Uevents to device watchers
Currently, when Uevent::handle_block_add_event() receives an event matching
a registered watcher, it reports the /dev node name from the event back
to the watcher.

This changes it to report the entire uevent, not just the /dev node name.
This will allow various future extensions.  It also makes the client side
of the uevent watching - get_device_name() - more consistent between its
two paths: finding a past uevent in Sandbox::uevent_map() or waiting for
a new uevent via a watcher.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-06 20:58:47 +10:00
David Gibson
3642005479 agent: Store whole Uevent in map, rather than just /dev name
Sandbox::pci_device_map contains a mapping from sysfs paths to /dev entries
which is used by get_device_name() to look up the right /dev node.  But,
the map only supplies the answer if the uevent for the device has already
been received, otherwise get_device_name() has to wait for it.

However, the matching for already-received and yet-to-come uevents isn't
quite the same, which makes the whole system fragile.

In order to make sure the matching for both cases is identical, we need the
already-received side to store the whole uevent to match against, not just
the sysfs path and device name.

So, rename pci_device_map to uevent_map and store the whole uevent there
verbatim.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-06 20:58:47 +10:00
David Gibson
0616202580 agent/device: Move GLOBAL_DEVICE_WATCHER into Sandbox
In Kata 1.x, both the sysToDevMap and the deviceWatchers are in the sandbox
structure.  For some reason in Kata 2.x, the device watchers have moved to
a separate global variable, GLOBAL_DEVICE_WATCHER.

This is a bad idea: apart from introducing an extra global variable
unnecessarily, it means that Sandbox::pci_device_map and
GLOBAL_DEVICE_WATCHER are protected by separate mutexes.  Since the
information in these two structures has to be kept in sync with each other,
it makes much more sense to keep them both under the same single Sandbox
mutex.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-06 20:58:45 +10:00
David Gibson
11ae32e3c0 agent/device: Fix path matching for PCI devices
For the case of virtio-blk PCI devices, when matching uevents we create
a pci_p temporary.  However, we build it incorrectly: the dev_addr values
we use for PCI devices are relative sysfs paths from the PCI root to the
device in question *including an initial /*.  But when we construct pci_p
we add an extra /, meaning the resulting path will *not* match properly.

AFAICT the only reason we got away with this is because in practice the
virtio-blk devices were discovered by the kernel before we looked for them,
meaning the looser matching in get_device_name() was used, rather than the
pci_p logic in handle_block_add_event().

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-06 20:58:06 +10:00
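A toy reproduction of the mismatch, with made-up but representative sysfs
paths:

```
fn main() {
    let dev_addr = "/0000:00:02.0/virtio1"; // includes the initial '/'
    let devpath = "/devices/pci0000:00/0000:00:02.0/virtio1/block/vda";

    // Adding another '/' in front of dev_addr yields "...:00//0000..."
    let broken = format!("/devices/pci0000:00/{}", dev_addr);
    let fixed = format!("/devices/pci0000:00{}", dev_addr);

    assert!(!devpath.contains(&broken)); // the double slash never matches
    assert!(devpath.contains(&fixed));
}
```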
David Gibson
4f60880414 agent/device: Update test_get_device_name()
The current test_get_device_name(), ported from Kata 1.x, doesn't really
reflect how the function is used in practice.  The example path appears
to be for a virtio-blk device, but it's an s390 specific variant, not a
PCI device.  The s390 form isn't actually supported by any of the existing
users of get_device_name().

Change it to a plausible virtio-blk-pci style path to better test how
get_device_name() will actually be used in practice.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-06 20:49:48 +10:00
Bin Liu
117c59150d Merge pull request #1613 from Tim-Zhang/pipestream-shutdown-do-nothing
Don't do anything in Pipestream::shutdown
2021-04-06 14:03:00 +08:00
Tim Zhang
ee6a590db1 agent: add test test_pipestream_shutdown
Make sure PipeStream::shutdown() does not close the inner fd.

Signed-off-by: Tim Zhang <tim@hyper.sh>
2021-04-06 11:44:56 +08:00
Tim Zhang
4a2d437043 agent: don't do anything in Pipestream::shutdown
The only right way to shut down a pipe is to drop it.
Otherwise PipeStream will conflict with its twin,
because they both have the same fd, and both are registered.

Fixes: #1614

Signed-off-by: Tim Zhang <tim@hyper.sh>
2021-04-06 11:44:38 +08:00
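A hedged sketch of the resulting behaviour, not the agent's actual code
(assumes the `libc` crate): shutdown() becomes a no-op and the fd is closed
exactly once, on drop:

```
use std::os::unix::io::RawFd;

struct PipeStream(RawFd);

impl PipeStream {
    fn shutdown(&mut self) {
        // Intentionally empty: closing self.0 here would also invalidate
        // the twin stream registered with the same fd.
    }
}

impl Drop for PipeStream {
    fn drop(&mut self) {
        // The only legitimate close.
        unsafe { libc::close(self.0) };
    }
}

fn main() {
    // Create a real pipe so the example is self-contained.
    let mut fds = [0 as RawFd; 2];
    assert_eq!(unsafe { libc::pipe(fds.as_mut_ptr()) }, 0);

    let mut reader = PipeStream(fds[0]);
    reader.shutdown(); // does nothing; fds[0] stays valid
    drop(reader);      // now fds[0] is closed
    unsafe { libc::close(fds[1]) };
}
```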
Peng Tao
d5600641dd Merge pull request #1603 from lifupan/fix_fsgroup
Fix fsgroup
2021-04-06 11:35:03 +08:00
David Gibson
e3e670c56f agent/device: Forward port test for get_device_name() from Kata 1.x
Kata 1.x had a testcase for the equivalent getDeviceName function in Go;
this adapts it to Rust and adds it to Kata 2.x.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-06 13:29:37 +10:00
David Gibson
b980965c6b Merge pull request #1627 from dgibson/bug1626
agent: Remove many "panic message is not string literal" warnings
2021-04-06 13:29:11 +10:00
David Gibson
ed08980fc1 agent: Remove many "panic message is not string literal" warnings
Rust 1.51 appears to have added a new warning in anticipation of Rust 2021,
which requires the format string for panic!()s (including via the various
assert!() macros) to be a string literal.  This triggers quite a few times
in the agent code.  This patch fixes them.

fixes #1626

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-04-06 11:51:34 +10:00
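For illustration, the warning and its usual fix look like this; the exact
call sites in the agent differ:

```
fn main() {
    let msg = format!("unexpected uevent for {}", "vda");

    // Rust 1.51 warns here (and the 2021 edition rejects it), because the
    // panic message is not a string literal:
    // panic!(msg);

    // Warning-free form: pass a literal format string.
    panic!("{}", msg);
}
```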
Carlos Venegas
f365bdb7cf versions: qemu-experimental: 6.0~rc 470dd6
Move to the next 6.0 dev tree for qemu experimental;
the qemu version has the same base as:

https://gitlab.com/virtio-fs/qemu/-/commits/virtio-fs-dev/

With qemu 6.0-rc1, some patches do not apply.

Fixes: #1624

Signed-off-by: Carlos Venegas <jos.c.venegas.munoz@intel.com>
2021-04-05 21:01:58 +00:00
Carlos Venegas
6491b9d7aa qemu: Add support to build static qemu for dev tree
Update the static build scripts to allow building a qemu dev tree.
When qemu starts the process for a new version, the patch number
in the qemu version is 50 or higher. Add logic to detect this
and not apply patches from the base branch.

For example:

Qemu 5.2.50 marks the beginning of 6.0 development. After detecting a
development version, patches for 5.2.x will not be applied.

Signed-off-by: Carlos Venegas <jos.c.venegas.munoz@intel.com>
2021-04-05 20:55:55 +00:00
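The detection heuristic can be sketched like this (assumed shape; the real
check lives in the static build scripts):

```
// A micro version of 50 or higher marks a development tree, so the
// stable-branch patches are skipped.
fn is_dev_tree(version: &str) -> bool {
    version
        .split('.')
        .nth(2)
        .and_then(|micro| micro.parse::<u32>().ok())
        .map_or(false, |micro| micro >= 50)
}

fn main() {
    assert!(is_dev_tree("5.2.50"));  // the beginning of 6.0 development
    assert!(!is_dev_tree("5.2.0")); // stable: apply the 5.2.x patches
}
```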
GabyCT
503039482b Merge pull request #1620 from eadamsintel/update-qat-dockerfile
osbuilder: Update QAT Dockerfile with new QAT driver version
2021-04-05 09:51:14 -05:00
Snir Sheriber
13653e7b55 runtime: increase dial timeout
On some setups, starting multiple kata pods (qemu) simultaneously on the same node
might cause kata VM boot times to increase and the pods to fail with:
Failed to check if grpc server is working: rpc error: code = DeadlineExceeded desc = timed
out connecting to vsock 1358662990:1024: unknown

Increasing the default dialing timeout to 30s should cover most cases.

Signed-off-by: Snir Sheriber <ssheribe@redhat.com>
Fixes: #1543
2021-04-04 09:37:38 +03:00
Eric Ernst
d44412fe24 Merge pull request #1623 from egernst/custom-image-registry
osbuilder: update dockerfiles to utilize IMAGE_REGISTRY
2021-04-02 14:18:36 -07:00
Chelsea Mafrica
17b1452c2a Merge pull request #1607 from fidencio/wip/only-keep-one-VERSION-file
Only keep one VERSION file
2021-04-02 11:14:12 -07:00
Eric Ernst
935460e549 osbuilder: update dockerfiles to utilize IMAGE_REGISTRY
While we introduced IMAGE_REGISTRY, we didn't actually update the
corresponding Dockerfiles to utilize it. Let's add that support.

Fixes: #1622

Signed-off-by: Eric Ernst <eric.g.ernst@gmail.com>
2021-04-02 09:46:16 -07:00
Eric Adams
010d57f4e2 osbuilder: Update QAT Dockerfile with new QAT driver version
This fixes the QAT driver version and provides a check early in the
building process to make sure the driver exists. It also provides
hints to users on how to fix it themselves if the driver changes again.

Fixes: #1618

Signed-off-by: Eric Adams <eric.adams@intel.com>
2021-04-01 19:20:44 +00:00
Fabiano Fidêncio
adb866ad64 kata-deploy: Adapt to the correct tag name
Use 2.1.0-alpha1 instead of 2.1-alpha1

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-04-01 20:45:30 +02:00
Fabiano Fidêncio
60adc7f02b VERSION: Use the correct form
Following what's been done in the past for 1.x repos, the version should
be 2.1.0-alpha1 (instead of 2.1-alpha1).

Fixes: #1617

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-04-01 20:43:34 +02:00
Bo Chen
1511d966aa Merge pull request #1616 from egernst/dechat-deruntime
Dechat deruntime
2021-04-01 11:02:27 -07:00
Chelsea Mafrica
4a3282cf1a Merge pull request #1608 from likebreath/0331/go_fmt_clh_clinet_code
runtime: Format auto-generated client code for cloud-hypervisor API
2021-04-01 10:39:02 -07:00
Eric Ernst
a4c125a8b9 trace: move gRPC requests from debug to trace
There are many requests to the agent that happen with relatively
high frequency when a workload is running (checkRequest, as an example).

Let's move from Debug to Trace to avoid bombarding journal.

Signed-off-by: Eric Ernst <eric.g.ernst@gmail.com>
2021-04-01 09:03:26 -07:00
Eric Ernst
50fff97753 trace: move trace span chatter to trace rather than info
No human should ever read that output. Let's at least move it to trace for now.

Fixes: #1615

Signed-off-by: Eric Ernst <eric.g.ernst@gmail.com>
2021-04-01 09:02:56 -07:00
Fupan Li
5524bc806b Merge pull request #1612 from liubin/1610/use-concrete-kata-agent-config-type
runtime: use concrete KataAgentConfig instead of interface type
2021-04-01 21:26:38 +08:00
Jianyong Wu
28bd8c1110 kernel: upgrade kernel to 5.10.x for arm64.
In kernel 5.10.x on the arm64 side, when CONFIG_RANDOM_BASE is enabled,
the physical base address can be a negative number. This may lead to a bug
when a PA is treated as an unsigned number in a comparison, as the PA is
calculated based on the physical base address. The bug has been fixed in
the latest code by commit ee7febce051945be2 in the memory hotplug zone code.
We can eliminate the bug in an easy way by casting the PA to a signed value
in the current code base, avoiding lots of backports.

Depends-on: github.com/kata-containers/tests#3388
Fixes: #1596
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
2021-04-01 17:01:44 +08:00
bin
6fe48329b5 runtime: use concrete KataAgentConfig instead of interface type
Kata Containers 2.0 only has one type of agent, so there is no
need to use an interface as the config's type.

Fixes: #1610

Signed-off-by: bin <bin@hyper.sh>
2021-04-01 13:44:45 +08:00
fupan.lfp
6493942568 mount: fix the issue of missing set fsGroup
For a k8s emptyDir volume, a specific fsGroup would
be set for it, thus the guest should get this fsGroup
from the runtime and set it properly on the emptyDir volume
in the guest.

Fixes: #1580

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2021-04-01 11:33:26 +08:00
fupan.lfp
88e58a4f4b agent: fix the issue of missing pass fsGroup
For a k8s emptyDir volume, a specific fsGroup would
be set for it, thus the runtime should pass this fsGroup
to the guest and set it properly on the emptyDir volume
in the guest.

Fixes: #1580

Signed-off-by: fupan.lfp <fupan.lfp@antfin.com>
2021-04-01 11:33:18 +08:00
Fabiano Fidêncio
572aff53e8 build: Only keep one VERSION file
Instead of having different VERSION files spread across the project,
let's always use the one in the topsrcdir and remove all the others,
keeping only a symlink to the topsrcdir one.

Fixes: #1579

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-03-31 23:51:20 +02:00
Bo Chen
0c38d9ecc4 runtime: Fix the format of the client code of cloud-hypervisor APIs
Regenerate the client code with the added `go-fmt` step. No functional
changes.

Fixes: #1606

Signed-off-by: Bo Chen <chen.bo@intel.com>
2021-03-31 14:41:44 -07:00
Bo Chen
52cacf8838 runtime: Format auto-generated client code for cloud-hypervisor API
This patch extends the current process of generating client code for
cloud-hypervisor API with an additional step, `go-fmt`, which will remove
the generated `client/go.mod` file and format all auto-generated code.

Fixes: #1606

Signed-off-by: Bo Chen <chen.bo@intel.com>
2021-03-31 14:36:24 -07:00
Eric Ernst
c0c7bef2b8 Merge pull request #1592 from likebreath/0330/versions_clh_v0.14.0
versions: Update cloud-hypervisor to release v0.14.1
2021-03-31 12:39:35 -07:00
Fabiano Fidêncio
a3d8554ab9 Merge pull request #1577 from liubin/feature/1576-import-runc-v2-options-types
runtime: import runtime/v2/runc/options to decode request from Docker
2021-03-31 20:35:24 +02:00
Fabiano Fidêncio
a6a53698c1 Merge pull request #1519 from nubificus/fc-v0.23.1
virtcontainers/fc: Upgrade Firecracker to v0.23.1
2021-03-31 20:34:25 +02:00
Bo Chen
84b62dc3b1 versions: Update cloud-hypervisor to release v0.14.1
Highlights for cloud-hypervisor version 0.14.0 include: 1) Structured
event monitoring; 2) MSHV improvements; 3) Improved aarch64 platform; 4)
Updated hotplug documentation; 5) PTY control for serial and
virtio-console; 6) Block device rate limiting; 7) Plan to deprecate the
support of "LinuxBoot" protocol and support PVH protocol only.

Highlights for cloud-hypervisor version 0.13.0 include: 1) Wider VFIO
device support; 2) Improve huge page support; 3) MACvTAP support; 4) VHD
disk image support; 5) Improved Virtio device threading; 6) Clean
shutdown support via synthetic power button.

Details can be found:
https://github.com/cloud-hypervisor/cloud-hypervisor/releases

Note: The client code of cloud-hypervisor's OpenAPI is automatically
generated by `openapi-generator` [1-2]. As the API changes do not
impact usages in Kata, no additional changes in kata's runtime are
needed to work with the latest version of cloud-hypervisor.

[1] https://github.com/OpenAPITools/openapi-generator
[2] https://github.com/kata-containers/kata-containers/blob/main/src/runtime/virtcontainers/pkg/cloud-hypervisor/README.md

Fixes: #1591

Signed-off-by: Bo Chen <chen.bo@intel.com>
2021-03-31 11:09:47 -07:00
Carlos Venegas
ed2e736df7 Merge pull request #1589 from fidencio/wip/update-install-docs-for-ubuntu
docs: Remove ubuntu installation guide
2021-03-31 11:54:34 -06:00
Carlos Venegas
0e7af7f27f Merge pull request #1602 from fidencio/wip/update-install-for-snap
docs: Update snap install guide
2021-03-31 10:52:48 -06:00
Fabiano Fidêncio
4a38ff41f0 docs: Update snap install guide
As this repo is specific to kata-containers 2.x, let's stop
mentioning / referring to 1.x here, including how to set up and use
the snap package for 1.x.

Fixes: #1601

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-03-31 15:26:47 +02:00
Fabiano Fidêncio
ede1ab8670 docs: Remove ubuntu installation guide
The installation guide points to 1.x packages from OBS.  For 2.x we
decided to stop building packages on OBS in favour of advertising
kata-deploy.

Apart from this, Ubuntu itself doesn't provide packages for
kata-containers.

Fixes: #1588

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-03-31 14:49:41 +02:00
Tim Zhang
3cc27610ab Merge pull request #1354 from liubin/fix/1325-update-doc-for-using-k8s
docs: update how-to-use-k8s-with-cri-containerd-and-kata.md
2021-03-31 19:19:00 +08:00
Orestis Lagkas Nikolos
6255cc1959 virtcontainers/fc: Upgrade Firecracker to v0.23.1
This patch upgrades Firecracker version from v0.21.1 to v0.23.1

* Generate swagger models for v0.23.1 (from firecracker.yaml)
* Change uint64 types in TokenBucket object according to rate-limiter
implementation (introduced in commit #cfeb966)
* Update Firecracker Logger/Metrics to support the new API
* Update payload in fc.vmRunning to support the new API
* Add Metrics type to fcConfig

Fixes: #1518

Signed-off-by: Orestis Lagkas Nikolos <olagkasn@nubificus.co.uk>
2021-03-31 04:55:40 -05:00
Fabiano Fidêncio
9c8e95c820 Merge pull request #1584 from fidencio/wip/update-install-docs-for-fedora-and-centos
Update install docs for Fedora and CentOS
2021-03-31 11:31:11 +02:00
bin
2c47277ca1 docs: update how-to-use-k8s-with-cri-containerd-and-kata.md
Update how-to-use-k8s-with-cri-containerd-and-kata.md to fit the latest
Kubernetes way.
And also changed CNI plugin from flannel to bridge, that will be easy to run.

Fixes: #1325

Signed-off-by: bin <bin@hyper.sh>
2021-03-31 17:10:39 +08:00
Bin Liu
a8756887f6 Merge pull request #1594 from bergwolf/action
action: fix missing qemu tag
2021-03-31 16:58:03 +08:00
Fabiano Fidêncio
317f55f89e docs: Update minimum version for Fedora
The minimum version where everything runs out-of-the-box for the 2.x
package is Fedora 34.

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-03-31 10:34:27 +02:00
Fabiano Fidêncio
1ce29fc959 docs: Update CentOS install docs
There are two changes here.  The first one is relying on the
`centos-release-advanced-virtualization` package instead of providing the
content of the repo ourselves; the second one is installing
`kata-containers` (2.x) instead of the `kata-runtime` one (1.x).

Fixes: #1583

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-03-31 10:01:03 +02:00
Fabiano Fidêncio
3f90561bf1 docs: Update Fedora install docs
The package to be installed on Fedora is `kata-containers` instead of
`kata-runtime`.  The difference being `kata-runtime` is the 1.x package,
while `kata-containers` is the 2.x one.

Fixes: #1582

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-03-31 10:01:03 +02:00
Fabiano Fidêncio
a85d235e0e Merge pull request #1587 from fidencio/wip/update-install-docs-for-sle-and-opensuse
Remove installation guides for SLE and openSUSE
2021-03-31 09:54:21 +02:00
Peng Tao
8a1c6c3ff0 action: fix missing qemu tag
Otherwise it breaks qemu build.

Fixes: #1593
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2021-03-31 11:47:16 +08:00
Fabiano Fidêncio
bf707209df Merge pull request #1384 from fidencio/wip/update-kernel-config-for-overlayfs
kernel: Enable OVERLAY_FS_{METACOPY,XINO_AUTO}
2021-03-30 23:20:20 +02:00
Fabiano Fidêncio
a9ff9c8707 docs: Remove openSUSE installation guide
The content of the openSUSE installation guide is related to the 1.x
packages, as openSUSE doesn't provide kata-containers 2.x packages.

Fixes: #1585

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-03-30 22:24:19 +02:00
Fabiano Fidêncio
2888ceb024 docs: Remove SLE installation guide
The content of the SLE installation guide is related to the 1.x
packages, as SUSE doesn't provide kata-containers 2.x packages.

Fixes: #1586

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-03-30 22:23:43 +02:00
Carlos Venegas
8e48fecc2c Merge pull request #1540 from jcvenegas/2021-03-23/kernel-5.10.x
versions: kernel 5.10.x
2021-03-30 12:12:53 -06:00
Chelsea Mafrica
e5aa4e7eb4 Merge pull request #1563 from Jakob-Naucke/s390x-missing-contexts
virtcontainers: Fix missing contexts in s390x
2021-03-30 09:38:28 -07:00
Carlos Venegas
c748a9c278 Merge pull request #1549 from jcvenegas/2021-03-24/makefile-enable-dax-env-var
runtime: makefile allow override DAX value
2021-03-30 10:06:16 -06:00
bin
09d454ac74 runtime: import runtime/v2/runc/options to decode request from Docker
Shimv2 protocol CreateTaskRequest.Options has a type of *google_protobuf.Any.
If the call is from Docker, to decode the request,
the proto types (github.com/containerd/containerd/runtime/v2/runc/options)
should be imported.

Fixes: #1576

Signed-off-by: bin <bin@hyper.sh>
2021-03-30 19:44:00 +08:00
Carlos Venegas
0b502d15b2 runtime: makefile allow override DAX value
Allow enabling DAX using an env variable.

Fixes: #1547

Signed-off-by: Carlos Venegas <jos.c.venegas.munoz@intel.com>
2021-03-29 21:28:22 +00:00
Carlos Venegas
a65519b9d3 versions: keep using kernel 5.4.x for ARM
ARM CI fails with the new kernel. Let's use 5.4.x until it is
fixed.

Depends-on: github.com/kata-containers/tests#3363

Signed-off-by: Carlos Venegas <jos.c.venegas.munoz@intel.com>
2021-03-29 21:24:14 +00:00
Jakob Naucke
1366f0fb9c cli: Use genericGetExpectedHostDetails on s390x
getExpectedHostDetails did not offload any work to
genericGetExpectedHostDetails on s390x. By using that function, much
redundant code can be saved. This also resolves 2 issues with the
previous version:

- The number of CPUs was not calculated.
- vcUtils.SupportsVsocks() still used the Kata v1 signature.

Fixes: #1564

Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-03-29 17:58:16 +02:00
Jakob Naucke
31ced01eba virtcontainers: Fix missing contexts in s390x
#1389 has added a context for many signatures to improve trace spans.
Functions specific to s390x lack this. Add context where required. This
affects some common code signatures, since some functions that do not
require context on other architectures do require it on s390x.
Also remove an unnecessary import in test_qemu_s390x.go.

Fixes: #1562

Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-03-29 17:49:27 +02:00
Jakob Naucke
52a276fbdb agent: Fix type for PROC_SUPER_MAGIC on s390x
statfs f_types are long on most architectures, but not on s390x, where
they are uint. Following the fix in rust-lang/libc at
https://github.com/rust-lang/libc/pull/1999, the custom defined
PROC_SUPER_MAGIC must be updated in a similar way.

Fixes: #1204

Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-03-29 17:25:19 +02:00
Jakob Naucke
5b7c8b7d26 agent: Update cgroups-rs to 0.2.5
to pull in the chain of https://github.com/rust-lang/libc/pull/1999,
https://github.com/nix-rust/nix/pull/1372, and
https://github.com/kata-containers/cgroups-rs/pull/38. This adds statfs
constants on s390x. cgroups-rs 0.2.4 also contains this fix, but let's
move to the latest 0.2.5 right away.

Fixes: #1204

Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-03-29 17:25:14 +02:00
Carlos Venegas
c035cdb3ef versions: kernel 5.10.x
Linux 5.10.x is the new LTS branch; move
kata to a more recent kernel branch.

Fixes: #1288

Signed-off-by: Carlos Venegas <jos.c.venegas.munoz@intel.com>
2021-03-26 17:58:09 +00:00
Manabu Sugimoto
660b047306 oci: Update seccomp configuration
Seccomp configuration should be updated to prepare for the future seccomp support based on the latest OCI specification.

Add:
- flags, which are used with seccomp(2), in struct LinuxSeccomp
- errnoRet, which is the errno return code, in struct LinuxSyscall
- some new seccomp actions and an architecture

Fixes: #1391

Signed-off-by: Manabu Sugimoto <Manabu.Sugimoto@sony.com>
2021-02-11 22:37:02 +09:00
Fabiano Fidêncio
8c1e0d3002 kernel: Enable OVERLAY_FS_{METACOPY,XINO_AUTO}
* CONFIG_OVERLAY_FS_METACOPY is needed to have reasonable performance
  for chmod and similar calls;
* CONFIG_OVERLAY_FS_XINO_AUTO is recommended for POSIX compliance.

Fixes: #1075

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-02-09 13:01:01 +01:00
253 changed files with 10921 additions and 4095 deletions

View File

@@ -1,9 +1,12 @@
on: issue_comment
on:
issue_comment:
types: [created, edited]
name: test-kata-deploy
jobs:
check_comments:
if: ${{ github.event.issue.pull_request }}
types: [created, edited]
runs-on: ubuntu-latest
steps:
- name: Check for Command
@@ -11,7 +14,7 @@ jobs:
uses: kata-containers/slash-command-action@v1
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
command: "test-kata-deploy"
command: "test_kata_deploy"
reaction: "true"
reaction-type: "eyes"
allow-edits: "false"
@@ -19,6 +22,7 @@ jobs:
- name: verify command arg is kata-deploy
run: |
echo "The command was '${{ steps.command.outputs.command-name }}' with arguments '${{ steps.command.outputs.command-arguments }}'"
create-and-test-container:
needs: check_comments
runs-on: ubuntu-latest
@@ -29,22 +33,26 @@ jobs:
ref=$(cat $GITHUB_EVENT_PATH | jq -r '.issue.pull_request.url' | sed 's#^.*\/pulls#refs\/pull#' | sed 's#$#\/merge#')
echo "reference for PR: " ${ref}
echo "##[set-output name=pr-ref;]${ref}"
- uses: actions/checkout@v2-beta
- name: check out
uses: actions/checkout@v2
with:
ref: ${{ steps.get-PR-ref.outputs.pr-ref }}
ref: ${{ steps.get-PR-ref.outputs.pr-ref }}
- name: build-container-image
id: build-container-image
run: |
PR_SHA=$(git log --format=format:%H -n1)
VERSION=$(curl https://raw.githubusercontent.com/kata-containers/kata-containers/main/VERSION)
VERSION="2.0.0"
ARTIFACT_URL="https://github.com/kata-containers/kata-containers/releases/download/${VERSION}/kata-static-${VERSION}-x86_64.tar.xz"
wget "${ARTIFACT_URL}" -O ./kata-deploy/kata-static.tar.xz
docker build --build-arg KATA_ARTIFACTS=kata-static.tar.xz -t katadocker/kata-deploy-ci:${PR_SHA} ./kata-deploy
wget "${ARTIFACT_URL}" -O tools/packaging/kata-deploy/kata-static.tar.xz
docker build --build-arg KATA_ARTIFACTS=kata-static.tar.xz -t katadocker/kata-deploy-ci:${PR_SHA} ./tools/packaging/kata-deploy
docker login -u ${{ secrets.DOCKER_USERNAME }} -p ${{ secrets.DOCKER_PASSWORD }}
docker push katadocker/kata-deploy-ci:$PR_SHA
echo "##[set-output name=pr-sha;]${PR_SHA}"
- name: test-kata-deploy-ci-in-aks
uses: ./kata-deploy/action
uses: ./tools/packaging/kata-deploy/action
with:
packaging-sha: ${{ steps.build-container-image.outputs.pr-sha }}
env:

View File

@@ -1 +1 @@
2.1-alpha1
2.1.0

View File

@@ -18,7 +18,9 @@ function install_yq() {
GOPATH=${GOPATH:-${HOME}/go}
local yq_path="${GOPATH}/bin/yq"
local yq_pkg="github.com/mikefarah/yq"
[ -x "${GOPATH}/bin/yq" ] && return
local yq_version=3.4.1
[ -x "${GOPATH}/bin/yq" ] && [ "`${GOPATH}/bin/yq --version`"X == "yq version ${yq_version}"X ] && return
read -r -a sysInfo <<< "$(uname -sm)"
@@ -56,8 +58,6 @@ function install_yq() {
die "Please install curl"
fi
local yq_version=3.4.1
## NOTE: ${var,,} => gives lowercase value of var
local yq_url="https://${yq_pkg}/releases/download/${yq_version}/yq_${goos,,}_${goarch}"
curl -o "${yq_path}" -LSsf "${yq_url}"

View File

@@ -7,16 +7,25 @@ export tests_repo="${tests_repo:-github.com/kata-containers/tests}"
export tests_repo_dir="$GOPATH/src/$tests_repo"
export branch="${branch:-main}"
# Clones the tests repository and checkout to the branch pointed out by
# the global $branch variable.
# If the clone exists and `CI` is exported then it does nothing. Otherwise
# it will clone the repository or `git pull` the latest code.
#
clone_tests_repo()
{
if [ -d "$tests_repo_dir" -a -n "$CI" ]
then
return
if [ -d "$tests_repo_dir" ]; then
[ -n "$CI" ] && return
pushd "${tests_repo_dir}"
git checkout "${branch}"
git pull
popd
else
git clone -q "https://${tests_repo}" "$tests_repo_dir"
pushd "${tests_repo_dir}"
git checkout "${branch}"
popd
fi
go get -d -u "$tests_repo" || true
pushd "${tests_repo_dir}" && git checkout "${branch}" && popd
}
run_static_checks()

View File

@@ -0,0 +1,9 @@
# Copyright (c) 2021 Red Hat, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# This is the build root image for Kata Containers on OpenShift CI.
#
FROM centos:8
RUN yum -y update && yum -y install git sudo wget

View File

@@ -1,54 +1,54 @@
* [Warning](#warning)
* [Assumptions](#assumptions)
* [Initial setup](#initial-setup)
* [Requirements to build individual components](#requirements-to-build-individual-components)
* [Build and install the Kata Containers runtime](#build-and-install-the-kata-containers-runtime)
* [Check hardware requirements](#check-hardware-requirements)
* [Configure to use initrd or rootfs image](#configure-to-use-initrd-or-rootfs-image)
* [Enable full debug](#enable-full-debug)
* [debug logs and shimv2](#debug-logs-and-shimv2)
* [Enabling full `containerd` debug](#enabling-full-containerd-debug)
* [Enabling just `containerd shim` debug](#enabling-just-containerd-shim-debug)
* [Enabling `CRI-O` and `shimv2` debug](#enabling-cri-o-and-shimv2-debug)
* [journald rate limiting](#journald-rate-limiting)
* [`systemd-journald` suppressing messages](#systemd-journald-suppressing-messages)
* [Disabling `systemd-journald` rate limiting](#disabling-systemd-journald-rate-limiting)
* [Create and install rootfs and initrd image](#create-and-install-rootfs-and-initrd-image)
* [Build a custom Kata agent - OPTIONAL](#build-a-custom-kata-agent---optional)
* [Get the osbuilder](#get-the-osbuilder)
* [Create a rootfs image](#create-a-rootfs-image)
* [Create a local rootfs](#create-a-local-rootfs)
* [Add a custom agent to the image - OPTIONAL](#add-a-custom-agent-to-the-image---optional)
* [Build a rootfs image](#build-a-rootfs-image)
* [Install the rootfs image](#install-the-rootfs-image)
* [Create an initrd image - OPTIONAL](#create-an-initrd-image---optional)
* [Create a local rootfs for initrd image](#create-a-local-rootfs-for-initrd-image)
* [Build an initrd image](#build-an-initrd-image)
* [Install the initrd image](#install-the-initrd-image)
* [Install guest kernel images](#install-guest-kernel-images)
* [Install a hypervisor](#install-a-hypervisor)
* [Build a custom QEMU](#build-a-custom-qemu)
* [Build a custom QEMU for aarch64/arm64 - REQUIRED](#build-a-custom-qemu-for-aarch64arm64---required)
* [Run Kata Containers with Containerd](#run-kata-containers-with-containerd)
* [Run Kata Containers with Kubernetes](#run-kata-containers-with-kubernetes)
* [Troubleshoot Kata Containers](#troubleshoot-kata-containers)
* [Appendices](#appendices)
* [Checking Docker default runtime](#checking-docker-default-runtime)
* [Set up a debug console](#set-up-a-debug-console)
* [Simple debug console setup](#simple-debug-console-setup)
* [Enable agent debug console](#enable-agent-debug-console)
* [Connect to debug console](#connect-to-debug-console)
* [Traditional debug console setup](#traditional-debug-console-setup)
* [Create a custom image containing a shell](#create-a-custom-image-containing-a-shell)
* [Build the debug image](#build-the-debug-image)
* [Configure runtime for custom debug image](#configure-runtime-for-custom-debug-image)
* [Connect to the virtual machine using the debug console](#connect-to-the-virtual-machine-using-the-debug-console)
* [Enabling debug console for QEMU](#enabling-debug-console-for-qemu)
* [Enabling debug console for cloud-hypervisor / firecracker](#enabling-debug-console-for-cloud-hypervisor--firecracker)
* [Create a container](#create-a-container)
* [Connect to the virtual machine using the debug console](#connect-to-the-virtual-machine-using-the-debug-console)
* [Obtain details of the image](#obtain-details-of-the-image)
* [Capturing kernel boot logs](#capturing-kernel-boot-logs)
- [Warning](#warning)
- [Assumptions](#assumptions)
- [Initial setup](#initial-setup)
- [Requirements to build individual components](#requirements-to-build-individual-components)
- [Build and install the Kata Containers runtime](#build-and-install-the-kata-containers-runtime)
- [Check hardware requirements](#check-hardware-requirements)
- [Configure to use initrd or rootfs image](#configure-to-use-initrd-or-rootfs-image)
- [Enable full debug](#enable-full-debug)
- [debug logs and shimv2](#debug-logs-and-shimv2)
- [Enabling full `containerd` debug](#enabling-full-containerd-debug)
- [Enabling just `containerd shim` debug](#enabling-just-containerd-shim-debug)
- [Enabling `CRI-O` and `shimv2` debug](#enabling-cri-o-and-shimv2-debug)
- [journald rate limiting](#journald-rate-limiting)
- [`systemd-journald` suppressing messages](#systemd-journald-suppressing-messages)
- [Disabling `systemd-journald` rate limiting](#disabling-systemd-journald-rate-limiting)
- [Create and install rootfs and initrd image](#create-and-install-rootfs-and-initrd-image)
- [Build a custom Kata agent - OPTIONAL](#build-a-custom-kata-agent---optional)
- [Get the osbuilder](#get-the-osbuilder)
- [Create a rootfs image](#create-a-rootfs-image)
- [Create a local rootfs](#create-a-local-rootfs)
- [Add a custom agent to the image - OPTIONAL](#add-a-custom-agent-to-the-image---optional)
- [Build a rootfs image](#build-a-rootfs-image)
- [Install the rootfs image](#install-the-rootfs-image)
- [Create an initrd image - OPTIONAL](#create-an-initrd-image---optional)
- [Create a local rootfs for initrd image](#create-a-local-rootfs-for-initrd-image)
- [Build an initrd image](#build-an-initrd-image)
- [Install the initrd image](#install-the-initrd-image)
- [Install guest kernel images](#install-guest-kernel-images)
- [Install a hypervisor](#install-a-hypervisor)
- [Build a custom QEMU](#build-a-custom-qemu)
- [Build a custom QEMU for aarch64/arm64 - REQUIRED](#build-a-custom-qemu-for-aarch64arm64---required)
- [Run Kata Containers with Containerd](#run-kata-containers-with-containerd)
- [Run Kata Containers with Kubernetes](#run-kata-containers-with-kubernetes)
- [Troubleshoot Kata Containers](#troubleshoot-kata-containers)
- [Appendices](#appendices)
- [Checking Docker default runtime](#checking-docker-default-runtime)
- [Set up a debug console](#set-up-a-debug-console)
- [Simple debug console setup](#simple-debug-console-setup)
- [Enable agent debug console](#enable-agent-debug-console)
- [Connect to debug console](#connect-to-debug-console)
- [Traditional debug console setup](#traditional-debug-console-setup)
- [Create a custom image containing a shell](#create-a-custom-image-containing-a-shell)
- [Build the debug image](#build-the-debug-image)
- [Configure runtime for custom debug image](#configure-runtime-for-custom-debug-image)
- [Create a container](#create-a-container)
- [Connect to the virtual machine using the debug console](#connect-to-the-virtual-machine-using-the-debug-console)
- [Enabling debug console for QEMU](#enabling-debug-console-for-qemu)
- [Enabling debug console for cloud-hypervisor / firecracker](#enabling-debug-console-for-cloud-hypervisor--firecracker)
- [Connecting to the debug console](#connecting-to-the-debug-console)
- [Obtain details of the image](#obtain-details-of-the-image)
- [Capturing kernel boot logs](#capturing-kernel-boot-logs)
# Warning
@@ -384,22 +384,19 @@ You can build and install the guest kernel image as shown [here](../tools/packag
# Install a hypervisor
When setting up Kata using a [packaged installation method](install/README.md#installing-on-a-linux-system), the `qemu-lite` hypervisor is installed automatically. For other installation methods, you will need to manually install a suitable hypervisor.
When setting up Kata using a [packaged installation method](install/README.md#installing-on-a-linux-system), the
`QEMU` VMM is installed automatically. Cloud-Hypervisor and Firecracker VMMs are available from the [release tarballs](https://github.com/kata-containers/kata-containers/releases), as well as through [`kata-deploy`](../tools/packaging/kata-deploy/README.md).
You may choose to manually build your VMM/hypervisor.
## Build a custom QEMU
Your QEMU directory need to be prepared with source code. Alternatively, you can use the [Kata containers QEMU](https://github.com/kata-containers/qemu/tree/master) and checkout the recommended branch:
Kata Containers makes use of upstream QEMU branch. The exact version
and repository utilized can be found by looking at the [versions file](../versions.yaml).
```
$ go get -d github.com/kata-containers/qemu
$ qemu_branch=$(grep qemu-lite- ${GOPATH}/src/github.com/kata-containers/kata-containers/versions.yaml | cut -d '"' -f2)
$ cd ${GOPATH}/src/github.com/kata-containers/qemu
$ git checkout -b $qemu_branch remotes/origin/$qemu_branch
$ your_qemu_directory=${GOPATH}/src/github.com/kata-containers/qemu
```
To build a version of QEMU using the same options as the default `qemu-lite` version , you could use the `configure-hypervisor.sh` script:
Kata often utilizes patches for not-yet-upstream fixes for components,
including QEMU. These can be found in the [packaging/QEMU directory](../tools/packaging/qemu/patches)
To build utilizing the same options as Kata, you should make use of the `configure-hypervisor.sh` script. For example:
```
$ go get -d github.com/kata-containers/kata-containers/tools/packaging
$ cd $your_qemu_directory
@@ -409,6 +406,8 @@ $ make -j $(nproc)
$ sudo -E make install
```
See the [static-build script for QEMU](../tools/packaging/static-build/qemu/build-static-qemu.sh) for a reference on how to get, setup, configure and build QEMU for Kata.
### Build a custom QEMU for aarch64/arm64 - REQUIRED
> **Note:**
>
@@ -613,8 +612,11 @@ sudo sed -i -e 's/^kernel_params = "\(.*\)"/kernel_params = "\1 agent.debug_cons
> **Note** Ports 1024 and 1025 are reserved for communication with the agent
> and gathering of agent logs respectively.
Next, connect to the debug console. The VSOCKS paths vary slightly between
cloud-hypervisor and firecracker.
##### Connecting to the debug console
Next, connect to the debug console. The VSOCKS paths vary slightly between each
VMM solution.
In case of cloud-hypervisor, connect to the `vsock` as shown:
```
$ sudo su -c 'cd /var/run/vc/vm/{sandbox_id}/root/ && socat stdin unix-connect:clh.sock'
@@ -631,6 +633,12 @@ CONNECT 1026
**Note**: You need to press the `RETURN` key to see the shell prompt.
For QEMU, connect to the `vsock` as shown:
```
$ sudo su -c 'cd /var/run/vc/vm/{sandbox_id} && socat "stdin,raw,echo=0,escape=0x11" "unix-connect:console.sock"'
```
To disconnect from the virtual machine, type `CONTROL+q` (hold down the
`CONTROL` key and press `q`).

View File

@@ -19,6 +19,8 @@
* [Support for joining an existing VM network](#support-for-joining-an-existing-vm-network)
* [docker --net=host](#docker---nethost)
* [docker run --link](#docker-run---link)
* [Storage limitations](#storage-limitations)
* [Kubernetes `volumeMounts.subPaths`](#kubernetes-volumemountssubpaths)
* [Host resource sharing](#host-resource-sharing)
* [docker run --privileged](#docker-run---privileged)
* [Miscellaneous](#miscellaneous)
@@ -26,7 +28,7 @@
* [Appendices](#appendices)
* [The constraints challenge](#the-constraints-challenge)
---
***
# Overview
@@ -92,7 +94,9 @@ This section lists items that might be possible to fix.
### checkpoint and restore
The runtime does not provide `checkpoint` and `restore` commands. There
are discussions about using VM save and restore to give [`criu`](https://github.com/checkpoint-restore/criu)-like functionality, which might provide a solution.
are discussions about using VM save and restore to give us a
`[criu](https://github.com/checkpoint-restore/criu)`-like functionality,
which might provide a solution.
Note that the OCI standard does not specify `checkpoint` and `restore`
commands.
@@ -216,6 +220,17 @@ Equivalent functionality can be achieved with the newer docker networking comman
See more documentation at
[docs.docker.com](https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/).
## Storage limitations
### Kubernetes `volumeMounts.subPaths`
Kubernetes `volumeMount.subPath` is not supported by Kata Containers at the
moment.
See [this issue](https://github.com/kata-containers/runtime/issues/2812) for more details.
[Another issue](https://github.com/kata-containers/kata-containers/issues/1728) focuses on the case of `emptyDir`.
## Host resource sharing
### docker run --privileged
@@ -224,7 +239,7 @@ Privileged support in Kata is essentially different from `runc` containers.
Kata does support the `docker run --privileged` command, but in this case full access
to the guest VM is provided in addition to some host access.
The container runs with elevated capabilities within the guest and is granted
access to guest devices instead of the host devices.
This is also true with using `securityContext privileged=true` with Kubernetes.

View File

@@ -18,8 +18,7 @@
## Requirements
- [hub](https://github.com/github/hub)
- OBS account with permissions on [`/home:katacontainers`](https://build.opensuse.org/project/subprojects/home:katacontainers)
* An [application token](https://github.com/settings/tokens) is required for `hub`.
- GitHub permissions to push tags and create releases in Kata repositories.
@@ -32,14 +31,9 @@
### Bump all Kata repositories
- We have set up a Jenkins job to bump the version in the `VERSION` file in all Kata repositories. Go to the [Jenkins bump-job page](http://jenkins.katacontainers.io/job/release/build) to trigger a new job.
- Start a new job with the following variables:
- `BRANCH=<the-branch-you-want-to-bump>`
- `NEW_VERSION=<the-new-kata-version>`
For example, to make a patch release `1.10.2`, the variable `NEW_VERSION` should be `1.10.2` and `BRANCH` should point to `stable-1.10`. For an alpha or release candidate release, `BRANCH` should point to the `master` branch.
Alternatively, you can also bump the repositories using a script in the Kata packaging repo
Bump the repositories using a script in the Kata packaging repo, where:
- `BRANCH=<the-branch-you-want-to-bump>`
- `NEW_VERSION=<the-new-kata-version>`
```
$ cd ${GOPATH}/src/github.com/kata-containers/kata-containers/tools/packaging/release
$ export NEW_VERSION=<the-new-kata-version>

View File

@@ -26,6 +26,7 @@ There are several kinds of Kata configurations and they are listed below.
| `io.katacontainers.config.runtime.disable_new_netns` | `boolean` | determines if a new netns is created for the hypervisor process |
| `io.katacontainers.config.runtime.internetworking_model` | string| determines how the VM should be connected to the container network interface. Valid values are `macvtap`, `tcfilter` and `none` |
| `io.katacontainers.config.runtime.sandbox_cgroup_only`| `boolean` | determines if Kata processes are managed only in sandbox cgroup |
| `io.katacontainers.config.runtime.enable_pprof` | `boolean` | enables Golang `pprof` for `containerd-shim-kata-v2` process |
## Agent Options
| Key | Value Type | Comments |
@@ -60,7 +61,7 @@ There are several kinds of Kata configurations and they are listed below.
| `io.katacontainers.config.hypervisor.enable_swap` | `boolean` | enable swap of VM memory |
| `io.katacontainers.config.hypervisor.enable_vhost_user_store` | `boolean` | enable vhost-user storage device (QEMU) |
| `io.katacontainers.config.hypervisor.enable_virtio_mem` | `boolean` | enable virtio-mem (QEMU) |
| `io.katacontainers.config.hypervisor.entropy_source` | string| the path to a host source of entropy (`/dev/random`, `/dev/urandom` or real hardware RNG device) |
| `io.katacontainers.config.hypervisor.entropy_source` (R) | string| the path to a host source of entropy (`/dev/random`, `/dev/urandom` or real hardware RNG device) |
| `io.katacontainers.config.hypervisor.file_mem_backend` (R) | string | file based memory backend root directory |
| `io.katacontainers.config.hypervisor.firmware_hash` | string | container firmware SHA-512 hash value |
| `io.katacontainers.config.hypervisor.firmware` | string | the guest firmware that will run the container VM |
@@ -95,6 +96,8 @@ There are several kinds of Kata configurations and they are listed below.
In case of CRI-O, all annotations specified in the pod spec are passed down to Kata.
# containerd Configuration
For containerd, annotations specified in the pod spec are passed down to Kata
starting with version `1.3.0` of containerd. Additionally, containerd needs extra
configuration: provide a `pod_annotations` field in the containerd config
@@ -107,11 +110,9 @@ for passing annotations to Kata from containerd:
$ cat /etc/containerd/config
....
[plugins.cri.containerd.runtimes.kata]
runtime_type = "io.containerd.runc.v1"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata]
runtime_type = "io.containerd.kata.v2"
pod_annotations = ["io.katacontainers.*"]
[plugins.cri.containerd.runtimes.kata.options]
BinaryName = "/usr/bin/kata-runtime"
....
```
@@ -197,6 +198,7 @@ the configuration entry:
| Key | Config file entry | Comments |
|-------| ----- | ----- |
| `ctlpath` | `valid_ctlpaths` | Valid paths for `acrnctl` binary |
| `entropy_source` | `valid_entropy_sources` | Valid entropy sources, e.g. `/dev/random` |
| `file_mem_backend` | `valid_file_mem_backends` | Valid locations for the file-based memory backend root directory |
| `jailer_path` | `valid_jailer_paths`| Valid paths for the jailer constraining the container VM (Firecracker) |
| `path` | `valid_hypervisor_paths` | Valid hypervisors to run the container VM |

View File

@@ -7,9 +7,10 @@
* [Configure Kubelet to use containerd](#configure-kubelet-to-use-containerd)
* [Configure HTTP proxy - OPTIONAL](#configure-http-proxy---optional)
* [Start Kubernetes](#start-kubernetes)
* [Install a Pod Network](#install-a-pod-network)
* [Configure Pod Network](#configure-pod-network)
* [Allow pods to run in the master node](#allow-pods-to-run-in-the-master-node)
* [Create an untrusted pod using Kata Containers](#create-an-untrusted-pod-using-kata-containers)
* [Create runtime class for Kata Containers](#create-runtime-class-for-kata-containers)
* [Run pod in Kata Containers](#run-pod-in-kata-containers)
* [Delete created pod](#delete-created-pod)
This document describes how to set up a single-machine Kubernetes (k8s) cluster.
@@ -18,9 +19,6 @@ The Kubernetes cluster will use the
[CRI containerd plugin](https://github.com/containerd/cri) and
[Kata Containers](https://katacontainers.io) to launch untrusted workloads.
For Kata Containers 1.5.0-rc2 and above, we use `containerd-shim-kata-v2` (referred to as `shimv2` in this documentation)
to launch Kata Containers. For previous versions of Kata Containers, Pods are launched with `kata-runtime`.
## Requirements
- Kubernetes, Kubelet, `kubeadm`
@@ -125,43 +123,33 @@ $ sudo systemctl daemon-reload
$ sudo -E kubectl get pods
```
## Install a Pod Network
## Configure Pod Network
A pod network plugin is needed to allow pods to communicate with each other.
You can find more information about CNI plugins in the [Creating a cluster with `kubeadm`](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#instructions) guide.
- Install the `flannel` plugin by following the
[Using `kubeadm` to Create a Cluster](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#instructions)
guide, starting from the **Installing a pod network** section.
- Create a pod network using flannel
> **Note:** There is no known way to determine programmatically the best version (commit) to use.
> See https://github.com/coreos/flannel/issues/995.
By default, the CNI plugin binaries are installed under `/opt/cni/bin` (by the `kubernetes-cni` package), so you only need to create a configuration file for the CNI plugin.
```bash
$ sudo -E kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
$ sudo -E mkdir -p /etc/cni/net.d
- Wait for the pod network to become available
```bash
# number of seconds to wait for pod network to become available
$ timeout_dns=420
$ while [ "$timeout_dns" -gt 0 ]; do
if sudo -E kubectl get pods --all-namespaces | grep dns | grep Running; then
break
fi
sleep 1s
((timeout_dns--))
done
```
- Check the pod network is running
```bash
$ sudo -E kubectl get pods --all-namespaces | grep dns | grep Running && echo "OK" || ( echo "FAIL" && false )
$ sudo -E cat > /etc/cni/net.d/10-mynet.conf <<EOF
{
"cniVersion": "0.2.0",
"name": "mynet",
"type": "bridge",
"bridge": "cni0",
"isGateway": true,
"ipMasq": true,
"ipam": {
"type": "host-local",
"subnet": "172.19.0.0/24",
"routes": [
{ "dst": "0.0.0.0/0" }
]
}
}
EOF
```
## Allow pods to run in the master node
@@ -172,24 +160,38 @@ By default, the cluster will not schedule pods in the master node. To enable mas
$ sudo -E kubectl taint nodes --all node-role.kubernetes.io/master-
```
## Create an untrusted pod using Kata Containers
## Create runtime class for Kata Containers
By default, all pods are created with the default runtime configured in the CRI containerd plugin.
From Kubernetes v1.12, users can use [`RuntimeClass`](https://kubernetes.io/docs/concepts/containers/runtime-class/#runtime-class) to specify a different runtime for Pods.
If a pod has the `io.kubernetes.cri.untrusted-workload` annotation set to `"true"`, the CRI plugin runs the pod with the
```bash
$ cat > runtime.yaml <<EOF
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
name: kata
handler: kata
EOF
$ sudo -E kubectl apply -f runtime.yaml
```
## Run pod in Kata Containers
If a pod has the `runtimeClassName` set to `kata`, the CRI plugin runs the pod with the
[Kata Containers runtime](../../src/runtime/README.md).
- Create an untrusted pod configuration
- Create a pod configuration that uses the Kata Containers runtime
```bash
$ cat << EOT | tee nginx-untrusted.yaml
$ cat << EOT | tee nginx-kata.yaml
apiVersion: v1
kind: Pod
metadata:
name: nginx-untrusted
annotations:
io.kubernetes.cri.untrusted-workload: "true"
name: nginx-kata
spec:
runtimeClassName: kata
containers:
- name: nginx
image: nginx
@@ -197,9 +199,9 @@ If a pod has the `io.kubernetes.cri.untrusted-workload` annotation set to `"true
EOT
```
- Create an untrusted pod
- Create the pod
```bash
$ sudo -E kubectl apply -f nginx-untrusted.yaml
$ sudo -E kubectl apply -f nginx-kata.yaml
```
- Check pod is running
@@ -216,5 +218,5 @@ If a pod has the `io.kubernetes.cri.untrusted-workload` annotation set to `"true
## Delete created pod
```bash
$ sudo -E kubectl delete -f nginx-untrusted.yaml
$ sudo -E kubectl delete -f nginx-kata.yaml
```

View File

@@ -50,9 +50,7 @@ Kata packages are provided by official distribution repositories for:
| Distribution (link to installation guide) | Minimum versions |
|----------------------------------------------------------|--------------------------------------------------------------------------------|
| [CentOS](centos-installation-guide.md) | 8 |
| [Fedora](fedora-installation-guide.md) | 32, Rawhide |
| [openSUSE](opensuse-installation-guide.md) | [Leap 15.1](opensuse-leap-15.1-installation-guide.md)<br>Leap 15.2, Tumbleweed |
| [SUSE Linux Enterprise (SLE)](sle-installation-guide.md) | SLE 15 SP1, 15 SP2 |
| [Fedora](fedora-installation-guide.md) | 34 |
> **Note:**
>

View File

@@ -3,15 +3,9 @@
1. Install the Kata Containers components with the following commands:
```bash
$ sudo -E dnf install -y centos-release-advanced-virtualization
$ sudo -E dnf module disable -y virt:rhel
$ source /etc/os-release
$ cat <<EOF | sudo -E tee /etc/yum.repos.d/advanced-virt.repo
[advanced-virt]
name=Advanced Virtualization
baseurl=http://mirror.centos.org/\$contentdir/\$releasever/virt/\$basearch/advanced-virtualization
enabled=1
gpgcheck=1
skip_if_unavailable=1
EOF
$ cat <<EOF | sudo -E tee /etc/yum.repos.d/kata-containers.repo
[kata-containers]
name=Kata Containers
@@ -20,8 +14,7 @@
gpgcheck=1
skip_if_unavailable=1
EOF
$ sudo -E dnf module disable -y virt:rhel
$ sudo -E dnf install -y kata-runtime
$ sudo -E dnf install -y kata-containers
```
2. Decide which container manager to use and select the corresponding link that follows:

View File

@@ -3,7 +3,7 @@
1. Install the Kata Containers components with the following commands:
```bash
$ sudo -E dnf -y install kata-runtime
$ sudo -E dnf -y install kata-containers
```
2. Decide which container manager to use and select the corresponding link that follows:

View File

@@ -1,10 +0,0 @@
# Install Kata Containers on openSUSE
1. Install the Kata Containers components with the following commands:
```bash
$ sudo -E zypper -n install katacontainers
```
2. Decide which container manager to use and select the corresponding link that follows:
- [Kubernetes](../Developer-Guide.md#run-kata-containers-with-kubernetes)

View File

@@ -1,11 +0,0 @@
# Install Kata Containers on openSUSE Leap 15.1
1. Install the Kata Containers components with the following commands:
```bash
$ sudo -E zypper addrepo --refresh "https://download.opensuse.org/repositories/devel:/kubic/openSUSE_Leap_15.1/devel:kubic.repo"
$ sudo -E zypper -n --gpg-auto-import-keys install katacontainers
```
2. Decide which container manager to use and select the corresponding link that follows:
- [Kubernetes](../Developer-Guide.md#run-kata-containers-with-kubernetes)

View File

@@ -1,13 +0,0 @@
# Install Kata Containers on SLE
1. Install the Kata Containers components with the following commands:
```bash
$ source /etc/os-release
$ DISTRO_VERSION=$(sed "s/-/_/g" <<< "$VERSION")
$ sudo -E zypper addrepo --refresh "https://download.opensuse.org/repositories/devel:/kubic/SLE_${DISTRO_VERSION}_Backports/devel:kubic.repo"
$ sudo -E zypper -n --gpg-auto-import-keys install katacontainers
```
2. Decide which container manager to use and select the corresponding link that follows:
- [Kubernetes](../Developer-Guide.md#run-kata-containers-with-kubernetes)

View File

@@ -2,9 +2,6 @@
* [Install Kata Containers](#install-kata-containers)
* [Configure Kata Containers](#configure-kata-containers)
* [Integration with non-compatible shim v2 Container Engines](#integration-with-non-compatible-shim-v2-container-engines)
* [Integration with Docker](#integration-with-docker)
* [Integration with Podman](#integration-with-podman)
* [Integration with shim v2 Container Engines](#integration-with-shim-v2-container-engines)
* [Remove Kata Containers snap package](#remove-kata-containers-snap-package)
@@ -14,20 +11,7 @@
Kata Containers can be installed in any Linux distribution that supports
[snapd](https://docs.snapcraft.io/installing-snapd).
> NOTE: From Kata Containers 2.x, only the [Containerd Runtime V2 (Shim API)](https://github.com/containerd/containerd/tree/master/runtime/v2)
> is supported. Note that some container engines (`docker`, `podman`, etc.) may not
> be able to run Kata Containers 2.x.
Kata Containers 1.x is released through the *stable* channel while Kata Containers
2.x is available in the *candidate* channel.
Run the following command to install **Kata Containers 1.x**:
```sh
$ sudo snap install kata-containers --classic
```
Run the following command to install **Kata Containers 2.x**:
Run the following command to install **Kata Containers**:
```sh
$ sudo snap install kata-containers --candidate --classic
@@ -46,55 +30,6 @@ $ sudo cp /snap/kata-containers/current/usr/share/defaults/kata-containers/confi
$ $EDITOR /etc/kata-containers/configuration.toml
```
## Integration with non-compatible shim v2 Container Engines
At the time of writing this document, `docker` and `podman` **do not support Kata
Containers 2.x; therefore, Kata Containers 1.x must be used instead.**
The path to the runtime provided by the Kata Containers 1.x snap package is
`/snap/bin/kata-containers.runtime`; use it to run Kata Containers 1.x.
### Integration with Docker
`/etc/docker/daemon.json` is the configuration file for `docker`. Use the
following configuration to add a new runtime (`kata`) to `docker`.
```json
{
"runtimes": {
"kata": {
"path": "/snap/bin/kata-containers.runtime"
}
}
}
```
Once the above configuration has been applied, use the
following commands to restart `docker` and run Kata Containers 1.x.
```sh
$ sudo systemctl restart docker
$ docker run -ti --runtime kata busybox sh
```
### Integration with Podman
`/usr/share/containers/containers.conf` is the configuration file for `podman`,
add the following configuration in the `[engine.runtimes]` section.
```toml
kata = [
"/snap/bin/kata-containers.runtime"
]
```
Once the above configuration has been applied, use the following command to run
Kata Containers 1.x with `podman`:
```sh
$ sudo podman run -ti --runtime kata docker.io/library/busybox sh
```
## Integration with shim v2 Container Engines
The Container engine daemon (`cri-o`, `containerd`, etc) needs to be able to find the

View File

@@ -1,15 +0,0 @@
# Install Kata Containers on Ubuntu
1. Install the Kata Containers components with the following commands:
```bash
$ ARCH=$(arch)
$ BRANCH="${BRANCH:-master}"
$ sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/katacontainers:/releases:/${ARCH}:/${BRANCH}/xUbuntu_$(lsb_release -rs)/ /' > /etc/apt/sources.list.d/kata-containers.list"
$ curl -sL http://download.opensuse.org/repositories/home:/katacontainers:/releases:/${ARCH}:/${BRANCH}/xUbuntu_$(lsb_release -rs)/Release.key | sudo apt-key add -
$ sudo -E apt-get update
$ sudo -E apt-get -y install kata-runtime kata-proxy kata-shim
```
2. Decide which container manager to use and select the corresponding link that follows:
- [Kubernetes](../Developer-Guide.md#run-kata-containers-with-kubernetes)

View File

@@ -10,9 +10,6 @@ Currently, the instructions are based on the following links:
- https://docs.openstack.org/zun/latest/admin/clear-containers.html
- ../install/ubuntu-installation-guide.md
## Install Git to use with DevStack
```sh
@@ -54,7 +51,7 @@ $ zun delete test
## Install Kata Containers
Follow [these instructions](../install/ubuntu-installation-guide.md)
Follow [these instructions](../install/README.md)
to install the Kata Containers components.
## Update Docker with new Kata Containers runtime

src/agent/Cargo.lock (generated, 71 lines changed)
View File

@@ -147,14 +147,13 @@ checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd"
[[package]]
name = "cgroups-rs"
version = "0.2.2"
version = "0.2.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "52d1133681d746cc4807ad3b8005019af9299a086f61d5ed1de0e3a2000184f2"
checksum = "d4cec688ee0fcd143ffd7893ce2c9857bfc656eb1f2a27202244b72f08f5f8ed"
dependencies = [
"libc",
"log",
"nix 0.18.0",
"procinfo",
"nix 0.20.0",
"regex",
]
@@ -511,9 +510,9 @@ checksum = "e2abad23fbc42b3700f2f279844dc832adb2b2eb069b2df918f455c4e18cc646"
[[package]]
name = "libc"
version = "0.2.81"
version = "0.2.91"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1482821306169ec4d07f6aca392a4681f66c75c9918aa49641a2595db64053cb"
checksum = "8916b1f6ca17130ec6568feccee27c156ad12037880833a3b842a823236502e7"
[[package]]
name = "libflate"
@@ -704,18 +703,6 @@ dependencies = [
"void",
]
[[package]]
name = "nix"
version = "0.18.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "83450fe6a6142ddd95fb064b746083fc4ef1705fe81f64a64e1d4b39f54a1055"
dependencies = [
"bitflags",
"cc",
"cfg-if 0.1.10",
"libc",
]
[[package]]
name = "nix"
version = "0.19.1"
@@ -729,10 +716,16 @@ dependencies = [
]
[[package]]
name = "nom"
version = "2.2.1"
name = "nix"
version = "0.20.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cf51a729ecf40266a2368ad335a5fdde43471f545a967109cd62146ecf8b66ff"
checksum = "fa9b4819da1bc61c0ea48b63b7bc8604064dd43013e7cc325df098d49cd7c18a"
dependencies = [
"bitflags",
"cc",
"cfg-if 1.0.0",
"libc",
]
[[package]]
name = "ntapi"
@@ -932,18 +925,6 @@ dependencies = [
"libflate",
]
[[package]]
name = "procinfo"
version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6ab1427f3d2635891f842892dda177883dca0639e05fe66796a62c9d2f23b49c"
dependencies = [
"byteorder",
"libc",
"nom",
"rustc_version",
]
[[package]]
name = "prometheus"
version = "0.9.0"
@@ -1176,15 +1157,6 @@ version = "0.1.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6e3bad0ee36814ca07d7968269dd4b7ec89ec2da10c4bb613928d3077083c232"
[[package]]
name = "rustc_version"
version = "0.2.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "138e3e0acb6c9fb258b19b67cb8abd63c00679d2851805ea151465464fe9030a"
dependencies = [
"semver",
]
[[package]]
name = "rustjail"
version = "0.1.0"
@@ -1238,21 +1210,6 @@ version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d29ab0c6d3fc0ee92fe66e2d99f700eab17a8d57d1c1d3b748380fb20baa78cd"
[[package]]
name = "semver"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1d7eb9ef2c18661902cc47e535f9bc51b78acd254da71d375c2f6720d9a40403"
dependencies = [
"semver-parser",
]
[[package]]
name = "semver-parser"
version = "0.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "388a1df253eca08550bef6c72392cfe7c30914bf41df5269b68cbd6ff8f570a3"
[[package]]
name = "serde"
version = "1.0.118"

View File

@@ -45,7 +45,7 @@ tempfile = "3.1.0"
prometheus = { version = "0.9.0", features = ["process"] }
procfs = "0.7.9"
anyhow = "1.0.32"
cgroups = { package = "cgroups-rs", version = "0.2.2" }
cgroups = { package = "cgroups-rs", version = "0.2.5" }
[workspace]
members = [

View File

@@ -1 +0,0 @@
2.0.0

src/agent/VERSION (symbolic link, 1 line changed)
View File

@@ -0,0 +1 @@
../../VERSION

View File

@@ -15,7 +15,7 @@ Wants=kata-containers.target
StandardOutput=tty
Type=simple
ExecStart=@BINDIR@/@AGENT_NAME@
LimitNOFILE=infinity
LimitNOFILE=1048576
# ExecStop is required for static agent tracing; in all other scenarios
# the runtime handles shutting down the VM.
ExecStop=/bin/sync ; /usr/bin/systemctl --force poweroff

View File

@@ -8,7 +8,7 @@ extern crate serde;
extern crate serde_derive;
extern crate serde_json;
use libc::mode_t;
use libc::{self, mode_t};
use std::collections::HashMap;
mod serialize;
@@ -27,6 +27,10 @@ where
*d == T::default()
}
fn default_seccomp_errno() -> u32 {
libc::EPERM as u32
}
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct Spec {
#[serde(
@@ -54,7 +58,7 @@ pub struct Spec {
#[serde(skip_serializing_if = "Option::is_none")]
pub windows: Option<Windows<String>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub vm: Option<VM>,
pub vm: Option<Vm>,
}
impl Spec {
@@ -67,7 +71,7 @@ impl Spec {
}
}
pub type LinuxRlimit = POSIXRlimit;
pub type LinuxRlimit = PosixRlimit;
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct Process {
@@ -89,7 +93,7 @@ pub struct Process {
#[serde(default, skip_serializing_if = "Option::is_none")]
pub capabilities: Option<LinuxCapabilities>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub rlimits: Vec<POSIXRlimit>,
pub rlimits: Vec<PosixRlimit>,
#[serde(default, rename = "noNewPrivileges")]
pub no_new_privileges: bool,
#[serde(
@@ -195,9 +199,9 @@ pub struct Hooks {
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct Linux {
#[serde(default, rename = "uidMappings", skip_serializing_if = "Vec::is_empty")]
pub uid_mappings: Vec<LinuxIDMapping>,
pub uid_mappings: Vec<LinuxIdMapping>,
#[serde(default, rename = "gidMappings", skip_serializing_if = "Vec::is_empty")]
pub gid_mappings: Vec<LinuxIDMapping>,
pub gid_mappings: Vec<LinuxIdMapping>,
#[serde(default, skip_serializing_if = "HashMap::is_empty")]
pub sysctl: HashMap<String, String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
@@ -257,7 +261,7 @@ pub const UTSNAMESPACE: &str = "uts";
pub const CGROUPNAMESPACE: &str = "cgroup";
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxIDMapping {
pub struct LinuxIdMapping {
#[serde(default, rename = "containerID")]
pub container_id: u32,
#[serde(default, rename = "hostID")]
@@ -267,7 +271,7 @@ pub struct LinuxIDMapping {
}
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct POSIXRlimit {
pub struct PosixRlimit {
#[serde(default)]
pub r#type: String,
#[serde(default)]
@@ -293,7 +297,7 @@ pub struct LinuxInterfacePriority {
}
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxBlockIODevice {
pub struct LinuxBlockIoDevice {
#[serde(default)]
pub major: i64,
#[serde(default)]
@@ -303,7 +307,7 @@ pub struct LinuxBlockIODevice {
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxWeightDevice {
#[serde(flatten)]
pub blk: LinuxBlockIODevice,
pub blk: LinuxBlockIoDevice,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub weight: Option<u16>,
#[serde(
@@ -317,13 +321,13 @@ pub struct LinuxWeightDevice {
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxThrottleDevice {
#[serde(flatten)]
pub blk: LinuxBlockIODevice,
pub blk: LinuxBlockIoDevice,
#[serde(default)]
pub rate: u64,
}
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxBlockIO {
pub struct LinuxBlockIo {
#[serde(default, skip_serializing_if = "Option::is_none")]
pub weight: Option<u16>,
#[serde(
@@ -387,7 +391,7 @@ pub struct LinuxMemory {
}
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct LinuxCPU {
pub struct LinuxCpu {
#[serde(default, skip_serializing_if = "Option::is_none")]
pub shares: Option<u64>,
#[serde(default, skip_serializing_if = "Option::is_none")]
@@ -449,11 +453,11 @@ pub struct LinuxResources {
#[serde(default, skip_serializing_if = "Option::is_none")]
pub memory: Option<LinuxMemory>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub cpu: Option<LinuxCPU>,
pub cpu: Option<LinuxCpu>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub pids: Option<LinuxPids>,
#[serde(skip_serializing_if = "Option::is_none", rename = "blockIO")]
pub block_io: Option<LinuxBlockIO>,
pub block_io: Option<LinuxBlockIo>,
#[serde(
default,
skip_serializing_if = "Vec::is_empty",
@@ -513,7 +517,7 @@ pub struct Solaris {
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub anet: Vec<SolarisAnet>,
#[serde(default, skip_serializing_if = "Option::is_none", rename = "cappedCPU")]
pub capped_cpu: Option<SolarisCappedCPU>,
pub capped_cpu: Option<SolarisCappedCpu>,
#[serde(
default,
skip_serializing_if = "Option::is_none",
@@ -523,7 +527,7 @@ pub struct Solaris {
}
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct SolarisCappedCPU {
pub struct SolarisCappedCpu {
#[serde(default, skip_serializing_if = "String::is_empty")]
pub ncpus: String,
}
@@ -601,7 +605,7 @@ pub struct WindowsResources {
#[serde(default, skip_serializing_if = "Option::is_none")]
pub memory: Option<WindowsMemoryResources>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub cpu: Option<WindowsCPUResources>,
pub cpu: Option<WindowsCpuResources>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub storage: Option<WindowsStorageResources>,
}
@@ -613,7 +617,7 @@ pub struct WindowsMemoryResources {
}
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct WindowsCPUResources {
pub struct WindowsCpuResources {
#[serde(default, skip_serializing_if = "Option::is_none")]
pub count: Option<u64>,
#[serde(default, skip_serializing_if = "Option::is_none")]
@@ -671,14 +675,14 @@ pub struct WindowsHyperV {
}
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct VM {
pub hypervisor: VMHypervisor,
pub kernel: VMKernel,
pub image: VMImage,
pub struct Vm {
pub hypervisor: VmHypervisor,
pub kernel: VmKernel,
pub image: VmImage,
}
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct VMHypervisor {
pub struct VmHypervisor {
#[serde(default)]
pub path: String,
#[serde(default, skip_serializing_if = "String::is_empty")]
@@ -686,7 +690,7 @@ pub struct VMHypervisor {
}
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct VMKernel {
pub struct VmKernel {
#[serde(default)]
pub path: String,
#[serde(default, skip_serializing_if = "String::is_empty")]
@@ -696,7 +700,7 @@ pub struct VMKernel {
}
#[derive(Serialize, Deserialize, Debug, Default, Clone, PartialEq)]
pub struct VMImage {
pub struct VmImage {
#[serde(default)]
pub path: String,
#[serde(default)]
@@ -710,6 +714,8 @@ pub struct LinuxSeccomp {
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub architectures: Vec<Arch>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub flags: Vec<LinuxSeccompFlag>,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub syscalls: Vec<LinuxSyscall>,
}
@@ -733,14 +739,20 @@ pub const ARCHS390: &str = "SCMP_ARCH_S390";
pub const ARCHS390X: &str = "SCMP_ARCH_S390X";
pub const ARCHPARISC: &str = "SCMP_ARCH_PARISC";
pub const ARCHPARISC64: &str = "SCMP_ARCH_PARISC64";
pub const ARCHRISCV64: &str = "SCMP_ARCH_RISCV64";
pub type LinuxSeccompFlag = String;
pub type LinuxSeccompAction = String;
pub const ACTKILL: &str = "SCMP_ACT_KILL";
pub const ACTKILLPROCESS: &str = "SCMP_ACT_KILL_PROCESS";
pub const ACTKILLTHREAD: &str = "SCMP_ACT_KILL_THREAD";
pub const ACTTRAP: &str = "SCMP_ACT_TRAP";
pub const ACTERRNO: &str = "SCMP_ACT_ERRNO";
pub const ACTTRACE: &str = "SCMP_ACT_TRACE";
pub const ACTALLOW: &str = "SCMP_ACT_ALLOW";
pub const ACTLOG: &str = "SCMP_ACT_LOG";
pub type LinuxSeccompOperator = String;
@@ -770,6 +782,8 @@ pub struct LinuxSyscall {
pub names: Vec<String>,
#[serde(default, skip_serializing_if = "String::is_empty")]
pub action: LinuxSeccompAction,
#[serde(default = "default_seccomp_errno", rename = "errnoRet")]
pub errno_ret: u32,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub args: Vec<LinuxSeccompArg>,
}
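The hunk above introduces `default_seccomp_errno` together with a function-valued `#[serde(default = "...")]` attribute on the new `errnoRet` field, so seccomp rules that omit an errno fall back to `EPERM`. A minimal, self-contained sketch of that behavior (assuming the `serde`, `serde_json` and `libc` crates; the struct is trimmed to the relevant fields):

```rust
use serde::Deserialize;

fn default_seccomp_errno() -> u32 {
    libc::EPERM as u32 // EPERM == 1 on Linux
}

#[derive(Deserialize, Debug)]
struct Syscall {
    names: Vec<String>,
    action: String,
    // When "errnoRet" is absent from the JSON, serde calls the default fn.
    #[serde(default = "default_seccomp_errno", rename = "errnoRet")]
    errno_ret: u32,
}

fn main() {
    // No "errnoRet" key here, so errno_ret becomes EPERM.
    let s: Syscall =
        serde_json::from_str(r#"{"names":["chmod"],"action":"SCMP_ACT_ERRNO"}"#).unwrap();
    assert_eq!(s.errno_ret, libc::EPERM as u32);
    println!("{:?}", s);
}
```

With this default in place, existing OCI specs that predate `errnoRet` keep deserializing unchanged, which is the usual motivation for function-valued serde defaults.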
@@ -787,11 +801,11 @@ pub struct LinuxIntelRdt {
#[derive(Debug, Serialize, Deserialize, Copy, Clone, PartialEq)]
#[serde(rename_all = "lowercase")]
pub enum ContainerState {
CREATING,
CREATED,
RUNNING,
STOPPED,
PAUSED,
Creating,
Created,
Running,
Stopped,
Paused,
}
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq)]
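Since `ContainerState` already carries `#[serde(rename_all = "lowercase")]`, renaming the variants from `RUNNING`-style to `Running`-style is purely cosmetic at the serialization boundary; both spellings lowercase to the same wire strings. A quick sketch demonstrating this (assuming `serde` and `serde_json`):

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug, PartialEq)]
#[serde(rename_all = "lowercase")]
enum ContainerState {
    Creating,
    Created,
    Running,
    Stopped,
    Paused,
}

fn main() {
    // The PascalCase variant still serializes to the old lowercase name.
    assert_eq!(
        serde_json::to_string(&ContainerState::Running).unwrap(),
        "\"running\""
    );
    // And the old on-disk/on-wire strings still deserialize.
    let s: ContainerState = serde_json::from_str("\"paused\"").unwrap();
    assert_eq!(s, ContainerState::Paused);
    println!("round-trip OK");
}
```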
@@ -832,7 +846,7 @@ mod tests {
let expected = State {
version: "0.2.0".to_string(),
id: "oci-container1".to_string(),
status: ContainerState::RUNNING,
status: ContainerState::Running,
pid: 4422,
bundle: "/containers/redis".to_string(),
annotations: [("myKey".to_string(), "myValue".to_string())]
@@ -1257,12 +1271,12 @@ mod tests {
ambient: vec!["CAP_NET_BIND_SERVICE".to_string()],
}),
rlimits: vec![
crate::POSIXRlimit {
crate::PosixRlimit {
r#type: "RLIMIT_CORE".to_string(),
hard: 1024,
soft: 1024,
},
crate::POSIXRlimit {
crate::PosixRlimit {
r#type: "RLIMIT_NOFILE".to_string(),
hard: 1024,
soft: 1024,
@@ -1394,12 +1408,12 @@ mod tests {
.cloned()
.collect(),
linux: Some(crate::Linux {
uid_mappings: vec![crate::LinuxIDMapping {
uid_mappings: vec![crate::LinuxIdMapping {
container_id: 0,
host_id: 1000,
size: 32000,
}],
gid_mappings: vec![crate::LinuxIDMapping {
gid_mappings: vec![crate::LinuxIdMapping {
container_id: 0,
host_id: 1000,
size: 32000,
@@ -1444,7 +1458,7 @@ mod tests {
swappiness: Some(0),
disable_oom_killer: Some(false),
}),
cpu: Some(crate::LinuxCPU {
cpu: Some(crate::LinuxCpu {
shares: Some(1024),
quota: Some(1000000),
period: Some(500000),
@@ -1454,17 +1468,17 @@ mod tests {
mems: "0-7".to_string(),
}),
pids: Some(crate::LinuxPids { limit: 32771 }),
block_io: Some(crate::LinuxBlockIO {
block_io: Some(crate::LinuxBlockIo {
weight: Some(10),
leaf_weight: Some(10),
weight_device: vec![
crate::LinuxWeightDevice {
blk: crate::LinuxBlockIODevice { major: 8, minor: 0 },
blk: crate::LinuxBlockIoDevice { major: 8, minor: 0 },
weight: Some(500),
leaf_weight: Some(300),
},
crate::LinuxWeightDevice {
blk: crate::LinuxBlockIODevice {
blk: crate::LinuxBlockIoDevice {
major: 8,
minor: 16,
},
@@ -1473,13 +1487,13 @@ mod tests {
},
],
throttle_read_bps_device: vec![crate::LinuxThrottleDevice {
blk: crate::LinuxBlockIODevice { major: 8, minor: 0 },
blk: crate::LinuxBlockIoDevice { major: 8, minor: 0 },
rate: 600,
}],
throttle_write_bps_device: vec![],
throttle_read_iops_device: vec![],
throttle_write_iops_device: vec![crate::LinuxThrottleDevice {
blk: crate::LinuxBlockIODevice {
blk: crate::LinuxBlockIoDevice {
major: 8,
minor: 16,
},
@@ -1565,9 +1579,11 @@ mod tests {
seccomp: Some(crate::LinuxSeccomp {
default_action: "SCMP_ACT_ALLOW".to_string(),
architectures: vec!["SCMP_ARCH_X86".to_string(), "SCMP_ARCH_X32".to_string()],
flags: vec![],
syscalls: vec![crate::LinuxSyscall {
names: vec!["getcwd".to_string(), "chmod".to_string()],
action: "SCMP_ACT_ERRNO".to_string(),
errno_ret: crate::default_seccomp_errno(),
args: vec![],
}],
}),

View File

@@ -65,7 +65,7 @@ $GOPATH/src/github.com/kata-containers/kata-containers/src/agent/protocols/proto
}
if [ "$(basename $(pwd))" != "agent" ]; then
die "Please go to directory of protocols before execute this shell"
die "Please go to root directory of agent before execute this shell"
fi
# Protocol buffer files required to generate golang/rust bindings.

View File

@@ -32,7 +32,6 @@ service AgentService {
rpc ExecProcess(ExecProcessRequest) returns (google.protobuf.Empty);
rpc SignalProcess(SignalProcessRequest) returns (google.protobuf.Empty);
rpc WaitProcess(WaitProcessRequest) returns (WaitProcessResponse); // wait & reap like waitpid(2)
rpc ListProcesses(ListProcessesRequest) returns (ListProcessesResponse);
rpc UpdateContainer(UpdateContainerRequest) returns (google.protobuf.Empty);
rpc StatsContainer(StatsContainerRequest) returns (StatsContainerResponse);
rpc PauseContainer(PauseContainerRequest) returns (google.protobuf.Empty);
@@ -126,18 +125,6 @@ message WaitProcessResponse {
int32 status = 1;
}
// ListProcessesRequest contains the options used to list running processes inside the container
message ListProcessesRequest {
string container_id = 1;
string format = 2;
repeated string args = 3;
}
// ListProcessesResponse represents the list of running processes inside the container
message ListProcessesResponse {
bytes process_list = 1;
}
message UpdateContainerRequest {
string container_id = 1;
LinuxResources resources = 2;

View File

@@ -441,7 +441,8 @@ message LinuxInterfacePriority {
message LinuxSeccomp {
string DefaultAction = 1;
repeated string Architectures = 2;
repeated LinuxSyscall Syscalls = 3 [(gogoproto.nullable) = false];
repeated string Flags = 3;
repeated LinuxSyscall Syscalls = 4 [(gogoproto.nullable) = false];
}
message LinuxSeccompArg {
@@ -454,7 +455,10 @@ message LinuxSeccompArg {
message LinuxSyscall {
repeated string Names = 1;
string Action = 2;
repeated LinuxSeccompArg Args = 3 [(gogoproto.nullable) = false];
oneof ErrnoRet {
uint32 errnoret = 3;
}
repeated LinuxSeccompArg Args = 4 [(gogoproto.nullable) = false];
}
message LinuxIntelRdt {

View File

@@ -23,7 +23,7 @@ scan_fmt = "0.2"
regex = "1.1"
path-absolutize = "1.2.0"
anyhow = "1.0.32"
cgroups = { package = "cgroups-rs", version = "0.2.1" }
cgroups = { package = "cgroups-rs", version = "0.2.5" }
tempfile = "3.1.0"
rlimit = "0.5.3"

View File

@@ -24,7 +24,7 @@ use anyhow::{anyhow, Context, Result};
use libc::{self, pid_t};
use nix::errno::Errno;
use oci::{
LinuxBlockIO, LinuxCPU, LinuxDevice, LinuxDeviceCgroup, LinuxHugepageLimit, LinuxMemory,
LinuxBlockIo, LinuxCpu, LinuxDevice, LinuxDeviceCgroup, LinuxHugepageLimit, LinuxMemory,
LinuxNetwork, LinuxPids, LinuxResources,
};
@@ -272,7 +272,7 @@ fn set_hugepages_resources(
fn set_block_io_resources(
_cg: &cgroups::Cgroup,
blkio: &LinuxBlockIO,
blkio: &LinuxBlockIo,
res: &mut cgroups::Resources,
) {
info!(sl!(), "cgroup manager set block io");
@@ -302,7 +302,7 @@ fn set_block_io_resources(
build_blk_io_device_throttle_resource(&blkio.throttle_write_iops_device);
}
fn set_cpu_resources(cg: &cgroups::Cgroup, cpu: &LinuxCPU) -> Result<()> {
fn set_cpu_resources(cg: &cgroups::Cgroup, cpu: &LinuxCpu) -> Result<()> {
info!(sl!(), "cgroup manager set cpu");
let cpuset_controller: &CpuSetController = cg.controller_of().unwrap();
@@ -489,63 +489,61 @@ lazy_static! {
};
pub static ref DEFAULT_ALLOWED_DEVICES: Vec<LinuxDeviceCgroup> = {
let mut v = Vec::new();
vec![
// all mknod to all char devices
LinuxDeviceCgroup {
allow: true,
r#type: "c".to_string(),
major: Some(WILDCARD),
minor: Some(WILDCARD),
access: "m".to_string(),
},
// all mknod to all char devices
v.push(LinuxDeviceCgroup {
allow: true,
r#type: "c".to_string(),
major: Some(WILDCARD),
minor: Some(WILDCARD),
access: "m".to_string(),
});
// all mknod to all block devices
LinuxDeviceCgroup {
allow: true,
r#type: "b".to_string(),
major: Some(WILDCARD),
minor: Some(WILDCARD),
access: "m".to_string(),
},
// all mknod to all block devices
v.push(LinuxDeviceCgroup {
allow: true,
r#type: "b".to_string(),
major: Some(WILDCARD),
minor: Some(WILDCARD),
access: "m".to_string(),
});
// all read/write/mknod to char device /dev/console
LinuxDeviceCgroup {
allow: true,
r#type: "c".to_string(),
major: Some(5),
minor: Some(1),
access: "rwm".to_string(),
},
// all read/write/mknod to char device /dev/console
v.push(LinuxDeviceCgroup {
allow: true,
r#type: "c".to_string(),
major: Some(5),
minor: Some(1),
access: "rwm".to_string(),
});
// all read/write/mknod to char device /dev/pts/<N>
LinuxDeviceCgroup {
allow: true,
r#type: "c".to_string(),
major: Some(136),
minor: Some(WILDCARD),
access: "rwm".to_string(),
},
// all read/write/mknod to char device /dev/pts/<N>
v.push(LinuxDeviceCgroup {
allow: true,
r#type: "c".to_string(),
major: Some(136),
minor: Some(WILDCARD),
access: "rwm".to_string(),
});
// all read/write/mknod to char device /dev/ptmx
LinuxDeviceCgroup {
allow: true,
r#type: "c".to_string(),
major: Some(5),
minor: Some(2),
access: "rwm".to_string(),
},
// all read/write/mknod to char device /dev/ptmx
v.push(LinuxDeviceCgroup {
allow: true,
r#type: "c".to_string(),
major: Some(5),
minor: Some(2),
access: "rwm".to_string(),
});
// all read/write/mknod to char device /dev/net/tun
v.push(LinuxDeviceCgroup {
allow: true,
r#type: "c".to_string(),
major: Some(10),
minor: Some(200),
access: "rwm".to_string(),
});
v
// all read/write/mknod to char device /dev/net/tun
LinuxDeviceCgroup {
allow: true,
r#type: "c".to_string(),
major: Some(10),
minor: Some(200),
access: "rwm".to_string(),
},
]
};
}

View File

@@ -8,7 +8,7 @@ use eventfd::{eventfd, EfdFlags};
use nix::sys::eventfd;
use std::fs::{self, File};
use std::os::unix::io::{AsRawFd, FromRawFd};
use std::path::{Path, PathBuf};
use std::path::Path;
use crate::pipestream::PipeStream;
use futures::StreamExt as _;
@@ -35,7 +35,7 @@ pub async fn notify_oom(cid: &str, cg_dir: String) -> Result<Receiver<String>> {
// Flat keyed file format:
// KEY0 VAL0\n
// KEY1 VAL1\n
fn get_value_from_cgroup(path: &PathBuf, key: &str) -> Result<i64> {
fn get_value_from_cgroup(path: &Path, key: &str) -> Result<i64> {
let content = fs::read_to_string(path)?;
info!(
sl!(),
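Two things happen in this hunk: the parameter becomes `&Path` (the idiomatic borrowed form, since `&PathBuf` derefs to `&Path` anyway), and the function keeps parsing the flat keyed format described in the comment. A rough, self-contained sketch of that parsing logic, with the `slog` logging and error context simplified:

```rust
use std::fs;
use std::path::Path;

// Parse "KEY0 VAL0\nKEY1 VAL1\n" style files and return the value for `key`.
fn get_value_from_cgroup(path: &Path, key: &str) -> std::io::Result<i64> {
    let content = fs::read_to_string(path)?;
    for line in content.lines() {
        let mut fields = line.split_whitespace();
        if fields.next() == Some(key) {
            if let Some(v) = fields.next() {
                return Ok(v.parse().unwrap_or(0));
            }
        }
    }
    Ok(0) // key not present
}

fn main() {
    // Illustrative path; on a cgroup-v2 host this reads the oom_kill counter.
    match get_value_from_cgroup(Path::new("/sys/fs/cgroup/memory.events"), "oom_kill") {
        Ok(v) => println!("oom_kill = {}", v),
        Err(e) => eprintln!("read failed: {}", e),
    }
}
```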
@@ -117,12 +117,12 @@ async fn register_memory_event_v2(
return;
}
}
}
// When a cgroup is destroyed, an event is sent to eventfd.
// So if the control path is gone, return instead of notifying.
if !Path::new(&event_control_path).exists() {
return;
// When a cgroup is destroyed, an event is sent to eventfd.
// So if the control path is gone, return instead of notifying.
if !Path::new(&event_control_path).exists() {
return;
}
}
});

View File

@@ -54,6 +54,8 @@ pub struct Seccomp {
#[serde(default)]
architectures: Vec<String>,
#[serde(default)]
flags: Vec<String>,
#[serde(default)]
syscalls: Vec<Syscall>,
}
@@ -74,9 +76,11 @@ pub struct Arg {
#[derive(Serialize, Deserialize, Debug)]
pub struct Syscall {
#[serde(default, skip_serializing_if = "String::is_empty")]
name: String,
names: String,
#[serde(default)]
action: Action,
#[serde(default, rename = "errnoRet")]
errno_ret: u32,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
args: Vec<Arg>,
}

View File

@@ -5,7 +5,7 @@
use anyhow::{anyhow, Context, Result};
use libc::pid_t;
use oci::{ContainerState, LinuxDevice, LinuxIDMapping};
use oci::{ContainerState, LinuxDevice, LinuxIdMapping};
use oci::{Hook, Linux, LinuxNamespace, LinuxResources, Spec};
use std::clone::Clone;
use std::ffi::{CStr, CString};
@@ -48,6 +48,7 @@ use oci::State as OCIState;
use std::collections::HashMap;
use std::os::unix::io::FromRawFd;
use std::str::FromStr;
use std::sync::Arc;
use slog::{info, o, Logger};
@@ -57,6 +58,7 @@ use crate::sync_with_async::{read_async, write_async};
use async_trait::async_trait;
use rlimit::{setrlimit, Resource, Rlim};
use tokio::io::AsyncBufReadExt;
use tokio::sync::Mutex;
use crate::utils;
@@ -83,8 +85,8 @@ pub struct ContainerStatus {
impl ContainerStatus {
fn new() -> Self {
ContainerStatus {
pre_status: ContainerState::CREATED,
cur_status: ContainerState::CREATED,
pre_status: ContainerState::Created,
cur_status: ContainerState::Created,
}
}
@@ -106,6 +108,9 @@ pub type Config = CreateOpts;
type NamespaceType = String;
lazy_static! {
// This locker ensures the child exit signal will be received by the right receiver.
pub static ref WAIT_PID_LOCKER: Arc<Mutex<bool>> = Arc::new(Mutex::new(false));
static ref NAMESPACES: HashMap<&'static str, CloneFlags> = {
let mut m = HashMap::new();
m.insert("user", CloneFlags::CLONE_NEWUSER);
@@ -132,62 +137,62 @@ lazy_static! {
};
pub static ref DEFAULT_DEVICES: Vec<LinuxDevice> = {
let mut v = Vec::new();
v.push(LinuxDevice {
path: "/dev/null".to_string(),
r#type: "c".to_string(),
major: 1,
minor: 3,
file_mode: Some(0o666),
uid: Some(0xffffffff),
gid: Some(0xffffffff),
});
v.push(LinuxDevice {
path: "/dev/zero".to_string(),
r#type: "c".to_string(),
major: 1,
minor: 5,
file_mode: Some(0o666),
uid: Some(0xffffffff),
gid: Some(0xffffffff),
});
v.push(LinuxDevice {
path: "/dev/full".to_string(),
r#type: String::from("c"),
major: 1,
minor: 7,
file_mode: Some(0o666),
uid: Some(0xffffffff),
gid: Some(0xffffffff),
});
v.push(LinuxDevice {
path: "/dev/tty".to_string(),
r#type: "c".to_string(),
major: 5,
minor: 0,
file_mode: Some(0o666),
uid: Some(0xffffffff),
gid: Some(0xffffffff),
});
v.push(LinuxDevice {
path: "/dev/urandom".to_string(),
r#type: "c".to_string(),
major: 1,
minor: 9,
file_mode: Some(0o666),
uid: Some(0xffffffff),
gid: Some(0xffffffff),
});
v.push(LinuxDevice {
path: "/dev/random".to_string(),
r#type: "c".to_string(),
major: 1,
minor: 8,
file_mode: Some(0o666),
uid: Some(0xffffffff),
gid: Some(0xffffffff),
});
v
vec![
LinuxDevice {
path: "/dev/null".to_string(),
r#type: "c".to_string(),
major: 1,
minor: 3,
file_mode: Some(0o666),
uid: Some(0xffffffff),
gid: Some(0xffffffff),
},
LinuxDevice {
path: "/dev/zero".to_string(),
r#type: "c".to_string(),
major: 1,
minor: 5,
file_mode: Some(0o666),
uid: Some(0xffffffff),
gid: Some(0xffffffff),
},
LinuxDevice {
path: "/dev/full".to_string(),
r#type: String::from("c"),
major: 1,
minor: 7,
file_mode: Some(0o666),
uid: Some(0xffffffff),
gid: Some(0xffffffff),
},
LinuxDevice {
path: "/dev/tty".to_string(),
r#type: "c".to_string(),
major: 5,
minor: 0,
file_mode: Some(0o666),
uid: Some(0xffffffff),
gid: Some(0xffffffff),
},
LinuxDevice {
path: "/dev/urandom".to_string(),
r#type: "c".to_string(),
major: 1,
minor: 9,
file_mode: Some(0o666),
uid: Some(0xffffffff),
gid: Some(0xffffffff),
},
LinuxDevice {
path: "/dev/random".to_string(),
r#type: "c".to_string(),
major: 1,
minor: 8,
file_mode: Some(0o666),
uid: Some(0xffffffff),
gid: Some(0xffffffff),
},
]
};
}
@@ -255,7 +260,7 @@ pub struct State {
}
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct SyncPC {
pub struct SyncPc {
#[serde(default)]
pid: pid_t,
}
@@ -268,7 +273,7 @@ pub trait Container: BaseContainer {
impl Container for LinuxContainer {
fn pause(&mut self) -> Result<()> {
let status = self.status();
if status != ContainerState::RUNNING && status != ContainerState::CREATED {
if status != ContainerState::Running && status != ContainerState::Created {
return Err(anyhow!(
"failed to pause container: current status is: {:?}",
status
@@ -281,7 +286,7 @@ impl Container for LinuxContainer {
.unwrap()
.freeze(FreezerState::Frozen)?;
self.status.transition(ContainerState::PAUSED);
self.status.transition(ContainerState::Paused);
return Ok(());
}
Err(anyhow!("failed to get container's cgroup manager"))
@@ -289,7 +294,7 @@ impl Container for LinuxContainer {
fn resume(&mut self) -> Result<()> {
let status = self.status();
if status != ContainerState::PAUSED {
if status != ContainerState::Paused {
return Err(anyhow!("container status is: {:?}, not paused", status));
}
@@ -299,7 +304,7 @@ impl Container for LinuxContainer {
.unwrap()
.freeze(FreezerState::Thawed)?;
self.status.transition(ContainerState::RUNNING);
self.status.transition(ContainerState::Running);
return Ok(());
}
Err(anyhow!("failed to get container's cgroup manager"))
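The renamed variants also flow through the pause/resume checks above: only `Running` or `Created` containers may be paused, and only `Paused` ones resumed. An illustrative, dependency-free sketch of that state machine, with the cgroup freezer calls stubbed out as comments:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum ContainerState {
    Created,
    Running,
    Paused,
    Stopped,
}

struct Container {
    status: ContainerState,
}

impl Container {
    fn pause(&mut self) -> Result<(), String> {
        // Only Running or Created containers may be frozen.
        if self.status != ContainerState::Running && self.status != ContainerState::Created {
            return Err(format!("failed to pause: current status is {:?}", self.status));
        }
        // (real code: cgroup_manager.freeze(FreezerState::Frozen)?)
        self.status = ContainerState::Paused;
        Ok(())
    }

    fn resume(&mut self) -> Result<(), String> {
        if self.status != ContainerState::Paused {
            return Err(format!("container status is {:?}, not paused", self.status));
        }
        // (real code: cgroup_manager.freeze(FreezerState::Thawed)?)
        self.status = ContainerState::Running;
        Ok(())
    }
}

fn main() {
    let mut c = Container { status: ContainerState::Running };
    c.pause().unwrap();
    assert!(c.pause().is_err()); // already paused
    c.resume().unwrap();
    println!("final state: {:?}", c.status);
}
```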
@@ -634,12 +639,12 @@ fn do_init_child(cwfd: RawFd) -> Result<()> {
env::set_var(v[0], v[1]);
}
// set the "HOME" env getting from "/etc/passwd"
// set the "HOME" env getting from "/etc/passwd", if
// there's no uid entry in /etc/passwd, set "/" as the
// home env.
if env::var_os(HOME_ENV_KEY).is_none() {
match utils::home_dir(guser.uid) {
Ok(home_dir) => env::set_var(HOME_ENV_KEY, home_dir),
Err(e) => log_child!(cfd_log, "failed to get home dir: {:?}", e),
}
let home_dir = utils::home_dir(guser.uid).unwrap_or_else(|_| String::from("/"));
env::set_var(HOME_ENV_KEY, home_dir);
}
let exec_file = Path::new(&args[0]);
@@ -734,7 +739,7 @@ impl BaseContainer for LinuxContainer {
};
let status = self.status();
let pid = if status != ContainerState::STOPPED {
let pid = if status != ContainerState::Stopped {
self.init_process_pid
} else {
0
@@ -908,7 +913,7 @@ impl BaseContainer for LinuxContainer {
child = child.env(PIDNS_FD, format!("{}", pidns.unwrap()));
}
let child = child.spawn()?;
child.spawn()?;
unistd::close(crfd)?;
unistd::close(cwfd)?;
@@ -964,19 +969,6 @@ impl BaseContainer for LinuxContainer {
self.created = SystemTime::now();
// create the pipes for notify process exited
let (exit_pipe_r, exit_pipe_w) = unistd::pipe2(OFlag::O_CLOEXEC)
.context("failed to create pipe")
.map_err(|e| {
let _ = signal::kill(Pid::from_raw(child.id() as i32), Some(Signal::SIGKILL))
.map_err(|e| warn!(logger, "signal::kill creating pipe {:?}", e));
e
})?;
p.exit_pipe_w = Some(exit_pipe_w);
p.exit_pipe_r = Some(exit_pipe_r);
if p.init {
let spec = self.config.spec.as_mut().unwrap();
update_namespaces(&self.logger, spec, p.pid)?;
@@ -997,7 +989,7 @@ impl BaseContainer for LinuxContainer {
if init {
self.exec()?;
self.status.transition(ContainerState::RUNNING);
self.status.transition(ContainerState::Running);
}
Ok(())
@@ -1019,7 +1011,7 @@ impl BaseContainer for LinuxContainer {
}
}
self.status.transition(ContainerState::STOPPED);
self.status.transition(ContainerState::Stopped);
mount::umount2(
spec.root.as_ref().unwrap().path.as_str(),
MntFlags::MNT_DETACH,
@@ -1055,7 +1047,7 @@ impl BaseContainer for LinuxContainer {
.unwrap()
.as_secs();
self.status.transition(ContainerState::RUNNING);
self.status.transition(ContainerState::Running);
unistd::close(fd)?;
Ok(())
@@ -1302,7 +1294,7 @@ async fn join_namespaces(
Ok(())
}
fn write_mappings(logger: &Logger, path: &str, maps: &[LinuxIDMapping]) -> Result<()> {
fn write_mappings(logger: &Logger, path: &str, maps: &[LinuxIdMapping]) -> Result<()> {
let data = maps
.iter()
.filter(|m| m.size != 0)
@@ -1480,6 +1472,8 @@ async fn execute_hook(logger: &Logger, h: &Hook, st: &OCIState) -> Result<()> {
})
.collect();
// Avoid the exit signal being reaped by the global reaper.
let _wait_locker = WAIT_PID_LOCKER.lock().await;
let mut child = tokio::process::Command::new(path)
.args(args.iter())
.envs(env.iter())
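The new `WAIT_PID_LOCKER` guard is held across spawning and (further down) waiting on the hook child, so a concurrent SIGCHLD reaper that takes the same lock before calling `waitpid(-1)` cannot steal this child's exit status. A hypothetical sketch of the locking idea (assuming `lazy_static` plus `tokio` with the `process`, `sync`, `macros` and `rt-multi-thread` features):

```rust
use std::sync::Arc;

use lazy_static::lazy_static;
use tokio::sync::Mutex;

lazy_static! {
    // Global async lock shared with the SIGCHLD reaper task.
    static ref WAIT_PID_LOCKER: Arc<Mutex<bool>> = Arc::new(Mutex::new(false));
}

async fn run_hook(path: &str) -> std::io::Result<i32> {
    // Hold the locker across spawn + wait; a reaper that takes the same
    // lock before reaping cannot consume this child's exit status.
    let _guard = WAIT_PID_LOCKER.lock().await;
    let mut child = tokio::process::Command::new(path).spawn()?;
    let status = child.wait().await?;
    Ok(status.code().unwrap_or(-1))
}

#[tokio::main]
async fn main() -> std::io::Result<()> {
    println!("hook exited with {}", run_hook("/bin/true").await?);
    Ok(())
}
```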
@@ -1528,28 +1522,23 @@ async fn execute_hook(logger: &Logger, h: &Hook, st: &OCIState) -> Result<()> {
match child.wait().await {
Ok(exit) => {
let code = match exit.code() {
Some(c) => c,
None => {
return Err(anyhow!("hook exit status has no status code"));
}
};
let code = exit
.code()
.ok_or_else(|| anyhow!("hook exit status has no status code"))?;
if code == 0 {
debug!(logger, "hook {} exit status is 0", &path);
return Ok(());
} else {
if code != 0 {
error!(logger, "hook {} exit status is {}", &path, code);
return Err(anyhow!(nix::Error::from_errno(Errno::UnknownErrno)));
}
debug!(logger, "hook {} exit status is 0", &path);
Ok(())
}
Err(e) => {
return Err(anyhow!(
"wait child error: {} {}",
e,
e.raw_os_error().unwrap()
));
}
Err(e) => Err(anyhow!(
"wait child error: {} {}",
e,
e.raw_os_error().unwrap()
)),
}
});
@@ -1588,7 +1577,7 @@ mod tests {
&OCIState {
version: "1.2.3".to_string(),
id: "321".to_string(),
status: ContainerState::RUNNING,
status: ContainerState::Running,
pid: 2,
bundle: "".to_string(),
annotations: Default::default(),
@@ -1611,7 +1600,7 @@ mod tests {
&OCIState {
version: "1.2.3".to_string(),
id: "321".to_string(),
status: ContainerState::RUNNING,
status: ContainerState::Running,
pid: 2,
bundle: "".to_string(),
annotations: Default::default(),
@@ -1630,10 +1619,10 @@ mod tests {
fn test_status_transtition() {
let mut status = ContainerStatus::new();
let status_table: [ContainerState; 4] = [
ContainerState::CREATED,
ContainerState::RUNNING,
ContainerState::PAUSED,
ContainerState::STOPPED,
ContainerState::Created,
ContainerState::Running,
ContainerState::Paused,
ContainerState::Stopped,
];
for s in status_table.iter() {
@@ -1770,7 +1759,7 @@ mod tests {
fn test_linuxcontainer_pause_bad_status() {
let ret = new_linux_container_and_then(|mut c: LinuxContainer| {
// Change state to pause, c.pause() should fail
c.status.transition(ContainerState::PAUSED);
c.status.transition(ContainerState::Paused);
c.pause().map_err(|e| anyhow!(e))
});
@@ -1802,7 +1791,7 @@ mod tests {
fn test_linuxcontainer_resume_bad_status() {
let ret = new_linux_container_and_then(|mut c: LinuxContainer| {
// Change state to created, c.resume() should fail
c.status.transition(ContainerState::CREATED);
c.status.transition(ContainerState::Created);
c.resume().map_err(|e| anyhow!(e))
});
@@ -1813,7 +1802,7 @@ mod tests {
#[test]
fn test_linuxcontainer_resume_cgroupmgr_is_none() {
let ret = new_linux_container_and_then(|mut c: LinuxContainer| {
c.status.transition(ContainerState::PAUSED);
c.status.transition(ContainerState::Paused);
c.cgroup_manager = None;
c.resume().map_err(|e| anyhow!(e))
});
@@ -1826,7 +1815,7 @@ mod tests {
let ret = new_linux_container_and_then(|mut c: LinuxContainer| {
c.cgroup_manager = FsManager::new("").ok();
// Change status to paused, this way we can resume it
c.status.transition(ContainerState::PAUSED);
c.status.transition(ContainerState::Paused);
c.resume().map_err(|e| anyhow!(e))
});

View File

@@ -58,24 +58,16 @@ pub mod validator;
// pub mod user;
//pub mod intelrdt;
// construct ociSpec from grpcSpec, which is needed for hook
// execution, since hooks read config.json
use oci::{
Box as ociBox, Hooks as ociHooks, Linux as ociLinux, LinuxCapabilities as ociLinuxCapabilities,
Mount as ociMount, POSIXRlimit as ociPOSIXRlimit, Process as ociProcess, Root as ociRoot,
Spec as ociSpec, User as ociUser,
};
use protocols::oci::{
Hooks as grpcHooks, Linux as grpcLinux, Mount as grpcMount, Process as grpcProcess,
Root as grpcRoot, Spec as grpcSpec,
};
use std::collections::HashMap;
pub fn process_grpc_to_oci(p: &grpcProcess) -> ociProcess {
use protocols::oci as grpc;
// construct ociSpec from grpc::Spec, which is needed for hook
// execution. since hooks read config.json
pub fn process_grpc_to_oci(p: &grpc::Process) -> oci::Process {
let console_size = if p.ConsoleSize.is_some() {
let c = p.ConsoleSize.as_ref().unwrap();
Some(ociBox {
Some(oci::Box {
height: c.Height,
width: c.Width,
})
@@ -85,14 +77,14 @@ pub fn process_grpc_to_oci(p: &grpcProcess) -> ociProcess {
let user = if p.User.is_some() {
let u = p.User.as_ref().unwrap();
ociUser {
oci::User {
uid: u.UID,
gid: u.GID,
additional_gids: u.AdditionalGids.clone(),
username: u.Username.clone(),
}
} else {
ociUser {
oci::User {
uid: 0,
gid: 0,
additional_gids: vec![],
@@ -103,7 +95,7 @@ pub fn process_grpc_to_oci(p: &grpcProcess) -> ociProcess {
let capabilities = if p.Capabilities.is_some() {
let cap = p.Capabilities.as_ref().unwrap();
Some(ociLinuxCapabilities {
Some(oci::LinuxCapabilities {
bounding: cap.Bounding.clone().into_vec(),
effective: cap.Effective.clone().into_vec(),
inheritable: cap.Inheritable.clone().into_vec(),
@@ -117,7 +109,7 @@ pub fn process_grpc_to_oci(p: &grpcProcess) -> ociProcess {
let rlimits = {
let mut r = Vec::new();
for lm in p.Rlimits.iter() {
r.push(ociPOSIXRlimit {
r.push(oci::PosixRlimit {
r#type: lm.Type.clone(),
hard: lm.Hard,
soft: lm.Soft,
@@ -126,7 +118,7 @@ pub fn process_grpc_to_oci(p: &grpcProcess) -> ociProcess {
r
};
ociProcess {
oci::Process {
terminal: p.Terminal,
console_size,
user,
@@ -142,15 +134,15 @@ pub fn process_grpc_to_oci(p: &grpcProcess) -> ociProcess {
}
}
fn root_grpc_to_oci(root: &grpcRoot) -> ociRoot {
ociRoot {
fn root_grpc_to_oci(root: &grpc::Root) -> oci::Root {
oci::Root {
path: root.Path.clone(),
readonly: root.Readonly,
}
}
fn mount_grpc_to_oci(m: &grpcMount) -> ociMount {
ociMount {
fn mount_grpc_to_oci(m: &grpc::Mount) -> oci::Mount {
oci::Mount {
destination: m.destination.clone(),
r#type: m.field_type.clone(),
source: m.source.clone(),
@@ -158,13 +150,12 @@ fn mount_grpc_to_oci(m: &grpcMount) -> ociMount {
}
}
use oci::Hook as ociHook;
use protocols::oci::Hook as grpcHook;
fn hook_grpc_to_oci(h: &[grpcHook]) -> Vec<ociHook> {
fn hook_grpc_to_oci(h: &[grpcHook]) -> Vec<oci::Hook> {
let mut r = Vec::new();
for e in h.iter() {
r.push(ociHook {
r.push(oci::Hook {
path: e.Path.clone(),
args: e.Args.clone().into_vec(),
env: e.Env.clone().into_vec(),
@@ -174,39 +165,29 @@ fn hook_grpc_to_oci(h: &[grpcHook]) -> Vec<ociHook> {
r
}
fn hooks_grpc_to_oci(h: &grpcHooks) -> ociHooks {
fn hooks_grpc_to_oci(h: &grpc::Hooks) -> oci::Hooks {
let prestart = hook_grpc_to_oci(h.Prestart.as_ref());
let poststart = hook_grpc_to_oci(h.Poststart.as_ref());
let poststop = hook_grpc_to_oci(h.Poststop.as_ref());
ociHooks {
oci::Hooks {
prestart,
poststart,
poststop,
}
}
use oci::{
LinuxDevice as ociLinuxDevice, LinuxIDMapping as ociLinuxIDMapping,
LinuxIntelRdt as ociLinuxIntelRdt, LinuxNamespace as ociLinuxNamespace,
LinuxResources as ociLinuxResources, LinuxSeccomp as ociLinuxSeccomp,
};
use protocols::oci::{
LinuxIDMapping as grpcLinuxIDMapping, LinuxResources as grpcLinuxResources,
LinuxSeccomp as grpcLinuxSeccomp,
};
fn idmap_grpc_to_oci(im: &grpcLinuxIDMapping) -> ociLinuxIDMapping {
ociLinuxIDMapping {
fn idmap_grpc_to_oci(im: &grpc::LinuxIDMapping) -> oci::LinuxIdMapping {
oci::LinuxIdMapping {
container_id: im.ContainerID,
host_id: im.HostID,
size: im.Size,
}
}
fn idmaps_grpc_to_oci(ims: &[grpcLinuxIDMapping]) -> Vec<ociLinuxIDMapping> {
fn idmaps_grpc_to_oci(ims: &[grpc::LinuxIDMapping]) -> Vec<oci::LinuxIdMapping> {
let mut r = Vec::new();
for im in ims.iter() {
r.push(idmap_grpc_to_oci(im));
@@ -214,24 +195,13 @@ fn idmaps_grpc_to_oci(ims: &[grpcLinuxIDMapping]) -> Vec<ociLinuxIDMapping> {
r
}
use oci::{
LinuxBlockIO as ociLinuxBlockIO, LinuxBlockIODevice as ociLinuxBlockIODevice,
LinuxCPU as ociLinuxCPU, LinuxDeviceCgroup as ociLinuxDeviceCgroup,
LinuxHugepageLimit as ociLinuxHugepageLimit,
LinuxInterfacePriority as ociLinuxInterfacePriority, LinuxMemory as ociLinuxMemory,
LinuxNetwork as ociLinuxNetwork, LinuxPids as ociLinuxPids,
LinuxThrottleDevice as ociLinuxThrottleDevice, LinuxWeightDevice as ociLinuxWeightDevice,
};
use protocols::oci::{
LinuxBlockIO as grpcLinuxBlockIO, LinuxThrottleDevice as grpcLinuxThrottleDevice,
LinuxWeightDevice as grpcLinuxWeightDevice,
};
fn throttle_devices_grpc_to_oci(tds: &[grpcLinuxThrottleDevice]) -> Vec<ociLinuxThrottleDevice> {
fn throttle_devices_grpc_to_oci(
tds: &[grpc::LinuxThrottleDevice],
) -> Vec<oci::LinuxThrottleDevice> {
let mut r = Vec::new();
for td in tds.iter() {
r.push(ociLinuxThrottleDevice {
blk: ociLinuxBlockIODevice {
r.push(oci::LinuxThrottleDevice {
blk: oci::LinuxBlockIoDevice {
major: td.Major,
minor: td.Minor,
},
@@ -241,11 +211,11 @@ fn throttle_devices_grpc_to_oci(tds: &[grpcLinuxThrottleDevice]) -> Vec<ociLinux
r
}
fn weight_devices_grpc_to_oci(wds: &[grpcLinuxWeightDevice]) -> Vec<ociLinuxWeightDevice> {
fn weight_devices_grpc_to_oci(wds: &[grpc::LinuxWeightDevice]) -> Vec<oci::LinuxWeightDevice> {
let mut r = Vec::new();
for wd in wds.iter() {
r.push(ociLinuxWeightDevice {
blk: ociLinuxBlockIODevice {
r.push(oci::LinuxWeightDevice {
blk: oci::LinuxBlockIoDevice {
major: wd.Major,
minor: wd.Minor,
},
@@ -256,7 +226,7 @@ fn weight_devices_grpc_to_oci(wds: &[grpcLinuxWeightDevice]) -> Vec<ociLinuxWeig
r
}
fn blockio_grpc_to_oci(blk: &grpcLinuxBlockIO) -> ociLinuxBlockIO {
fn blockio_grpc_to_oci(blk: &grpc::LinuxBlockIO) -> oci::LinuxBlockIo {
let weight_device = weight_devices_grpc_to_oci(blk.WeightDevice.as_ref());
let throttle_read_bps_device = throttle_devices_grpc_to_oci(blk.ThrottleReadBpsDevice.as_ref());
let throttle_write_bps_device =
@@ -266,7 +236,7 @@ fn blockio_grpc_to_oci(blk: &grpcLinuxBlockIO) -> ociLinuxBlockIO {
let throttle_write_iops_device =
throttle_devices_grpc_to_oci(blk.ThrottleWriteIOPSDevice.as_ref());
ociLinuxBlockIO {
oci::LinuxBlockIo {
weight: Some(blk.Weight as u16),
leaf_weight: Some(blk.LeafWeight as u16),
weight_device,
@@ -277,7 +247,7 @@ fn blockio_grpc_to_oci(blk: &grpcLinuxBlockIO) -> ociLinuxBlockIO {
}
}
pub fn resources_grpc_to_oci(res: &grpcLinuxResources) -> ociLinuxResources {
pub fn resources_grpc_to_oci(res: &grpc::LinuxResources) -> oci::LinuxResources {
let devices = {
let mut d = Vec::new();
for dev in res.Devices.iter() {
@@ -292,7 +262,7 @@ pub fn resources_grpc_to_oci(res: &grpcLinuxResources) -> ociLinuxResources {
} else {
Some(dev.Minor)
};
d.push(ociLinuxDeviceCgroup {
d.push(oci::LinuxDeviceCgroup {
allow: dev.Allow,
r#type: dev.Type.clone(),
major,
@@ -305,7 +275,7 @@ pub fn resources_grpc_to_oci(res: &grpcLinuxResources) -> ociLinuxResources {
let memory = if res.Memory.is_some() {
let mem = res.Memory.as_ref().unwrap();
Some(ociLinuxMemory {
Some(oci::LinuxMemory {
limit: Some(mem.Limit),
reservation: Some(mem.Reservation),
swap: Some(mem.Swap),
@@ -320,7 +290,7 @@ pub fn resources_grpc_to_oci(res: &grpcLinuxResources) -> ociLinuxResources {
let cpu = if res.CPU.is_some() {
let c = res.CPU.as_ref().unwrap();
Some(ociLinuxCPU {
Some(oci::LinuxCpu {
shares: Some(c.Shares),
quota: Some(c.Quota),
period: Some(c.Period),
@@ -335,7 +305,7 @@ pub fn resources_grpc_to_oci(res: &grpcLinuxResources) -> ociLinuxResources {
let pids = if res.Pids.is_some() {
let p = res.Pids.as_ref().unwrap();
Some(ociLinuxPids { limit: p.Limit })
Some(oci::LinuxPids { limit: p.Limit })
} else {
None
};
@@ -351,7 +321,7 @@ pub fn resources_grpc_to_oci(res: &grpcLinuxResources) -> ociLinuxResources {
let hugepage_limits = {
let mut r = Vec::new();
for hl in res.HugepageLimits.iter() {
r.push(ociLinuxHugepageLimit {
r.push(oci::LinuxHugepageLimit {
page_size: hl.Pagesize.clone(),
limit: hl.Limit,
});
@@ -364,14 +334,14 @@ pub fn resources_grpc_to_oci(res: &grpcLinuxResources) -> ociLinuxResources {
let priorities = {
let mut r = Vec::new();
for pr in net.Priorities.iter() {
r.push(ociLinuxInterfacePriority {
r.push(oci::LinuxInterfacePriority {
name: pr.Name.clone(),
priority: pr.Priority,
});
}
r
};
Some(ociLinuxNetwork {
Some(oci::LinuxNetwork {
class_id: Some(net.ClassID),
priorities,
})
@@ -379,7 +349,7 @@ pub fn resources_grpc_to_oci(res: &grpcLinuxResources) -> ociLinuxResources {
None
};
ociLinuxResources {
oci::LinuxResources {
devices,
memory,
cpu,
@@ -391,17 +361,22 @@ pub fn resources_grpc_to_oci(res: &grpcLinuxResources) -> ociLinuxResources {
}
}
use oci::{LinuxSeccompArg as ociLinuxSeccompArg, LinuxSyscall as ociLinuxSyscall};
fn seccomp_grpc_to_oci(sec: &grpcLinuxSeccomp) -> ociLinuxSeccomp {
fn seccomp_grpc_to_oci(sec: &grpc::LinuxSeccomp) -> oci::LinuxSeccomp {
let syscalls = {
let mut r = Vec::new();
for sys in sec.Syscalls.iter() {
let mut args = Vec::new();
let errno_ret = if sys.has_errnoret() {
sys.get_errnoret()
} else {
libc::EPERM as u32
};
for arg in sys.Args.iter() {
args.push(ociLinuxSeccompArg {
args.push(oci::LinuxSeccompArg {
index: arg.Index as u32,
value: arg.Value,
value_two: arg.ValueTwo,
@@ -409,23 +384,25 @@ fn seccomp_grpc_to_oci(sec: &grpcLinuxSeccomp) -> ociLinuxSeccomp {
});
}
r.push(ociLinuxSyscall {
r.push(oci::LinuxSyscall {
names: sys.Names.clone().into_vec(),
action: sys.Action.clone(),
errno_ret,
args,
});
}
r
};
ociLinuxSeccomp {
oci::LinuxSeccomp {
default_action: sec.DefaultAction.clone(),
architectures: sec.Architectures.clone().into_vec(),
flags: sec.Flags.clone().into_vec(),
syscalls,
}
}
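// Note (a sketch, not part of the commit): the fallback above means a syscall
// rule that arrives over gRPC without an errnoret value fails with
//   libc::EPERM as u32  // == 1, "Operation not permitted"
// which is the conventional errno for denied syscalls.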
fn linux_grpc_to_oci(l: &grpcLinux) -> ociLinux {
fn linux_grpc_to_oci(l: &grpc::Linux) -> oci::Linux {
let uid_mappings = idmaps_grpc_to_oci(l.UIDMappings.as_ref());
let gid_mappings = idmaps_grpc_to_oci(l.GIDMappings.as_ref());
@@ -445,7 +422,7 @@ fn linux_grpc_to_oci(l: &grpcLinux) -> ociLinux {
let mut r = Vec::new();
for ns in l.Namespaces.iter() {
r.push(ociLinuxNamespace {
r.push(oci::LinuxNamespace {
r#type: ns.Type.clone(),
path: ns.Path.clone(),
});
@@ -457,7 +434,7 @@ fn linux_grpc_to_oci(l: &grpcLinux) -> ociLinux {
let mut r = Vec::new();
for d in l.Devices.iter() {
r.push(ociLinuxDevice {
r.push(oci::LinuxDevice {
path: d.Path.clone(),
r#type: d.Type.clone(),
major: d.Major,
@@ -473,14 +450,14 @@ fn linux_grpc_to_oci(l: &grpcLinux) -> ociLinux {
let intel_rdt = if l.IntelRdt.is_some() {
let rdt = l.IntelRdt.as_ref().unwrap();
Some(ociLinuxIntelRdt {
Some(oci::LinuxIntelRdt {
l3_cache_schema: rdt.L3CacheSchema.clone(),
})
} else {
None
};
ociLinux {
oci::Linux {
uid_mappings,
gid_mappings,
sysctl: l.Sysctl.clone(),
@@ -497,11 +474,11 @@ fn linux_grpc_to_oci(l: &grpcLinux) -> ociLinux {
}
}
fn linux_oci_to_grpc(_l: &ociLinux) -> grpcLinux {
grpcLinux::default()
fn linux_oci_to_grpc(_l: &oci::Linux) -> grpc::Linux {
grpc::Linux::default()
}
pub fn grpc_to_oci(grpc: &grpcSpec) -> ociSpec {
pub fn grpc_to_oci(grpc: &grpc::Spec) -> oci::Spec {
// process
let process = if grpc.Process.is_some() {
Some(process_grpc_to_oci(grpc.Process.as_ref().unwrap()))
@@ -539,7 +516,7 @@ pub fn grpc_to_oci(grpc: &grpcSpec) -> ociSpec {
None
};
ociSpec {
oci::Spec {
version: grpc.Version.clone(),
process,
root,


@@ -52,10 +52,12 @@ const MOUNTINFOFORMAT: &str = "{d} {d} {d}:{d} {} {} {} {}";
const PROC_PATH: &str = "/proc";
// Since libc does not define this constant for musl, it is redefined here.
#[cfg(all(target_os = "linux", target_env = "gnu"))]
#[cfg(all(target_os = "linux", target_env = "gnu", not(target_arch = "s390x")))]
const PROC_SUPER_MAGIC: libc::c_long = 0x00009fa0;
#[cfg(all(target_os = "linux", target_env = "musl"))]
const PROC_SUPER_MAGIC: libc::c_ulong = 0x00009fa0;
#[cfg(all(target_os = "linux", target_env = "gnu", target_arch = "s390x"))]
const PROC_SUPER_MAGIC: libc::c_uint = 0x00009fa0;
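// Illustrative sketch (an assumption, not part of this commit): a magic
// constant like this is typically compared against statfs(2)'s f_type to
// confirm that a path really is procfs before trusting it. The per-target
// types above exist because libc declares statfs.f_type differently on
// glibc, musl and s390x.
fn is_procfs(path: &std::ffi::CStr) -> bool {
    // Zeroed statfs buffer, filled in by the syscall below.
    let mut st: libc::statfs = unsafe { std::mem::zeroed() };
    let rc = unsafe { libc::statfs(path.as_ptr(), &mut st) };
    rc == 0 && st.f_type == PROC_SUPER_MAGIC
}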
lazy_static! {
static ref PROPAGATION: HashMap<&'static str, MsFlags> = {
@@ -66,6 +68,8 @@ lazy_static! {
m.insert("rprivate", MsFlags::MS_PRIVATE | MsFlags::MS_REC);
m.insert("slave", MsFlags::MS_SLAVE);
m.insert("rslave", MsFlags::MS_SLAVE | MsFlags::MS_REC);
m.insert("unbindable", MsFlags::MS_UNBINDABLE);
m.insert("runbindable", MsFlags::MS_UNBINDABLE | MsFlags::MS_REC);
m
};
static ref OPTIONS: HashMap<&'static str, (bool, MsFlags)> = {
@@ -91,17 +95,6 @@ lazy_static! {
m.insert("nodiratime", (false, MsFlags::MS_NODIRATIME));
m.insert("bind", (false, MsFlags::MS_BIND));
m.insert("rbind", (false, MsFlags::MS_BIND | MsFlags::MS_REC));
m.insert("unbindable", (false, MsFlags::MS_UNBINDABLE));
m.insert(
"runbindable",
(false, MsFlags::MS_UNBINDABLE | MsFlags::MS_REC),
);
m.insert("private", (false, MsFlags::MS_PRIVATE));
m.insert("rprivate", (false, MsFlags::MS_PRIVATE | MsFlags::MS_REC));
m.insert("shared", (false, MsFlags::MS_SHARED));
m.insert("rshared", (false, MsFlags::MS_SHARED | MsFlags::MS_REC));
m.insert("slave", (false, MsFlags::MS_SLAVE));
m.insert("rslave", (false, MsFlags::MS_SLAVE | MsFlags::MS_REC));
m.insert("relatime", (false, MsFlags::MS_RELATIME));
m.insert("norelatime", (true, MsFlags::MS_RELATIME));
m.insert("strictatime", (false, MsFlags::MS_STRICTATIME));
@@ -190,7 +183,7 @@ pub fn init_rootfs(
let mut bind_mount_dev = false;
for m in &spec.mounts {
let (mut flags, data) = parse_mount(&m);
let (mut flags, pgflags, data) = parse_mount(&m);
if !m.destination.starts_with('/') || m.destination.contains("..") {
return Err(anyhow!(
"the mount destination {} is invalid",
@@ -232,13 +225,15 @@ pub fn init_rootfs(
// effective.
// first check that we have non-default options required before attempting a
// remount
if m.r#type == "bind" {
for o in &m.options {
if let Some(fl) = PROPAGATION.get(o.as_str()) {
let dest = secure_join(rootfs, &m.destination);
mount(None::<&str>, dest.as_str(), None::<&str>, *fl, None::<&str>)?;
}
}
if m.r#type == "bind" && !pgflags.is_empty() {
let dest = secure_join(rootfs, &m.destination);
mount(
None::<&str>,
dest.as_str(),
None::<&str>,
pgflags,
None::<&str>,
)?;
}
}
}
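// Why a separate call (sketch, summarizing mount(2) semantics): propagation
// flags such as MS_PRIVATE are honoured only when they are essentially the
// sole operation in a mount(2) call (MS_REC aside). Folded into the bind
// mount itself they would simply be ignored, so the bind mount is performed
// first and the propagation change is applied to the destination afterwards.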
@@ -655,26 +650,27 @@ pub fn ms_move_root(rootfs: &str) -> Result<bool> {
Ok(true)
}
fn parse_mount(m: &Mount) -> (MsFlags, String) {
fn parse_mount(m: &Mount) -> (MsFlags, MsFlags, String) {
let mut flags = MsFlags::empty();
let mut pgflags = MsFlags::empty();
let mut data = Vec::new();
for o in &m.options {
match OPTIONS.get(o.as_str()) {
Some(v) => {
let (clear, fl) = *v;
if clear {
flags &= !fl;
} else {
flags |= fl;
}
if let Some(v) = OPTIONS.get(o.as_str()) {
let (clear, fl) = *v;
if clear {
flags &= !fl;
} else {
flags |= fl;
}
None => data.push(o.clone()),
} else if let Some(fl) = PROPAGATION.get(o.as_str()) {
pgflags |= *fl;
} else {
data.push(o.clone());
}
}
(flags, data.join(","))
(flags, pgflags, data.join(","))
}
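// Worked example (a sketch, not part of the commit; assumes oci::Mount has
// the four fields this file uses: destination, r#type, source, options):
//
//   let m = oci::Mount {
//       destination: "/mnt".to_string(),
//       r#type: "bind".to_string(),
//       source: "/host/dir".to_string(),
//       options: vec!["rbind".into(), "rprivate".into(), "mode=755".into()],
//   };
//   let (flags, pgflags, data) = parse_mount(&m);
//   // flags   == MS_BIND | MS_REC      (standard flags, from OPTIONS)
//   // pgflags == MS_PRIVATE | MS_REC   (propagation flags, from PROPAGATION)
//   // data    == "mode=755"            (filesystem-specific option string)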
// This function constructs a canonicalized path by combining the `rootfs` and `unsafe_path` elements.
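// Illustrative behaviour (an assumption based on the description above; the
// function body is not shown in this hunk): `..` components must not let the
// joined path escape the rootfs, e.g.
//   secure_join("/run/kata/rootfs", "../../etc/passwd")
//     -> "/run/kata/rootfs/etc/passwd"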
@@ -920,7 +916,7 @@ pub fn finish_rootfs(cfd_log: RawFd, spec: &Spec) -> Result<()> {
for m in spec.mounts.iter() {
if m.destination == "/dev" {
let (flags, _) = parse_mount(m);
let (flags, _, _) = parse_mount(m);
if flags.contains(MsFlags::MS_RDONLY) {
mount(
Some("/dev"),
@@ -1365,7 +1361,7 @@ mod tests {
let msg = format!("{}, result: {:?}", msg, result);
// Perform the checks
assert!(result == t.result, msg);
assert!(result == t.result, "{}", msg);
}
}
}


@@ -77,10 +77,6 @@ impl PipeStream {
Ok(Self(AsyncFd::new(StreamFd(fd))?))
}
pub fn shutdown(&mut self) -> io::Result<()> {
self.0.get_mut().close()
}
pub fn from_fd(fd: RawFd) -> Self {
unsafe { Self::from_raw_fd(fd) }
}
@@ -164,7 +160,44 @@ impl AsyncWrite for PipeStream {
}
fn poll_shutdown(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<io::Result<()>> {
self.get_mut().shutdown()?;
// Doing nothing in shutdown is very important:
// the only correct way to shut down a pipe is to drop it.
// Closing the fd here would make this PipeStream conflict with its twin,
// because both have the same fd, and both are registered.
Poll::Ready(Ok(()))
}
}
#[cfg(test)]
mod tests {
use super::*;
use nix::fcntl::OFlag;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
#[tokio::test]
// Shutdown should never close the inner fd.
async fn test_pipestream_shutdown() {
let (_, wfd1) = unistd::pipe2(OFlag::O_CLOEXEC).unwrap();
let mut writer1 = PipeStream::new(wfd1).unwrap();
// If shutdown closed the fd, the fd number would be reused
// and the test would fail.
let _ = writer1.shutdown().await.unwrap();
// let _ = unistd::close(wfd1);
let (rfd2, wfd2) = unistd::pipe2(OFlag::O_CLOEXEC).unwrap(); // reuse fd number, rfd2 == wfd1
let mut reader2 = PipeStream::new(rfd2).unwrap();
let mut writer2 = PipeStream::new(wfd2).unwrap();
// deregister writer1; reader2, which has the same fd number, would then be deregistered from epoll too
drop(writer1);
let _ = writer2.write(b"1").await;
let mut content = vec![0u8; 1];
// Would block here if shutdown closed the fd.
let _ = reader2.read(&mut content).await;
}
}


@@ -29,7 +29,6 @@ pub enum StreamType {
Stdin,
Stdout,
Stderr,
ExitPipeR,
TermMaster,
ParentStdin,
ParentStdout,
@@ -45,8 +44,8 @@ pub struct Process {
pub stdin: Option<RawFd>,
pub stdout: Option<RawFd>,
pub stderr: Option<RawFd>,
pub exit_pipe_r: Option<RawFd>,
pub exit_pipe_w: Option<RawFd>,
pub exit_tx: Option<tokio::sync::watch::Sender<bool>>,
pub exit_rx: Option<tokio::sync::watch::Receiver<bool>>,
pub extra_files: Vec<File>,
pub term_master: Option<RawFd>,
pub tty: bool,
@@ -97,14 +96,15 @@ impl Process {
pipe_size: i32,
) -> Result<Self> {
let logger = logger.new(o!("subsystem" => "process"));
let (exit_tx, exit_rx) = tokio::sync::watch::channel(false);
let mut p = Process {
exec_id: String::from(id),
stdin: None,
stdout: None,
stderr: None,
exit_pipe_w: None,
exit_pipe_r: None,
exit_tx: Some(exit_tx),
exit_rx: Some(exit_rx),
extra_files: Vec::new(),
tty: ocip.terminal,
term_master: None,
@@ -152,7 +152,6 @@ impl Process {
StreamType::Stdin => self.stdin,
StreamType::Stdout => self.stdout,
StreamType::Stderr => self.stderr,
StreamType::ExitPipeR => self.exit_pipe_r,
StreamType::TermMaster => self.term_master,
StreamType::ParentStdin => self.parent_stdin,
StreamType::ParentStdout => self.parent_stdout,


@@ -117,28 +117,20 @@ pub async fn write_async(pipe_w: &mut PipeStream, msg_type: i32, data_str: &str)
}
match msg_type {
SYNC_FAILED => match write_count(pipe_w, data_str.as_bytes(), data_str.len()).await {
Ok(_) => pipe_w.shutdown()?,
Err(e) => {
pipe_w.shutdown()?;
SYNC_FAILED => {
if let Err(e) = write_count(pipe_w, data_str.as_bytes(), data_str.len()).await {
return Err(anyhow!(e).context("error in send message to process"));
}
},
}
SYNC_DATA => {
let length: i32 = data_str.len() as i32;
write_count(pipe_w, &length.to_be_bytes(), MSG_SIZE)
.await
.or_else(|e| {
pipe_w.shutdown()?;
Err(anyhow!(e).context("error in send message to process"))
})?;
.map_err(|e| anyhow!(e).context("error in send message to process"))?;
write_count(pipe_w, data_str.as_bytes(), data_str.len())
.await
.or_else(|e| {
pipe_w.shutdown()?;
Err(anyhow!(e).context("error in send message to process"))
})?;
.map_err(|e| anyhow!(e).context("error in send message to process"))?;
}
_ => (),


@@ -6,7 +6,7 @@
use crate::container::Config;
use anyhow::{anyhow, Context, Error, Result};
use nix::errno::Errno;
use oci::{Linux, LinuxIDMapping, LinuxNamespace, Spec};
use oci::{Linux, LinuxIdMapping, LinuxNamespace, Spec};
use std::collections::HashMap;
use std::path::{Component, PathBuf};
@@ -107,7 +107,7 @@ fn security(oci: &Spec) -> Result<()> {
Ok(())
}
fn idmapping(maps: &[LinuxIDMapping]) -> Result<()> {
fn idmapping(maps: &[LinuxIdMapping]) -> Result<()> {
for map in maps {
if map.size > 0 {
return Ok(());
@@ -238,7 +238,7 @@ fn rootless_euid_mapping(oci: &Spec) -> Result<()> {
Ok(())
}
fn has_idmapping(maps: &[LinuxIDMapping], id: u32) -> bool {
fn has_idmapping(maps: &[LinuxIdMapping], id: u32) -> bool {
for map in maps {
if id >= map.container_id && id < map.container_id + map.size {
return true;
@@ -441,7 +441,7 @@ mod tests {
usernamespace(&spec).unwrap();
let mut linux = Linux::default();
linux.uid_mappings = vec![LinuxIDMapping {
linux.uid_mappings = vec![LinuxIdMapping {
container_id: 0,
host_id: 1000,
size: 0,
@@ -450,7 +450,7 @@ mod tests {
usernamespace(&spec).unwrap_err();
let mut linux = Linux::default();
linux.uid_mappings = vec![LinuxIDMapping {
linux.uid_mappings = vec![LinuxIdMapping {
container_id: 0,
host_id: 1000,
size: 100,
@@ -497,12 +497,12 @@ mod tests {
path: "/sys/cgroups/user".to_owned(),
},
];
linux.uid_mappings = vec![LinuxIDMapping {
linux.uid_mappings = vec![LinuxIdMapping {
container_id: 0,
host_id: 1000,
size: 1000,
}];
linux.gid_mappings = vec![LinuxIDMapping {
linux.gid_mappings = vec![LinuxIdMapping {
container_id: 0,
host_id: 1000,
size: 1000,


@@ -273,12 +273,12 @@ fn get_string_value(param: &str) -> Result<String> {
}
// We need a name (but the value can be blank)
if fields[0] == "" {
if fields[0].is_empty() {
return Err(anyhow!(ERR_INVALID_GET_VALUE_NO_NAME));
}
let value = fields[1..].join("=");
if value == "" {
if value.is_empty() {
return Err(anyhow!(ERR_INVALID_GET_VALUE_NO_VALUE));
}
@@ -334,7 +334,7 @@ mod tests {
if $expected_result.is_ok() {
let expected_level = $expected_result.as_ref().unwrap();
let actual_level = $actual_result.unwrap();
assert!(*expected_level == actual_level, $msg);
assert!(*expected_level == actual_level, "{}", $msg);
} else {
let expected_error = $expected_result.as_ref().unwrap_err();
let expected_error_msg = format!("{:?}", expected_error);
@@ -342,9 +342,9 @@ mod tests {
if let Err(actual_error) = $actual_result {
let actual_error_msg = format!("{:?}", actual_error);
assert!(expected_error_msg == actual_error_msg, $msg);
assert!(expected_error_msg == actual_error_msg, "{}", $msg);
} else {
assert!(expected_error_msg == "expected error, got OK", $msg);
assert!(expected_error_msg == "expected error, got OK", "{}", $msg);
}
}
};

src/agent/src/console.rs (new file, 294 lines)

@@ -0,0 +1,294 @@
// Copyright (c) 2021 Ant Group
// Copyright (c) 2021 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
//
use crate::util;
use anyhow::{anyhow, Result};
use nix::fcntl::{self, FcntlArg, FdFlag, OFlag};
use nix::libc::{STDERR_FILENO, STDIN_FILENO, STDOUT_FILENO};
use nix::pty::{openpty, OpenptyResult};
use nix::sys::socket::{self, AddressFamily, SockAddr, SockFlag, SockType};
use nix::sys::stat::Mode;
use nix::sys::wait;
use nix::unistd::{self, close, dup2, fork, setsid, ForkResult, Pid};
use rustjail::pipestream::PipeStream;
use slog::Logger;
use std::ffi::CString;
use std::os::unix::io::{FromRawFd, RawFd};
use std::path::PathBuf;
use std::process::Stdio;
use std::sync::Arc;
use std::sync::Mutex as SyncMutex;
use futures::StreamExt;
use tokio::io::{AsyncRead, AsyncWrite};
use tokio::select;
use tokio::sync::watch::Receiver;
const CONSOLE_PATH: &str = "/dev/console";
lazy_static! {
static ref SHELLS: Arc<SyncMutex<Vec<String>>> = {
let mut v = Vec::new();
if !cfg!(test) {
v.push("/bin/bash".to_string());
v.push("/bin/sh".to_string());
}
Arc::new(SyncMutex::new(v))
};
}
pub fn initialize() {
lazy_static::initialize(&SHELLS);
}
pub async fn debug_console_handler(
logger: Logger,
port: u32,
mut shutdown: Receiver<bool>,
) -> Result<()> {
let logger = logger.new(o!("subsystem" => "debug-console"));
let shells = SHELLS.lock().unwrap().to_vec();
let shell = shells
.into_iter()
.find(|sh| PathBuf::from(sh).exists())
.ok_or_else(|| anyhow!("no shell found to launch debug console"))?;
if port > 0 {
let listenfd = socket::socket(
AddressFamily::Vsock,
SockType::Stream,
SockFlag::SOCK_CLOEXEC,
None,
)?;
let addr = SockAddr::new_vsock(libc::VMADDR_CID_ANY, port);
socket::bind(listenfd, &addr)?;
socket::listen(listenfd, 1)?;
let mut incoming = util::get_vsock_incoming(listenfd);
loop {
select! {
_ = shutdown.changed() => {
info!(logger, "debug console got shutdown request");
break;
}
conn = incoming.next() => {
if let Some(conn) = conn {
// Accept a new connection
match conn {
Ok(stream) => {
let logger = logger.clone();
let shell = shell.clone();
// Do not block (await) here, or we'll never receive the shutdown signal
tokio::spawn(async move {
let _ = run_debug_console_vsock(logger, shell, stream).await;
});
}
Err(e) => {
error!(logger, "{:?}", e);
}
}
} else {
break;
}
}
}
}
} else {
let mut flags = OFlag::empty();
flags.insert(OFlag::O_RDWR);
flags.insert(OFlag::O_CLOEXEC);
let fd = fcntl::open(CONSOLE_PATH, flags, Mode::empty())?;
select! {
_ = shutdown.changed() => {
info!(logger, "debug console got shutdown request");
}
result = run_debug_console_serial(shell.clone(), fd) => {
match result {
Ok(_) => {
info!(logger, "run_debug_console_shell session finished");
}
Err(err) => {
error!(logger, "run_debug_console_shell failed: {:?}", err);
}
}
}
}
};
Ok(())
}
fn run_in_child(slave_fd: libc::c_int, shell: String) -> Result<()> {
// create new session with child as session leader
setsid()?;
// dup stdin, stdout, stderr to let child act as a terminal
dup2(slave_fd, STDIN_FILENO)?;
dup2(slave_fd, STDOUT_FILENO)?;
dup2(slave_fd, STDERR_FILENO)?;
// set tty
unsafe {
libc::ioctl(0, libc::TIOCSCTTY);
}
let cmd = CString::new(shell).unwrap();
// run shell
let _ = unistd::execvp(cmd.as_c_str(), &[]).map_err(|e| match e {
nix::Error::Sys(errno) => {
std::process::exit(errno as i32);
}
_ => std::process::exit(-2),
});
Ok(())
}
async fn run_in_parent<T: AsyncRead + AsyncWrite>(
logger: Logger,
stream: T,
pseudo: OpenptyResult,
child_pid: Pid,
) -> Result<()> {
info!(logger, "get debug shell pid {:?}", child_pid);
let master_fd = pseudo.master;
let _ = close(pseudo.slave);
let (mut socket_reader, mut socket_writer) = tokio::io::split(stream);
let (mut master_reader, mut master_writer) = tokio::io::split(PipeStream::from_fd(master_fd));
select! {
res = tokio::io::copy(&mut master_reader, &mut socket_writer) => {
debug!(
logger,
"master closed: {:?}", res
);
}
res = tokio::io::copy(&mut socket_reader, &mut master_writer) => {
info!(
logger,
"socket closed: {:?}", res
);
}
}
let wait_status = wait::waitpid(child_pid, None);
info!(logger, "debug console process exit code: {:?}", wait_status);
Ok(())
}
async fn run_debug_console_vsock<T: AsyncRead + AsyncWrite>(
logger: Logger,
shell: String,
stream: T,
) -> Result<()> {
let logger = logger.new(o!("subsystem" => "debug-console-shell"));
let pseudo = openpty(None, None)?;
let _ = fcntl::fcntl(pseudo.master, FcntlArg::F_SETFD(FdFlag::FD_CLOEXEC));
let _ = fcntl::fcntl(pseudo.slave, FcntlArg::F_SETFD(FdFlag::FD_CLOEXEC));
let slave_fd = pseudo.slave;
match fork() {
Ok(ForkResult::Child) => run_in_child(slave_fd, shell),
Ok(ForkResult::Parent { child: child_pid }) => {
run_in_parent(logger.clone(), stream, pseudo, child_pid).await
}
Err(err) => Err(anyhow!("fork error: {:?}", err)),
}
}
async fn run_debug_console_serial(shell: String, fd: RawFd) -> Result<()> {
let mut child = match tokio::process::Command::new(shell)
.arg("-i")
.kill_on_drop(true)
.stdin(unsafe { Stdio::from_raw_fd(fd) })
.stdout(unsafe { Stdio::from_raw_fd(fd) })
.stderr(unsafe { Stdio::from_raw_fd(fd) })
.spawn()
{
Ok(c) => c,
Err(_) => return Err(anyhow!("failed to spawn shell")),
};
child.wait().await?;
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
use tempfile::tempdir;
use tokio::sync::watch;
#[tokio::test]
async fn test_setup_debug_console_no_shells() {
{
// Guarantee no shells have been added
// (required to avoid racing with
// test_setup_debug_console_invalid_shell()).
let shells_ref = SHELLS.clone();
let mut shells = shells_ref.lock().unwrap();
shells.clear();
}
let logger = slog_scope::logger();
let (_, rx) = watch::channel(true);
let result = debug_console_handler(logger, 0, rx).await;
assert!(result.is_err());
assert_eq!(
result.unwrap_err().to_string(),
"no shell found to launch debug console"
);
}
#[tokio::test]
async fn test_setup_debug_console_invalid_shell() {
{
let shells_ref = SHELLS.clone();
let mut shells = shells_ref.lock().unwrap();
let dir = tempdir().expect("failed to create tmpdir");
// Add an invalid shell
let shell = dir
.path()
.join("enoent")
.to_str()
.expect("failed to construct shell path")
.to_string();
shells.push(shell);
}
let logger = slog_scope::logger();
let (_, rx) = watch::channel(true);
let result = debug_console_handler(logger, 0, rx).await;
assert!(result.is_err());
assert_eq!(
result.unwrap_err().to_string(),
"no shell found to launch debug console"
);
}
}


@@ -5,6 +5,7 @@
use libc::{c_uint, major, minor};
use nix::sys::stat;
use regex::Regex;
use std::collections::HashMap;
use std::fs;
use std::os::unix::fs::MetadataExt;
@@ -17,7 +18,7 @@ use crate::linux_abi::*;
use crate::mount::{DRIVER_BLK_TYPE, DRIVER_MMIO_BLK_TYPE, DRIVER_NVDIMM_TYPE, DRIVER_SCSI_TYPE};
use crate::pci;
use crate::sandbox::Sandbox;
use crate::{AGENT_CONFIG, GLOBAL_DEVICE_WATCHER};
use crate::uevent::{wait_for_uevent, Uevent, UeventMatcher};
use anyhow::{anyhow, Result};
use oci::{LinuxDeviceCgroup, LinuxResources, Spec};
use protocols::agent::Device;
@@ -87,75 +88,116 @@ fn pcipath_to_sysfs(root_bus_sysfs: &str, pcipath: &pci::Path) -> Result<String>
Ok(relpath)
}
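// Example (taken from the unit test further down): a three-element PCI path
// expands to one sysfs bridge component per element, e.g.
//   pcipath_to_sysfs(root_bus_sysfs, &path234)
//     -> "/0000:00:02.0/0000:01:03.0/0000:02:04.0"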
async fn get_device_name(sandbox: &Arc<Mutex<Sandbox>>, dev_addr: &str) -> Result<String> {
// Keep the same lock order as uevent::handle_block_add_event(), otherwise it may cause a deadlock.
let mut w = GLOBAL_DEVICE_WATCHER.lock().await;
let sb = sandbox.lock().await;
for (key, value) in sb.pci_device_map.iter() {
if key.contains(dev_addr) {
info!(sl!(), "Device {} found in pci device map", dev_addr);
return Ok(format!("{}/{}", SYSTEM_DEV_PATH, value));
}
// FIXME: This matcher is only correct if the guest has at most one
// SCSI host.
#[derive(Debug)]
struct ScsiBlockMatcher {
search: String,
}
impl ScsiBlockMatcher {
fn new(scsi_addr: &str) -> ScsiBlockMatcher {
let search = format!(r"/0:0:{}/block/", scsi_addr);
ScsiBlockMatcher { search }
}
drop(sb);
}
// If the device is not found in the device map, the hotplug event has not
// been received yet, so create a channel and add it to the watchers map.
// The key of the watchers map is the device we are interested in.
// Note this is done inside the lock, so as not to miss any events from the
// global udev listener.
let (tx, rx) = tokio::sync::oneshot::channel::<String>();
w.insert(dev_addr.to_string(), Some(tx));
drop(w);
info!(sl!(), "Waiting on channel for device notification\n");
let hotplug_timeout = AGENT_CONFIG.read().await.hotplug_timeout;
let dev_name = match tokio::time::timeout(hotplug_timeout, rx).await {
Ok(v) => v?,
Err(_) => {
let watcher = GLOBAL_DEVICE_WATCHER.clone();
let mut w = watcher.lock().await;
w.remove_entry(dev_addr);
return Err(anyhow!(
"Timeout reached after {:?} waiting for device {}",
hotplug_timeout,
dev_addr
));
}
};
Ok(format!("{}/{}", SYSTEM_DEV_PATH, &dev_name))
impl UeventMatcher for ScsiBlockMatcher {
fn is_match(&self, uev: &Uevent) -> bool {
uev.subsystem == "block" && uev.devpath.contains(&self.search) && !uev.devname.is_empty()
}
}
pub async fn get_scsi_device_name(
sandbox: &Arc<Mutex<Sandbox>>,
scsi_addr: &str,
) -> Result<String> {
let dev_sub_path = format!("{}{}/{}", SCSI_HOST_CHANNEL, scsi_addr, SCSI_BLOCK_SUFFIX);
let matcher = ScsiBlockMatcher::new(scsi_addr);
scan_scsi_bus(scsi_addr)?;
get_device_name(sandbox, &dev_sub_path).await
let uev = wait_for_uevent(sandbox, matcher).await?;
Ok(format!("{}/{}", SYSTEM_DEV_PATH, &uev.devname))
}
pub async fn get_pci_device_name(
#[derive(Debug)]
struct VirtioBlkPciMatcher {
rex: Regex,
}
impl VirtioBlkPciMatcher {
fn new(relpath: &str) -> VirtioBlkPciMatcher {
let root_bus = create_pci_root_bus_path();
let re = format!(r"^{}{}/virtio[0-9]+/block/", root_bus, relpath);
VirtioBlkPciMatcher {
rex: Regex::new(&re).unwrap(),
}
}
}
impl UeventMatcher for VirtioBlkPciMatcher {
fn is_match(&self, uev: &Uevent) -> bool {
uev.subsystem == "block" && self.rex.is_match(&uev.devpath) && !uev.devname.is_empty()
}
}
pub async fn get_virtio_blk_pci_device_name(
sandbox: &Arc<Mutex<Sandbox>>,
pcipath: &pci::Path,
) -> Result<String> {
let root_bus_sysfs = format!("{}{}", SYSFS_DIR, create_pci_root_bus_path());
let sysfs_rel_path = pcipath_to_sysfs(&root_bus_sysfs, pcipath)?;
let matcher = VirtioBlkPciMatcher::new(&sysfs_rel_path);
rescan_pci_bus()?;
get_device_name(sandbox, &sysfs_rel_path).await
let uev = wait_for_uevent(sandbox, matcher).await?;
Ok(format!("{}/{}", SYSTEM_DEV_PATH, &uev.devname))
}
pub async fn get_pmem_device_name(
sandbox: &Arc<Mutex<Sandbox>>,
pmem_devname: &str,
) -> Result<String> {
let dev_sub_path = format!("/{}/{}", SCSI_BLOCK_SUFFIX, pmem_devname);
get_device_name(sandbox, &dev_sub_path).await
#[derive(Debug)]
struct PmemBlockMatcher {
suffix: String,
}
impl PmemBlockMatcher {
fn new(devname: &str) -> PmemBlockMatcher {
let suffix = format!(r"/block/{}", devname);
PmemBlockMatcher { suffix }
}
}
impl UeventMatcher for PmemBlockMatcher {
fn is_match(&self, uev: &Uevent) -> bool {
uev.subsystem == "block"
&& uev.devpath.starts_with(ACPI_DEV_PATH)
&& uev.devpath.ends_with(&self.suffix)
&& !uev.devname.is_empty()
}
}
pub async fn wait_for_pmem_device(sandbox: &Arc<Mutex<Sandbox>>, devpath: &str) -> Result<()> {
let devname = match devpath.strip_prefix("/dev/") {
Some(dev) => dev,
None => {
return Err(anyhow!(
"Storage source '{}' must start with /dev/",
devpath
))
}
};
let matcher = PmemBlockMatcher::new(devname);
let uev = wait_for_uevent(sandbox, matcher).await?;
if uev.devname != devname {
return Err(anyhow!(
"Unexpected device name {} for pmem device (expected {})",
uev.devname,
devname
));
}
Ok(())
}
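// Illustrative flow (sketch; the device path is an assumption): the NVDIMM
// storage handler calls e.g.
//   wait_for_pmem_device(&sandbox, "/dev/pmem0").await?;
// which strips the "/dev/" prefix and blocks until a block-subsystem uevent
// whose devpath sits under an ACPI devpath and ends in "/block/pmem0" is seen.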
/// Scan SCSI bus for the given SCSI address(SCSI-Id and LUN)
@@ -290,14 +332,9 @@ async fn virtio_blk_device_handler(
devidx: &DevIndex,
) -> Result<()> {
let mut dev = device.clone();
let pcipath = pci::Path::from_str(&device.id)?;
// When "Id (PCI path)" is not set, we allow to use the predicted
// "VmPath" passed from kata-runtime Note this is a special code
// path for cloud-hypervisor when BDF information is not available
if !device.id.is_empty() {
let pcipath = pci::Path::from_str(&device.id)?;
dev.vm_path = get_pci_device_name(sandbox, &pcipath).await?;
}
dev.vm_path = get_virtio_blk_pci_device_name(sandbox, &pcipath).await?;
update_spec_device_list(&dev, spec, devidx)
}
@@ -430,6 +467,7 @@ pub fn update_device_cgroup(spec: &mut Spec) -> Result<()> {
#[cfg(test)]
mod tests {
use super::*;
use crate::uevent::spawn_test_watcher;
use oci::Linux;
use tempfile::tempdir;
@@ -776,4 +814,107 @@ mod tests {
let relpath = pcipath_to_sysfs(rootbuspath, &path234);
assert_eq!(relpath.unwrap(), "/0000:00:02.0/0000:01:03.0/0000:02:04.0");
}
// We use device-specific variants of this for real cases, but
// they have some complications that make them troublesome to
// unit test
async fn example_get_device_name(
sandbox: &Arc<Mutex<Sandbox>>,
relpath: &str,
) -> Result<String> {
let matcher = VirtioBlkPciMatcher::new(relpath);
let uev = wait_for_uevent(sandbox, matcher).await?;
Ok(uev.devname)
}
#[tokio::test]
async fn test_get_device_name() {
let devname = "vda";
let root_bus = create_pci_root_bus_path();
let relpath = "/0000:00:0a.0/0000:03:0b.0";
let devpath = format!("{}{}/virtio4/block/{}", root_bus, relpath, devname);
let mut uev = crate::uevent::Uevent::default();
uev.action = crate::linux_abi::U_EVENT_ACTION_ADD.to_string();
uev.subsystem = "block".to_string();
uev.devpath = devpath.clone();
uev.devname = devname.to_string();
let logger = slog::Logger::root(slog::Discard, o!());
let sandbox = Arc::new(Mutex::new(Sandbox::new(&logger).unwrap()));
let mut sb = sandbox.lock().await;
sb.uevent_map.insert(devpath.clone(), uev);
drop(sb); // unlock
let name = example_get_device_name(&sandbox, relpath).await;
assert!(name.is_ok(), "{}", name.unwrap_err());
assert_eq!(name.unwrap(), devname);
let mut sb = sandbox.lock().await;
let uev = sb.uevent_map.remove(&devpath).unwrap();
drop(sb); // unlock
spawn_test_watcher(sandbox.clone(), uev);
let name = example_get_device_name(&sandbox, relpath).await;
assert!(name.is_ok(), "{}", name.unwrap_err());
assert_eq!(name.unwrap(), devname);
}
#[tokio::test]
async fn test_virtio_blk_matcher() {
let root_bus = create_pci_root_bus_path();
let devname = "vda";
let mut uev_a = crate::uevent::Uevent::default();
let relpath_a = "/0000:00:0a.0";
uev_a.action = crate::linux_abi::U_EVENT_ACTION_ADD.to_string();
uev_a.subsystem = "block".to_string();
uev_a.devname = devname.to_string();
uev_a.devpath = format!("{}{}/virtio4/block/{}", root_bus, relpath_a, devname);
let matcher_a = VirtioBlkPciMatcher::new(&relpath_a);
let mut uev_b = uev_a.clone();
let relpath_b = "/0000:00:0a.0/0000:00:0b.0";
uev_b.devpath = format!("{}{}/virtio0/block/{}", root_bus, relpath_b, devname);
let matcher_b = VirtioBlkPciMatcher::new(&relpath_b);
assert!(matcher_a.is_match(&uev_a));
assert!(matcher_b.is_match(&uev_b));
assert!(!matcher_b.is_match(&uev_a));
assert!(!matcher_a.is_match(&uev_b));
}
#[tokio::test]
async fn test_scsi_block_matcher() {
let root_bus = create_pci_root_bus_path();
let devname = "sda";
let mut uev_a = crate::uevent::Uevent::default();
let addr_a = "0:0";
uev_a.action = crate::linux_abi::U_EVENT_ACTION_ADD.to_string();
uev_a.subsystem = "block".to_string();
uev_a.devname = devname.to_string();
uev_a.devpath = format!(
"{}/0000:00:00.0/virtio0/host0/target0:0:0/0:0:{}/block/sda",
root_bus, addr_a
);
let matcher_a = ScsiBlockMatcher::new(&addr_a);
let mut uev_b = uev_a.clone();
let addr_b = "2:0";
uev_b.devpath = format!(
"{}/0000:00:00.0/virtio0/host0/target0:0:2/0:0:{}/block/sdb",
root_bus, addr_b
);
let matcher_b = ScsiBlockMatcher::new(&addr_b);
assert!(matcher_a.is_match(&uev_a));
assert!(matcher_b.is_match(&uev_b));
assert!(!matcher_b.is_match(&uev_a));
assert!(!matcher_a.is_match(&uev_b));
}
}


@@ -78,11 +78,6 @@ pub const SYSFS_MEMORY_BLOCK_SIZE_PATH: &str = "/sys/devices/system/memory/block
pub const SYSFS_MEMORY_HOTPLUG_PROBE_PATH: &str = "/sys/devices/system/memory/probe";
pub const SYSFS_MEMORY_ONLINE_PATH: &str = "/sys/devices/system/memory";
// Here in "0:0", the first number is the SCSI host number because
// only one SCSI controller has been plugged, while the second number
// is always 0.
pub const SCSI_HOST_CHANNEL: &str = "0:0:";
pub const SCSI_BLOCK_SUFFIX: &str = "block";
pub const SYSFS_SCSI_HOST_PATH: &str = "/sys/class/scsi_host";
pub const SYSFS_CGROUPPATH: &str = "/sys/fs/cgroup";


@@ -20,27 +20,21 @@ extern crate scopeguard;
extern crate slog;
use anyhow::{anyhow, Context, Result};
use nix::fcntl::{self, OFlag};
use nix::fcntl::{FcntlArg, FdFlag};
use nix::libc::{STDERR_FILENO, STDIN_FILENO, STDOUT_FILENO};
use nix::pty;
use nix::sys::select::{select, FdSet};
use nix::fcntl::OFlag;
use nix::sys::socket::{self, AddressFamily, SockAddr, SockFlag, SockType};
use nix::sys::wait;
use nix::unistd::{self, close, dup, dup2, fork, setsid, ForkResult};
use std::collections::HashMap;
use nix::unistd::{self, dup, Pid};
use std::env;
use std::ffi::{CStr, CString, OsStr};
use std::ffi::OsStr;
use std::fs::{self, File};
use std::io::{Read, Write};
use std::os::unix::ffi::OsStrExt;
use std::os::unix::fs as unixfs;
use std::os::unix::io::AsRawFd;
use std::path::Path;
use std::process::exit;
use std::sync::Arc;
use unistd::Pid;
mod config;
mod console;
mod device;
mod linux_abi;
mod metrics;
@@ -64,33 +58,23 @@ use signal::setup_signal_handler;
use slog::Logger;
use uevent::watch_uevents;
use std::sync::Mutex as SyncMutex;
use futures::future::join_all;
use futures::StreamExt as _;
use rustjail::pipestream::PipeStream;
use tokio::{
io::AsyncWrite,
sync::{
oneshot::Sender,
watch::{channel, Receiver},
Mutex, RwLock,
},
task::JoinHandle,
};
use tokio_vsock::{Incoming, VsockListener, VsockStream};
mod rpc;
const NAME: &str = "kata-agent";
const KERNEL_CMDLINE_FILE: &str = "/proc/cmdline";
const CONSOLE_PATH: &str = "/dev/console";
const DEFAULT_BUF_SIZE: usize = 8 * 1024;
lazy_static! {
static ref GLOBAL_DEVICE_WATCHER: Arc<Mutex<HashMap<String, Option<Sender<String>>>>> =
Arc::new(Mutex::new(HashMap::new()));
static ref AGENT_CONFIG: Arc<RwLock<AgentConfig>> =
Arc::new(RwLock::new(config::AgentConfig::new()));
}
@@ -108,27 +92,6 @@ fn announce(logger: &Logger, config: &AgentConfig) {
);
}
fn set_fd_close_exec(fd: RawFd) -> Result<RawFd> {
if let Err(e) = fcntl::fcntl(fd, FcntlArg::F_SETFD(FdFlag::FD_CLOEXEC)) {
return Err(anyhow!("failed to set fd: {} as close-on-exec: {}", fd, e));
}
Ok(fd)
}
fn get_vsock_incoming(fd: RawFd) -> Incoming {
let incoming;
unsafe {
incoming = VsockListener::from_raw_fd(fd).incoming();
}
incoming
}
async fn get_vsock_stream(fd: RawFd) -> Result<VsockStream> {
let stream = get_vsock_incoming(fd).next().await.unwrap().unwrap();
set_fd_close_exec(stream.as_raw_fd())?;
Ok(stream)
}
// Create a thread to handle reading from the logger pipe. The thread will
// output to the vsock port specified, or stdout.
async fn create_logger_task(rfd: RawFd, vsock_port: u32, shutdown: Receiver<bool>) -> Result<()> {
@@ -147,7 +110,7 @@ async fn create_logger_task(rfd: RawFd, vsock_port: u32, shutdown: Receiver<bool
socket::bind(listenfd, &addr).unwrap();
socket::listen(listenfd, 1).unwrap();
writer = Box::new(get_vsock_stream(listenfd).await.unwrap());
writer = Box::new(util::get_vsock_stream(listenfd).await.unwrap());
} else {
writer = Box::new(tokio::io::stdout());
}
@@ -163,7 +126,7 @@ async fn real_main() -> std::result::Result<(), Box<dyn std::error::Error>> {
// List of tasks that need to be stopped for a clean shutdown
let mut tasks: Vec<JoinHandle<Result<()>>> = vec![];
lazy_static::initialize(&SHELLS);
console::initialize();
lazy_static::initialize(&AGENT_CONFIG);
@@ -303,26 +266,17 @@ async fn start_sandbox(
tasks: &mut Vec<JoinHandle<Result<()>>>,
shutdown: Receiver<bool>,
) -> Result<()> {
let shells = SHELLS.clone();
let debug_console_vport = config.debug_console_vport as u32;
let shell_handle = if config.debug_console {
let thread_logger = logger.clone();
let shells = shells.lock().unwrap().to_vec();
if config.debug_console {
let debug_console_task = tokio::task::spawn(console::debug_console_handler(
logger.clone(),
debug_console_vport,
shutdown.clone(),
));
let handle = tokio::task::spawn_blocking(move || {
let result = setup_debug_console(&thread_logger, shells, debug_console_vport);
if result.is_err() {
// Report error, but don't fail
warn!(thread_logger, "failed to setup debug console";
"error" => format!("{}", result.unwrap_err()));
}
});
Some(handle)
} else {
None
};
tasks.push(debug_console_task);
}
// Initialize unique sandbox structure.
let s = Sandbox::new(&logger).context("Failed to create sandbox")?;
@@ -354,10 +308,6 @@ async fn start_sandbox(
let _ = rx.await?;
server.shutdown().await?;
if let Some(handle) = shell_handle {
handle.await.map_err(|e| anyhow!("{:?}", e))?;
}
Ok(())
}
@@ -408,284 +358,5 @@ fn sethostname(hostname: &OsStr) -> Result<()> {
}
}
lazy_static! {
static ref SHELLS: Arc<SyncMutex<Vec<String>>> = {
let mut v = Vec::new();
if !cfg!(test) {
v.push("/bin/bash".to_string());
v.push("/bin/sh".to_string());
}
Arc::new(SyncMutex::new(v))
};
}
use crate::config::AgentConfig;
use nix::sys::stat::Mode;
use std::os::unix::io::{FromRawFd, RawFd};
use std::path::PathBuf;
use std::process::exit;
fn setup_debug_console(logger: &Logger, shells: Vec<String>, port: u32) -> Result<()> {
let shell = shells
.iter()
.find(|sh| PathBuf::from(sh).exists())
.ok_or_else(|| anyhow!("no shell found to launch debug console"))?;
if port > 0 {
let listenfd = socket::socket(
AddressFamily::Vsock,
SockType::Stream,
SockFlag::SOCK_CLOEXEC,
None,
)?;
let addr = SockAddr::new_vsock(libc::VMADDR_CID_ANY, port);
socket::bind(listenfd, &addr)?;
socket::listen(listenfd, 1)?;
loop {
let f: RawFd = socket::accept4(listenfd, SockFlag::SOCK_CLOEXEC)?;
match run_debug_console_shell(logger, shell, f) {
Ok(_) => {
info!(logger, "run_debug_console_shell session finished");
}
Err(err) => {
error!(logger, "run_debug_console_shell failed: {:?}", err);
}
}
}
} else {
let mut flags = OFlag::empty();
flags.insert(OFlag::O_RDWR);
flags.insert(OFlag::O_CLOEXEC);
loop {
let f: RawFd = fcntl::open(CONSOLE_PATH, flags, Mode::empty())?;
match run_debug_console_shell(logger, shell, f) {
Ok(_) => {
info!(logger, "run_debug_console_shell session finished");
}
Err(err) => {
error!(logger, "run_debug_console_shell failed: {:?}", err);
}
}
}
};
}
fn io_copy<R: ?Sized, W: ?Sized>(reader: &mut R, writer: &mut W) -> std::io::Result<u64>
where
R: Read,
W: Write,
{
let mut buf = [0; DEFAULT_BUF_SIZE];
let buf_len;
match reader.read(&mut buf) {
Ok(0) => return Ok(0),
Ok(len) => buf_len = len,
Err(err) => return Err(err),
};
// write and return
match writer.write_all(&buf[..buf_len]) {
Ok(_) => Ok(buf_len as u64),
Err(err) => Err(err),
}
}
fn run_debug_console_shell(logger: &Logger, shell: &str, socket_fd: RawFd) -> Result<()> {
let pseudo = pty::openpty(None, None)?;
let _ = fcntl::fcntl(pseudo.master, FcntlArg::F_SETFD(FdFlag::FD_CLOEXEC));
let _ = fcntl::fcntl(pseudo.slave, FcntlArg::F_SETFD(FdFlag::FD_CLOEXEC));
let slave_fd = pseudo.slave;
match fork() {
Ok(ForkResult::Child) => {
// create new session with child as session leader
setsid()?;
// dup stdin, stdout, stderr to let child act as a terminal
dup2(slave_fd, STDIN_FILENO)?;
dup2(slave_fd, STDOUT_FILENO)?;
dup2(slave_fd, STDERR_FILENO)?;
// set tty
unsafe {
libc::ioctl(0, libc::TIOCSCTTY);
}
let cmd = CString::new(shell).unwrap();
let args: Vec<&CStr> = vec![];
// run shell
let _ = unistd::execvp(cmd.as_c_str(), args.as_slice()).map_err(|e| match e {
nix::Error::Sys(errno) => {
std::process::exit(errno as i32);
}
_ => std::process::exit(-2),
});
}
Ok(ForkResult::Parent { child: child_pid }) => {
info!(logger, "get debug shell pid {:?}", child_pid);
let (rfd, wfd) = unistd::pipe2(OFlag::O_CLOEXEC)?;
let master_fd = pseudo.master;
let debug_shell_logger = logger.clone();
// channel used to synchronize between the thread and the main process
let (tx, rx) = std::sync::mpsc::channel::<i32>();
// start a thread to do IO copy between the socket and pseudo.master
std::thread::spawn(move || {
let mut master_reader = unsafe { File::from_raw_fd(master_fd) };
let mut master_writer = unsafe { File::from_raw_fd(master_fd) };
let mut socket_reader = unsafe { File::from_raw_fd(socket_fd) };
let mut socket_writer = unsafe { File::from_raw_fd(socket_fd) };
loop {
let mut fd_set = FdSet::new();
fd_set.insert(rfd);
fd_set.insert(master_fd);
fd_set.insert(socket_fd);
match select(
Some(fd_set.highest().unwrap() + 1),
&mut fd_set,
None,
None,
None,
) {
Ok(_) => (),
Err(e) => {
if e == nix::Error::from(nix::errno::Errno::EINTR) {
continue;
} else {
error!(debug_shell_logger, "select error {:?}", e);
tx.send(1).unwrap();
break;
}
}
}
if fd_set.contains(rfd) {
info!(
debug_shell_logger,
"debug shell process {} exited", child_pid
);
tx.send(1).unwrap();
break;
}
if fd_set.contains(master_fd) {
match io_copy(&mut master_reader, &mut socket_writer) {
Ok(0) => {
debug!(debug_shell_logger, "master fd closed");
tx.send(1).unwrap();
break;
}
Ok(_) => {}
Err(ref e) if e.kind() == std::io::ErrorKind::Interrupted => continue,
Err(e) => {
error!(debug_shell_logger, "read master fd error {:?}", e);
tx.send(1).unwrap();
break;
}
}
}
if fd_set.contains(socket_fd) {
match io_copy(&mut socket_reader, &mut master_writer) {
Ok(0) => {
debug!(debug_shell_logger, "socket fd closed");
tx.send(1).unwrap();
break;
}
Ok(_) => {}
Err(ref e) if e.kind() == std::io::ErrorKind::Interrupted => continue,
Err(e) => {
error!(debug_shell_logger, "read socket fd error {:?}", e);
tx.send(1).unwrap();
break;
}
}
}
}
});
let wait_status = wait::waitpid(child_pid, None);
info!(logger, "debug console process exit code: {:?}", wait_status);
info!(logger, "notify debug monitor thread to exit");
// close pipe to exit select loop
let _ = close(wfd);
// wait for thread exit.
let _ = rx.recv().unwrap();
info!(logger, "debug monitor thread has exited");
// close files
let _ = close(rfd);
let _ = close(master_fd);
let _ = close(slave_fd);
}
Err(err) => {
return Err(anyhow!("fork error: {:?}", err));
}
}
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
use tempfile::tempdir;
#[test]
fn test_setup_debug_console_no_shells() {
// Guarantee no shells have been added
// (required to avoid racing with
// test_setup_debug_console_invalid_shell()).
let shells_ref = SHELLS.clone();
let mut shells = shells_ref.lock().unwrap();
shells.clear();
let logger = slog_scope::logger();
let result = setup_debug_console(&logger, shells.to_vec(), 0);
assert!(result.is_err());
assert_eq!(
result.unwrap_err().to_string(),
"no shell found to launch debug console"
);
}
#[test]
fn test_setup_debug_console_invalid_shell() {
let shells_ref = SHELLS.clone();
let mut shells = shells_ref.lock().unwrap();
let dir = tempdir().expect("failed to create tmpdir");
// Add an invalid shell
let shell = dir
.path()
.join("enoent")
.to_str()
.expect("failed to construct shell path")
.to_string();
shells.push(shell);
let logger = slog_scope::logger();
let result = setup_debug_console(&logger, shells.to_vec(), 0);
assert!(result.is_err());
assert_eq!(
result.unwrap_err().to_string(),
"no shell found to launch debug console"
);
}
}


@@ -7,7 +7,7 @@ use std::collections::HashMap;
use std::ffi::CString;
use std::fs;
use std::io;
use std::os::unix::fs::PermissionsExt;
use std::os::unix::fs::{MetadataExt, PermissionsExt};
use std::path::Path;
use std::ptr::null;
@@ -17,13 +17,14 @@ use tokio::sync::Mutex;
use libc::{c_void, mount};
use nix::mount::{self, MsFlags};
use nix::unistd::Gid;
use regex::Regex;
use std::fs::File;
use std::io::{BufRead, BufReader};
use crate::device::{
get_pci_device_name, get_pmem_device_name, get_scsi_device_name, online_device,
get_scsi_device_name, get_virtio_blk_pci_device_name, online_device, wait_for_pmem_device,
};
use crate::linux_abi::*;
use crate::pci;
@@ -45,6 +46,9 @@ pub const TYPE_ROOTFS: &str = "rootfs";
pub const MOUNT_GUEST_TAG: &str = "kataShared";
// Allocating an FSGroup that owns the pod's volumes
const FS_GID: &str = "fsgid";
#[rustfmt::skip]
lazy_static! {
pub static ref FLAGS: HashMap<&'static str, (bool, MsFlags)> = {
@@ -241,7 +245,35 @@ async fn ephemeral_storage_handler(
}
fs::create_dir_all(Path::new(&storage.mount_point))?;
common_storage_handler(logger, storage)?;
// For now we only support one option field: "fsGroup", which
// isn't a valid mount option, so we must remove it before
// doing the mount.
if !storage.options.is_empty() {
// Ephemeral storage doesn't support any mount option except fsGroup.
let mut new_storage = storage.clone();
new_storage.options = protobuf::RepeatedField::default();
common_storage_handler(logger, &new_storage)?;
let opts_vec: Vec<String> = storage.options.to_vec();
let opts = parse_options(opts_vec);
if let Some(fsgid) = opts.get(FS_GID) {
let gid = fsgid.parse::<u32>()?;
nix::unistd::chown(storage.mount_point.as_str(), None, Some(Gid::from_raw(gid)))?;
let meta = fs::metadata(&storage.mount_point)?;
let mut permission = meta.permissions();
let o_mode = meta.mode() | 0o2000;
permission.set_mode(o_mode);
fs::set_permissions(&storage.mount_point, permission)?;
}
} else {
common_storage_handler(logger, &storage)?;
}
Ok("".to_string())
}
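// Worked example (sketch) of the mode arithmetic above: OR-ing in the setgid
// bit (0o2000) makes new files in the volume inherit the fsGroup's gid.
//   let meta_mode = 0o755u32;          // assumed pre-existing mode
//   assert_eq!(meta_mode | 0o2000, 0o2755);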
@@ -266,11 +298,24 @@ async fn local_storage_handler(
let opts_vec: Vec<String> = storage.options.to_vec();
let opts = parse_options(opts_vec);
let mode = opts.get("mode");
if let Some(mode) = mode {
let mut need_set_fsgid = false;
if let Some(fsgid) = opts.get(FS_GID) {
let gid = fsgid.parse::<u32>()?;
nix::unistd::chown(storage.mount_point.as_str(), None, Some(Gid::from_raw(gid)))?;
need_set_fsgid = true;
}
if let Some(mode) = opts.get("mode") {
let mut permission = fs::metadata(&storage.mount_point)?.permissions();
let o_mode = u32::from_str_radix(mode, 8)?;
let mut o_mode = u32::from_str_radix(mode, 8)?;
if need_set_fsgid {
// set the SetGid bit in the mode mask.
o_mode |= 0o2000;
}
permission.set_mode(o_mode);
fs::set_permissions(&storage.mount_point, permission)?;
@@ -325,7 +370,7 @@ async fn virtio_blk_storage_handler(
}
} else {
let pcipath = pci::Path::from_str(&storage.source)?;
let dev_path = get_pci_device_name(&sandbox, &pcipath).await?;
let dev_path = get_virtio_blk_pci_device_name(&sandbox, &pcipath).await?;
storage.source = dev_path;
}
@@ -360,22 +405,10 @@ async fn nvdimm_storage_handler(
storage: &Storage,
sandbox: Arc<Mutex<Sandbox>>,
) -> Result<String> {
let mut storage = storage.clone();
// If hot-plugged, get the device node path based on the PCI address else
// use the virt path provided in Storage Source
let pmem_devname = match storage.source.strip_prefix("/dev/") {
Some(dev) => dev,
None => {
return Err(anyhow!(
"Storage source '{}' must start with /dev/",
storage.source
))
}
};
let storage = storage.clone();
// Retrieve the device path from NVDIMM address.
let dev_path = get_pmem_device_name(&sandbox, pmem_devname).await?;
storage.source = dev_path;
wait_for_pmem_device(&sandbox, &storage.source).await?;
common_storage_handler(logger, &storage)
}
@@ -388,7 +421,10 @@ fn mount_storage(logger: &Logger, storage: &Storage) -> Result<()> {
// If so, skip doing the mount. This facilitates mounting the sharedfs automatically
// in the guest before the agent service starts.
if storage.source == MOUNT_GUEST_TAG && is_mounted(&storage.mount_point)? {
warn!(logger, "kataShared already mounted, ignoring...");
warn!(
logger,
"{} already mounted on {}, ignoring...", MOUNT_GUEST_TAG, &storage.mount_point
);
return Ok(());
}
@@ -900,7 +936,7 @@ mod tests {
let msg = format!("{}: result: {:?}", msg, result);
if d.error_contains.is_empty() {
assert!(result.is_ok(), msg);
assert!(result.is_ok(), "{}", msg);
// Cleanup
unsafe {
@@ -912,7 +948,7 @@ mod tests {
let msg = format!("{}: umount result: {:?}", msg, result);
assert!(ret == 0, msg);
assert!(ret == 0, "{}", msg);
};
continue;
@@ -920,7 +956,7 @@ mod tests {
let err = result.unwrap_err();
let error_msg = format!("{}", err);
assert!(error_msg.contains(d.error_contains), msg);
assert!(error_msg.contains(d.error_contains), "{}", msg);
}
}
@@ -1026,13 +1062,13 @@ mod tests {
let msg = format!("{}: result: {:?}", msg, result);
if d.error_contains.is_empty() {
assert!(result.is_ok(), msg);
assert!(result.is_ok(), "{}", msg);
continue;
}
let error_msg = format!("{:#}", result.unwrap_err());
assert!(error_msg.contains(d.error_contains), msg);
assert!(error_msg.contains(d.error_contains), "{}", msg);
}
}
@@ -1108,6 +1144,7 @@ mod tests {
assert!(
format!("{}", err).contains("No such file or directory"),
"{}",
msg
);
}
@@ -1136,13 +1173,13 @@ mod tests {
if d.error_contains.is_empty() {
let fs_type = result.unwrap();
assert!(d.fs_type == fs_type, msg);
assert!(d.fs_type == fs_type, "{}", msg);
continue;
}
let error_msg = format!("{}", result.unwrap_err());
assert!(error_msg.contains(d.error_contains), msg);
assert!(error_msg.contains(d.error_contains), "{}", msg);
}
}
@@ -1291,34 +1328,34 @@ mod tests {
let msg = format!("{}: result: {:?}", msg, result);
if !d.error_contains.is_empty() {
assert!(result.is_err(), msg);
assert!(result.is_err(), "{}", msg);
let error_msg = format!("{}", result.unwrap_err());
assert!(error_msg.contains(d.error_contains), msg);
assert!(error_msg.contains(d.error_contains), "{}", msg);
continue;
}
assert!(result.is_ok(), msg);
assert!(result.is_ok(), "{}", msg);
let mounts = result.unwrap();
let count = mounts.len();
if !d.devices_cgroup {
assert!(count == 0, msg);
assert!(count == 0, "{}", msg);
continue;
}
// get_cgroup_mounts() adds the device cgroup plus two other mounts.
assert!(count == (1 + 2), msg);
assert!(count == (1 + 2), "{}", msg);
// First mount
assert!(mounts[0].eq(&first_mount), msg);
assert!(mounts[0].eq(&first_mount), "{}", msg);
// Last mount
assert!(mounts[2].eq(&last_mount), msg);
assert!(mounts[2].eq(&last_mount), "{}", msg);
// Devices cgroup
assert!(mounts[1].eq(&cg_devices_mount), msg);
assert!(mounts[1].eq(&cg_devices_mount), "{}", msg);
}
}
}


@@ -45,18 +45,18 @@ impl Namespace {
logger: logger.clone(),
path: String::from(""),
persistent_ns_dir: String::from(PERSISTENT_NS_DIR),
ns_type: NamespaceType::IPC,
ns_type: NamespaceType::Ipc,
hostname: None,
}
}
pub fn get_ipc(mut self) -> Self {
self.ns_type = NamespaceType::IPC;
self.ns_type = NamespaceType::Ipc;
self
}
pub fn get_uts(mut self, hostname: &str) -> Self {
self.ns_type = NamespaceType::UTS;
self.ns_type = NamespaceType::Uts;
if !hostname.is_empty() {
self.hostname = Some(String::from(hostname));
}
@@ -64,7 +64,7 @@ impl Namespace {
}
pub fn get_pid(mut self) -> Self {
self.ns_type = NamespaceType::PID;
self.ns_type = NamespaceType::Pid;
self
}
@@ -81,7 +81,7 @@ impl Namespace {
let ns_path = PathBuf::from(&self.persistent_ns_dir);
let ns_type = self.ns_type;
if ns_type == NamespaceType::PID {
if ns_type == NamespaceType::Pid {
return Err(anyhow!("Cannot persist namespace of PID type"));
}
let logger = self.logger.clone();
@@ -104,7 +104,7 @@ impl Namespace {
unshare(cf)?;
if ns_type == NamespaceType::UTS && hostname.is_some() {
if ns_type == NamespaceType::Uts && hostname.is_some() {
nix::unistd::sethostname(hostname.unwrap())?;
}
// Bind mount the new namespace from the current thread onto the mount point to persist it.
@@ -147,27 +147,27 @@ impl Namespace {
/// Represents the Namespace type.
#[derive(Clone, Copy, PartialEq)]
enum NamespaceType {
IPC,
UTS,
PID,
Ipc,
Uts,
Pid,
}
impl NamespaceType {
/// Get the string representation of the namespace type.
pub fn get(&self) -> &str {
match *self {
Self::IPC => "ipc",
Self::UTS => "uts",
Self::PID => "pid",
Self::Ipc => "ipc",
Self::Uts => "uts",
Self::Pid => "pid",
}
}
/// Get the associate flags with the namespace type.
pub fn get_flags(&self) -> CloneFlags {
match *self {
Self::IPC => CloneFlags::CLONE_NEWIPC,
Self::UTS => CloneFlags::CLONE_NEWUTS,
Self::PID => CloneFlags::CLONE_NEWPID,
Self::Ipc => CloneFlags::CLONE_NEWIPC,
Self::Uts => CloneFlags::CLONE_NEWUTS,
Self::Pid => CloneFlags::CLONE_NEWPID,
}
}
}
@@ -178,12 +178,6 @@ impl fmt::Debug for NamespaceType {
}
}
impl Default for NamespaceType {
fn default() -> Self {
NamespaceType::IPC
}
}
#[cfg(test)]
mod tests {
use super::{Namespace, NamespaceType};
@@ -234,15 +228,15 @@ mod tests {
#[test]
fn test_namespace_type() {
let ipc = NamespaceType::IPC;
let ipc = NamespaceType::Ipc;
assert_eq!("ipc", ipc.get());
assert_eq!(CloneFlags::CLONE_NEWIPC, ipc.get_flags());
let uts = NamespaceType::UTS;
let uts = NamespaceType::Uts;
assert_eq!("uts", uts.get());
assert_eq!(CloneFlags::CLONE_NEWUTS, uts.get_flags());
let pid = NamespaceType::PID;
let pid = NamespaceType::Pid;
assert_eq!("pid", pid.get());
assert_eq!(CloneFlags::CLONE_NEWPID, pid.get_flags());
}


@@ -542,12 +542,10 @@ impl Handle {
ntype: NDA_UNSPEC as u8,
},
nlas: {
let mut nlas = vec![];
nlas.push(Nla::Destination(match ip {
let mut nlas = vec![Nla::Destination(match ip {
IpAddr::V4(v4) => v4.octets().to_vec(),
IpAddr::V6(v6) => v6.octets().to_vec(),
}));
})];
if !neigh.lladdr.is_empty() {
nlas.push(Nla::LinkLocalAddress(


@@ -20,9 +20,8 @@ use anyhow::{anyhow, Context, Result};
use oci::{LinuxNamespace, Root, Spec};
use protobuf::{RepeatedField, SingularPtrField};
use protocols::agent::{
AgentDetails, CopyFileRequest, GuestDetailsResponse, Interfaces, ListProcessesResponse,
Metrics, OOMEvent, ReadStreamResponse, Routes, StatsContainerResponse, WaitProcessResponse,
WriteStreamResponse,
AgentDetails, CopyFileRequest, GuestDetailsResponse, Interfaces, Metrics, OOMEvent,
ReadStreamResponse, Routes, StatsContainerResponse, WaitProcessResponse, WriteStreamResponse,
};
use protocols::empty::Empty;
use protocols::health::{
@@ -370,7 +369,6 @@ impl AgentService {
let s = self.sandbox.clone();
let mut resp = WaitProcessResponse::new();
let pid: pid_t;
let stream;
let (exit_send, mut exit_recv) = tokio::sync::mpsc::channel(100);
@@ -381,22 +379,20 @@ impl AgentService {
"exec-id" => eid.clone()
);
{
let exit_rx = {
let mut sandbox = s.lock().await;
let p = find_process(&mut sandbox, cid.as_str(), eid.as_str(), false)?;
stream = p.get_reader(StreamType::ExitPipeR);
p.exit_watchers.push(exit_send);
pid = p.pid;
}
if stream.is_some() {
info!(sl!(), "reading exit pipe");
p.exit_rx.clone()
};
let reader = stream.unwrap();
let mut content: Vec<u8> = vec![0, 1];
let _ = reader.lock().await.read(&mut content).await;
if let Some(mut exit_rx) = exit_rx {
info!(sl!(), "cid {} eid {} waiting for exit signal", &cid, &eid);
while exit_rx.changed().await.is_ok() {}
info!(sl!(), "cid {} eid {} received exit signal", &cid, &eid);
}
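// Minimal sketch (standalone, not agent code) of the watch-channel pattern
// used above: every receiver wakes on each send, and changed() returns Err
// once the sender is dropped, which is what ends the wait loop.
//   let (exit_tx, mut exit_rx) = tokio::sync::watch::channel(false);
//   tokio::spawn(async move {
//       while exit_rx.changed().await.is_ok() {}
//       // reached only after exit_tx is dropped
//   });
//   drop(exit_tx); // e.g. when the Process is cleaned up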
let mut sandbox = s.lock().await;
@@ -576,91 +572,6 @@ impl protocols::agent_ttrpc::AgentService for AgentService {
.map_err(|e| ttrpc_error(ttrpc::Code::INTERNAL, e.to_string()))
}
async fn list_processes(
&self,
_ctx: &TtrpcContext,
req: protocols::agent::ListProcessesRequest,
) -> ttrpc::Result<ListProcessesResponse> {
let cid = req.container_id.clone();
let format = req.format.clone();
let mut args = req.args.into_vec();
let mut resp = ListProcessesResponse::new();
let s = Arc::clone(&self.sandbox);
let mut sandbox = s.lock().await;
let ctr = sandbox.get_container(&cid).ok_or_else(|| {
ttrpc_error(
ttrpc::Code::INVALID_ARGUMENT,
"invalid container id".to_string(),
)
})?;
let pids = ctr.processes().unwrap();
match format.as_str() {
"table" => {}
"json" => {
resp.process_list = serde_json::to_vec(&pids).unwrap();
return Ok(resp);
}
_ => {
return Err(ttrpc_error(
ttrpc::Code::INVALID_ARGUMENT,
"invalid format!".to_string(),
));
}
}
// format "table"
if args.is_empty() {
// default argument
args = vec!["-ef".to_string()];
}
let output = tokio::process::Command::new("ps")
.args(args.as_slice())
.stdout(Stdio::piped())
.output()
.await
.expect("ps failed");
let out: String = String::from_utf8(output.stdout).unwrap();
let mut lines: Vec<String> = out.split('\n').map(|v| v.to_string()).collect();
let pid_index = lines[0]
.split_whitespace()
.position(|v| v == "PID")
.unwrap();
let mut result = String::new();
result.push_str(lines[0].as_str());
lines.remove(0);
for line in &lines {
if line.trim().is_empty() {
continue;
}
let fields: Vec<String> = line.split_whitespace().map(|v| v.to_string()).collect();
if fields.len() < pid_index + 1 {
warn!(sl!(), "corrupted output?");
continue;
}
let pid = fields[pid_index].trim().parse::<i32>().unwrap();
for p in &pids {
if pid == *p {
result.push_str(line.as_str());
}
}
}
resp.process_list = Vec::from(result);
Ok(resp)
}
async fn update_container(
&self,
_ctx: &TtrpcContext,
@@ -1636,11 +1547,6 @@ fn cleanup_process(p: &mut Process) -> Result<()> {
let _ = unistd::close(p.term_master.unwrap())?;
}
if p.exit_pipe_r.is_some() {
p.close_stream(StreamType::ExitPipeR);
let _ = unistd::close(p.exit_pipe_r.unwrap())?;
}
p.notify_term_close();
p.parent_stdin = None;
@@ -2018,9 +1924,9 @@ mod tests {
let msg = format!("{}, result: {:?}", msg, result);
if result.is_ok() {
assert!(!d.expect_error, msg);
assert!(!d.expect_error, "{}", msg);
} else {
assert!(d.expect_error, msg);
assert!(d.expect_error, "{}", msg);
}
}
}


@@ -8,6 +8,7 @@ use crate::mount::{get_mount_fs_type, remove_mounts, TYPE_ROOTFS};
use crate::namespace::Namespace;
use crate::netlink::Handle;
use crate::network::Network;
use crate::uevent::{Uevent, UeventMatcher};
use anyhow::{anyhow, Context, Result};
use libc::pid_t;
use oci::{Hook, Hooks};
@@ -25,8 +26,11 @@ use std::path::Path;
use std::sync::Arc;
use std::{thread, time};
use tokio::sync::mpsc::{channel, Receiver, Sender};
use tokio::sync::oneshot;
use tokio::sync::Mutex;
type UeventWatcher = (Box<dyn UeventMatcher>, oneshot::Sender<Uevent>);
#[derive(Debug)]
pub struct Sandbox {
pub logger: Logger,
@@ -36,7 +40,8 @@ pub struct Sandbox {
pub network: Network,
pub mounts: Vec<String>,
pub container_mounts: HashMap<String, Vec<String>>,
pub pci_device_map: HashMap<String, String>,
pub uevent_map: HashMap<String, Uevent>,
pub uevent_watchers: Vec<Option<UeventWatcher>>,
pub shared_utsns: Namespace,
pub shared_ipcns: Namespace,
pub sandbox_pidns: Option<Namespace>,
@@ -65,7 +70,8 @@ impl Sandbox {
containers: HashMap::new(),
mounts: Vec::new(),
container_mounts: HashMap::new(),
pci_device_map: HashMap::new(),
uevent_map: HashMap::new(),
uevent_watchers: Vec::new(),
shared_utsns: Namespace::new(&logger),
shared_ipcns: Namespace::new(&logger),
sandbox_pidns: None,


@@ -22,6 +22,9 @@ async fn handle_sigchild(logger: Logger, sandbox: Arc<Mutex<Sandbox>>) -> Result
info!(logger, "handling signal"; "signal" => "SIGCHLD");
loop {
// Avoid reaping the undesirable child's signal, e.g., execute_hook's
// The lock should be released immediately.
rustjail::container::WAIT_PID_LOCKER.lock().await;
let result = wait::waitpid(
Some(Pid::from_raw(-1)),
Some(WaitPidFlag::WNOHANG | WaitPidFlag::__WALL),
@@ -55,13 +58,6 @@ async fn handle_sigchild(logger: Logger, sandbox: Arc<Mutex<Sandbox>>) -> Result
}
let mut p = process.unwrap();
if p.exit_pipe_w.is_none() {
info!(logger, "process exit pipe not set");
continue;
}
let pipe_write = p.exit_pipe_w.unwrap();
let ret: i32;
match wait_status {
@@ -75,7 +71,7 @@ async fn handle_sigchild(logger: Logger, sandbox: Arc<Mutex<Sandbox>>) -> Result
}
p.exit_code = ret;
let _ = unistd::close(pipe_write);
let _ = p.exit_tx.take();
info!(logger, "notify term to close");
// close the socket file to notify readStdio to close terminal specifically

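The reaper change above makes the SIGCHLD loop briefly take WAIT_PID_LOCKER before each waitpid(-1) round, while execute_hook holds the same lock across its own spawn-and-wait, so the global reaper can no longer steal a hook child's exit status. A rough Go analogue of that locking pattern (the real code is Rust/tokio; every name below is invented for the sketch):

package reapersketch

import "sync"

// waitPidLocker plays the role of the agent's WAIT_PID_LOCKER: the hook
// runner holds it across its whole spawn-and-wait sequence, while the
// SIGCHLD reaper only takes and immediately releases it before each
// waitpid(-1) round.
var waitPidLocker sync.Mutex

// runHook spawns the hook and waits for it while holding the lock, so the
// reaper cannot reap the hook child's exit status first.
func runHook(spawnAndWait func()) {
    waitPidLocker.Lock()
    defer waitPidLocker.Unlock()
    spawnAndWait()
}

// reapOnce mirrors the diff's "The lock should be released immediately":
// acquiring and dropping the lock merely serializes the reaper behind any
// in-flight hook before it reaps with WNOHANG.
func reapOnce(waitpidNohang func()) {
    waitPidLocker.Lock()
    waitPidLocker.Unlock() // intentional empty critical section
    waitpidNohang()
}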

@@ -6,26 +6,38 @@
use crate::device::online_device;
use crate::linux_abi::*;
use crate::sandbox::Sandbox;
use crate::GLOBAL_DEVICE_WATCHER;
use crate::AGENT_CONFIG;
use slog::Logger;
use anyhow::Result;
use anyhow::{anyhow, Result};
use netlink_sys::{protocols, SocketAddr, TokioSocket};
use nix::errno::Errno;
use std::fmt::Debug;
use std::os::unix::io::FromRawFd;
use std::sync::Arc;
use tokio::select;
use tokio::sync::watch::Receiver;
use tokio::sync::Mutex;
#[derive(Debug, Default)]
struct Uevent {
action: String,
devpath: String,
devname: String,
subsystem: String,
// Convenience macro to obtain the scope logger
macro_rules! sl {
() => {
slog_scope::logger().new(o!("subsystem" => "uevent"))
};
}
#[derive(Debug, Default, Clone, PartialEq, Eq)]
pub struct Uevent {
pub action: String,
pub devpath: String,
pub devname: String,
pub subsystem: String,
seqnum: String,
interface: String,
pub interface: String,
}
pub trait UeventMatcher: Sync + Send + Debug + 'static {
fn is_match(&self, uev: &Uevent) -> bool;
}
impl Uevent {
@@ -52,89 +64,87 @@ impl Uevent {
event
}
// Check whether this is a block device hot-add event.
fn is_block_add_event(&self) -> bool {
let pci_root_bus_path = create_pci_root_bus_path();
self.action == U_EVENT_ACTION_ADD
&& self.subsystem == "block"
&& {
self.devpath.starts_with(pci_root_bus_path.as_str())
|| self.devpath.starts_with(ACPI_DEV_PATH) // NVDIMM/PMEM devices
}
&& !self.devname.is_empty()
}
async fn process_add(&self, logger: &Logger, sandbox: &Arc<Mutex<Sandbox>>) {
// Special case for memory hot-adds first
let online_path = format!("{}/{}/online", SYSFS_DIR, &self.devpath);
if online_path.starts_with(SYSFS_MEMORY_ONLINE_PATH) {
let _ = online_device(online_path.as_ref()).map_err(|e| {
error!(
*logger,
"failed to online device";
"device" => &self.devpath,
"error" => format!("{}", e),
)
});
return;
}
async fn handle_block_add_event(&self, sandbox: &Arc<Mutex<Sandbox>>) {
let pci_root_bus_path = create_pci_root_bus_path();
// Keep the same lock order as device::get_device_name(), otherwise it may cause deadlock.
let watcher = GLOBAL_DEVICE_WATCHER.clone();
let mut w = watcher.lock().await;
let mut sb = sandbox.lock().await;
// Add the device node name to the pci device map.
sb.pci_device_map
.insert(self.devpath.clone(), self.devname.clone());
// Record the event by sysfs path
sb.uevent_map.insert(self.devpath.clone(), self.clone());
// Notify watchers that are interested in the udev event.
// Close the channel after watcher has been notified.
let devpath = self.devpath.clone();
let empties: Vec<_> = w
.iter_mut()
.filter(|(dev_addr, _)| {
let pci_p = format!("{}/{}", pci_root_bus_path, *dev_addr);
// blk block device
devpath.starts_with(pci_p.as_str()) ||
// scsi block device
{
(*dev_addr).ends_with(SCSI_BLOCK_SUFFIX) &&
devpath.contains(*dev_addr)
} ||
// nvdimm/pmem device
{
let pmem_suffix = format!("/{}/{}", SCSI_BLOCK_SUFFIX, self.devname);
devpath.starts_with(ACPI_DEV_PATH) &&
devpath.ends_with(pmem_suffix.as_str()) &&
dev_addr.ends_with(pmem_suffix.as_str())
for watch in &mut sb.uevent_watchers {
if let Some((matcher, _)) = watch {
if matcher.is_match(&self) {
let (_, sender) = watch.take().unwrap();
let _ = sender.send(self.clone());
}
})
.map(|(k, sender)| {
let devname = self.devname.clone();
let sender = sender.take().unwrap();
let _ = sender.send(devname);
k.clone()
})
.collect();
// Remove notified nodes from the watcher map.
for empty in empties {
w.remove(&empty);
}
}
}
async fn process(&self, logger: &Logger, sandbox: &Arc<Mutex<Sandbox>>) {
if self.is_block_add_event() {
return self.handle_block_add_event(sandbox).await;
} else if self.action == U_EVENT_ACTION_ADD {
let online_path = format!("{}/{}/online", SYSFS_DIR, &self.devpath);
// It's a memory hot-add event.
if online_path.starts_with(SYSFS_MEMORY_ONLINE_PATH) {
let _ = online_device(online_path.as_ref()).map_err(|e| {
error!(
*logger,
"failed to online device";
"device" => &self.devpath,
"error" => format!("{}", e),
)
});
return;
}
if self.action == U_EVENT_ACTION_ADD {
return self.process_add(logger, sandbox).await;
}
debug!(*logger, "ignoring event"; "uevent" => format!("{:?}", self));
}
}
pub async fn wait_for_uevent(
sandbox: &Arc<Mutex<Sandbox>>,
matcher: impl UeventMatcher,
) -> Result<Uevent> {
let mut sb = sandbox.lock().await;
for uev in sb.uevent_map.values() {
if matcher.is_match(uev) {
info!(sl!(), "Device {:?} found in pci device map", uev);
return Ok(uev.clone());
}
}
// If the device is not yet in the uevent map, its hotplug event has not
// been received: create a one-shot channel and add it, together with the
// matcher, to the watchers list. Note this is done while the lock is
// held, so as not to miss any events from the global udev listener.
let (tx, rx) = tokio::sync::oneshot::channel::<Uevent>();
let idx = sb.uevent_watchers.len();
sb.uevent_watchers.push(Some((Box::new(matcher), tx)));
drop(sb); // unlock
info!(sl!(), "Waiting on channel for uevent notification\n");
let hotplug_timeout = AGENT_CONFIG.read().await.hotplug_timeout;
let uev = match tokio::time::timeout(hotplug_timeout, rx).await {
Ok(v) => v?,
Err(_) => {
let mut sb = sandbox.lock().await;
let matcher = sb.uevent_watchers[idx].take().unwrap().0;
return Err(anyhow!(
"Timeout after {:?} waiting for uevent {:?}",
hotplug_timeout,
&matcher
));
}
};
Ok(uev)
}
pub async fn watch_uevents(
sandbox: Arc<Mutex<Sandbox>>,
mut shutdown: Receiver<bool>,
@@ -199,3 +209,71 @@ pub async fn watch_uevents(
Ok(())
}
// Used in the device module unit tests
#[cfg(test)]
pub(crate) fn spawn_test_watcher(sandbox: Arc<Mutex<Sandbox>>, uev: Uevent) {
tokio::spawn(async move {
loop {
let mut sb = sandbox.lock().await;
for w in &mut sb.uevent_watchers {
if let Some((matcher, _)) = w {
if matcher.is_match(&uev) {
let (_, sender) = w.take().unwrap();
let _ = sender.send(uev);
return;
}
}
}
drop(sb); // unlock
}
});
}
#[cfg(test)]
mod tests {
use super::*;
#[derive(Debug, Clone, Copy)]
struct AlwaysMatch();
impl UeventMatcher for AlwaysMatch {
fn is_match(&self, _: &Uevent) -> bool {
true
}
}
#[tokio::test]
async fn test_wait_for_uevent() {
let uev = Uevent {
action: crate::linux_abi::U_EVENT_ACTION_ADD.to_string(),
subsystem: "test".to_string(),
devpath: "/test/sysfs/path".to_string(),
devname: "testdevname".to_string(),
..Default::default()
};
let matcher = AlwaysMatch();
let logger = slog::Logger::root(slog::Discard, o!());
let sandbox = Arc::new(Mutex::new(Sandbox::new(&logger).unwrap()));
let mut sb = sandbox.lock().await;
sb.uevent_map.insert(uev.devpath.clone(), uev.clone());
drop(sb); // unlock
let uev2 = wait_for_uevent(&sandbox, matcher).await;
assert!(uev2.is_ok());
assert_eq!(uev2.unwrap(), uev);
let mut sb = sandbox.lock().await;
sb.uevent_map.remove(&uev.devpath).unwrap();
drop(sb); // unlock
spawn_test_watcher(sandbox.clone(), uev.clone());
let uev2 = wait_for_uevent(&sandbox, matcher).await;
assert!(uev2.is_ok());
assert_eq!(uev2.unwrap(), uev);
}
}

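The uevent rework above replaces the devname-keyed watcher map with typed UeventMatcher objects plus one-shot channels: wait_for_uevent first scans uevent_map for an event that already arrived, otherwise it registers a matcher before dropping the sandbox lock and waits with hotplug_timeout. A rough Go analogue of that check-then-register flow (the agent is Rust/tokio; all names below are invented for the sketch):

package ueventsketch

import (
    "errors"
    "sync"
    "time"
)

type uevent struct{ devpath string }

type watcher struct {
    match func(uevent) bool
    ch    chan uevent // buffered(1): a one-shot notification
}

type sandbox struct {
    mu       sync.Mutex
    seen     map[string]uevent // events that arrived before anyone waited
    watchers []*watcher        // a listener sends on a matching watcher's ch
}

func (s *sandbox) waitForUevent(match func(uevent) bool, timeout time.Duration) (uevent, error) {
    s.mu.Lock()
    for _, uev := range s.seen {
        if match(uev) {
            s.mu.Unlock()
            return uev, nil // device was hotplugged before we started waiting
        }
    }
    // Register the watcher while still holding the lock so no event
    // delivered by the global listener can slip through unobserved.
    w := &watcher{match: match, ch: make(chan uevent, 1)}
    s.watchers = append(s.watchers, w)
    s.mu.Unlock()

    select {
    case uev := <-w.ch:
        return uev, nil
    case <-time.After(timeout):
        return uevent{}, errors.New("timeout waiting for uevent")
    }
}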

@@ -3,10 +3,14 @@
// SPDX-License-Identifier: Apache-2.0
//
use anyhow::Result;
use futures::StreamExt;
use std::io;
use std::io::ErrorKind;
use std::os::unix::io::{FromRawFd, RawFd};
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::sync::watch::Receiver;
use tokio_vsock::{Incoming, VsockListener, VsockStream};
// Size of I/O read buffer
const BUF_SIZE: usize = 8192;
@@ -52,6 +56,15 @@ where
Ok(total_bytes)
}
pub fn get_vsock_incoming(fd: RawFd) -> Incoming {
unsafe { VsockListener::from_raw_fd(fd).incoming() }
}
pub async fn get_vsock_stream(fd: RawFd) -> Result<VsockStream> {
let stream = get_vsock_incoming(fd).next().await.unwrap()?;
Ok(stream)
}
#[cfg(test)]
mod tests {
use super::*;


@@ -166,6 +166,7 @@ DEFAULTEXPFEATURES := []
#Default entropy source
DEFENTROPYSOURCE := /dev/urandom
DEFVALIDENTROPYSOURCES := [\"/dev/urandom\",\"/dev/random\",\"\"]
DEFDISABLEBLOCK := false
DEFSHAREDFS_QEMU_VIRTIOFS := virtio-fs
@@ -173,7 +174,7 @@ DEFVIRTIOFSDAEMON := $(LIBEXECDIR)/kata-qemu/virtiofsd
DEFVALIDVIRTIOFSDAEMONPATHS := [\"$(DEFVIRTIOFSDAEMON)\"]
# Default DAX mapping cache size in MiB
#if value is 0, DAX is not enabled
DEFVIRTIOFSCACHESIZE := 0
DEFVIRTIOFSCACHESIZE ?= 0
DEFVIRTIOFSCACHE ?= auto
# Format example:
# [\"-o\", \"arg1=xxx,arg2\", \"-o\", \"hello world\", \"--arg3=yyy\"]
@@ -454,6 +455,7 @@ USER_VARS += DEFFILEMEMBACKEND
USER_VARS += DEFVALIDFILEMEMBACKENDS
USER_VARS += DEFMSIZE9P
USER_VARS += DEFENTROPYSOURCE
USER_VARS += DEFVALIDENTROPYSOURCES
USER_VARS += DEFSANDBOXCGROUPONLY
USER_VARS += DEFBINDMOUNTS
USER_VARS += FEATURE_SELINUX


@@ -1 +0,0 @@
2.0.0

src/runtime/VERSION Symbolic link

@@ -0,0 +1 @@
../../VERSION


@@ -150,6 +150,10 @@ block_device_driver = "@DEFBLOCKSTORAGEDRIVER_ACRN@"
#debug_console_enabled = true
# Agent connection dialing timeout value in seconds
# (default: 30)
#dial_timeout = 30
[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of some additional


@@ -165,6 +165,10 @@ block_device_driver = "virtio-blk"
#debug_console_enabled = true
# Agent connection dialing timeout value in seconds
# (default: 30)
#dial_timeout = 30
[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of some additional


@@ -161,23 +161,23 @@ block_device_driver = "@DEFBLOCKSTORAGEDRIVER_FC@"
# This option changes the default hypervisor and kernel parameters
# to enable debug output where available.
#
#
# Default false
#enable_debug = true
# Disable the customizations done in the runtime when it detects
# that it is running on top a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
#
#
#disable_nesting_checks = true
# This is the msize used for 9p shares. It is the number of bytes
# This is the msize used for 9p shares. It is the number of bytes
# used for 9p packet payload.
#msize_9p = @DEFMSIZE9P@
# VFIO devices are hotplugged on a bridge by default.
# VFIO devices are hotplugged on a bridge by default.
# Enable hotplugging on root bus. This may be required for devices with
# a large PCI bar, as this is a current limitation with hotplugging on
# a large PCI bar, as this is a current limitation with hotplugging on
# a bridge. This value is valid for "pc" machine type.
# Default false
#hotplug_vfio_on_root_bus = true
@@ -194,6 +194,11 @@ block_device_driver = "@DEFBLOCKSTORAGEDRIVER_FC@"
# all practical purposes.
#entropy_source= "@DEFENTROPYSOURCE@"
# List of valid annotations values for entropy_source
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: @DEFVALIDENTROPYSOURCES@
valid_entropy_sources = @DEFVALIDENTROPYSOURCES@
# Path to OCI hook binaries in the *guest rootfs*.
# This does not affect host-side hooks which must instead be added to
# the OCI spec passed to the runtime.
@@ -282,6 +287,10 @@ kernel_modules=[]
#debug_console_enabled = true
# Agent connection dialing timeout value in seconds
# (default: 30)
#dial_timeout = 30
[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of some additional


@@ -296,6 +296,11 @@ pflashes = []
# all practical purposes.
#entropy_source= "@DEFENTROPYSOURCE@"
# List of valid annotations values for entropy_source
# The default if not set is empty (all annotations rejected.)
# Your distribution recommends: @DEFVALIDENTROPYSOURCES@
valid_entropy_sources = @DEFVALIDENTROPYSOURCES@
# Path to OCI hook binaries in the *guest rootfs*.
# This does not affect host-side hooks which must instead be added to
# the OCI spec passed to the runtime.
@@ -432,6 +437,10 @@ kernel_modules=[]
#debug_console_enabled = true
# Agent connection dialing timeout value in seconds
# (default: 30)
#dial_timeout = 30
[netmon]
# If enabled, the network monitoring process gets started when the
# sandbox is created. This allows for the detection of some additional


@@ -1,134 +0,0 @@
// Copyright (c) 2014,2015,2016 Docker, Inc.
// Copyright (c) 2017 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
//
package main
import (
"fmt"
"io"
"os"
"syscall"
"unsafe"
"golang.org/x/sys/unix"
)
var ptmxPath = "/dev/ptmx"
// Console represents a pseudo TTY.
type Console struct {
io.ReadWriteCloser
master *os.File
slavePath string
}
// isTerminal returns true if fd is a terminal, else false
func isTerminal(fd uintptr) bool {
var termios syscall.Termios
_, _, err := syscall.Syscall(syscall.SYS_IOCTL, fd, syscall.TCGETS, uintptr(unsafe.Pointer(&termios)))
return err == 0
}
// ConsoleFromFile creates a console from a file
func ConsoleFromFile(f *os.File) *Console {
return &Console{
master: f,
}
}
// NewConsole returns an initialized console that can be used within a container by copying bytes
// from the master side to the slave that is attached as the tty for the container's init process.
func newConsole() (*Console, error) {
master, err := os.OpenFile(ptmxPath, unix.O_RDWR|unix.O_NOCTTY|unix.O_CLOEXEC, 0)
if err != nil {
return nil, err
}
if err := saneTerminal(master); err != nil {
return nil, err
}
console, err := ptsname(master)
if err != nil {
return nil, err
}
if err := unlockpt(master); err != nil {
return nil, err
}
return &Console{
slavePath: console,
master: master,
}, nil
}
// File returns master
func (c *Console) File() *os.File {
return c.master
}
// Path to slave
func (c *Console) Path() string {
return c.slavePath
}
// Read from master
func (c *Console) Read(b []byte) (int, error) {
return c.master.Read(b)
}
// Write to master
func (c *Console) Write(b []byte) (int, error) {
return c.master.Write(b)
}
// Close master
func (c *Console) Close() error {
if m := c.master; m != nil {
return m.Close()
}
return nil
}
// unlockpt unlocks the slave pseudoterminal device corresponding to the master pseudoterminal referred to by f.
// unlockpt should be called before opening the slave side of a pty.
func unlockpt(f *os.File) error {
var u int32
if _, _, err := unix.Syscall(unix.SYS_IOCTL, f.Fd(), unix.TIOCSPTLCK, uintptr(unsafe.Pointer(&u))); err != 0 {
return err
}
return nil
}
// ptsname retrieves the name of the first available pts for the given master.
func ptsname(f *os.File) (string, error) {
var u uint32
if _, _, err := unix.Syscall(unix.SYS_IOCTL, f.Fd(), unix.TIOCGPTN, uintptr(unsafe.Pointer(&u))); err != 0 {
return "", err
}
return fmt.Sprintf("/dev/pts/%d", u), nil
}
// saneTerminal sets the necessary tty_ioctl(4)s to ensure that a pty pair
// created by us acts normally. In particular, a not-very-well-known default of
// Linux unix98 ptys is that they have +onlcr by default. While this isn't a
// problem for terminal emulators, because we relay data from the terminal we
// also relay that funky line discipline.
func saneTerminal(terminal *os.File) error {
// Go doesn't have a wrapper for any of the termios ioctls.
var termios unix.Termios
if _, _, err := unix.Syscall(unix.SYS_IOCTL, terminal.Fd(), unix.TCGETS, uintptr(unsafe.Pointer(&termios))); err != 0 {
return fmt.Errorf("ioctl(tty, tcgets): %s", err.Error())
}
// Set -onlcr so we don't have to deal with \r.
termios.Oflag &^= unix.ONLCR
if _, _, err := unix.Syscall(unix.SYS_IOCTL, terminal.Fd(), unix.TCSETS, uintptr(unsafe.Pointer(&termios))); err != 0 {
return fmt.Errorf("ioctl(tty, tcsets): %s", err.Error())
}
return nil
}


@@ -1,129 +0,0 @@
// Copyright (c) 2017 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
//
package main
import (
"io/ioutil"
"os"
"testing"
ktu "github.com/kata-containers/kata-containers/src/runtime/pkg/katatestutils"
"github.com/stretchr/testify/assert"
)
func TestConsoleFromFile(t *testing.T) {
assert := assert.New(t)
console := ConsoleFromFile(os.Stdout)
assert.NotNil(console.File(), "console file is nil")
}
func TestNewConsole(t *testing.T) {
if tc.NotValid(ktu.NeedRoot()) {
t.Skip(testDisabledAsNonRoot)
}
assert := assert.New(t)
console, err := newConsole()
assert.NoError(err, "failed to create a new console: %s", err)
defer console.Close()
assert.NotEmpty(console.Path(), "console path is empty")
assert.NotNil(console.File(), "console file is nil")
}
func TestIsTerminal(t *testing.T) {
if tc.NotValid(ktu.NeedRoot()) {
t.Skip(testDisabledAsNonRoot)
}
assert := assert.New(t)
var fd uintptr = 4
assert.False(isTerminal(fd), "Fd %d is not a terminal", fd)
console, err := newConsole()
assert.NoError(err, "failed to create a new console: %s", err)
defer console.Close()
fd = console.File().Fd()
assert.True(isTerminal(fd), "Fd %d is a terminal", fd)
}
func TestReadWrite(t *testing.T) {
assert := assert.New(t)
// write operation
f, err := ioutil.TempFile(os.TempDir(), ".tty")
assert.NoError(err, "failed to create a temporal file")
defer os.Remove(f.Name())
console := ConsoleFromFile(f)
assert.NotNil(console)
defer console.Close()
msgWrite := "hello"
l, err := console.Write([]byte(msgWrite))
assert.NoError(err, "failed to write message: %s", msgWrite)
assert.Equal(len(msgWrite), l)
console.master.Sync()
console.master.Seek(0, 0)
// Read operation
msgRead := make([]byte, len(msgWrite))
l, err = console.Read(msgRead)
assert.NoError(err, "failed to read message: %s", msgWrite)
assert.Equal(len(msgWrite), l)
assert.Equal(msgWrite, string(msgRead))
}
func TestNewConsoleFail(t *testing.T) {
assert := assert.New(t)
orgPtmxPath := ptmxPath
defer func() { ptmxPath = orgPtmxPath }()
// OpenFile failure
ptmxPath = "/this/file/does/not/exist"
c, err := newConsole()
assert.Error(err)
assert.Nil(c)
// saneTerminal failure
f, err := ioutil.TempFile("", "")
assert.NoError(err)
assert.NoError(f.Close())
defer os.Remove(f.Name())
ptmxPath = f.Name()
c, err = newConsole()
assert.Error(err)
assert.Nil(c)
}
func TestConsoleClose(t *testing.T) {
assert := assert.New(t)
// nil master
c := &Console{}
assert.NoError(c.Close())
f, err := ioutil.TempFile("", "")
assert.NoError(err)
defer os.Remove(f.Name())
c.master = f
assert.NoError(c.Close())
}
func TestConsolePtsnameFail(t *testing.T) {
assert := assert.New(t)
pts, err := ptsname(nil)
assert.Error(err)
assert.Empty(pts)
}


@@ -389,13 +389,6 @@ EXAMPLES:
if verbose {
kataLog.Logger.SetLevel(logrus.InfoLevel)
}
ctx, err := cliContextToContext(context)
if err != nil {
return err
}
span, _ := katautils.Trace(ctx, "check")
defer span.End()
if !context.Bool("no-network-checks") && os.Getenv(noNetworkEnvVar) == "" {
cmd := RelCmdCheck
@@ -407,8 +400,7 @@ EXAMPLES:
if os.Geteuid() == 0 {
kataLog.Warn("Not running network checks as super user")
} else {
err = HandleReleaseVersions(cmd, version, context.Bool("include-all-releases"))
err := HandleReleaseVersions(cmd, version, context.Bool("include-all-releases"))
if err != nil {
return err
}
@@ -424,7 +416,7 @@ EXAMPLES:
return errors.New("check: cannot determine runtime config")
}
err = setCPUtype(runtimeConfig.HypervisorType)
err := setCPUtype(runtimeConfig.HypervisorType)
if err != nil {
return err
}
@@ -437,7 +429,6 @@ EXAMPLES:
}
err = hostIsVMContainerCapable(details)
if err != nil {
return err
}


@@ -13,7 +13,6 @@ import (
"strings"
"github.com/BurntSushi/toml"
"github.com/kata-containers/kata-containers/src/runtime/pkg/katautils"
"github.com/kata-containers/kata-containers/src/runtime/pkg/utils"
vc "github.com/kata-containers/kata-containers/src/runtime/virtcontainers"
exp "github.com/kata-containers/kata-containers/src/runtime/virtcontainers/experimental"
@@ -448,14 +447,6 @@ var kataEnvCLICommand = cli.Command{
},
},
Action: func(context *cli.Context) error {
ctx, err := cliContextToContext(context)
if err != nil {
return err
}
span, _ := katautils.Trace(ctx, "kata-env")
defer span.End()
return handleSettings(defaultOutputFile, context)
},
}


@@ -1,91 +1,13 @@
// Copyright (c) 2018 IBM
// Copyright (c) 2021 IBM
//
// SPDX-License-Identifier: Apache-2.0
//
package main
import (
"fmt"
vcUtils "github.com/kata-containers/kata-containers/src/runtime/virtcontainers/utils"
"path/filepath"
goruntime "runtime"
)
func getExpectedHostDetails(tmpdir string) (HostInfo, error) {
type filesToCreate struct {
file string
contents string
}
const expectedKernelVersion = "99.1"
const expectedArch = goruntime.GOARCH
expectedDistro := DistroInfo{
Name: "Foo",
Version: "42",
}
expectedCPU := CPUInfo{
Vendor: "moi",
Model: "awesome XI",
}
expectedHostDetails := HostInfo{
Kernel: expectedKernelVersion,
Architecture: expectedArch,
Distro: expectedDistro,
CPU: expectedCPU,
VMContainerCapable: true,
SupportVSocks: vcUtils.SupportsVsocks(),
}
testProcCPUInfo := filepath.Join(tmpdir, "cpuinfo")
testOSRelease := filepath.Join(tmpdir, "os-release")
// XXX: This file is *NOT* created by this function on purpose
// (to ensure the only file checked by the tests is
// testOSRelease). osReleaseClr handling is tested in
// utils_test.go.
testOSReleaseClr := filepath.Join(tmpdir, "os-release-clr")
testProcVersion := filepath.Join(tmpdir, "proc-version")
// override
procVersion = testProcVersion
osRelease = testOSRelease
osReleaseClr = testOSReleaseClr
procCPUInfo = testProcCPUInfo
procVersionContents := fmt.Sprintf("Linux version %s a b c",
expectedKernelVersion)
osReleaseContents := fmt.Sprintf(`
NAME="%s"
VERSION_ID="%s"
`, expectedDistro.Name, expectedDistro.Version)
procCPUInfoContents := fmt.Sprintf(`
%s : %s
processor 0: version = 00, identification = 3929E7, %s = %s
`,
archCPUVendorField,
expectedCPU.Vendor,
archCPUModelField,
expectedCPU.Model)
data := []filesToCreate{
{procVersion, procVersionContents},
{osRelease, osReleaseContents},
{procCPUInfo, procCPUInfoContents},
}
for _, d := range data {
err := createFile(d.file, d.contents)
if err != nil {
return HostInfo{}, err
}
}
return expectedHostDetails, nil
expectedVendor := "moi"
expectedModel := "awesome XI"
expectedVMContainerCapable := true
return genericGetExpectedHostDetails(tmpdir, expectedVendor, expectedModel, expectedVMContainerCapable)
}


@@ -161,7 +161,7 @@ func makeRuntimeConfig(prefixDir string) (configFile string, config oci.RuntimeC
return "", oci.RuntimeConfig{}, err
}
_, config, err = katautils.LoadConfiguration(configFile, true, false)
_, config, err = katautils.LoadConfiguration(configFile, true)
if err != nil {
return "", oci.RuntimeConfig{}, err
}


@@ -14,7 +14,6 @@ import (
"net/http"
"net/url"
"os"
"path/filepath"
"strings"
"sync"
@@ -26,7 +25,6 @@ import (
clientUtils "github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/agent/protocols/client"
"github.com/pkg/errors"
"github.com/urfave/cli"
"go.opentelemetry.io/otel/label"
)
const (
@@ -38,10 +36,8 @@ const (
subCommandName = "exec"
// command-line parameters name
paramRuntimeNamespace = "runtime-namespace"
paramDebugConsolePort = "kata-debug-port"
defaultKernelParamDebugConsoleVPortValue = 1026
defaultRuntimeNamespace = "k8s.io"
)
var (
@@ -57,34 +53,16 @@ var kataExecCLICommand = cli.Command{
Name: subCommandName,
Usage: "Enter into guest by debug console",
Flags: []cli.Flag{
cli.StringFlag{
Name: paramRuntimeNamespace,
Usage: "Namespace that containerd or CRI-O are using for containers. (Default: k8s.io, only works for containerd)",
},
cli.Uint64Flag{
Name: paramDebugConsolePort,
Usage: "Port that debug console is listening on. (Default: 1026)",
},
},
Action: func(context *cli.Context) error {
ctx, err := cliContextToContext(context)
if err != nil {
return err
}
span, _ := katautils.Trace(ctx, subCommandName)
defer span.End()
namespace := context.String(paramRuntimeNamespace)
if namespace == "" {
namespace = defaultRuntimeNamespace
}
span.SetAttributes(label.Key("namespace").String(namespace))
port := context.Uint64(paramDebugConsolePort)
if port == 0 {
port = defaultKernelParamDebugConsoleVPortValue
}
span.SetAttributes(label.Key("port").Uint64(port))
sandboxID := context.Args().Get(0)
@@ -92,9 +70,8 @@ var kataExecCLICommand = cli.Command{
return err
}
span.SetAttributes(label.Key("sandbox").String(sandboxID))
conn, err := getConn(sandboxID, port)
conn, err := getConn(namespace, sandboxID, port)
if err != nil {
return err
}
@@ -177,9 +154,8 @@ func (s *iostream) Read(data []byte) (n int, err error) {
return s.conn.Read(data)
}
func getConn(namespace, sandboxID string, port uint64) (net.Conn, error) {
socketAddr := filepath.Join(string(filepath.Separator), "containerd-shim", namespace, sandboxID, "shim-monitor.sock")
client, err := kataMonitor.BuildUnixSocketClient(socketAddr, defaultTimeout)
func getConn(sandboxID string, port uint64) (net.Conn, error) {
client, err := kataMonitor.BuildShimClient(sandboxID, defaultTimeout)
if err != nil {
return nil, err
}
@@ -190,7 +166,7 @@ func getConn(namespace, sandboxID string, port uint64) (net.Conn, error) {
}
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("Failed to get %s: %d", socketAddr, resp.StatusCode)
return nil, fmt.Errorf("Failure from %s shim-monitor: %d", sandboxID, resp.StatusCode)
}
defer resp.Body.Close()


@@ -0,0 +1,38 @@
// Copyright (c) 2021 Apple Inc.
//
// SPDX-License-Identifier: Apache-2.0
//
package main
import (
"fmt"
kataMonitor "github.com/kata-containers/kata-containers/src/runtime/pkg/kata-monitor"
"github.com/kata-containers/kata-containers/src/runtime/pkg/katautils"
"github.com/urfave/cli"
)
var kataMetricsCLICommand = cli.Command{
Name: "metrics",
Usage: "gather metrics associated with infrastructure used to run a sandbox",
UsageText: "metrics <sandbox id>",
Action: func(context *cli.Context) error {
sandboxID := context.Args().Get(0)
if err := katautils.VerifyContainerID(sandboxID); err != nil {
return err
}
// Get the metrics!
metrics, err := kataMonitor.GetSandboxMetrics(sandboxID)
if err != nil {
return err
}
fmt.Printf("%s\n", metrics)
return nil
},
}

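The new `metrics` sub-command above is a thin wrapper: verify the sandbox ID, then print the raw Prometheus text returned by the exported kata-monitor helper. A minimal standalone caller, assuming only the import path and GetSandboxMetrics signature shown in these diffs:

package main

import (
    "fmt"

    kataMonitor "github.com/kata-containers/kata-containers/src/runtime/pkg/kata-monitor"
)

func main() {
    // "<sandbox-id>" is a placeholder; the CLI takes it from its arguments.
    metrics, err := kataMonitor.GetSandboxMetrics("<sandbox-id>")
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println(metrics)
}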

@@ -27,9 +27,6 @@ import (
specs "github.com/opencontainers/runtime-spec/specs-go"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/label"
otelTrace "go.opentelemetry.io/otel/trace"
)
// specConfig is the name of the file holding the containers configuration
@@ -125,6 +122,7 @@ var runtimeCommands = []cli.Command{
kataCheckCLICommand,
kataEnvCLICommand,
kataExecCLICommand,
kataMetricsCLICommand,
factoryCLICommand,
}
@@ -132,10 +130,6 @@ var runtimeCommands = []cli.Command{
// parsing occurs.
var runtimeBeforeSubcommands = beforeSubcommands
// runtimeAfterSubcommands is the function to run after the command-line
// has been parsed.
var runtimeAfterSubcommands = afterSubcommands
// runtimeCommandNotFound is the function to handle an invalid sub-command.
var runtimeCommandNotFound = commandNotFound
@@ -168,10 +162,6 @@ func init() {
// setupSignalHandler sets up signal handling, starting a go routine to deal
// with signals as they arrive.
//
// Note that the specified context is NOT used to create a trace span (since the
// first (root) span must be created in beforeSubcommands()): it is simply
// used to pass to the crash handling functions to finalise tracing.
func setupSignalHandler(ctx context.Context) {
signals.SetLogger(kataLog)
@@ -181,10 +171,6 @@ func setupSignalHandler(ctx context.Context) {
signal.Notify(sigCh, sig)
}
dieCb := func() {
katautils.StopTracing(ctx)
}
go func() {
for {
sig := <-sigCh
@@ -198,7 +184,6 @@ func setupSignalHandler(ctx context.Context) {
if signals.FatalSignal(nativeSignal) {
kataLog.WithField("signal", sig).Error("received fatal signal")
signals.Die(dieCb)
} else if debug && signals.NonFatalSignal(nativeSignal) {
kataLog.WithField("signal", sig).Debug("handling signal")
signals.Backtrace()
@@ -210,18 +195,6 @@ func setupSignalHandler(ctx context.Context) {
// setExternalLoggers registers the specified logger with the external
// packages which accept a logger to handle their own logging.
func setExternalLoggers(ctx context.Context, logger *logrus.Entry) {
var span otelTrace.Span
// Only create a new span if a root span already exists. This is
// required to ensure that this function will not disrupt the root
// span logic by creating a span before the proper root span has been
// created.
if otelTrace.SpanFromContext(ctx) != nil {
span, ctx = katautils.Trace(ctx, "setExternalLoggers")
defer span.End()
}
// Set virtcontainers logger.
vci.SetLogger(ctx, logger)
@@ -244,7 +217,6 @@ func beforeSubcommands(c *cli.Context) error {
var configFile string
var runtimeConfig oci.RuntimeConfig
var err error
var traceFlushFunc func()
katautils.SetConfigOptions(name, defaultRuntimeConfiguration, defaultSysConfRuntimeConfiguration)
@@ -270,7 +242,6 @@ func beforeSubcommands(c *cli.Context) error {
// Issue: https://github.com/kata-containers/runtime/issues/2428
ignoreConfigLogs := false
var traceRootSpan string
subCmdIsCheckCmd := (c.NArg() >= 1 && ((c.Args()[0] == "kata-check") || (c.Args()[0] == "check")))
if subCmdIsCheckCmd {
@@ -302,16 +273,13 @@ func beforeSubcommands(c *cli.Context) error {
cmdName := c.Args().First()
if c.App.Command(cmdName) != nil {
kataLog = kataLog.WithField("command", cmdName)
// Name for the root span (used for tracing) now the
// sub-command name is known.
traceRootSpan = name + " " + cmdName
}
// Since a context is required, pass a new (throw-away) one - we
// cannot use the main context as tracing hasn't been enabled yet
// (meaning any spans created at this point will be silently ignored).
setExternalLoggers(context.Background(), kataLog)
ctx, err := cliContextToContext(c)
if err != nil {
return err
}
setExternalLoggers(ctx, kataLog)
if c.NArg() == 1 && (c.Args()[0] == "kata-env" || c.Args()[0] == "env") {
// simply report the logging setup
@@ -319,26 +287,12 @@ func beforeSubcommands(c *cli.Context) error {
}
}
configFile, runtimeConfig, err = katautils.LoadConfiguration(c.GlobalString("kata-config"), ignoreConfigLogs, false)
configFile, runtimeConfig, err = katautils.LoadConfiguration(c.GlobalString("kata-config"), ignoreConfigLogs)
if err != nil {
fatal(err)
}
if !subCmdIsCheckCmd {
debug = runtimeConfig.Debug
if traceRootSpan != "" {
// Create the tracer.
//
// Note: no spans are created until the command-line has been parsed.
// This delays collection of trace data slightly but benefits the user by
// ensuring the first span is the name of the sub-command being
// invoked from the command-line.
traceFlushFunc, err = setupTracing(c, traceRootSpan, &runtimeConfig)
if err != nil {
return err
}
defer traceFlushFunc()
}
}
args := strings.Join(c.Args(), " ")
@@ -377,36 +331,6 @@ func handleShowConfig(context *cli.Context) {
}
}
func setupTracing(context *cli.Context, rootSpanName string, config *oci.RuntimeConfig) (func(), error) {
flush, err := katautils.CreateTracer(name, config)
if err != nil {
return nil, err
}
ctx, err := cliContextToContext(context)
if err != nil {
return nil, err
}
// Create the root span now that the sub-command name is
// known.
//
// Note that this "Before" function is called (and returns)
// before the subcommand handler is called. As such, we cannot
// "Finish()" the span here - that is handled in the .After
// function.
tracer := otel.Tracer("kata")
newCtx, span := tracer.Start(ctx, rootSpanName)
span.SetAttributes(label.Key("subsystem").String("runtime"))
// Add tracer to metadata and update the context
context.App.Metadata["tracer"] = tracer
context.App.Metadata["context"] = newCtx
return flush, nil
}
// add supported experimental features in context
func addExpFeatures(clictx *cli.Context, runtimeConfig oci.RuntimeConfig) error {
ctx, err := cliContextToContext(clictx)
@@ -420,22 +344,11 @@ func addExpFeatures(clictx *cli.Context, runtimeConfig oci.RuntimeConfig) error
}
ctx = exp.ContextWithExp(ctx, exps)
// Add tracer to metadata and update the context
// Add experimental features to metadata and update the context
clictx.App.Metadata["context"] = ctx
return nil
}
func afterSubcommands(c *cli.Context) error {
ctx, err := cliContextToContext(c)
if err != nil {
return err
}
katautils.StopTracing(ctx)
return nil
}
// function called when an invalid command is specified which causes the
// runtime to error.
func commandNotFound(c *cli.Context, command string) {
@@ -502,7 +415,6 @@ func createRuntimeApp(ctx context.Context, args []string) error {
app.Flags = runtimeFlags
app.Commands = runtimeCommands
app.Before = runtimeBeforeSubcommands
app.After = runtimeAfterSubcommands
app.EnableBashCompletion = true
// allow sub-commands to access context
@@ -578,12 +490,5 @@ func cliContextToContext(c *cli.Context) (context.Context, error) {
func main() {
// create a new empty context
ctx := context.Background()
dieCb := func() {
katautils.StopTracing(ctx)
}
defer signals.HandlePanic(dieCb)
createRuntime(ctx)
}

View File

@@ -869,7 +869,7 @@ func TestMainCreateRuntime(t *testing.T) {
assert := assert.New(t)
const cmd = "foo"
const msg = "moo FAILURE"
const msg = "moo message"
resetCLIGlobals()
@@ -942,7 +942,7 @@ func TestMainFatalWriter(t *testing.T) {
assert := assert.New(t)
const cmd = "foo"
const msg = "moo FAILURE"
const msg = "moo message"
// create buffer to save logger output
buf := &bytes.Buffer{}


@@ -6,7 +6,6 @@
package main
import (
"github.com/kata-containers/kata-containers/src/runtime/pkg/katautils"
"github.com/urfave/cli"
)
@@ -14,14 +13,6 @@ var versionCLICommand = cli.Command{
Name: "version",
Usage: "display version details",
Action: func(context *cli.Context) error {
ctx, err := cliContextToContext(context)
if err != nil {
return err
}
span, _ := katautils.Trace(ctx, "version")
defer span.End()
cli.VersionPrinter(context)
return nil
},


@@ -23,6 +23,7 @@ import (
// only register the proto type
_ "github.com/containerd/containerd/runtime/linux/runctypes"
_ "github.com/containerd/containerd/runtime/v2/runc/options"
crioption "github.com/containerd/cri-containerd/pkg/api/runtimeoptions/v1"
"github.com/kata-containers/kata-containers/src/runtime/pkg/katautils"
@@ -191,7 +192,7 @@ func loadRuntimeConfig(s *service, r *taskAPI.CreateTaskRequest, anno map[string
configPath = os.Getenv("KATA_CONF_FILE")
}
_, runtimeConfig, err := katautils.LoadConfiguration(configPath, false, true)
_, runtimeConfig, err := katautils.LoadConfiguration(configPath, false)
if err != nil {
return nil, err
}


@@ -16,7 +16,6 @@ import (
"strconv"
"strings"
"github.com/containerd/containerd/namespaces"
cdshim "github.com/containerd/containerd/runtime/v2/shim"
vc "github.com/kata-containers/kata-containers/src/runtime/virtcontainers"
vcAnnotations "github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/annotations"
@@ -129,11 +128,7 @@ func decodeAgentMetrics(body string) []*dto.MetricFamily {
func (s *service) startManagementServer(ctx context.Context, ociSpec *specs.Spec) {
// metrics socket will under sandbox's bundle path
metricsAddress, err := socketAddress(ctx, s.id)
if err != nil {
shimMgtLog.WithError(err).Error("failed to create socket address")
return
}
metricsAddress := SocketAddress(s.id)
listener, err := cdshim.NewSocket(metricsAddress)
if err != nil {
@@ -166,7 +161,7 @@ func (s *service) startManagementServer(ctx context.Context, ociSpec *specs.Spec
svr.Serve(listener)
}
// mountServeDebug provides a debug endpoint
// mountPprofHandle provides a debug endpoint
func (s *service) mountPprofHandle(m *http.ServeMux, ociSpec *specs.Spec) {
// return if not enabled
@@ -188,10 +183,8 @@ func (s *service) mountPprofHandle(m *http.ServeMux, ociSpec *specs.Spec) {
m.Handle("/debug/pprof/trace", http.HandlerFunc(pprof.Trace))
}
func socketAddress(ctx context.Context, id string) (string, error) {
ns, err := namespaces.NamespaceRequired(ctx)
if err != nil {
return "", err
}
return filepath.Join(string(filepath.Separator), "containerd-shim", ns, id, "shim-monitor.sock"), nil
// SocketAddress returns the address of the abstract domain socket for communicating with the
// shim management endpoint
func SocketAddress(id string) string {
return filepath.Join(string(filepath.Separator), "run", "vc", id, "shim-monitor")
}

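With the change above, the shim-monitor socket moves from a containerd-namespace-derived path (/containerd-shim/&lt;ns&gt;/&lt;id&gt;/shim-monitor.sock) to a fixed location keyed only by the sandbox ID, which is why callers such as kata exec and kata-monitor can drop their namespace plumbing. A minimal sketch of the new scheme:

package main

import (
    "fmt"
    "path/filepath"
)

// socketAddress mirrors the new SocketAddress helper from the diff: the shim
// management endpoint now lives at /run/vc/<sandbox-id>/shim-monitor.
func socketAddress(id string) string {
    return filepath.Join(string(filepath.Separator), "run", "vc", id, "shim-monitor")
}

func main() {
    fmt.Println(socketAddress("0123456789abcdef"))
    // /run/vc/0123456789abcdef/shim-monitor
}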

@@ -87,7 +87,6 @@ func newTtyIO(ctx context.Context, stdin, stdout, stderr string, console bool) (
func ioCopy(exitch, stdinCloser chan struct{}, tty *ttyIO, stdinPipe io.WriteCloser, stdoutPipe, stderrPipe io.Reader) {
var wg sync.WaitGroup
var closeOnce sync.Once
if tty.Stdin != nil {
wg.Add(1)
@@ -109,7 +108,11 @@ func ioCopy(exitch, stdinCloser chan struct{}, tty *ttyIO, stdinPipe io.WriteClo
defer bufPool.Put(p)
io.CopyBuffer(tty.Stdout, stdoutPipe, *p)
wg.Done()
closeOnce.Do(tty.close)
if tty.Stdin != nil {
// close stdin to make the other routine stop
tty.Stdin.Close()
tty.Stdin = nil
}
}()
}
@@ -124,6 +127,6 @@ func ioCopy(exitch, stdinCloser chan struct{}, tty *ttyIO, stdinPipe io.WriteClo
}
wg.Wait()
closeOnce.Do(tty.close)
tty.close()
close(exitch)
}

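The ioCopy change above removes the sync.Once dance: instead of whichever copier finishes first closing the whole tty, the stdout copier now only closes Stdin (unblocking the stdin copier), and the tty is closed exactly once after wg.Wait(). A condensed sketch of the resulting shutdown order, trimmed to the pieces the flow needs:

package iosketch

import (
    "io"
    "sync"
)

type ttyIO struct {
    Stdin  io.ReadCloser
    Stdout io.WriteCloser
}

func (t *ttyIO) close() {
    if t.Stdin != nil {
        t.Stdin.Close()
        t.Stdin = nil
    }
    t.Stdout.Close()
}

// ioCopy mirrors the revised shutdown order: the stdout copier closes Stdin
// so the stdin copier unblocks, and the tty itself is closed exactly once,
// after both copiers have finished.
func ioCopy(exitch chan struct{}, tty *ttyIO, stdinPipe io.WriteCloser, stdoutPipe io.Reader) {
    var wg sync.WaitGroup

    if tty.Stdin != nil {
        wg.Add(1)
        go func() {
            defer wg.Done()
            io.Copy(stdinPipe, tty.Stdin)
        }()
    }

    wg.Add(1)
    go func() {
        defer wg.Done()
        io.Copy(tty.Stdout, stdoutPipe)
        if tty.Stdin != nil {
            // close stdin to make the other copy routine stop
            tty.Stdin.Close()
            tty.Stdin = nil
        }
    }()

    wg.Wait()
    tty.close()
    close(exitch)
}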

@@ -142,7 +142,7 @@ func watchOOMEvents(ctx context.Context, s *service) {
for {
select {
case <-ctx.Done():
case <-s.ctx.Done():
return
default:
containerID, err := s.sandbox.GetOOMEvent(ctx)


@@ -31,7 +31,7 @@ require (
github.com/gogo/googleapis v1.4.0 // indirect
github.com/gogo/protobuf v1.3.1
github.com/hashicorp/go-multierror v1.0.0
github.com/kata-containers/govmm v0.0.0-20210112013750-7d320e8f5dca
github.com/kata-containers/govmm v0.0.0-20210428163604-f0e9a35308ee
github.com/mdlayher/vsock v0.0.0-20191108225356-d9c65923cb8f
github.com/opencontainers/image-spec v1.0.1 // indirect
github.com/opencontainers/runc v1.0.0-rc9.0.20200102164712-2b52db75279c


@@ -276,6 +276,10 @@ github.com/juju/testing v0.0.0-20190613124551-e81189438503/go.mod h1:63prj8cnj0t
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kata-containers/govmm v0.0.0-20210112013750-7d320e8f5dca h1:UdXFthwasAPnmv37gLJUEFsW9FaabYA+mM6FoSi8kzU=
github.com/kata-containers/govmm v0.0.0-20210112013750-7d320e8f5dca/go.mod h1:VmAHbsL5lLfzHW/MNL96NVLF840DNEV5i683kISgFKk=
github.com/kata-containers/govmm v0.0.0-20210329205824-7fbc685865d2 h1:oDJsQ5avmW8PFUkSJdsuaGL3SR4/YQRno51GNGl+IDU=
github.com/kata-containers/govmm v0.0.0-20210329205824-7fbc685865d2/go.mod h1:VmAHbsL5lLfzHW/MNL96NVLF840DNEV5i683kISgFKk=
github.com/kata-containers/govmm v0.0.0-20210428163604-f0e9a35308ee h1:M4N7AdSHgWz/ubV5AZQdeqmK+9Ztpea6oqeXgk8GCHk=
github.com/kata-containers/govmm v0.0.0-20210428163604-f0e9a35308ee/go.mod h1:VmAHbsL5lLfzHW/MNL96NVLF840DNEV5i683kISgFKk=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1 h1:mweAR1A6xJ3oS2pRaGiHgQ4OO8tzTaLawm8vnODuwDk=


@@ -176,7 +176,7 @@ func (km *KataMonitor) aggregateSandboxMetrics(encoder expfmt.Encoder) error {
for sandboxID, namespace := range sandboxes {
wg.Add(1)
go func(sandboxID, namespace string, results chan<- []*dto.MetricFamily) {
sandboxMetrics, err := km.getSandboxMetrics(sandboxID, namespace)
sandboxMetrics, err := getParsedMetrics(sandboxID)
if err != nil {
monitorLog.WithError(err).WithField("sandbox_id", sandboxID).Errorf("failed to get metrics for sandbox")
}
@@ -229,13 +229,12 @@ func (km *KataMonitor) aggregateSandboxMetrics(encoder expfmt.Encoder) error {
return err
}
}
return nil
}
// getSandboxMetrics will get sandbox's metrics from shim
func (km *KataMonitor) getSandboxMetrics(sandboxID, namespace string) ([]*dto.MetricFamily, error) {
body, err := km.doGet(sandboxID, namespace, defaultTimeout, "metrics")
func getParsedMetrics(sandboxID string) ([]*dto.MetricFamily, error) {
body, err := doGet(sandboxID, defaultTimeout, "metrics")
if err != nil {
return nil, err
}
@@ -243,6 +242,16 @@ func (km *KataMonitor) getSandboxMetrics(sandboxID, namespace string) ([]*dto.Me
return parsePrometheusMetrics(sandboxID, body)
}
// GetSandboxMetrics will get sandbox's metrics from shim
func GetSandboxMetrics(sandboxID string) (string, error) {
body, err := doGet(sandboxID, defaultTimeout, "metrics")
if err != nil {
return "", err
}
return string(body), nil
}
// parsePrometheusMetrics will decode metrics from Prometheus text format
// and return array of *dto.MetricFamily with an ASC order
func parsePrometheusMetrics(sandboxID string, body []byte) ([]*dto.MetricFamily, error) {

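aggregateSandboxMetrics keeps its fan-out/fan-in shape after the refactor above: one goroutine per sandbox pushes its result into a channel, and everything is collected once the WaitGroup drains. That shape in isolation (helper names invented for the sketch):

package monitorsketch

import "sync"

// gatherAll fans one fetch goroutine out per sandbox and funnels the results
// through a buffered channel, collecting them after every goroutine is done.
func gatherAll(sandboxIDs []string, get func(id string) (string, error)) []string {
    results := make(chan string, len(sandboxIDs))
    var wg sync.WaitGroup
    for _, id := range sandboxIDs {
        wg.Add(1)
        go func(id string) {
            defer wg.Done()
            if m, err := get(id); err == nil {
                results <- m
            }
        }(id)
    }
    wg.Wait()
    close(results)
    var out []string
    for r := range results {
        out = append(out, r)
    }
    return out
}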

@@ -87,13 +87,8 @@ func (km *KataMonitor) GetAgentURL(w http.ResponseWriter, r *http.Request) {
commonServeError(w, http.StatusBadRequest, err)
return
}
namespace, err := km.getSandboxNamespace(sandboxID)
if err != nil {
commonServeError(w, http.StatusBadRequest, err)
return
}
data, err := km.doGet(sandboxID, namespace, defaultTimeout, "agent-url")
data, err := doGet(sandboxID, defaultTimeout, "agent-url")
if err != nil {
commonServeError(w, http.StatusBadRequest, err)
return


@@ -11,6 +11,8 @@ import (
"net"
"net/http"
"time"
shim "github.com/kata-containers/kata-containers/src/runtime/containerd-shim-v2"
)
const (
@@ -33,16 +35,13 @@ func getSandboxIDFromReq(r *http.Request) (string, error) {
return "", fmt.Errorf("sandbox not found in %+v", r.URL.Query())
}
func (km *KataMonitor) buildShimClient(sandboxID, namespace string, timeout time.Duration) (*http.Client, error) {
socketAddr, err := km.getMonitorAddress(sandboxID, namespace)
if err != nil {
return nil, err
}
return BuildUnixSocketClient(socketAddr, timeout)
// BuildShimClient builds and returns an http client for communicating with the provided sandbox
func BuildShimClient(sandboxID string, timeout time.Duration) (*http.Client, error) {
return buildUnixSocketClient(shim.SocketAddress(sandboxID), timeout)
}
// BuildUnixSocketClient build http client for Unix socket
func BuildUnixSocketClient(socketAddr string, timeout time.Duration) (*http.Client, error) {
// buildUnixSocketClient build http client for Unix socket
func buildUnixSocketClient(socketAddr string, timeout time.Duration) (*http.Client, error) {
transport := &http.Transport{
DisableKeepAlives: true,
Dial: func(proto, addr string) (conn net.Conn, err error) {
@@ -61,8 +60,8 @@ func BuildUnixSocketClient(socketAddr string, timeout time.Duration) (*http.Clie
return client, nil
}
func (km *KataMonitor) doGet(sandboxID, namespace string, timeoutInSeconds time.Duration, urlPath string) ([]byte, error) {
client, err := km.buildShimClient(sandboxID, namespace, timeoutInSeconds)
func doGet(sandboxID string, timeoutInSeconds time.Duration, urlPath string) ([]byte, error) {
client, err := BuildShimClient(sandboxID, timeoutInSeconds)
if err != nil {
return nil, err
}

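buildUnixSocketClient is the generic half of the new BuildShimClient: an http.Client whose transport ignores the request URL's host and always dials the given Unix socket. Reconstructed in isolation from the pieces visible in the diff (the target path in main is a placeholder):

package main

import (
    "fmt"
    "io/ioutil"
    "net"
    "net/http"
    "time"
)

// buildUnixSocketClient mirrors the helper in the diff: every request is
// dialed to socketAddr, whatever host appears in the URL.
func buildUnixSocketClient(socketAddr string, timeout time.Duration) *http.Client {
    transport := &http.Transport{
        DisableKeepAlives: true,
        Dial: func(proto, addr string) (net.Conn, error) {
            return net.Dial("unix", socketAddr)
        },
    }
    client := &http.Client{Transport: transport}
    if timeout > 0 {
        client.Timeout = timeout
    }
    return client
}

func main() {
    client := buildUnixSocketClient("/run/vc/<sandbox-id>/shim-monitor", 5*time.Second)
    resp, err := client.Get("http://shim/metrics") // host is ignored by the Dial above
    if err != nil {
        fmt.Println(err)
        return
    }
    defer resp.Body.Close()
    body, _ := ioutil.ReadAll(resp.Body)
    fmt.Printf("%s\n", body)
}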

@@ -99,6 +99,7 @@ type hypervisor struct {
PFlashList []string `toml:"pflashes"`
VhostUserStorePathList []string `toml:"valid_vhost_user_store_paths"`
FileBackedMemRootList []string `toml:"valid_file_mem_backends"`
EntropySourceList []string `toml:"valid_entropy_sources"`
EnableAnnotations []string `toml:"enable_annotations"`
RxRateLimiterMaxRate uint64 `toml:"rx_rate_limiter_max_rate"`
TxRateLimiterMaxRate uint64 `toml:"tx_rate_limiter_max_rate"`
@@ -153,6 +154,7 @@ type agent struct {
Debug bool `toml:"enable_debug"`
Tracing bool `toml:"enable_tracing"`
DebugConsoleEnabled bool `toml:"debug_console_enabled"`
DialTimeout uint32 `toml:"dial_timeout"`
}
type netmon struct {
@@ -470,6 +472,10 @@ func (a agent) debugConsoleEnabled() bool {
return a.DebugConsoleEnabled
}
func (a agent) dialTimout() uint32 {
return a.DialTimeout
}
func (a agent) debug() bool {
return a.Debug
}
@@ -557,6 +563,7 @@ func newFirecrackerHypervisorConfig(h hypervisor) (vc.HypervisorConfig, error) {
MemorySize: h.defaultMemSz(),
MemSlots: h.defaultMemSlots(),
EntropySource: h.GetEntropySource(),
EntropySourceList: h.EntropySourceList,
DefaultBridges: h.defaultBridges(),
DisableBlockDeviceUse: h.DisableBlockDeviceUse,
HugePages: h.HugePages,
@@ -663,6 +670,7 @@ func newQemuHypervisorConfig(h hypervisor) (vc.HypervisorConfig, error) {
MemOffset: h.defaultMemOffset(),
VirtioMem: h.VirtioMem,
EntropySource: h.GetEntropySource(),
EntropySourceList: h.EntropySourceList,
DefaultBridges: h.defaultBridges(),
DisableBlockDeviceUse: h.DisableBlockDeviceUse,
SharedFS: sharedFS,
@@ -754,6 +762,7 @@ func newAcrnHypervisorConfig(h hypervisor) (vc.HypervisorConfig, error) {
MemorySize: h.defaultMemSz(),
MemSlots: h.defaultMemSlots(),
EntropySource: h.GetEntropySource(),
EntropySourceList: h.EntropySourceList,
DefaultBridges: h.defaultBridges(),
HugePages: h.HugePages,
Mlock: !h.Swap,
@@ -830,6 +839,7 @@ func newClhHypervisorConfig(h hypervisor) (vc.HypervisorConfig, error) {
MemOffset: h.defaultMemOffset(),
VirtioMem: h.VirtioMem,
EntropySource: h.GetEntropySource(),
EntropySourceList: h.EntropySourceList,
DefaultBridges: h.defaultBridges(),
DisableBlockDeviceUse: h.DisableBlockDeviceUse,
SharedFS: sharedFS,
@@ -905,7 +915,7 @@ func updateRuntimeConfigHypervisor(configPath string, tomlConf tomlConfig, confi
return nil
}
func updateRuntimeConfigAgent(configPath string, tomlConf tomlConfig, config *oci.RuntimeConfig, builtIn bool) error {
func updateRuntimeConfigAgent(configPath string, tomlConf tomlConfig, config *oci.RuntimeConfig) error {
for _, agent := range tomlConf.Agent {
config.AgentConfig = vc.KataAgentConfig{
LongLiveConn: true,
@@ -915,6 +925,7 @@ func updateRuntimeConfigAgent(configPath string, tomlConf tomlConfig, config *oc
TraceType: agent.traceType(),
KernelModules: agent.kernelModules(),
EnableDebugConsole: agent.debugConsoleEnabled(),
DialTimeout: agent.dialTimout(),
}
}
@@ -980,12 +991,12 @@ func SetKernelParams(runtimeConfig *oci.RuntimeConfig) error {
return nil
}
func updateRuntimeConfig(configPath string, tomlConf tomlConfig, config *oci.RuntimeConfig, builtIn bool) error {
func updateRuntimeConfig(configPath string, tomlConf tomlConfig, config *oci.RuntimeConfig) error {
if err := updateRuntimeConfigHypervisor(configPath, tomlConf, config); err != nil {
return err
}
if err := updateRuntimeConfigAgent(configPath, tomlConf, config, builtIn); err != nil {
if err := updateRuntimeConfigAgent(configPath, tomlConf, config); err != nil {
return err
}
@@ -1076,7 +1087,7 @@ func initConfig() (config oci.RuntimeConfig, err error) {
//
// All paths are resolved fully meaning if this function does not return an
// error, all paths are valid at the time of the call.
func LoadConfiguration(configPath string, ignoreLogging, builtIn bool) (resolvedConfigPath string, config oci.RuntimeConfig, err error) {
func LoadConfiguration(configPath string, ignoreLogging bool) (resolvedConfigPath string, config oci.RuntimeConfig, err error) {
config, err = initConfig()
if err != nil {
@@ -1118,7 +1129,7 @@ func LoadConfiguration(configPath string, ignoreLogging, builtIn bool) (resolved
}).Info("loaded configuration")
}
if err := updateRuntimeConfig(resolved, tomlConf, &config, builtIn); err != nil {
if err := updateRuntimeConfig(resolved, tomlConf, &config); err != nil {
return "", config, err
}


@@ -260,7 +260,7 @@ func testLoadConfiguration(t *testing.T, dir string,
assert.NoError(t, err)
}
resolvedConfigPath, config, err := LoadConfiguration(file, ignoreLogging, false)
resolvedConfigPath, config, err := LoadConfiguration(file, ignoreLogging)
if expectFail {
assert.Error(t, err)
@@ -566,7 +566,7 @@ func TestMinimalRuntimeConfig(t *testing.T) {
t.Error(err)
}
_, config, err := LoadConfiguration(configPath, false, false)
_, config, err := LoadConfiguration(configPath, false)
if err != nil {
t.Fatal(err)
}
@@ -1398,7 +1398,7 @@ func TestUpdateRuntimeConfigurationVMConfig(t *testing.T) {
},
}
err := updateRuntimeConfig("", tomlConf, &config, false)
err := updateRuntimeConfig("", tomlConf, &config)
assert.NoError(err)
assert.Equal(expectedVMConfig, config.HypervisorConfig.MemorySize)
@@ -1416,7 +1416,7 @@ func TestUpdateRuntimeConfigurationFactoryConfig(t *testing.T) {
tomlConf := tomlConfig{Factory: factory{Template: true}}
err := updateRuntimeConfig("", tomlConf, &config, false)
err := updateRuntimeConfig("", tomlConf, &config)
assert.NoError(err)
assert.Equal(expectedFactoryConfig, config.FactoryConfig)
@@ -1443,7 +1443,7 @@ func TestUpdateRuntimeConfigurationInvalidKernelParams(t *testing.T) {
}
}
err := updateRuntimeConfig("", tomlConf, &config, false)
err := updateRuntimeConfig("", tomlConf, &config)
assert.EqualError(err, "Empty kernel parameter")
}


@@ -30,7 +30,7 @@ var _ export.SpanExporter = (*kataSpanExporter)(nil)
// ExportSpans exports SpanData to Jaeger.
func (e *kataSpanExporter) ExportSpans(ctx context.Context, spans []*export.SpanData) error {
for _, span := range spans {
kataUtilsLogger.Infof("Reporting span %+v", span)
kataUtilsLogger.Tracef("Reporting span %+v", span)
}
return nil
}


@@ -131,6 +131,9 @@ const (
// PCIeRootPort is a PCIe Root Port, the PCIe device should be hotplugged to this port.
PCIeRootPort DeviceDriver = "pcie-root-port"
// Loader is the Loader device driver.
Loader DeviceDriver = "loader"
)
func isDimmSupported(config *Config) bool {
@@ -224,6 +227,9 @@ type ObjectType string
const (
// MemoryBackendFile represents a guest memory mapped file.
MemoryBackendFile ObjectType = "memory-backend-file"
// TDXGuest represents a TDX object
TDXGuest ObjectType = "tdx-guest"
)
// Object is a qemu object representation.
@@ -246,6 +252,12 @@ type Object struct {
// Size is the object size in bytes
Size uint64
// Debug this is a debug object
Debug bool
// File is the device file
File string
}
// Valid returns true if the Object structure is valid and complete.
@@ -256,6 +268,11 @@ func (object Object) Valid() bool {
return false
}
case TDXGuest:
if object.ID == "" || object.File == "" || object.DeviceID == "" {
return false
}
default:
return false
}
@@ -280,6 +297,13 @@ func (object Object) QemuParams(config *Config) []string {
objectParams = append(objectParams, fmt.Sprintf(",size=%d", object.Size))
deviceParams = append(deviceParams, fmt.Sprintf(",memdev=%s", object.ID))
case TDXGuest:
objectParams = append(objectParams, string(object.Type))
objectParams = append(objectParams, fmt.Sprintf(",id=%s", object.ID))
if object.Debug {
objectParams = append(objectParams, ",debug=on")
}
deviceParams = append(deviceParams, fmt.Sprintf(",file=%s", object.File))
}
qemuParams = append(qemuParams, "-device")
@@ -405,7 +429,7 @@ func (fsdev FSDevice) QemuParams(config *Config) []string {
}
deviceParams = append(deviceParams, fmt.Sprintf(",fsdev=%s", fsdev.ID))
deviceParams = append(deviceParams, fmt.Sprintf(",mount_tag=%s", fsdev.MountTag))
if fsdev.Transport.isVirtioPCI(config) {
if fsdev.Transport.isVirtioPCI(config) && fsdev.ROMFile != "" {
deviceParams = append(deviceParams, fmt.Sprintf(",romfile=%s", fsdev.ROMFile))
}
if fsdev.Transport.isVirtioCCW(config) {
@@ -538,7 +562,7 @@ func (cdev CharDevice) QemuParams(config *Config) []string {
if cdev.Name != "" {
deviceParams = append(deviceParams, fmt.Sprintf(",name=%s", cdev.Name))
}
if cdev.Driver == VirtioSerial && cdev.Transport.isVirtioPCI(config) {
if cdev.Driver == VirtioSerial && cdev.Transport.isVirtioPCI(config) && cdev.ROMFile != "" {
deviceParams = append(deviceParams, fmt.Sprintf(",romfile=%s", cdev.ROMFile))
}
@@ -552,7 +576,7 @@ func (cdev CharDevice) QemuParams(config *Config) []string {
cdevParams = append(cdevParams, string(cdev.Backend))
cdevParams = append(cdevParams, fmt.Sprintf(",id=%s", cdev.ID))
if cdev.Backend == Socket {
cdevParams = append(cdevParams, fmt.Sprintf(",path=%s,server,nowait", cdev.Path))
cdevParams = append(cdevParams, fmt.Sprintf(",path=%s,server=on,wait=off", cdev.Path))
} else {
cdevParams = append(cdevParams, fmt.Sprintf(",path=%s", cdev.Path))
}
@@ -808,7 +832,7 @@ func (netdev NetDevice) QemuDeviceParams(config *Config) []string {
deviceParams = append(deviceParams, netdev.mqParameter(config))
}
if netdev.Transport.isVirtioPCI(config) {
if netdev.Transport.isVirtioPCI(config) && netdev.ROMFile != "" {
deviceParams = append(deviceParams, fmt.Sprintf(",romfile=%s", netdev.ROMFile))
}
@@ -941,7 +965,7 @@ func (dev SerialDevice) QemuParams(config *Config) []string {
deviceParams = append(deviceParams, fmt.Sprintf(",%s", s))
}
deviceParams = append(deviceParams, fmt.Sprintf(",id=%s", dev.ID))
if dev.Transport.isVirtioPCI(config) {
if dev.Transport.isVirtioPCI(config) && dev.ROMFile != "" {
deviceParams = append(deviceParams, fmt.Sprintf(",romfile=%s", dev.ROMFile))
if dev.Driver == VirtioSerial && dev.MaxPorts != 0 {
deviceParams = append(deviceParams, fmt.Sprintf(",max_ports=%d", dev.MaxPorts))
@@ -1072,7 +1096,7 @@ func (blkdev BlockDevice) QemuParams(config *Config) []string {
deviceParams = append(deviceParams, ",config-wce=off")
}
if blkdev.Transport.isVirtioPCI(config) {
if blkdev.Transport.isVirtioPCI(config) && blkdev.ROMFile != "" {
deviceParams = append(deviceParams, fmt.Sprintf(",romfile=%s", blkdev.ROMFile))
}
@@ -1138,6 +1162,40 @@ func (dev PVPanicDevice) QemuParams(config *Config) []string {
return []string{"-device", "pvpanic"}
}
// LoaderDevice represents a qemu loader device.
type LoaderDevice struct {
File string
ID string
}
// Valid returns true if there is a valid structure defined for LoaderDevice
func (dev LoaderDevice) Valid() bool {
if dev.File == "" {
return false
}
if dev.ID == "" {
return false
}
return true
}
// QemuParams returns the qemu parameters built out of this loader device.
func (dev LoaderDevice) QemuParams(config *Config) []string {
var qemuParams []string
var devParams []string
devParams = append(devParams, "loader")
devParams = append(devParams, fmt.Sprintf("file=%s", dev.File))
devParams = append(devParams, fmt.Sprintf("id=%s", dev.ID))
qemuParams = append(qemuParams, "-device")
qemuParams = append(qemuParams, strings.Join(devParams, ","))
return qemuParams
}
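Assuming the type is used like the package's other devices, a LoaderDevice is valid once both fields are set and contributes a single `-device loader,...` pair. A usage sketch (firmware path is illustrative; the import path matches the vendored module listed below):

```Go
package main

import (
	"fmt"

	"github.com/kata-containers/govmm/qemu"
)

func main() {
	dev := qemu.LoaderDevice{File: "/usr/share/tdvf/TDVF.fd", ID: "fd0"}
	if dev.Valid() {
		fmt.Println(dev.QemuParams(nil))
		// [-device loader,file=/usr/share/tdvf/TDVF.fd,id=fd0]
	}
}
```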
// VhostUserDevice represents a qemu vhost-user device meant to be passed
// in to the guest
type VhostUserDevice struct {
@@ -1153,6 +1211,9 @@ type VhostUserDevice struct {
// ROMFile specifies the ROM file being used for this device.
ROMFile string
// DevNo identifies the CCW device for s390x.
DevNo string
// Transport is the virtio transport for this device.
Transport VirtioTransport
}
@@ -1217,88 +1278,150 @@ func (vhostuserDev VhostUserDevice) Valid() bool {
return true
}
// QemuNetParams builds QEMU netdev and device parameters for a VhostUserNet device
func (vhostuserDev VhostUserDevice) QemuNetParams(config *Config) []string {
var qemuParams []string
var netParams []string
var devParams []string
driver := vhostuserDev.deviceName(config)
if driver == "" {
return nil
}
netParams = append(netParams, "type=vhost-user")
netParams = append(netParams, fmt.Sprintf("id=%s", vhostuserDev.TypeDevID))
netParams = append(netParams, fmt.Sprintf("chardev=%s", vhostuserDev.CharDevID))
netParams = append(netParams, "vhostforce")
devParams = append(devParams, driver)
devParams = append(devParams, fmt.Sprintf("netdev=%s", vhostuserDev.TypeDevID))
devParams = append(devParams, fmt.Sprintf("mac=%s", vhostuserDev.Address))
if vhostuserDev.Transport.isVirtioPCI(config) && vhostuserDev.ROMFile != "" {
devParams = append(devParams, fmt.Sprintf("romfile=%s", vhostuserDev.ROMFile))
}
qemuParams = append(qemuParams, "-netdev")
qemuParams = append(qemuParams, strings.Join(netParams, ","))
qemuParams = append(qemuParams, "-device")
qemuParams = append(qemuParams, strings.Join(devParams, ","))
return qemuParams
}
// QemuSCSIParams builds QEMU device parameters for a VhostUserSCSI device
func (vhostuserDev VhostUserDevice) QemuSCSIParams(config *Config) []string {
var qemuParams []string
var devParams []string
driver := vhostuserDev.deviceName(config)
if driver == "" {
return nil
}
devParams = append(devParams, driver)
devParams = append(devParams, fmt.Sprintf("id=%s", vhostuserDev.TypeDevID))
devParams = append(devParams, fmt.Sprintf("chardev=%s", vhostuserDev.CharDevID))
if vhostuserDev.Transport.isVirtioPCI(config) && vhostuserDev.ROMFile != "" {
devParams = append(devParams, fmt.Sprintf("romfile=%s", vhostuserDev.ROMFile))
}
qemuParams = append(qemuParams, "-device")
qemuParams = append(qemuParams, strings.Join(devParams, ","))
return qemuParams
}
// QemuBlkParams builds QEMU device parameters for a VhostUserBlk device
func (vhostuserDev VhostUserDevice) QemuBlkParams(config *Config) []string {
var qemuParams []string
var devParams []string
driver := vhostuserDev.deviceName(config)
if driver == "" {
return nil
}
devParams = append(devParams, driver)
devParams = append(devParams, "logical_block_size=4096")
devParams = append(devParams, "size=512M")
devParams = append(devParams, fmt.Sprintf("chardev=%s", vhostuserDev.CharDevID))
if vhostuserDev.Transport.isVirtioPCI(config) && vhostuserDev.ROMFile != "" {
devParams = append(devParams, fmt.Sprintf("romfile=%s", vhostuserDev.ROMFile))
}
qemuParams = append(qemuParams, "-device")
qemuParams = append(qemuParams, strings.Join(devParams, ","))
return qemuParams
}
// QemuFSParams builds QEMU device parameters for a VhostUserFS device
func (vhostuserDev VhostUserDevice) QemuFSParams(config *Config) []string {
var qemuParams []string
var devParams []string
driver := vhostuserDev.deviceName(config)
if driver == "" {
return nil
}
devParams = append(devParams, driver)
devParams = append(devParams, fmt.Sprintf("chardev=%s", vhostuserDev.CharDevID))
devParams = append(devParams, fmt.Sprintf("tag=%s", vhostuserDev.Tag))
if vhostuserDev.CacheSize != 0 {
devParams = append(devParams, fmt.Sprintf("cache-size=%dM", vhostuserDev.CacheSize))
}
if vhostuserDev.SharedVersions {
devParams = append(devParams, "versiontable=/dev/shm/fuse_shared_versions")
}
if vhostuserDev.Transport.isVirtioCCW(config) {
devParams = append(devParams, fmt.Sprintf("devno=%s", vhostuserDev.DevNo))
}
if vhostuserDev.Transport.isVirtioPCI(config) && vhostuserDev.ROMFile != "" {
devParams = append(devParams, fmt.Sprintf("romfile=%s", vhostuserDev.ROMFile))
}
qemuParams = append(qemuParams, "-device")
qemuParams = append(qemuParams, strings.Join(devParams, ","))
return qemuParams
}
// QemuParams returns the qemu parameters built out of this vhostuser device.
func (vhostuserDev VhostUserDevice) QemuParams(config *Config) []string {
var qemuParams []string
var charParams []string
var netParams []string
var devParams []string
var driver string
charParams = append(charParams, "socket")
charParams = append(charParams, fmt.Sprintf("id=%s", vhostuserDev.CharDevID))
charParams = append(charParams, fmt.Sprintf("path=%s", vhostuserDev.SocketPath))
qemuParams = append(qemuParams, "-chardev")
qemuParams = append(qemuParams, strings.Join(charParams, ","))
switch vhostuserDev.VhostUserType {
// if network based vhost device:
case VhostUserNet:
driver = vhostuserDev.deviceName(config)
if driver == "" {
return nil
}
netParams = append(netParams, "type=vhost-user")
netParams = append(netParams, fmt.Sprintf("id=%s", vhostuserDev.TypeDevID))
netParams = append(netParams, fmt.Sprintf("chardev=%s", vhostuserDev.CharDevID))
netParams = append(netParams, "vhostforce")
devParams = append(devParams, driver)
devParams = append(devParams, fmt.Sprintf("netdev=%s", vhostuserDev.TypeDevID))
devParams = append(devParams, fmt.Sprintf("mac=%s", vhostuserDev.Address))
devParams = vhostuserDev.QemuNetParams(config)
case VhostUserSCSI:
driver = vhostuserDev.deviceName(config)
if driver == "" {
return nil
}
devParams = append(devParams, driver)
devParams = append(devParams, fmt.Sprintf("id=%s", vhostuserDev.TypeDevID))
devParams = append(devParams, fmt.Sprintf("chardev=%s", vhostuserDev.CharDevID))
devParams = vhostuserDev.QemuSCSIParams(config)
case VhostUserBlk:
driver = vhostuserDev.deviceName(config)
if driver == "" {
return nil
}
devParams = append(devParams, driver)
devParams = append(devParams, "logical_block_size=4096")
devParams = append(devParams, "size=512M")
devParams = append(devParams, fmt.Sprintf("chardev=%s", vhostuserDev.CharDevID))
devParams = vhostuserDev.QemuBlkParams(config)
case VhostUserFS:
driver = vhostuserDev.deviceName(config)
if driver == "" {
return nil
}
devParams = append(devParams, driver)
devParams = append(devParams, fmt.Sprintf("chardev=%s", vhostuserDev.CharDevID))
devParams = append(devParams, fmt.Sprintf("tag=%s", vhostuserDev.Tag))
if vhostuserDev.CacheSize != 0 {
devParams = append(devParams, fmt.Sprintf("cache-size=%dM", vhostuserDev.CacheSize))
}
if vhostuserDev.SharedVersions {
devParams = append(devParams, "versiontable=/dev/shm/fuse_shared_versions")
}
devParams = vhostuserDev.QemuFSParams(config)
default:
return nil
}
if vhostuserDev.Transport.isVirtioPCI(config) {
devParams = append(devParams, fmt.Sprintf("romfile=%s", vhostuserDev.ROMFile))
if devParams != nil {
return append(qemuParams, devParams...)
}
qemuParams = append(qemuParams, "-chardev")
qemuParams = append(qemuParams, strings.Join(charParams, ","))
// if network based vhost device:
if vhostuserDev.VhostUserType == VhostUserNet {
qemuParams = append(qemuParams, "-netdev")
qemuParams = append(qemuParams, strings.Join(netParams, ","))
}
qemuParams = append(qemuParams, "-device")
qemuParams = append(qemuParams, strings.Join(devParams, ","))
return qemuParams
return nil
}
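After the refactor, each vhost-user flavor owns its parameter builder (`QemuNetParams`, `QemuSCSIParams`, `QemuBlkParams`, `QemuFSParams`), and `QemuParams` shrinks to building the shared `-chardev` and dispatching on `VhostUserType`; each builder also honors the ROMFile guard and, for virtio-fs, the CCW `devno=`. A usage sketch, under the assumption that the public surface is as shown above (all values illustrative):

```Go
package main

import (
	"fmt"

	"github.com/kata-containers/govmm/qemu"
)

func main() {
	dev := qemu.VhostUserDevice{
		SocketPath:    "/run/vhost-user-fs.sock", // illustrative
		CharDevID:     "char-fs0",
		TypeDevID:     "fs0",
		Tag:           "kataShared",
		VhostUserType: qemu.VhostUserFS,
	}
	// QemuParams emits the shared -chardev, then defers to QemuFSParams.
	fmt.Println(dev.QemuParams(&qemu.Config{}))
}
```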
// deviceName returns the QEMU device name for the current combination of
@@ -1473,7 +1596,9 @@ func (vfioDev VFIODevice) QemuParams(config *Config) []string {
if vfioDev.DeviceID != "" {
deviceParams = append(deviceParams, fmt.Sprintf(",x-pci-device-id=%s", vfioDev.DeviceID))
}
deviceParams = append(deviceParams, fmt.Sprintf(",romfile=%s", vfioDev.ROMFile))
if vfioDev.ROMFile != "" {
deviceParams = append(deviceParams, fmt.Sprintf(",romfile=%s", vfioDev.ROMFile))
}
}
if vfioDev.Bus != "" {
@@ -1558,7 +1683,7 @@ func (scsiCon SCSIController) QemuParams(config *Config) []string {
if scsiCon.IOThread != "" {
devParams = append(devParams, fmt.Sprintf("iothread=%s", scsiCon.IOThread))
}
if scsiCon.Transport.isVirtioPCI(config) {
if scsiCon.Transport.isVirtioPCI(config) && scsiCon.ROMFile != "" {
devParams = append(devParams, fmt.Sprintf("romfile=%s", scsiCon.ROMFile))
}
@@ -1664,7 +1789,7 @@ func (bridgeDev BridgeDevice) QemuParams(config *Config) []string {
}
var transport VirtioTransport
if transport.isVirtioPCI(config) {
if transport.isVirtioPCI(config) && bridgeDev.ROMFile != "" {
deviceParam = append(deviceParam, fmt.Sprintf(",romfile=%s", bridgeDev.ROMFile))
}
@@ -1743,7 +1868,7 @@ func (vsock VSOCKDevice) QemuParams(config *Config) []string {
deviceParams = append(deviceParams, fmt.Sprintf(",id=%s", vsock.ID))
deviceParams = append(deviceParams, fmt.Sprintf(",%s=%d", VSOCKGuestCID, vsock.ContextID))
if vsock.Transport.isVirtioPCI(config) {
if vsock.Transport.isVirtioPCI(config) && vsock.ROMFile != "" {
deviceParams = append(deviceParams, fmt.Sprintf(",romfile=%s", vsock.ROMFile))
}
@@ -1816,7 +1941,7 @@ func (v RngDevice) QemuParams(config *Config) []string {
deviceParams = append(deviceParams, v.deviceName(config))
deviceParams = append(deviceParams, "rng="+v.ID)
if v.Transport.isVirtioPCI(config) {
if v.Transport.isVirtioPCI(config) && v.ROMFile != "" {
deviceParams = append(deviceParams, fmt.Sprintf("romfile=%s", v.ROMFile))
}
@@ -1893,7 +2018,7 @@ func (b BalloonDevice) QemuParams(config *Config) []string {
deviceParams = append(deviceParams, "id="+b.ID)
}
if b.Transport.isVirtioPCI(config) {
if b.Transport.isVirtioPCI(config) && b.ROMFile != "" {
deviceParams = append(deviceParams, fmt.Sprintf("romfile=%s", b.ROMFile))
}
@@ -2385,9 +2510,9 @@ func (config *Config) appendQMPSockets() {
qmpParams := append([]string{}, fmt.Sprintf("%s:", q.Type))
qmpParams = append(qmpParams, q.Name)
if q.Server {
qmpParams = append(qmpParams, ",server")
qmpParams = append(qmpParams, ",server=on")
if q.NoWait {
qmpParams = append(qmpParams, ",nowait")
qmpParams = append(qmpParams, ",wait=off")
}
}
@@ -2528,9 +2653,6 @@ func (config *Config) appendMemoryKnobs() {
if config.Memory.Size == "" {
return
}
if !isDimmSupported(config) {
return
}
var objMemParam, numaMemParam string
dimmName := "dimm1"
if config.Knobs.HugePages {
@@ -2553,8 +2675,13 @@ func (config *Config) appendMemoryKnobs() {
config.qemuParams = append(config.qemuParams, "-object")
config.qemuParams = append(config.qemuParams, objMemParam)
config.qemuParams = append(config.qemuParams, "-numa")
config.qemuParams = append(config.qemuParams, numaMemParam)
if isDimmSupported(config) {
config.qemuParams = append(config.qemuParams, "-numa")
config.qemuParams = append(config.qemuParams, numaMemParam)
} else {
config.qemuParams = append(config.qemuParams, "-machine")
config.qemuParams = append(config.qemuParams, "memory-backend="+dimmName)
}
}
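With this change the memory backend object is emitted unconditionally; only its attachment differs. Where DIMMs are supported it is wired up through `-numa`, elsewhere through `-machine memory-backend=`. A condensed sketch of the two resulting shapes (backend type and size are illustrative):

```Go
package main

import "fmt"

// memoryArgs sketches appendMemoryKnobs' two output shapes; dimmSupported
// stands in for isDimmSupported(config).
func memoryArgs(dimmSupported bool) []string {
	dimmName := "dimm1"
	args := []string{"-object", "memory-backend-ram,id=" + dimmName + ",size=2048M"}
	if dimmSupported {
		args = append(args, "-numa", "node,memdev="+dimmName)
	} else {
		args = append(args, "-machine", "memory-backend="+dimmName)
	}
	return args
}

func main() {
	fmt.Println(memoryArgs(true))
	fmt.Println(memoryArgs(false))
}
```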
func (config *Config) appendKnobs() {


@@ -267,10 +267,6 @@ func (q *QMP) readLoop(fromVMCh chan<- []byte) {
for scanner.Scan() {
line := scanner.Bytes()
if q.cfg.Logger.V(1) {
q.cfg.Logger.Infof("read from QMP: %s", string(line))
}
// A []byte channel transfers the slice header (underlying array pointer, len, cap)
// between sender and receiver, and the slice returned by scanner.Bytes() may point
// to data that will be overwritten by a subsequent call to Scan (reference from:
@@ -442,7 +438,6 @@ func (q *QMP) writeNextQMPCommand(cmdQueue *list.List) {
}
cmdQueue.Remove(cmdEl)
}
q.cfg.Logger.Infof("%s", string(encodedCmd))
encodedCmd = append(encodedCmd, '\n')
if unixConn, ok := q.conn.(*net.UnixConn); ok && len(cmd.oob) > 0 {
_, _, err = unixConn.WriteMsgUnix(encodedCmd, cmd.oob, nil)


@@ -222,7 +222,7 @@ github.com/hashicorp/errwrap
# github.com/hashicorp/go-multierror v1.0.0
## explicit
github.com/hashicorp/go-multierror
# github.com/kata-containers/govmm v0.0.0-20210112013750-7d320e8f5dca
# github.com/kata-containers/govmm v0.0.0-20210428163604-f0e9a35308ee
## explicit
github.com/kata-containers/govmm/qemu
# github.com/konsorten/go-windows-terminal-sequences v1.0.1


@@ -479,7 +479,7 @@ func (a *Acrn) waitSandbox(ctx context.Context, timeoutSecs int) error {
}
// stopSandbox will stop the Sandbox's VM.
func (a *Acrn) stopSandbox(ctx context.Context) (err error) {
func (a *Acrn) stopSandbox(ctx context.Context, waitOnly bool) (err error) {
span, _ := a.trace(ctx, "stopSandbox")
defer span.End()
@@ -511,36 +511,14 @@ func (a *Acrn) stopSandbox(ctx context.Context) (err error) {
pid := a.state.PID
// Send signal to the VM process to try to stop it properly
if err = syscall.Kill(pid, syscall.SIGINT); err != nil {
if err == syscall.ESRCH {
return nil
}
a.Logger().Info("Sending signal to stop acrn VM failed")
return err
shutdownSignal := syscall.SIGINT
if waitOnly {
// NOP
shutdownSignal = syscall.Signal(0)
}
// Wait for the VM process to terminate
tInit := time.Now()
for {
if err = syscall.Kill(pid, syscall.Signal(0)); err != nil {
a.Logger().Info("acrn VM stopped after sending signal")
return nil
}
if time.Since(tInit).Seconds() >= acrnStopSandboxTimeoutSecs {
a.Logger().Warnf("VM still running after waiting %ds", acrnStopSandboxTimeoutSecs)
break
}
// Let's avoid to run a too busy loop
time.Sleep(time.Duration(50) * time.Millisecond)
}
// Let's try with a hammer now, a SIGKILL should get rid of the
// VM process.
return syscall.Kill(pid, syscall.SIGKILL)
return utils.WaitLocalProcess(pid, acrnStopSandboxTimeoutSecs, shutdownSignal, a.Logger())
}
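This is the first of three call sites (ACRN here, cloud-hypervisor and Firecracker below) where the hand-rolled signal/poll/SIGKILL loop is replaced by the shared `utils.WaitLocalProcess` helper; the new `waitOnly` flag maps to passing signal 0, i.e. wait for exit without actively stopping the VM. The helper's source is not part of this diff (and the real one also takes a logger); a minimal sketch of the contract implied by the removed loops:

```Go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// waitLocalProcess sketches the inferred contract: send the shutdown signal
// (signal 0 means "just wait"), poll for exit until the timeout, then
// escalate to SIGKILL. Not the actual utils.WaitLocalProcess source.
func waitLocalProcess(pid int, timeoutSecs uint, sig syscall.Signal) error {
	if sig != syscall.Signal(0) {
		if err := syscall.Kill(pid, sig); err != nil {
			if err == syscall.ESRCH {
				return nil // process is already gone
			}
			return err
		}
	}
	start := time.Now()
	for time.Since(start) < time.Duration(timeoutSecs)*time.Second {
		// kill(pid, 0) only probes for existence.
		if err := syscall.Kill(pid, syscall.Signal(0)); err != nil {
			return nil // process has exited
		}
		time.Sleep(50 * time.Millisecond)
	}
	// Still running after the timeout: use the hammer.
	return syscall.Kill(pid, syscall.SIGKILL)
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	go cmd.Wait() // reap on exit so the existence probe starts failing
	fmt.Println(waitLocalProcess(cmd.Process.Pid, 10, syscall.SIGTERM)) // <nil>
}
```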
func (a *Acrn) updateBlockDevice(drive *config.BlockDrive) error {


@@ -132,9 +132,6 @@ type agent interface {
// readProcessStderr will tell the agent to read a process stderr
readProcessStderr(ctx context.Context, c *Container, processID string, data []byte) (int, error)
// processListContainer will list the processes running inside the container
processListContainer(ctx context.Context, sandbox *Sandbox, c Container, options ProcessListOptions) (ProcessList, error)
// updateContainer will update the resources of a running container
updateContainer(ctx context.Context, sandbox *Sandbox, c Container, resources specs.LinuxResources) error
@@ -163,10 +160,10 @@ type agent interface {
resumeContainer(ctx context.Context, sandbox *Sandbox, c Container) error
// configure will update agent settings based on provided arguments
configure(ctx context.Context, h hypervisor, id, sharePath string, config interface{}) error
configure(ctx context.Context, h hypervisor, id, sharePath string, config KataAgentConfig) error
// configureFromGrpc will update agent settings based on provided arguments which from Grpc
configureFromGrpc(h hypervisor, id string, config interface{}) error
configureFromGrpc(h hypervisor, id string, config KataAgentConfig) error
// reseedRNG will reseed the guest random number generator
reseedRNG(ctx context.Context, data []byte) error


@@ -144,7 +144,7 @@ func TestCreateSandboxNoopAgentSuccessful(t *testing.T) {
assert.True(ok)
assert.NotNil(s)
sandboxDir := filepath.Join(s.newStore.RunStoragePath(), p.ID())
sandboxDir := filepath.Join(s.store.RunStoragePath(), p.ID())
_, err = os.Stat(sandboxDir)
assert.NoError(err)
}
@@ -175,7 +175,7 @@ func TestCreateSandboxKataAgentSuccessful(t *testing.T) {
s, ok := p.(*Sandbox)
assert.True(ok)
sandboxDir := filepath.Join(s.newStore.RunStoragePath(), p.ID())
sandboxDir := filepath.Join(s.store.RunStoragePath(), p.ID())
_, err = os.Stat(sandboxDir)
assert.NoError(err)
}
@@ -224,7 +224,7 @@ func createAndStartSandbox(ctx context.Context, config SandboxConfig) (sandbox V
if !ok {
return nil, "", fmt.Errorf("Could not get Sandbox")
}
sandboxDir = filepath.Join(s.newStore.RunStoragePath(), sandbox.ID())
sandboxDir = filepath.Join(s.store.RunStoragePath(), sandbox.ID())
_, err = os.Stat(sandboxDir)
if err != nil {
return nil, "", err
@@ -285,7 +285,7 @@ func TestCleanupContainer(t *testing.T) {
s, ok := p.(*Sandbox)
assert.True(ok)
sandboxDir := filepath.Join(s.newStore.RunStoragePath(), p.ID())
sandboxDir := filepath.Join(s.store.RunStoragePath(), p.ID())
_, err = os.Stat(sandboxDir)
if err == nil {


@@ -131,7 +131,6 @@ type cloudHypervisor struct {
}
var clhKernelParams = []Param{
{"root", "/dev/pmem0p1"},
{"panic", "1"}, // upon kernel panic wait 1 second before reboot
{"no_timer_check", ""}, // do not check broken timer IRQ resources
@@ -141,7 +140,6 @@ var clhKernelParams = []Param{
}
var clhDebugKernelParams = []Param{
{"console", "ttyS0,115200n8"}, // enable serial console
{"systemd.log_target", "console"}, // send loggng to the console
}
@@ -192,13 +190,11 @@ func (clh *cloudHypervisor) createSandbox(ctx context.Context, id string, networ
err = clh.getAvailableVersion()
if err != nil {
return err
}
if err := clh.checkVersion(); err != nil {
return err
}
}
clh.Logger().WithField("function", "createSandbox").Info("creating Sandbox")
@@ -206,7 +202,6 @@ func (clh *cloudHypervisor) createSandbox(ctx context.Context, id string, networ
virtiofsdSocketPath, err := clh.virtioFsSocketPath(clh.id)
if err != nil {
return nil
}
if clh.state.PID > 0 {
@@ -222,7 +217,7 @@ func (clh *cloudHypervisor) createSandbox(ctx context.Context, id string, networ
// No need to return an error from there since there might be nothing
// to fetch if this is the first time the hypervisor is created.
clh.Logger().WithField("function", "createSandbox").WithError(err).Info("Sandbox not found creating ")
clh.Logger().WithField("function", "createSandbox").Info("Sandbox not found creating")
// Set initial memory size of the virtual machine
// Convert to int64, the openApiClient only supports int64
@@ -260,7 +255,7 @@ func (clh *cloudHypervisor) createSandbox(ctx context.Context, id string, networ
params = append(params, clhDebugKernelParams...)
}
// Followed by extra debug parameters defined in the configuration file
// Followed by extra kernel parameters defined in the configuration file
params = append(params, clh.config.KernelParams...)
clh.vmconfig.Cmdline.Args = kernelParamsToString(params)
@@ -291,7 +286,6 @@ func (clh *cloudHypervisor) createSandbox(ctx context.Context, id string, networ
clh.vmconfig.Serial = chclient.ConsoleConfig{
Mode: cctTTY,
}
} else {
clh.vmconfig.Serial = chclient.ConsoleConfig{
Mode: cctNULL,
@@ -311,8 +305,8 @@ func (clh *cloudHypervisor) createSandbox(ctx context.Context, id string, networ
// Overwrite the default value of HTTP API socket path for cloud hypervisor
apiSocketPath, err := clh.apiSocketPath(id)
if err != nil {
clh.Logger().Info("Invalid api socket path for cloud-hypervisor")
return nil
clh.Logger().WithError(err).Info("Invalid api socket path for cloud-hypervisor")
return err
}
clh.state.apiSocket = apiSocketPath
@@ -377,10 +371,10 @@ func (clh *cloudHypervisor) startSandbox(ctx context.Context, timeout int) error
return errors.New("cloud-hypervisor only supports virtio based file sharing")
}
pid, err := clh.LaunchClh()
pid, err := clh.launchClh()
if err != nil {
if shutdownErr := clh.virtiofsd.Stop(ctx); shutdownErr != nil {
clh.Logger().WithField("error", shutdownErr).Warn("error shutting down Virtiofsd")
clh.Logger().WithError(shutdownErr).Warn("error shutting down Virtiofsd")
}
return fmt.Errorf("failed to launch cloud-hypervisor: %q", err)
}
@@ -400,7 +394,7 @@ func (clh *cloudHypervisor) getSandboxConsole(ctx context.Context, id string) (s
clh.Logger().WithField("function", "getSandboxConsole").WithField("id", id).Info("Get Sandbox Console")
master, slave, err := console.NewPty()
if err != nil {
clh.Logger().Debugf("Error create pseudo tty: %v", err)
clh.Logger().WithError(err).Error("Error create pseudo tty")
return consoleProtoPty, "", err
}
clh.console = master
@@ -427,6 +421,27 @@ func clhDriveIndexToID(i int) string {
return "clh_drive_" + strconv.Itoa(i)
}
// Various cloud-hypervisor APIs report a PCI address in "BB:DD.F"
// form within the PciDeviceInfo struct. This is a broken API,
// because there's no way clh can reliably know the guest-side BDF for
// a device, since the bus number depends on how the guest firmware
// and/or kernel enumerates it. They get away with it only because
// they don't use bridges, so the bus is always 0. Under that
// assumption, convert a clh PciDeviceInfo into a PCI path.
func clhPciInfoToPath(pciInfo chclient.PciDeviceInfo) (vcTypes.PciPath, error) {
tokens := strings.Split(pciInfo.Bdf, ":")
if len(tokens) != 3 || tokens[0] != "0000" || tokens[1] != "00" {
return vcTypes.PciPath{}, fmt.Errorf("Unexpected PCI address %q from clh hotplug", pciInfo.Bdf)
}
tokens = strings.Split(tokens[2], ".")
if len(tokens) != 2 || tokens[1] != "0" || len(tokens[0]) != 2 {
return vcTypes.PciPath{}, fmt.Errorf("Unexpected PCI address %q from clh hotplug", pciInfo.Bdf)
}
return vcTypes.PciPathFromString(tokens[0])
}
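Concretely, under the bus-0 assumption the accepted input is `0000:00:SS.0` and only the two-digit slot survives into the PCI path. The same validation as a standalone helper (returning the slot string instead of a `vcTypes.PciPath`):

```Go
package main

import (
	"fmt"
	"strings"
)

// slotFromBdf mirrors clhPciInfoToPath's checks: domain 0000, bus 00,
// function 0, two-digit slot; it returns just the slot component.
func slotFromBdf(bdf string) (string, error) {
	tokens := strings.Split(bdf, ":")
	if len(tokens) != 3 || tokens[0] != "0000" || tokens[1] != "00" {
		return "", fmt.Errorf("unexpected PCI address %q", bdf)
	}
	tokens = strings.Split(tokens[2], ".")
	if len(tokens) != 2 || tokens[1] != "0" || len(tokens[0]) != 2 {
		return "", fmt.Errorf("unexpected PCI address %q", bdf)
	}
	return tokens[0], nil
}

func main() {
	fmt.Println(slotFromBdf("0000:00:0a.0")) // 0a <nil>
	fmt.Println(slotFromBdf("0000:01:0a.0")) // rejected: bus is not 00
}
```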
func (clh *cloudHypervisor) hotplugAddBlockDevice(drive *config.BlockDrive) error {
if clh.config.BlockDeviceDriver != config.VirtioBlock {
return fmt.Errorf("incorrect hypervisor configuration on 'block_device_driver':"+
@@ -441,24 +456,24 @@ func (clh *cloudHypervisor) hotplugAddBlockDevice(drive *config.BlockDrive) erro
driveID := clhDriveIndexToID(drive.Index)
// Explicitly set PCIPath to NULL, so that VirtPath can be used
drive.PCIPath = vcTypes.PciPath{}
if drive.Pmem {
err = fmt.Errorf("pmem device hotplug not supported")
} else {
blkDevice := chclient.DiskConfig{
Path: drive.File,
Readonly: drive.ReadOnly,
VhostUser: false,
Id: driveID,
}
_, _, err = cl.VmAddDiskPut(ctx, blkDevice)
return fmt.Errorf("pmem device hotplug not supported")
}
blkDevice := chclient.DiskConfig{
Path: drive.File,
Readonly: drive.ReadOnly,
VhostUser: false,
Id: driveID,
}
pciInfo, _, err := cl.VmAddDiskPut(ctx, blkDevice)
if err != nil {
err = fmt.Errorf("failed to hotplug block device %+v %s", drive, openAPIClientError(err))
return fmt.Errorf("failed to hotplug block device %+v %s", drive, openAPIClientError(err))
}
drive.PCIPath, err = clhPciInfoToPath(pciInfo)
return err
}
@@ -514,7 +529,6 @@ func (clh *cloudHypervisor) hotplugRemoveDevice(ctx context.Context, devInfo int
defer cancel()
_, err := cl.VmRemoveDevicePut(ctx, chclient.VmRemoveDevice{Id: deviceID})
if err != nil {
err = fmt.Errorf("failed to hotplug remove (unplug) device %+v: %s", devInfo, openAPIClientError(err))
}
@@ -652,11 +666,11 @@ func (clh *cloudHypervisor) resumeSandbox(ctx context.Context) error {
}
// stopSandbox will stop the Sandbox's VM.
func (clh *cloudHypervisor) stopSandbox(ctx context.Context) (err error) {
func (clh *cloudHypervisor) stopSandbox(ctx context.Context, waitOnly bool) (err error) {
span, _ := clh.trace(ctx, "stopSandbox")
defer span.End()
clh.Logger().WithField("function", "stopSandbox").Info("Stop Sandbox")
return clh.terminate(ctx)
return clh.terminate(ctx, waitOnly)
}
func (clh *cloudHypervisor) fromGrpc(ctx context.Context, hypervisorConfig *HypervisorConfig, j []byte) error {
@@ -756,7 +770,7 @@ func (clh *cloudHypervisor) trace(parent context.Context, name string) (otelTrac
return span, ctx
}
func (clh *cloudHypervisor) terminate(ctx context.Context) (err error) {
func (clh *cloudHypervisor) terminate(ctx context.Context, waitOnly bool) (err error) {
span, _ := clh.trace(ctx, "terminate")
defer span.End()
@@ -775,7 +789,7 @@ func (clh *cloudHypervisor) terminate(ctx context.Context) (err error) {
clh.Logger().Debug("Stopping Cloud Hypervisor")
if pidRunning {
if pidRunning && !waitOnly {
clhRunning, _ := clh.isClhRunning(clhStopSandboxTimeout)
if clhRunning {
ctx, cancel := context.WithTimeout(context.Background(), clhStopSandboxTimeout*time.Second)
@@ -786,31 +800,8 @@ func (clh *cloudHypervisor) terminate(ctx context.Context) (err error) {
}
}
// At this point the VMM was stopped nicely, but we need to check if the PID is still running
// Wait for the VM process to terminate
tInit := time.Now()
for {
if err = syscall.Kill(pid, syscall.Signal(0)); err != nil {
pidRunning = false
break
}
if time.Since(tInit).Seconds() >= clhStopSandboxTimeout {
pidRunning = true
clh.Logger().Warnf("VM still running after waiting %ds", clhStopSandboxTimeout)
break
}
// Let's avoid to run a too busy loop
time.Sleep(time.Duration(50) * time.Millisecond)
}
// Let's try with a hammer now, a SIGKILL should get rid of the
// VM process.
if pidRunning {
if err = syscall.Kill(pid, syscall.SIGKILL); err != nil {
return fmt.Errorf("Fatal, failed to kill hypervisor process, error: %s", err)
}
if err = utils.WaitLocalProcess(pid, clhStopSandboxTimeout, syscall.Signal(0), clh.Logger()); err != nil {
return err
}
if clh.virtiofsd == nil {
@@ -856,7 +847,6 @@ func (clh *cloudHypervisor) apiSocketPath(id string) (string, error) {
func (clh *cloudHypervisor) waitVMM(timeout uint) error {
clhRunning, err := clh.isClhRunning(timeout)
if err != nil {
return err
}
@@ -890,7 +880,6 @@ func (clh *cloudHypervisor) getAvailableVersion() error {
clhPath, err := clh.clhPath()
if err != nil {
return err
}
cmd := exec.Command(clhPath, "--version")
@@ -902,7 +891,6 @@ func (clh *cloudHypervisor) getAvailableVersion() error {
words := strings.Fields(string(out))
if len(words) != 2 {
return errors.New("Failed to parse cloud-hypervisor version response. Illegal length")
}
versionSplit := strings.SplitN(words[1], ".", -1)
if len(versionSplit) != 3 {
@@ -915,12 +903,10 @@ func (clh *cloudHypervisor) getAvailableVersion() error {
major, err := strconv.ParseUint(versionSplit[0], 10, 64)
if err != nil {
return err
}
minor, err := strconv.ParseUint(versionSplit[1], 10, 64)
if err != nil {
return err
}
// revision could have additional commit information separated by '-'
@@ -942,7 +928,7 @@ func (clh *cloudHypervisor) getAvailableVersion() error {
}
func (clh *cloudHypervisor) LaunchClh() (int, error) {
func (clh *cloudHypervisor) launchClh() (int, error) {
clhPath, err := clh.clhPath()
if err != nil {
@@ -1026,7 +1012,6 @@ func kernelParamsToString(params []Param) string {
for _, p := range params {
paramBuilder.WriteString(p.Key)
if len(p.Value) > 0 {
paramBuilder.WriteString("=")
paramBuilder.WriteString(p.Value)
}
@@ -1084,7 +1069,6 @@ func (clh *cloudHypervisor) newAPIClient() *chclient.DefaultApiService {
addr, err := net.ResolveUnixAddr("unix", clh.state.apiSocket)
if err != nil {
return nil, err
}
return net.DialUnix("unix", nil, addr)
@@ -1123,13 +1107,11 @@ func (clh *cloudHypervisor) bootVM(ctx context.Context) error {
clh.Logger().WithField("body", string(bodyBuf)).Debug("VM config")
}
_, err := cl.CreateVM(ctx, clh.vmconfig)
if err != nil {
return openAPIClientError(err)
}
info, err := clh.vmInfo()
if err != nil {
return err
}
@@ -1142,13 +1124,11 @@ func (clh *cloudHypervisor) bootVM(ctx context.Context) error {
clh.Logger().Debug("Booting VM")
_, err = cl.BootVM(ctx)
if err != nil {
return openAPIClientError(err)
}
info, err = clh.vmInfo()
if err != nil {
return err
}
@@ -1176,13 +1156,11 @@ func (clh *cloudHypervisor) addNet(e Endpoint) error {
mac := e.HardwareAddr()
netPair := e.NetworkPair()
if netPair == nil {
return errors.New("net Pair to be added is nil, needed to get TAP path")
}
tapPath := netPair.TapInterface.TAPIface.Name
if tapPath == "" {
return errors.New("TAP path in network pair is empty")
}
@@ -1301,7 +1279,6 @@ func (clh *cloudHypervisor) vmInfo() (chclient.VmInfo, error) {
clh.Logger().WithError(openAPIClientError(err)).Warn("VmInfoGet failed")
}
return info, openAPIClientError(err)
}
func (clh *cloudHypervisor) isRateLimiterBuiltin() bool {


@@ -36,7 +36,7 @@ func newClhConfig() (HypervisorConfig, error) {
}
if testVirtiofsdPath == "" {
return HypervisorConfig{}, errors.New("hypervisor fake path is empty")
return HypervisorConfig{}, errors.New("virtiofsd fake path is empty")
}
if _, err := os.Stat(testClhPath); os.IsNotExist(err) {
@@ -101,7 +101,7 @@ func (c *clhClientMock) VmAddDevicePut(ctx context.Context, vmAddDevice chclient
//nolint:golint
func (c *clhClientMock) VmAddDiskPut(ctx context.Context, diskConfig chclient.DiskConfig) (chclient.PciDeviceInfo, *http.Response, error) {
return chclient.PciDeviceInfo{}, nil, nil
return chclient.PciDeviceInfo{Bdf: "0000:00:0a.0"}, nil, nil
}
//nolint:golint


@@ -1125,18 +1125,6 @@ func (c *Container) ioStream(processID string) (io.WriteCloser, io.Reader, io.Re
return stream.stdin(), stream.stdout(), stream.stderr(), nil
}
func (c *Container) processList(ctx context.Context, options ProcessListOptions) (ProcessList, error) {
if err := c.checkSandboxRunning("ps"); err != nil {
return nil, err
}
if c.state.State != types.StateRunning {
return nil, fmt.Errorf("Container not running, impossible to list processes")
}
return c.sandbox.agent.processListContainer(ctx, c.sandbox, *c, options)
}
func (c *Container) stats(ctx context.Context) (*ContainerStats, error) {
if err := c.checkSandboxRunning("stats"); err != nil {
return nil, err
@@ -1427,8 +1415,6 @@ func (c *Container) cgroupsCreate() (err error) {
return fmt.Errorf("Could not create cgroup for %v: %v", c.state.CgroupPath, err)
}
c.config.Resources = resources
// Add shim into cgroup
if c.process.Pid > 0 {
if err := cgroup.Add(cgroups.Process{Pid: c.process.Pid}); err != nil {


@@ -316,11 +316,11 @@ func TestContainerAddDriveDir(t *testing.T) {
},
}
sandbox.newStore, err = persist.GetDriver()
sandbox.store, err = persist.GetDriver()
assert.NoError(err)
assert.NotNil(sandbox.newStore)
assert.NotNil(sandbox.store)
defer sandbox.newStore.Destroy(sandbox.id)
defer sandbox.store.Destroy(sandbox.id)
contID := "100"
container := Container{


@@ -582,7 +582,6 @@ type VCSandbox interface {
ResumeContainer(containerID string) error
EnterContainer(containerID string, cmd types.Cmd) (VCContainer, *Process, error)
UpdateContainer(containerID string, resources specs.LinuxResources) error
ProcessListContainer(containerID string, options ProcessListOptions) (ProcessList, error)
WaitProcess(containerID, processID string) (int32, error)
SignalProcess(containerID, processID string, signal syscall.Signal, all bool) error
WinsizeProcess(containerID, processID string, height, width uint32) error
@@ -916,7 +915,6 @@ type VCContainer interface {
* [`EnterContainer`](#entercontainer)
* [`StatusContainer`](#statuscontainer)
* [`KillContainer`](#killcontainer)
* [`ProcessListContainer`](#processlistcontainer)
* [`StatsContainer`](#statscontainer)
* [`PauseContainer`](#pausecontainer)
* [`ResumeContainer`](#resumecontainer)
@@ -977,13 +975,6 @@ func StatusContainer(sandboxID, containerID string) (ContainerStatus, error)
func KillContainer(sandboxID, containerID string, signal syscall.Signal, all bool) error
```
#### `ProcessListContainer`
```Go
// ProcessListContainer is the virtcontainers entry point to list
// processes running inside a container
func ProcessListContainer(sandboxID, containerID string, options ProcessListOptions) (ProcessList, error)
```
#### `StatsContainer`
```Go
// StatsContainer return the stats of a running container


@@ -273,20 +273,8 @@ func (fc *firecracker) vmRunning(ctx context.Context) bool {
fc.Logger().WithError(err).Error("getting vm status failed")
return false
}
// Be explicit
switch *resp.Payload.State {
case models.InstanceInfoStateStarting:
// Unsure what we should do here
fc.Logger().WithField("unexpected-state", models.InstanceInfoStateStarting).Debug("vmRunning")
return false
case models.InstanceInfoStateRunning:
return true
case models.InstanceInfoStateUninitialized:
return false
default:
return false
}
// resp.Payload.Started reports the current state of the Firecracker instance (swagger:model InstanceInfo)
return resp.Payload.Started
}
func (fc *firecracker) getVersionNumber() (string, error) {
@@ -302,7 +290,7 @@ func (fc *firecracker) getVersionNumber() (string, error) {
fields := strings.Split(string(data), " ")
if len(fields) > 1 {
// The output format of `Firecracker --version` is as follows
// Firecracker v0.21.1
// Firecracker v0.23.1
version = strings.TrimPrefix(strings.TrimSpace(fields[1]), "v")
return version, nil
}
@@ -421,7 +409,7 @@ func (fc *firecracker) fcInit(ctx context.Context, timeout int) error {
return nil
}
func (fc *firecracker) fcEnd(ctx context.Context) (err error) {
func (fc *firecracker) fcEnd(ctx context.Context, waitOnly bool) (err error) {
span, _ := fc.trace(ctx, "fcEnd")
defer span.End()
@@ -437,33 +425,15 @@ func (fc *firecracker) fcEnd(ctx context.Context) (err error) {
pid := fc.info.PID
// Send a SIGTERM to the VM process to try to stop it properly
if err = syscall.Kill(pid, syscall.SIGTERM); err != nil {
if err == syscall.ESRCH {
return nil
}
return err
shutdownSignal := syscall.SIGTERM
if waitOnly {
// NOP
shutdownSignal = syscall.Signal(0)
}
// Wait for the VM process to terminate
tInit := time.Now()
for {
if err = syscall.Kill(pid, syscall.Signal(0)); err != nil {
return nil
}
if time.Since(tInit).Seconds() >= fcStopSandboxTimeout {
fc.Logger().Warnf("VM still running after waiting %ds", fcStopSandboxTimeout)
break
}
// Let's avoid to run a too busy loop
time.Sleep(time.Duration(50) * time.Millisecond)
}
// Let's try with a hammer now, a SIGKILL should get rid of the
// VM process.
return syscall.Kill(pid, syscall.SIGKILL)
return utils.WaitLocalProcess(pid, fcStopSandboxTimeout, shutdownSignal, fc.Logger())
}
func (fc *firecracker) client(ctx context.Context) *client.Firecracker {
@@ -612,16 +582,26 @@ func (fc *firecracker) fcSetLogger(ctx context.Context) error {
return fmt.Errorf("Failed setting log: %s", err)
}
fc.fcConfig.Logger = &models.Logger{
Level: &fcLogLevel,
LogPath: &jailedLogFifo,
}
return err
}
func (fc *firecracker) fcSetMetrics(ctx context.Context) error {
span, _ := fc.trace(ctx, "fcSetMetrics")
defer span.End()
// listen to the metrics file and transfer error info
jailedMetricsFifo, err := fc.fcListenToFifo(fcMetricsFifo, fc.updateMetrics)
if err != nil {
return fmt.Errorf("Failed setting metrics: %s", err)
}
fc.fcConfig.Logger = &models.Logger{
Level: &fcLogLevel,
LogFifo: &jailedLogFifo,
MetricsFifo: &jailedMetricsFifo,
fc.fcConfig.Metrics = &models.Metrics{
MetricsPath: &jailedMetricsFifo,
}
return err
@@ -745,6 +725,10 @@ func (fc *firecracker) fcInitConfiguration(ctx context.Context) error {
return err
}
if err := fc.fcSetMetrics(ctx); err != nil {
return err
}
fc.state.set(cfReady)
for _, d := range fc.pendingDevices {
if err := fc.addDevice(ctx, d.dev, d.devType); err != nil {
@@ -781,7 +765,7 @@ func (fc *firecracker) startSandbox(ctx context.Context, timeout int) error {
var err error
defer func() {
if err != nil {
fc.fcEnd(ctx)
fc.fcEnd(ctx, false)
}
}()
@@ -874,11 +858,11 @@ func (fc *firecracker) cleanupJail(ctx context.Context) {
}
// stopSandbox will stop the Sandbox's VM.
func (fc *firecracker) stopSandbox(ctx context.Context) (err error) {
func (fc *firecracker) stopSandbox(ctx context.Context, waitOnly bool) (err error) {
span, _ := fc.trace(ctx, "stopSandbox")
defer span.End()
return fc.fcEnd(ctx)
return fc.fcEnd(ctx, waitOnly)
}
func (fc *firecracker) pauseSandbox(ctx context.Context) error {


@@ -310,6 +310,9 @@ type HypervisorConfig struct {
// entropy (/dev/random, /dev/urandom or real hardware RNG device)
EntropySource string
// EntropySourceList is the list of valid entropy sources
EntropySourceList []string
// Shared file system type:
// - virtio-9p (default)
// - virtio-fs
@@ -787,7 +790,9 @@ func generateVMSocket(id string, vmStogarePath string) (interface{}, error) {
type hypervisor interface {
createSandbox(ctx context.Context, id string, networkNS NetworkNamespace, hypervisorConfig *HypervisorConfig) error
startSandbox(ctx context.Context, timeout int) error
stopSandbox(ctx context.Context) error
// If waitOnly is set, don't actively stop the sandbox:
// just wait for the VM process to exit, then clean up.
stopSandbox(ctx context.Context, waitOnly bool) error
pauseSandbox(ctx context.Context) error
saveSandbox() error
resumeSandbox(ctx context.Context) error


@@ -57,7 +57,6 @@ type VCSandbox interface {
ResumeContainer(ctx context.Context, containerID string) error
EnterContainer(ctx context.Context, containerID string, cmd types.Cmd) (VCContainer, *Process, error)
UpdateContainer(ctx context.Context, containerID string, resources specs.LinuxResources) error
ProcessListContainer(ctx context.Context, containerID string, options ProcessListOptions) (ProcessList, error)
WaitProcess(ctx context.Context, containerID, processID string) (int32, error)
SignalProcess(ctx context.Context, containerID, processID string, signal syscall.Signal, all bool) error
WinsizeProcess(ctx context.Context, containerID, processID string, height, width uint32) error


@@ -51,6 +51,9 @@ const (
// containers.
KataLocalDevType = "local"
// Allocating an FSGroup that owns the pod's volumes
fsGid = "fsgid"
// path to vfio devices
vfioPath = "/dev/vfio/"
@@ -117,7 +120,6 @@ const (
grpcListRoutesRequest = "grpc.ListRoutesRequest"
grpcAddARPNeighborsRequest = "grpc.AddARPNeighborsRequest"
grpcOnlineCPUMemRequest = "grpc.OnlineCPUMemRequest"
grpcListProcessesRequest = "grpc.ListProcessesRequest"
grpcUpdateContainerRequest = "grpc.UpdateContainerRequest"
grpcWaitProcessRequest = "grpc.WaitProcessRequest"
grpcTtyWinResizeRequest = "grpc.TtyWinResizeRequest"
@@ -215,6 +217,7 @@ type KataAgentConfig struct {
ContainerPipeSize uint32
TraceMode string
TraceType string
DialTimeout uint32
KernelModules []string
}
@@ -234,6 +237,7 @@ type kataAgent struct {
keepConn bool
dynamicTracing bool
dead bool
dialTimout uint32
kmodules []string
vmSocket interface{}
@@ -342,6 +346,7 @@ func (k *kataAgent) init(ctx context.Context, sandbox *Sandbox, config KataAgent
disableVMShutdown = k.handleTraceSettings(config)
k.keepConn = config.LongLiveConn
k.kmodules = config.KernelModules
k.dialTimout = config.DialTimeout
return disableVMShutdown, nil
}
@@ -368,19 +373,12 @@ func (k *kataAgent) capabilities() types.Capabilities {
return caps
}
func (k *kataAgent) internalConfigure(h hypervisor, id string, config interface{}) error {
func (k *kataAgent) internalConfigure(h hypervisor, id string, config KataAgentConfig) error {
var err error
if config != nil {
switch c := config.(type) {
case KataAgentConfig:
if k.vmSocket, err = h.generateSocket(id); err != nil {
return err
}
k.keepConn = c.LongLiveConn
default:
return vcTypes.ErrInvalidConfigType
}
if k.vmSocket, err = h.generateSocket(id); err != nil {
return err
}
k.keepConn = config.LongLiveConn
return nil
}
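Narrowing `config` from `interface{}` to `KataAgentConfig` deletes the type switch and its `ErrInvalidConfigType` failure path: what used to be checked at runtime is now enforced by the compiler. A small before/after sketch of the idea (stand-in types, not this package's code):

```Go
package main

import (
	"errors"
	"fmt"
)

type KataAgentConfig struct{ LongLiveConn bool }

// Before: any value compiles; a mismatch only surfaces at runtime.
func configureOld(config interface{}) error {
	c, ok := config.(KataAgentConfig)
	if !ok {
		return errors.New("invalid config type")
	}
	_ = c.LongLiveConn
	return nil
}

// After: passing the wrong type is a compile error, so the check vanishes.
func configureNew(config KataAgentConfig) error {
	_ = config.LongLiveConn
	return nil
}

func main() {
	fmt.Println(configureOld(42))                // invalid config type
	fmt.Println(configureNew(KataAgentConfig{})) // <nil>
}
```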
@@ -429,7 +427,7 @@ func cleanupSandboxBindMounts(sandbox *Sandbox) error {
return nil
}
func (k *kataAgent) configure(ctx context.Context, h hypervisor, id, sharePath string, config interface{}) error {
func (k *kataAgent) configure(ctx context.Context, h hypervisor, id, sharePath string, config KataAgentConfig) error {
err := k.internalConfigure(h, id, config)
if err != nil {
return err
@@ -471,7 +469,7 @@ func (k *kataAgent) configure(ctx context.Context, h hypervisor, id, sharePath s
return h.addDevice(ctx, sharedVolume, fsDev)
}
func (k *kataAgent) configureFromGrpc(h hypervisor, id string, config interface{}) error {
func (k *kataAgent) configureFromGrpc(h hypervisor, id string, config KataAgentConfig) error {
return k.internalConfigure(h, id, config)
}
@@ -1233,12 +1231,7 @@ func (k *kataAgent) buildContainerRootfs(ctx context.Context, sandbox *Sandbox,
rootfs.Source = blockDrive.DevNo
case sandbox.config.HypervisorConfig.BlockDeviceDriver == config.VirtioBlock:
rootfs.Driver = kataBlkDevType
if blockDrive.PCIPath.IsNil() {
rootfs.Source = blockDrive.VirtPath
} else {
rootfs.Source = blockDrive.PCIPath.String()
}
rootfs.Source = blockDrive.PCIPath.String()
case sandbox.config.HypervisorConfig.BlockDeviceDriver == config.VirtioSCSI:
rootfs.Driver = kataSCSIDevType
rootfs.Source = blockDrive.SCSIAddr
@@ -1324,10 +1317,18 @@ func (k *kataAgent) createContainer(ctx context.Context, sandbox *Sandbox, c *Co
k.handleShm(ociSpec.Mounts, sandbox)
epheStorages := k.handleEphemeralStorage(ociSpec.Mounts)
epheStorages, err := k.handleEphemeralStorage(ociSpec.Mounts)
if err != nil {
return nil, err
}
ctrStorages = append(ctrStorages, epheStorages...)
localStorages := k.handleLocalStorage(ociSpec.Mounts, sandbox.id, c.rootfsSuffix)
localStorages, err := k.handleLocalStorage(ociSpec.Mounts, sandbox.id, c.rootfsSuffix)
if err != nil {
return nil, err
}
ctrStorages = append(ctrStorages, localStorages...)
// We replace all OCI mount sources that match our container mount
@@ -1401,10 +1402,27 @@ func buildProcessFromExecID(token string) (*Process, error) {
// handleEphemeralStorage handles ephemeral storages by
// creating a Storage from corresponding source of the mount point
func (k *kataAgent) handleEphemeralStorage(mounts []specs.Mount) []*grpc.Storage {
func (k *kataAgent) handleEphemeralStorage(mounts []specs.Mount) ([]*grpc.Storage, error) {
var epheStorages []*grpc.Storage
for idx, mnt := range mounts {
if mnt.Type == KataEphemeralDevType {
origin_src := mounts[idx].Source
stat := syscall.Stat_t{}
err := syscall.Stat(origin_src, &stat)
if err != nil {
k.Logger().WithError(err).Errorf("failed to stat %s", origin_src)
return nil, err
}
var dir_options []string
// If the volume's gid isn't the root group (the default), a specific
// fsGroup was set on this volume, so it should be passed
// through to the guest.
if stat.Gid != 0 {
dir_options = append(dir_options, fmt.Sprintf("%s=%d", fsGid, stat.Gid))
}
// Set the mount source path to a path that resides inside the VM
mounts[idx].Source = filepath.Join(ephemeralPath(), filepath.Base(mnt.Source))
// Set the mount type to "bind"
@@ -1417,19 +1435,37 @@ func (k *kataAgent) handleEphemeralStorage(mounts []specs.Mount) []*grpc.Storage
Source: "tmpfs",
Fstype: "tmpfs",
MountPoint: mounts[idx].Source,
Options: dir_options,
}
epheStorages = append(epheStorages, epheStorage)
}
}
return epheStorages
return epheStorages, nil
}
// handleLocalStorage handles local storage within the VM
// by creating a directory in the VM from the source of the mount point.
func (k *kataAgent) handleLocalStorage(mounts []specs.Mount, sandboxID string, rootfsSuffix string) []*grpc.Storage {
func (k *kataAgent) handleLocalStorage(mounts []specs.Mount, sandboxID string, rootfsSuffix string) ([]*grpc.Storage, error) {
var localStorages []*grpc.Storage
for idx, mnt := range mounts {
if mnt.Type == KataLocalDevType {
origin_src := mounts[idx].Source
stat := syscall.Stat_t{}
err := syscall.Stat(origin_src, &stat)
if err != nil {
k.Logger().WithError(err).Errorf("failed to stat %s", origin_src)
return nil, err
}
dir_options := localDirOptions
// If the volume's gid isn't the root group (the default), a specific
// fsGroup was set on this local volume, so it should be passed
// through to the guest.
if stat.Gid != 0 {
dir_options = append(dir_options, fmt.Sprintf("%s=%d", fsGid, stat.Gid))
}
// Set the mount source path to the desired directory in the VM.
// In this case it is located in the sandbox directory.
// We rely on the fact that the first container in the VM has the same ID as the sandbox ID.
@@ -1444,12 +1480,12 @@ func (k *kataAgent) handleLocalStorage(mounts []specs.Mount, sandboxID string, r
Source: KataLocalDevType,
Fstype: KataLocalDevType,
MountPoint: mounts[idx].Source,
Options: localDirOptions,
Options: dir_options,
}
localStorages = append(localStorages, localStorage)
}
}
return localStorages
return localStorages, nil
}
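Both storage handlers now stat the host-side mount source and, whenever its group is not root, forward an `fsgid=<gid>` option so the agent can reproduce the pod's `fsGroup` ownership inside the guest; that is also why both handlers grew an error return. The check condensed into a standalone helper (hypothetical, for illustration only):

```Go
package main

import (
	"fmt"
	"syscall"
)

// fsGidOption returns the fsgid=<gid> storage option when the mount source
// is not owned by the root group (i.e. an fsGroup was applied), else "".
func fsGidOption(source string) (string, error) {
	var stat syscall.Stat_t
	if err := syscall.Stat(source, &stat); err != nil {
		return "", err
	}
	if stat.Gid != 0 {
		return fmt.Sprintf("fsgid=%d", stat.Gid), nil
	}
	return "", nil
}

func main() {
	fmt.Println(fsGidOption("/tmp"))
}
```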
// handleDeviceBlockVolume handles volume that is block device file
@@ -1474,11 +1510,7 @@ func (k *kataAgent) handleDeviceBlockVolume(c *Container, m Mount, device api.De
vol.Source = blockDrive.DevNo
case c.sandbox.config.HypervisorConfig.BlockDeviceDriver == config.VirtioBlock:
vol.Driver = kataBlkDevType
if blockDrive.PCIPath.IsNil() {
vol.Source = blockDrive.VirtPath
} else {
vol.Source = blockDrive.PCIPath.String()
}
vol.Source = blockDrive.PCIPath.String()
case c.sandbox.config.HypervisorConfig.BlockDeviceDriver == config.VirtioMmio:
vol.Driver = kataMmioBlkDevType
vol.Source = blockDrive.VirtPath
@@ -1645,26 +1677,6 @@ func (k *kataAgent) winsizeProcess(ctx context.Context, c *Container, processID
return err
}
func (k *kataAgent) processListContainer(ctx context.Context, sandbox *Sandbox, c Container, options ProcessListOptions) (ProcessList, error) {
req := &grpc.ListProcessesRequest{
ContainerId: c.id,
Format: options.Format,
Args: options.Args,
}
resp, err := k.sendReq(ctx, req)
if err != nil {
return nil, err
}
processList, ok := resp.(*grpc.ListProcessesResponse)
if !ok {
return nil, fmt.Errorf("Bad list processes response")
}
return processList.ProcessList, nil
}
func (k *kataAgent) updateContainer(ctx context.Context, sandbox *Sandbox, c Container, resources specs.LinuxResources) error {
grpcResources, err := grpc.ResourcesOCItoGRPC(&resources)
if err != nil {
@@ -1785,7 +1797,7 @@ func (k *kataAgent) connect(ctx context.Context) error {
}
k.Logger().WithField("url", k.state.URL).Info("New client")
client, err := kataclient.NewAgentClient(k.ctx, k.state.URL)
client, err := kataclient.NewAgentClient(k.ctx, k.state.URL, k.dialTimout)
if err != nil {
k.dead = true
return err
@@ -1820,9 +1832,6 @@ func (k *kataAgent) disconnect(ctx context.Context) error {
// check grpc server is serving
func (k *kataAgent) check(ctx context.Context) error {
span, ctx := k.trace(ctx, "check")
defer span.End()
_, err := k.sendReq(ctx, &grpc.CheckRequest{})
if err != nil {
err = fmt.Errorf("Failed to check if grpc server is working: %s", err)
@@ -1922,9 +1931,6 @@ func (k *kataAgent) installReqFunc(c *kataclient.AgentClient) {
k.reqHandlers[grpcOnlineCPUMemRequest] = func(ctx context.Context, req interface{}) (interface{}, error) {
return k.client.AgentServiceClient.OnlineCPUMem(ctx, req.(*grpc.OnlineCPUMemRequest))
}
k.reqHandlers[grpcListProcessesRequest] = func(ctx context.Context, req interface{}) (interface{}, error) {
return k.client.AgentServiceClient.ListProcesses(ctx, req.(*grpc.ListProcessesRequest))
}
k.reqHandlers[grpcUpdateContainerRequest] = func(ctx context.Context, req interface{}) (interface{}, error) {
return k.client.AgentServiceClient.UpdateContainer(ctx, req.(*grpc.UpdateContainerRequest))
}
@@ -1994,9 +2000,6 @@ func (k *kataAgent) getReqContext(reqName string) (ctx context.Context, cancel c
func (k *kataAgent) sendReq(spanCtx context.Context, request interface{}) (interface{}, error) {
start := time.Now()
span, spanCtx := k.trace(spanCtx, "sendReq")
span.SetAttributes(label.Key("request").String(fmt.Sprintf("%+v", request)))
defer span.End()
if err := k.connect(spanCtx); err != nil {
return nil, err
@@ -2015,7 +2018,7 @@ func (k *kataAgent) sendReq(spanCtx context.Context, request interface{}) (inter
if cancel != nil {
defer cancel()
}
k.Logger().WithField("name", msgName).WithField("req", message.String()).Debug("sending request")
k.Logger().WithField("name", msgName).WithField("req", message.String()).Trace("sending request")
defer func() {
agentRPCDurationsHistogram.WithLabelValues(msgName).Observe(float64(time.Since(start).Nanoseconds() / int64(time.Millisecond)))


@@ -143,9 +143,6 @@ func TestKataAgentSendReq(t *testing.T) {
err = k.winsizeProcess(ctx, container, execid, 100, 200)
assert.Nil(err)
_, err = k.processListContainer(ctx, sandbox, Container{}, ProcessListOptions{})
assert.Nil(err)
err = k.updateContainer(ctx, sandbox, Container{}, specs.LinuxResources{})
assert.Nil(err)
@@ -187,6 +184,7 @@ func TestHandleEphemeralStorage(t *testing.T) {
k := kataAgent{}
var ociMounts []specs.Mount
mountSource := "/tmp/mountPoint"
os.Mkdir(mountSource, 0755)
mount := specs.Mount{
Type: KataEphemeralDevType,
@@ -194,7 +192,8 @@ func TestHandleEphemeralStorage(t *testing.T) {
}
ociMounts = append(ociMounts, mount)
epheStorages := k.handleEphemeralStorage(ociMounts)
epheStorages, err := k.handleEphemeralStorage(ociMounts)
assert.Nil(t, err)
epheMountPoint := epheStorages[0].MountPoint
expected := filepath.Join(ephemeralPath(), filepath.Base(mountSource))
@@ -205,7 +204,8 @@ func TestHandleEphemeralStorage(t *testing.T) {
func TestHandleLocalStorage(t *testing.T) {
k := kataAgent{}
var ociMounts []specs.Mount
mountSource := "mountPoint"
mountSource := "/tmp/mountPoint"
os.Mkdir(mountSource, 0755)
mount := specs.Mount{
Type: KataLocalDevType,
@@ -216,7 +216,7 @@ func TestHandleLocalStorage(t *testing.T) {
rootfsSuffix := "rootfs"
ociMounts = append(ociMounts, mount)
localStorages := k.handleLocalStorage(ociMounts, sandboxID, rootfsSuffix)
localStorages, _ := k.handleLocalStorage(ociMounts, sandboxID, rootfsSuffix)
assert.NotNil(t, localStorages)
assert.Equal(t, len(localStorages), 1)
@@ -283,18 +283,6 @@ func TestHandleDeviceBlockVolume(t *testing.T) {
Source: testPCIPath.String(),
},
},
{
BlockDeviceDriver: config.VirtioBlock,
inputDev: &drivers.BlockDevice{
BlockDrive: &config.BlockDrive{
VirtPath: testVirtPath,
},
},
resultVol: &pb.Storage{
Driver: kataBlkDevType,
Source: testVirtPath,
},
},
{
BlockDeviceDriver: config.VirtioMmio,
inputDev: &drivers.BlockDevice{
@@ -666,6 +654,7 @@ func TestHandleShm(t *testing.T) {
// shared with the sandbox shm.
ociMounts[0].Type = KataEphemeralDevType
mountSource := "/tmp/mountPoint"
os.Mkdir(mountSource, 0755)
ociMounts[0].Source = mountSource
k.handleShm(ociMounts, sandbox)
@@ -673,7 +662,9 @@ func TestHandleShm(t *testing.T) {
assert.Equal(ociMounts[0].Type, KataEphemeralDevType)
assert.NotEmpty(ociMounts[0].Source, mountSource)
epheStorages := k.handleEphemeralStorage(ociMounts)
epheStorages, err := k.handleEphemeralStorage(ociMounts)
assert.Nil(err)
epheMountPoint := epheStorages[0].MountPoint
expected := filepath.Join(ephemeralPath(), filepath.Base(mountSource))
assert.Equal(epheMountPoint, expected,
@@ -830,10 +821,10 @@ func TestAgentCreateContainer(t *testing.T) {
hypervisor: &mockHypervisor{},
}
newStore, err := persist.GetDriver()
store, err := persist.GetDriver()
assert.NoError(err)
assert.NotNil(newStore)
sandbox.newStore = newStore
assert.NotNil(store)
sandbox.store = store
container := &Container{
ctx: sandbox.ctx,


@@ -86,11 +86,6 @@ func (n *mockAgent) signalProcess(ctx context.Context, c *Container, processID s
return nil
}
// processListContainer is the Noop agent Container ps implementation. It does nothing.
func (n *mockAgent) processListContainer(ctx context.Context, sandbox *Sandbox, c Container, options ProcessListOptions) (ProcessList, error) {
return nil, nil
}
// updateContainer is the Noop agent Container update implementation. It does nothing.
func (n *mockAgent) updateContainer(ctx context.Context, sandbox *Sandbox, c Container, resources specs.LinuxResources) error {
return nil
@@ -176,12 +171,12 @@ func (n *mockAgent) resumeContainer(ctx context.Context, sandbox *Sandbox, c Con
return nil
}
// configHypervisor is the Noop agent hypervisor configuration implementation. It does nothing.
func (n *mockAgent) configure(ctx context.Context, h hypervisor, id, sharePath string, config interface{}) error {
// configure is the Noop agent configuration implementation. It does nothing.
func (n *mockAgent) configure(ctx context.Context, h hypervisor, id, sharePath string, config KataAgentConfig) error {
return nil
}
func (n *mockAgent) configureFromGrpc(h hypervisor, id string, config interface{}) error {
func (n *mockAgent) configureFromGrpc(h hypervisor, id string, config KataAgentConfig) error {
return nil
}


@@ -41,7 +41,7 @@ func (m *mockHypervisor) startSandbox(ctx context.Context, timeout int) error {
return nil
}
func (m *mockHypervisor) stopSandbox(ctx context.Context) error {
func (m *mockHypervisor) stopSandbox(ctx context.Context, waitOnly bool) error {
return nil
}

Some files were not shown because too many files have changed in this diff Show More