Compare commits


67 Commits

Author SHA1 Message Date
Fabiano Fidêncio
af0fbb9460 Merge pull request #2723 from fidencio/2.2.1-branch-bump
# Kata Containers 2.2.1
2021-09-25 00:02:01 +02:00
Fabiano Fidêncio
bc48a58806 Merge pull request #2731 from fidencio/wip/stable-2.2-release-fix-using-vendored-sources
stable-2.2 | workflows: Fix the config file path for using vendored sources
2021-09-25 00:01:43 +02:00
Fabiano Fidêncio
d581cdab4e Merge pull request #2728 from fidencio/wip/stable-2.2-fix-wrong-tags-attribution
stable-2.2 | workflows: Fix tag attribution
2021-09-24 23:01:18 +02:00
Fabiano Fidêncio
52fdfc4fed workflows: Fix the config file path for using vendored sources
There's a typo in the file that should receive the output of `cargo
vendor`.  We should forward the output to `.cargo/config` instead of
`.cargo/vendor`.
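A minimal sketch of the corrected step, using a temporary stand-in for `src/agent` and a hand-written stanza standing in for the output `cargo vendor` prints to stdout (the real command needs network access):

```shell
# Hypothetical layout standing in for src/agent; the stanza below only
# illustrates the kind of output `cargo vendor` emits.
agent_dir=$(mktemp -d)
mkdir -p "${agent_dir}/.cargo"
# Append the vendoring config to .cargo/config (not .cargo/vendor):
printf '[source.crates-io]\nreplace-with = "vendored-sources"\n' \
  >> "${agent_dir}/.cargo/config"
```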

This was introduced by 21c8511630.

Backports: #2730
Fixes: #2729

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
(cherry picked from commit a525991c2c)
2021-09-24 20:29:15 +02:00
Fabiano Fidêncio
8d98e01414 workflows: Fix tag attribution
While releasing kata-containers 2.3.0-alpha1 we hit some issues, as
the tag attribution was done incorrectly.  We want an array of tags to
iterate over, but the current code just gets lost in the parentheses.

This issue was introduced in a156288c1f.

Fixes: #2725

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
(cherry picked from commit 39dcbaa672)
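The corrected logic builds a bash array of tags and appends either "latest" or "stable" depending on the version string; a self-contained sketch of just that part:

```shell
tag="2.2.1"
tags=($tag)
# alpha/rc pre-releases additionally get "latest"; official releases get "stable".
tags+=($([[ "$tag" =~ "alpha"|"rc" ]] && echo "latest" || echo "stable"))
```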
2021-09-24 20:07:55 +02:00
Fabiano Fidêncio
688cc8e2bd release: Kata Containers 2.2.1
- stable-2.2 | watcher: ensure we create target mount point for storage
- stable-2.2 | virtiofs: Create shared directory with 0700 mode, not 0750
- [backport]sandbox: Allow the device to be accessed, such as /dev/null and /dev/u…
- stable-2.2 | kata-deploy: Also provide "stable" & "latest" tags
- stable-2.2 | runtime: tracing: Fix logger passed in newContainer
- stable-2.2 | runtime: tracing: Use root context to stop tracing
- packaging: Backport QEMU's GitLab switch to 5.1.x
- stable-2.2 | workflows,release: Upload the vendored cargo code
- backport: Call agent shutdown test only in the correspondent CI_JOB
- packaging: Backport QEMU's switch to GitLab repos
- stable-2.2 | virtcontainers: fc: parse vcpuID correctly
- shimv2: Backport fixes for #2527
- backport-2.2: remove default config for arm64.
- stable-2.2 | versions: Upgrade to Cloud Hypervisor v18.0
- [backport]sandbox: Add device permissions such as /dev/null to cgroup
- [backport] runtime: Fix README link
- [backport] snap: Test variable instead of executing "branch"

d9b41fc5 watcher: ensure we create target mount point for storage
2b6327ac kata-deploy: Add more info about the stable tag
5256e085 kata-deploy: Improve README
02b46268 kata-deploy: Remove qemu-virtiofs runtime class
1b3058dd release: update the kata-deploy yaml files accordingly
98e2e935 kata-deploy: Add "stable" info to the README
8f25c7da kata-deploy: Update the README
84da2f8d workflows: Add "stable" & "latest" tags to kata-deploy
5c76f1c6 packaging: Backport QEMU's GitLab switch to 5.1.x
ba6fc328 packaging: Backport QEMU's switch to GitLab repos
d5f5da43 workflows,release: Upload the vendored cargo code
017cd3c5 ci: Call agent shutdown test only in the correspondent CI_JOB
2ca867da runtime: Add container field to logs
f4da502c shimv2: add information to method comment
16164241 shimv2: add logging to shimv2 api calls
25c7e118 virtiofs: Create shared directory with 0700 mode, not 0750
4c5bf057 virtcontainers: fc: parse vcpuID correctly
b3e620db runtime: tracing: Fix logger passed in newContainer
98c2ca13 runtime: tracing: Use root context to stop tracing
0481c507 backport-2.2: remove default config for arm64.
56920bc9 sandbox: Allow the device to be accessed, such as /dev/null and /dev/urandom
a1874ccd virtcontainers: clh: Revert the workaround incorrect default values
c2c65050 virtcontainers: clh: Re-generate the client code
7ee43f94 versions: Upgrade to Cloud Hypervisor v18.0
1792a9fe runtime: Fix README link
807cc8a3 sandbox: Add device permissions such as /dev/null to cgroup
5987f3b5 snap: Test variable instead of executing "branch"

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-09-24 12:34:35 +02:00
Fabiano Fidêncio
ebc23df752 Merge pull request #2714 from egernst/watcher-fixup-backport
stable-2.2 | watcher: ensure we create target mount point for storage
2021-09-24 09:32:29 +02:00
Fabiano Fidêncio
e58fabfc20 Merge pull request #2598 from c3d/backport/2589-virtiofsd-perms-perms
stable-2.2 | virtiofs: Create shared directory with 0700 mode, not 0750
2021-09-24 09:16:59 +02:00
Peng Tao
feb06dad8a Merge pull request #2623 from Bevisy/stable-2.2-2615-bp
[backport]sandbox: Allow the device to be accessed, such as /dev/null and /dev/u…
2021-09-24 14:04:36 +08:00
Eric Ernst
d9b41fc583 watcher: ensure we create target mount point for storage
We would only create the target when updating files. We need to make
sure that we create the target if the source is a directory. Without
this, we'll fail to start a container that utilizes an empty configmap,
for example.

Add unit tests for this.

Fixes: #2638

Signed-off-by: Eric Ernst <eric_ernst@apple.com>
2021-09-23 15:45:57 -07:00
Julio Montes
7852b9f8e1 Merge pull request #2711 from fidencio/wip/stable-2.2-kata-deploy-use-stable-and-latest-tags
stable-2.2 | kata-deploy: Also provide "stable" & "latest" tags
2021-09-23 12:18:00 -05:00
Chelsea Mafrica
83f219577d Merge pull request #2668 from cmaf/tracing-newContainer-logger-bp-2.2
stable-2.2 | runtime: tracing: Fix logger passed in newContainer
2021-09-23 09:58:14 -07:00
Chelsea Mafrica
97421afe17 Merge pull request #2664 from cmaf/tracing-stop-rootctx-bp-2.2
stable-2.2 | runtime: tracing: Use root context to stop tracing
2021-09-23 09:57:57 -07:00
Fabiano Fidêncio
2b6327ac37 kata-deploy: Add more info about the stable tag
Let's make it as clear as possible for the user that if they go for a
tagged version of kata-deploy, e.g. 2.2.1, they'll have the kata runtime
2.2.1 deployed on their cluster.

Suggested-by: Eric Adams <eric.adams@intel.com>
Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
(cherry picked from commit 3bdcfaa658)
2021-09-23 14:05:17 +02:00
Fabiano Fidêncio
5256e0852c kata-deploy: Improve README
Let's add more instructions to the README in order to make clear to the
reader how to check whether kata-deploy is ready, or whether they have
to wait before proceeding with the next instruction.

Suggested-by: Eric Adams <eric.adams@intel.com>
Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
(cherry picked from commit 41c590fa0a)
2021-09-23 14:04:57 +02:00
Fabiano Fidêncio
02b46268f4 kata-deploy: Remove qemu-virtiofs runtime class
There's only one QEMU runtime class deployed as part of kata-deploy, and
it includes virtiofs support (which has been the default for quite some
time already).  Knowing this, let's just remove the `qemu-virtiofs`
runtime class definition.

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
(cherry picked from commit debf3c9fe9)
2021-09-23 14:04:50 +02:00
Fabiano Fidêncio
1b3058dd24 release: update the kata-deploy yaml files accordingly
Let's teach our `update-repository-version.sh` script to properly update
the kata-deploy tags on both kata-deploy and kata-cleanup yaml files.

The 3 scenarios that we're dealing with, based on which branch we're
targeting, are:
```
 1) [main] ------> [main]        NO-OP
   "alpha0"       "alpha1"

                   +----------------+----------------+
                   |      from      |       to       |
  -----------------+----------------+----------------+
  kata-deploy      | "latest"       | "latest"       |
  -----------------+----------------+----------------+
  kata-deploy-base | "stable"       | "stable"       |
  -----------------+----------------+----------------+

 2) [main] ------> [stable] Update kata-deploy and
   "alpha2"         "rc0"   get rid of kata-deploy-base

                   +----------------+----------------+
                   |      from      |       to       |
  -----------------+----------------+----------------+
  kata-deploy      | "latest"       | "rc0"          |
  -----------------+----------------+----------------+
  kata-deploy-base | "stable"       | REMOVED        |
  -----------------+----------------+----------------+

 3) [stable] ------> [stable]    Update kata-deploy
    "x.y.z"         "x.y.(z+1)"

                   +----------------+----------------+
                   |      from      |       to       |
  -----------------+----------------+----------------+
  kata-deploy      | "x.y.z"        | "x.y.(z+1)"    |
  -----------------+----------------+----------------+
  kata-deploy-base | NON-EXISTENT   | NON-EXISTENT   |
  -----------------+----------------+----------------+
```

And we can easily cover those 3 cases only with the information about
the "${target_branch}" and the "${new_version}", where:
* case 1) if "${target_branch}" is "main" *and* "${new_version}"
  contains "alpha", do nothing
* case 2) if "${target_branch}" is "main" *and* "${new_version}"
  contains "rc":
  * change the kata-deploy & kata-cleanup tags from "latest" to
    "${new_version}".
  * delete the kata-deploy-stable & kata-cleanup-stable files.
* case 3) if the "${target_branch}" contains "stable":
  * change the kata-deploy & kata-cleanup tags from "${current_version}"
    to "${new_version}".
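Condensed into shell, the dispatch above might look like this (variable names taken from the message; a sketch, not the actual script):

```shell
# Example inputs; in the real script these come from the release tooling.
target_branch="stable-2.2"
new_version="2.2.1"
current_version="2.2.0"

if [ "${target_branch}" = "main" ] && [[ "${new_version}" == *alpha* ]]; then
  action="no-op"                                       # case 1
elif [ "${target_branch}" = "main" ] && [[ "${new_version}" == *rc* ]]; then
  action="retag latest -> ${new_version}"              # case 2 (+ drop -stable files)
elif [[ "${target_branch}" == *stable* ]]; then
  action="retag ${current_version} -> ${new_version}"  # case 3
fi
```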

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
(cherry picked from commit 43a72d76e2)
2021-09-23 14:04:44 +02:00
Fabiano Fidêncio
98e2e93552 kata-deploy: Add "stable" info to the README
Similar to the instructions we have for the "latest" images, let's also
add instructions about the "stable" images.

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
(cherry picked from commit ea9b2f9c92)
2021-09-23 14:04:38 +02:00
Fabiano Fidêncio
8f25c7da11 kata-deploy: Update the README
Let's just point to our repo URLs rather than assume users using
kata-deploy will have our repo cloned.

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
(cherry picked from commit e541105680)
2021-09-23 14:04:29 +02:00
Fabiano Fidêncio
84da2f8ddc workflows: Add "stable" & "latest" tags to kata-deploy
When releasing a tarball, let's *also* add the "stable" & "latest" tags
to the kata-deploy image.

The "stable" tag refers to any official release, while the "latest" tag
refers to any pre-release / release candidate.

Fixes: #2302

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
(cherry picked from commit a156288c1f)
2021-09-23 14:01:33 +02:00
Fabiano Fidêncio
de0e3915b7 Merge pull request #2702 from Jakob-Naucke/backport-qemu-gitlab
packaging: Backport QEMU's GitLab switch to 5.1.x
2021-09-23 12:59:17 +02:00
Jakob Naucke
5c76f1c65a packaging: Backport QEMU's GitLab switch to 5.1.x
This brings #2699 to 5.1.x for ARM. Also add a `no_patches.txt` for
5.1.0, which was apparently missing.

Fixes: #2701
Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-09-23 11:11:45 +02:00
Fabiano Fidêncio
522a53010c Merge pull request #2690 from fidencio/wip/stable-2.2-upload-cargo-vendored-tarball
stable-2.2 | workflows,release: Upload the vendored cargo code
2021-09-22 22:07:08 +02:00
Julio Montes
852fc53351 Merge pull request #2688 from GabyCT/shutdown
backport: Call agent shutdown test only in the correspondent CI_JOB
2021-09-22 09:53:14 -05:00
Julio Montes
e0a27b5e90 Merge pull request #2699 from Jakob-Naucke/backport-qemu-gitlab
packaging: Backport QEMU's switch to GitLab repos
2021-09-22 09:19:16 -05:00
Jakob Naucke
ba6fc32804 packaging: Backport QEMU's switch to GitLab repos
QEMU's submodule checkout from git.qemu.org can fail. On QEMU 6.x, this
is not a problem because they moved to GitLab. However, we use QEMU 5.2
on stable-2.2, which can be a problem when no cached QEMU is used.
Backport QEMU's switch.

Fixes: #2698
Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-09-22 14:59:35 +02:00
Fabiano Fidêncio
d5f5da4323 workflows,release: Upload the vendored cargo code
As part of the release, let's also upload a tarball with the vendored
cargo code.  By doing this we allow distros, which usually don't have
internet access while performing their builds, to just add the vendored
code as a second source, making life slightly easier for the downstream
maintainers*.

Fixes: #1203
Backports: #2573

*: The current workflow requires the downstream maintainer to download
the tarball, unpack it, run `cargo vendor`, create the tarball, etc.
Although this doesn't look like a ridiculous amount of work, it's better
if we can have it in an automated fashion.
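The upload step derives the release tag from `GITHUB_REF` and names the vendored tarball after it; a sketch of just that naming logic (example value assumed):

```shell
GITHUB_REF="refs/tags/2.2.1"   # example value; set by GitHub Actions at runtime
# Strip the "refs/tags/" prefix to recover the tag name.
tag=$(echo "$GITHUB_REF" | cut -d/ -f3-)
tarball="kata-containers-${tag}-vendor.tar.gz"
```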

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
(cherry picked from commit 21c8511630)
2021-09-21 21:48:58 +02:00
Gabriela Cervantes
017cd3c53c ci: Call agent shutdown test only in the correspondent CI_JOB
The agent shutdown test should only run in the CRI_CONTAINERD_K8S_MINIMAL
CI job, which is the only one where tracing testing is enabled; however,
the test was being triggered in multiple CI jobs where it should not run.
This PR fixes that issue.

Fixes #2683

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2021-09-21 17:01:09 +00:00
Chelsea Mafrica
484af1a559 Merge pull request #2678 from nubificus/stable-2.2-fix_fc_vcpu_thread
stable-2.2 | virtcontainers: fc: parse vcpuID correctly
2021-09-20 09:46:07 -07:00
Chelsea Mafrica
a572a6ebf8 Merge pull request #2679 from c3d/backport/2527-adding-debugging-msgs
shimv2: Backport fixes for #2527
2021-09-20 09:42:53 -07:00
Snir Sheriber
2ca867da7b runtime: Add container field to logs
and unified field naming

Signed-off-by: Snir Sheriber <ssheribe@redhat.com>

Backport from commit 0c7789fad6
Signed-off-by: Christophe de Dinechin <dinechin@redhat.com>
2021-09-20 11:04:09 +02:00
Snir Sheriber
f4da502c4f shimv2: add information to method comment
Add a comment to explicitly mention that the method is a binary call.

Signed-off-by: Snir Sheriber <ssheribe@redhat.com>

Backport from commit 72e3538e36
Signed-off-by: Christophe de Dinechin <dinechin@redhat.com>
2021-09-20 11:03:45 +02:00
Snir Sheriber
16164241df shimv2: add logging to shimv2 api calls
and also fetch and log container id from the request

Fixes: #2527
Signed-off-by: Snir Sheriber <ssheribe@redhat.com>

Backport from commit 8dadca9cd1
Signed-off-by: Christophe de Dinechin <dinechin@redhat.com>
2021-09-20 11:02:35 +02:00
Christophe de Dinechin
25c7e1181a virtiofs: Create shared directory with 0700 mode, not 0750
A discussion on the Linux kernel mailing list [1] exposed that virtiofsd makes a
core assumption that the file systems being shared are not accessible by any
non-privileged user. We currently create the `shared` directory in the sandbox
with the default `0750` permissions, which give read and directory-traversal
access to the group. There is no good reason for a non-root user to access
the shared directory, and this is potentially dangerous.
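The fix amounts to creating the shared directory with mode 0700 instead of the default; a shell analogue (GNU `stat` assumed for the permission check):

```shell
# Create the shared directory with mode 0700: owner-only access,
# no read or traversal rights for group or others.
shared="$(mktemp -d)/shared"
mkdir -m 0700 "$shared"
mode=$(stat -c %a "$shared")
```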

Fixes: #2589

[1]: https://lore.kernel.org/linux-fsdevel/YTI+k29AoeGdX13Q@redhat.com/

Signed-off-by: Christophe de Dinechin <dinechin@redhat.com>
2021-09-20 10:54:18 +02:00
Anastassios Nanos
4c5bf0576b virtcontainers: fc: parse vcpuID correctly
In getThreadIDs(), the cpuID variable is derived from a string that
already contains a whitespace. As a result, strings.SplitAfter returns
the cpuID with a leading space. This makes any Go string-to-int
conversion fail (strconv.ParseInt() in our case). This patch makes sure
the leading space character is removed so the string passed to
strconv.ParseInt() is "CPUID" and not " CPUID".

This has been caused by a change in the naming scheme of vcpu threads
for Firecracker after v0.19.1.

Fixes: #2592

Signed-off-by: Anastassios Nanos <ananos@nubificus.co.uk>
2021-09-18 08:10:13 +00:00
Chelsea Mafrica
b3e620dbcf runtime: tracing: Fix logger passed in newContainer
Change logger in Trace call in newContainer from sandbox.Logger() to
nil. Passing nil will cause an error to be logged by kataTraceLogger
instead of the sandbox logger, which will avoid having the log message
report it as part of the sandbox subsystem when it is part of the
container subsystem.

The kataTraceLogger will not log it as related to the container
subsystem, but since the container logger has not been created at this
point, and we already use the kataTraceLogger in other instances where a
subsystem's logger has not been created yet, this PR makes the call
consistent with other code.

Backport of #2666
Fixes #2667

Signed-off-by: Chelsea Mafrica <chelsea.e.mafrica@intel.com>
2021-09-16 16:30:29 -07:00
Chelsea Mafrica
98c2ca13c1 runtime: tracing: Use root context to stop tracing
Call StopTracing with s.rootCtx, which is the root context for tracing,
instead of s.ctx, which is parent to a subset of trace spans.

Backport of #2662

Fixes #2663

Signed-off-by: Chelsea Mafrica <chelsea.e.mafrica@intel.com>
2021-09-16 11:19:40 -07:00
Fabiano Fidêncio
a97c9063db Merge pull request #2642 from jongwu/qemu_mak_2.2
backport-2.2: remove default config for arm64.
2021-09-16 07:21:32 +02:00
Jianyong Wu
0481c5070c backport-2.2: remove default config for arm64.
The current default configs in QEMU for arm64 don't suit QEMU
version 5.1+, so remove them here.

Fixes: #2595
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
2021-09-15 10:07:13 +08:00
Samuel Ortiz
64504061c8 Merge pull request #2619 from likebreath/0913/backport_clh_v18.0
stable-2.2 | versions: Upgrade to Cloud Hypervisor v18.0
2021-09-14 12:02:50 +02:00
Binbin Zhang
56920bc943 sandbox: Allow the device to be accessed, such as /dev/null and /dev/urandom
If a device, such as /dev/null or /dev/urandom, has no permission set,
it needs to be added into the cgroup.

Fixes: #2615
Backport: #2616

Signed-off-by: Binbin Zhang <binbin36520@gmail.com>
2021-09-14 10:33:49 +08:00
Bo Chen
a1874ccd62 virtcontainers: clh: Revert the workaround incorrect default values
Given the fix to the bugs of the openapi spec file is included in the
Cloud Hypervisor v18.0 [1], this patch reverts the workaround we carried
in the CLH driver.

This reverts commit 932ee41b3f.

[1] https://github.com/cloud-hypervisor/cloud-hypervisor/pull/3029

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit f785ff0bf2)
2021-09-13 14:17:58 -07:00
Bo Chen
c2c650500b virtcontainers: clh: Re-generate the client code
This patch re-generates the client code for Cloud Hypervisor v18.0.
Note: The client code of cloud-hypervisor's (CLH) OpenAPI is
automatically generated by openapi-generator [1-2].

[1] https://github.com/OpenAPITools/openapi-generator
[2] https://github.com/kata-containers/kata-containers/blob/main/src/runtime/virtcontainers/pkg/cloud-hypervisor/README.md

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit 0e0e59dc5f)
2021-09-13 14:17:58 -07:00
Bo Chen
7ee43f9468 versions: Upgrade to Cloud Hypervisor v18.0
Highlights from the Cloud Hypervisor release v18.0: 1) Experimental User
Device (vfio-user) support; 2) Migration support for vhost-user devices;
3) VHDX disk image support; 4) Device passthrough on MSHV hypervisor;
5) AArch64 support for virtio-mem; 6) Live migration on MSHV hypervisor;
7) AArch64 CPU topology support; 8) Power button support on AArch64; 9)
Various bug fixes on PTY, TTY, signal handling, and live migration on
AArch64.

Details can be found at: https://github.com/cloud-hypervisor/cloud-hypervisor/releases/tag/v18.0

Fixes: #2543

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit f0b5331430)
2021-09-13 14:17:58 -07:00
Samuel Ortiz
eedf139076 Merge pull request #2608 from Bevisy/main-2539-bp
[backport]sandbox: Add device permissions such as /dev/null to cgroup
2021-09-13 19:07:17 +02:00
Fabiano Fidêncio
54a6890c3c Merge pull request #2614 from sameo/stable-2.2
[backport] runtime: Fix README link
2021-09-13 17:45:07 +02:00
Samuel Ortiz
1792a9fe11 runtime: Fix README link
The LICENSE file lives in the project's root.

Fixes #2612

Signed-off-by: Samuel Ortiz <s.ortiz@apple.com>
2021-09-11 09:57:49 +02:00
Julio Montes
9bf95279be Merge pull request #2588 from devimc/2021-09-07/backport/fixSnap
[backport] snap: Test variable instead of executing "branch"
2021-09-10 14:44:55 -05:00
Binbin Zhang
807cc8a3a5 sandbox: Add device permissions such as /dev/null to cgroup
Adds the default Unix devices, such as /dev/null and /dev/urandom, to
the container's resource cgroup spec.

Fixes: #2539
Backports: #2603

Signed-off-by: Binbin Zhang <binbin36520@gmail.com>
2021-09-10 17:33:26 +08:00
David Gibson
5987f3b5e1 snap: Test variable instead of executing "branch"
In snapcraft.yaml we have a case statement on $(branch) - that is, on
the output of executing a command named "branch".  From the selections
it appears that what it actually wants is to simply select on the
contents of the $branch variable, so it should be ${branch} instead.
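The difference is easy to demonstrate; a minimal sketch (values illustrative):

```shell
branch="v5.1.0"
# "$(branch)" would *execute* a command named "branch" and use its output;
# "${branch}" expands the shell variable, which is what the snap build wants.
case "${branch}" in
  "v5.1.0") result="copy-qemu-configs" ;;
  *)        result="skip" ;;
esac
```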

fixes #2558

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2021-09-07 09:37:17 -05:00
Fabiano Fidêncio
caafd0f952 Merge pull request #2541 from fidencio/2.2.0-branch-bump
# Kata Containers 2.2.0
2021-09-01 00:33:25 +02:00
Fabiano Fidêncio
800126b272 release: Kata Containers 2.2.0
- runtime: drop qemu-lite support
- stable-2.2 | virtcontainers: clh: Upgrade to the openapi-generator v5.2.1
- backport ci: Temporarily skip agent shutdown test on s390x
- backport: build_image: Fix error soft link about initrd.img

dca35c17 docs: remove mentioning of qemu-lite
0bdfdad2 runtime: drop qemu-lite support
60155756 runtime: fix default hypervisor path
ca9e6538 ci: Temporarily skip agent shutdown test on s390x
938b01ae virtcontainers: clh: Workaround incorrect default values
abd708e8 virtcontainers: clh: Fix the unit test
61babd45 virtcontainers: clh: Use constructors to ensure proper default value
59c51f62 virtcontainers: clh: Migrate to use the updated client APIs
c1f260cc virtcontainers: clh: Re-generate the client code
4cd6909f virtcontainers: clh: Upgrade to the openapi-generator v5.2.1
efa2d54e build_image: Fix error soft link about initrd.img

Signed-off-by: Fabiano Fidêncio <fidencio@redhat.com>
2021-08-31 18:44:03 +02:00
Archana Shinde
b1372b353f Merge pull request #2533 from bergwolf/qemu-lite
runtime: drop qemu-lite support
2021-08-31 07:39:24 -07:00
Peng Tao
dca35c1730 docs: remove mentioning of qemu-lite
vm-templating should just work with upstream qemu v4.1.0 or above.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2021-08-31 10:17:12 +08:00
Peng Tao
0bdfdad236 runtime: drop qemu-lite support
As the project is not maintained and we have not been testing against it
for a long time.

Fixes: #2529
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2021-08-31 10:17:06 +08:00
Peng Tao
60155756f3 runtime: fix default hypervisor path
Should not be qemu-lite.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2021-08-31 10:16:57 +08:00
Fabiano Fidêncio
669888c339 Merge pull request #2525 from likebreath/0827/backport_clh_generator
stable-2.2 | virtcontainers: clh: Upgrade to the openapi-generator v5.2.1
2021-08-30 21:25:05 +02:00
GabyCT
cde008f441 Merge pull request #2531 from Jakob-Naucke/backport-s390x-skip-agent-shutdown-test
backport ci: Temporarily skip agent shutdown test on s390x
2021-08-30 09:25:50 -05:00
Peng Tao
7c866073f9 Merge pull request #2520 from Bevisy/stable-2.2-2503
backport: build_image: Fix error soft link about initrd.img
2021-08-30 20:16:55 +08:00
Jakob Naucke
ca9e6538e6 ci: Temporarily skip agent shutdown test on s390x
see https://github.com/kata-containers/tests/issues/3878 for tracking

Fixes: #2507
Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-08-30 14:14:43 +02:00
Bo Chen
938b01aedc virtcontainers: clh: Workaround incorrect default values
Two default values defined in 'cloud-hypervisor.yaml' have typos, and
this patch manually overwrites them with the correct values as a
workaround until the corresponding fix lands in Cloud Hypervisor
upstream.

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit 932ee41b3f)
2021-08-27 13:37:47 -07:00
Bo Chen
abd708e814 virtcontainers: clh: Fix the unit test
This patch fixes the unit tests over clh.go with the updated client code.

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit bff38e4f4d)
2021-08-27 13:37:47 -07:00
Bo Chen
61babd45ed virtcontainers: clh: Use constructors to ensure proper default value
With the updated openapi-generator, the client code now handles optional
attributes correctly and ensures that the right default values are
assigned.  This patch uses those constructors to make sure the proper
default values are used.

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit d967d3cb37)
2021-08-27 13:37:47 -07:00
Bo Chen
59c51f6201 virtcontainers: clh: Migrate to use the updated client APIs
The client code (and APIs) for Cloud Hypervisor has changed
dramatically due to the upgrade to `openapi-generator` v5.2.1.  This
patch migrates the Cloud Hypervisor driver in the kata-runtime to use
those updated APIs.

The main change in the client code is that it now uses pointer types
to represent "optional" attributes from the input openapi specification
file.

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit a6a2e525de)
2021-08-27 13:37:47 -07:00
Bo Chen
c1f260cc40 virtcontainers: clh: Re-generate the client code
This patch re-generates the client code for Cloud Hypervisor with the
updated `openapi-generator` v5.2.1.

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit 46eb07e14f)
2021-08-27 13:37:47 -07:00
Bo Chen
4cd6909f18 virtcontainers: clh: Upgrade to the openapi-generator v5.2.1
To improve the quality and correctness of the auto-generated code, this
patch upgrades the `openapi-generator` to its latest stable release,
v5.2.1.

Fixes: #2487

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit 80fba4d637)
2021-08-27 13:37:47 -07:00
Binbin Zhang
efa2d54e85 build_image: Fix error soft link about initrd.img
Fix the erroneous soft link for initrd.img.

Fixes #2503

Signed-off-by: Binbin Zhang <binbin36520@gmail.com>
2021-08-27 16:15:49 +08:00
44 changed files with 808 additions and 885 deletions

View File

@@ -100,10 +100,14 @@ jobs:
run: |
# tag the container image we created and push to DockerHub
tag=$(echo $GITHUB_REF | cut -d/ -f3-)
docker tag katadocker/kata-deploy-ci:${{steps.build-and-push-kata-deploy-ci.outputs.PKG_SHA}} katadocker/kata-deploy:${tag}
docker tag quay.io/kata-containers/kata-deploy-ci:${{steps.build-and-push-kata-deploy-ci.outputs.PKG_SHA}} quay.io/kata-containers/kata-deploy:${tag}
docker push katadocker/kata-deploy:${tag}
docker push quay.io/kata-containers/kata-deploy:${tag}
tags=($tag)
tags+=($([[ "$tag" =~ "alpha"|"rc" ]] && echo "latest" || echo "stable"))
for tag in ${tags[@]}; do \
docker tag katadocker/kata-deploy-ci:${{steps.build-and-push-kata-deploy-ci.outputs.PKG_SHA}} katadocker/kata-deploy:${tag} && \
docker tag quay.io/kata-containers/kata-deploy-ci:${{steps.build-and-push-kata-deploy-ci.outputs.PKG_SHA}} quay.io/kata-containers/kata-deploy:${tag} && \
docker push katadocker/kata-deploy:${tag} && \
docker push quay.io/kata-containers/kata-deploy:${tag}; \
done
upload-static-tarball:
needs: kata-deploy
@@ -127,3 +131,21 @@ jobs:
pushd $GITHUB_WORKSPACE
echo "uploading asset '${tarball}' for tag: ${tag}"
GITHUB_TOKEN=${{ secrets.GIT_UPLOAD_TOKEN }} hub release edit -m "" -a "${tarball}" "${tag}"
popd
upload-cargo-vendored-tarball:
needs: upload-static-tarball
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: generate-and-upload-tarball
run: |
pushd $GITHUB_WORKSPACE/src/agent
cargo vendor >> .cargo/config
popd
tag=$(echo $GITHUB_REF | cut -d/ -f3-)
tarball="kata-containers-$tag-vendor.tar.gz"
pushd $GITHUB_WORKSPACE
tar -cvzf "${tarball}" src/agent/.cargo/config src/agent/vendor
GITHUB_TOKEN=${{ secrets.GIT_UPLOAD_TOKEN }} hub release edit -m "" -a "${tarball}" "${tag}"
popd

View File

@@ -1 +1 @@
2.3.0-alpha0
2.2.1

View File

@@ -4,6 +4,6 @@
#
# This is the build root image for Kata Containers on OpenShift CI.
#
FROM registry.centos.org/centos:8
FROM centos:8
RUN yum -y update && yum -y install git sudo wget

View File

@@ -8,11 +8,14 @@
set -e
cidir=$(dirname "$0")
source "${cidir}/lib.sh"
export CI_JOB="${CI_JOB:-}"
clone_tests_repo
pushd ${tests_repo_dir}
.ci/run.sh
# temporary fix, see https://github.com/kata-containers/tests/issues/3878
[ "$(uname -m)" != "s390x" ] && tracing/test-agent-shutdown.sh
if [ "$(uname -m)" != "s390x" ] && [ "$CI_JOB" == "CRI_CONTAINERD_K8S_MINIMAL" ]; then
tracing/test-agent-shutdown.sh
fi
popd

View File

@@ -17,9 +17,10 @@
- `firecracker`
- `ACRN`
While `qemu` , `cloud-hypervisor` and `firecracker` work out of the box with installation of Kata,
some additional configuration is needed in case of `ACRN`.
While `qemu` and `cloud-hypervisor` work out of the box with installation of Kata,
some additional configuration is needed in case of `firecracker` and `ACRN`.
Refer to the following guides for additional configuration steps:
- [Kata Containers with Firecracker](https://github.com/kata-containers/documentation/wiki/Initial-release-of-Kata-Containers-with-Firecracker-support)
- [Kata Containers with ACRN Hypervisor](how-to-use-kata-containers-with-acrn.md)
## Advanced Topics

View File

@@ -3,7 +3,7 @@
This document describes how to set up a single-machine Kubernetes (k8s) cluster.
The Kubernetes cluster will use the
[CRI containerd plugin](https://github.com/containerd/containerd/tree/main/pkg/cri) and
[CRI containerd plugin](https://github.com/containerd/cri) and
[Kata Containers](https://katacontainers.io) to launch untrusted workloads.
## Requirements

View File

@@ -299,7 +299,7 @@ parts:
| xargs ./configure
# Copy QEMU configurations (Kconfigs)
case "$(branch)" in
case "${branch}" in
"v5.1.0")
cp -a ${kata_dir}/tools/packaging/qemu/default-configs/* default-configs
;;

View File

@@ -269,6 +269,19 @@ impl SandboxStorages {
let entry = Storage::new(storage)
.await
.with_context(|| "Failed to add storage")?;
// If the storage source is a directory, let's create the target mount point:
if entry.source_mount_point.as_path().is_dir() {
fs::create_dir_all(&entry.target_mount_point)
.await
.with_context(|| {
format!(
"Unable to mkdir all for {}",
entry.target_mount_point.display()
)
})?;
}
self.0.push(entry);
}
@@ -475,6 +488,85 @@ mod tests {
Ok((storage, src_path))
}
#[tokio::test]
async fn test_empty_sourcedir_check() {
    //skip_if_not_root!();
    let dir = tempfile::tempdir().expect("failed to create tempdir");
    let logger = slog::Logger::root(slog::Discard, o!());

    let src_path = dir.path().join("src");
    let dest_path = dir.path().join("dest");
    let src_filename = src_path.to_str().expect("failed to create src filename");
    let dest_filename = dest_path.to_str().expect("failed to create dest filename");

    std::fs::create_dir_all(src_filename).expect("failed to create path");

    let storage = protos::Storage {
        source: src_filename.to_string(),
        mount_point: dest_filename.to_string(),
        ..Default::default()
    };

    let mut entries = SandboxStorages {
        ..Default::default()
    };

    entries
        .add(std::iter::once(storage), &logger)
        .await
        .unwrap();

    assert!(entries.check(&logger).await.is_ok());
    assert_eq!(entries.0.len(), 1);
    assert_eq!(std::fs::read_dir(src_path).unwrap().count(), 0);
    assert_eq!(std::fs::read_dir(dest_path).unwrap().count(), 0);
    assert_eq!(std::fs::read_dir(dir.path()).unwrap().count(), 2);
}

#[tokio::test]
async fn test_single_file_check() {
    //skip_if_not_root!();
    let dir = tempfile::tempdir().expect("failed to create tempdir");
    let logger = slog::Logger::root(slog::Discard, o!());

    let src_file_path = dir.path().join("src.txt");
    let dest_file_path = dir.path().join("dest.txt");
    let src_filename = src_file_path
        .to_str()
        .expect("failed to create src filename");
    let dest_filename = dest_file_path
        .to_str()
        .expect("failed to create dest filename");

    let storage = protos::Storage {
        source: src_filename.to_string(),
        mount_point: dest_filename.to_string(),
        ..Default::default()
    };

    //create file
    fs::write(src_file_path, "original").unwrap();

    let mut entries = SandboxStorages::default();

    entries
        .add(std::iter::once(storage), &logger)
        .await
        .unwrap();

    assert!(entries.check(&logger).await.is_ok());
    assert_eq!(entries.0.len(), 1);

    // there should only be 2 files
    assert_eq!(std::fs::read_dir(dir.path()).unwrap().count(), 2);
    assert_eq!(fs::read_to_string(dest_file_path).unwrap(), "original");
}

#[tokio::test]
async fn test_watch_entries() {
    skip_if_not_root!();


@@ -27,7 +27,7 @@ to work seamlessly with both Docker and Kubernetes respectively.
The code is licensed under an Apache 2.0 license.
See [the license file](LICENSE) for further details.
See [the license file](../../LICENSE) for further details.
## Platform support


@@ -940,7 +940,7 @@ func (s *service) Shutdown(ctx context.Context, r *taskAPI.ShutdownRequest) (_ *
s.mu.Unlock()
span.End()
katatrace.StopTracing(s.ctx)
katatrace.StopTracing(s.rootCtx)
s.cancel()


@@ -1,7 +1,7 @@
# Kata test utilities
This package provides a small set of test utilities. See the
[GoDoc](https://pkg.go.dev/github.com/kata-containers/kata-containers/src/runtime/pkg/katatestutils)
[GoDoc](https://godoc.org/github.com/kata-containers/runtime/pkg/katatestutils)
for full details.
## Test Constraints
@@ -165,4 +165,4 @@ func TestOldKernelVersion(t *testing.T) {
### Full details
The public API is shown in [`constraints_api.go`](constraints_api.go) or
the [GoDoc](https://pkg.go.dev/github.com/kata-containers/kata-containers/src/runtime/pkg/katatestutils).
the [GoDoc](https://godoc.org/github.com/kata-containers/runtime/pkg/katatestutils).


@@ -129,7 +129,7 @@ func StopTracing(ctx context.Context) {
// Trace creates a new tracing span based on the specified name and parent context.
// It also accepts a logger to record nil context errors and a map of tracing tags.
// Tracing tag keys and values are strings.
func Trace(parent context.Context, logger *logrus.Entry, name string, tags ...map[string]string) (otelTrace.Span, context.Context) {
func Trace(parent context.Context, logger *logrus.Entry, name string, tags map[string]string) (otelTrace.Span, context.Context) {
if parent == nil {
if logger == nil {
logger = kataTraceLogger
@@ -139,13 +139,8 @@ func Trace(parent context.Context, logger *logrus.Entry, name string, tags ...ma
}
var otelTags []label.KeyValue
// do not append tags if tracing is disabled
if tracing {
for _, tagSet := range tags {
for k, v := range tagSet {
otelTags = append(otelTags, label.Key(k).String(v))
}
}
for k, v := range tags {
otelTags = append(otelTags, label.Key(k).String(v))
}
tracer := otel.Tracer("kata")
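The hunk above narrows `Trace` from variadic tag sets (`tags ...map[string]string`) to a single map, so the nested flattening loop collapses to one pass. A self-contained sketch of that flattening, with a plain `kv` struct standing in for OpenTelemetry's `label.KeyValue` (names here are illustrative, not the actual kata API):

```go
package main

import "fmt"

// kv is a stand-in for otel's label.KeyValue type.
type kv struct {
	key, value string
}

// tagsToKVs converts a single tag map into span attributes,
// matching the simplified single-loop shape in the diff above.
func tagsToKVs(tags map[string]string) []kv {
	var out []kv
	for k, v := range tags {
		out = append(out, kv{key: k, value: v})
	}
	return out
}

func main() {
	attrs := tagsToKVs(map[string]string{"source": "runtime"})
	fmt.Println(len(attrs))
}
```

Call sites that previously passed two maps now merge their tags into one map before calling `Trace`.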


@@ -29,12 +29,15 @@ import (
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/utils"
)
// acrnTracingTags defines tags for the trace span
var acrnTracingTags = map[string]string{
"source": "runtime",
"package": "virtcontainers",
"subsystem": "hypervisor",
"type": "acrn",
// tracingTags defines tags for the trace span
func (a *Acrn) tracingTags() map[string]string {
return map[string]string{
"source": "runtime",
"package": "virtcontainers",
"subsystem": "hypervisor",
"type": "acrn",
"sandbox_id": a.id,
}
}
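The diff above replaces a package-level tag map, with `sandbox_id` passed separately at every call site, by a per-instance method that bakes the ID in. A minimal, self-contained sketch of the pattern (the `Hypervisor` type here is a hypothetical stand-in for `Acrn`, `cloudHypervisor`, etc.):

```go
package main

import "fmt"

// Hypervisor stands in for the concrete hypervisor types in the diff.
type Hypervisor struct {
	id string
}

// tracingTags returns the static tags plus the instance's sandbox ID,
// so call sites no longer append {"sandbox_id": ...} by hand.
func (h *Hypervisor) tracingTags() map[string]string {
	return map[string]string{
		"source":     "runtime",
		"package":    "virtcontainers",
		"subsystem":  "hypervisor",
		"sandbox_id": h.id,
	}
}

func main() {
	h := &Hypervisor{id: "sandbox-123"}
	fmt.Println(h.tracingTags()["sandbox_id"]) // prints "sandbox-123"
}
```

This is why each `katatrace.Trace(...)` call in the hunks below shrinks from two tag arguments to `a.tracingTags()` alone.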
// Since ACRN is using the store in a quite abnormal way, let's first draw it back from store to here
@@ -156,7 +159,7 @@ func (a *Acrn) kernelParameters() string {
// Adds all capabilities supported by Acrn implementation of hypervisor interface
func (a *Acrn) capabilities(ctx context.Context) types.Capabilities {
span, _ := katatrace.Trace(ctx, a.Logger(), "capabilities", acrnTracingTags, map[string]string{"sandbox_id": a.id})
span, _ := katatrace.Trace(ctx, a.Logger(), "capabilities", a.tracingTags())
defer span.End()
return a.arch.capabilities()
@@ -281,7 +284,7 @@ func (a *Acrn) buildDevices(ctx context.Context, imagePath string) ([]Device, er
// setup sets the Acrn structure up.
func (a *Acrn) setup(ctx context.Context, id string, hypervisorConfig *HypervisorConfig) error {
span, _ := katatrace.Trace(ctx, a.Logger(), "setup", acrnTracingTags, map[string]string{"sandbox_id": a.id})
span, _ := katatrace.Trace(ctx, a.Logger(), "setup", a.tracingTags())
defer span.End()
err := hypervisorConfig.valid()
@@ -324,7 +327,7 @@ func (a *Acrn) setup(ctx context.Context, id string, hypervisorConfig *Hyperviso
}
func (a *Acrn) createDummyVirtioBlkDev(ctx context.Context, devices []Device) ([]Device, error) {
span, _ := katatrace.Trace(ctx, a.Logger(), "createDummyVirtioBlkDev", acrnTracingTags, map[string]string{"sandbox_id": a.id})
span, _ := katatrace.Trace(ctx, a.Logger(), "createDummyVirtioBlkDev", a.tracingTags())
defer span.End()
// Since acrn doesn't support hot-plug, dummy virtio-blk
@@ -347,7 +350,7 @@ func (a *Acrn) createSandbox(ctx context.Context, id string, networkNS NetworkNa
// Save the tracing context
a.ctx = ctx
span, ctx := katatrace.Trace(ctx, a.Logger(), "createSandbox", acrnTracingTags, map[string]string{"sandbox_id": a.id})
span, ctx := katatrace.Trace(ctx, a.Logger(), "createSandbox", a.tracingTags())
defer span.End()
if err := a.setup(ctx, id, hypervisorConfig); err != nil {
@@ -412,7 +415,7 @@ func (a *Acrn) createSandbox(ctx context.Context, id string, networkNS NetworkNa
// startSandbox will start the Sandbox's VM.
func (a *Acrn) startSandbox(ctx context.Context, timeoutSecs int) error {
span, ctx := katatrace.Trace(ctx, a.Logger(), "startSandbox", acrnTracingTags, map[string]string{"sandbox_id": a.id})
span, ctx := katatrace.Trace(ctx, a.Logger(), "startSandbox", a.tracingTags())
defer span.End()
if a.config.Debug {
@@ -458,7 +461,7 @@ func (a *Acrn) startSandbox(ctx context.Context, timeoutSecs int) error {
// waitSandbox will wait for the Sandbox's VM to be up and running.
func (a *Acrn) waitSandbox(ctx context.Context, timeoutSecs int) error {
span, _ := katatrace.Trace(ctx, a.Logger(), "waitSandbox", acrnTracingTags, map[string]string{"sandbox_id": a.id})
span, _ := katatrace.Trace(ctx, a.Logger(), "waitSandbox", a.tracingTags())
defer span.End()
if timeoutSecs < 0 {
@@ -472,7 +475,7 @@ func (a *Acrn) waitSandbox(ctx context.Context, timeoutSecs int) error {
// stopSandbox will stop the Sandbox's VM.
func (a *Acrn) stopSandbox(ctx context.Context, waitOnly bool) (err error) {
span, _ := katatrace.Trace(ctx, a.Logger(), "stopSandbox", acrnTracingTags, map[string]string{"sandbox_id": a.id})
span, _ := katatrace.Trace(ctx, a.Logger(), "stopSandbox", a.tracingTags())
defer span.End()
a.Logger().Info("Stopping acrn VM")
@@ -544,7 +547,7 @@ func (a *Acrn) updateBlockDevice(drive *config.BlockDrive) error {
}
func (a *Acrn) hotplugAddDevice(ctx context.Context, devInfo interface{}, devType deviceType) (interface{}, error) {
span, _ := katatrace.Trace(ctx, a.Logger(), "hotplugAddDevice", acrnTracingTags, map[string]string{"sandbox_id": a.id})
span, _ := katatrace.Trace(ctx, a.Logger(), "hotplugAddDevice", a.tracingTags())
defer span.End()
switch devType {
@@ -558,7 +561,7 @@ func (a *Acrn) hotplugAddDevice(ctx context.Context, devInfo interface{}, devTyp
}
func (a *Acrn) hotplugRemoveDevice(ctx context.Context, devInfo interface{}, devType deviceType) (interface{}, error) {
span, _ := katatrace.Trace(ctx, a.Logger(), "hotplugRemoveDevice", acrnTracingTags, map[string]string{"sandbox_id": a.id})
span, _ := katatrace.Trace(ctx, a.Logger(), "hotplugRemoveDevice", a.tracingTags())
defer span.End()
// Not supported. return success
@@ -567,7 +570,7 @@ func (a *Acrn) hotplugRemoveDevice(ctx context.Context, devInfo interface{}, dev
}
func (a *Acrn) pauseSandbox(ctx context.Context) error {
span, _ := katatrace.Trace(ctx, a.Logger(), "pauseSandbox", acrnTracingTags, map[string]string{"sandbox_id": a.id})
span, _ := katatrace.Trace(ctx, a.Logger(), "pauseSandbox", a.tracingTags())
defer span.End()
// Not supported. return success
@@ -576,7 +579,7 @@ func (a *Acrn) pauseSandbox(ctx context.Context) error {
}
func (a *Acrn) resumeSandbox(ctx context.Context) error {
span, _ := katatrace.Trace(ctx, a.Logger(), "resumeSandbox", acrnTracingTags, map[string]string{"sandbox_id": a.id})
span, _ := katatrace.Trace(ctx, a.Logger(), "resumeSandbox", a.tracingTags())
defer span.End()
// Not supported. return success
@@ -587,7 +590,7 @@ func (a *Acrn) resumeSandbox(ctx context.Context) error {
// addDevice will add extra devices to acrn command line.
func (a *Acrn) addDevice(ctx context.Context, devInfo interface{}, devType deviceType) error {
var err error
span, _ := katatrace.Trace(ctx, a.Logger(), "addDevice", acrnTracingTags, map[string]string{"sandbox_id": a.id})
span, _ := katatrace.Trace(ctx, a.Logger(), "addDevice", a.tracingTags())
defer span.End()
switch v := devInfo.(type) {
@@ -620,7 +623,7 @@ func (a *Acrn) addDevice(ctx context.Context, devInfo interface{}, devType devic
// getSandboxConsole builds the path of the console where we can read
// logs coming from the sandbox.
func (a *Acrn) getSandboxConsole(ctx context.Context, id string) (string, string, error) {
span, _ := katatrace.Trace(ctx, a.Logger(), "getSandboxConsole", acrnTracingTags, map[string]string{"sandbox_id": a.id})
span, _ := katatrace.Trace(ctx, a.Logger(), "getSandboxConsole", a.tracingTags())
defer span.End()
consoleURL, err := utils.BuildSocketPath(a.store.RunVMStoragePath(), id, acrnConsoleSocket)
@@ -640,14 +643,14 @@ func (a *Acrn) saveSandbox() error {
}
func (a *Acrn) disconnect(ctx context.Context) {
span, _ := katatrace.Trace(ctx, a.Logger(), "disconnect", acrnTracingTags, map[string]string{"sandbox_id": a.id})
span, _ := katatrace.Trace(ctx, a.Logger(), "disconnect", a.tracingTags())
defer span.End()
// Not supported.
}
func (a *Acrn) getThreadIDs(ctx context.Context) (vcpuThreadIDs, error) {
span, _ := katatrace.Trace(ctx, a.Logger(), "getThreadIDs", acrnTracingTags, map[string]string{"sandbox_id": a.id})
span, _ := katatrace.Trace(ctx, a.Logger(), "getThreadIDs", a.tracingTags())
defer span.End()
// Not supported. return success
@@ -665,7 +668,7 @@ func (a *Acrn) resizeVCPUs(ctx context.Context, reqVCPUs uint32) (currentVCPUs u
}
func (a *Acrn) cleanup(ctx context.Context) error {
span, _ := katatrace.Trace(ctx, a.Logger(), "cleanup", acrnTracingTags, map[string]string{"sandbox_id": a.id})
span, _ := katatrace.Trace(ctx, a.Logger(), "cleanup", a.tracingTags())
defer span.End()
return nil


@@ -38,6 +38,27 @@ func WithNewAgentFunc(ctx context.Context, f newAgentFuncType) context.Context {
return context.WithValue(ctx, newAgentFuncKey{}, f)
}
// ProcessListOptions contains the options used to list running
// processes inside the container
type ProcessListOptions struct {
// Format describes the output format to list the running processes.
// Formats are unrelated to ps(1) formats, only two formats can be specified:
// "json" and "table"
Format string
// Args contains the list of arguments to run ps(1) command.
// If Args is empty the agent will use "-ef" as options to ps(1).
Args []string
}
// ProcessList represents the list of running processes inside the container
type ProcessList []byte
const (
// SocketTypeVSOCK is a VSOCK socket type for talking to an agent.
SocketTypeVSOCK = "vsock"
)
// agent is the virtcontainers agent interface.
// Agents are running in the guest VM and handling
// communications between the host and guest.


@@ -33,12 +33,15 @@ import (
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/utils"
)
// clhTracingTags defines tags for the trace span
var clhTracingTags = map[string]string{
"source": "runtime",
"package": "virtcontainers",
"subsystem": "hypervisor",
"type": "clh",
// tracingTags defines tags for the trace span
func (clh *cloudHypervisor) tracingTags() map[string]string {
return map[string]string{
"source": "runtime",
"package": "virtcontainers",
"subsystem": "hypervisor",
"type": "clh",
"sandbox_id": clh.id,
}
}
//
@@ -193,7 +196,7 @@ var clhDebugKernelParams = []Param{
func (clh *cloudHypervisor) createSandbox(ctx context.Context, id string, networkNS NetworkNamespace, hypervisorConfig *HypervisorConfig) error {
clh.ctx = ctx
span, newCtx := katatrace.Trace(clh.ctx, clh.Logger(), "createSandbox", clhTracingTags, map[string]string{"sandbox_id": clh.id})
span, newCtx := katatrace.Trace(clh.ctx, clh.Logger(), "createSandbox", clh.tracingTags())
clh.ctx = newCtx
defer span.End()
@@ -240,7 +243,6 @@ func (clh *cloudHypervisor) createSandbox(ctx context.Context, id string, networ
clh.vmconfig.Memory = chclient.NewMemoryConfig(int64((utils.MemUnit(clh.config.MemorySize) * utils.MiB).ToBytes()))
// shared memory should be enabled if using vhost-user(kata uses virtiofsd)
clh.vmconfig.Memory.Shared = func(b bool) *bool { return &b }(true)
clh.vmconfig.Memory.HotplugMethod = func(s string) *string { return &s }("Acpi")
hostMemKb, err := getHostMemorySizeKb(procMemInfo)
if err != nil {
return nil
@@ -352,7 +354,7 @@ func (clh *cloudHypervisor) createSandbox(ctx context.Context, id string, networ
// startSandbox will start the VMM and boot the virtual machine for the given sandbox.
func (clh *cloudHypervisor) startSandbox(ctx context.Context, timeout int) error {
span, _ := katatrace.Trace(ctx, clh.Logger(), "startSandbox", clhTracingTags, map[string]string{"sandbox_id": clh.id})
span, _ := katatrace.Trace(ctx, clh.Logger(), "startSandbox", clh.tracingTags())
defer span.End()
ctx, cancel := context.WithTimeout(context.Background(), clhAPITimeout*time.Second)
@@ -519,7 +521,7 @@ func (clh *cloudHypervisor) hotPlugVFIODevice(device config.VFIODev) error {
}
func (clh *cloudHypervisor) hotplugAddDevice(ctx context.Context, devInfo interface{}, devType deviceType) (interface{}, error) {
span, _ := katatrace.Trace(ctx, clh.Logger(), "hotplugAddDevice", clhTracingTags, map[string]string{"sandbox_id": clh.id})
span, _ := katatrace.Trace(ctx, clh.Logger(), "hotplugAddDevice", clh.tracingTags())
defer span.End()
switch devType {
@@ -536,7 +538,7 @@ func (clh *cloudHypervisor) hotplugAddDevice(ctx context.Context, devInfo interf
}
func (clh *cloudHypervisor) hotplugRemoveDevice(ctx context.Context, devInfo interface{}, devType deviceType) (interface{}, error) {
span, _ := katatrace.Trace(ctx, clh.Logger(), "hotplugRemoveDevice", clhTracingTags, map[string]string{"sandbox_id": clh.id})
span, _ := katatrace.Trace(ctx, clh.Logger(), "hotplugRemoveDevice", clh.tracingTags())
defer span.End()
var deviceID string
@@ -701,7 +703,7 @@ func (clh *cloudHypervisor) resumeSandbox(ctx context.Context) error {
// stopSandbox will stop the Sandbox's VM.
func (clh *cloudHypervisor) stopSandbox(ctx context.Context, waitOnly bool) (err error) {
span, _ := katatrace.Trace(ctx, clh.Logger(), "stopSandbox", clhTracingTags, map[string]string{"sandbox_id": clh.id})
span, _ := katatrace.Trace(ctx, clh.Logger(), "stopSandbox", clh.tracingTags())
defer span.End()
clh.Logger().WithField("function", "stopSandbox").Info("Stop Sandbox")
return clh.terminate(ctx, waitOnly)
@@ -739,7 +741,11 @@ func (clh *cloudHypervisor) check() error {
}
func (clh *cloudHypervisor) getPids() []int {
return []int{clh.state.PID}
var pids []int
pids = append(pids, clh.state.PID)
return pids
}
func (clh *cloudHypervisor) getVirtioFsPid() *int {
@@ -747,7 +753,7 @@ func (clh *cloudHypervisor) getVirtioFsPid() *int {
}
func (clh *cloudHypervisor) addDevice(ctx context.Context, devInfo interface{}, devType deviceType) error {
span, _ := katatrace.Trace(ctx, clh.Logger(), "addDevice", clhTracingTags, map[string]string{"sandbox_id": clh.id})
span, _ := katatrace.Trace(ctx, clh.Logger(), "addDevice", clh.tracingTags())
defer span.End()
var err error
@@ -781,7 +787,7 @@ func (clh *cloudHypervisor) Logger() *log.Entry {
// Adds all capabilities supported by cloudHypervisor implementation of hypervisor interface
func (clh *cloudHypervisor) capabilities(ctx context.Context) types.Capabilities {
span, _ := katatrace.Trace(ctx, clh.Logger(), "capabilities", clhTracingTags, map[string]string{"sandbox_id": clh.id})
span, _ := katatrace.Trace(ctx, clh.Logger(), "capabilities", clh.tracingTags())
defer span.End()
clh.Logger().WithField("function", "capabilities").Info("get Capabilities")
@@ -792,7 +798,7 @@ func (clh *cloudHypervisor) capabilities(ctx context.Context) types.Capabilities
}
func (clh *cloudHypervisor) terminate(ctx context.Context, waitOnly bool) (err error) {
span, _ := katatrace.Trace(ctx, clh.Logger(), "terminate", clhTracingTags, map[string]string{"sandbox_id": clh.id})
span, _ := katatrace.Trace(ctx, clh.Logger(), "terminate", clh.tracingTags())
defer span.End()
pid := clh.state.PID
@@ -1116,7 +1122,6 @@ func (clh *cloudHypervisor) addNet(e Endpoint) error {
net := chclient.NewNetConfig()
net.Mac = &mac
net.Tap = &tapPath
net.VhostMode = func(s string) *string { return &s }("Client")
if clh.vmconfig.Net != nil {
*clh.vmconfig.Net = append(*clh.vmconfig.Net, *net)
} else {


@@ -36,10 +36,13 @@ import (
)
// tracingTags defines tags for the trace span
var containerTracingTags = map[string]string{
"source": "runtime",
"package": "virtcontainers",
"subsystem": "container",
func (c *Container) tracingTags() map[string]string {
return map[string]string{
"source": "runtime",
"package": "virtcontainers",
"subsystem": "container",
"container_id": c.id,
}
}
// https://github.com/torvalds/linux/blob/master/include/uapi/linux/major.h
@@ -616,12 +619,13 @@ func (c *Container) mountSharedDirMounts(ctx context.Context, sharedDirMounts, i
}
func (c *Container) unmountHostMounts(ctx context.Context) error {
span, ctx := katatrace.Trace(ctx, c.Logger(), "unmountHostMounts", containerTracingTags, map[string]string{"container_id": c.id})
span, ctx := katatrace.Trace(ctx, c.Logger(), "unmountHostMounts", c.tracingTags())
defer span.End()
for _, m := range c.mounts {
if m.HostPath != "" {
span, _ := katatrace.Trace(ctx, c.Logger(), "unmount", containerTracingTags, map[string]string{"container_id": c.id, "host-path": m.HostPath})
span, _ := katatrace.Trace(ctx, c.Logger(), "unmount", c.tracingTags())
katatrace.AddTag(span, "host-path", m.HostPath)
if err := syscall.Unmount(m.HostPath, syscall.MNT_DETACH|UmountNoFollow); err != nil {
c.Logger().WithFields(logrus.Fields{
@@ -749,7 +753,7 @@ func (c *Container) initConfigResourcesMemory() {
// newContainer creates a Container structure from a sandbox and a container configuration.
func newContainer(ctx context.Context, sandbox *Sandbox, contConfig *ContainerConfig) (*Container, error) {
span, ctx := katatrace.Trace(ctx, sandbox.Logger(), "newContainer", containerTracingTags, map[string]string{"container_id": contConfig.ID, "sandbox_id": sandbox.id})
span, ctx := katatrace.Trace(ctx, nil, "newContainer", sandbox.tracingTags())
defer span.End()
if !contConfig.valid() {
@@ -1045,7 +1049,7 @@ func (c *Container) start(ctx context.Context) error {
}
func (c *Container) stop(ctx context.Context, force bool) error {
span, ctx := katatrace.Trace(ctx, c.Logger(), "stop", containerTracingTags, map[string]string{"container_id": c.id})
span, ctx := katatrace.Trace(ctx, c.Logger(), "stop", c.tracingTags())
defer span.End()
// In case the container status has been updated implicitly because


@@ -680,6 +680,7 @@ to manage the container lifecycle through the rest of the
* [Container `DeviceInfo`](#container-deviceinfo)
* [`Process`](#process)
* [`ContainerStatus`](#containerstatus)
* [`ProcessListOptions`](#processlistoptions)
* [`VCContainer`](#vccontainer)
@@ -872,6 +873,22 @@ type ContainerStatus struct {
}
```
#### `ProcessListOptions`
```Go
// ProcessListOptions contains the options used to list running
// processes inside the container
type ProcessListOptions struct {
// Format describes the output format to list the running processes.
// Formats are unrelated to ps(1) formats, only two formats can be specified:
// "json" and "table"
Format string
// Args contains the list of arguments to run ps(1) command.
// If Args is empty the agent will use "-ef" as options to ps(1).
Args []string
}
```
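As a usage sketch of the struct documented above: the helper `effectiveArgs` below is hypothetical, but the `"-ef"` fallback it applies is the documented agent behavior when `Args` is empty.

```go
package main

import "fmt"

// ProcessListOptions mirrors the documented virtcontainers type.
type ProcessListOptions struct {
	Format string   // "json" or "table"
	Args   []string // arguments for ps(1)
}

// effectiveArgs applies the documented default: an empty Args
// falls back to "-ef". (Illustrative helper, not part of the API.)
func effectiveArgs(opts ProcessListOptions) []string {
	if len(opts.Args) == 0 {
		return []string{"-ef"}
	}
	return opts.Args
}

func main() {
	opts := ProcessListOptions{Format: "json"}
	fmt.Println(effectiveArgs(opts)) // prints "[-ef]"
}
```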
#### `VCContainer`
```Go
// VCContainer is the Container interface


@@ -41,12 +41,15 @@ import (
"github.com/sirupsen/logrus"
)
// fcTracingTags defines tags for the trace span
var fcTracingTags = map[string]string{
"source": "runtime",
"package": "virtcontainers",
"subsystem": "hypervisor",
"type": "firecracker",
// tracingTags defines tags for the trace span
func (fc *firecracker) tracingTags() map[string]string {
return map[string]string{
"source": "runtime",
"package": "virtcontainers",
"subsystem": "hypervisor",
"type": "firecracker",
"sandbox_id": fc.id,
}
}
type vmmState uint8
@@ -191,7 +194,7 @@ func (fc *firecracker) truncateID(id string) string {
func (fc *firecracker) createSandbox(ctx context.Context, id string, networkNS NetworkNamespace, hypervisorConfig *HypervisorConfig) error {
fc.ctx = ctx
span, _ := katatrace.Trace(ctx, fc.Logger(), "createSandbox", fcTracingTags, map[string]string{"sandbox_id": fc.id})
span, _ := katatrace.Trace(ctx, fc.Logger(), "createSandbox", fc.tracingTags())
defer span.End()
//TODO: check validity of the hypervisor config provided
@@ -234,7 +237,7 @@ func (fc *firecracker) createSandbox(ctx context.Context, id string, networkNS N
}
func (fc *firecracker) newFireClient(ctx context.Context) *client.Firecracker {
span, _ := katatrace.Trace(ctx, fc.Logger(), "newFireClient", fcTracingTags, map[string]string{"sandbox_id": fc.id})
span, _ := katatrace.Trace(ctx, fc.Logger(), "newFireClient", fc.tracingTags())
defer span.End()
httpClient := client.NewHTTPClient(strfmt.NewFormats())
@@ -277,18 +280,10 @@ func (fc *firecracker) getVersionNumber() (string, error) {
return "", fmt.Errorf("Running checking FC version command failed: %v", err)
}
return fc.parseVersion(string(data))
}
func (fc *firecracker) parseVersion(data string) (string, error) {
// Firecracker versions 0.25 and over contains multiline output on "version" command.
// So we have to check it and use first line of output to parse version.
lines := strings.Split(data, "\n")
var version string
fields := strings.Split(lines[0], " ")
fields := strings.Split(string(data), " ")
if len(fields) > 1 {
// The output format of `Firecracker --version` is as follows
// The output format of `Firecracker --verion` is as follows
// Firecracker v0.23.1
version = strings.TrimPrefix(strings.TrimSpace(fields[1]), "v")
return version, nil
@@ -312,7 +307,7 @@ func (fc *firecracker) checkVersion(version string) error {
// waitVMMRunning will wait for timeout seconds for the VMM to be up and running.
func (fc *firecracker) waitVMMRunning(ctx context.Context, timeout int) error {
span, _ := katatrace.Trace(ctx, fc.Logger(), "wait VMM to be running", fcTracingTags, map[string]string{"sandbox_id": fc.id})
span, _ := katatrace.Trace(ctx, fc.Logger(), "wait VMM to be running", fc.tracingTags())
defer span.End()
if timeout < 0 {
@@ -334,7 +329,7 @@ func (fc *firecracker) waitVMMRunning(ctx context.Context, timeout int) error {
}
func (fc *firecracker) fcInit(ctx context.Context, timeout int) error {
span, _ := katatrace.Trace(ctx, fc.Logger(), "fcInit", fcTracingTags, map[string]string{"sandbox_id": fc.id})
span, _ := katatrace.Trace(ctx, fc.Logger(), "fcInit", fc.tracingTags())
defer span.End()
var err error
@@ -409,7 +404,7 @@ func (fc *firecracker) fcInit(ctx context.Context, timeout int) error {
}
func (fc *firecracker) fcEnd(ctx context.Context, waitOnly bool) (err error) {
span, _ := katatrace.Trace(ctx, fc.Logger(), "fcEnd", fcTracingTags, map[string]string{"sandbox_id": fc.id})
span, _ := katatrace.Trace(ctx, fc.Logger(), "fcEnd", fc.tracingTags())
defer span.End()
fc.Logger().Info("Stopping firecracker VM")
@@ -436,7 +431,7 @@ func (fc *firecracker) fcEnd(ctx context.Context, waitOnly bool) (err error) {
}
func (fc *firecracker) client(ctx context.Context) *client.Firecracker {
span, _ := katatrace.Trace(ctx, fc.Logger(), "client", fcTracingTags, map[string]string{"sandbox_id": fc.id})
span, _ := katatrace.Trace(ctx, fc.Logger(), "client", fc.tracingTags())
defer span.End()
if fc.connection == nil {
@@ -503,7 +498,7 @@ func (fc *firecracker) fcJailResource(src, dst string) (string, error) {
}
func (fc *firecracker) fcSetBootSource(ctx context.Context, path, params string) error {
span, _ := katatrace.Trace(ctx, fc.Logger(), "fcSetBootSource", fcTracingTags, map[string]string{"sandbox_id": fc.id})
span, _ := katatrace.Trace(ctx, fc.Logger(), "fcSetBootSource", fc.tracingTags())
defer span.End()
fc.Logger().WithFields(logrus.Fields{"kernel-path": path,
"kernel-params": params}).Debug("fcSetBootSource")
@@ -524,7 +519,7 @@ func (fc *firecracker) fcSetBootSource(ctx context.Context, path, params string)
}
func (fc *firecracker) fcSetVMRootfs(ctx context.Context, path string) error {
span, _ := katatrace.Trace(ctx, fc.Logger(), "fcSetVMRootfs", fcTracingTags, map[string]string{"sandbox_id": fc.id})
span, _ := katatrace.Trace(ctx, fc.Logger(), "fcSetVMRootfs", fc.tracingTags())
defer span.End()
jailedRootfs, err := fc.fcJailResource(path, fcRootfs)
@@ -551,7 +546,7 @@ func (fc *firecracker) fcSetVMRootfs(ctx context.Context, path string) error {
}
func (fc *firecracker) fcSetVMBaseConfig(ctx context.Context, mem int64, vcpus int64, htEnabled bool) {
span, _ := katatrace.Trace(ctx, fc.Logger(), "fcSetVMBaseConfig", fcTracingTags, map[string]string{"sandbox_id": fc.id})
span, _ := katatrace.Trace(ctx, fc.Logger(), "fcSetVMBaseConfig", fc.tracingTags())
defer span.End()
fc.Logger().WithFields(logrus.Fields{"mem": mem,
"vcpus": vcpus,
@@ -567,7 +562,7 @@ func (fc *firecracker) fcSetVMBaseConfig(ctx context.Context, mem int64, vcpus i
}
func (fc *firecracker) fcSetLogger(ctx context.Context) error {
span, _ := katatrace.Trace(ctx, fc.Logger(), "fcSetLogger", fcTracingTags, map[string]string{"sandbox_id": fc.id})
span, _ := katatrace.Trace(ctx, fc.Logger(), "fcSetLogger", fc.tracingTags())
defer span.End()
fcLogLevel := "Error"
@@ -590,7 +585,7 @@ func (fc *firecracker) fcSetLogger(ctx context.Context) error {
}
func (fc *firecracker) fcSetMetrics(ctx context.Context) error {
span, _ := katatrace.Trace(ctx, fc.Logger(), "fcSetMetrics", fcTracingTags, map[string]string{"sandbox_id": fc.id})
span, _ := katatrace.Trace(ctx, fc.Logger(), "fcSetMetrics", fc.tracingTags())
defer span.End()
// listen to metrics file and transfer error info
@@ -745,7 +740,7 @@ func (fc *firecracker) fcInitConfiguration(ctx context.Context) error {
// In the context of firecracker, this will start the hypervisor,
// for configuration, but not yet start the actual virtual machine
func (fc *firecracker) startSandbox(ctx context.Context, timeout int) error {
span, _ := katatrace.Trace(ctx, fc.Logger(), "startSandbox", fcTracingTags, map[string]string{"sandbox_id": fc.id})
span, _ := katatrace.Trace(ctx, fc.Logger(), "startSandbox", fc.tracingTags())
defer span.End()
if err := fc.fcInitConfiguration(ctx); err != nil {
@@ -797,7 +792,7 @@ func fcDriveIndexToID(i int) string {
}
func (fc *firecracker) createDiskPool(ctx context.Context) error {
span, _ := katatrace.Trace(ctx, fc.Logger(), "createDiskPool", fcTracingTags, map[string]string{"sandbox_id": fc.id})
span, _ := katatrace.Trace(ctx, fc.Logger(), "createDiskPool", fc.tracingTags())
defer span.End()
for i := 0; i < fcDiskPoolSize; i++ {
@@ -835,7 +830,7 @@ func (fc *firecracker) umountResource(jailedPath string) {
// cleanup all jail artifacts
func (fc *firecracker) cleanupJail(ctx context.Context) {
span, _ := katatrace.Trace(ctx, fc.Logger(), "cleanupJail", fcTracingTags, map[string]string{"sandbox_id": fc.id})
span, _ := katatrace.Trace(ctx, fc.Logger(), "cleanupJail", fc.tracingTags())
defer span.End()
fc.umountResource(fcKernel)
@@ -858,7 +853,7 @@ func (fc *firecracker) cleanupJail(ctx context.Context) {
// stopSandbox will stop the Sandbox's VM.
func (fc *firecracker) stopSandbox(ctx context.Context, waitOnly bool) (err error) {
span, _ := katatrace.Trace(ctx, fc.Logger(), "stopSandbox", fcTracingTags, map[string]string{"sandbox_id": fc.id})
span, _ := katatrace.Trace(ctx, fc.Logger(), "stopSandbox", fc.tracingTags())
defer span.End()
return fc.fcEnd(ctx, waitOnly)
@@ -877,7 +872,7 @@ func (fc *firecracker) resumeSandbox(ctx context.Context) error {
}
func (fc *firecracker) fcAddVsock(ctx context.Context, hvs types.HybridVSock) {
span, _ := katatrace.Trace(ctx, fc.Logger(), "fcAddVsock", fcTracingTags, map[string]string{"sandbox_id": fc.id})
span, _ := katatrace.Trace(ctx, fc.Logger(), "fcAddVsock", fc.tracingTags())
defer span.End()
udsPath := hvs.UdsPath
@@ -897,7 +892,7 @@ func (fc *firecracker) fcAddVsock(ctx context.Context, hvs types.HybridVSock) {
}
func (fc *firecracker) fcAddNetDevice(ctx context.Context, endpoint Endpoint) {
span, _ := katatrace.Trace(ctx, fc.Logger(), "fcAddNetDevice", fcTracingTags, map[string]string{"sandbox_id": fc.id})
span, _ := katatrace.Trace(ctx, fc.Logger(), "fcAddNetDevice", fc.tracingTags())
defer span.End()
ifaceID := endpoint.Name()
@@ -953,7 +948,7 @@ func (fc *firecracker) fcAddNetDevice(ctx context.Context, endpoint Endpoint) {
}
func (fc *firecracker) fcAddBlockDrive(ctx context.Context, drive config.BlockDrive) error {
span, _ := katatrace.Trace(ctx, fc.Logger(), "fcAddBlockDrive", fcTracingTags, map[string]string{"sandbox_id": fc.id})
span, _ := katatrace.Trace(ctx, fc.Logger(), "fcAddBlockDrive", fc.tracingTags())
defer span.End()
driveID := drive.ID
@@ -979,7 +974,7 @@ func (fc *firecracker) fcAddBlockDrive(ctx context.Context, drive config.BlockDr
// Firecracker supports replacing the host drive used once the VM has booted up
func (fc *firecracker) fcUpdateBlockDrive(ctx context.Context, path, id string) error {
span, _ := katatrace.Trace(ctx, fc.Logger(), "fcUpdateBlockDrive", fcTracingTags, map[string]string{"sandbox_id": fc.id})
span, _ := katatrace.Trace(ctx, fc.Logger(), "fcUpdateBlockDrive", fc.tracingTags())
defer span.End()
// Use the global block index as an index into the pool of the devices
@@ -1003,7 +998,7 @@ func (fc *firecracker) fcUpdateBlockDrive(ctx context.Context, path, id string)
// addDevice will add extra devices to firecracker. Limited to configure before the
// virtual machine starts. Devices include drivers and network interfaces only.
func (fc *firecracker) addDevice(ctx context.Context, devInfo interface{}, devType deviceType) error {
span, _ := katatrace.Trace(ctx, fc.Logger(), "addDevice", fcTracingTags, map[string]string{"sandbox_id": fc.id})
span, _ := katatrace.Trace(ctx, fc.Logger(), "addDevice", fc.tracingTags())
defer span.End()
fc.state.RLock()
@@ -1073,7 +1068,7 @@ func (fc *firecracker) hotplugBlockDevice(ctx context.Context, drive config.Bloc
// hotplugAddDevice supported in Firecracker VMM
func (fc *firecracker) hotplugAddDevice(ctx context.Context, devInfo interface{}, devType deviceType) (interface{}, error) {
span, _ := katatrace.Trace(ctx, fc.Logger(), "hotplugAddDevice", fcTracingTags, map[string]string{"sandbox_id": fc.id})
span, _ := katatrace.Trace(ctx, fc.Logger(), "hotplugAddDevice", fc.tracingTags())
defer span.End()
switch devType {
@@ -1089,7 +1084,7 @@ func (fc *firecracker) hotplugAddDevice(ctx context.Context, devInfo interface{}
// hotplugRemoveDevice supported in Firecracker VMM
func (fc *firecracker) hotplugRemoveDevice(ctx context.Context, devInfo interface{}, devType deviceType) (interface{}, error) {
span, _ := katatrace.Trace(ctx, fc.Logger(), "hotplugRemoveDevice", fcTracingTags, map[string]string{"sandbox_id": fc.id})
span, _ := katatrace.Trace(ctx, fc.Logger(), "hotplugRemoveDevice", fc.tracingTags())
defer span.End()
switch devType {
@@ -1122,7 +1117,7 @@ func (fc *firecracker) disconnect(ctx context.Context) {
// Adds all capabilities supported by firecracker implementation of hypervisor interface
func (fc *firecracker) capabilities(ctx context.Context) types.Capabilities {
span, _ := katatrace.Trace(ctx, fc.Logger(), "capabilities", fcTracingTags, map[string]string{"sandbox_id": fc.id})
span, _ := katatrace.Trace(ctx, fc.Logger(), "capabilities", fc.tracingTags())
defer span.End()
var caps types.Capabilities
caps.SetBlockDeviceHotplugSupport()
@@ -1170,7 +1165,11 @@ func (fc *firecracker) getThreadIDs(ctx context.Context) (vcpuThreadIDs, error)
if len(cpus) != 2 {
return vcpuInfo, errors.Errorf("Invalid fc thread info: %v", comm)
}
cpuID, err := strconv.ParseInt(cpus[1], 10, 32)
// Remove the surrounding whitespace before parsing
cpuIdStr := strings.TrimSpace(cpus[1])
cpuID, err := strconv.ParseInt(cpuIdStr, 10, 32)
if err != nil {
return vcpuInfo, errors.Wrapf(err, "Invalid fc thread info: %v", comm)
}


@@ -52,15 +52,3 @@ func TestRevertBytes(t *testing.T) {
num := revertBytes(testNum)
assert.Equal(expectedNum, num)
}
func TestFCParseVersion(t *testing.T) {
assert := assert.New(t)
fc := firecracker{}
for rawVersion, v := range map[string]string{"Firecracker v0.23.1": "0.23.1", "Firecracker v0.25.0\nSupported snapshot data format versions: 0.23.0": "0.25.0"} {
parsedVersion, err := fc.parseVersion(rawVersion)
assert.NoError(err)
assert.Equal(parsedVersion, v)
}
}


@@ -68,6 +68,9 @@ const (
kernelParamDebugConsole = "agent.debug_console"
kernelParamDebugConsoleVPort = "agent.debug_console_vport"
kernelParamDebugConsoleVPortValue = "1026"
// Restricted permission for shared directory managed by virtiofs
sharedDirMode = os.FileMode(0700) | os.ModeDir
)
var (
@@ -516,7 +519,7 @@ func (k *kataAgent) setupSharedPath(ctx context.Context, sandbox *Sandbox) (err
// create shared path structure
sharePath := getSharePath(sandbox.id)
mountPath := getMountPath(sandbox.id)
if err := os.MkdirAll(sharePath, DirMode); err != nil {
if err := os.MkdirAll(sharePath, sharedDirMode); err != nil {
return err
}
if err := os.MkdirAll(mountPath, DirMode); err != nil {


@@ -33,13 +33,6 @@ var rootfsDir = "rootfs"
var systemMountPrefixes = []string{"/proc", "/sys"}
// mountTracingTags defines tags for the trace span
var mountTracingTags = map[string]string{
"source": "runtime",
"package": "virtcontainers",
"subsystem": "mount",
}
func mountLogger() *logrus.Entry {
return virtLog.WithField("subsystem", "mount")
}
@@ -247,7 +240,7 @@ func evalMountPath(source, destination string) (string, string, error) {
// * ensure the source exists
// * recursively create the destination
func moveMount(ctx context.Context, source, destination string) error {
span, _ := katatrace.Trace(ctx, nil, "moveMount", mountTracingTags)
span, _ := katatrace.Trace(ctx, nil, "moveMount", apiTracingTags)
defer span.End()
source, destination, err := evalMountPath(source, destination)
@@ -265,7 +258,7 @@ func moveMount(ctx context.Context, source, destination string) error {
// * recursively create the destination
// pgtypes stands for propagation types, which are shared, private, slave, and unbindable.
func bindMount(ctx context.Context, source, destination string, readonly bool, pgtypes string) error {
span, _ := katatrace.Trace(ctx, nil, "bindMount", mountTracingTags)
span, _ := katatrace.Trace(ctx, nil, "bindMount", apiTracingTags)
defer span.End()
span.SetAttributes(otelLabel.String("source", source), otelLabel.String("destination", destination))
@@ -302,7 +295,7 @@ func bindMount(ctx context.Context, source, destination string, readonly bool, p
// The mountflags should match the values used in the original mount() call,
// except for those parameters that you are trying to change.
func remount(ctx context.Context, mountflags uintptr, src string) error {
span, _ := katatrace.Trace(ctx, nil, "remount", mountTracingTags)
span, _ := katatrace.Trace(ctx, nil, "remount", apiTracingTags)
defer span.End()
span.SetAttributes(otelLabel.String("source", src))
@@ -327,7 +320,7 @@ func remountRo(ctx context.Context, src string) error {
// bindMountContainerRootfs bind mounts a container rootfs into a 9pfs shared
// directory between the guest and the host.
func bindMountContainerRootfs(ctx context.Context, shareDir, cid, cRootFs string, readonly bool) error {
span, _ := katatrace.Trace(ctx, nil, "bindMountContainerRootfs", mountTracingTags)
span, _ := katatrace.Trace(ctx, nil, "bindMountContainerRootfs", apiTracingTags)
defer span.End()
rootfsDest := filepath.Join(shareDir, cid, rootfsDir)
@@ -367,7 +360,7 @@ func isSymlink(path string) bool {
}
func bindUnmountContainerRootfs(ctx context.Context, sharedDir, cID string) error {
span, _ := katatrace.Trace(ctx, nil, "bindUnmountContainerRootfs", mountTracingTags)
span, _ := katatrace.Trace(ctx, nil, "bindUnmountContainerRootfs", apiTracingTags)
defer span.End()
span.SetAttributes(otelLabel.String("shared_dir", sharedDir), otelLabel.String("container_id", cID))
@@ -390,7 +383,7 @@ func bindUnmountContainerRootfs(ctx context.Context, sharedDir, cID string) erro
}
func bindUnmountAllRootfs(ctx context.Context, sharedDir string, sandbox *Sandbox) error {
span, ctx := katatrace.Trace(ctx, nil, "bindUnmountAllRootfs", mountTracingTags)
span, ctx := katatrace.Trace(ctx, nil, "bindUnmountAllRootfs", apiTracingTags)
defer span.End()
span.SetAttributes(otelLabel.String("shared_dir", sharedDir), otelLabel.String("sandbox_id", sandbox.id))


@@ -383,7 +383,7 @@ components:
id: id
hotplug_size: 1
hotplug_size: 3
hotplug_method: acpi
hotplug_method: Acpi
disks:
- path: path
num_queues: 7
@@ -540,7 +540,7 @@ components:
one_time_burst: 0
refill_time: 0
mac: mac
vhost_mode: client
vhost_mode: Client
iommu: false
vhost_socket: vhost_socket
vhost_user: false
@@ -563,7 +563,7 @@ components:
one_time_burst: 0
refill_time: 0
mac: mac
vhost_mode: client
vhost_mode: Client
iommu: false
vhost_socket: vhost_socket
vhost_user: false
@@ -688,7 +688,7 @@ components:
id: id
hotplug_size: 1
hotplug_size: 3
hotplug_method: acpi
hotplug_method: Acpi
disks:
- path: path
num_queues: 7
@@ -845,7 +845,7 @@ components:
one_time_burst: 0
refill_time: 0
mac: mac
vhost_mode: client
vhost_mode: Client
iommu: false
vhost_socket: vhost_socket
vhost_user: false
@@ -868,7 +868,7 @@ components:
one_time_burst: 0
refill_time: 0
mac: mac
vhost_mode: client
vhost_mode: Client
iommu: false
vhost_socket: vhost_socket
vhost_user: false
@@ -1053,7 +1053,7 @@ components:
id: id
hotplug_size: 1
hotplug_size: 3
hotplug_method: acpi
hotplug_method: Acpi
properties:
size:
format: int64
@@ -1068,7 +1068,7 @@ components:
default: false
type: boolean
hotplug_method:
default: acpi
default: Acpi
type: string
shared:
default: false
@@ -1236,7 +1236,7 @@ components:
one_time_burst: 0
refill_time: 0
mac: mac
vhost_mode: client
vhost_mode: Client
iommu: false
vhost_socket: vhost_socket
vhost_user: false
@@ -1272,7 +1272,7 @@ components:
vhost_socket:
type: string
vhost_mode:
default: client
default: Client
type: string
id:
type: string


@@ -8,7 +8,7 @@ Name | Type | Description | Notes
**HotplugSize** | Pointer to **int64** | | [optional]
**HotpluggedSize** | Pointer to **int64** | | [optional]
**Mergeable** | Pointer to **bool** | | [optional] [default to false]
**HotplugMethod** | Pointer to **string** | | [optional] [default to "acpi"]
**HotplugMethod** | Pointer to **string** | | [optional] [default to "Acpi"]
**Shared** | Pointer to **bool** | | [optional] [default to false]
**Hugepages** | Pointer to **bool** | | [optional] [default to false]
**HugepageSize** | Pointer to **int64** | | [optional]


@@ -13,7 +13,7 @@ Name | Type | Description | Notes
**QueueSize** | Pointer to **int32** | | [optional] [default to 256]
**VhostUser** | Pointer to **bool** | | [optional] [default to false]
**VhostSocket** | Pointer to **string** | | [optional]
**VhostMode** | Pointer to **string** | | [optional] [default to "client"]
**VhostMode** | Pointer to **string** | | [optional] [default to "Client"]
**Id** | Pointer to **string** | | [optional]
**Fd** | Pointer to **[]int32** | | [optional]
**RateLimiterConfig** | Pointer to [**RateLimiterConfig**](RateLimiterConfig.md) | | [optional]


@@ -36,7 +36,7 @@ func NewMemoryConfig(size int64) *MemoryConfig {
this.Size = size
var mergeable bool = false
this.Mergeable = &mergeable
var hotplugMethod string = "acpi"
var hotplugMethod string = "Acpi"
this.HotplugMethod = &hotplugMethod
var shared bool = false
this.Shared = &shared
@@ -52,7 +52,7 @@ func NewMemoryConfigWithDefaults() *MemoryConfig {
this := MemoryConfig{}
var mergeable bool = false
this.Mergeable = &mergeable
var hotplugMethod string = "acpi"
var hotplugMethod string = "Acpi"
this.HotplugMethod = &hotplugMethod
var shared bool = false
this.Shared = &shared


@@ -51,7 +51,7 @@ func NewNetConfig() *NetConfig {
this.QueueSize = &queueSize
var vhostUser bool = false
this.VhostUser = &vhostUser
var vhostMode string = "client"
var vhostMode string = "Client"
this.VhostMode = &vhostMode
return &this
}
@@ -75,7 +75,7 @@ func NewNetConfigWithDefaults() *NetConfig {
this.QueueSize = &queueSize
var vhostUser bool = false
this.VhostUser = &vhostUser
var vhostMode string = "client"
var vhostMode string = "Client"
this.VhostMode = &vhostMode
return &this
}


@@ -567,7 +567,7 @@ components:
default: false
hotplug_method:
type: string
default: "acpi"
default: "Acpi"
shared:
type: boolean
default: false
@@ -714,7 +714,7 @@ components:
type: string
vhost_mode:
type: string
default: "client"
default: "Client"
id:
type: string
fd:


@@ -38,12 +38,15 @@ import (
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/utils"
)
// qemuTracingTags defines tags for the trace span
var qemuTracingTags = map[string]string{
"source": "runtime",
"package": "virtcontainers",
"subsystem": "hypervisor",
"type": "qemu",
// tracingTags defines tags for the trace span
func (q *qemu) tracingTags() map[string]string {
return map[string]string{
"source": "runtime",
"package": "virtcontainers",
"subsystem": "hypervisor",
"type": "qemu",
"sandbox_id": q.id,
}
}
// romFile is the file name of the ROM that can be used for virtio-pci devices.
@@ -192,7 +195,7 @@ func (q *qemu) kernelParameters() string {
// Adds all capabilities supported by qemu implementation of hypervisor interface
func (q *qemu) capabilities(ctx context.Context) types.Capabilities {
span, _ := katatrace.Trace(ctx, q.Logger(), "capabilities", qemuTracingTags, map[string]string{"sandbox_id": q.id})
span, _ := katatrace.Trace(ctx, q.Logger(), "capabilities", q.tracingTags())
defer span.End()
return q.arch.capabilities()
@@ -222,7 +225,7 @@ func (q *qemu) qemuPath() (string, error) {
// setup sets the Qemu structure up.
func (q *qemu) setup(ctx context.Context, id string, hypervisorConfig *HypervisorConfig) error {
span, _ := katatrace.Trace(ctx, q.Logger(), "setup", qemuTracingTags, map[string]string{"sandbox_id": q.id})
span, _ := katatrace.Trace(ctx, q.Logger(), "setup", q.tracingTags())
defer span.End()
err := hypervisorConfig.valid()
@@ -465,7 +468,7 @@ func (q *qemu) createSandbox(ctx context.Context, id string, networkNS NetworkNa
// Save the tracing context
q.ctx = ctx
span, ctx := katatrace.Trace(ctx, q.Logger(), "createSandbox", qemuTracingTags, map[string]string{"sandbox_id": q.id})
span, ctx := katatrace.Trace(ctx, q.Logger(), "createSandbox", q.tracingTags())
defer span.End()
if err := q.setup(ctx, id, hypervisorConfig); err != nil {
@@ -750,7 +753,7 @@ func (q *qemu) setupVirtioMem(ctx context.Context) error {
// startSandbox will start the Sandbox's VM.
func (q *qemu) startSandbox(ctx context.Context, timeout int) error {
span, ctx := katatrace.Trace(ctx, q.Logger(), "startSandbox", qemuTracingTags, map[string]string{"sandbox_id": q.id})
span, ctx := katatrace.Trace(ctx, q.Logger(), "startSandbox", q.tracingTags())
defer span.End()
if q.config.Debug {
@@ -868,7 +871,7 @@ func (q *qemu) bootFromTemplate() error {
// waitSandbox will wait for the Sandbox's VM to be up and running.
func (q *qemu) waitSandbox(ctx context.Context, timeout int) error {
span, _ := katatrace.Trace(ctx, q.Logger(), "waitSandbox", qemuTracingTags, map[string]string{"sandbox_id": q.id})
span, _ := katatrace.Trace(ctx, q.Logger(), "waitSandbox", q.tracingTags())
defer span.End()
if timeout < 0 {
@@ -919,7 +922,7 @@ func (q *qemu) waitSandbox(ctx context.Context, timeout int) error {
// stopSandbox will stop the Sandbox's VM.
func (q *qemu) stopSandbox(ctx context.Context, waitOnly bool) error {
span, _ := katatrace.Trace(ctx, q.Logger(), "stopSandbox", qemuTracingTags, map[string]string{"sandbox_id": q.id})
span, _ := katatrace.Trace(ctx, q.Logger(), "stopSandbox", q.tracingTags())
defer span.End()
q.Logger().Info("Stopping Sandbox")
@@ -1011,7 +1014,7 @@ func (q *qemu) cleanupVM() error {
}
func (q *qemu) togglePauseSandbox(ctx context.Context, pause bool) error {
span, _ := katatrace.Trace(ctx, q.Logger(), "togglePauseSandbox", qemuTracingTags, map[string]string{"sandbox_id": q.id})
span, _ := katatrace.Trace(ctx, q.Logger(), "togglePauseSandbox", q.tracingTags())
defer span.End()
if err := q.qmpSetup(); err != nil {
@@ -1616,9 +1619,9 @@ func (q *qemu) hotplugDevice(ctx context.Context, devInfo interface{}, devType d
}
func (q *qemu) hotplugAddDevice(ctx context.Context, devInfo interface{}, devType deviceType) (interface{}, error) {
span, ctx := katatrace.Trace(ctx, q.Logger(), "hotplugAddDevice", qemuTracingTags, map[string]string{"sandbox_id": q.id})
katatrace.AddTag(span, "device", devInfo)
span, ctx := katatrace.Trace(ctx, q.Logger(), "hotplugAddDevice", q.tracingTags())
defer span.End()
katatrace.AddTag(span, "device", devInfo)
data, err := q.hotplugDevice(ctx, devInfo, devType, addDevice)
if err != nil {
@@ -1629,9 +1632,9 @@ func (q *qemu) hotplugAddDevice(ctx context.Context, devInfo interface{}, devTyp
}
func (q *qemu) hotplugRemoveDevice(ctx context.Context, devInfo interface{}, devType deviceType) (interface{}, error) {
span, ctx := katatrace.Trace(ctx, q.Logger(), "hotplugRemoveDevice", qemuTracingTags, map[string]string{"sandbox_id": q.id})
katatrace.AddTag(span, "device", devInfo)
span, ctx := katatrace.Trace(ctx, q.Logger(), "hotplugRemoveDevice", q.tracingTags())
defer span.End()
katatrace.AddTag(span, "device", devInfo)
data, err := q.hotplugDevice(ctx, devInfo, devType, removeDevice)
if err != nil {
@@ -1842,14 +1845,14 @@ func (q *qemu) hotplugAddMemory(memDev *memoryDevice) (int, error) {
}
func (q *qemu) pauseSandbox(ctx context.Context) error {
span, ctx := katatrace.Trace(ctx, q.Logger(), "pauseSandbox", qemuTracingTags, map[string]string{"sandbox_id": q.id})
span, ctx := katatrace.Trace(ctx, q.Logger(), "pauseSandbox", q.tracingTags())
defer span.End()
return q.togglePauseSandbox(ctx, true)
}
func (q *qemu) resumeSandbox(ctx context.Context) error {
span, ctx := katatrace.Trace(ctx, q.Logger(), "resumeSandbox", qemuTracingTags, map[string]string{"sandbox_id": q.id})
span, ctx := katatrace.Trace(ctx, q.Logger(), "resumeSandbox", q.tracingTags())
defer span.End()
return q.togglePauseSandbox(ctx, false)
@@ -1858,9 +1861,9 @@ func (q *qemu) resumeSandbox(ctx context.Context) error {
// addDevice will add extra devices to Qemu command line.
func (q *qemu) addDevice(ctx context.Context, devInfo interface{}, devType deviceType) error {
var err error
span, _ := katatrace.Trace(ctx, q.Logger(), "addDevice", qemuTracingTags, map[string]string{"sandbox_id": q.id})
katatrace.AddTag(span, "device", devInfo)
span, _ := katatrace.Trace(ctx, q.Logger(), "addDevice", q.tracingTags())
defer span.End()
katatrace.AddTag(span, "device", devInfo)
switch v := devInfo.(type) {
case types.Volume:
@@ -1917,7 +1920,7 @@ func (q *qemu) addDevice(ctx context.Context, devInfo interface{}, devType devic
// getSandboxConsole builds the path of the console where we can read
// logs coming from the sandbox.
func (q *qemu) getSandboxConsole(ctx context.Context, id string) (string, string, error) {
span, _ := katatrace.Trace(ctx, q.Logger(), "getSandboxConsole", qemuTracingTags, map[string]string{"sandbox_id": q.id})
span, _ := katatrace.Trace(ctx, q.Logger(), "getSandboxConsole", q.tracingTags())
defer span.End()
consoleURL, err := utils.BuildSocketPath(q.store.RunVMStoragePath(), id, consoleSocket)
@@ -1982,7 +1985,7 @@ func (q *qemu) waitMigration() error {
}
func (q *qemu) disconnect(ctx context.Context) {
span, _ := katatrace.Trace(ctx, q.Logger(), "disconnect", qemuTracingTags, map[string]string{"sandbox_id": q.id})
span, _ := katatrace.Trace(ctx, q.Logger(), "disconnect", q.tracingTags())
defer span.End()
q.qmpShutdown()
@@ -2186,7 +2189,7 @@ func genericAppendPCIeRootPort(devices []govmmQemu.Device, number uint32, machin
}
func (q *qemu) getThreadIDs(ctx context.Context) (vcpuThreadIDs, error) {
span, _ := katatrace.Trace(ctx, q.Logger(), "getThreadIDs", qemuTracingTags, map[string]string{"sandbox_id": q.id})
span, _ := katatrace.Trace(ctx, q.Logger(), "getThreadIDs", q.tracingTags())
defer span.End()
tid := vcpuThreadIDs{}
@@ -2251,7 +2254,7 @@ func (q *qemu) resizeVCPUs(ctx context.Context, reqVCPUs uint32) (currentVCPUs u
}
func (q *qemu) cleanup(ctx context.Context) error {
span, _ := katatrace.Trace(ctx, q.Logger(), "cleanup", qemuTracingTags, map[string]string{"sandbox_id": q.id})
span, _ := katatrace.Trace(ctx, q.Logger(), "cleanup", q.tracingTags())
defer span.End()
for _, fd := range q.fds {
@@ -2277,7 +2280,8 @@ func (q *qemu) getPids() []int {
return []int{0}
}
pids := []int{pid}
var pids []int
pids = append(pids, pid)
if q.state.VirtiofsdPid != 0 {
pids = append(pids, q.state.VirtiofsdPid)
}


@@ -49,11 +49,14 @@ import (
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/utils"
)
// sandboxTracingTags defines tags for the trace span
var sandboxTracingTags = map[string]string{
"source": "runtime",
"package": "virtcontainers",
"subsystem": "sandbox",
// tracingTags defines tags for the trace span
func (s *Sandbox) tracingTags() map[string]string {
return map[string]string{
"source": "runtime",
"package": "virtcontainers",
"subsystem": "sandbox",
"sandbox_id": s.id,
}
}
const (
@@ -65,6 +68,7 @@ const (
DirMode = os.FileMode(0750) | os.ModeDir
mkswapPath = "/sbin/mkswap"
rwm = "rwm"
)
var (
@@ -396,7 +400,9 @@ func (s *Sandbox) IOStream(containerID, processID string) (io.WriteCloser, io.Re
}
func createAssets(ctx context.Context, sandboxConfig *SandboxConfig) error {
span, _ := katatrace.Trace(ctx, nil, "createAssets", sandboxTracingTags, map[string]string{"sandbox_id": sandboxConfig.ID})
span, _ := katatrace.Trace(ctx, nil, "createAssets", nil)
katatrace.AddTag(span, "sandbox_id", sandboxConfig.ID)
katatrace.AddTag(span, "subsystem", "sandbox")
defer span.End()
for _, name := range types.AssetTypes() {
@@ -446,7 +452,9 @@ func (s *Sandbox) getAndStoreGuestDetails(ctx context.Context) error {
// to physically create that sandbox i.e. starts a VM for that sandbox to eventually
// be started.
func createSandbox(ctx context.Context, sandboxConfig SandboxConfig, factory Factory) (*Sandbox, error) {
span, ctx := katatrace.Trace(ctx, nil, "createSandbox", sandboxTracingTags, map[string]string{"sandbox_id": sandboxConfig.ID})
span, ctx := katatrace.Trace(ctx, nil, "createSandbox", nil)
katatrace.AddTag(span, "sandbox_id", sandboxConfig.ID)
katatrace.AddTag(span, "subsystem", "sandbox")
defer span.End()
if err := createAssets(ctx, &sandboxConfig); err != nil {
@@ -484,7 +492,9 @@ func createSandbox(ctx context.Context, sandboxConfig SandboxConfig, factory Fac
}
func newSandbox(ctx context.Context, sandboxConfig SandboxConfig, factory Factory) (sb *Sandbox, retErr error) {
span, ctx := katatrace.Trace(ctx, nil, "newSandbox", sandboxTracingTags, map[string]string{"sandbox_id": sandboxConfig.ID})
span, ctx := katatrace.Trace(ctx, nil, "newSandbox", nil)
katatrace.AddTag(span, "sandbox_id", sandboxConfig.ID)
katatrace.AddTag(span, "subsystem", "sandbox")
defer span.End()
if !sandboxConfig.valid() {
@@ -580,6 +590,33 @@ func (s *Sandbox) createCgroupManager() error {
if spec.Linux.Resources != nil {
resources.Devices = spec.Linux.Resources.Devices
intptr := func(i int64) *int64 { return &i }
// Determine if device /dev/null and /dev/urandom exist, and add if they don't
nullDeviceExist := false
urandomDeviceExist := false
for _, device := range resources.Devices {
if device.Type == "c" && device.Major != nil && *device.Major == 1 && device.Minor != nil && *device.Minor == 3 {
nullDeviceExist = true
}
if device.Type == "c" && device.Major != nil && *device.Major == 1 && device.Minor != nil && *device.Minor == 9 {
urandomDeviceExist = true
}
}
if !nullDeviceExist {
// "/dev/null"
resources.Devices = append(resources.Devices, []specs.LinuxDeviceCgroup{
{Type: "c", Major: intptr(1), Minor: intptr(3), Access: rwm, Allow: true},
}...)
}
if !urandomDeviceExist {
// "/dev/urandom"
resources.Devices = append(resources.Devices, []specs.LinuxDeviceCgroup{
{Type: "c", Major: intptr(1), Minor: intptr(9), Access: rwm, Allow: true},
}...)
}
if spec.Linux.Resources.CPU != nil {
resources.CPU = &specs.LinuxCPU{
Cpus: spec.Linux.Resources.CPU.Cpus,
@@ -621,7 +658,7 @@ func (s *Sandbox) createCgroupManager() error {
// storeSandbox stores a sandbox config.
func (s *Sandbox) storeSandbox(ctx context.Context) error {
span, _ := katatrace.Trace(ctx, s.Logger(), "storeSandbox", sandboxTracingTags, map[string]string{"sandbox_id": s.id})
span, _ := katatrace.Trace(ctx, s.Logger(), "storeSandbox", s.tracingTags())
defer span.End()
// flush data to storage
@@ -715,7 +752,7 @@ func (s *Sandbox) Delete(ctx context.Context) error {
}
func (s *Sandbox) startNetworkMonitor(ctx context.Context) error {
span, ctx := katatrace.Trace(ctx, s.Logger(), "startNetworkMonitor", sandboxTracingTags, map[string]string{"sandbox_id": s.id})
span, ctx := katatrace.Trace(ctx, s.Logger(), "startNetworkMonitor", s.tracingTags())
defer span.End()
binPath, err := os.Executable()
@@ -754,7 +791,7 @@ func (s *Sandbox) createNetwork(ctx context.Context) error {
return nil
}
span, ctx := katatrace.Trace(ctx, s.Logger(), "createNetwork", sandboxTracingTags, map[string]string{"sandbox_id": s.id})
span, ctx := katatrace.Trace(ctx, s.Logger(), "createNetwork", s.tracingTags())
defer span.End()
s.networkNS = NetworkNamespace{
@@ -791,7 +828,7 @@ func (s *Sandbox) postCreatedNetwork(ctx context.Context) error {
}
func (s *Sandbox) removeNetwork(ctx context.Context) error {
span, ctx := katatrace.Trace(ctx, s.Logger(), "removeNetwork", sandboxTracingTags, map[string]string{"sandbox_id": s.id})
span, ctx := katatrace.Trace(ctx, s.Logger(), "removeNetwork", s.tracingTags())
defer span.End()
if s.config.NetworkConfig.NetmonConfig.Enable {
@@ -1107,7 +1144,7 @@ func (s *Sandbox) cleanSwap(ctx context.Context) {
// startVM starts the VM.
func (s *Sandbox) startVM(ctx context.Context) (err error) {
span, ctx := katatrace.Trace(ctx, s.Logger(), "startVM", sandboxTracingTags, map[string]string{"sandbox_id": s.id})
span, ctx := katatrace.Trace(ctx, s.Logger(), "startVM", s.tracingTags())
defer span.End()
s.Logger().Info("Starting VM")
@@ -1196,7 +1233,7 @@ func (s *Sandbox) startVM(ctx context.Context) (err error) {
// stopVM: stop the sandbox's VM
func (s *Sandbox) stopVM(ctx context.Context) error {
span, ctx := katatrace.Trace(ctx, s.Logger(), "stopVM", sandboxTracingTags, map[string]string{"sandbox_id": s.id})
span, ctx := katatrace.Trace(ctx, s.Logger(), "stopVM", s.tracingTags())
defer span.End()
s.Logger().Info("Stopping sandbox in the VM")
@@ -1549,7 +1586,7 @@ func (s *Sandbox) ResumeContainer(ctx context.Context, containerID string) error
// createContainers registers all containers, create the
// containers in the guest and starts one shim per container.
func (s *Sandbox) createContainers(ctx context.Context) error {
span, ctx := katatrace.Trace(ctx, s.Logger(), "createContainers", sandboxTracingTags, map[string]string{"sandbox_id": s.id})
span, ctx := katatrace.Trace(ctx, s.Logger(), "createContainers", s.tracingTags())
defer span.End()
for i := range s.config.Containers {
@@ -1621,7 +1658,7 @@ func (s *Sandbox) Start(ctx context.Context) error {
// will be destroyed.
// When force is true, ignore guest related stop failures.
func (s *Sandbox) Stop(ctx context.Context, force bool) error {
span, ctx := katatrace.Trace(ctx, s.Logger(), "Stop", sandboxTracingTags, map[string]string{"sandbox_id": s.id})
span, ctx := katatrace.Trace(ctx, s.Logger(), "Stop", s.tracingTags())
defer span.End()
if s.state.State == types.StateStopped {
@@ -1723,7 +1760,7 @@ func (s *Sandbox) unsetSandboxBlockIndex(index int) error {
// HotplugAddDevice is used for add a device to sandbox
// Sandbox implement DeviceReceiver interface from device/api/interface.go
func (s *Sandbox) HotplugAddDevice(ctx context.Context, device api.Device, devType config.DeviceType) error {
span, ctx := katatrace.Trace(ctx, s.Logger(), "HotplugAddDevice", sandboxTracingTags, map[string]string{"sandbox_id": s.id})
span, ctx := katatrace.Trace(ctx, s.Logger(), "HotplugAddDevice", s.tracingTags())
defer span.End()
if s.config.SandboxCgroupOnly {


@@ -384,51 +384,6 @@ func TestToBytes(t *testing.T) {
assert.Equal(expected, result)
}
func TestWaitLocalProcess(t *testing.T) {
cfg := []struct {
command string
args []string
timeout uint
signal syscall.Signal
}{
{
"true",
[]string{},
waitLocalProcessTimeoutSecs,
syscall.SIGKILL,
},
{
"sleep",
[]string{"999"},
waitLocalProcessTimeoutSecs,
syscall.SIGKILL,
},
{
"sleep",
[]string{"999"},
1,
syscall.SIGKILL,
},
}
logger := logrus.WithField("foo", "bar")
for _, opts := range cfg {
assert := assert.New(t)
cmd := exec.Command(opts.command, opts.args...)
err := cmd.Start()
assert.NoError(err)
pid := cmd.Process.Pid
err = WaitLocalProcess(pid, opts.timeout, opts.signal, logger)
assert.NoError(err)
_ = cmd.Wait()
}
}
func TestWaitLocalProcessInvalidSignal(t *testing.T) {
assert := assert.New(t)
@@ -469,3 +424,58 @@ func TestWaitLocalProcessInvalidPid(t *testing.T) {
assert.Error(err, msg)
}
}
func TestWaitLocalProcessBrief(t *testing.T) {
assert := assert.New(t)
cmd := exec.Command("true")
err := cmd.Start()
assert.NoError(err)
pid := cmd.Process.Pid
logger := logrus.WithField("foo", "bar")
err = WaitLocalProcess(pid, waitLocalProcessTimeoutSecs, syscall.SIGKILL, logger)
assert.NoError(err)
_ = cmd.Wait()
}
func TestWaitLocalProcessLongRunningPreKill(t *testing.T) {
assert := assert.New(t)
cmd := exec.Command("sleep", "999")
err := cmd.Start()
assert.NoError(err)
pid := cmd.Process.Pid
logger := logrus.WithField("foo", "bar")
err = WaitLocalProcess(pid, waitLocalProcessTimeoutSecs, syscall.SIGKILL, logger)
assert.NoError(err)
_ = cmd.Wait()
}
func TestWaitLocalProcessLongRunning(t *testing.T) {
assert := assert.New(t)
cmd := exec.Command("sleep", "999")
err := cmd.Start()
assert.NoError(err)
pid := cmd.Process.Pid
logger := logrus.WithField("foo", "bar")
// Don't wait for long as the process isn't actually trying to stop,
// so it will have to timeout and then be killed.
const timeoutSecs = 1
err = WaitLocalProcess(pid, timeoutSecs, syscall.Signal(0), logger)
assert.NoError(err)
_ = cmd.Wait()
}


@@ -9,7 +9,7 @@ ROOTFS_BUILDER := $(MK_DIR)/rootfs-builder/rootfs.sh
INITRD_BUILDER := $(MK_DIR)/initrd-builder/initrd_builder.sh
IMAGE_BUILDER := $(MK_DIR)/image-builder/image_builder.sh
DISTRO ?= centos
DISTRO := centos
BUILD_METHOD := distro
BUILD_METHOD_LIST := distro dracut
AGENT_INIT ?= no


@@ -326,11 +326,7 @@ build_rootfs_distro()
trap error_handler ERR
fi
if [ -d "${ROOTFS_DIR}" ] && [ "${ROOTFS_DIR}" != "/" ]; then
rm -rf "${ROOTFS_DIR}"/*
else
mkdir -p ${ROOTFS_DIR}
fi
mkdir -p ${ROOTFS_DIR}
# need to detect rustc's version too?
detect_rust_version ||
@@ -373,8 +369,6 @@ build_rootfs_distro()
docker_run_args=""
docker_run_args+=" --rm"
# apt sync scans all possible fds in order to close them, incredibly slow on VMs
docker_run_args+=" --ulimit nofile=262144:262144"
docker_run_args+=" --runtime ${DOCKER_RUNTIME}"
if [ -z "${AGENT_SOURCE_BIN}" ] ; then


@@ -11,19 +11,40 @@ a node only if it uses either containerd or CRI-O CRI-shims.
### Install Kata on a running Kubernetes cluster
#### Installing the latest image
The latest image refers to pre-release and release-candidate content. For stable releases, please use the "stable" instructions.
```sh
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/packaging/kata-deploy
$ kubectl apply -f kata-rbac/base/kata-rbac.yaml
$ kubectl apply -f kata-deploy/base/kata-deploy.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-rbac/base/kata-rbac.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-deploy/base/kata-deploy.yaml
```
or on a [k3s](https://k3s.io/) cluster:
#### Installing the stable image
The stable image refers to the latest stable release's content.
Note that if you use a tagged version of the repo, the stable image matches that version.
For instance, if you use the 2.2.1 tagged version of the kata-deploy.yaml file, then version 2.2.1 of the Kata runtime will be deployed.
```sh
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-rbac/base/kata-rbac.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-deploy/base/kata-deploy-stable.yaml
```
#### Installing on a [k3s](https://k3s.io/) cluster
```sh
$ GO111MODULE=auto go get github.com/kata-containers/kata-containers
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/packaging/kata-deploy
$ kubectl apply -k kata-deploy/overlays/k3s
```
#### Ensure kata-deploy is ready
```sh
$ kubectl -n kube-system wait --timeout=10m --for=condition=Ready -l name=kata-deploy pod
```
### Run a sample workload
Workloads specify the runtime they'd like to utilize by setting the appropriate `runtimeClass` object within
@@ -32,8 +53,7 @@ which will ensure the workload is only scheduled on a node that has Kata Contain
`runtimeClass` is a built-in type in Kubernetes. To apply each Kata Containers `runtimeClass`:
```sh
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/packaging/kata-deploy/runtimeclasses
$ kubectl apply -f kata-runtimeClasses.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/runtimeclasses/kata-runtimeClasses.yaml
```
The following YAML snippet shows how to specify a workload should use Kata with Cloud Hypervisor:
@@ -66,42 +86,74 @@ spec:
To run an example with `kata-clh`:
```sh
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/packaging/kata-deploy/examples
$ kubectl apply -f test-deploy-kata-clh.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-clh.yaml
```
To run an example with `kata-fc`:
```sh
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/packaging/kata-deploy/examples
$ kubectl apply -f test-deploy-kata-fc.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-fc.yaml
```
To run an example with `kata-qemu`:
```sh
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/packaging/kata-deploy/examples
$ kubectl apply -f test-deploy-kata-qemu.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-qemu.yaml
```
The following removes the test pods:
```sh
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/packaging/kata-deploy/examples
$ kubectl delete -f test-deploy-kata-clh.yaml
$ kubectl delete -f test-deploy-kata-fc.yaml
$ kubectl delete -f test-deploy-kata-qemu.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-clh.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-fc.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-qemu.yaml
```
### Remove Kata from the Kubernetes cluster
#### Removing the latest image
```sh
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/packaging/kata-deploy
$ kubectl delete -f kata-deploy/base/kata-deploy.yaml
$ kubectl apply -f kata-cleanup/base/kata-cleanup.yaml
$ kubectl delete -f kata-cleanup/base/kata-cleanup.yaml
$ kubectl delete -f kata-rbac/base/kata-rbac.yaml
$ kubectl delete -f runtimeclasses/kata-runtimeClasses.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-deploy/base/kata-deploy.yaml
$ kubectl -n kube-system wait --timeout=10m --for=delete -l name=kata-deploy pod
```
After ensuring kata-deploy has been deleted, clean up the cluster:
```sh
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup.yaml
```
The cleanup daemon-set runs a single time and removes the node label; because it exits once done, its completion is difficult to check in an automated fashion.
This process should take, at most, 5 minutes.
After that, delete the cleanup daemon-set, the added RBAC, and the runtime classes:
```sh
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-rbac/base/kata-rbac.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/runtimeclasses/kata-runtimeClasses.yaml
```
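The node-label removal described above can be verified with a small polling loop. This is a hedged sketch: the `get_labeled_nodes` and `wait_for_label_removal` helpers and the 30-try budget are illustrative, not part of kata-deploy.

```shell
# Poll until no node carries the kata-deploy node label.
get_labeled_nodes() {
    kubectl get nodes -l katacontainers.io/kata-runtime -o name 2>/dev/null
}

wait_for_label_removal() {
    local tries=0
    # 30 tries x 10s matches the 5-minute upper bound mentioned above
    while [ -n "$(get_labeled_nodes)" ] && [ "$tries" -lt 30 ]; do
        sleep 10
        tries=$((tries + 1))
    done
    # Succeed only if the label is gone from every node
    [ -z "$(get_labeled_nodes)" ]
}
```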
#### Removing the stable image
```sh
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-deploy/base/kata-deploy-stable.yaml
$ kubectl -n kube-system wait --timeout=10m --for=delete -l name=kata-deploy pod
```
After ensuring kata-deploy has been deleted, clean up the cluster:
```sh
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup-stable.yaml
```
The cleanup daemon-set runs a single time and removes the node label; because it exits once done, its completion is difficult to check in an automated fashion.
This process should take, at most, 5 minutes.
After that, delete the cleanup daemon-set, the added RBAC, and the runtime classes:
```sh
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup-stable.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-rbac/base/kata-rbac.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/runtimeclasses/kata-runtimeClasses.yaml
```
## `kata-deploy` details

View File

@@ -18,7 +18,7 @@ spec:
katacontainers.io/kata-runtime: cleanup
containers:
- name: kube-kata-cleanup
image: quay.io/kata-containers/kata-deploy:2.3.0-alpha0
image: quay.io/kata-containers/kata-deploy:2.2.1
imagePullPolicy: Always
command: [ "bash", "-c", "/opt/kata-artifacts/scripts/kata-deploy.sh reset" ]
env:

View File

@@ -16,7 +16,7 @@ spec:
serviceAccountName: kata-label-node
containers:
- name: kube-kata
image: quay.io/kata-containers/kata-deploy:2.3.0-alpha0
image: quay.io/kata-containers/kata-deploy:2.2.1
imagePullPolicy: Always
lifecycle:
preStop:

View File

@@ -1,19 +1,6 @@
---
kind: RuntimeClass
apiVersion: node.k8s.io/v1beta1
metadata:
name: kata-qemu-virtiofs
handler: kata-qemu-virtiofs
overhead:
podFixed:
memory: "160Mi"
cpu: "250m"
scheduling:
nodeSelector:
katacontainers.io/kata-runtime: "true"
---
kind: RuntimeClass
apiVersion: node.k8s.io/v1beta1
metadata:
name: kata-qemu
handler: kata-qemu

View File

@@ -1,159 +0,0 @@
# Default configuration for aarch64-softmmu
# We support all the 32 bit boards so need all their config
include arm-softmmu.mak
CONFIG_AUX=y
CONFIG_DDC=y
CONFIG_DPCD=y
CONFIG_XLNX_ZYNQMP=y
CONFIG_XLNX_ZYNQMP_ARM=y
CONFIG_XLNX_VERSAL=y
CONFIG_SBSA_REF=y
CONFIG_ARM_SMMUV3=y
CONFIG_MEM_DEVICE=y
CONFIG_DIMM=y
# Below is borrowed from i386-softmmu.mak of Kata
# VM port
CONFIG_VMMOUSE=n
CONFIG_VMPORT=n
# VMWARE
CONFIG_VMW_PVSCSI_SCSI_PCI=n
CONFIG_VMXNET3_PCI=n
# Audio and sound cards
CONFIG_AC97=n
CONFIG_ADLIB=n
CONFIG_CS4231A=n
CONFIG_ES1370=n
CONFIG_GUS=n
CONFIG_HDA=n
CONFIG_SB16=n
CONFIG_SD=n
# Automotive
CONFIG_CAN_BUS=n
CONFIG_CAN_PCI=n
CONFIG_CAN_SJA1000=n
# Network
CONFIG_E1000_PCI=n
CONFIG_E1000E_PCI_EXPRESS=n
CONFIG_EEPRO100_PCI=n
CONFIG_NE2000_COMMON=n
CONFIG_NE2000_ISA=n
CONFIG_NE2000_PCI=n
CONFIG_PCNET_COMMON=n
CONFIG_PCNET_PCI=n
CONFIG_ROCKER=n
CONFIG_RTL8139_PCI=n
# USB
CONFIG_USB=n
CONFIG_USB_AUDIO=n
CONFIG_USB_BLUETOOTH=n
CONFIG_USB_EHCI=n
CONFIG_USB_EHCI_PCI=n
CONFIG_USB_NETWORK=n
CONFIG_USB_OHCI=n
CONFIG_USB_OHCI_PCI=n
CONFIG_USB_SERIAL=n
CONFIG_USB_SMARTCARD=n
CONFIG_USB_STORAGE_BOT=n
CONFIG_USB_STORAGE_MTP=n
CONFIG_USB_STORAGE_UAS=n
CONFIG_USB_TABLET_WACOM=n
CONFIG_USB_UHCI=n
CONFIG_USB_XHCI=n
CONFIG_USB_XHCI_NEC=n
# ISA
CONFIG_IDE_ISA=n
CONFIG_ISA_DEBUG=n
CONFIG_ISA_IPMI_BT=n
CONFIG_ISA_IPMI_KCS=n
# VGA
CONFIG_ATI_VGA=n
CONFIG_VGA=n
CONFIG_VGA_CIRRUS=n
CONFIG_VGA_ISA=n
CONFIG_VGA_PCI=n
CONFIG_VHOST_USER_VGA=n
CONFIG_VIRTIO_VGA=n
CONFIG_VMWARE_VGA=n
# Displays
CONFIG_BOCHS_DISPLAY=n
CONFIG_DDC=n
CONFIG_QXL=n
# Graphics
CONFIG_OPENGL=n
CONFIG_SPICE=n
CONFIG_X11=n
# test devices
CONFIG_HYPERV_TESTDEV=n
CONFIG_ISA_TESTDEV=n
CONFIG_PCI_TESTDEV=n
# XEN
CONFIG_XEN=n
# PCIe
CONFIG_XIO3130=n
# SCSI
CONFIG_ESP=n
CONFIG_ESP_PCI=n
CONFIG_LSI_SCSI_PCI=n
CONFIG_MEGASAS_SCSI_PCI=n
CONFIG_MPTSAS_SCSI_PCI=n
# i2c
CONFIG_BITBANG_I2C=n
# UART
CONFIG_SERIAL_PCI_MULTI=n
# PCI
CONFIG_EDU=n
CONFIG_I82801B11=n
CONFIG_IOH3420=n
CONFIG_IPACK=n
CONFIG_PXB=n
# SD
CONFIG_SDHCI=n
CONFIG_SDHCI_PCI=n
# watchdog
CONFIG_WDT_IB6300ESB=n
CONFIG_WDT_IB700=n
# Apple
CONFIG_APPLESMC=n
# Timer
CONFIG_HPET=n
# IPMI
CONFIG_IPMI=n
CONFIG_IPMI_EXTERN=n
CONFIG_IPMI_LOCAL=n
# misc
CONFIG_IVSHMEM_DEVICE=n
CONFIG_PVPANIC=n
CONFIG_SEV=n
CONFIG_SGA=n
#vhost
CONFIG_VHOST_USER_INPUT=n
# TPM
CONFIG_TPM_CRB=n
CONFIG_TPM_TIS=n

View File

@@ -1,216 +0,0 @@
# Default configuration for arm-softmmu
CONFIG_VGA=y
CONFIG_NAND=y
CONFIG_ECC=y
CONFIG_SERIAL=y
CONFIG_PTIMER=y
CONFIG_MAX7310=y
CONFIG_WM8750=y
CONFIG_TWL92230=y
CONFIG_TSC2005=y
CONFIG_LM832X=y
CONFIG_TMP105=y
CONFIG_TMP421=y
CONFIG_PCA9552=y
CONFIG_STELLARIS=y
CONFIG_STELLARIS_INPUT=y
CONFIG_STELLARIS_ENET=y
CONFIG_SSD0303=y
CONFIG_SSD0323=y
CONFIG_DDC=y
CONFIG_SII9022=y
CONFIG_ADS7846=y
CONFIG_MAX111X=y
CONFIG_SSI=y
CONFIG_SSI_SD=y
CONFIG_SSI_M25P80=y
CONFIG_LAN9118=y
CONFIG_SMC91C111=y
CONFIG_ALLWINNER_EMAC=y
CONFIG_IMX_FEC=y
CONFIG_FTGMAC100=y
CONFIG_DS1338=y
CONFIG_PFLASH_CFI01=y
CONFIG_PFLASH_CFI02=y
CONFIG_MICRODRIVE=y
CONFIG_USB=y
CONFIG_USB_MUSB=y
CONFIG_USB_EHCI_SYSBUS=y
CONFIG_PLATFORM_BUS=y
CONFIG_VIRTIO_MMIO=y
CONFIG_ARM11MPCORE=y
CONFIG_A9MPCORE=y
CONFIG_A15MPCORE=y
CONFIG_ARM_V7M=y
CONFIG_NETDUINO2=y
CONFIG_ARM_GIC=y
CONFIG_ARM_GIC_KVM=$(CONFIG_KVM)
CONFIG_ARM_TIMER=y
CONFIG_ARM_MPTIMER=y
CONFIG_A9_GTIMER=y
CONFIG_PL011=y
CONFIG_PL022=y
CONFIG_PL031=y
CONFIG_PL041=y
CONFIG_PL050=y
CONFIG_PL061=y
CONFIG_PL080=y
CONFIG_PL110=y
CONFIG_PL181=y
CONFIG_PL190=y
CONFIG_PL310=y
CONFIG_PL330=y
CONFIG_CADENCE=y
CONFIG_XGMAC=y
CONFIG_EXYNOS4=y
CONFIG_PXA2XX=y
CONFIG_I2C=y
CONFIG_BITBANG_I2C=y
CONFIG_FRAMEBUFFER=y
CONFIG_XILINX_SPIPS=y
CONFIG_ZYNQ_DEVCFG=y
CONFIG_ARM11SCU=y
CONFIG_A9SCU=y
CONFIG_DIGIC=y
CONFIG_MARVELL_88W8618=y
CONFIG_OMAP=y
CONFIG_TSC210X=y
CONFIG_BLIZZARD=y
CONFIG_ONENAND=y
CONFIG_TUSB6010=y
CONFIG_IMX=y
CONFIG_MAINSTONE=y
CONFIG_MPS2=y
CONFIG_MUSCA=y
CONFIG_NSERIES=y
CONFIG_RASPI=y
CONFIG_REALVIEW=y
CONFIG_ZAURUS=y
CONFIG_ZYNQ=y
CONFIG_STM32F2XX_TIMER=y
CONFIG_STM32F2XX_USART=y
CONFIG_STM32F2XX_SYSCFG=y
CONFIG_STM32F2XX_ADC=y
CONFIG_STM32F2XX_SPI=y
CONFIG_STM32F205_SOC=y
CONFIG_NRF51_SOC=y
CONFIG_CMSDK_APB_TIMER=y
CONFIG_CMSDK_APB_DUALTIMER=y
CONFIG_CMSDK_APB_UART=y
CONFIG_CMSDK_APB_WATCHDOG=y
CONFIG_MPS2_FPGAIO=y
CONFIG_MPS2_SCC=y
CONFIG_TZ_MPC=y
CONFIG_TZ_MSC=y
CONFIG_TZ_PPC=y
CONFIG_ARMSSE=y
CONFIG_IOTKIT_SECCTL=y
CONFIG_IOTKIT_SYSCTL=y
CONFIG_IOTKIT_SYSINFO=y
CONFIG_ARMSSE_CPUID=y
CONFIG_VERSATILE=y
CONFIG_VERSATILE_PCI=y
CONFIG_VERSATILE_I2C=y
CONFIG_PCI_EXPRESS_GENERIC_BRIDGE=y
CONFIG_VFIO_PLATFORM=y
CONFIG_VFIO_XGMAC=y
CONFIG_VFIO_AMD_XGBE=y
CONFIG_INTEGRATOR=y
CONFIG_INTEGRATOR_DEBUG=y
CONFIG_ALLWINNER_A10_PIT=y
CONFIG_ALLWINNER_A10_PIC=y
CONFIG_ALLWINNER_A10=y
CONFIG_FSL_IMX6=y
CONFIG_FSL_IMX31=y
CONFIG_FSL_IMX25=y
CONFIG_FSL_IMX7=y
CONFIG_FSL_IMX6UL=y
CONFIG_IMX_I2C=y
CONFIG_PCIE_PORT=y
CONFIG_XIO3130=y
CONFIG_IOH3420=y
CONFIG_I82801B11=y
CONFIG_ACPI=y
CONFIG_ARM_VIRT=y
CONFIG_SMBIOS=y
CONFIG_ASPEED_SOC=y
CONFIG_SMBUS_EEPROM=y
CONFIG_GPIO_KEY=y
CONFIG_MSF2=y
CONFIG_FW_CFG_DMA=y
CONFIG_XILINX_AXI=y
CONFIG_PCI_EXPRESS_DESIGNWARE=y
CONFIG_STRONGARM=y
CONFIG_HIGHBANK=y
CONFIG_MUSICPAL=y
CONFIG_MEM_DEVICE=y
CONFIG_DIMM=y
CONFIG_NVDIMM=y
CONFIG_ACPI_NVDIMM=y
CONFIG_PCI=y
# For now, CONFIG_IDE_CORE requires ISA, so we enable it here
CONFIG_ISA_BUS=y
CONFIG_VIRTIO_PCI=y
include virtio.mak
CONFIG_USB_UHCI=y
CONFIG_USB_OHCI=y
CONFIG_USB_EHCI=y
CONFIG_USB_XHCI=y
CONFIG_USB_XHCI_NEC=y
CONFIG_NE2000_PCI=n
CONFIG_EEPRO100_PCI=n
CONFIG_PCNET_PCI=n
CONFIG_PCNET_COMMON=n
CONFIG_AC97=n
CONFIG_HDA=y
CONFIG_ES1370=n
CONFIG_SCSI=y
CONFIG_LSI_SCSI_PCI=y
CONFIG_VMW_PVSCSI_SCSI_PCI=n
CONFIG_MEGASAS_SCSI_PCI=n
CONFIG_MPTSAS_SCSI_PCI=n
CONFIG_RTL8139_PCI=n
CONFIG_E1000_PCI=n
CONFIG_E1000E_PCI_EXPRESS=n
CONFIG_IDE_CORE=y
CONFIG_IDE_QDEV=y
CONFIG_IDE_PCI=y
CONFIG_AHCI=y
CONFIG_ESP=n
CONFIG_ESP_PCI=n
CONFIG_SERIAL_ISA=y
CONFIG_SERIAL_PCI=y
CONFIG_CAN_BUS=y
CONFIG_CAN_SJA1000=y
CONFIG_CAN_PCI=y
CONFIG_IPACK=n
CONFIG_WDT_IB6300ESB=n
CONFIG_PCI_TESTDEV=n
CONFIG_NVME_PCI=y
CONFIG_SD=y
CONFIG_SDHCI=n
CONFIG_EDU=n
CONFIG_VGA_PCI=y
CONFIG_BOCHS_DISPLAY=n
CONFIG_IVSHMEM_DEVICE=n
CONFIG_ROCKER=n
CONFIG_VFIO=$(CONFIG_LINUX)
CONFIG_VFIO_PCI=y
CONFIG_PCI_GENERIC=y

View File

@@ -1,216 +0,0 @@
# Default configuration for arm-softmmu
CONFIG_VGA=y
CONFIG_NAND=y
CONFIG_ECC=y
CONFIG_SERIAL=y
CONFIG_PTIMER=y
CONFIG_MAX7310=y
CONFIG_WM8750=y
CONFIG_TWL92230=y
CONFIG_TSC2005=y
CONFIG_LM832X=y
CONFIG_TMP105=y
CONFIG_TMP421=y
CONFIG_PCA9552=y
CONFIG_STELLARIS=n
CONFIG_STELLARIS_INPUT=y
CONFIG_STELLARIS_ENET=y
CONFIG_SSD0303=y
CONFIG_SSD0323=y
CONFIG_DDC=y
CONFIG_SII9022=y
CONFIG_ADS7846=y
CONFIG_MAX111X=y
CONFIG_SSI=y
CONFIG_SSI_SD=y
CONFIG_SSI_M25P80=y
CONFIG_LAN9118=y
CONFIG_SMC91C111=y
CONFIG_ALLWINNER_EMAC=y
CONFIG_IMX_FEC=y
CONFIG_FTGMAC100=y
CONFIG_DS1338=y
CONFIG_PFLASH_CFI01=y
CONFIG_PFLASH_CFI02=y
CONFIG_MICRODRIVE=y
CONFIG_USB=y
CONFIG_USB_MUSB=y
CONFIG_USB_EHCI_SYSBUS=y
CONFIG_PLATFORM_BUS=y
CONFIG_VIRTIO_MMIO=y
CONFIG_ARM11MPCORE=y
CONFIG_A9MPCORE=y
CONFIG_A15MPCORE=y
CONFIG_ARM_V7M=y
CONFIG_NETDUINO2=n
CONFIG_ARM_GIC=y
CONFIG_ARM_GIC_KVM=$(CONFIG_KVM)
CONFIG_ARM_TIMER=y
CONFIG_ARM_MPTIMER=y
CONFIG_A9_GTIMER=y
CONFIG_PL011=y
CONFIG_PL022=n
CONFIG_PL031=y
CONFIG_PL041=n
CONFIG_PL050=n
CONFIG_PL061=y
CONFIG_PL080=y
CONFIG_PL110=n
CONFIG_PL181=n
CONFIG_PL190=y
CONFIG_PL310=y
CONFIG_PL330=y
CONFIG_CADENCE=y
CONFIG_XGMAC=y
CONFIG_EXYNOS4=n
CONFIG_PXA2XX=n
CONFIG_I2C=y
CONFIG_BITBANG_I2C=y
CONFIG_FRAMEBUFFER=y
CONFIG_XILINX_SPIPS=y
CONFIG_ZYNQ_DEVCFG=y
CONFIG_ARM11SCU=y
CONFIG_A9SCU=y
CONFIG_DIGIC=n
CONFIG_MARVELL_88W8618=y
CONFIG_OMAP=y
CONFIG_TSC210X=y
CONFIG_BLIZZARD=y
CONFIG_ONENAND=y
CONFIG_TUSB6010=y
CONFIG_IMX=y
CONFIG_MAINSTONE=n
CONFIG_MPS2=n
CONFIG_MUSCA=n
CONFIG_NSERIES=n
CONFIG_RASPI=n
CONFIG_REALVIEW=n
CONFIG_ZAURUS=y
CONFIG_ZYNQ=n
CONFIG_STM32F2XX_TIMER=y
CONFIG_STM32F2XX_USART=y
CONFIG_STM32F2XX_SYSCFG=y
CONFIG_STM32F2XX_ADC=y
CONFIG_STM32F2XX_SPI=y
CONFIG_STM32F205_SOC=n
CONFIG_NRF51_SOC=n
CONFIG_CMSDK_APB_TIMER=y
CONFIG_CMSDK_APB_DUALTIMER=y
CONFIG_CMSDK_APB_UART=y
CONFIG_CMSDK_APB_WATCHDOG=y
CONFIG_MPS2_FPGAIO=y
CONFIG_MPS2_SCC=y
CONFIG_TZ_MPC=y
CONFIG_TZ_MSC=y
CONFIG_TZ_PPC=y
CONFIG_ARMSSE=y
CONFIG_IOTKIT_SECCTL=y
CONFIG_IOTKIT_SYSCTL=y
CONFIG_IOTKIT_SYSINFO=y
CONFIG_ARMSSE_CPUID=y
CONFIG_VERSATILE=n
CONFIG_VERSATILE_PCI=y
CONFIG_VERSATILE_I2C=y
CONFIG_PCI_EXPRESS_GENERIC_BRIDGE=y
CONFIG_VFIO=$(CONFIG_LINUX)
CONFIG_VFIO_PLATFORM=y
CONFIG_VFIO_XGMAC=y
CONFIG_VFIO_AMD_XGBE=y
CONFIG_INTEGRATOR=n
CONFIG_INTEGRATOR_DEBUG=y
CONFIG_ALLWINNER_A10_PIT=n
CONFIG_ALLWINNER_A10_PIC=n
CONFIG_ALLWINNER_A10=n
CONFIG_FSL_IMX6=n
CONFIG_FSL_IMX31=n
CONFIG_FSL_IMX25=n
CONFIG_FSL_IMX7=n
CONFIG_FSL_IMX6UL=n
CONFIG_IMX_I2C=y
CONFIG_PCIE_PORT=y
CONFIG_XIO3130=y
CONFIG_IOH3420=y
CONFIG_I82801B11=y
CONFIG_ACPI=y
CONFIG_ARM_VIRT=y
CONFIG_SMBIOS=y
CONFIG_ASPEED_SOC=n
CONFIG_SMBUS_EEPROM=y
CONFIG_GPIO_KEY=y
CONFIG_MSF2=n
CONFIG_FW_CFG_DMA=y
CONFIG_XILINX_AXI=y
CONFIG_PCI_EXPRESS_DESIGNWARE=y
CONFIG_STRONGARM=n
CONFIG_HIGHBANK=n
CONFIG_MUSICPAL=n
CONFIG_MEM_DEVICE=y
CONFIG_DIMM=y
CONFIG_NVDIMM=y
CONFIG_ACPI_NVDIMM=y
CONFIG_PCI=y
# For now, CONFIG_IDE_CORE requires ISA, so we enable it here
CONFIG_ISA_BUS=y
CONFIG_VIRTIO_PCI=y
include virtio.mak
CONFIG_USB_UHCI=y
CONFIG_USB_OHCI=y
CONFIG_USB_EHCI=y
CONFIG_USB_XHCI=y
CONFIG_USB_XHCI_NEC=y
CONFIG_NE2000_PCI=n
CONFIG_EEPRO100_PCI=n
CONFIG_PCNET_PCI=n
CONFIG_PCNET_COMMON=n
CONFIG_AC97=n
CONFIG_HDA=y
CONFIG_ES1370=n
CONFIG_SCSI=y
CONFIG_LSI_SCSI_PCI=y
CONFIG_VMW_PVSCSI_SCSI_PCI=n
CONFIG_MEGASAS_SCSI_PCI=n
CONFIG_MPTSAS_SCSI_PCI=n
CONFIG_RTL8139_PCI=n
CONFIG_E1000_PCI=n
CONFIG_E1000E_PCI_EXPRESS=n
CONFIG_IDE_CORE=y
CONFIG_IDE_QDEV=y
CONFIG_IDE_PCI=y
CONFIG_AHCI=y
CONFIG_ESP=n
CONFIG_ESP_PCI=n
CONFIG_SERIAL_ISA=y
CONFIG_SERIAL_PCI=y
CONFIG_CAN_BUS=y
CONFIG_CAN_SJA1000=y
CONFIG_CAN_PCI=y
CONFIG_IPACK=n
CONFIG_WDT_IB6300ESB=n
CONFIG_PCI_TESTDEV=n
CONFIG_NVME_PCI=y
CONFIG_SD=y
CONFIG_SDHCI=n
CONFIG_EDU=n
CONFIG_VGA_PCI=y
CONFIG_BOCHS_DISPLAY=n
CONFIG_IVSHMEM_DEVICE=n
CONFIG_ROCKER=n
CONFIG_VFIO_PCI=y
CONFIG_PCI_GENERIC=y

View File

@@ -0,0 +1,111 @@
From 104e56711f131d80de82caed8759947509576e3b Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Mon, 11 Jan 2021 11:50:13 +0000
Subject: [PATCH] gitmodules: use GitLab repos instead of qemu.org
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
qemu.org is running out of bandwidth and the QEMU project is moving
towards a gating CI on GitLab. Use the GitLab repos instead of qemu.org
(they will become mirrors).
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20210111115017.156802-3-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Backported-by: Jakob Naucke <jakob.naucke@ibm.com>
---
.gitmodules | 40 ++++++++++++++++++++--------------------
1 file changed, 20 insertions(+), 20 deletions(-)
diff --git a/.gitmodules b/.gitmodules
index 9c0501a4d4..fec515cec5 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -1,60 +1,60 @@
[submodule "roms/seabios"]
path = roms/seabios
- url = https://git.qemu.org/git/seabios.git/
+ url = https://gitlab.com/qemu-project/seabios.git/
[submodule "roms/SLOF"]
path = roms/SLOF
- url = https://git.qemu.org/git/SLOF.git
+ url = https://gitlab.com/qemu-project/SLOF.git
[submodule "roms/ipxe"]
path = roms/ipxe
- url = https://git.qemu.org/git/ipxe.git
+ url = https://gitlab.com/qemu-project/ipxe.git
[submodule "roms/openbios"]
path = roms/openbios
- url = https://git.qemu.org/git/openbios.git
+ url = https://gitlab.com/qemu-project/openbios.git
[submodule "roms/qemu-palcode"]
path = roms/qemu-palcode
- url = https://git.qemu.org/git/qemu-palcode.git
+ url = https://gitlab.com/qemu-project/qemu-palcode.git
[submodule "roms/sgabios"]
path = roms/sgabios
- url = https://git.qemu.org/git/sgabios.git
+ url = https://gitlab.com/qemu-project/sgabios.git
[submodule "dtc"]
path = dtc
- url = https://git.qemu.org/git/dtc.git
+ url = https://gitlab.com/qemu-project/dtc.git
[submodule "roms/u-boot"]
path = roms/u-boot
- url = https://git.qemu.org/git/u-boot.git
+ url = https://gitlab.com/qemu-project/u-boot.git
[submodule "roms/skiboot"]
path = roms/skiboot
- url = https://git.qemu.org/git/skiboot.git
+ url = https://gitlab.com/qemu-project/skiboot.git
[submodule "roms/QemuMacDrivers"]
path = roms/QemuMacDrivers
- url = https://git.qemu.org/git/QemuMacDrivers.git
+ url = https://gitlab.com/qemu-project/QemuMacDrivers.git
[submodule "ui/keycodemapdb"]
path = ui/keycodemapdb
- url = https://git.qemu.org/git/keycodemapdb.git
+ url = https://gitlab.com/qemu-project/keycodemapdb.git
[submodule "capstone"]
path = capstone
- url = https://git.qemu.org/git/capstone.git
+ url = https://gitlab.com/qemu-project/capstone.git
[submodule "roms/seabios-hppa"]
path = roms/seabios-hppa
- url = https://git.qemu.org/git/seabios-hppa.git
+ url = https://gitlab.com/qemu-project/seabios-hppa.git
[submodule "roms/u-boot-sam460ex"]
path = roms/u-boot-sam460ex
- url = https://git.qemu.org/git/u-boot-sam460ex.git
+ url = https://gitlab.com/qemu-project/u-boot-sam460ex.git
[submodule "tests/fp/berkeley-testfloat-3"]
path = tests/fp/berkeley-testfloat-3
- url = https://git.qemu.org/git/berkeley-testfloat-3.git
+ url = https://gitlab.com/qemu-project/berkeley-testfloat-3.git
[submodule "tests/fp/berkeley-softfloat-3"]
path = tests/fp/berkeley-softfloat-3
- url = https://git.qemu.org/git/berkeley-softfloat-3.git
+ url = https://gitlab.com/qemu-project/berkeley-softfloat-3.git
[submodule "roms/edk2"]
path = roms/edk2
- url = https://git.qemu.org/git/edk2.git
+ url = https://gitlab.com/qemu-project/edk2.git
[submodule "slirp"]
path = slirp
- url = https://git.qemu.org/git/libslirp.git
+ url = https://gitlab.com/qemu-project/libslirp.git
[submodule "roms/opensbi"]
path = roms/opensbi
- url = https://git.qemu.org/git/opensbi.git
+ url = https://gitlab.com/qemu-project/opensbi.git
[submodule "roms/qboot"]
path = roms/qboot
- url = https://github.com/bonzini/qboot
+ url = https://gitlab.com/qemu-project/qboot.git
--
2.31.1

View File

@@ -0,0 +1,118 @@
From 9911ca0d1bca846f159ebdb48b9d8a784c959589 Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Mon, 11 Jan 2021 11:50:13 +0000
Subject: [PATCH] gitmodules: use GitLab repos instead of qemu.org
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
qemu.org is running out of bandwidth and the QEMU project is moving
towards a gating CI on GitLab. Use the GitLab repos instead of qemu.org
(they will become mirrors).
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20210111115017.156802-3-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
.gitmodules | 44 ++++++++++++++++++++++----------------------
1 file changed, 22 insertions(+), 22 deletions(-)
diff --git a/.gitmodules b/.gitmodules
index 2bdeeacef8..08b1b48a09 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -1,66 +1,66 @@
[submodule "roms/seabios"]
path = roms/seabios
- url = https://git.qemu.org/git/seabios.git/
+ url = https://gitlab.com/qemu-project/seabios.git/
[submodule "roms/SLOF"]
path = roms/SLOF
- url = https://git.qemu.org/git/SLOF.git
+ url = https://gitlab.com/qemu-project/SLOF.git
[submodule "roms/ipxe"]
path = roms/ipxe
- url = https://git.qemu.org/git/ipxe.git
+ url = https://gitlab.com/qemu-project/ipxe.git
[submodule "roms/openbios"]
path = roms/openbios
- url = https://git.qemu.org/git/openbios.git
+ url = https://gitlab.com/qemu-project/openbios.git
[submodule "roms/qemu-palcode"]
path = roms/qemu-palcode
- url = https://git.qemu.org/git/qemu-palcode.git
+ url = https://gitlab.com/qemu-project/qemu-palcode.git
[submodule "roms/sgabios"]
path = roms/sgabios
- url = https://git.qemu.org/git/sgabios.git
+ url = https://gitlab.com/qemu-project/sgabios.git
[submodule "dtc"]
path = dtc
- url = https://git.qemu.org/git/dtc.git
+ url = https://gitlab.com/qemu-project/dtc.git
[submodule "roms/u-boot"]
path = roms/u-boot
- url = https://git.qemu.org/git/u-boot.git
+ url = https://gitlab.com/qemu-project/u-boot.git
[submodule "roms/skiboot"]
path = roms/skiboot
- url = https://git.qemu.org/git/skiboot.git
+ url = https://gitlab.com/qemu-project/skiboot.git
[submodule "roms/QemuMacDrivers"]
path = roms/QemuMacDrivers
- url = https://git.qemu.org/git/QemuMacDrivers.git
+ url = https://gitlab.com/qemu-project/QemuMacDrivers.git
[submodule "ui/keycodemapdb"]
path = ui/keycodemapdb
- url = https://git.qemu.org/git/keycodemapdb.git
+ url = https://gitlab.com/qemu-project/keycodemapdb.git
[submodule "capstone"]
path = capstone
- url = https://git.qemu.org/git/capstone.git
+ url = https://gitlab.com/qemu-project/capstone.git
[submodule "roms/seabios-hppa"]
path = roms/seabios-hppa
- url = https://git.qemu.org/git/seabios-hppa.git
+ url = https://gitlab.com/qemu-project/seabios-hppa.git
[submodule "roms/u-boot-sam460ex"]
path = roms/u-boot-sam460ex
- url = https://git.qemu.org/git/u-boot-sam460ex.git
+ url = https://gitlab.com/qemu-project/u-boot-sam460ex.git
[submodule "tests/fp/berkeley-testfloat-3"]
path = tests/fp/berkeley-testfloat-3
- url = https://git.qemu.org/git/berkeley-testfloat-3.git
+ url = https://gitlab.com/qemu-project/berkeley-testfloat-3.git
[submodule "tests/fp/berkeley-softfloat-3"]
path = tests/fp/berkeley-softfloat-3
- url = https://git.qemu.org/git/berkeley-softfloat-3.git
+ url = https://gitlab.com/qemu-project/berkeley-softfloat-3.git
[submodule "roms/edk2"]
path = roms/edk2
- url = https://git.qemu.org/git/edk2.git
+ url = https://gitlab.com/qemu-project/edk2.git
[submodule "slirp"]
path = slirp
- url = https://git.qemu.org/git/libslirp.git
+ url = https://gitlab.com/qemu-project/libslirp.git
[submodule "roms/opensbi"]
path = roms/opensbi
- url = https://git.qemu.org/git/opensbi.git
+ url = https://gitlab.com/qemu-project/opensbi.git
[submodule "roms/qboot"]
path = roms/qboot
- url = https://git.qemu.org/git/qboot.git
+ url = https://gitlab.com/qemu-project/qboot.git
[submodule "meson"]
path = meson
- url = https://git.qemu.org/git/meson.git
+ url = https://gitlab.com/qemu-project/meson.git
[submodule "roms/vbootrom"]
path = roms/vbootrom
- url = https://git.qemu.org/git/vbootrom.git
+ url = https://gitlab.com/qemu-project/vbootrom.git
--
2.27.0

View File

@@ -111,13 +111,68 @@ bump_repo() {
fi
if [ "${repo}" == "kata-containers" ]; then
info "Updating kata-deploy / kata-cleanup image tags"
sed -i "s#quay.io/kata-containers/kata-deploy:${current_version}#quay.io/kata-containers/kata-deploy:${new_version}#g" tools/packaging/kata-deploy/kata-deploy/base/kata-deploy.yaml
sed -i "s#quay.io/kata-containers/kata-deploy:${current_version}#quay.io/kata-containers/kata-deploy:${new_version}#g" tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup.yaml
git diff
# There are 3 scenarios of what we can do, based on
# which branch we're targeting:
#
# 1) [main] ------> [main] NO-OP
# "alpha0" "alpha1"
#
# +----------------+----------------+
# | from | to |
# -----------------+----------------+----------------+
# kata-deploy | "latest" | "latest" |
# -----------------+----------------+----------------+
# kata-deploy-base | "stable" | "stable" |
# -----------------+----------------+----------------+
#
#
# 2) [main] ------> [stable] Update kata-deploy and
# "alpha2" "rc0" get rid of kata-deploy-base
#
# +----------------+----------------+
# | from | to |
# -----------------+----------------+----------------+
# kata-deploy | "latest" | "rc0" |
# -----------------+----------------+----------------+
# kata-deploy-base | "stable" | REMOVED |
# -----------------+----------------+----------------+
#
#
# 3) [stable] ------> [stable] Update kata-deploy
# "x.y.z" "x.y.(z+1)"
#
# +----------------+----------------+
# | from | to |
# -----------------+----------------+----------------+
# kata-deploy | "x.y.z" | "x.y.(z+1)" |
# -----------------+----------------+----------------+
# kata-deploy-base | NON-EXISTENT | NON-EXISTENT |
# -----------------+----------------+----------------+
git add tools/packaging/kata-deploy/kata-deploy/base/kata-deploy.yaml
git add tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup.yaml
info "Updating kata-deploy / kata-cleanup image tags"
if [ "${target_branch}" == "main" ] && [[ "${new_version}" =~ "rc" ]]; then
# case 2)
## change the "latest" tag to the "${new_version}" one
sed -i "s#quay.io/kata-containers/kata-deploy:latest#quay.io/kata-containers/kata-deploy:${new_version}#g" tools/packaging/kata-deploy/kata-deploy/base/kata-deploy.yaml
sed -i "s#quay.io/kata-containers/kata-deploy:latest#quay.io/kata-containers/kata-deploy:${new_version}#g" tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup.yaml
git diff
git add tools/packaging/kata-deploy/kata-deploy/base/kata-deploy.yaml
git add tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup.yaml
## and remove the kata-deploy & kata-cleanup stable yaml files
git rm tools/packaging/kata-deploy/kata-deploy/base/kata-deploy-stable.yaml
git rm tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup-stable.yaml
elif [[ "${target_branch}" =~ "stable" ]]; then
# case 3)
sed -i "s#quay.io/kata-containers/kata-deploy:${current_version}#quay.io/kata-containers/kata-deploy:${new_version}#g" tools/packaging/kata-deploy/kata-deploy/base/kata-deploy.yaml
sed -i "s#quay.io/kata-containers/kata-deploy:${current_version}#quay.io/kata-containers/kata-deploy:${new_version}#g" tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup.yaml
git diff
git add tools/packaging/kata-deploy/kata-deploy/base/kata-deploy.yaml
git add tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup.yaml
fi
fi
info "Creating PR message"
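The `sed` tag bump performed above can be exercised standalone. This sketch uses a temporary file in place of the real `kata-deploy.yaml`, and the version string is illustrative:

```shell
# Standalone demo of the tag substitution done by bump_repo (case 2):
# the "latest" tag is rewritten to the new release-candidate tag.
tmp=$(mktemp)
echo "image: quay.io/kata-containers/kata-deploy:latest" > "$tmp"
new_version="2.3.0-rc0"
sed -i "s#quay.io/kata-containers/kata-deploy:latest#quay.io/kata-containers/kata-deploy:${new_version}#g" "$tmp"
result=$(cat "$tmp")
rm -f "$tmp"
echo "$result"
```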

View File

@@ -75,7 +75,7 @@ assets:
url: "https://github.com/cloud-hypervisor/cloud-hypervisor"
uscan-url: >-
https://github.com/cloud-hypervisor/cloud-hypervisor/tags.*/v?(\d\S+)\.tar\.gz
version: "v17.0"
version: "v18.0"
firecracker:
description: "Firecracker micro-VMM"