Compare commits


51 Commits
2.3.0 ... 2.3.1

Author SHA1 Message Date
Peng Tao
365e358115 Merge pull request #3402 from snir911/2.3.1-branch-bump
# Kata Containers 2.3.1
2022-01-11 16:56:05 +08:00
Snir Sheriber
a2e524f356 release: Kata Containers 2.3.1
- stable-2.3 | kata-deploy: fix tar command in dockerfile
- stable-2.3 | versions: Upgrade to Cloud Hypervisor v20.2
- stable-2.3 Missing backports
- stable-2.3 | docs: Fix kernel configs README spelling errors
- docs: Fix outdated links
- stable-2.3 | versions: Upgrade to Cloud Hypervisor v20.1
- Backport osbuilder: Revert to using apk.static for Alpine
- stable-2.3 | runtime: only call stopVirtiofsd when shared_fs is virtio-fs
- Backport versions: Use Ubuntu initrd for non-musl archs
- stable-2.3 | Upgrade to Cloud Hypervisor v20.0 and Openapi-generator v5.3.0
- stable-2.3 | packaging: Fix missing commit message in building kata-runtime
- stable-2.3 | runtime: enable vhost-net for rootless hypervisor
- [backport] agent: create directories for watchable-bind mounts
- runtime: enable FUSE_DAX kernel config for DAX

dfbe74c4 kata-deploy: fix tar command in dockerfile
9e7eed7c versions: Upgrade to Cloud Hypervisor v20.2
53cf1dd0 tools/packaging: add copyright to kata-monitor's Dockerfile
a4dee6a5 packaging: delint tests dockerfiles
fd87b60c packaging: delint kata-deploy dockerfiles
2cb4f7ba ci/openshift-ci: delint dockerfiles
993dcc94 osbuilder: delint dockerfiles
bbd7cc2f packaging: delint kata-monitor dockerfiles
9837ec72 packaging: delint static-build dockerfiles
8785106f packaging/qemu: Use QEMU script to update submodules
a915f082 packaging/qemu: Use partial git clone
ec3faab8 security: Update rust crate versions
1f61be84 osbuilder: Add protoc to the alpine container
d2d8f9ac osbuilder: avoid copying the already-deprecated versions.txt
ca30eee3 kata-manager: Retrieve static tarball
0217abce kata-deploy: Deal with empty containerd conf file
572b25dd osbuilder: be runtime consistent also with podman build
84e69ecb agent: use container ID as watchable storage key for hashmap
77b6cfbd docs: Fix kernel configs README spelling errors
24085c95 docs: Fix outdated k8s link
514bf74f docs: Replicate branch rename on runtime-spec
77a2502a cri-o: Update links for the CRI-O github page
6413ecf4 docs: Backport source reorganization links
a0bed72d versions: Upgrade to Cloud Hypervisor v20.1
d03e05e8 versions: Use fixed, minor version for Alpine
0f7db91c osbuilder: Revert to using apk.static for Alpine
271d67a8 runtime: only call stopVirtiofsd when shared_fs is virtio-fs
7c15335d versions: Use Ubuntu initrd for non-musl archs
15080f20 virtcontainers: clh: Upgrade to openapi-generator v5.3.0
c2b8eb3c virtcontainers: clh: Re-generate the client code
fe0fbab5 versions: Upgrade to Cloud Hypervisor v20.0
be5468fd packaging: Fix missing commit message in building kata-runtime
18bb9a5d runtime: enable vhost-net for rootless hypervisor
3458073d agent: create directories for watchable-bind mounts
0e91503c runtime: enable FUSE_DAX kernel config for DAX

Signed-off-by: Snir Sheriber <ssheribe@redhat.com>
2022-01-06 20:51:21 +02:00
snir911
3d4dedefda Merge pull request #3396 from snir911/stable-2.3-fix-kata-deploy
stable-2.3 | kata-deploy: fix tar command in dockerfile
2022-01-06 20:36:36 +02:00
snir911
919fc56daa Merge pull request #3397 from likebreath/0105/backport_clh_v20.2
stable-2.3 | versions: Upgrade to Cloud Hypervisor v20.2
2022-01-06 11:22:41 +02:00
Snir Sheriber
dfbe74c489 kata-deploy: fix tar command in dockerfile
The tar parameters were passed incorrectly.

Fixes: #3394
Signed-off-by: Snir Sheriber <ssheribe@redhat.com>
2022-01-06 08:26:36 +02:00
Bo Chen
9e7eed7c4b versions: Upgrade to Cloud Hypervisor v20.2
This is a bug-fix release from Cloud Hypervisor addressing the following
issues: 1) Don't error out when setting up the SIGWINCH handler (for
console resize) when this fails due to older kernel; 2) Seccomp rules
were refined to remove syscalls that are now unused; 3) Fix reboot on
older host kernels when SIGWINCH handler was not initialised; 4) Fix
virtio-vsock blocking issue.

Details can be found at: https://github.com/cloud-hypervisor/cloud-hypervisor/releases/tag/v20.2

Fixes: #3383

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit 1f581a0405)
2022-01-05 10:52:53 -08:00
Archana Shinde
a0bb8c5599 Merge pull request #3368 from snir911/backports-2.3
stable-2.3 Missing backports
2022-01-04 06:42:42 -08:00
Wainer dos Santos Moschetta
53cf1dd042 tools/packaging: add copyright to kata-monitor's Dockerfile
(added as a dependency of the backport)

The kata-monitor Dockerfile was added by Eric Ernst in commit 2f1cb7995f,
but for some reason the static checker did not catch the missing copyright statement
at the time the file was added. It is complaining about it now, so assign the copyright to
him to keep the static checker happy.

Fixes #3329
github.com/kata-containers/tests#4310
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
2022-01-03 15:31:09 +02:00
Wainer dos Santos Moschetta
a4dee6a591 packaging: delint tests dockerfiles
Removed all errors/warnings pointed out by hadolint version 2.7.0, except for the following
ignored rules:
  - "DL3008 warning: Pin versions in apt get install"
  - "DL3041 warning: Specify version with `dnf install -y <package>-<version>`"
  - "DL3033 warning: Specify version with `yum install -y <package>-<version>`"
  - "DL3048 style: Invalid label key"
  - "DL3003 warning: Use WORKDIR to switch to a directory"
  - "DL3018 warning: Pin versions in apk add. Instead of apk add <package> use apk add <package>=<version>"
  - "DL3037 warning: Specify version with zypper install -y <package>[=]<version>"

Fixes #3107
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
2022-01-03 10:53:19 +02:00
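The ignore list above can also be expressed as a hadolint configuration file. This is an illustrative sketch of a `.hadolint.yaml` using hadolint's `ignored` key; the backports themselves may pass the rules on the command line or inline instead:

```yaml
# Illustrative hadolint configuration: skip the version-pinning and
# style rules listed in the commit message.
ignored:
  - DL3008
  - DL3041
  - DL3033
  - DL3048
  - DL3003
  - DL3018
  - DL3037
```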
Wainer dos Santos Moschetta
fd87b60c7a packaging: delint kata-deploy dockerfiles
Removed all errors/warnings pointed out by hadolint version 2.7.0, except for the following
ignored rules:
  - "DL3008 warning: Pin versions in apt get install"
  - "DL3041 warning: Specify version with `dnf install -y <package>-<version>`"
  - "DL3033 warning: Specify version with `yum install -y <package>-<version>`"
  - "DL3048 style: Invalid label key"
  - "DL3003 warning: Use WORKDIR to switch to a directory"
  - "DL3018 warning: Pin versions in apk add. Instead of apk add <package> use apk add <package>=<version>"
  - "DL3037 warning: Specify version with zypper install -y <package>[=]<version>"

Fixes #3107
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
2022-01-03 10:52:53 +02:00
Wainer dos Santos Moschetta
2cb4f7ba70 ci/openshift-ci: delint dockerfiles
Removed all errors/warnings pointed out by hadolint version 2.7.0, except for the following
ignored rules:
  - "DL3008 warning: Pin versions in apt get install"
  - "DL3041 warning: Specify version with `dnf install -y <package>-<version>`"
  - "DL3033 warning: Specify version with `yum install -y <package>-<version>`"
  - "DL3048 style: Invalid label key"
  - "DL3003 warning: Use WORKDIR to switch to a directory"
  - "DL3018 warning: Pin versions in apk add. Instead of apk add <package> use apk add <package>=<version>"
  - "DL3037 warning: Specify version with zypper install -y <package>[=]<version>"

Fixes #3107
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
2022-01-03 10:52:47 +02:00
Wainer dos Santos Moschetta
993dcc94ff osbuilder: delint dockerfiles
Removed all errors/warnings pointed out by hadolint version 2.7.0, except for the following
ignored rules:
  - "DL3008 warning: Pin versions in apt get install"
  - "DL3041 warning: Specify version with `dnf install -y <package>-<version>`"
  - "DL3033 warning: Specify version with `yum install -y <package>-<version>`"
  - "DL3048 style: Invalid label key"
  - "DL3003 warning: Use WORKDIR to switch to a directory"
  - "DL3018 warning: Pin versions in apk add. Instead of apk add <package> use apk add <package>=<version>"
  - "DL3037 warning: Specify version with zypper install -y <package>[=]<version>"

Fixes #3107
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
2022-01-03 10:52:43 +02:00
Wainer dos Santos Moschetta
bbd7cc2f93 packaging: delint kata-monitor dockerfiles
Removed all errors/warnings pointed out by hadolint version 2.7.0, except for the following
ignored rules:
  - "DL3008 warning: Pin versions in apt get install"
  - "DL3041 warning: Specify version with `dnf install -y <package>-<version>`"
  - "DL3033 warning: Specify version with `yum install -y <package>-<version>`"
  - "DL3048 style: Invalid label key"
  - "DL3003 warning: Use WORKDIR to switch to a directory"
  - "DL3018 warning: Pin versions in apk add. Instead of apk add <package> use apk add <package>=<version>"
  - "DL3037 warning: Specify version with zypper install -y <package>[=]<version>"

Fixes #3107
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
2022-01-03 10:52:39 +02:00
Wainer dos Santos Moschetta
9837ec728c packaging: delint static-build dockerfiles
Removed all errors/warnings pointed out by hadolint version 2.7.0, except for the following
ignored rules:
  - "DL3008 warning: Pin versions in apt get install"
  - "DL3041 warning: Specify version with `dnf install -y <package>-<version>`"
  - "DL3033 warning: Specify version with `yum install -y <package>-<version>`"
  - "DL3048 style: Invalid label key"
  - "DL3003 warning: Use WORKDIR to switch to a directory"
  - "DL3018 warning: Pin versions in apk add. Instead of apk add <package> use apk add <package>=<version>"
  - "DL3037 warning: Specify version with zypper install -y <package>[=]<version>"

Fixes #3107
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
2022-01-03 10:52:33 +02:00
Wainer dos Santos Moschetta
8785106f6c packaging/qemu: Use QEMU script to update submodules
Currently QEMU's submodules are fetched with plain git clones, but QEMU
ships scripts/git-submodule.sh for exactly that purpose. Let's use that script.

Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
2022-01-03 10:52:25 +02:00
Wainer dos Santos Moschetta
a915f08266 packaging/qemu: Use partial git clone
The static build of QEMU spends a good amount of time cloning the
source tree because we do a full git clone. To speed up that
operation, this changes the Dockerfile to perform a partial clone
by using the --depth=1 argument.

Fixes #3291
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
2022-01-03 10:52:17 +02:00
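The change described above amounts to passing `--depth=1` to `git clone`. A minimal, illustrative sketch with a throwaway local repository (the paths and repository contents here are hypothetical, not the Kata build scripts):

```shell
#!/bin/sh
set -e

# Build a throwaway repository with two commits, then clone it shallowly.
workdir=$(mktemp -d)
git init -q "$workdir/src"
git -C "$workdir/src" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "first"
git -C "$workdir/src" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "second"

# file:// is needed for --depth to take effect on a local path.
# A shallow clone fetches only the most recent commit, which is what
# speeds up the QEMU static build.
git clone -q --depth=1 "file://$workdir/src" "$workdir/dst"

git -C "$workdir/dst" rev-list --count HEAD
```

The shallow clone carries only one commit of history, so the transfer is much smaller than a full clone of a large tree like QEMU's.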
Snir Sheriber
ec3faab892 security: Update rust crate versions
backporting b1f4e945b3 original commit msg (modified):

Update the rust dependencies that have upstream security fixes. Issues
fixed by this change:

- [`RUSTSEC-2020-0002`](https://rustsec.org/advisories/RUSTSEC-2020-0002) (`prost` crate)
- [`RUSTSEC-2020-0036`](https://rustsec.org/advisories/RUSTSEC-2020-0036) (`failure` crate)
- [`RUSTSEC-2021-0073`](https://rustsec.org/advisories/RUSTSEC-2021-0073) (`prost-types` crate)
- [`RUSTSEC-2021-0119`](https://rustsec.org/advisories/RUSTSEC-2021-0119) (`nix` crate)

This change also includes:

- Minor code changes for the new version of `prometheus` for the agent.

Fixes: #3296.

Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
Signed-off-by: Snir Sheriber <ssheribe@redhat.com>
2021-12-29 16:58:14 +02:00
Fabiano Fidêncio
1f61be842d osbuilder: Add protoc to the alpine container
It seems the lack of protoc in the alpine containers is causing issues
with some of our CIs, such as the VFIO one.

Fixes: #3323

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2021-12-29 14:40:24 +02:00
zhanghj
d2d8f9ac65 osbuilder: avoid copying the already-deprecated versions.txt
The versions.txt in the rootfs-builder directory has already been removed,
so stop copying it in the list of helper files.

Fixes: #3267

Signed-off-by: zhanghj <zhanghj.lc@inspur.com>
2021-12-29 14:39:34 +02:00
Jakob Naucke
ca30eee3e2 kata-manager: Retrieve static tarball
In `utils/kata-manager.sh`, we download the first asset listed for the
release, which used to be the static x86_64 tarball. If that happened to
not match the system architecture, we would abort. Besides that logic
being invalid for non-x86_64 (despite no other tarballs being distributed
at the moment), the first asset listed is also no longer the static tarball;
it is now the vendored source tarball. Retrieve all _static_ tarballs
and select the appropriate one depending on the architecture.

Fixes: #3254
Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-12-29 14:39:25 +02:00
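The selection logic described above can be sketched as follows. The asset names are made up for illustration, and the architecture is pinned to x86_64 for determinism; the real script matches `uname -m` against assets fetched from the GitHub API:

```shell
#!/bin/sh
set -e

# Simulated release asset list. The first entry stands in for the vendored
# source tarball that the old "take the first asset" logic picked up.
assets="kata-containers-2.3.1-vendor.tar.gz
kata-static-2.3.1-x86_64.tar.xz
kata-static-2.3.1-s390x.tar.xz"

# The real script would use: arch=$(uname -m)
arch="x86_64"

# Keep only static tarballs, then pick the one matching the architecture.
tarball=$(printf '%s\n' "$assets" | grep static | grep "$arch")

echo "$tarball"
```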
Fabiano Fidêncio
0217abce24 kata-deploy: Deal with empty containerd conf file
As containerd can properly run without an existing
`/etc/containerd/config.toml` file (it then runs with the default
configuration), let's explicitly create the file in those cases.

This will avoid issues when amending runtime classes to a non-existent
file.

Fixes: #3229

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Tested-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-12-29 14:39:14 +02:00
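A sketch of the guard described in this commit. The config path is redirected into a temporary directory here, and the runtime-class snippet is illustrative; the real script targets `/etc/containerd/config.toml`:

```shell
#!/bin/sh
set -e

conf_dir=$(mktemp -d)
conf="$conf_dir/config.toml"

# containerd runs fine with no config file at all (it uses built-in
# defaults), but amending runtime classes into a non-existent file caused
# issues per the commit message, so create an empty file first.
if [ ! -f "$conf" ]; then
    touch "$conf"
fi

# Amending a runtime class now works even on a fresh host.
cat >> "$conf" <<EOF
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata]
  runtime_type = "io.containerd.kata.v2"
EOF

grep -c runtime_type "$conf"
```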
Snir Sheriber
572b25dd35 osbuilder: be runtime consistent also with podman build
Use the same runtime used for `podman run` also for the `podman build` command.
Additionally, remove "docker" from the docker_run_args variable.

Fixes: #3239
Signed-off-by: Snir Sheriber <ssheribe@redhat.com>
2021-12-29 14:38:32 +02:00
bin
84e69ecb22 agent: use container ID as watchable storage key for hashmap
Using the sandbox ID as the key leaks the storage of failed
containers.

Fixes: #3172

Signed-off-by: bin <bin@hyper.sh>
2021-12-29 14:38:18 +02:00
Archana Shinde
57a6d46376 Merge pull request #3347 from Jakob-Naucke/backport-spell-kernel-readme
stable-2.3 | docs: Fix kernel configs README spelling errors
2021-12-23 08:56:52 -08:00
Jakob Naucke
77b6cfbd15 docs: Fix kernel configs README spelling errors
- `fragments` in backticks
- s/perfoms/performs/

Fixes: #3338
Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-12-23 15:54:10 +01:00
Peng Tao
0e1cb124b7 Merge pull request #3335 from Jakob-Naucke/backport-src-reorg
docs: Fix outdated links
2021-12-23 11:40:55 +08:00
Jakob Naucke
24085c9553 docs: Fix outdated k8s link
in the virtcontainers README

Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-12-22 19:42:47 +01:00
Jakob Naucke
514bf74f8f docs: Replicate branch rename on runtime-spec
The runtime-spec repository renamed its `master` branch to `main`.

Fixes: #3336
Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-12-22 18:18:46 +01:00
Fabiano Fidêncio
77a2502a0f cri-o: Update links for the CRI-O github page
The links point either to the no-longer-used `master` branch
or to the kubernetes-incubator page.

Let's always point to the CRI-O GitHub page, using the `main` branch.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2021-12-22 18:18:46 +01:00
Jakob Naucke
6413ecf459 docs: Backport source reorganization links
#3244 moved directories that were referred to with links to `main`,
which affects stable.

Fixes: #3334
Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-12-22 17:59:41 +01:00
Fabiano Fidêncio
a31b5b9ee8 Merge pull request #3269 from likebreath/1214/backport_clh_v20.1
stable-2.3 | versions: Upgrade to Cloud Hypervisor v20.1
2021-12-15 00:18:56 +01:00
Bo Chen
a0bed72d49 versions: Upgrade to Cloud Hypervisor v20.1
This is a bug-fix release from Cloud Hypervisor addressing the following
issues: 1) Networking performance regression with virtio-net; 2) Limit
file descriptors sent in vfio-user support; 3) Fully advertise PCI MMIO
config regions in ACPI tables; 4) Set the TSS and KVM identity maps so
they don't overlap with firmware RAM; 5) Correctly update the DeviceTree
on restore.

Details can be found at: https://github.com/cloud-hypervisor/cloud-hypervisor/releases/tag/v20.1

Fixes: #3262

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit bbfb10e169)
2021-12-14 11:06:08 -08:00
Fabiano Fidêncio
d61bcb8a44 Merge pull request #3247 from Jakob-Naucke/backport-apk-static
Backport osbuilder: Revert to using apk.static for Alpine
2021-12-10 12:10:59 +01:00
Jakob Naucke
d03e05e803 versions: Use fixed, minor version for Alpine
- Set Alpine guest rootfs to 3.13 on all instances.
- Specify a minor version rather than patch level as the Alpine
  repositories use that.

Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-12-09 16:47:43 +01:00
Jakob Naucke
0f7db91c0f osbuilder: Revert to using apk.static for Alpine
#2399 partially reverted #418 but missed returning to bootstrapping the
rootfs with `apk.static` instead of copying the entire root, which can
result in drastically larger (more than 10x) images. Revert this as well
(this requires some updates to URL building).

Fixes: #3216
Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-12-09 16:47:43 +01:00
Julio Montes
25ee73ceb3 Merge pull request #3230 from liubin/backport/3220
stable-2.3 | runtime: only call stopVirtiofsd when shared_fs is virtio-fs
2021-12-08 08:32:04 -06:00
Fabiano Fidêncio
64ae76e967 Merge pull request #3224 from Jakob-Naucke/backport-ppc64le-s390x-ubuntu-initrd
Backport versions: Use Ubuntu initrd for non-musl archs
2021-12-08 09:05:13 +01:00
bin
271d67a831 runtime: only call stopVirtiofsd when shared_fs is virtio-fs
If shared_fs is set to virtio-9p, virtiofsd is not started,
so there is no need to stop it.

Fixes: #3219

Signed-off-by: bin <bin@hyper.sh>
2021-12-08 11:30:35 +08:00
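The guard described above, sketched in shell against a toy configuration file. The real check lives in the Go runtime; the key name mirrors Kata's `shared_fs` setting, but the parsing here is simplified for illustration:

```shell
#!/bin/sh
set -e

conf=$(mktemp)
cat > "$conf" <<EOF
shared_fs = "virtio-9p"
EOF

shared_fs=$(sed -n 's/^shared_fs *= *"\(.*\)"/\1/p' "$conf")

# virtiofsd only runs when shared_fs is virtio-fs; for virtio-9p there is
# nothing to stop, so skip the call instead of failing.
if [ "$shared_fs" = "virtio-fs" ]; then
    echo "stopping virtiofsd"
else
    echo "nothing to stop for $shared_fs"
fi
```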
Julio Montes
f42c7d5125 Merge pull request #3215 from likebreath/1206/backport_clh
stable-2.3 | Upgrade to Cloud Hypervisor v20.0 and Openapi-generator v5.3.0
2021-12-07 07:51:21 -06:00
Jakob Naucke
7c15335dc9 versions: Use Ubuntu initrd for non-musl archs
ppc64le & s390x have no (well supported) musl target for Rust,
therefore, the agent must use glibc and cannot use Alpine. Specify
Ubuntu as the distribution to be used for initrd.

Fixes: #3212
Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
2021-12-07 12:15:16 +01:00
Bo Chen
15080f20e7 virtcontainers: clh: Upgrade to openapi-generator v5.3.0
The latest release of openapi-generator, v5.3.0, contains the fix for
the `dropping err` bug [1]. This patch also re-generates the client code of
Cloud Hypervisor to have the bug fixed.

[1] https://github.com/OpenAPITools/openapi-generator/pull/10275

Fixes: #3201

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit 995300260e)
2021-12-06 18:41:39 -08:00
Bo Chen
c2b8eb3c2c virtcontainers: clh: Re-generate the client code
This patch re-generates the client code for Cloud Hypervisor v19.0.
Note: The client code of cloud-hypervisor's (CLH) OpenAPI is
automatically generated by openapi-generator [1-2].

[1] https://github.com/OpenAPITools/openapi-generator
[2] https://github.com/kata-containers/kata-containers/blob/main/src/runtime/virtcontainers/pkg/cloud-hypervisor/README.md

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit 4756a04b2d)
2021-12-06 18:38:48 -08:00
Bo Chen
fe0fbab574 versions: Upgrade to Cloud Hypervisor v20.0
Highlights from the Cloud Hypervisor release v20.0: 1) Multiple PCI
segments support (now support up to 496 PCI devices); 2) CPU pinning; 3)
Improved VFIO support; 4) Safer code; 5) Extended documentation; 6) Bug
fixes.

Details can be found at: https://github.com/cloud-hypervisor/cloud-hypervisor/releases/tag/v20.0

Fixes: #3178

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit 0bf4d2578a)
2021-12-06 18:38:48 -08:00
GabyCT
89f9672f56 Merge pull request #3205 from Bevisy/stable-2.3-3196
stable-2.3 | packaging: Fix missing commit message in building kata-runtime
2021-12-06 10:26:17 -06:00
Fabiano Fidêncio
0a32a1793d Merge pull request #3203 from fengwang666/my_2.3_pr_backport
stable-2.3 | runtime: enable vhost-net for rootless hypervisor
2021-12-06 17:08:33 +01:00
Binbin Zhang
be5468fda7 packaging: Fix missing commit message in building kata-runtime
Add the `git` package to the shim-v2 build image.

Fixes: #3196
Backport PR: #3197

Signed-off-by: Binbin Zhang <binbin36520@gmail.com>
2021-12-06 11:04:18 +08:00
Feng Wang
18bb9a5d9b runtime: enable vhost-net for rootless hypervisor
vhost-net was disabled by the rootless Kata runtime feature, which has been abandoned since Kata 2.0.
The rootless flag is now reused for the non-root hypervisor, so re-enable vhost-net.

Fixes #3182

Signed-off-by: Feng Wang <feng.wang@databricks.com>
(cherry picked from commit b3bcb7b251)
2021-12-03 11:28:40 -08:00
Bin Liu
f068057073 Merge pull request #3184 from liubin/backport/3140
[backport] agent: create directories for watchable-bind mounts
2021-12-03 21:24:14 +08:00
bin
3458073d09 agent: create directories for watchable-bind mounts
In function `update_target`, if the updated source is a directory,
we should create the corresponding directory.

Fixes: #3140

Signed-off-by: bin <bin@hyper.sh>
2021-12-03 14:32:08 +08:00
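The fix described above can be sketched as: if the watched source is a directory, create a directory at the target path rather than a file. The real implementation is in the agent's Rust `update_target`; the paths here are hypothetical:

```shell
#!/bin/sh
set -e

work=$(mktemp -d)
src="$work/source/logs"
target="$work/watchable/logs"

mkdir -p "$src"

if [ -d "$src" ]; then
    # Source is a directory: the bind-mount target must be one too.
    mkdir -p "$target"
else
    # Source is a file: create the parent directory and an empty file.
    mkdir -p "$(dirname "$target")"
    touch "$target"
fi

[ -d "$target" ] && echo "created directory target"
```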
Bin Liu
f9c09ad5bc Merge pull request #3177 from fengwang666/my_2.3_pr_backport
runtime: enable FUSE_DAX kernel config for DAX
2021-12-03 13:32:18 +08:00
Feng Wang
0e91503cd4 runtime: enable FUSE_DAX kernel config for DAX
Otherwise the DAX device cannot be set up.

Fixes #3165

Signed-off-by: Feng Wang <feng.wang@databricks.com>
(cherry picked from commit 6105e3ee85)
2021-12-02 09:22:26 -08:00
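The kernel change amounts to enabling one config symbol in the guest kernel fragments. A hedged sketch (FUSE_DAX depends on FUSE and virtio-fs support; the exact fragment file in the packaging tree may differ):

```
CONFIG_FUSE_FS=y
CONFIG_VIRTIO_FS=y
CONFIG_FUSE_DAX=y
```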
76 changed files with 1557 additions and 798 deletions


@@ -1 +1 @@
-2.3.0
+2.3.1


@@ -6,4 +6,9 @@
#
FROM registry.centos.org/centos:8
-RUN yum -y update && yum -y install git sudo wget
+RUN yum -y update && \
+    yum -y install \
+        git \
+        sudo \
+        wget && \
+    yum clean all


@@ -242,8 +242,8 @@ On the other hand, running all non vCPU threads under a dedicated overhead cgrou
accurate metrics on the actual Kata Container pod overhead, allowing for tuning the overhead
cgroup size and constraints accordingly.
-[linux-config]: https://github.com/opencontainers/runtime-spec/blob/master/config-linux.md
-[cgroupspath]: https://github.com/opencontainers/runtime-spec/blob/master/config-linux.md#cgroups-path
+[linux-config]: https://github.com/opencontainers/runtime-spec/blob/main/config-linux.md
+[cgroupspath]: https://github.com/opencontainers/runtime-spec/blob/main/config-linux.md#cgroups-path
# Supported cgroups


@@ -20,7 +20,7 @@ required to spawn pods and containers, and this is the preferred way to run Kata
An equivalent shim implementation for CRI-O is planned.
### CRI-O
-For CRI-O installation instructions, refer to the [CRI-O Tutorial](https://github.com/kubernetes-incubator/cri-o/blob/master/tutorial.md) page.
+For CRI-O installation instructions, refer to the [CRI-O Tutorial](https://github.com/cri-o/cri-o/blob/main/tutorial.md) page.
The following sections show how to set up the CRI-O configuration file (default path: `/etc/crio/crio.conf`) for Kata.
@@ -30,7 +30,7 @@ Unless otherwise stated, all the following settings are specific to the `crio.ru
# runtime used and options for how to set up and manage the OCI runtime.
[crio.runtime]
```
-A comprehensive documentation of the configuration file can be found [here](https://github.com/cri-o/cri-o/blob/master/docs/crio.conf.5.md).
+A comprehensive documentation of the configuration file can be found [here](https://github.com/cri-o/cri-o/blob/main/docs/crio.conf.5.md).
> **Note**: After any change to this file, the CRI-O daemon have to be restarted with:
>````


@@ -203,12 +203,11 @@ is highly recommended. For working with the agent, you may also wish to
[enable a debug console][setup-debug-console]
to allow you to access the VM environment.
[agent-ctl]: https://github.com/kata-containers/kata-containers/blob/main/tools/agent-ctl
[enable-full-debug]: https://github.com/kata-containers/kata-containers/blob/main/docs/Developer-Guide.md#enable-full-debug
[jaeger-all-in-one]: https://www.jaegertracing.io/docs/getting-started/
[jaeger-tracing]: https://www.jaegertracing.io
[opentelemetry]: https://opentelemetry.io
[osbuilder]: https://github.com/kata-containers/kata-containers/blob/main/tools/osbuilder
[setup-debug-console]: https://github.com/kata-containers/kata-containers/blob/main/docs/Developer-Guide.md#set-up-a-debug-console
[trace-forwarder]: https://github.com/kata-containers/kata-containers/blob/main/src/trace-forwarder
[trace-forwarder]: /src/trace-forwarder
[vsock]: https://wiki.qemu.org/Features/VirtioVsock


@@ -7,15 +7,15 @@ edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
-serde_json = "1.0.39"
+serde_json = "1.0.73"
# slog:
# - Dynamic keys required to allow HashMap keys to be slog::Serialized.
# - The 'max_*' features allow changing the log level at runtime
# (by stopping the compiler from removing log calls).
-slog = { version = "2.5.2", features = ["dynamic-keys", "max_level_trace", "release_max_level_debug"] }
-slog-json = "2.3.0"
-slog-async = "2.3.0"
-slog-scope = "4.1.2"
+slog = { version = "2.7.0", features = ["dynamic-keys", "max_level_trace", "release_max_level_debug"] }
+slog-json = "2.4.0"
+slog-async = "2.7.0"
+slog-scope = "4.4.0"
[dev-dependencies]
-tempfile = "3.1.0"
+tempfile = "3.2.0"

src/agent/Cargo.lock (generated): 746 changed lines; file diff suppressed because it is too large


@@ -24,7 +24,7 @@ serial_test = "0.5.1"
# Async helpers
async-trait = "0.1.42"
async-recursion = "0.3.2"
-futures = "0.3.12"
+futures = "0.3.17"
# Async runtime
tokio = { version = "1", features = ["full"] }
@@ -45,10 +45,10 @@ slog-scope = "4.1.2"
slog-stdlog = "4.0.0"
log = "0.4.11"
-prometheus = { version = "0.9.0", features = ["process"] }
-procfs = "0.7.9"
+prometheus = { version = "0.13.0", features = ["process"] }
+procfs = "0.12.0"
anyhow = "1.0.32"
-cgroups = { package = "cgroups-rs", version = "0.2.5" }
+cgroups = { package = "cgroups-rs", version = "0.2.8" }
# Tracing
tracing = "0.1.26"


@@ -5,7 +5,7 @@ authors = ["The Kata Containers community <kata-dev@lists.katacontainers.io>"]
edition = "2018"
[dependencies]
-serde = "1.0.91"
-serde_derive = "1.0.91"
-serde_json = "1.0.39"
-libc = "0.2.58"
+serde = "1.0.131"
+serde_derive = "1.0.131"
+serde_json = "1.0.73"
+libc = "0.2.112"


@@ -23,7 +23,7 @@ scan_fmt = "0.2"
regex = "1.1"
path-absolutize = "1.2.0"
anyhow = "1.0.32"
-cgroups = { package = "cgroups-rs", version = "0.2.5" }
+cgroups = { package = "cgroups-rs", version = "0.2.8" }
rlimit = "0.5.3"
tokio = { version = "1.2.0", features = ["sync", "io-util", "process", "time", "macros"] }


@@ -745,7 +745,7 @@ fn mount_from(
let _ = fs::create_dir_all(&dir).map_err(|e| {
log_child!(
cfd_log,
-"creat dir {}: {}",
+"create dir {}: {}",
dir.to_str().unwrap(),
e.to_string()
)


@@ -23,50 +23,50 @@ macro_rules! sl {
lazy_static! {
 static ref AGENT_SCRAPE_COUNT: IntCounter =
-    prometheus::register_int_counter!(format!("{}_{}",NAMESPACE_KATA_AGENT,"scrape_count").as_ref(), "Metrics scrape count").unwrap();
+    prometheus::register_int_counter!(format!("{}_{}",NAMESPACE_KATA_AGENT,"scrape_count"), "Metrics scrape count").unwrap();
 static ref AGENT_THREADS: Gauge =
-    prometheus::register_gauge!(format!("{}_{}",NAMESPACE_KATA_AGENT,"threads").as_ref(), "Agent process threads").unwrap();
+    prometheus::register_gauge!(format!("{}_{}",NAMESPACE_KATA_AGENT,"threads"), "Agent process threads").unwrap();
 static ref AGENT_TOTAL_TIME: Gauge =
-    prometheus::register_gauge!(format!("{}_{}",NAMESPACE_KATA_AGENT,"total_time").as_ref(), "Agent process total time").unwrap();
+    prometheus::register_gauge!(format!("{}_{}",NAMESPACE_KATA_AGENT,"total_time"), "Agent process total time").unwrap();
 static ref AGENT_TOTAL_VM: Gauge =
-    prometheus::register_gauge!(format!("{}_{}",NAMESPACE_KATA_AGENT,"total_vm").as_ref(), "Agent process total VM size").unwrap();
+    prometheus::register_gauge!(format!("{}_{}",NAMESPACE_KATA_AGENT,"total_vm"), "Agent process total VM size").unwrap();
 static ref AGENT_TOTAL_RSS: Gauge =
-    prometheus::register_gauge!(format!("{}_{}",NAMESPACE_KATA_AGENT,"total_rss").as_ref(), "Agent process total RSS size").unwrap();
+    prometheus::register_gauge!(format!("{}_{}",NAMESPACE_KATA_AGENT,"total_rss"), "Agent process total RSS size").unwrap();
 static ref AGENT_PROC_STATUS: GaugeVec =
-    prometheus::register_gauge_vec!(format!("{}_{}",NAMESPACE_KATA_AGENT,"proc_status").as_ref(), "Agent process status.", &["item"]).unwrap();
+    prometheus::register_gauge_vec!(format!("{}_{}",NAMESPACE_KATA_AGENT,"proc_status"), "Agent process status.", &["item"]).unwrap();
 static ref AGENT_IO_STAT: GaugeVec =
-    prometheus::register_gauge_vec!(format!("{}_{}",NAMESPACE_KATA_AGENT,"io_stat").as_ref(), "Agent process IO statistics.", &["item"]).unwrap();
+    prometheus::register_gauge_vec!(format!("{}_{}",NAMESPACE_KATA_AGENT,"io_stat"), "Agent process IO statistics.", &["item"]).unwrap();
 static ref AGENT_PROC_STAT: GaugeVec =
-    prometheus::register_gauge_vec!(format!("{}_{}",NAMESPACE_KATA_AGENT,"proc_stat").as_ref(), "Agent process statistics.", &["item"]).unwrap();
+    prometheus::register_gauge_vec!(format!("{}_{}",NAMESPACE_KATA_AGENT,"proc_stat"), "Agent process statistics.", &["item"]).unwrap();
 // guest os metrics
 static ref GUEST_LOAD: GaugeVec =
-    prometheus::register_gauge_vec!(format!("{}_{}",NAMESPACE_KATA_GUEST,"load").as_ref(), "Guest system load.", &["item"]).unwrap();
+    prometheus::register_gauge_vec!(format!("{}_{}",NAMESPACE_KATA_GUEST,"load"), "Guest system load.", &["item"]).unwrap();
 static ref GUEST_TASKS: GaugeVec =
-    prometheus::register_gauge_vec!(format!("{}_{}",NAMESPACE_KATA_GUEST,"tasks").as_ref(), "Guest system load.", &["item"]).unwrap();
+    prometheus::register_gauge_vec!(format!("{}_{}",NAMESPACE_KATA_GUEST,"tasks"), "Guest system load.", &["item"]).unwrap();
 static ref GUEST_CPU_TIME: GaugeVec =
-    prometheus::register_gauge_vec!(format!("{}_{}",NAMESPACE_KATA_GUEST,"cpu_time").as_ref(), "Guest CPU statistics.", &["cpu","item"]).unwrap();
+    prometheus::register_gauge_vec!(format!("{}_{}",NAMESPACE_KATA_GUEST,"cpu_time"), "Guest CPU statistics.", &["cpu","item"]).unwrap();
 static ref GUEST_VM_STAT: GaugeVec =
-    prometheus::register_gauge_vec!(format!("{}_{}",NAMESPACE_KATA_GUEST,"vm_stat").as_ref(), "Guest virtual memory statistics.", &["item"]).unwrap();
+    prometheus::register_gauge_vec!(format!("{}_{}",NAMESPACE_KATA_GUEST,"vm_stat"), "Guest virtual memory statistics.", &["item"]).unwrap();
 static ref GUEST_NETDEV_STAT: GaugeVec =
-    prometheus::register_gauge_vec!(format!("{}_{}",NAMESPACE_KATA_GUEST,"netdev_stat").as_ref(), "Guest net devices statistics.", &["interface","item"]).unwrap();
+    prometheus::register_gauge_vec!(format!("{}_{}",NAMESPACE_KATA_GUEST,"netdev_stat"), "Guest net devices statistics.", &["interface","item"]).unwrap();
 static ref GUEST_DISKSTAT: GaugeVec =
-    prometheus::register_gauge_vec!(format!("{}_{}",NAMESPACE_KATA_GUEST,"diskstat").as_ref(), "Disks statistics in system.", &["disk","item"]).unwrap();
+    prometheus::register_gauge_vec!(format!("{}_{}",NAMESPACE_KATA_GUEST,"diskstat"), "Disks statistics in system.", &["disk","item"]).unwrap();
 static ref GUEST_MEMINFO: GaugeVec =
-    prometheus::register_gauge_vec!(format!("{}_{}",NAMESPACE_KATA_GUEST,"meminfo").as_ref(), "Statistics about memory usage in the system.", &["item"]).unwrap();
+    prometheus::register_gauge_vec!(format!("{}_{}",NAMESPACE_KATA_GUEST,"meminfo"), "Statistics about memory usage in the system.", &["item"]).unwrap();
}
#[instrument]
@@ -348,17 +348,17 @@ fn set_gauge_vec_cpu_time(gv: &prometheus::GaugeVec, cpu: &str, cpu_time: &procf
 gv.with_label_values(&[cpu, "idle"])
     .set(cpu_time.idle as f64);
 gv.with_label_values(&[cpu, "iowait"])
-    .set(cpu_time.iowait.unwrap_or(0.0) as f64);
+    .set(cpu_time.iowait.unwrap_or(0) as f64);
 gv.with_label_values(&[cpu, "irq"])
-    .set(cpu_time.irq.unwrap_or(0.0) as f64);
+    .set(cpu_time.irq.unwrap_or(0) as f64);
 gv.with_label_values(&[cpu, "softirq"])
-    .set(cpu_time.softirq.unwrap_or(0.0) as f64);
+    .set(cpu_time.softirq.unwrap_or(0) as f64);
 gv.with_label_values(&[cpu, "steal"])
-    .set(cpu_time.steal.unwrap_or(0.0) as f64);
+    .set(cpu_time.steal.unwrap_or(0) as f64);
 gv.with_label_values(&[cpu, "guest"])
-    .set(cpu_time.guest.unwrap_or(0.0) as f64);
+    .set(cpu_time.guest.unwrap_or(0) as f64);
 gv.with_label_values(&[cpu, "guest_nice"])
-    .set(cpu_time.guest_nice.unwrap_or(0.0) as f64);
+    .set(cpu_time.guest_nice.unwrap_or(0) as f64);
}
#[instrument]
@@ -470,7 +470,7 @@ fn set_gauge_vec_proc_status(gv: &prometheus::GaugeVec, status: &procfs::process
gv.with_label_values(&["vmswap"])
.set(status.vmswap.unwrap_or(0) as f64);
gv.with_label_values(&["hugetlbpages"])
.set(status.hugetblpages.unwrap_or(0) as f64);
.set(status.hugetlbpages.unwrap_or(0) as f64);
gv.with_label_values(&["voluntary_ctxt_switches"])
.set(status.voluntary_ctxt_switches.unwrap_or(0) as f64);
gv.with_label_values(&["nonvoluntary_ctxt_switches"])


@@ -405,14 +405,18 @@ async fn bind_watcher_storage_handler(
logger: &Logger,
storage: &Storage,
sandbox: Arc<Mutex<Sandbox>>,
cid: Option<String>,
) -> Result<()> {
let mut locked = sandbox.lock().await;
let container_id = locked.id.clone();
locked
.bind_watcher
.add_container(container_id, iter::once(storage.clone()), logger)
.await
if let Some(cid) = cid {
locked
.bind_watcher
.add_container(cid, iter::once(storage.clone()), logger)
.await
} else {
Ok(())
}
}
// mount_storage performs the mount described by the storage structure.
@@ -518,6 +522,7 @@ pub async fn add_storages(
logger: Logger,
storages: Vec<Storage>,
sandbox: Arc<Mutex<Sandbox>>,
cid: Option<String>,
) -> Result<Vec<String>> {
let mut mount_list = Vec::new();
@@ -548,7 +553,8 @@ pub async fn add_storages(
}
DRIVER_NVDIMM_TYPE => nvdimm_storage_handler(&logger, &storage, sandbox.clone()).await,
DRIVER_WATCHABLE_BIND_TYPE => {
bind_watcher_storage_handler(&logger, &storage, sandbox.clone()).await?;
bind_watcher_storage_handler(&logger, &storage, sandbox.clone(), cid.clone())
.await?;
// Don't register watch mounts, they're handled separately by the watcher.
Ok(String::new())
}


@@ -148,6 +148,10 @@ impl AgentService {
};
info!(sl!(), "receive createcontainer, spec: {:?}", &oci);
info!(
sl!(),
"receive createcontainer, storages: {:?}", &req.storages
);
// Some devices need some extra processing (the ones invoked with
// --device for instance), and that's what this call is doing. It
@@ -163,7 +167,13 @@ impl AgentService {
// After all those storages have been processed, no matter the order
// here, the agent will rely on rustjail (using the oci.Mounts
// list) to bind mount all of them inside the container.
let m = add_storages(sl!(), req.storages.to_vec(), self.sandbox.clone()).await?;
let m = add_storages(
sl!(),
req.storages.to_vec(),
self.sandbox.clone(),
Some(req.container_id.clone()),
)
.await?;
{
sandbox = self.sandbox.clone();
s = sandbox.lock().await;
@@ -573,6 +583,7 @@ impl protocols::agent_ttrpc::AgentService for AgentService {
) -> ttrpc::Result<Empty> {
trace_rpc_call!(ctx, "remove_container", req);
is_allowed!(req);
match self.do_remove_container(req).await {
Err(e) => Err(ttrpc_error(ttrpc::Code::INTERNAL, e.to_string())),
Ok(_) => Ok(Empty::new()),
@@ -1002,7 +1013,7 @@ impl protocols::agent_ttrpc::AgentService for AgentService {
.map_err(|e| ttrpc_error(ttrpc::Code::INTERNAL, e.to_string()))?;
}
match add_storages(sl!(), req.storages.to_vec(), self.sandbox.clone()).await {
match add_storages(sl!(), req.storages.to_vec(), self.sandbox.clone(), None).await {
Ok(m) => {
let sandbox = self.sandbox.clone();
let mut s = sandbox.lock().await;
@@ -1709,6 +1720,7 @@ mod tests {
fd: -1,
mh: MessageHeader::default(),
metadata: std::collections::HashMap::new(),
timeout_nano: 0,
}
}


@@ -109,11 +109,12 @@ impl Storage {
// if we are creating a directory: just create it, nothing more to do
if source_file_path.symlink_metadata()?.file_type().is_dir() {
fs::create_dir_all(source_file_path)
let dest_file_path = self.make_target_path(&source_file_path)?;
fs::create_dir_all(&dest_file_path)
.await
.with_context(|| {
format!("Unable to mkdir all for {}", source_file_path.display())
})?
.with_context(|| format!("Unable to mkdir all for {}", dest_file_path.display()))?;
return Ok(());
}
// Assume we are dealing with either a file or a symlink now:
@@ -922,7 +923,7 @@ mod tests {
.file_type()
.is_symlink());
assert_eq!(fs::read_link(&dst_symlink_file).unwrap(), src_file);
assert_eq!(fs::read_to_string(&dst_symlink_file).unwrap(), "foo")
assert_eq!(fs::read_to_string(&dst_symlink_file).unwrap(), "foo");
}
#[tokio::test]
@@ -1076,6 +1077,10 @@ mod tests {
fs::create_dir_all(source_dir.path().join("A/B")).unwrap();
fs::write(source_dir.path().join("A/B/1.txt"), "two").unwrap();
// A/C is an empty directory
let empty_dir = "A/C";
fs::create_dir_all(source_dir.path().join(empty_dir)).unwrap();
// delay 20 ms between writes to files in order to ensure filesystem timestamps are unique
thread::sleep(Duration::from_millis(20));
@@ -1091,7 +1096,10 @@ mod tests {
let logger = slog::Logger::root(slog::Discard, o!());
assert_eq!(entry.scan(&logger).await.unwrap(), 5);
assert_eq!(entry.scan(&logger).await.unwrap(), 6);
// check empty directory
assert!(dest_dir.path().join(empty_dir).exists());
// Should copy no files since nothing is changed since last check
assert_eq!(entry.scan(&logger).await.unwrap(), 0);
@@ -1112,6 +1120,12 @@ mod tests {
// Update another file
fs::write(source_dir.path().join("1.txt"), "updated").unwrap();
assert_eq!(entry.scan(&logger).await.unwrap(), 1);
// create another empty directory A/C/D
let empty_dir = "A/C/D";
fs::create_dir_all(source_dir.path().join(empty_dir)).unwrap();
assert_eq!(entry.scan(&logger).await.unwrap(), 1);
assert!(dest_dir.path().join(empty_dir).exists());
}
#[tokio::test]


@@ -17,7 +17,7 @@ or the [Kubernetes CRI][cri]) to the `virtcontainers` API.
`virtcontainers` was used as a foundational package for the [Clear Containers][cc] [runtime][cc-runtime] implementation.
[oci]: https://github.com/opencontainers/runtime-spec
[cri]: https://git.k8s.io/community/contributors/devel/sig-node/container-runtime-interface.md
[cri]: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/container-runtime-interface.md
[cc]: https://github.com/clearcontainers/
[cc-runtime]: https://github.com/clearcontainers/runtime/


@@ -27,7 +27,6 @@ import (
"github.com/kata-containers/kata-containers/src/runtime/pkg/katautils/katatrace"
pbTypes "github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/agent/protocols"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/rootless"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/uuid"
"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/utils"
)
@@ -440,12 +439,7 @@ func xConnectVMNetwork(ctx context.Context, endpoint Endpoint, h Hypervisor) err
queues = int(h.HypervisorConfig().NumVCPUs)
}
var disableVhostNet bool
if rootless.IsRootless() {
disableVhostNet = true
} else {
disableVhostNet = h.HypervisorConfig().DisableVhostNet
}
disableVhostNet := h.HypervisorConfig().DisableVhostNet
if netPair.NetInterworkingModel == NetXConnectDefaultModel {
netPair.NetInterworkingModel = DefaultNetInterworkingModel


@@ -13,7 +13,7 @@ YQ := $(shell command -v yq 2> /dev/null)
generate-client-code: clean-generated-code
docker run --rm \
--user $$(id -u):$$(id -g) \
-v $${PWD}:/local openapitools/openapi-generator-cli:v5.2.1 generate \
-v $${PWD}:/local openapitools/openapi-generator-cli:v5.3.0 generate \
-i /local/cloud-hypervisor.yaml \
-g go \
-o /local/client


@@ -9,6 +9,7 @@ configuration.go
docs/BalloonConfig.md
docs/CmdLineConfig.md
docs/ConsoleConfig.md
docs/CpuAffinity.md
docs/CpuTopology.md
docs/CpusConfig.md
docs/DefaultApi.md
@@ -45,6 +46,7 @@ go.sum
model_balloon_config.go
model_cmd_line_config.go
model_console_config.go
model_cpu_affinity.go
model_cpu_topology.go
model_cpus_config.go
model_device_config.go


@@ -58,7 +58,7 @@ Note, enum values are always validated and all unused variables are silently ign
### URLs Configuration per Operation
Each operation can use different server URL defined using `OperationServers` map in the `Configuration`.
An operation is uniquely identifield by `"{classname}Service.{nickname}"` string.
An operation is uniquely identified by `"{classname}Service.{nickname}"` string.
Similar rules for overriding default operation server index and variables applies by using `sw.ContextOperationServerIndices` and `sw.ContextOperationServerVariables` context maps.
```
@@ -108,6 +108,7 @@ Class | Method | HTTP request | Description
- [BalloonConfig](docs/BalloonConfig.md)
- [CmdLineConfig](docs/CmdLineConfig.md)
- [ConsoleConfig](docs/ConsoleConfig.md)
- [CpuAffinity](docs/CpuAffinity.md)
- [CpuTopology](docs/CpuTopology.md)
- [CpusConfig](docs/CpusConfig.md)
- [DeviceConfig](docs/DeviceConfig.md)


@@ -344,7 +344,7 @@ components:
VmInfo:
description: Virtual Machine information
example:
memory_actual_size: 3
memory_actual_size: 7
state: Created
config:
console:
@@ -352,16 +352,16 @@ components:
file: file
iommu: false
balloon:
size: 9
size: 1
deflate_on_oom: false
memory:
hugepages: false
shared: false
hugepage_size: 4
hugepage_size: 1
prefault: false
mergeable: false
size: 9
hotplugged_size: 2
size: 2
hotplugged_size: 7
zones:
- hugepages: false
shared: false
@@ -369,30 +369,31 @@ components:
prefault: false
mergeable: false
file: file
size: 7
hotplugged_size: 6
host_numa_node: 1
size: 1
hotplugged_size: 1
host_numa_node: 6
id: id
hotplug_size: 1
hotplug_size: 7
- hugepages: false
shared: false
hugepage_size: 1
prefault: false
mergeable: false
file: file
size: 7
hotplugged_size: 6
host_numa_node: 1
size: 1
hotplugged_size: 1
host_numa_node: 6
id: id
hotplug_size: 1
hotplug_size: 3
hotplug_size: 7
hotplug_size: 4
hotplug_method: Acpi
disks:
- path: path
num_queues: 7
- pci_segment: 8
path: path
num_queues: 4
readonly: false
iommu: false
queue_size: 1
queue_size: 5
vhost_socket: vhost_socket
vhost_user: false
direct: false
@@ -407,11 +408,12 @@ components:
one_time_burst: 0
refill_time: 0
id: id
- path: path
num_queues: 7
- pci_segment: 8
path: path
num_queues: 4
readonly: false
iommu: false
queue_size: 1
queue_size: 5
vhost_socket: vhost_socket
vhost_user: false
direct: false
@@ -435,24 +437,35 @@ components:
max_phys_bits: 7
boot_vcpus: 1
max_vcpus: 1
affinity:
- vcpu: 9
host_cpus:
- 3
- 3
- vcpu: 9
host_cpus:
- 3
- 3
devices:
- path: path
- pci_segment: 3
path: path
iommu: false
id: id
- path: path
- pci_segment: 3
path: path
iommu: false
id: id
kernel:
path: path
numa:
- distances:
- distance: 3
destination: 6
- distance: 3
destination: 6
- distance: 8
destination: 4
- distance: 8
destination: 4
cpus:
- 5
- 5
- 0
- 0
sgx_epc_sections:
- sgx_epc_sections
- sgx_epc_sections
@@ -461,13 +474,13 @@ components:
- memory_zones
guest_numa_id: 6
- distances:
- distance: 3
destination: 6
- distance: 3
destination: 6
- distance: 8
destination: 4
- distance: 8
destination: 4
cpus:
- 5
- 5
- 0
- 0
sgx_epc_sections:
- sgx_epc_sections
- sgx_epc_sections
@@ -480,41 +493,46 @@ components:
src: /dev/urandom
sgx_epc:
- prefault: false
size: 6
size: 7
id: id
- prefault: false
size: 6
size: 7
id: id
fs:
- num_queues: 6
queue_size: 3
- pci_segment: 5
num_queues: 2
queue_size: 6
cache_size: 6
dax: true
tag: tag
socket: socket
id: id
- num_queues: 6
queue_size: 3
- pci_segment: 5
num_queues: 2
queue_size: 6
cache_size: 6
dax: true
tag: tag
socket: socket
id: id
vsock:
pci_segment: 0
iommu: false
socket: socket
id: id
cid: 3
pmem:
- mergeable: false
- pci_segment: 3
mergeable: false
file: file
size: 1
size: 6
iommu: false
id: id
discard_writes: false
- mergeable: false
- pci_segment: 3
mergeable: false
file: file
size: 1
size: 6
iommu: false
id: id
discard_writes: false
@@ -543,14 +561,15 @@ components:
one_time_burst: 0
refill_time: 0
mac: mac
pci_segment: 6
vhost_mode: Client
iommu: false
vhost_socket: vhost_socket
vhost_user: false
id: id
fd:
- 8
- 8
- 3
- 3
mask: 255.255.255.0
- tap: tap
num_queues: 9
@@ -566,21 +585,22 @@ components:
one_time_burst: 0
refill_time: 0
mac: mac
pci_segment: 6
vhost_mode: Client
iommu: false
vhost_socket: vhost_socket
vhost_user: false
id: id
fd:
- 8
- 8
- 3
- 3
mask: 255.255.255.0
device_tree:
key:
children:
- children
- children
pci_bdf: 7
pci_bdf: 3
resources:
- '{}'
- '{}'
@@ -611,7 +631,7 @@ components:
children:
- children
- children
pci_bdf: 7
pci_bdf: 3
resources:
- '{}'
- '{}'
@@ -660,16 +680,16 @@ components:
file: file
iommu: false
balloon:
size: 9
size: 1
deflate_on_oom: false
memory:
hugepages: false
shared: false
hugepage_size: 4
hugepage_size: 1
prefault: false
mergeable: false
size: 9
hotplugged_size: 2
size: 2
hotplugged_size: 7
zones:
- hugepages: false
shared: false
@@ -677,30 +697,31 @@ components:
prefault: false
mergeable: false
file: file
size: 7
hotplugged_size: 6
host_numa_node: 1
size: 1
hotplugged_size: 1
host_numa_node: 6
id: id
hotplug_size: 1
hotplug_size: 7
- hugepages: false
shared: false
hugepage_size: 1
prefault: false
mergeable: false
file: file
size: 7
hotplugged_size: 6
host_numa_node: 1
size: 1
hotplugged_size: 1
host_numa_node: 6
id: id
hotplug_size: 1
hotplug_size: 3
hotplug_size: 7
hotplug_size: 4
hotplug_method: Acpi
disks:
- path: path
num_queues: 7
- pci_segment: 8
path: path
num_queues: 4
readonly: false
iommu: false
queue_size: 1
queue_size: 5
vhost_socket: vhost_socket
vhost_user: false
direct: false
@@ -715,11 +736,12 @@ components:
one_time_burst: 0
refill_time: 0
id: id
- path: path
num_queues: 7
- pci_segment: 8
path: path
num_queues: 4
readonly: false
iommu: false
queue_size: 1
queue_size: 5
vhost_socket: vhost_socket
vhost_user: false
direct: false
@@ -743,24 +765,35 @@ components:
max_phys_bits: 7
boot_vcpus: 1
max_vcpus: 1
affinity:
- vcpu: 9
host_cpus:
- 3
- 3
- vcpu: 9
host_cpus:
- 3
- 3
devices:
- path: path
- pci_segment: 3
path: path
iommu: false
id: id
- path: path
- pci_segment: 3
path: path
iommu: false
id: id
kernel:
path: path
numa:
- distances:
- distance: 3
destination: 6
- distance: 3
destination: 6
- distance: 8
destination: 4
- distance: 8
destination: 4
cpus:
- 5
- 5
- 0
- 0
sgx_epc_sections:
- sgx_epc_sections
- sgx_epc_sections
@@ -769,13 +802,13 @@ components:
- memory_zones
guest_numa_id: 6
- distances:
- distance: 3
destination: 6
- distance: 3
destination: 6
- distance: 8
destination: 4
- distance: 8
destination: 4
cpus:
- 5
- 5
- 0
- 0
sgx_epc_sections:
- sgx_epc_sections
- sgx_epc_sections
@@ -788,41 +821,46 @@ components:
src: /dev/urandom
sgx_epc:
- prefault: false
size: 6
size: 7
id: id
- prefault: false
size: 6
size: 7
id: id
fs:
- num_queues: 6
queue_size: 3
- pci_segment: 5
num_queues: 2
queue_size: 6
cache_size: 6
dax: true
tag: tag
socket: socket
id: id
- num_queues: 6
queue_size: 3
- pci_segment: 5
num_queues: 2
queue_size: 6
cache_size: 6
dax: true
tag: tag
socket: socket
id: id
vsock:
pci_segment: 0
iommu: false
socket: socket
id: id
cid: 3
pmem:
- mergeable: false
- pci_segment: 3
mergeable: false
file: file
size: 1
size: 6
iommu: false
id: id
discard_writes: false
- mergeable: false
- pci_segment: 3
mergeable: false
file: file
size: 1
size: 6
iommu: false
id: id
discard_writes: false
@@ -851,14 +889,15 @@ components:
one_time_burst: 0
refill_time: 0
mac: mac
pci_segment: 6
vhost_mode: Client
iommu: false
vhost_socket: vhost_socket
vhost_user: false
id: id
fd:
- 8
- 8
- 3
- 3
mask: 255.255.255.0
- tap: tap
num_queues: 9
@@ -874,14 +913,15 @@ components:
one_time_burst: 0
refill_time: 0
mac: mac
pci_segment: 6
vhost_mode: Client
iommu: false
vhost_socket: vhost_socket
vhost_user: false
id: id
fd:
- 8
- 8
- 3
- 3
mask: 255.255.255.0
properties:
cpus:
@@ -941,6 +981,20 @@ components:
required:
- kernel
type: object
CpuAffinity:
example:
vcpu: 9
host_cpus:
- 3
- 3
properties:
vcpu:
type: integer
host_cpus:
items:
type: integer
type: array
type: object
CpuTopology:
example:
dies_per_package: 5
@@ -967,6 +1021,15 @@ components:
max_phys_bits: 7
boot_vcpus: 1
max_vcpus: 1
affinity:
- vcpu: 9
host_cpus:
- 3
- 3
- vcpu: 9
host_cpus:
- 3
- 3
properties:
boot_vcpus:
default: 1
@@ -980,6 +1043,10 @@ components:
$ref: '#/components/schemas/CpuTopology'
max_phys_bits:
type: integer
affinity:
items:
$ref: '#/components/schemas/CpuAffinity'
type: array
required:
- boot_vcpus
- max_vcpus
@@ -992,11 +1059,11 @@ components:
prefault: false
mergeable: false
file: file
size: 7
hotplugged_size: 6
host_numa_node: 1
size: 1
hotplugged_size: 1
host_numa_node: 6
id: id
hotplug_size: 1
hotplug_size: 7
properties:
id:
type: string
@@ -1037,11 +1104,11 @@ components:
example:
hugepages: false
shared: false
hugepage_size: 4
hugepage_size: 1
prefault: false
mergeable: false
size: 9
hotplugged_size: 2
size: 2
hotplugged_size: 7
zones:
- hugepages: false
shared: false
@@ -1049,23 +1116,23 @@ components:
prefault: false
mergeable: false
file: file
size: 7
hotplugged_size: 6
host_numa_node: 1
size: 1
hotplugged_size: 1
host_numa_node: 6
id: id
hotplug_size: 1
hotplug_size: 7
- hugepages: false
shared: false
hugepage_size: 1
prefault: false
mergeable: false
file: file
size: 7
hotplugged_size: 6
host_numa_node: 1
size: 1
hotplugged_size: 1
host_numa_node: 6
id: id
hotplug_size: 1
hotplug_size: 3
hotplug_size: 7
hotplug_size: 4
hotplug_method: Acpi
properties:
size:
@@ -1184,11 +1251,12 @@ components:
type: object
DiskConfig:
example:
pci_segment: 8
path: path
num_queues: 7
num_queues: 4
readonly: false
iommu: false
queue_size: 1
queue_size: 5
vhost_socket: vhost_socket
vhost_user: false
direct: false
@@ -1231,6 +1299,9 @@ components:
type: boolean
rate_limiter_config:
$ref: '#/components/schemas/RateLimiterConfig'
pci_segment:
format: int16
type: integer
id:
type: string
required:
@@ -1252,14 +1323,15 @@ components:
one_time_burst: 0
refill_time: 0
mac: mac
pci_segment: 6
vhost_mode: Client
iommu: false
vhost_socket: vhost_socket
vhost_user: false
id: id
fd:
- 8
- 8
- 3
- 3
mask: 255.255.255.0
properties:
tap:
@@ -1297,6 +1369,9 @@ components:
format: int32
type: integer
type: array
pci_segment:
format: int16
type: integer
rate_limiter_config:
$ref: '#/components/schemas/RateLimiterConfig'
type: object
@@ -1316,7 +1391,7 @@ components:
type: object
BalloonConfig:
example:
size: 9
size: 1
deflate_on_oom: false
properties:
size:
@@ -1332,8 +1407,9 @@ components:
type: object
FsConfig:
example:
num_queues: 6
queue_size: 3
pci_segment: 5
num_queues: 2
queue_size: 6
cache_size: 6
dax: true
tag: tag
@@ -1356,6 +1432,9 @@ components:
cache_size:
format: int64
type: integer
pci_segment:
format: int16
type: integer
id:
type: string
required:
@@ -1368,9 +1447,10 @@ components:
type: object
PmemConfig:
example:
pci_segment: 3
mergeable: false
file: file
size: 1
size: 6
iommu: false
id: id
discard_writes: false
@@ -1389,6 +1469,9 @@ components:
discard_writes:
default: false
type: boolean
pci_segment:
format: int16
type: integer
id:
type: string
required:
@@ -1418,6 +1501,7 @@ components:
type: object
DeviceConfig:
example:
pci_segment: 3
path: path
iommu: false
id: id
@@ -1427,6 +1511,9 @@ components:
iommu:
default: false
type: boolean
pci_segment:
format: int16
type: integer
id:
type: string
required:
@@ -1434,6 +1521,7 @@ components:
type: object
VsockConfig:
example:
pci_segment: 0
iommu: false
socket: socket
id: id
@@ -1450,6 +1538,9 @@ components:
iommu:
default: false
type: boolean
pci_segment:
format: int16
type: integer
id:
type: string
required:
@@ -1459,7 +1550,7 @@ components:
SgxEpcConfig:
example:
prefault: false
size: 6
size: 7
id: id
properties:
id:
@@ -1476,8 +1567,8 @@ components:
type: object
NumaDistance:
example:
distance: 3
destination: 6
distance: 8
destination: 4
properties:
destination:
format: int32
@@ -1492,13 +1583,13 @@ components:
NumaConfig:
example:
distances:
- distance: 3
destination: 6
- distance: 3
destination: 6
- distance: 8
destination: 4
- distance: 8
destination: 4
cpus:
- 5
- 5
- 0
- 0
sgx_epc_sections:
- sgx_epc_sections
- sgx_epc_sections


@@ -366,6 +366,9 @@ func (c *APIClient) decode(v interface{}, b []byte, contentType string) (err err
return
}
_, err = (*f).Write(b)
if err != nil {
return
}
_, err = (*f).Seek(0, io.SeekStart)
return
}


@@ -0,0 +1,82 @@
# CpuAffinity
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**Vcpu** | Pointer to **int32** | | [optional]
**HostCpus** | Pointer to **[]int32** | | [optional]
## Methods
### NewCpuAffinity
`func NewCpuAffinity() *CpuAffinity`
NewCpuAffinity instantiates a new CpuAffinity object
This constructor will assign default values to properties that have it defined,
and makes sure properties required by API are set, but the set of arguments
will change when the set of required properties is changed
### NewCpuAffinityWithDefaults
`func NewCpuAffinityWithDefaults() *CpuAffinity`
NewCpuAffinityWithDefaults instantiates a new CpuAffinity object
This constructor will only assign default values to properties that have it defined,
but it doesn't guarantee that properties required by API are set
### GetVcpu
`func (o *CpuAffinity) GetVcpu() int32`
GetVcpu returns the Vcpu field if non-nil, zero value otherwise.
### GetVcpuOk
`func (o *CpuAffinity) GetVcpuOk() (*int32, bool)`
GetVcpuOk returns a tuple with the Vcpu field if it's non-nil, zero value otherwise
and a boolean to check if the value has been set.
### SetVcpu
`func (o *CpuAffinity) SetVcpu(v int32)`
SetVcpu sets Vcpu field to given value.
### HasVcpu
`func (o *CpuAffinity) HasVcpu() bool`
HasVcpu returns a boolean if a field has been set.
### GetHostCpus
`func (o *CpuAffinity) GetHostCpus() []int32`
GetHostCpus returns the HostCpus field if non-nil, zero value otherwise.
### GetHostCpusOk
`func (o *CpuAffinity) GetHostCpusOk() (*[]int32, bool)`
GetHostCpusOk returns a tuple with the HostCpus field if it's non-nil, zero value otherwise
and a boolean to check if the value has been set.
### SetHostCpus
`func (o *CpuAffinity) SetHostCpus(v []int32)`
SetHostCpus sets HostCpus field to given value.
### HasHostCpus
`func (o *CpuAffinity) HasHostCpus() bool`
HasHostCpus returns a boolean if a field has been set.
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)


@@ -8,6 +8,7 @@ Name | Type | Description | Notes
**MaxVcpus** | **int32** | | [default to 1]
**Topology** | Pointer to [**CpuTopology**](CpuTopology.md) | | [optional]
**MaxPhysBits** | Pointer to **int32** | | [optional]
**Affinity** | Pointer to [**[]CpuAffinity**](CpuAffinity.md) | | [optional]
## Methods
@@ -118,6 +119,31 @@ SetMaxPhysBits sets MaxPhysBits field to given value.
HasMaxPhysBits returns a boolean if a field has been set.
### GetAffinity
`func (o *CpusConfig) GetAffinity() []CpuAffinity`
GetAffinity returns the Affinity field if non-nil, zero value otherwise.
### GetAffinityOk
`func (o *CpusConfig) GetAffinityOk() (*[]CpuAffinity, bool)`
GetAffinityOk returns a tuple with the Affinity field if it's non-nil, zero value otherwise
and a boolean to check if the value has been set.
### SetAffinity
`func (o *CpusConfig) SetAffinity(v []CpuAffinity)`
SetAffinity sets Affinity field to given value.
### HasAffinity
`func (o *CpusConfig) HasAffinity() bool`
HasAffinity returns a boolean if a field has been set.
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)


@@ -6,6 +6,7 @@ Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**Path** | **string** | |
**Iommu** | Pointer to **bool** | | [optional] [default to false]
**PciSegment** | Pointer to **int32** | | [optional]
**Id** | Pointer to **string** | | [optional]
## Methods
@@ -72,6 +73,31 @@ SetIommu sets Iommu field to given value.
HasIommu returns a boolean if a field has been set.
### GetPciSegment
`func (o *DeviceConfig) GetPciSegment() int32`
GetPciSegment returns the PciSegment field if non-nil, zero value otherwise.
### GetPciSegmentOk
`func (o *DeviceConfig) GetPciSegmentOk() (*int32, bool)`
GetPciSegmentOk returns a tuple with the PciSegment field if it's non-nil, zero value otherwise
and a boolean to check if the value has been set.
### SetPciSegment
`func (o *DeviceConfig) SetPciSegment(v int32)`
SetPciSegment sets PciSegment field to given value.
### HasPciSegment
`func (o *DeviceConfig) HasPciSegment() bool`
HasPciSegment returns a boolean if a field has been set.
### GetId
`func (o *DeviceConfig) GetId() string`


@@ -14,6 +14,7 @@ Name | Type | Description | Notes
**VhostSocket** | Pointer to **string** | | [optional]
**PollQueue** | Pointer to **bool** | | [optional] [default to true]
**RateLimiterConfig** | Pointer to [**RateLimiterConfig**](RateLimiterConfig.md) | | [optional]
**PciSegment** | Pointer to **int32** | | [optional]
**Id** | Pointer to **string** | | [optional]
## Methods
@@ -280,6 +281,31 @@ SetRateLimiterConfig sets RateLimiterConfig field to given value.
HasRateLimiterConfig returns a boolean if a field has been set.
### GetPciSegment
`func (o *DiskConfig) GetPciSegment() int32`
GetPciSegment returns the PciSegment field if non-nil, zero value otherwise.
### GetPciSegmentOk
`func (o *DiskConfig) GetPciSegmentOk() (*int32, bool)`
GetPciSegmentOk returns a tuple with the PciSegment field if it's non-nil, zero value otherwise
and a boolean to check if the value has been set.
### SetPciSegment
`func (o *DiskConfig) SetPciSegment(v int32)`
SetPciSegment sets PciSegment field to given value.
### HasPciSegment
`func (o *DiskConfig) HasPciSegment() bool`
HasPciSegment returns a boolean if a field has been set.
### GetId
`func (o *DiskConfig) GetId() string`


@@ -10,6 +10,7 @@ Name | Type | Description | Notes
**QueueSize** | **int32** | | [default to 1024]
**Dax** | **bool** | | [default to true]
**CacheSize** | **int64** | |
**PciSegment** | Pointer to **int32** | | [optional]
**Id** | Pointer to **string** | | [optional]
## Methods
@@ -151,6 +152,31 @@ and a boolean to check if the value has been set.
SetCacheSize sets CacheSize field to given value.
### GetPciSegment
`func (o *FsConfig) GetPciSegment() int32`
GetPciSegment returns the PciSegment field if non-nil, zero value otherwise.
### GetPciSegmentOk
`func (o *FsConfig) GetPciSegmentOk() (*int32, bool)`
GetPciSegmentOk returns a tuple with the PciSegment field if it's non-nil, zero value otherwise
and a boolean to check if the value has been set.
### SetPciSegment
`func (o *FsConfig) SetPciSegment(v int32)`
SetPciSegment sets PciSegment field to given value.
### HasPciSegment
`func (o *FsConfig) HasPciSegment() bool`
HasPciSegment returns a boolean if a field has been set.
### GetId
`func (o *FsConfig) GetId() string`


@@ -16,6 +16,7 @@ Name | Type | Description | Notes
**VhostMode** | Pointer to **string** | | [optional] [default to "Client"]
**Id** | Pointer to **string** | | [optional]
**Fd** | Pointer to **[]int32** | | [optional]
**PciSegment** | Pointer to **int32** | | [optional]
**RateLimiterConfig** | Pointer to [**RateLimiterConfig**](RateLimiterConfig.md) | | [optional]
## Methods
@@ -337,6 +338,31 @@ SetFd sets Fd field to given value.
HasFd returns a boolean if a field has been set.
### GetPciSegment
`func (o *NetConfig) GetPciSegment() int32`
GetPciSegment returns the PciSegment field if non-nil, zero value otherwise.
### GetPciSegmentOk
`func (o *NetConfig) GetPciSegmentOk() (*int32, bool)`
GetPciSegmentOk returns a tuple with the PciSegment field if it's non-nil, zero value otherwise
and a boolean to check if the value has been set.
### SetPciSegment
`func (o *NetConfig) SetPciSegment(v int32)`
SetPciSegment sets PciSegment field to given value.
### HasPciSegment
`func (o *NetConfig) HasPciSegment() bool`
HasPciSegment returns a boolean if a field has been set.
### GetRateLimiterConfig
`func (o *NetConfig) GetRateLimiterConfig() RateLimiterConfig`


@@ -9,6 +9,7 @@ Name | Type | Description | Notes
**Iommu** | Pointer to **bool** | | [optional] [default to false]
**Mergeable** | Pointer to **bool** | | [optional] [default to false]
**DiscardWrites** | Pointer to **bool** | | [optional] [default to false]
**PciSegment** | Pointer to **int32** | | [optional]
**Id** | Pointer to **string** | | [optional]
## Methods
@@ -150,6 +151,31 @@ SetDiscardWrites sets DiscardWrites field to given value.
HasDiscardWrites returns a boolean if a field has been set.
### GetPciSegment
`func (o *PmemConfig) GetPciSegment() int32`
GetPciSegment returns the PciSegment field if non-nil, zero value otherwise.
### GetPciSegmentOk
`func (o *PmemConfig) GetPciSegmentOk() (*int32, bool)`
GetPciSegmentOk returns a tuple with the PciSegment field if it's non-nil, zero value otherwise
and a boolean to check if the value has been set.
### SetPciSegment
`func (o *PmemConfig) SetPciSegment(v int32)`
SetPciSegment sets PciSegment field to given value.
### HasPciSegment
`func (o *PmemConfig) HasPciSegment() bool`
HasPciSegment returns a boolean if a field has been set.
### GetId
`func (o *PmemConfig) GetId() string`


@@ -7,6 +7,7 @@ Name | Type | Description | Notes
**Cid** | **int64** | Guest Vsock CID |
**Socket** | **string** | Path to UNIX domain socket, used to proxy vsock connections. |
**Iommu** | Pointer to **bool** | | [optional] [default to false]
**PciSegment** | Pointer to **int32** | | [optional]
**Id** | Pointer to **string** | | [optional]
## Methods
@@ -93,6 +94,31 @@ SetIommu sets Iommu field to given value.
HasIommu returns a boolean if a field has been set.
### GetPciSegment
`func (o *VsockConfig) GetPciSegment() int32`
GetPciSegment returns the PciSegment field if non-nil, zero value otherwise.
### GetPciSegmentOk
`func (o *VsockConfig) GetPciSegmentOk() (*int32, bool)`
GetPciSegmentOk returns a tuple with the PciSegment field if it's non-nil, zero value otherwise
and a boolean to check if the value has been set.
### SetPciSegment
`func (o *VsockConfig) SetPciSegment(v int32)`
SetPciSegment sets PciSegment field to given value.
### HasPciSegment
`func (o *VsockConfig) HasPciSegment() bool`
HasPciSegment returns a boolean if a field has been set.
### GetId
`func (o *VsockConfig) GetId() string`


@@ -1,7 +1,7 @@
#!/bin/sh
# ref: https://help.github.com/articles/adding-an-existing-project-to-github-using-the-command-line/
#
# Usage example: /bin/sh ./git_push.sh wing328 openapi-pestore-perl "minor update" "gitlab.com"
# Usage example: /bin/sh ./git_push.sh wing328 openapi-petstore-perl "minor update" "gitlab.com"
git_user_id=$1
git_repo_id=$2
@@ -38,14 +38,14 @@ git add .
git commit -m "$release_note"
# Sets the new remote
git_remote=`git remote`
git_remote=$(git remote)
if [ "$git_remote" = "" ]; then # git remote not defined
if [ "$GIT_TOKEN" = "" ]; then
echo "[INFO] \$GIT_TOKEN (environment variable) is not set. Using the git credential in your environment."
git remote add origin https://${git_host}/${git_user_id}/${git_repo_id}.git
else
git remote add origin https://${git_user_id}:${GIT_TOKEN}@${git_host}/${git_user_id}/${git_repo_id}.git
git remote add origin https://${git_user_id}:"${GIT_TOKEN}"@${git_host}/${git_user_id}/${git_repo_id}.git
fi
fi
@@ -55,4 +55,3 @@ git pull origin master
# Pushes (Forces) the changes in the local repository up to the remote repository
echo "Git pushing to https://${git_host}/${git_user_id}/${git_repo_id}.git"
git push origin master 2>&1 | grep -v 'To https'


@@ -0,0 +1,149 @@
/*
Cloud Hypervisor API
Local HTTP based API for managing and inspecting a cloud-hypervisor virtual machine.
API version: 0.3.0
*/
// Code generated by OpenAPI Generator (https://openapi-generator.tech); DO NOT EDIT.
package openapi
import (
"encoding/json"
)
// CpuAffinity struct for CpuAffinity
type CpuAffinity struct {
Vcpu *int32 `json:"vcpu,omitempty"`
HostCpus *[]int32 `json:"host_cpus,omitempty"`
}
// NewCpuAffinity instantiates a new CpuAffinity object
// This constructor will assign default values to properties that have it defined,
// and makes sure properties required by API are set, but the set of arguments
// will change when the set of required properties is changed
func NewCpuAffinity() *CpuAffinity {
this := CpuAffinity{}
return &this
}
// NewCpuAffinityWithDefaults instantiates a new CpuAffinity object
// This constructor will only assign default values to properties that have it defined,
// but it doesn't guarantee that properties required by API are set
func NewCpuAffinityWithDefaults() *CpuAffinity {
this := CpuAffinity{}
return &this
}
// GetVcpu returns the Vcpu field value if set, zero value otherwise.
func (o *CpuAffinity) GetVcpu() int32 {
if o == nil || o.Vcpu == nil {
var ret int32
return ret
}
return *o.Vcpu
}
// GetVcpuOk returns a tuple with the Vcpu field value if set, nil otherwise
// and a boolean to check if the value has been set.
func (o *CpuAffinity) GetVcpuOk() (*int32, bool) {
if o == nil || o.Vcpu == nil {
return nil, false
}
return o.Vcpu, true
}
// HasVcpu returns a boolean if a field has been set.
func (o *CpuAffinity) HasVcpu() bool {
if o != nil && o.Vcpu != nil {
return true
}
return false
}
// SetVcpu gets a reference to the given int32 and assigns it to the Vcpu field.
func (o *CpuAffinity) SetVcpu(v int32) {
o.Vcpu = &v
}
// GetHostCpus returns the HostCpus field value if set, zero value otherwise.
func (o *CpuAffinity) GetHostCpus() []int32 {
if o == nil || o.HostCpus == nil {
var ret []int32
return ret
}
return *o.HostCpus
}
// GetHostCpusOk returns a tuple with the HostCpus field value if set, nil otherwise
// and a boolean to check if the value has been set.
func (o *CpuAffinity) GetHostCpusOk() (*[]int32, bool) {
if o == nil || o.HostCpus == nil {
return nil, false
}
return o.HostCpus, true
}
// HasHostCpus returns a boolean if a field has been set.
func (o *CpuAffinity) HasHostCpus() bool {
if o != nil && o.HostCpus != nil {
return true
}
return false
}
// SetHostCpus gets a reference to the given []int32 and assigns it to the HostCpus field.
func (o *CpuAffinity) SetHostCpus(v []int32) {
o.HostCpus = &v
}
func (o CpuAffinity) MarshalJSON() ([]byte, error) {
toSerialize := map[string]interface{}{}
if o.Vcpu != nil {
toSerialize["vcpu"] = o.Vcpu
}
if o.HostCpus != nil {
toSerialize["host_cpus"] = o.HostCpus
}
return json.Marshal(toSerialize)
}
type NullableCpuAffinity struct {
value *CpuAffinity
isSet bool
}
func (v NullableCpuAffinity) Get() *CpuAffinity {
return v.value
}
func (v *NullableCpuAffinity) Set(val *CpuAffinity) {
v.value = val
v.isSet = true
}
func (v NullableCpuAffinity) IsSet() bool {
return v.isSet
}
func (v *NullableCpuAffinity) Unset() {
v.value = nil
v.isSet = false
}
func NewNullableCpuAffinity(val *CpuAffinity) *NullableCpuAffinity {
return &NullableCpuAffinity{value: val, isSet: true}
}
func (v NullableCpuAffinity) MarshalJSON() ([]byte, error) {
return json.Marshal(v.value)
}
func (v *NullableCpuAffinity) UnmarshalJSON(src []byte) error {
v.isSet = true
return json.Unmarshal(src, &v.value)
}

View File

@@ -16,10 +16,11 @@ import (
// CpusConfig struct for CpusConfig
type CpusConfig struct {
BootVcpus int32 `json:"boot_vcpus"`
MaxVcpus int32 `json:"max_vcpus"`
Topology *CpuTopology `json:"topology,omitempty"`
MaxPhysBits *int32 `json:"max_phys_bits,omitempty"`
BootVcpus int32 `json:"boot_vcpus"`
MaxVcpus int32 `json:"max_vcpus"`
Topology *CpuTopology `json:"topology,omitempty"`
MaxPhysBits *int32 `json:"max_phys_bits,omitempty"`
Affinity *[]CpuAffinity `json:"affinity,omitempty"`
}
// NewCpusConfig instantiates a new CpusConfig object
@@ -157,6 +158,38 @@ func (o *CpusConfig) SetMaxPhysBits(v int32) {
o.MaxPhysBits = &v
}
// GetAffinity returns the Affinity field value if set, zero value otherwise.
func (o *CpusConfig) GetAffinity() []CpuAffinity {
if o == nil || o.Affinity == nil {
var ret []CpuAffinity
return ret
}
return *o.Affinity
}
// GetAffinityOk returns a tuple with the Affinity field value if set, nil otherwise
// and a boolean to check if the value has been set.
func (o *CpusConfig) GetAffinityOk() (*[]CpuAffinity, bool) {
if o == nil || o.Affinity == nil {
return nil, false
}
return o.Affinity, true
}
// HasAffinity returns a boolean if a field has been set.
func (o *CpusConfig) HasAffinity() bool {
if o != nil && o.Affinity != nil {
return true
}
return false
}
// SetAffinity gets a reference to the given []CpuAffinity and assigns it to the Affinity field.
func (o *CpusConfig) SetAffinity(v []CpuAffinity) {
o.Affinity = &v
}
func (o CpusConfig) MarshalJSON() ([]byte, error) {
toSerialize := map[string]interface{}{}
if true {
@@ -171,6 +204,9 @@ func (o CpusConfig) MarshalJSON() ([]byte, error) {
if o.MaxPhysBits != nil {
toSerialize["max_phys_bits"] = o.MaxPhysBits
}
if o.Affinity != nil {
toSerialize["affinity"] = o.Affinity
}
return json.Marshal(toSerialize)
}

View File

@@ -16,9 +16,10 @@ import (
// DeviceConfig struct for DeviceConfig
type DeviceConfig struct {
Path string `json:"path"`
Iommu *bool `json:"iommu,omitempty"`
Id *string `json:"id,omitempty"`
Path string `json:"path"`
Iommu *bool `json:"iommu,omitempty"`
PciSegment *int32 `json:"pci_segment,omitempty"`
Id *string `json:"id,omitempty"`
}
// NewDeviceConfig instantiates a new DeviceConfig object
@@ -99,6 +100,38 @@ func (o *DeviceConfig) SetIommu(v bool) {
o.Iommu = &v
}
// GetPciSegment returns the PciSegment field value if set, zero value otherwise.
func (o *DeviceConfig) GetPciSegment() int32 {
if o == nil || o.PciSegment == nil {
var ret int32
return ret
}
return *o.PciSegment
}
// GetPciSegmentOk returns a tuple with the PciSegment field value if set, nil otherwise
// and a boolean to check if the value has been set.
func (o *DeviceConfig) GetPciSegmentOk() (*int32, bool) {
if o == nil || o.PciSegment == nil {
return nil, false
}
return o.PciSegment, true
}
// HasPciSegment returns a boolean if a field has been set.
func (o *DeviceConfig) HasPciSegment() bool {
if o != nil && o.PciSegment != nil {
return true
}
return false
}
// SetPciSegment gets a reference to the given int32 and assigns it to the PciSegment field.
func (o *DeviceConfig) SetPciSegment(v int32) {
o.PciSegment = &v
}
// GetId returns the Id field value if set, zero value otherwise.
func (o *DeviceConfig) GetId() string {
if o == nil || o.Id == nil {
@@ -139,6 +172,9 @@ func (o DeviceConfig) MarshalJSON() ([]byte, error) {
if o.Iommu != nil {
toSerialize["iommu"] = o.Iommu
}
if o.PciSegment != nil {
toSerialize["pci_segment"] = o.PciSegment
}
if o.Id != nil {
toSerialize["id"] = o.Id
}

View File

@@ -26,6 +26,7 @@ type DiskConfig struct {
VhostSocket *string `json:"vhost_socket,omitempty"`
PollQueue *bool `json:"poll_queue,omitempty"`
RateLimiterConfig *RateLimiterConfig `json:"rate_limiter_config,omitempty"`
PciSegment *int32 `json:"pci_segment,omitempty"`
Id *string `json:"id,omitempty"`
}
@@ -387,6 +388,38 @@ func (o *DiskConfig) SetRateLimiterConfig(v RateLimiterConfig) {
o.RateLimiterConfig = &v
}
// GetPciSegment returns the PciSegment field value if set, zero value otherwise.
func (o *DiskConfig) GetPciSegment() int32 {
if o == nil || o.PciSegment == nil {
var ret int32
return ret
}
return *o.PciSegment
}
// GetPciSegmentOk returns a tuple with the PciSegment field value if set, nil otherwise
// and a boolean to check if the value has been set.
func (o *DiskConfig) GetPciSegmentOk() (*int32, bool) {
if o == nil || o.PciSegment == nil {
return nil, false
}
return o.PciSegment, true
}
// HasPciSegment returns a boolean if a field has been set.
func (o *DiskConfig) HasPciSegment() bool {
if o != nil && o.PciSegment != nil {
return true
}
return false
}
// SetPciSegment gets a reference to the given int32 and assigns it to the PciSegment field.
func (o *DiskConfig) SetPciSegment(v int32) {
o.PciSegment = &v
}
// GetId returns the Id field value if set, zero value otherwise.
func (o *DiskConfig) GetId() string {
if o == nil || o.Id == nil {
@@ -451,6 +484,9 @@ func (o DiskConfig) MarshalJSON() ([]byte, error) {
if o.RateLimiterConfig != nil {
toSerialize["rate_limiter_config"] = o.RateLimiterConfig
}
if o.PciSegment != nil {
toSerialize["pci_segment"] = o.PciSegment
}
if o.Id != nil {
toSerialize["id"] = o.Id
}

View File

@@ -16,13 +16,14 @@ import (
// FsConfig struct for FsConfig
type FsConfig struct {
Tag string `json:"tag"`
Socket string `json:"socket"`
NumQueues int32 `json:"num_queues"`
QueueSize int32 `json:"queue_size"`
Dax bool `json:"dax"`
CacheSize int64 `json:"cache_size"`
Id *string `json:"id,omitempty"`
Tag string `json:"tag"`
Socket string `json:"socket"`
NumQueues int32 `json:"num_queues"`
QueueSize int32 `json:"queue_size"`
Dax bool `json:"dax"`
CacheSize int64 `json:"cache_size"`
PciSegment *int32 `json:"pci_segment,omitempty"`
Id *string `json:"id,omitempty"`
}
// NewFsConfig instantiates a new FsConfig object
@@ -198,6 +199,38 @@ func (o *FsConfig) SetCacheSize(v int64) {
o.CacheSize = v
}
// GetPciSegment returns the PciSegment field value if set, zero value otherwise.
func (o *FsConfig) GetPciSegment() int32 {
if o == nil || o.PciSegment == nil {
var ret int32
return ret
}
return *o.PciSegment
}
// GetPciSegmentOk returns a tuple with the PciSegment field value if set, nil otherwise
// and a boolean to check if the value has been set.
func (o *FsConfig) GetPciSegmentOk() (*int32, bool) {
if o == nil || o.PciSegment == nil {
return nil, false
}
return o.PciSegment, true
}
// HasPciSegment returns a boolean if a field has been set.
func (o *FsConfig) HasPciSegment() bool {
if o != nil && o.PciSegment != nil {
return true
}
return false
}
// SetPciSegment gets a reference to the given int32 and assigns it to the PciSegment field.
func (o *FsConfig) SetPciSegment(v int32) {
o.PciSegment = &v
}
// GetId returns the Id field value if set, zero value otherwise.
func (o *FsConfig) GetId() string {
if o == nil || o.Id == nil {
@@ -250,6 +283,9 @@ func (o FsConfig) MarshalJSON() ([]byte, error) {
if true {
toSerialize["cache_size"] = o.CacheSize
}
if o.PciSegment != nil {
toSerialize["pci_segment"] = o.PciSegment
}
if o.Id != nil {
toSerialize["id"] = o.Id
}

View File

@@ -28,6 +28,7 @@ type NetConfig struct {
VhostMode *string `json:"vhost_mode,omitempty"`
Id *string `json:"id,omitempty"`
Fd *[]int32 `json:"fd,omitempty"`
PciSegment *int32 `json:"pci_segment,omitempty"`
RateLimiterConfig *RateLimiterConfig `json:"rate_limiter_config,omitempty"`
}
@@ -464,6 +465,38 @@ func (o *NetConfig) SetFd(v []int32) {
o.Fd = &v
}
// GetPciSegment returns the PciSegment field value if set, zero value otherwise.
func (o *NetConfig) GetPciSegment() int32 {
if o == nil || o.PciSegment == nil {
var ret int32
return ret
}
return *o.PciSegment
}
// GetPciSegmentOk returns a tuple with the PciSegment field value if set, nil otherwise
// and a boolean to check if the value has been set.
func (o *NetConfig) GetPciSegmentOk() (*int32, bool) {
if o == nil || o.PciSegment == nil {
return nil, false
}
return o.PciSegment, true
}
// HasPciSegment returns a boolean if a field has been set.
func (o *NetConfig) HasPciSegment() bool {
if o != nil && o.PciSegment != nil {
return true
}
return false
}
// SetPciSegment gets a reference to the given int32 and assigns it to the PciSegment field.
func (o *NetConfig) SetPciSegment(v int32) {
o.PciSegment = &v
}
// GetRateLimiterConfig returns the RateLimiterConfig field value if set, zero value otherwise.
func (o *NetConfig) GetRateLimiterConfig() RateLimiterConfig {
if o == nil || o.RateLimiterConfig == nil {
@@ -534,6 +567,9 @@ func (o NetConfig) MarshalJSON() ([]byte, error) {
if o.Fd != nil {
toSerialize["fd"] = o.Fd
}
if o.PciSegment != nil {
toSerialize["pci_segment"] = o.PciSegment
}
if o.RateLimiterConfig != nil {
toSerialize["rate_limiter_config"] = o.RateLimiterConfig
}

View File

@@ -21,6 +21,7 @@ type PmemConfig struct {
Iommu *bool `json:"iommu,omitempty"`
Mergeable *bool `json:"mergeable,omitempty"`
DiscardWrites *bool `json:"discard_writes,omitempty"`
PciSegment *int32 `json:"pci_segment,omitempty"`
Id *string `json:"id,omitempty"`
}
@@ -206,6 +207,38 @@ func (o *PmemConfig) SetDiscardWrites(v bool) {
o.DiscardWrites = &v
}
// GetPciSegment returns the PciSegment field value if set, zero value otherwise.
func (o *PmemConfig) GetPciSegment() int32 {
if o == nil || o.PciSegment == nil {
var ret int32
return ret
}
return *o.PciSegment
}
// GetPciSegmentOk returns a tuple with the PciSegment field value if set, nil otherwise
// and a boolean to check if the value has been set.
func (o *PmemConfig) GetPciSegmentOk() (*int32, bool) {
if o == nil || o.PciSegment == nil {
return nil, false
}
return o.PciSegment, true
}
// HasPciSegment returns a boolean if a field has been set.
func (o *PmemConfig) HasPciSegment() bool {
if o != nil && o.PciSegment != nil {
return true
}
return false
}
// SetPciSegment gets a reference to the given int32 and assigns it to the PciSegment field.
func (o *PmemConfig) SetPciSegment(v int32) {
o.PciSegment = &v
}
// GetId returns the Id field value if set, zero value otherwise.
func (o *PmemConfig) GetId() string {
if o == nil || o.Id == nil {
@@ -255,6 +288,9 @@ func (o PmemConfig) MarshalJSON() ([]byte, error) {
if o.DiscardWrites != nil {
toSerialize["discard_writes"] = o.DiscardWrites
}
if o.PciSegment != nil {
toSerialize["pci_segment"] = o.PciSegment
}
if o.Id != nil {
toSerialize["id"] = o.Id
}

View File

@@ -19,9 +19,10 @@ type VsockConfig struct {
// Guest Vsock CID
Cid int64 `json:"cid"`
// Path to UNIX domain socket, used to proxy vsock connections.
Socket string `json:"socket"`
Iommu *bool `json:"iommu,omitempty"`
Id *string `json:"id,omitempty"`
Socket string `json:"socket"`
Iommu *bool `json:"iommu,omitempty"`
PciSegment *int32 `json:"pci_segment,omitempty"`
Id *string `json:"id,omitempty"`
}
// NewVsockConfig instantiates a new VsockConfig object
@@ -127,6 +128,38 @@ func (o *VsockConfig) SetIommu(v bool) {
o.Iommu = &v
}
// GetPciSegment returns the PciSegment field value if set, zero value otherwise.
func (o *VsockConfig) GetPciSegment() int32 {
if o == nil || o.PciSegment == nil {
var ret int32
return ret
}
return *o.PciSegment
}
// GetPciSegmentOk returns a tuple with the PciSegment field value if set, nil otherwise
// and a boolean to check if the value has been set.
func (o *VsockConfig) GetPciSegmentOk() (*int32, bool) {
if o == nil || o.PciSegment == nil {
return nil, false
}
return o.PciSegment, true
}
// HasPciSegment returns a boolean if a field has been set.
func (o *VsockConfig) HasPciSegment() bool {
if o != nil && o.PciSegment != nil {
return true
}
return false
}
// SetPciSegment gets a reference to the given int32 and assigns it to the PciSegment field.
func (o *VsockConfig) SetPciSegment(v int32) {
o.PciSegment = &v
}
// GetId returns the Id field value if set, zero value otherwise.
func (o *VsockConfig) GetId() string {
if o == nil || o.Id == nil {
@@ -170,6 +203,9 @@ func (o VsockConfig) MarshalJSON() ([]byte, error) {
if o.Iommu != nil {
toSerialize["iommu"] = o.Iommu
}
if o.PciSegment != nil {
toSerialize["pci_segment"] = o.PciSegment
}
if o.Id != nil {
toSerialize["id"] = o.Id
}

View File

@@ -480,6 +480,16 @@ components:
default: false
description: Virtual machine configuration
CpuAffinity:
type: object
properties:
vcpu:
type: integer
host_cpus:
type: array
items:
type: integer
CpuTopology:
type: object
properties:
@@ -507,9 +517,13 @@ components:
default: 1
type: integer
topology:
$ref: '#/components/schemas/CpuTopology'
$ref: '#/components/schemas/CpuTopology'
max_phys_bits:
type: integer
affinity:
type: array
items:
$ref: '#/components/schemas/CpuAffinity'
MemoryZoneConfig:
required:
@@ -687,6 +701,9 @@ components:
default: true
rate_limiter_config:
$ref: '#/components/schemas/RateLimiterConfig'
pci_segment:
type: integer
format: int16
id:
type: string
@@ -728,6 +745,9 @@ components:
items:
type: integer
format: int32
pci_segment:
type: integer
format: int16
rate_limiter_config:
$ref: '#/components/schemas/RateLimiterConfig'
@@ -783,6 +803,9 @@ components:
type: integer
format: int64
default: 8589934592
pci_segment:
type: integer
format: int16
id:
type: string
@@ -805,6 +828,9 @@ components:
discard_writes:
type: boolean
default: false
pci_segment:
type: integer
format: int16
id:
type: string
@@ -832,6 +858,9 @@ components:
iommu:
type: boolean
default: false
pci_segment:
type: integer
format: int16
id:
type: string
@@ -852,6 +881,9 @@ components:
iommu:
type: boolean
default: false
pci_segment:
type: integer
format: int16
id:
type: string
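Putting the schema additions together, a hypothetical VM config fragment exercising the new `affinity` and `pci_segment` fields might look like this (paths and values are illustrative, not from the source):

```yaml
# Illustrative request-body fragment using the fields added above:
# per-vCPU host pinning on cpus, and pci_segment on a device config.
cpus:
  boot_vcpus: 2
  max_vcpus: 2
  affinity:
    - vcpu: 0
      host_cpus: [0, 1]
    - vcpu: 1
      host_cpus: [2, 3]
disks:
  - path: /var/lib/images/rootfs.raw
    pci_segment: 0
```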

View File

@@ -992,8 +992,10 @@ func (q *qemu) StopVM(ctx context.Context, waitOnly bool) error {
}
}
if err := q.stopVirtiofsd(ctx); err != nil {
return err
if q.config.SharedFS == config.VirtioFS {
if err := q.stopVirtiofsd(ctx); err != nil {
return err
}
}
return nil

View File

@@ -13,7 +13,7 @@ edition = "2018"
futures = "0.3.15"
clap = "2.33.0"
vsock = "0.2.3"
nix = "0.21.0"
nix = "0.23.0"
libc = "0.2.94"
serde = { version = "1.0.126", features = ["derive"] }
bincode = "1.3.3"
@@ -23,9 +23,9 @@ anyhow = "1.0.31"
opentelemetry = { version = "0.14.0", features=["serialize"] }
opentelemetry-jaeger = "0.13.0"
protobuf = "=2.14.0"
tracing-opentelemetry = "0.13.0"
tracing = "0.1.26"
tracing-subscriber = "0.2.18"
tracing-opentelemetry = "0.16.0"
tracing = "0.1.29"
tracing-subscriber = "0.3.3"
# Note: this crate sets the slog 'max_*' features which allows the log level
# to be modified at runtime.

View File

@@ -21,19 +21,19 @@ hex = "0.4.2"
byteorder = "1.3.4"
logging = { path = "../../pkg/logging" }
slog = "2.5.2"
slog-scope = "4.3.0"
rand = "0.7.3"
slog = "2.7.0"
slog-scope = "4.4.0"
rand = "0.8.4"
protobuf = "2.14.0"
nix = "0.21.0"
libc = "0.2.69"
nix = "0.23.0"
libc = "0.2.112"
# XXX: Must be the same as the version used by the agent
ttrpc = { version = "0.5.0" }
# For parsing timeouts
humantime = "2.0.0"
humantime = "2.1.0"
# For Options (state passing)
serde = { version = "1.0.110", features = ["derive"] }
serde_json = "1.0.53"
serde = { version = "1.0.131", features = ["derive"] }
serde_json = "1.0.73"

View File

@@ -234,7 +234,7 @@ pub fn generate_random_hex_string(len: u32) -> String {
let str: String = (0..len)
.map(|_| {
let idx = rng.gen_range(0, CHARSET.len());
let idx = rng.gen_range(0..CHARSET.len());
CHARSET[idx] as char
})
.collect();

View File

@@ -182,7 +182,6 @@ SCRIPTS += image-builder/image_builder.sh
SCRIPTS += initrd-builder/initrd_builder.sh
HELPER_FILES :=
HELPER_FILES += rootfs-builder/versions.txt
HELPER_FILES += scripts/lib.sh
HELPER_FILES += image-builder/nsdax.gpl.c
@@ -202,7 +201,7 @@ install-scripts:
@$(foreach f,$(SCRIPTS),$(call INSTALL_SCRIPT,$f,$(INSTALL_DIR)))
@echo "Installing helper files"
@$(foreach f,$(HELPER_FILES),$(call INSTALL_FILE,$f,$(INSTALL_DIR)))
@echo "Installing installing config files"
@echo "Installing config files"
@$(foreach f,$(DIST_CONFIGS),$(call INSTALL_FILE,$f,$(INSTALL_DIR)))
.PHONY: clean

View File

@@ -42,7 +42,8 @@ RUN dnf install -y \
systemd-devel \
sudo \
xz \
yasm
yasm && \
dnf clean all
# Add in non-privileged user
RUN useradd qatbuilder -p "" && \

View File

@@ -3,8 +3,13 @@
#
# SPDX-License-Identifier: Apache-2.0
# openSUSE Tumbleweed image has only 'latest' tag so ignore DL3006 rule.
# hadolint ignore=DL3006
from opensuse/tumbleweed
# zypper -y or --non-interactive can be used interchangeably here so ignore
# DL3034 rule.
# hadolint ignore=DL3034
RUN zypper --non-interactive refresh; \
zypper --non-interactive install --no-recommends --force-resolution \
autoconf \

View File

@@ -5,6 +5,14 @@
ARG IMAGE_REGISTRY=registry.fedoraproject.org
FROM ${IMAGE_REGISTRY}/fedora:34
RUN [ -n "$http_proxy" ] && sed -i '$ a proxy='$http_proxy /etc/dnf/dnf.conf ; true
RUN dnf install -y qemu-img parted gdisk e2fsprogs gcc xfsprogs findutils
RUN ([ -n "$http_proxy" ] && \
sed -i '$ a proxy='$http_proxy /etc/dnf/dnf.conf ; true) && \
dnf install -y \
e2fsprogs \
findutils \
gcc \
gdisk \
parted \
qemu-img \
xfsprogs && \
dnf clean all

View File

@@ -137,13 +137,16 @@ build_with_container() {
image_dir=$(readlink -f "$(dirname "${image}")")
image_name=$(basename "${image}")
REGISTRY_ARG=""
engine_build_args=""
if [ -n "${IMAGE_REGISTRY}" ]; then
REGISTRY_ARG="--build-arg IMAGE_REGISTRY=${IMAGE_REGISTRY}"
engine_build_args+=" --build-arg IMAGE_REGISTRY=${IMAGE_REGISTRY}"
fi
if [ -n "${USE_PODMAN}" ]; then
engine_build_args+=" --runtime ${DOCKER_RUNTIME}"
fi
"${container_engine}" build \
${REGISTRY_ARG} \
${engine_build_args} \
--build-arg http_proxy="${http_proxy}" \
--build-arg https_proxy="${https_proxy}" \
-t "${container_image_name}" "${script_dir}"

View File

@@ -4,7 +4,7 @@
# SPDX-License-Identifier: Apache-2.0
ARG IMAGE_REGISTRY=docker.io
FROM ${IMAGE_REGISTRY}/alpine:3.13.5
FROM ${IMAGE_REGISTRY}/alpine:3.13
RUN apk update && apk add \
apk-tools-static \
@@ -26,4 +26,5 @@ RUN apk update && apk add \
make \
musl \
musl-dev \
protoc \
tar

View File

@@ -5,13 +5,13 @@
OS_NAME="Alpine"
OS_VERSION=${OS_VERSION:-latest-stable}
OS_VERSION=${OS_VERSION:-3.13}
BASE_PACKAGES="alpine-base"
# Alpine mirror to use
# See a list of mirrors at http://nl.alpinelinux.org/alpine/MIRRORS.txt
MIRROR=http://dl-5.alpinelinux.org/alpine
MIRROR=https://dl-5.alpinelinux.org/alpine
PACKAGES=""

View File

@@ -9,6 +9,8 @@
#
# - Optional environment variables
#
# EXTRA_PKGS: Extra packages to add to the rootfs, provided by the user
#
# BIN_AGENT: Name of the Kata-Agent binary
#
# Any other configuration variable for a specific distro must be added
@@ -22,13 +24,20 @@ build_rootfs() {
# Mandatory
local ROOTFS_DIR=$1
# Add extra packages to the rootfs when specified
local EXTRA_PKGS=${EXTRA_PKGS:-}
# Populate ROOTFS_DIR
check_root
mkdir -p "${ROOTFS_DIR}"
rm -rf ${ROOTFS_DIR}/var/tmp
cp -a -r -f /bin /etc /lib /sbin /usr /var ${ROOTFS_DIR}
mkdir -p ${ROOTFS_DIR}{/root,/proc,/dev,/home,/media,/mnt,/opt,/run,/srv,/sys,/tmp}
/sbin/apk.static \
-X ${MIRROR}/v${OS_VERSION}/main \
-U \
--allow-untrusted \
--root ${ROOTFS_DIR} \
--initdb add ${BASE_PACKAGES} ${EXTRA_PKGS} ${PACKAGES}
echo "${MIRROR}/${OS_VERSION}/main" > ${ROOTFS_DIR}/etc/apk/repositories
mkdir -p ${ROOTFS_DIR}{/root,/etc/apk,/proc}
echo "${MIRROR}/v${OS_VERSION}/main" > ${ROOTFS_DIR}/etc/apk/repositories
}

View File

@@ -32,7 +32,8 @@ RUN yum -y update && yum install -y \
sed \
tar \
vim \
which
which && \
yum clean all
# This will install the proper packages to build Kata components
@INSTALL_MUSL@

View File

@@ -35,7 +35,8 @@ RUN dnf -y update && dnf install -y \
systemd \
tar \
vim \
which
which && \
dnf clean all
# This will install the proper packages to build Kata components
@INSTALL_MUSL@

View File

@@ -35,7 +35,8 @@ RUN dnf -y update && dnf install -y \
systemd \
tar \
vim \
which
which && \
dnf clean all
# This will install the proper packages to build Kata components
@INSTALL_MUSL@

View File

@@ -4,6 +4,8 @@
# SPDX-License-Identifier: Apache-2.0
ARG IMAGE_REGISTRY=docker.io
# stage3-amd64 image has only 'latest' tag so ignore DL3007 rule.
# hadolint ignore=DL3007
FROM ${IMAGE_REGISTRY}/gentoo/stage3-amd64:latest
# This dockerfile needs to provide all the components needed to build a rootfs

View File

@@ -353,23 +353,24 @@ build_rootfs_distro()
info "build directly"
build_rootfs ${ROOTFS_DIR}
else
engine_build_args=""
if [ -n "${USE_DOCKER}" ]; then
container_engine="docker"
elif [ -n "${USE_PODMAN}" ]; then
container_engine="podman"
engine_build_args+=" --runtime ${DOCKER_RUNTIME}"
fi
image_name="${distro}-rootfs-osbuilder"
REGISTRY_ARG=""
if [ -n "${IMAGE_REGISTRY}" ]; then
REGISTRY_ARG="--build-arg IMAGE_REGISTRY=${IMAGE_REGISTRY}"
engine_build_args+=" --build-arg IMAGE_REGISTRY=${IMAGE_REGISTRY}"
fi
# setup to install rust here
generate_dockerfile "${distro_config_dir}"
"$container_engine" build \
${REGISTRY_ARG} \
${engine_build_args} \
--build-arg http_proxy="${http_proxy}" \
--build-arg https_proxy="${https_proxy}" \
-t "${image_name}" "${distro_config_dir}"
@@ -377,21 +378,21 @@ build_rootfs_distro()
# fake mapping if KERNEL_MODULES_DIR is unset
kernel_mod_dir=${KERNEL_MODULES_DIR:-${ROOTFS_DIR}}
docker_run_args=""
docker_run_args+=" --rm"
engine_run_args=""
engine_run_args+=" --rm"
# apt sync scans all possible fds in order to close them, which is incredibly slow on VMs
docker_run_args+=" --ulimit nofile=262144:262144"
docker_run_args+=" --runtime ${DOCKER_RUNTIME}"
engine_run_args+=" --ulimit nofile=262144:262144"
engine_run_args+=" --runtime ${DOCKER_RUNTIME}"
if [ -z "${AGENT_SOURCE_BIN}" ] ; then
docker_run_args+=" -v ${GOPATH_LOCAL}:${GOPATH_LOCAL} --env GOPATH=${GOPATH_LOCAL}"
engine_run_args+=" -v ${GOPATH_LOCAL}:${GOPATH_LOCAL} --env GOPATH=${GOPATH_LOCAL}"
else
docker_run_args+=" --env AGENT_SOURCE_BIN=${AGENT_SOURCE_BIN}"
docker_run_args+=" -v ${AGENT_SOURCE_BIN}:${AGENT_SOURCE_BIN}"
docker_run_args+=" -v ${GOPATH_LOCAL}:${GOPATH_LOCAL} --env GOPATH=${GOPATH_LOCAL}"
engine_run_args+=" --env AGENT_SOURCE_BIN=${AGENT_SOURCE_BIN}"
engine_run_args+=" -v ${AGENT_SOURCE_BIN}:${AGENT_SOURCE_BIN}"
engine_run_args+=" -v ${GOPATH_LOCAL}:${GOPATH_LOCAL} --env GOPATH=${GOPATH_LOCAL}"
fi
docker_run_args+=" $(docker_extra_args $distro)"
engine_run_args+=" $(docker_extra_args $distro)"
# Relabel volumes so SELinux allows access (see docker-run(1))
if command -v selinuxenabled > /dev/null && selinuxenabled ; then
@@ -432,7 +433,7 @@ build_rootfs_distro()
-v "${ROOTFS_DIR}":"/rootfs" \
-v "${script_dir}/../scripts":"/scripts" \
-v "${kernel_mod_dir}":"${kernel_mod_dir}" \
$docker_run_args \
$engine_run_args \
${image_name} \
bash /kata-containers/tools/osbuilder/rootfs-builder/rootfs.sh "${distro}"

View File

@@ -6,7 +6,7 @@
ARG IMAGE_REGISTRY=docker.io
#suse: docker image to be used to create a rootfs
#@OS_VERSION@: Docker image version to build this dockerfile
FROM ${IMAGE_REGISTRY}/opensuse/leap
FROM ${IMAGE_REGISTRY}/opensuse/leap:15.0
# This dockerfile needs to provide all the components needed to build a rootfs
# Install any package need to create a rootfs (package manager, extra tools)

View File

@@ -35,7 +35,9 @@ RUN apt-get update && apt-get install -y \
sed \
systemd \
tar \
vim
vim && \
apt-get clean && rm -rf /var/lib/apt/lists/
# This will install the proper packages to build Kata components
@INSTALL_MUSL@
@INSTALL_RUST@

View File

@@ -6,7 +6,7 @@ FROM registry.centos.org/centos:7 AS base
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
RUN (cd /lib/systemd/system/sysinit.target.wants/ && for i in *; do [ "$i" = systemd-tmpfiles-setup.service ] || rm -f "$i"; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*; \
rm -f /etc/systemd/system/*.wants/*; \
rm -f /lib/systemd/system/local-fs.target.wants/*; \
@@ -25,7 +25,7 @@ ARG KUBE_ARCH=amd64
ARG KATA_ARTIFACTS=./kata-static.tar.xz
ARG DESTINATION=/opt/kata-artifacts
COPY ${KATA_ARTIFACTS} .
COPY ${KATA_ARTIFACTS} ${WORKDIR}
RUN \
yum -y update && \
@@ -37,7 +37,7 @@ tar xvf ${KATA_ARTIFACTS} -C ${DESTINATION}/ && \
chown -R root:root ${DESTINATION}/
RUN \
curl -Lso /bin/kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/${KUBE_ARCH}/kubectl && \
curl -Lso /bin/kubectl "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/${KUBE_ARCH}/kubectl" && \
chmod +x /bin/kubectl
COPY scripts ${DESTINATION}/scripts

View File

@@ -1,7 +1,7 @@
# Copyright (c) 2019 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
FROM mcr.microsoft.com/azure-cli:latest
FROM mcr.microsoft.com/azure-cli:2.9.1
LABEL com.github.actions.name="Test kata-deploy in an AKS cluster"
LABEL com.github.actions.description="Test kata-deploy in an AKS cluster"
@@ -16,14 +16,14 @@ ENV GITHUB_ACTION_NAME="Test kata-deploy in an AKS cluster"
# PKG_SHA environment variable
ENV PKG_SHA=HEAD
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/${ARCH}/kubectl \
RUN curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/${ARCH}/kubectl" \
&& chmod +x ./kubectl \
&& mv ./kubectl /usr/local/bin/kubectl
RUN curl -LO https://github.com/Azure/aks-engine/releases/download/${AKS_ENGINE_VER}/aks-engine-${AKS_ENGINE_VER}-linux-${ARCH}.tar.gz \
&& tar xvf aks-engine-${AKS_ENGINE_VER}-linux-${ARCH}.tar.gz \
&& mv aks-engine-${AKS_ENGINE_VER}-linux-${ARCH}/aks-engine /usr/local/bin/aks-engine \
&& rm aks-engine-${AKS_ENGINE_VER}-linux-${ARCH}.tar.gz
RUN curl -LO "https://github.com/Azure/aks-engine/releases/download/${AKS_ENGINE_VER}/aks-engine-${AKS_ENGINE_VER}-linux-${ARCH}.tar.gz" \
&& tar xvf "aks-engine-${AKS_ENGINE_VER}-linux-${ARCH}.tar.gz" \
&& mv "aks-engine-${AKS_ENGINE_VER}-linux-${ARCH}/aks-engine" /usr/local/bin/aks-engine \
&& rm "aks-engine-${AKS_ENGINE_VER}-linux-${ARCH}.tar.gz"
COPY kubernetes-containerd.json /
COPY setup-aks.sh test-kata.sh entrypoint.sh /

View File

@@ -18,7 +18,7 @@ spec:
katacontainers.io/kata-runtime: cleanup
containers:
- name: kube-kata-cleanup
image: quay.io/kata-containers/kata-deploy:2.3.0
image: quay.io/kata-containers/kata-deploy:2.3.1
imagePullPolicy: Always
command: [ "bash", "-c", "/opt/kata-artifacts/scripts/kata-deploy.sh reset" ]
env:

View File

@@ -16,7 +16,7 @@ spec:
serviceAccountName: kata-label-node
containers:
- name: kube-kata
image: quay.io/kata-containers/kata-deploy:2.3.0
image: quay.io/kata-containers/kata-deploy:2.3.1
imagePullPolicy: Always
lifecycle:
preStop:

View File

@@ -6,17 +6,19 @@ FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
ENV INSTALL_IN_GOPATH=false
ADD install_yq.sh /usr/bin/install_yq.sh
COPY install_yq.sh /usr/bin/install_yq.sh
# yq installer deps
RUN apt update && apt-get install -y curl sudo
# Install yq
RUN install_yq.sh
RUN curl -fsSL https://get.docker.com -o get-docker.sh
RUN sh get-docker.sh
# Install yq and docker
RUN apt-get update && \
apt-get install -y --no-install-recommends \
ca-certificates \
curl \
sudo && \
apt-get clean && rm -rf /var/lib/apt/lists/ && \
install_yq.sh && \
curl -fsSL https://get.docker.com -o get-docker.sh && \
sh get-docker.sh
ARG IMG_USER=kata-builder
ARG UID=1000
@@ -27,12 +29,14 @@ RUN sh -c "echo '${IMG_USER} ALL=NOPASSWD: ALL' >> /etc/sudoers"
#FIXME: gcc is required as the agent is built outside of a container build.
RUN apt-get update && \
apt install --no-install-recommends -y \
cpio \
gcc \
git \
make \
xz-utils
apt-get install --no-install-recommends -y \
build-essential \
cpio \
gcc \
git \
make \
xz-utils && \
apt-get clean && rm -rf /var/lib/apt/lists
ENV USER ${IMG_USER}
USER ${UID}:${GID}

View File

@@ -266,6 +266,11 @@ function main() {
containerd_conf_file="${containerd_conf_tmpl_file}"
containerd_conf_file_backup="${containerd_conf_file}.bak"
else
# runtime == containerd
if [ ! -f "$containerd_conf_file" ]; then
containerd config default > "$containerd_conf_file"
fi
fi
action=${1:-}
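The guard added above can be exercised in isolation. The function below mirrors its logic (the `containerd config default` invocation is as in the script; the wrapper function name is ours):

```shell
# Sketch of the guard added above: when containerd has no config file yet,
# generate the default one before kata-deploy edits it. A pre-existing
# file is left untouched, so operator customizations survive.
ensure_containerd_conf() {
    conf="$1"
    if [ ! -f "$conf" ]; then
        containerd config default > "$conf"
    fi
}
```

Without this, kata-deploy on a host that ships no `/etc/containerd/config.toml` would try to patch an empty file and produce an invalid configuration.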

View File

@@ -1,13 +1,14 @@
# Copyright (c) 2020 Eric Ernst
# SPDX-License-Identifier: Apache-2.0
FROM golang:1.15-alpine
FROM golang:1.15-alpine AS builder
RUN apk add bash curl git make
RUN apk add --no-cache bash curl git make
WORKDIR /go/src/github.com/kata-containers/kata-containers/src/runtime
COPY . /go/src/github.com/kata-containers/kata-containers
RUN SKIP_GO_VERSION_CHECK=true make monitor
FROM alpine:latest
COPY --from=0 /go/src/github.com/kata-containers/kata-containers/src/runtime/kata-monitor /usr/bin/kata-monitor
FROM alpine:3.14
COPY --from=builder /go/src/github.com/kata-containers/kata-containers/src/runtime/kata-monitor /usr/bin/kata-monitor
CMD ["-h"]
ENTRYPOINT ["/usr/bin/kata-monitor"]

View File

@@ -7,7 +7,7 @@ Containers VM kernels.
This directory holds config files for the Kata Linux Kernel in two forms:
- A tree of config file 'fragments' in the `fragments` sub-folder, that are
- A tree of config file `fragments` in the `fragments` sub-folder, that are
constructed into a complete config file using the kernel
`scripts/kconfig/merge_config.sh` script.
- As complete config files that can be used as-is.
@@ -56,7 +56,7 @@ Example of valid exclusion:
# !s390x !ppc64le
```
The fragment gathering tool perfoms some basic sanity checks, and the `build-kernel.sh` will
The fragment gathering tool performs some basic sanity checks, and the `build-kernel.sh` will
fail and report the error in the cases of:
- A duplicate `CONFIG` symbol appearing.

@@ -32,3 +32,4 @@ CONFIG_NVMEM=y
#CONFIG_DAX_DRIVER=y
CONFIG_DAX=y
CONFIG_FS_DAX=y
CONFIG_FUSE_DAX=y

@@ -1 +1 @@
87
88

@@ -2,19 +2,20 @@
#
# SPDX-License-Identifier: Apache-2.0
FROM ubuntu
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
# kernel deps
RUN apt update
RUN apt install -y \
RUN apt-get update && \
apt-get install -y --no-install-recommends \
bc \
bison \
build-essential \
ca-certificates \
curl \
flex \
git \
iptables \
libelf-dev
RUN [ "$(uname -m)" = "s390x" ] && apt-get install -y libssl-dev || true
libelf-dev && \
if [ "$(uname -m)" = "s390x" ]; then apt-get install -y --no-install-recommends libssl-dev; fi && \
apt-get clean && rm -rf /var/lib/lists/

@@ -12,8 +12,8 @@ WORKDIR /root/qemu
ARG CACHE_TIMEOUT
RUN echo "$CACHE_TIMEOUT"
RUN apt-get update && apt-get upgrade -y
RUN apt-get --no-install-recommends install -y \
RUN apt-get update && apt-get upgrade -y && \
apt-get --no-install-recommends install -y \
apt-utils \
autoconf \
automake \
@@ -46,40 +46,33 @@ RUN apt-get --no-install-recommends install -y \
python \
python-dev \
rsync \
zlib1g-dev
RUN [ "$(uname -m)" != "s390x" ] && apt-get install -y libpmem-dev || true
zlib1g-dev && \
if [ "$(uname -m)" != "s390x" ]; then apt-get install -y --no-install-recommends libpmem-dev; fi && \
apt-get clean && rm -rf /var/lib/apt/lists/
ARG QEMU_REPO
RUN cd .. && git clone "${QEMU_REPO}" qemu
# commit/tag/branch
ARG QEMU_VERSION
RUN git checkout "${QEMU_VERSION}"
RUN git clone https://github.com/qemu/capstone.git capstone
RUN git clone https://github.com/qemu/keycodemapdb.git ui/keycodemapdb
RUN git clone https://github.com/qemu/meson.git meson
RUN git clone https://github.com/qemu/berkeley-softfloat-3.git tests/fp/berkeley-softfloat-3
RUN git clone https://github.com/qemu/berkeley-testfloat-3.git tests/fp/berkeley-testfloat-3
ADD scripts/configure-hypervisor.sh /root/configure-hypervisor.sh
ADD qemu /root/kata_qemu
ADD scripts/apply_patches.sh /root/apply_patches.sh
ADD scripts/patch_qemu.sh /root/patch_qemu.sh
RUN /root/patch_qemu.sh "${QEMU_VERSION}" "/root/kata_qemu/patches"
ARG PREFIX
ARG BUILD_SUFFIX
RUN PREFIX="${PREFIX}" /root/configure-hypervisor.sh -s "kata-qemu${BUILD_SUFFIX}" | xargs ./configure \
--with-pkgversion="kata-static${BUILD_SUFFIX}"
RUN make -j$(nproc)
ARG QEMU_DESTDIR
RUN make install DESTDIR="${QEMU_DESTDIR}"
ARG QEMU_TARBALL
ADD static-build/scripts/qemu-build-post.sh /root/static-build/scripts/qemu-build-post.sh
ADD static-build/qemu.blacklist /root/static-build/qemu.blacklist
RUN /root/static-build/scripts/qemu-build-post.sh
COPY scripts/configure-hypervisor.sh /root/configure-hypervisor.sh
COPY qemu /root/kata_qemu
COPY scripts/apply_patches.sh /root/apply_patches.sh
COPY scripts/patch_qemu.sh /root/patch_qemu.sh
COPY static-build/scripts/qemu-build-post.sh /root/static-build/scripts/qemu-build-post.sh
COPY static-build/qemu.blacklist /root/static-build/qemu.blacklist
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN git clone --depth=1 "${QEMU_REPO}" qemu && \
cd qemu && \
git fetch --depth=1 origin "${QEMU_VERSION}" && git checkout FETCH_HEAD && \
scripts/git-submodule.sh update meson capstone && \
/root/patch_qemu.sh "${QEMU_VERSION}" "/root/kata_qemu/patches" && \
(PREFIX="${PREFIX}" /root/configure-hypervisor.sh -s "kata-qemu${BUILD_SUFFIX}" | xargs ./configure \
--with-pkgversion="kata-static${BUILD_SUFFIX}") && \
make -j"$(nproc)" && \
make install DESTDIR="${QEMU_DESTDIR}" && \
/root/static-build/scripts/qemu-build-post.sh
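The partial-clone pattern above (clone at depth 1, then fetch a single tag and check out `FETCH_HEAD`) keeps the build context small. It can be exercised against a throwaway local repository; a `file://` URL is used because plain local-path clones do not go through the transport that supports shallow history:

```shell
# Throwaway "upstream" repo standing in for ${QEMU_REPO}.
tmp=$(mktemp -d)
git init -q "$tmp/upstream"
git -C "$tmp/upstream" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial"
git -C "$tmp/upstream" tag v1.0   # stands in for ${QEMU_VERSION}

# Partial clone: history is truncated to a single commit.
git clone -q --depth=1 "file://$tmp/upstream" "$tmp/qemu"

# Fetch only the wanted tag, then check out the commit it points at.
git -C "$tmp/qemu" fetch -q --depth=1 origin v1.0
git -C "$tmp/qemu" checkout -q FETCH_HEAD
```

This mirrors the Dockerfile's `git clone --depth=1` / `git fetch --depth=1 origin "${QEMU_VERSION}"` sequence: the clone never downloads the full QEMU history, only the one commit the requested tag resolves to.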

@@ -2,12 +2,21 @@
#
# SPDX-License-Identifier: Apache-2.0
FROM ubuntu
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y make curl sudo gcc
RUN apt-get update && \
apt-get install -y --no-install-recommends \
build-essential \
ca-certificates \
curl \
gcc \
git \
make \
sudo && \
apt-get clean && rm -rf /var/lib/apt/lists/
ADD install_go.sh /usr/bin/install_go.sh
COPY install_go.sh /usr/bin/install_go.sh
ARG GO_VERSION
RUN install_go.sh "${GO_VERSION}"
ENV PATH=/usr/local/go/bin:${PATH}

@@ -14,15 +14,14 @@ ENV GOPATH=/home/go
ENV TESTS_REPOSITORY_PATH="${GOPATH}/src/${TESTS_REPO}"
ENV AGENT_INIT=yes TEST_INITRD=yes OSBUILDER_DISTRO=alpine
# Install packages
RUN sudo dnf -y install kata-proxy kata-ksm-throttler kata-osbuilder kata-runtime kata-shim
RUN sudo mkdir "${GOPATH}"
RUN sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
RUN sudo dnf makecache
RUN sudo dnf -y install docker-ce
RUN go get -d "${TESTS_REPO}"
RUN cd "${TESTS_REPOSITORY_PATH}" && .ci/install_kata_image.sh
RUN cd "${TESTS_REPOSITORY_PATH}" && .ci/install_kata_kernel.sh
RUN kata-runtime kata-env
# Install packages and build and install Kata Containers
RUN dnf -y install kata-proxy kata-ksm-throttler kata-osbuilder kata-runtime kata-shim && \
mkdir "${GOPATH}" && \
dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo && \
dnf makecache && dnf -y install docker-ce && dnf clean all && \
go get -d "${TESTS_REPO}" && \
cd "${TESTS_REPOSITORY_PATH}" && .ci/install_kata_image.sh && \
cd "${TESTS_REPOSITORY_PATH}" && .ci/install_kata_kernel.sh && \
kata-runtime kata-env
CMD ["/bin/bash"]

@@ -136,17 +136,16 @@ github_get_release_file_url()
local url="${1:-}"
local version="${2:-}"
download_url=$(curl -sL "$url" |\
download_urls=$(curl -sL "$url" |\
jq --arg version "$version" \
-r '.[] | select(.tag_name == $version) | .assets[0].browser_download_url' || true)
-r '.[] | select(.tag_name == $version) | .assets[].browser_download_url' |\
grep static)
[ "$download_url" = null ] && download_url=""
[ -z "$download_url" ] && die "Cannot determine download URL for version $version ($url)"
[ -z "$download_urls" ] && die "Cannot determine download URL for version $version ($url)"
local arch=$(uname -m)
[ "$arch" = x86_64 ] && arch="($arch|amd64)"
echo "$download_url" | egrep -q "$arch" || die "No release for '$arch architecture ($url)"
local download_url=$(grep "$arch" <<< "$download_urls")
[ -z "$download_url" ] && die "No release for architecture '$arch' ($url)"
echo "$download_url"
}
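The fix above stops blindly taking `.assets[0]` and instead lists every asset URL, filters for `static` builds, and greps for the host architecture. The architecture-selection step can be sketched on canned data (hypothetical URLs; the arch is pinned instead of read from `uname -m` so the sketch is deterministic, and `grep -E` is used so the alternation is treated as an extended regex):

```shell
# Hypothetical asset URLs, as the jq query above would emit them.
download_urls="https://example.com/kata-static-2.3.1-x86_64.tar.xz
https://example.com/kata-static-2.3.1-aarch64.tar.xz"

arch="x86_64"                                  # normally: arch=$(uname -m)
[ "$arch" = "x86_64" ] && arch="($arch|amd64)" # some releases label x86_64 as "amd64"

# Keep only the asset matching the host architecture.
download_url=$(echo "$download_urls" | grep -E "$arch")
echo "$download_url"
```

Selecting by architecture rather than position means the function keeps working when the project reorders or adds release assets.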

@@ -75,7 +75,7 @@ assets:
url: "https://github.com/cloud-hypervisor/cloud-hypervisor"
uscan-url: >-
https://github.com/cloud-hypervisor/cloud-hypervisor/tags.*/v?(\d\S+)\.tar\.gz
version: "v19.0"
version: "v20.2"
firecracker:
description: "Firecracker micro-VMM"
@@ -139,13 +139,15 @@ assets:
architecture:
aarch64:
name: &default-initrd-name "alpine"
version: &default-initrd-version "3.13.5"
version: &default-initrd-version "3.13"
# Do not use Alpine on ppc64le & s390x, the agent cannot use musl because
# there is no such Rust target
ppc64le:
name: *default-initrd-name
version: *default-initrd-version
name: &glibc-initrd-name "ubuntu"
version: &glibc-initrd-version "20.04"
s390x:
name: *default-initrd-name
version: *default-initrd-version
name: *glibc-initrd-name
version: *glibc-initrd-version
x86_64:
name: *default-initrd-name
version: *default-initrd-version
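The hunk above leans on YAML anchors (`&name`) and aliases (`*name`): `ppc64le` now defines the glibc initrd values once, and `s390x` reuses them by reference instead of repeating the strings. A minimal standalone illustration with hypothetical keys:

```yaml
# &anchor stores a value; *alias reuses it later in the same document.
ppc64le:
  name: &glibc-initrd-name "ubuntu"
  version: &glibc-initrd-version "20.04"
s390x:
  name: *glibc-initrd-name        # resolves to "ubuntu"
  version: *glibc-initrd-version  # resolves to "20.04"
```

A later edit to the anchored value therefore updates every architecture that aliases it, which is why the diff only touches `ppc64le` and points `s390x` at it.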