kata-deploy files must be adapted to a new release. The cases where it
happens are when the release goes from -> to:
* main -> stable:
* kata-deploy-stable / kata-cleanup-stable: are removed
* stable -> stable:
* kata-deploy / kata-cleanup: bump the release to the new one.
There are no changes when doing an alpha release, as the files on the
"main" branch always point to the "latest" and "stable" tags.
Signed-off-by: Archana Shinde <archana.m.shinde@intel.com>
This sets the Docker version to v20.10 in the upstream ubuntu:20.04 Docker image for s390x and ppc64le.
Fixes: #6211 for stable-3.0
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
(cherry picked from commit f49b89b632)
Signed-off-by: Greg Kurz <groug@kaod.org>
kata-deploy files must be adapted to a new release. The cases where it
happens are when the release goes from -> to:
* main -> stable:
* kata-deploy-stable / kata-cleanup-stable: are removed
* stable -> stable:
* kata-deploy / kata-cleanup: bump the release to the new one.
There are no changes when doing an alpha release, as the files on the
"main" branch always point to the "latest" and "stable" tags.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
If a kata pod is deployed on a machine, then after the machine restarts the kata-deploy pod status will be CrashLoopBackOff.
Fixes: #5868
Signed-off-by: SinghWang <wangxin_0611@126.com>
When moving to building the CI artefacts using the kata-deploy scripts,
we've noticed that the build would fail on any machine where the tarball
wasn't officially provided.
This happens as rust is missing from the 1st layer container. However,
it's a very common practice to leave the 1st layer container with the
minimum possible dependencies and install whatever is needed for
building a specific component in a 2nd layer container, which virtiofsd
never had.
In this commit we introduce the second layer containers (yes,
containers), one for building virtiofsd using musl, and one for building
virtiofsd using glibc. The reason for taking this approach was to
actually simplify the scripts and avoid building the dependencies
(libseccomp, libcap-ng) using musl libc.
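A rough sketch of what the musl 2nd layer looks like (image name and mount point are illustrative, not the ones hard-coded in the scripts):

# the 1st layer container has no rust; spawn a 2nd layer container that does
docker run --rm \
    -v "${virtiofsd_src}:/virtiofsd" -w /virtiofsd \
    rust:alpine \
    cargo build --release   # rust:alpine already links against musl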
Fixes: #5425
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
(cherry picked from commit 7e5941c578)
kata-deploy files must be adapted to a new release. The cases where it
happens are when the release goes from -> to:
* main -> stable:
* kata-deploy-stable / kata-cleanup-stable: are removed
* stable -> stable:
* kata-deploy / kata-cleanup: bump the release to the new one.
There are no changes when doing an alpha release, as the files on the
"main" branch always point to the "latest" and "stable" tags.
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
Fix threading conflicts when kata-deploy 'make kata-tarball' is called.
Force the creation of rootfs tarballs to happen serially instead of in parallel.
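A sketch of the intended ordering (target names below are assumptions):

# build the rootfs tarballs one at a time; everything else can stay parallel
for t in rootfs-image-tarball rootfs-initrd-tarball; do
    make -j1 "${t}"
done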
Fixes: #4787
Signed-off-by: Ryan Savino <ryan.savino@amd.com>
The clone_tests_repo() in ci/lib.sh relies on the CI variable to decide
whether to check out the tests repository or not. So it is required to
pass that variable down to the kata-deploy build container, otherwise
it can fail in some scenarios.
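In practice that boils down to forwarding the variable when the build container is spawned, something along these lines (a sketch, not the exact command from kata-deploy-binaries-in-docker.sh; ${build_image} is an assumption):

# propagate CI so clone_tests_repo() behaves the same inside the container
docker run --rm \
    --env CI="${CI:-}" \
    "${build_image}" ./kata-deploy-binaries.sh "$@"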
Fixes: #4949
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
The install_yq.sh script is copied to tools/packaging/kata-deploy/local-build/dockerbuild
so that it is included in the kata-deploy build image. Let's tell git to
ignore that file.
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Instead of passing a `KATA_CONF_FILE` environment variable, let's rely
on the configured (in the container engine) config path, as both
containerd and CRI-O support it, and we're using this for both of them.
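With containerd, for instance, that roughly means declaring the path in the engine's own runtime section rather than exporting an environment variable; a sketch, assuming the usual kata-deploy install prefix:

# sketch: let containerd carry the Kata config path instead of KATA_CONF_FILE
cat <<EOF | sudo tee -a /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata]
  runtime_type = "io.containerd.kata.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata.options]
    ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration.toml"
EOF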
Fixes: #4608
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
As we're already doing for containerd, let's also pass the configuration
path to CRI-O, as all the supported CRI-O versions do support this
configuration option.
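Roughly, for CRI-O that would be a drop-in along these lines (the option name and paths here are assumptions, not taken from the actual patch):

# sketch: CRI-O drop-in pointing the kata runtime at its configuration file
cat <<EOF | sudo tee /etc/crio/crio.conf.d/50-kata
[crio.runtime.runtimes.kata]
  runtime_path = "/opt/kata/bin/containerd-shim-kata-v2"
  runtime_type = "vm"
  runtime_config_path = "/opt/kata/share/defaults/kata-containers/configuration.toml"
EOF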
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
While running make as a non-privileged user, make errors out with
the following message:
"INFO: Build cloud-hypervisor enabling the following features: tdx
Got permission denied while trying to connect to the Docker daemon
socket at unix:///var/run/docker.sock: Post
"http://%2Fvar%2Frun%2Fdocker.sock/v1.24/images/create?fromImage=cloudhypervisor%2Fdev&tag=20220524-0":
dial unix /var/run/docker.sock: connect: permission denied"
Even though the user may be part of the docker group, the clh build from
source does a docker-in-docker build. It is necessary for the user of
the nested container to be part of the docker group for the build to
succeed.
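One common way around this, shown only to illustrate the problem (the image and script names below are made up):

# map the host docker socket in and give the container user its group
docker_gid="$(stat -c '%g' /var/run/docker.sock)"
docker run --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    --group-add "${docker_gid}" \
    "${builder_image}" ./build-clh.sh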
Fixes: #4594
Signed-off-by: Archana Shinde <archana.m.shinde@intel.com>
Replaces calls of nproc with
nproc ${CI:+--ignore 1}
to run nproc with one less processing unit than the maximum, to prevent
DOS-ing the local machine.
If the process is being run in a container (determined via whether $CI is
null), all available processing units will be used.
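Concretely, the expansion behaves like this:

export CI=true
nproc ${CI:+--ignore 1}   # expands to: nproc --ignore 1 (one unit reserved)
unset CI
nproc ${CI:+--ignore 1}   # expands to: nproc (all units reported)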
Fixes: #3967
Signed-off-by: Derek Lee <derlee@redhat.com>
As 2.5.0-rc0 has been released, let's switch the kata-deploy / kata-cleanup
tags back to "latest", and re-add the kata-deploy-stable and the
kata-cleanup-stable files.
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
kata-deploy files must be adapted to a new release. The cases where it
happens are when the release goes from -> to:
* main -> stable:
* kata-deploy-stable / kata-cleanup-stable: are removed
* stable -> stable:
* kata-deploy / kata-cleanup: bump the release to the new one.
There are no changes when doing an alpha release, as the files on the
"main" branch always point to the "latest" and "stable" tags.
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
As done for the other binaries we release, let's add support for
"building" (or pulling down) the static binary we ship as part of the
kata-containers static tarball (the same one used by kata-deploy).
Right now the virtiofsd is installed in /opt/kata/libexec/virtiofsd, a
different path than the virtiofsd that comes with QEMU.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
As virtiofsd comes in the `zip` format, let's install unzip in the
containers so we can access the virtiofsd binary.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
"RKE2 - Rancher's Next Generation Kuberentes Distribution" can easily be
supported by kata-deploy with some simple adjustments to what we've been
relying on for "k3s".
The main differences between k3s and RKE2 are, basically:
1. The location where the containerd configuration is stored
- k3s: /var/lib/rancher/k3s/agent/etc/containerd/
- rke2: /var/lib/rancher/rke2/agent/etc/containerd/
2. The name of the systemd services used:
- k3s: k3s.service or k3s-agent.service
- rke2: rke2-server.service or rke2-agent.service
Knowing this, let's add a new overlay for RKE2, adapt the kata-deploy
and the kata-cleanup scripts, and that's it.
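A rough sketch of the resulting selection logic (the variable names are illustrative, not the exact ones in the scripts):

# pick the containerd config location and service per distribution
case "${distribution}" in
    k3s)
        containerd_conf_dir="/var/lib/rancher/k3s/agent/etc/containerd"
        service="k3s.service"          # or k3s-agent.service on agent nodes
        ;;
    rke2)
        containerd_conf_dir="/var/lib/rancher/rke2/agent/etc/containerd"
        service="rke2-server.service"  # or rke2-agent.service on agent nodes
        ;;
esac
systemctl restart "${service}"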
Fixes: #4160
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's move the specific installation instructions, such as the ones for k3s,
higher up in the document.
This helps reading (and also skipping) according to what the user
is looking for.
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
The idea is to pass this README file to the kata-doc-to-script.sh script and
then execute the result.
Added comments with a file name on top of each YAML snippet.
This helps in assigning a file name when we cat the YAML to a file.
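For example, a snippet annotated this way can be written out and applied mechanically (the pod name and contents below are just an illustration):

cat <<EOF > nginx-kata.yaml
# nginx-kata.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kata
spec:
  runtimeClassName: kata
  containers:
  - name: nginx
    image: nginx
EOF
kubectl apply -f nginx-kata.yaml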
Fixes: #3943
Signed-off-by: David Esparza <david.esparza.borquez@intel.com>
`make kata-tarball` relies on `kata-deploy-binaries.sh -s` which
silently ignores errors, and you may end up with an incomplete
tarball without noticing it because `make`'s exit status is 0.
`kata-deploy-binaries.sh` does set the `errexit` option and all the
code in the script seems to assume that since it doesn't do error
checking. Unfortunately, bash automatically disables `errexit` when
calling a function from a conditional pipeline, like done in the `-s`
case:
if [ "${silent}" == true ]; then
if ! handle_build "${t}" &>"$log_file"; then
^^^^^^
this disables `errexit`
and `handle_build` ends with a `tar tvf` that always succeeds.
Adding error checking all over the place isn't really an option
as it would seriously obfuscate the code. Drop the conditional
pipeline instead and print the final error message from a `trap`
handler on the special ERR signal. This requires the `errtrace`
option as `trap`s aren't propagated to functions by default.
Since all outputs of `handle_build` are redirected to the build
log file, some file descriptor duplication magic is needed for
the handler to be able to write to the original stdout and stderr.
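The resulting pattern is roughly the following (a sketch, not the exact code):

set -o errexit
set -o errtrace   # make the ERR trap fire inside functions too

# keep copies of the original stdout/stderr for the handler
exec 3>&1 4>&2
trap 'echo "ERROR: handle_build failed, see ${log_file}" >&4' ERR

# no conditional pipeline around the call, so errexit stays in effect
handle_build "${t}" &>"${log_file}"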
Fixes: #3757
Signed-off-by: Greg Kurz <groug@kaod.org>
'make kata-tarball' sometimes fails early with:
cp: cannot create regular file '[...]/tools/packaging/kata-deploy/local-build/dockerbuild/install_yq.sh': File exists
This happens because all assets are built in parallel using the same
`kata-deploy-binaries-in-docker.sh` script, and thus all try to copy
the `install_yq.sh` script to the same location with the `cp` command.
This is a well known race condition that cannot be avoided without
serialization of `cp` invocations.
Move the copying of `install_yq.sh` to a separate script and ensure
it is called *before* parallel builds. Make the presence of the copy
a prerequisite for each sub-build so that they still can be triggered
individually. Update the GH release workflow to also call this script
before calling `kata-deploy-binaries-in-docker.sh`.
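The resulting flow looks roughly like this (the helper script name below is illustrative, not necessarily the real one):

# serialize the yq installer copy, then fan out the per-asset builds
./tools/packaging/kata-deploy/local-build/copy-yq-installer.sh
make -j "$(nproc)" kata-tarball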
Fixes: #3756
Signed-off-by: Greg Kurz <groug@kaod.org>
NO_TTY configured whether to add the -t option to docker run. It makes no
sense for the caller to configure this, since whether you need it depends
on the commands you're running. Since the point here is to run
non-interactive build scripts, we don't need -t, or -i either.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Greg Kurz <groug@kaod.org>
This directory consists entirely of files built during a make kata-tarball,
so it should not be committed to the tree. A symbolic link to this directory
might be created during 'make tarball'; ignore it as well.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[greg: - rearranged the subject to make the subsystem checker happy
- also ignore the symbolic link created by
`kata-deploy-binaries-in-docker.sh`]
Signed-off-by: Greg Kurz <groug@kaod.org>
As 2.4.0-rc0 has been released, let's switch the kata-deploy / kata-cleanup
tags back to "latest", and re-add the kata-deploy-stable and the
kata-cleanup-stable files.
Signed-off-by: Eric Ernst <eric_ernst@apple.com>
kata-deploy files must be adapted to a new release. The cases where it
happens are when the release goes from -> to:
* main -> stable:
* kata-deploy-stable / kata-cleanup-stable: are removed
* stable -> stable:
* kata-deploy / kata-cleanup: bump the release to the new one.
There are no changes when doing an alpha release, as the files on the
"main" branch always point to the "latest" and "stable" tags.
Signed-off-by: Eric Ernst <eric_ernst@apple.com>
Let's bump the Cloud Hypervisor version to 5343e09e7b8db, as that brings
a few fixes we're interested in, such as:
* hypervisor, vmm: Handle TDX hypercalls with INVALID_OPERAND
- https://github.com/cloud-hypervisor/cloud-hypervisor/pull/3723
- This is needed for the TDX support on the cloud hypervisor driver,
which is part of this very same series.
* openapi: Update the PciBdf types
- https://github.com/cloud-hypervisor/cloud-hypervisor/pull/3748
- This is needed due to a change in a DeviceNode field, which would
cause a marshalling / demarshalling error when running with a
version of cloud-hypervisor that includes the TDX fixes mentioned
above.
* scripts: dev_cli: Don't quote $features_build
* scripts: dev_cli: Add --features option
- https://github.com/cloud-hypervisor/cloud-hypervisor/pull/3773
- This is needed due to changes in the scripts used to build Cloud
Hypervisor, which are used as part of Kata Containers CIs and
github actions.
Due to this change, we're also adapting the build scripts as part
of this very same commit.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
The kata-deploy Dockerfile is based on CentOS 7, which has no s390x
support. Add an `IMAGE` argument to specify the registry, which still
defaults to CentOS, but e.g. ClefOS can be selected instead.
Other x86_64 assumptions are also removed, and some general simplifications
are made.
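With that in place, building for s390x can simply point at a different base, e.g. (the registry/tag here are only an example):

# the default base stays CentOS; override it where CentOS has no s390x support
docker build \
    --build-arg IMAGE=docker.io/clefos/clefos:7 \
    -t kata-deploy:s390x \
    tools/packaging/kata-deploy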
This does not address the more general issue of #3723 -- what we're
doing here does not seem to be working with systemd >= something between
235-237.
Fixes: #3722
Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
When using kata-deploy, no `containerd-shim-kata-v2` binary is deployed,
but we do deploy a `kata` runtime class, which seems very much
inconsistent.
As the default configuration for kata-containers points to QEMU, let's
also use kata with QEMU as the default shim-v2 binary.
Fixes: #3228, #3734
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Right now we're getting the info for the Cloud Hypervisor repo and
version, but we don't do anything with them, as those are not passed
down to the build script.
Moreover, the build script itself gets the info from exactly the same
place when those are not passed, making those redundant.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Right now TDX support on Cloud Hypervisor is gated behind a "--features
tdx" flag. However, having TDX support enabled should not and does not
impact the general usability of cloud-hypervisor.
As sooner rather than later we'll need the kata-deploy binaries to be tested
on a CI that's TDX capable, for the confidential containers effort, let's
take the bullet and enable it by default already.
By the way, touching kata-deploy-binaries.sh ensures the change
will be used in the following workflows:
* kata-deploy-push
* kata-deploy-test
* release
Fixes: #3688
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
When building a non-stable release, the tag is **always** "latest",
instead of the version. The same magic used for setting the correct
tags should also be applied when replacing the tag on the kata-deploy and
kata-cleanup yaml files, as part of the kata-deploy test.
Fixes: #3559
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>