Let's ensure the node is Ready after the CRI Engine restart; otherwise
we may proceed too early, and scripts may simply fail if they try to
deploy a pod while the CRI Engine has not yet come back up (and,
consequently, the node is not Ready).
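Something along these lines should do, assuming the script has
`kubectl` access and a `NODE_NAME` variable (both assumptions for
illustration):

```bash
# Block until the node reports Ready again after the CRI Engine restart,
# instead of racing ahead and failing on the first pod deployment.
kubectl wait --for=condition=Ready "node/${NODE_NAME}" --timeout=120s
```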
Related: #6649
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
The readinessProbe will help us have the kata-deploy pod marked as
Ready only after it finishes all the needed configuration on the node.
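For illustration, the probe could simply exec a check for a marker that
the deploy script drops once it's done; the marker path below is
hypothetical:

```bash
#!/usr/bin/env bash
# Hypothetical readinessProbe exec command: report Ready only once
# kata-deploy has finished configuring the node and dropped its marker.
[ -f /opt/kata-artifacts/scripts/.kata-deploy-done ] && exit 0
exit 1
```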
Related: #6649
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
As TEEs cannot hotplug memory / CPU, we *must* consider the default
values for those as part of the podOverhead.
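A rough sketch of the accounting, with hypothetical numbers standing in
for the defaults that actually come from the runtime's
configuration.toml:

```bash
# For a TEE guest, memory/CPU cannot be hotplugged in later, so the VM
# is sized up front and its default resources must be billed as overhead.
default_vcpus=1           # hypothetical default from configuration.toml
default_memory_mib=2048   # hypothetical default from configuration.toml
base_overhead_mib=160     # hypothetical static pod overhead
echo "overhead: $(( base_overhead_mib + default_memory_mib ))Mi, $(( default_vcpus * 1000 ))m"
```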
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's make sure we configure containerd for the kata-qemu-tdx handler
and ship the kata-qemu-tdx runtime class for Kubernetes.
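For illustration, the containerd handler and the runtime class could
look like this; the exact snippets kata-deploy generates may differ:

```bash
# Register a containerd runtime handler for the TDX flavour of Kata.
cat <<'EOF' >> /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-qemu-tdx]
  runtime_type = "io.containerd.kata-qemu-tdx.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-qemu-tdx.options]
    ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration-qemu-tdx.toml"
EOF
systemctl restart containerd

# Ship the matching Kubernetes RuntimeClass.
cat <<'EOF' | kubectl apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-qemu-tdx
handler: kata-qemu-tdx
EOF
```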
Fixes: #6537
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's add the needed targets and modifications to be able to build
OVMF for TDX as part of the local-build scripts.
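For illustration, the new target would be driven like the existing
local-build ones; the target name below is an assumption:

```bash
# Build the TDX-capable OVMF tarball via the local-build entry point
# ("ovmf-tdx-tarball" is a hypothetical target name).
make -C tools/packaging/kata-deploy/local-build ovmf-tdx-tarball
```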
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's add the needed targets and modifications to be able to build
kernel-tdx-experimental as part of the local-build scripts.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's create a `install_kernel_helper()` function, as it was already
done for QEMU, and rely on that when calling `install_kernel` and
`install_kernel_dragonball_experimental`.
This helps us to reduce the code duplication by a fair amount.
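A minimal sketch of the shape this takes; `get_from_kata_deps` is the
existing helper from lib.sh, while `build_and_install_kernel`, the flag
strings, and the versions.yaml paths are illustrative:

```bash
# Shared helper: the kernel flavours differ only in the versions.yaml
# entry they read and the extra options they forward to the build.
install_kernel_helper() {
    local kernel_version_path="${1}"   # e.g. "assets.kernel.version"
    local kernel_name="${2}"           # e.g. "kernel-dragonball-experimental"
    local extra_cli="${3:-}"           # extra options for the kernel build

    local kernel_version
    kernel_version="$(get_from_kata_deps "${kernel_version_path}")"

    build_and_install_kernel "${kernel_name}" "${kernel_version}" ${extra_cli}
}

install_kernel() {
    install_kernel_helper "assets.kernel.version" "kernel"
}

install_kernel_dragonball_experimental() {
    install_kernel_helper \
        "assets.kernel-dragonball-experimental.version" \
        "kernel-dragonball-experimental" \
        "-e -t dragonball"
}
```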
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's add the needed targets and modifications to be able to build
qemu-tdx-experimental as part of the local-build scripts.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
As the dragonball kernel is shipped as part of our releases, it must be
added to the `all` target.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
In order to make it easier to read, let's just rename the
install_dragonball_experimental_kernel and install_experimental_kernel
to install_kernel_dragonball_experimental and
install_kernel_experimental, respectively.
This allows us to quickly get to those functions when looking for
`install_kernel`.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's just print "to the registry" instead of printing "to quay.io", as
the registry used is not tied to quay.io.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
The kata-deploy install method tried to `chmod +x /opt/kata/runtime-rs/bin/*`,
but it isn't always true that /opt/kata/runtime-rs/bin/ exists; for
example, the s390x payload does not build the kernel-dragonball-experimental
artifacts. So let's ensure the directory exists before issuing the command.
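Something as simple as this guard does the trick:

```bash
# Only fix permissions when the runtime-rs binaries were actually
# shipped in this payload (e.g. s390x builds don't include them).
if [ -d /opt/kata/runtime-rs/bin ]; then
    chmod +x /opt/kata/runtime-rs/bin/*
fi
```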
Fixes: #6494
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Let's adjust the kernel names in versions.yaml so those can match the
names used as part of the kata-deploy local build scripts.
Right now this doesn't bring any benefit or drawback, but it'll make
our lives easier later on in this same series.
Depends-on: github.com/kata-containers/tests#5534
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This is used in several parts of the code, and can have a single
declaration as part of the `lib.sh` file, which is already imported by
all the places where it's used.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Now that we've switched the base container image to using Ubuntu instead
of CentOS, we don't need any kind of extra logic to correctly build the
image for different architectures, as Ubuntu is a multi-arch image that
supports all the architectures we're targeting.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's make sure we use a multi-arch image for building kata-deploy.
A few changes were also added in order to get systemd working inside the
kata-deploy image, due to the switch from CentOS to Ubuntu.
Fixes: #6358
Signed-off-by: SinghWang <wangxin_0611@126.com>
As part of bd1ed26c8d, we've pointed to
the Dockerfile that's used in the CC branch, which is wrong.
For what we're doing on main, we should be pointing to the one under the
`kata-deploy` folder, and not the one under the non-existent
`kata-deploy-cc` one.
Fixes: #6343
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
As the image provided as part of registry.centos.org is not a multi-arch
one, at least not for CentOS 7, we need to expand the script used to
build the image to pass images that are known to work for s390x (ClefOS)
and aarch64 (CentOS, but coming from Docker Hub).
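A sketch of the per-architecture selection; the exact image references
are illustrative:

```bash
# Pick a base image that's known to work for the target architecture.
case "$(uname -m)" in
    s390x)   BASE_IMAGE_NAME="docker.io/clefos/clefos"; BASE_IMAGE_TAG="7" ;;
    aarch64) BASE_IMAGE_NAME="docker.io/library/centos"; BASE_IMAGE_TAG="7" ;;
    *)       BASE_IMAGE_NAME="registry.centos.org/centos"; BASE_IMAGE_TAG="7" ;;
esac
```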
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's break the IMAGE build parameter into BASE_IMAGE_NAME and
BASE_IMAGE_TAG, as that makes it easier to replace the default CentOS
image with something else.
Spoiler alert: the default CentOS image is **not** multi-arch, and we do
want to support at least aarch64 and s390x in the near future.
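For illustration, the build invocation then becomes something like this,
assuming the Dockerfile consumes the two values via `ARG` and
`FROM ${BASE_IMAGE_NAME}:${BASE_IMAGE_TAG}`:

```bash
# Build kata-deploy against an explicitly chosen base image.
docker build \
    --build-arg BASE_IMAGE_NAME="registry.centos.org/centos" \
    --build-arg BASE_IMAGE_TAG="7" \
    -t kata-deploy .
```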
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
For the architectures where we know `make kata-tarball` works as
expected, let's start publishing the kata-deploy payload after each
merge.
This will help to:
* Easily test the content of current `main` or `stable-*` branch
* Easily bisect issues
* Start providing some sort of CI/CD content pipeline for those who
need that
This is forward-port work from the `CCv0` branch, grouping together the
patches that I've worked on with the work that Choi did in order to
support different architectures.
Fixes: #6343
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This, combined with the effort of caching builder images *and*
performing the build itself only inside the builder images, is the very
first step towards reproducible builds for the project.
Reproducible builds are quite important when we talk about Confidential
Containers, as users may want to verify the content used / provided by
the CSPs, and this is the first step in that direction.
Fixes: #5517
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This function will push a specific tag to a registry whenever the
PUSH_TO_REGISTRY environment variable is set; otherwise it's a no-op.
This will be used in the future to avoid replicating that logic in every
builder used by the kata-deploy scripts.
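A minimal sketch of such a helper; the docker invocation is
illustrative:

```bash
# Push the given tag, but only when the caller explicitly opted in.
push_to_registry() {
    local tag="${1}"

    if [ -n "${PUSH_TO_REGISTRY:-}" ]; then
        docker push "${tag}"
    fi
}
```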
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
As 3.1.0-rc0 has been released, let's switch the kata-deploy / kata-cleanup
tags back to "latest", and re-add the kata-deploy-stable and the
kata-cleanup-stable files.
Signed-off-by: Archana Shinde <archana.m.shinde@intel.com>
The kata-deploy files must be adapted for a new release. The cases
where this happens are when the release goes from:
* main -> stable:
  * kata-deploy-stable / kata-cleanup-stable: are removed
* stable -> stable:
  * kata-deploy / kata-cleanup: bump the release to the new one
There are no changes when doing an alpha release, as the files on the
"main" branch always point to the "latest" and "stable" tags.
Signed-off-by: Archana Shinde <archana.m.shinde@intel.com>
We already have verbose output while merging the builds from the
various build targets. Let's get rid of the verbose output to speed
things up.
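For illustration, this boils down to dropping the `v` from the tar
flags used in the merge step (paths illustrative):

```bash
# Merge one build target's tarball into the final destination quietly;
# no per-file output means less log noise and a faster merge.
tar -xf "build/kata-static-${target}.tar.xz" -C "${destdir}"
```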
Signed-off-by: Archana Shinde <archana.m.shinde@intel.com>
This pins the Docker version to v20.10 in the upstream ubuntu:20.04
Docker image for s390x and ppc64le.
Fixes: #6211
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
As Chao Wu added support for building the dragonball kernel as a new
experimental kernel, let's make sure we reflect that as part of the
kata-deploy build scripts.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
If a kata-deploy pod is running on a machine and the machine restarts,
the kata-deploy pod's status will be CrashLoopBackOff.
Fixes: #5868
Signed-off-by: SinghWang <wangxin_0611@126.com>
When moving to building the CI artifacts using the kata-deploy scripts,
we've noticed that the build would fail on any machine where the tarball
wasn't officially provided.
This happens as rust is missing from the 1st layer container. However,
it's a very common practice to leave the 1st layer container with the
minimum possible dependencies and install whatever is needed for
building a specific component in a 2nd layer container, which virtiofsd
never had.
In this commit we introduce the second layer containers (yes,
containers, plural): one for building virtiofsd using musl, and one for
building virtiofsd using glibc. The reason for taking this approach was
to actually simplify the scripts and avoid building the dependencies
(libseccomp, libcap-ng) using musl libc.
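A rough sketch of the pattern, with the image name, script name, and
flag all being illustrative:

```bash
# The 1st layer stays minimal; the actual build runs in a 2nd layer
# container that carries the rust toolchain.
builder_image="virtiofsd-builder-musl:latest"
docker run --rm \
    -v "$(pwd):/kata" -w /kata \
    "${builder_image}" \
    ./build-virtiofsd.sh --libc musl
```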
Fixes: #5425
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
In order to ensure that the proxy configuration is passed to the 2nd
layer container, let's ensure the $HOME/.docker/config.json file is
exposed inside the 1st layer container.
For some reason, which I still don't fully understand, exporting
https_proxy / http_proxy / no_proxy was not enough to get those
variables exported to the 2nd layer container.
In this commit we create the "$HOME/.docker" directory in case it
doesn't exist yet, and remove it after the build. We do this to avoid
docker failing to run when "$HOME/.docker" doesn't exist.
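A minimal sketch of that dance; the builder image name is illustrative:

```bash
# Make sure $HOME/.docker exists so the bind mount below never fails,
# remembering whether we created it so we can clean up afterwards.
created_docker_dir=""
if [ ! -d "${HOME}/.docker" ]; then
    mkdir -p "${HOME}/.docker"
    created_docker_dir="yes"
fi

docker run --rm \
    -v "${HOME}/.docker:/root/.docker" \
    first-layer-builder:latest

[ -n "${created_docker_dir}" ] && rm -rf "${HOME}/.docker"
```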
This was not tested with podman, but if there's an issue with podman,
the issue was already there beforehand and should be treated as a
different problem than the one addressed in this commit.
Fixes: #5077
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
As 3.0.0-rc0 has been released, let's switch the kata-deploy / kata-cleanup
tags back to "latest", and re-add the kata-deploy-stable and the
kata-cleanup-stable files.
Signed-off-by: Peng Tao <bergwolf@hyper.sh>