Let's ensure the node is Ready after the CRI Engine restart; otherwise
we may proceed too early and scripts may simply fail if they try to
deploy a pod while the CRI Engine has not yet been restarted (and,
consequently, the node is not Ready).
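A minimal sketch of the kind of wait this introduces, assuming the
deploy script has kubectl access and a NODE_NAME environment variable
(both are assumptions here):

    # Hypothetical sketch: block until the node reports Ready again after
    # the CRI Engine has been restarted.
    wait_for_node_ready() {
        kubectl wait --for=condition=Ready "node/${NODE_NAME}" --timeout=10m
    }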
Related: #6649
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
A readinessProbe will help us to only have the kata-deploy pod marked
as Ready once it has finished all the needed configuration on the node.
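The probe itself can then be as simple as an exec check for a marker
file that the script drops once it's done; a minimal sketch, where the
marker path is an assumption:

    # Hypothetical readinessProbe exec command: succeed only once the node
    # configuration has finished.
    if [ -f /run/kata-deploy/node-configured ]; then
        exit 0
    fi
    exit 1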
Related: #6649
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
As TEEs cannot hotplug memory / CPU, we *must* consider the default
values for those as part of the podOverhead.
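For illustration, this is roughly how it ends up looking in the runtime
class definition; the numbers below are placeholders rather than the
values we actually ship:

    # Hypothetical sketch: fold the non-hotpluggable default guest memory
    # and vCPUs into the RuntimeClass overhead; the values are placeholders.
    printf '%s\n' \
        'apiVersion: node.k8s.io/v1' \
        'kind: RuntimeClass' \
        'metadata:' \
        '  name: kata-qemu-tdx' \
        'handler: kata-qemu-tdx' \
        'overhead:' \
        '  podFixed:' \
        '    memory: "2048Mi"' \
        '    cpu: "250m"' \
        | kubectl apply -f -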
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
If conf_guest is set, we need to update CONFIG_LOCALVERSION to match
the suffix created in install_kata, i.e. -nvidia-gpu-{snp|tdx}. The
Linux headers will then be named exactly the same when built with
make deb-pkg for TDX or SNP.
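Roughly along these lines (the variable names are assumptions, the
suffix is the one mentioned above):

    # Hypothetical sketch: keep CONFIG_LOCALVERSION in sync with the
    # install_kata suffix, so `make deb-pkg` produces matching header names.
    # conf_guest ("snp" or "tdx") and kernel_path are assumed to be set.
    if [ -n "${conf_guest}" ]; then
        sed -i "s|^CONFIG_LOCALVERSION=.*|CONFIG_LOCALVERSION=\"-nvidia-gpu-${conf_guest}\"|" \
            "${kernel_path}/.config"
    fi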
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Let's make sure we configure containerd for the kata-qemu-tdx handler
and ship the kata-qemu-tdx runtime class for Kubernetes.
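The two pieces involved look roughly like this; the file path and
section layout follow the usual kata-deploy conventions, but treat the
snippet as a sketch:

    # Hypothetical sketch: register the kata-qemu-tdx handler with
    # containerd...
    printf '%s\n' \
        '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-qemu-tdx]' \
        '  runtime_type = "io.containerd.kata-qemu-tdx.v2"' \
        >> /etc/containerd/config.toml
    systemctl restart containerd

    # ...and ship the matching runtime class to Kubernetes.
    printf '%s\n' \
        'apiVersion: node.k8s.io/v1' \
        'kind: RuntimeClass' \
        'metadata:' \
        '  name: kata-qemu-tdx' \
        'handler: kata-qemu-tdx' \
        | kubectl apply -f -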
Fixes: #6537
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's add the ability to cache OVMF, which right now we're only
building and shipping for TDX.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's add the needed targets and modifications to be able to build
OVMF for TDX as part of the local-build scripts.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's update the OVMF for TDX version to the latest release of the
Intel TDX tools that has been tested with Kata Containers.
This change requires a newer version of `nasm` than the one provided by
the container used to build the project. This change will also be
needed for SEV-SNP and was originally done by Alex Carter (thanks!).
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Signed-off-by: Alex Carter <Alex.Carter@ibm.com>
As we'll be using this from different places in the near future, let's
create a helper function as part of the libs.sh.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's make users of cache_components_main.sh aware that they can also
cache the kernel-tdx-experimental builds.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's add the needed targets and modifications to be able to build
kernel-tdx-experimental as part of the local-build scripts.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's create an `install_kernel_helper()` function, as was already done
for QEMU, and rely on it when calling `install_kernel` and
`install_kernel_dragonball_experimental`.
This helps us to reduce the code duplication by a fair amount.
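A minimal sketch of the shape of that helper; get_from_kata_deps,
kernel_builder, destdir and prefix are assumed to be provided by the
surrounding script, and the flags shown are illustrative only:

    # Hypothetical sketch of the shared helper and its callers.
    install_kernel_helper() {
        local kernel_name="${1}"      # e.g. "kernel" or "kernel-dragonball-experimental"
        local extra_cli_args="${2:-}"

        local kernel_version
        kernel_version="$(get_from_kata_deps "assets.${kernel_name}.version")"

        DESTDIR="${destdir}" PREFIX="${prefix}" \
            "${kernel_builder}" -v "${kernel_version}" ${extra_cli_args} install
    }

    install_kernel() {
        install_kernel_helper "kernel"
    }

    install_kernel_dragonball_experimental() {
        install_kernel_helper "kernel-dragonball-experimental" "-e -t dragonball"
    }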
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's update the Kernel TDX version to the latest release of the Intel
TDX tools that has been tested with Kata Containers.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's do what we already did when caching the kernel, and allow passing
a FLAVOUR of the project to build.
By doing this we can re-use the same function used to cache QEMU to also
cache any kind of experimental QEMU that we may happen to have.
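Something along these lines, with the function, helper and tarball
names all being assumptions:

    # Hypothetical sketch: the flavour simply parameterises which
    # local-build target gets built and which tarball gets cached.
    cache_qemu_artifacts() {
        local flavour="${1:-qemu}"   # e.g. "qemu" or "qemu-tdx-experimental"

        make "${flavour}-tarball"                   # kata-deploy local-build target
        cache_file "kata-static-${flavour}.tar.xz"  # cache_file is hypothetical
    }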
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's add the needed targets and modifications to be able to build
qemu-tdx-experimental as part of the local-build scripts.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's make sure the `qemu_suffix` and `qemu_tarball_name` can be
specified. With this we make it really easy to reuse this script for
any additional flavour of an experimental QEMU that ends up having to be
built (specifically looking at the ones for Confidential Containers
here).
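In practice this just means the two variables get overridable defaults,
roughly:

    # Hypothetical sketch: let callers override the suffix and the tarball
    # name; the defaults shown here are assumptions.
    qemu_suffix="${qemu_suffix:-}"
    qemu_tarball_name="${qemu_tarball_name:-kata-static-qemu${qemu_suffix}.tar.gz}"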
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's update the QEMU TDX version to the latest release of the Intel
TDX tools that has been tested with Kata Containers.
In order to do this update, we had to relax the checks on the QEMU
version for some of the configuration options, as those options were
removed right after the 7.1.0 development window was opened (hence the
7.0.50 check).
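The relaxed guard looks roughly like this; the flag below is a
placeholder, not one of the actual options that were touched:

    # Hypothetical sketch: only pass the removed flag to QEMU versions that
    # still know about it, i.e. anything older than 7.0.50.
    version_lt() {
        [ "$(printf '%s\n%s\n' "${1}" "${2}" | sort -V | head -n1)" = "${1}" ] \
            && [ "${1}" != "${2}" ]
    }

    if version_lt "${qemu_version}" "7.0.50"; then
        qemu_options+=" --some-removed-option"   # placeholder flag
    fi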
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
As the dragonball kernel is shipped as part of our releases, it must be
added to the `all` target.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
In order to make it easier to read, let's just rename the
install_dragonball_experimental_kernel and install_experimental_kernel
to install_kernel_dragonball_experimental and
install_kernel_experimental, respectively.
This allows us to quickly get to those functions when looking for
`install_kernel`.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's just print "to the registry" instead of printing "to quay.io", as
the registry used is not tied to quay.io.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Two different kernel build targets (build, install) both contain
instructions to build the kernel, hence the kernel was built twice.
Install should only do the install and build should only do the build.
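A sketch of the intended split; the details of how the script drives
the kernel Makefile are assumptions here:

    # Hypothetical sketch: "build" compiles, "install" only installs what
    # was already built.
    case "${subcmd}" in
        build)
            make -C "${kernel_path}" -j "$(nproc)"
            ;;
        install)
            # no rebuild here, just copy the already-built image out
            install -D "${kernel_path}/vmlinux" "${install_path}/vmlinux"
            ;;
    esac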
Fixes: #6588
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
The kata-deploy install method tried to `chmod +x
/opt/kata/runtime-rs/bin/*`, but it isn't always true that
/opt/kata/runtime-rs/bin/ exists. For example, the s390x payload does
not build the kernel-dragonball-experimental artifacts. So let's ensure
the dir exists before issuing the command.
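i.e. guarding the chmod along these lines:

    # Only touch the directory if this payload actually shipped it.
    if [ -d /opt/kata/runtime-rs/bin ]; then
        chmod +x /opt/kata/runtime-rs/bin/*
    fi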
Fixes: #6494
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
The function is returning "" when called from the script used to cache
the artefacts. One difference noted between this version and the
already working one from the CCv0 branch is that the CCv0 version makes
sure to `pushd ${repo_root_dir}`.
Let's give it a try here and see if it solves the issue.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's add support for caching VirtioFS artefacts that are generated using
the kata-deploy local-build scripts.
Right now those are not used, but we'll switch to using them very soon
as part of upcoming changes to how we build the components we test in
our CI.
Fixes: #6480
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
Let's add support for caching shim v2 artefacts that are generated using
the kata-deploy local-build scripts.
Right now those are not used, but we'll switch to using them very soon
as part of upcoming changes to how we build the components we test in
our CI.
Fixes: #6480
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
Let's add support for caching RootFS artefacts that are generated using
the kata-deploy local-build scripts.
Right now those are not used, but we'll switch to using them very soon
as part of upcoming changes to how we build the components we test in
our CI.
Fixes: #6480
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
Let's add support for caching QEMU artefacts that are generated using
the kata-deploy local-build scripts.
Right now those are not used, but we'll switch to using them very soon
as part of upcoming changes to how we build the components we test in
our CI.
Fixes: #6480
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
Let's add support for caching Nydus artefacts that are generated using
the kata-deploy local-build scripts.
Right now those are not used, but we'll switch to using them very soon
as part of upcoming changes to how we build the components we test in
our CI.
Fixes: #6480
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
Let's add support for caching Kernel artefacts that are generated using
the kata-deploy local-build scripts.
Right now those are not used, but we'll switch to using them very soon
as part of upcoming changes to how we build the components we test in
our CI.
Fixes: #6480
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
Let's add support for caching Firecracker artefacts that are generated
using the kata-deploy local-build scripts.
Right now those are not used, but we'll switch to using them very soon
as part of upcoming changes to how we build the components we test in
our CI.
Fixes: #6480
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
Let's add support for caching Cloud Hypervisor artefacts that are
generated using the kata-deploy local-build scripts.
Right now those are not used, but we'll switch to using them very soon
as part of upcoming changes to how we build the components we test in
our CI.
Fixes: #6480
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
Let's adjust the kernel names in versions.yaml so they match the names
used as part of the kata-deploy local build scripts.
Right now this doesn't bring any benefit or drawback, but it'll make
our life easier later on in this same series.
Depends-on: github.com/kata-containers/tests#5534
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
There's no need to pass repo_root_dir to get_last_modification() as the
variable used everywhere is exported from that very same file.
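A sketch of the simplified shape (the body is illustrative, not the
exact implementation):

    # repo_root_dir is exported by this same file, so it no longer needs
    # to be passed in as a parameter.
    get_last_modification() {
        local file="${1}"

        pushd "${repo_root_dir}" > /dev/null
        git log -1 --pretty=format:"%H" -- "${file}"
        popd > /dev/null
    }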
Fixes: #6431
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This is used in several parts of the code, and can have a single
declaration as part of the `lib.sh` file, which is already imported by
all the places where it's used.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>