Add http_proxy and https_proxy to the docker build arguments so the
image builds properly when we are behind a proxy.
Fixes: #6834
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
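As an illustration, forwarding the proxy settings to the build could look
roughly like this (the image tag below is a placeholder, not the one used
by the scripts):

    # Pass the host proxy settings through to the build; empty values are
    # forwarded if the variables are not set in the environment.
    docker build \
        --build-arg http_proxy="${http_proxy:-}" \
        --build-arg https_proxy="${https_proxy:-}" \
        -t kata-deploy-example .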
There's absolutely no reason to ship the kata-static tarball as part of
the payload image, as:
* The tarball is already part of the release process
* The payload image already has the uncompressed content of the tarball
* The tarball itself is not used anywhere by the kata-deploy scripts
Fixes: #6828
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
There's no reason for us to build firecracker instead of simply
downloading the official release tarball, as tarballs are provided for
the architectures we want to use it on.
Fixes: #6770
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
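For illustration, the download boils down to something like the following
sketch (the version is an example and the exact asset naming is an
assumption based on the upstream release page):

    version="1.3.3"     # example only; the real value comes from versions.yaml
    arch="$(uname -m)"  # official tarballs are published for x86_64 and aarch64
    curl -fsSL -O "https://github.com/firecracker-microvm/firecracker/releases/download/v${version}/firecracker-v${version}-${arch}.tgz"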
When building on aarch64, just use "arm64", as that's what's used in
the names of the released nydus tarballs.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
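A minimal sketch of the mapping (the nydus tarball name is illustrative,
not a pinned value):

    arch="$(uname -m)"
    # the released nydus tarballs use "amd64"/"arm64", not "x86_64"/"aarch64"
    [ "${arch}" = "aarch64" ] && arch="arm64"
    tarball="nydus-static-v2.1.0-linux-${arch}.tgz"   # example name only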
There's no need to build Cloud Hypervisor for aarch64 as, for a few
releases already, Cloud Hypervisor has provided an official release
binary for that architecture.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
`uname -m` produces `x86_64`, but container image convention
is to use `amd64`, so update this in the tag
Fixes: #6820
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
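A small sketch of the mapping used for the tag (the tag itself is
hypothetical):

    arch="$(uname -m)"
    case "${arch}" in
        x86_64)  arch="amd64" ;;
        aarch64) arch="arm64" ;;
    esac
    tag="kata-deploy:latest-${arch}"   # hypothetical tag, for illustration only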
This reverts commit 5ec9ae0f04, for two
main reasons:
* I misinterpreted the readinessProbe when working on the original PR
* It's actually causing issues, as the pod ends up marked as not
healthy.
All the kernel-foo instances, such as "kernel-sev" or "kernel-snp",
should be transformed into "kernel.foo" when looking at the
versions.yaml file.
This was already done for SEV, but was missed for the SNP case.
Fixes: #6777
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
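A minimal sketch of the transformation, assuming the get_from_kata_deps
helper and a versions.yaml layout like assets.kernel.snp.version:

    kernel_name="kernel-snp"                # e.g. "kernel-sev" or "kernel-snp"
    yaml_entry="assets.${kernel_name/-/.}"  # -> "assets.kernel.snp"
    # helper name assumed from the repo's build scripts
    version="$(get_from_kata_deps "${yaml_entry}.version")"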
We were passing "kernel-nvidia-gpu-tdx", missing the "-experimental"
part, leading to an invalid URL.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
c9bf7808b6 introduced the logic to
properly get the version of the nvidia-gpu kernels, but one important
part was dropped during the rebase onto main: actually getting the
correct version of the kernel.
Fix this now, using the old issue as reference.
Fixes: #6777
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
We need to make sure that, when caching a `-nvidia-gpu` kernel, we still
look at the version of the base kernel used to build the nvidia-gpu
drivers, as the ${vendor}-gpu kernels are based on already existing
entries in the versions.yaml file and do not require a new entry to be
added.
Fixes: #6777
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
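A sketch of the idea, under the same assumptions about get_from_kata_deps
and the versions.yaml layout:

    kernel_flavour="kernel-nvidia-gpu-snp"   # example flavour
    # Drop "-nvidia-gpu" so we look at the base kernel entry, e.g.
    # "kernel-nvidia-gpu-snp" -> "kernel-snp" -> "assets.kernel.snp"
    base_flavour="${kernel_flavour/-nvidia-gpu/}"
    version="$(get_from_kata_deps "assets.${base_flavour/-/.}.version")"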
A few pieces of the local-build tooling are supposed to be
alphabetized. Fix up a couple of minor ordering issues that have
accumulated.
Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
Now that we have the SNP components in place, make sure that
kata-deploy knows about the qemu-snp shim so that it will be
added to containerd config.
Fixes: #6575
Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
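Roughly, the handler that ends up in the containerd config looks like the
snippet below; treat it as a sketch rather than the literal output of the
kata-deploy script, and note the ConfigPath assumes the standard
/opt/kata layout:

    # Append a kata-qemu-snp runtime handler to the containerd config.
    printf '%s\n' \
        '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-qemu-snp]' \
        '  runtime_type = "io.containerd.kata-qemu-snp.v2"' \
        '  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-qemu-snp.options]' \
        '    ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration-qemu-snp.toml"' \
        >> /etc/containerd/config.toml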
Since SEV-SNP has limited hotplug support, increase
the pod overhead to account for fixed resource usage.
Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
Add targets to build the "plain" x86_64 OVMF.
This will be used by anyone who is using SEV or SNP
without kernel hashes. The SNP QEMU does not yet
support kernel hashes, so the OvmfPkg build will be used
by default.
Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
Signed-off-by: Alex Carter <Alex.Carter@ibm.com>
Refactor SNP QEMU entry in versions.yaml to match
qemu-experimental and qemu-tdx-experimental.
Also, update the version of QEMU to what we are using
in CCv0. This is the non-UPM QEMU and it does not
have kernel hashes support.
Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
Signed-off-by: Alex Carter <Alex.Carter@ibm.com>
Add Make targets and helper functions to build the QEMU
needed for SEV-SNP.
Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
Signed-off-by: Alex Carter <Alex.Carter@ibm.com>
SEV requires a special OVMF.
Now that we have the ability to build this custom OVMF, let's optimize
by caching it so that we don't have to build it for every run.
Fixes: #6572
Signed-off-by: Unmesh Deodhar <udeodhar@amd.com>
The SEV initrd build requires kernel modules.
So, for the SEV case, we need to cache the kernel modules tarball in
addition to the kernel tarball.
Fixes: #6572
Signed-off-by: Unmesh Deodhar <udeodhar@amd.com>
SEV requires a special OVMF to work with kernel hashes.
Thus, add the changes that build this custom OVMF for SEV.
Fixes: #6572
Signed-off-by: Unmesh Deodhar <udeodhar@amd.com>
We need a special initrd for SEV. The work on the SEV initrd is based
on Ubuntu, thus add another entry to versions.yaml.
This binary will have a '-sev' suffix to distinguish it from the
generic binary.
Fixes: #6572
Signed-off-by: Unmesh Deodhar <udeodhar@amd.com>
Last but not least, add the containerd shim configuration
pointing to the correct configuration-<shim>.toml.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
With each release, make sure we ship a GPU and TEE enabled kernel.
This adds tdx-experimental kernel support.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
TDVF caching is not working as the tarball name is incorrect. The
expected result is kata-static-tdvf.tar.xz, but the caching is looking
for kata-static-tdx.tar.xz.
This happens because the logic to convert tdx -> tdvf was added as part
of the build scripts, but I missed doing the same as part of the
caching scripts.
Fixes: #6669
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
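A sketch of the kind of conversion the caching script was missing:

    flavour="tdx"   # the OVMF flavour being cached
    # mirror the build-script conversion so the cache looks for the
    # tarball name the build actually produces
    [ "${flavour}" = "tdx" ] && flavour="tdvf"
    tarball_name="kata-static-${flavour}.tar.xz"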
TDX QEMU caching is not working as expected, as we're checking its
version by looking at "assets.hypervisor.${QEMU_FLAVOUR}.version",
which is correct for standard QEMU. However, for the TDX QEMU we should
be checking "assets.hypervisor.${QEMU_FLAVOUR}.tag" instead.
Fixes: #6668
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
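A minimal sketch of the check, again assuming the get_from_kata_deps
helper used elsewhere in the tooling:

    QEMU_FLAVOUR="qemu-tdx-experimental"   # example flavour
    if [ "${QEMU_FLAVOUR}" = "qemu" ]; then
        version="$(get_from_kata_deps "assets.hypervisor.${QEMU_FLAVOUR}.version")"
    else
        # the TDX QEMU is pinned by tag rather than by version
        version="$(get_from_kata_deps "assets.hypervisor.${QEMU_FLAVOUR}.tag")"
    fi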
Let's ensure the node is Ready after the CRI Engine restart, otherwise
we may proceed and scripts may simply fail if they try to deploy a pod
while the CRI Engine has not yet finished restarting (and,
consequently, the node is not Ready).
Related: #6649
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
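For illustration, the wait can be as simple as the following; NODE_NAME
is an assumption standing in for however the script identifies its node:

    # block until the node reports Ready again after the CRI engine restart
    kubectl wait --for=condition=Ready --timeout=10m "node/${NODE_NAME}"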
A readinessProbe will help us have the kata-deploy pod marked as Ready
only once it has finished all the needed configuration on the node.
Related: #6649
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
As TEEs cannot hotplug memory / CPU, we *must* consider the default
values for those as part of the podOverhead.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
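A back-of-the-envelope sketch of how the overhead is derived; all
figures are examples, not the values we actually ship:

    # The TEE guest gets its default memory and vCPUs at boot and they can
    # never be hot-unplugged, so the overhead must cover them in full.
    default_memory_mib=2048    # example default_memory from the shim configuration
    default_vcpus=1            # example default_vcpus from the shim configuration
    fixed_runtime_mem_mib=160  # assumed fixed runtime overhead
    pod_overhead_mem_mib=$(( default_memory_mib + fixed_runtime_mem_mib ))
    pod_overhead_cpu_milli=$(( default_vcpus * 1000 + 250 ))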
If conf_guest is set, we need to update CONFIG_LOCALVERSION to match
the suffix created in install_kata, i.e. -nvidia-gpu-{snp|tdx}. The
Linux headers will be named the very same way if built with make
deb-pkg for TDX or SNP.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
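A sketch of how that can be done with the kernel's own scripts/config
helper; the conf_guest variable name follows the commit message and is
otherwise an assumption:

    # keep the kernel's local version in sync with the install suffix;
    # conf_guest would be "snp" or "tdx" here
    if [ -n "${conf_guest:-}" ]; then
        ./scripts/config --file .config \
            --set-str LOCALVERSION "-nvidia-gpu-${conf_guest}"
    fi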
Let's make sure we configure containerd for the kata-qemu-tdx handler
and ship the kata-qemu-tdx runtime class for kubernetes.
Fixes: #6537
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's add the ability to cache OVMF, which right now we're only
building and shipping for TDX.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's add the needed targets and modifications to be able to build
OVMF for TDX as part of the local-build scripts.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's update the OVMF for TDX version to the latest release of the
Intel TDX tools tested with Kata Containers.
This change requires a newer version of `nasm` than the one provided by
the container used to build the project. This change will also be
needed for SEV-SNP and was originally done by Alex Carter (thanks!).
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Signed-off-by: Alex Carter <Alex.Carter@ibm.com>
As we'll be using this from different places in the near future, let's
create a helper function as part of the libs.sh.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>