We should use `${}` (parameter expansion) instead of `$()` (command
substitution) when passing the component version to the
install_cached_component() function.
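For illustration, a minimal sketch of the difference (the variable name
here is hypothetical):

    component_version="1.2.3"
    install_cached_component "${component_version}"  # parameter expansion: passes "1.2.3"
    install_cached_component "$(component_version)"  # command substitution: tries to run a
                                                     # command named "component_version" and,
                                                     # on failure, passes an empty string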
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
asserts -> assets
stastic -> static
Those were not caught during the first merge of the series as we didn't
have CI jobs testing for the TEE artefacts.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
We're currently building Cloud Hypervisor with the TDX feature
regardless of whether TDX is being used or not.
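Roughly, the intended behaviour is the following, assuming Cloud
Hypervisor's "tdx" cargo feature and an illustrative ${variant}
variable (not the actual script):

    # Only enable the TDX feature when actually building for TDX
    if [ "${variant}" = "tdx" ]; then
        cargo build --release --features tdx
    else
        cargo build --release
    fi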
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
The name of the asset was wrong, "cloud-hypervisor" instead of
"hypervisor.cloud_hypervisor", generating an empty "latest" file.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
The final cloud hypervisor tarball name is either
kata-static-cc-cloud-hypervisor.tar.xz or
kata-static-cc-tdx-cloud-hypervisor.tar.xz, meaning it uses
"cloud-hypervisor" instead of "clh" in the name.
Fixes: #5816
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's check for the cached version of the components as part of
kata-deploy-binaries.sh, as there we already have the info needed to
check whether a component is cached or not, and we can use it without
depending on changes made to each one of the builder scripts.
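Something along these lines, as a rough sketch (the helper and variable
names are illustrative, not the actual script; install_cached_component
and the "latest" file are from this series):

    get_cached_component_version() {
        # hypothetical helper: the cache is assumed to publish a "latest" file
        curl -sfL "${cache_base_url}/${component}/latest" || echo "none"
    }

    if [ "$(get_cached_component_version)" = "${component_version}" ]; then
        install_cached_component "${component_version}"
    else
        build_component "${component}"
    fi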
Fixes: #5816
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Instead of caching files generated during the component build, let's
cache the final tarball generated for each component.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's do this as the component name will be re-used later on, when we
start checking whether a cached component needs to be rebuilt or not.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This is used in several parts of the code, and can have a single
declaration as part of the `lib.sh` file, which is already imported by
all the places where it's used.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
We're going to use this function from different places, so we'd better
move it to lib.sh and avoid duplicating it.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
If you're directly using the output of this function, the info message
will show up as part of the string, and that's not what we want.
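A minimal sketch of the gotcha, assuming info() previously wrote to
stdout (the function and tarball names are illustrative):

    info() { echo "INFO: $*"; }      # writes to stdout

    latest_artefact() {
        info "fetching the latest artefact"
        echo "kata-static-cc-cloud-hypervisor.tar.xz"
    }

    tarball="$(latest_artefact)"     # captures the INFO line as part of the result

    # The fix: send informational messages to stderr instead
    info() { echo "INFO: $*" >&2; }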
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Change the if statement to check whether the CWD is set to /.
Add unit tests for the correct merging of the working directory
in the container and image process.
Note: there is an outstanding question about one test case.
Format code.
Fixes: #5721
Co-authored-by: stevenhorsman <steven@uk.ibm.com>
Signed-off-by: Jordan Jackson <jordan.jackson@ibm.com>
Now that we're caching the kernel, we're relying on the kernel version
being exported. This is already done for the CC kernel, but not for the
TEE-specific ones.
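Roughly what's needed for the TEE kernels as well, sketched here with
kata's get_from_kata_deps helper (the exact versions.yaml path is an
assumption):

    kernel_version="$(get_from_kata_deps "assets.kernel.tdx.version")"  # hypothetical path
    export kernel_version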
Fixes: #5770
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Loop through the image's environment variables, checking whether each
one already exists in the target; if it does, do not append it.
Add unit tests for correctly merging the environment variables of the
pod YAML and the image itself in the container and image process.
Format code.
Fixes: #5730
Signed-off-by: Jordan Jackson <jordan.jackson@ibm.com>
As we bumped the containerd dependency to v1.6.8, let's also re-vendor
its code on the runtime side.
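For reference, the usual Go re-vendoring workflow looks like the
following, run from the runtime module (whether exactly these steps
were used is an assumption):

    go get github.com/containerd/containerd@v1.6.8
    go mod tidy
    go mod vendor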
Fixes: #5745
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
If the attestation-agent is used, then enable image_client_auth so that
we attempt to get registry credentials for the pull.
Fixes: #5652
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
CONFIG_X86_SGX was introduced in kernel 5.11, and that config is part
of the default x86_64 config for Kata's build-kernel.sh script.
But using -v to specify any kernel version below 5.11 causes an
inevitable error, because CONFIG_X86_SGX is not supported in older
kernels; that's a problem whenever we need a kernel version below 5.11.
So I propose putting CONFIG_X86_SGX into whitelist.conf to avoid
breaking guest kernel builds below 5.11.
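Assuming whitelist.conf lists one config option per line that
build-kernel.sh should not treat as a hard requirement, the change
would look like:

    # whitelist.conf: options that may legitimately be absent from the
    # final .config, e.g. on kernels older than 5.11
    CONFIG_X86_SGX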
Fixes: #5741
Signed-off-by: Chao Wu <chaowu@linux.alibaba.com>
As I/O-intensive tasks increase, two issues can arise:
1. When the future is blocked, the current thread (which is in the pod
network namespace) might be taken over by other tasks. After the future
finishes, the thread taking over the current task might not be in the
pod network namespace.
2. When the network setup finishes, the current thread will be set back
to the host namespace, but the task that was taken over would still
stay in the pod network namespace.
To avoid that, we need to block the future on the current thread.
Fixes: #5728
Signed-off-by: Zhongtao Hu <zhongtaohu.tim@linux.alibaba.com>