Those tests cover:
* go runtime:
  * cloud-hypervisor
  * qemu
* rust runtime:
  * cloud-hypervisor
  * dragonball
The tests run on an ARM64 bare-metal machine, which has vanilla
Kubernetes deployed.
I don't want to use the same machine for the devmapper / cri-o tests,
as that would require deploying and re-deploying k8s there, which is not
exactly optimal for a bare-metal machine.
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
This reverts commit d35320472c.
Although the commit in question does solve an issue related to the usage
of busybox from docker.io, as it's reasonably easy to hit the rate limit
there, it also brings in functionality that is causing issues in, at
least, the TDX CI, such as:
```sh
[2024-08-16T16:03:52Z INFO actix_web::middleware::logger] 10.244.0.1 "POST /kbs/v0/attest HTTP/1.1" 401 259 "-" "attestation-agent-kbs-client/0.1.0" 0.065266
[2024-08-16T16:03:53Z INFO kbs::http::attest] Auth API called.
[2024-08-16T16:03:53Z INFO actix_web::middleware::logger] 10.244.0.1 "POST /kbs/v0/auth HTTP/1.1" 200 74 "-" "attestation-agent-kbs-client/0.1.0" 0.000169
[2024-08-16T16:03:54Z INFO kbs::http::attest] Attest API called.
[2024-08-16T16:03:54Z INFO verifier::tdx] Quote DCAP check succeeded.
[2024-08-16T16:03:54Z INFO verifier::tdx] MRCONFIGID check succeeded.
[2024-08-16T16:03:54Z INFO verifier::tdx] CCEL integrity check succeeded.
[2024-08-16T16:03:54Z ERROR kbs::http::error] Attestation failed: Verifier evaluate failed: TDX Verifier: failed to parse AA Eventlog from evidence
Caused by:
at least one line should be included in AAEL
```
Let's revert this for now; once this gets fixed on the trustee side,
we'll update again.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Add bind mounts for volumes defined by docker container images, unless
those mounts have been defined in the input K8s YAML file too.
For example, quay.io/opstree/redis defines two mounts:
/data
/node-conf
Before these changes, if those mounts were not also defined in the YAML
file, the auto-generated policy did not allow this container image to
start.
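For reference, the volumes an image declares can be read from its config; a minimal sketch using the image mentioned above (the exact output formatting may vary):
```sh
# Inspect the volumes declared by the container image itself
# (requires the image to be available locally; output shown is illustrative).
docker pull quay.io/opstree/redis
docker image inspect quay.io/opstree/redis --format '{{ json .Config.Volumes }}'
# => {"/data":{},"/node-conf":{}}
```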
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
Improved error handling to provide clearer feedback on request failures.
For example:
Improve the createcontainer request timeout error message from
"Error: failed to create containerd task: failed to create shim task: context deadline exceeded"
to "Error: failed to create containerd task: failed to create shim task: CreateContainerRequest timed out: context deadline exceeded".
Fixes: #10173 -- part II
Signed-off-by: ChengyuZhu6 <chengyu.zhu@intel.com>
For the rare cases where containerd_conf_file does not exist, cp could fail
and leave the pod in an Error state. Let's make this a little bit more robust.
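A minimal sketch of the kind of guard this implies (the actual change in the deploy script may differ; the `.bak` suffix is illustrative):
```sh
# Only back up the containerd config if it actually exists, so a missing
# file no longer fails the script and leaves the pod in an Error state.
if [ -f "${containerd_conf_file}" ]; then
    cp "${containerd_conf_file}" "${containerd_conf_file}.bak"
fi
```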
Signed-off-by: Beraldo Leal <bleal@redhat.com>
Install the luks-encrypt-storage script provided by guest-components, so that we can maintain a single source and prevent synchronization issues.
Fixes: #10173 -- part I
Signed-off-by: ChengyuZhu6 <chengyu.zhu@intel.com>
Container/sandbox cleanup should not fail if the root FS is not mounted.
This PR handles EINVAL errors when umount2 is called.
Fixes: #10166
Signed-off-by: Silenio Quarti <silenio_quarti@ca.ibm.com>
Instead of deploying and removing the snapshotter on every single run,
let's make sure the snapshotter is always deployed in the TDX case.
We're doing this as an experiment, to see if we'll be able to
reduce the failures we've been facing with the nydus snapshotter.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
The `exec_host()` function often fails to capture the output of a given command
because the node debugger pod is prematurely terminated. To address this issue,
the function has been refactored to ensure consistent output capture by adjusting
the `kubectl debug` process as follows:
- Keep the node debugger pod running
- Wait until the pod is fully ready
- Execute the command using `kubectl exec`
- Capture the output and terminate the pod
This commit refactors `exec_host()` to implement the above steps, improving its reliability.
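A rough sketch of that flow, with `kubectl run` standing in for the actual `kubectl debug` invocation and all names being illustrative rather than the real test code:
```sh
node="worker-0"
pod="node-debugger-${node}"

# 1) Keep the node debugger pod running instead of issuing a one-shot command
kubectl run "${pod}" --image=busybox --restart=Never \
    --overrides='{"spec": {"nodeName": "'"${node}"'", "hostPID": true}}' -- sleep infinity

# 2) Wait until the pod is fully ready
kubectl wait --for=condition=Ready --timeout=60s "pod/${pod}"

# 3) Execute the command using kubectl exec and capture the output
output=$(kubectl exec "${pod}" -- sh -c "uname -a")

# 4) Terminate the pod once the output has been captured
kubectl delete pod "${pod}" --wait=false
echo "${output}"
```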
Fixes: #10081
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
This reverts commit 8d9bec2e01, as it
causes issues in the operator and kata-deploy itself, leading to the
node becoming NotReady.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
I can't set up a loop device with `exec_host`, which is necessary for
qemu-coco-dev. See issue #10133.
Signed-off-by: ChengyuZhu6 <chengyu.zhu@intel.com>
Update the error message for pulling an encrypted image to "failed to get decrypt key no suitable key found for decrypting layer key".
Signed-off-by: ChengyuZhu6 <chengyu.zhu@intel.com>
Introduce the `secure_mount` API in the CDH (Confidential Data Hub). It includes:
- Adding the `SecureMountServiceClient`.
- Implementing the `secure_mount` function to handle secure mounting requests.
- Updating the confidential_data_hub.proto file to define the SecureMountRequest and SecureMountResponse messages
and to add the SecureMountService service.
Signed-off-by: ChengyuZhu6 <chengyu.zhu@intel.com>
Rename the sealed_secret.proto file to confidential_data_hub.proto and
update the corresponding API namespace from sealed_secret to confidential_data_hub.
Signed-off-by: ChengyuZhu6 <chengyu.zhu@intel.com>
Add tests for guest pull with a configured timeout (see the sketch below):
1) failed case: test that we cannot pull a large image whose pull time exceeds a short createcontainer timeout (10s) inside the guest
2) successful case: test that we can pull a large image inside the guest after increasing the createcontainer timeout (120s)
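A rough bats-style sketch of the two cases (image, pod and YAML names, and the wait timeouts are illustrative, not the actual test code):
```sh
@test "Guest pull fails when a large image exceeds the 10s createcontainer timeout" {
    kubectl apply -f pod-large-image-short-timeout.yaml
    # The guest pull cannot finish within the short timeout, so the pod never gets Ready
    run kubectl wait --for=condition=Ready --timeout=90s pod/large-image-pod
    [ "$status" -ne 0 ]
}

@test "Guest pull succeeds with a 120s createcontainer timeout" {
    kubectl apply -f pod-large-image-long-timeout.yaml
    kubectl wait --for=condition=Ready --timeout=300s pod/large-image-pod
}
```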
Signed-off-by: ChengyuZhu6 <chengyu.zhu@intel.com>
Add tests for pulling images in the guest using trusted storage:
1) failed case: test that we cannot pull an image that exceeds the memory limit inside the guest
2) successful case: test that we can pull an image inside the guest using
trusted ephemeral storage.
Signed-off-by: ChengyuZhu6 <chengyu.zhu@intel.com>
This PR adds a k8s stability test that will be part of the CoCo Kata
stability tests, which will run weekly.
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
This PR disables the k8s file volume test, as we are having random failures
in multiple GHA CIs, mainly because the exec_host function sometimes
does not work properly.
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>