Copied new_pod_config() and pod-config.yaml.in from the tests
repository's CCv0 branch (integration/kubernetes/confidential/tests_common.sh
and its fixtures).
Unlike the original version, new_pod_config() now takes the runtime
class as a parameter, since the RUNTIMECLASS environment variable does
not seem to be broadly used in the main branch's CI.
pod-config.yaml.in was changed as shown in the diff below. In
particular, imagePullSecrets was removed so it no longer triggers a
warning in the pod's log.
```
--- a/tests/integration/kubernetes/runtimeclass_workloads/pod-config.yaml.in
+++ b/tests/integration/kubernetes/runtimeclass_workloads/pod-config.yaml.in
@@ -5,12 +5,10 @@
apiVersion: v1
kind: Pod
metadata:
- name: busybox-cc
+ name: test-e2e
spec:
runtimeClassName: $RUNTIMECLASS
containers:
- - name: nginx
+ - name: test_container
image: $IMAGE
- imagePullPolicy: Always
- imagePullSecrets:
- - name: cococred
\ No newline at end of file
+ imagePullPolicy: Always
\ No newline at end of file
```
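For reference, a minimal sketch of what the parameterized helper can
look like; the envsubst-based templating and the pod_config_dir /
BATS_FILE_TMPDIR variables are assumptions here, not necessarily what
the final code does:
```
# Hypothetical sketch: render pod-config.yaml.in into a temporary pod
# config, taking the runtime class as a parameter instead of relying
# on a RUNTIMECLASS environment variable.
new_pod_config() {
	local runtimeclass="$1"
	local base_config="${pod_config_dir}/pod-config.yaml.in"
	local new_config

	new_config=$(mktemp "${BATS_FILE_TMPDIR:-/tmp}/pod-config.XXXXXX.yaml")
	# IMAGE and RUNTIMECLASS are the only variables in the template.
	IMAGE="$IMAGE" RUNTIMECLASS="$runtimeclass" \
		envsubst < "$base_config" > "$new_config"
	echo "$new_config"
}
```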
Co-authored-by: Georgina Kinge <georgina.kinge@ibm.com>
Co-authored-by: Megan Wright <Megan.Wright@ibm.com>
Co-authored-by: stevenhorsman <steven@uk.ibm.com>
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
As we've made some changes to the VMM vCPU allocation, let's introduce
basic tests to make sure that we're getting the expected behaviour.
The test consists of checking 3 scenarios:
* default_vcpus = 0 | no limits set
* this should allocate 1 vcpu
* default_vcpus = 0.75 | limits set to 0.25
* this should allocate 1 vcpu
* default_vcpus = 0.75 | limits set to 1.2
* this should allocate 2 vcpus
The tests are very basic, but they do ensure we're rounding things up
to what the new logic is supposed to do; a sketch of one of the checks
follows below.
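A minimal sketch of the third scenario, assuming a bats-style test and
that nproc inside the container reflects the vCPUs the sandbox got; the
pod YAML file and pod name are illustrative:
```
# Illustrative check for "default_vcpus = 0.75 | limits set to 1.2":
# after rounding up, the container should see 2 vCPUs.
@test "default_vcpus=0.75 with cpu limit 1.2 allocates 2 vcpus" {
	kubectl apply -f "${pod_config_dir}/pod-vcpus-allocation.yaml"
	kubectl wait --for=condition=Ready --timeout=120s \
		pod "vcpus-allocation-test"

	vcpus=$(kubectl exec "vcpus-allocation-test" -- nproc)
	[ "$vcpus" -eq 2 ]
}
```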
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This test can give a false positive on a multi-node cluster. Changed it
to use the new get_one_kata_node() and the modified exec_host() to run
the setup commands on a given node (one that has Kata installed) and to
ensure the test pod is scheduled on that same node.
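For illustration, the flow looks roughly like this; the exec_host
invocation, the paths, and the use of spec.nodeName via yq are
assumptions:
```
# Pick one node that has Kata installed.
node=$(get_one_kata_node)

# Run the volume setup on that node rather than on an arbitrary one.
exec_host "$node" "mkdir -p /tmp/test-volume && echo data > /tmp/test-volume/file"

# Pin the test pod to the very same node so it sees the volume we just
# created (setting spec.nodeName is one way to achieve that).
yq -i ".spec.nodeName = \"${node}\"" "$pod_config"
kubectl apply -f "$pod_config"
```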
Fixes: #7619
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
By not setting the CPU limit / request to 1, we can make this test run
on a smaller VM instance without any issue.
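For context, this is the kind of resources block that was pinning the
test to a full CPU and is no longer set (values illustrative):
```
resources:
  requests:
    cpu: "1"
  limits:
    cpu: "1"
```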
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Add a test case for the launch of an unencrypted confidential
container, verifying that we are running inside a TEE.
Right now the test only works with SEV, but it'll be expanded in the
coming commits, as part of this very same series.
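A rough skeleton of the new test case, reusing the new_pod_config()
helper from earlier in this series; the variable names, timeout, and
wait condition are illustrative (the pod name comes from the template
above):
```
# Illustrative skeleton: launch the pod with the given runtime class
# and make sure it becomes Ready. The in-guest TEE check (SEV-only for
# now) runs inside the container afterwards.
@test "Launch unencrypted confidential container" {
	pod_config=$(new_pod_config "$runtimeclass")
	kubectl apply -f "$pod_config"
	kubectl wait --for=condition=Ready --timeout=120s pod "test-e2e"
}
```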
Fixes: #7184
Signed-off-by: Unmesh Deodhar <udeodhar@amd.com>
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's add here the image we'll be using for the unencrypted
confidential tests. Later on, we'll make sure to build and use this
image as part of our CI.
The image can easily be built as a multi-arch image, and has `cpuid`
installed in the `x86_64` build, so it can be used to detect whether
we're running on a TEE guest without having to rely on `dmesg | grep
...`.
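With `cpuid` in the image, the in-guest check can then be a simple
exec; the exact flags and the matched string are assumptions about
cpuid's human-readable output for the 0x8000001f leaf:
```
# Detect SEV from inside the guest without relying on dmesg: leaf
# 0x8000001f advertises AMD's memory encryption features.
kubectl exec "test-e2e" -- cpuid -1 -l 0x8000001f | grep -i "SEV"
```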
Fixes: #7595
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This imports the k8s-volume test from the tests repo and modifies it
slightly to set up the host volume on the AKS host.
Fixes: #6566
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
Let's make sure we run our tests in a specific namespace: in case of
any kind of issue, we can just get rid of the namespace itself, which
will take care of cleaning up any leftovers from failing tests.
One important thing to mention is why we can get rid of the `namespace:
${namespace}` setting on the tests that are already using it; let's go
through it in parts:
* namespace: default
We can easily get rid of this as that's the default namespace where
pods are created, so it was a no-op so far.
* namespace: test-quota-ns
  My understanding is that we'd need this in order to get a clean
  namespace to set a quota on. Doing this in the namespace that's only
  used for tests should **not** cause any side effects, as we're
  running the tests serially and there are no other pods running in
  the `kata-containers-k8s-tests` namespace.
Last but not least, we're not dynamically creating namespaces, as the
tests **never** run in parallel: not with 2 tests being run at the
same time, nor with 2 jobs being scheduled to the same machine.
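In practice this boils down to something like the following setup /
teardown pair; the hook names are bats conventions, and the namespace
name comes from the text above:
```
TEST_NAMESPACE="kata-containers-k8s-tests"

setup() {
	kubectl create namespace "$TEST_NAMESPACE"
	# Make it the default for every kubectl call in the tests.
	kubectl config set-context --current --namespace="$TEST_NAMESPACE"
}

teardown() {
	# Deleting the namespace sweeps away any leftovers from failed tests.
	kubectl delete namespace "$TEST_NAMESPACE" --ignore-not-found
	kubectl config set-context --current --namespace=default
}
```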
Fixes: #6864
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
The first part of simplifying things so that all our tests use GitHub
Actions is moving the k8s tests to this repo, as those will be the
first vict^W targets to be migrated to GitHub Actions.
Those tests have been slightly adapted, mainly related to what they load
/ import, so they are more self-contained and do not require us bringing
a lot of scripts from the tests repo here.
A few scripts were also dropped along the way, as we no longer plan to
deploy kubernetes as part of every single run, but rather assume there
will always be k8s running whenever we land to run those tests.
It's important to mention that a few tests were not added here:
* k8s-block-volume:
* k8s-file-volume:
* k8s-volume:
* k8s-ro-volume:
These tests depend on some sort of volume being created on the
kubernetes node where the test will run, and this won't fly as the
tests will run from a GitHub runner, targeting a different machine
where kubernetes will be running.
* https://github.com/kata-containers/kata-containers/issues/6566
* k8s-hugepages: This test depends a whole lot on the host where it
lands and right now we cannot assume anything about that anymore, as
the tests will run from a GitHub runner, targeting a different
machine where kubernetes will be running.
* https://github.com/kata-containers/kata-containers/issues/6567
* k8s-expose-ip: This is simply hanging when running on AKS and has to
be debugged in order to figure out the root cause, and then
adapted to also work on AKS.
* https://github.com/kata-containers/kata-containers/issues/6578
Till those issues are solved, we'll keep running a Jenkins job with
those tests to avoid any possible regression.
Last but not least, I've decided to **not** keep the history when
bringing those tests here, otherwise we'd end up polluting this repo's
history a lot, without any clear benefit in doing so.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>