Commit Graph

7 Commits

Author SHA1 Message Date
Xuewei Niu
6aa3517393 tests: Prevent the shim from being killed in k8s-oom test
The actual memory usage on the host is equal to the hypervisor memory usage
plus the user memory usage. The OOM killer might kill the shim when the
memory limit on the host is the same as that of the container and the
container consumes all available memory. In that case, containerd never
receives an OOM event, but gets a "task exit" event instead, which makes
the `k8s-oom.bats` test fail.

The fix is to add a new container that increases the sandbox memory limit
(see the sketch after this commit entry). When the "oom-test" container is
killed by the OOM killer, there is still memory available for the shim, so
it is not killed.

Signed-off-by: Xuewei Niu <niuxuewei.nxw@antgroup.com>
2025-07-24 23:44:21 +08:00
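A minimal sketch of the fix described above. This is illustrative only: the
container names, image, runtime class, and memory sizes are assumptions, not
the actual test manifest. The second container does nothing except raise the
pod-level (sandbox) memory limit, leaving headroom on the host for the shim
when "oom-test" is OOM-killed.

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-oom
    spec:
      runtimeClassName: kata        # assumed runtime class name
      restartPolicy: Never
      containers:
        # Consumes memory until it hits its own limit and is OOM-killed.
        - name: oom-test
          image: quay.io/prometheus/busybox:latest
          command: ["sh", "-c", "tail /dev/zero"]
          resources:
            limits:
              memory: "500Mi"
        # Hypothetical helper: idles, but its limit raises the sandbox memory
        # limit so the shim is not OOM-killed along with "oom-test".
        - name: mem-headroom
          image: quay.io/prometheus/busybox:latest
          command: ["sh", "-c", "tail -f /dev/null"]
          resources:
            limits:
              memory: "500Mi"
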
Fabiano Fidêncio
53ac0f00c5 tests: Re-enable oom tests for mariner
Since we bumped to the 6.12.x LTS kernel, we've also adjusted the
aggressiveness of the OOM test, which may be enough to allow us to
re-enable it for mariner.

Fixes: #8821

Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
2025-01-07 18:33:17 +01:00
Dan Mihai
74a52c6d25 tests: k8s: k8s-oom.bats auto-generated policy
Auto-generate policy for k8s-oom.bats.

Fixes: #9072

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2024-02-21 18:08:08 +00:00
Dan Mihai
b7c31e3b98 tests: cbl-mariner: disable k8s-oom.bats
Disable k8s-oom.bats on cbl-mariner until it passes more often.

Fixes: #8824

Signed-off-by: Dan Mihai <dmihai@microsoft.com>
2024-01-14 17:39:25 +00:00
Yushuo
28c29b248d bugfix: plus default_memory when calculating mem size
We've noticed this caused regressions with the k8s-oom tests, and then
decided to take a step back and do this in the same way it was done
before 67972ec48a.

Moreover, this step back is also more reasonable in terms of the
control logic.

Doing this also allows us to re-enable the k8s-oom.bats tests, which is
done as part of this PR.

Fixes: #7271
Depends-on: github.com/kata-containers/tests#5705

Signed-off-by: Yushuo <y-shuo@linux.alibaba.com>
2023-07-10 15:53:04 +08:00
Fabiano Fidêncio
828a721838 gha: k8s: dragonball: Skip k8s-oom
Let's skip the k8s-oom, as the test is currently failing.

We have an issue open for that, and we'll be working on re-enabling it
as soon as possible.

Reference:
https://github.com/kata-containers/kata-containers/issues/7271

Fixes: #7253

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-07-08 14:27:49 +02:00
Fabiano Fidêncio
11e0099fb5 tests: Move k8s tests to this repo
The first part of simplifying things to have all our tests using GitHub
actions is moving the k8s tests to this repo, as those will be the first
vict^W targets to be migrated to GitHub actions.

Those tests have been slightly adapted, mainly in what they load /
import, so they are more self-contained and do not require us to bring
a lot of scripts from the tests repo here.

A few scripts were also dropped along the way, as we no longer plan to
deploy kubernetes as part of every single run, but rather assume there
will always be k8s running wherever we land to run those tests.

It's important to mention that a few tests were not added here:

* k8s-block-volume:
* k8s-file-volume:
* k8s-volume:
* k8s-ro-volume:
  These tests depend on some sort of volume being created on the
  kubernetes node where the test will run, and this won't fly as the
tests will run from a GitHub runner, targeting a different machine
  where kubernetes will be running.
  * https://github.com/kata-containers/kata-containers/issues/6566

* k8s-hugepages: This test depends a whole lot on the host where it
lands, and right now we cannot assume anything about that anymore, as
the tests will run from a GitHub runner, targeting a different
  machine where kubernetes will be running.
  * https://github.com/kata-containers/kata-containers/issues/6567

* k8s-expose-ip: This is simply hanging when running on AKS and has to
  be debugged in order to figure out the root cause of that, and then
  adapted to also work on AKS.
  * https://github.com/kata-containers/kata-containers/issues/6578

Until those issues are solved, we'll keep running a jenkins job with
those tests to avoid any possible regression.

Last but not least, I've decided to **not** keep the history when
bringing those tests here, as otherwise we'd end up polluting the
history of this repo quite a lot, without any clear benefit in doing so.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2023-03-31 21:55:41 +02:00