tests/k8s: fix wait for pods on deploy-kata action

On commit 51690bc157 we switched the installation from kubectl to helm
and used its `--wait` flag, expecting that execution would continue only
once all kata-deploy Pods were Ready. It turns out that there is a
limitation in helm install: it won't wait properly when the daemonset is
made of a single replica with maxUnavailable=1. In order to fix that
issue, let's partially revert the changes to keep using kubectl and
waitForProcess to block execution while Pods aren't Running.
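For context, the waitForProcess helper referenced above polls a command until it succeeds or a timeout elapses. A minimal sketch of that pattern, assuming the (timeout, sleep-interval, command) signature used in the diff below; names and implementation details are illustrative, not the exact helper from the kata-containers test library:

```shell
#!/usr/bin/env bash
# Sketch of a waitForProcess-style helper: retry "$cmd" every
# $sleep_time seconds until it succeeds or $wait_time seconds pass.
waitForProcess() {
	local wait_time="$1"
	local sleep_time="$2"
	local cmd="$3"

	while [ "$wait_time" -gt 0 ]; do
		# eval so pipelines like "kubectl ... | grep Running" work
		if eval "$cmd" >/dev/null 2>&1; then
			return 0
		fi
		sleep "$sleep_time"
		wait_time=$((wait_time - sleep_time))
	done
	return 1
}
```

Unlike `helm install --wait`, this keeps polling until the Pods actually report Running, regardless of the DaemonSet's replica count or maxUnavailable setting.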

Fixes #10168
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Wainer dos Santos Moschetta 2024-08-20 12:13:42 -03:00 committed by Fabiano Fidêncio
parent 40f8aae6db
commit 3b23d62635

@@ -207,11 +207,12 @@ function deploy_kata() {
 	[ "$(yq .image.tag ${values_yaml})" = "${DOCKER_TAG}" ] || die "Failed to set image tag"
 	echo "::endgroup::"
-	# will wait until all Pods, PVCs, Services, and minimum number of Pods
-	# of a Deployment, StatefulSet, or ReplicaSet are in a ready state
-	# before marking the release as successful. It will wait for as long
-	# as --timeout -- Ready >> Running
-	helm install --wait --timeout 10m kata-deploy "${helm_chart_dir}" --values "${values_yaml}" --namespace kube-system
+	helm install kata-deploy "${helm_chart_dir}" --values "${values_yaml}" --namespace kube-system
+	# `helm install --wait` does not take effect on single-replica, maxUnavailable=1 DaemonSets
+	# like kata-deploy on CI. So wait for the pods to be Running in the "traditional" way.
+	local cmd="kubectl -n kube-system get -l name=kata-deploy pod 2>/dev/null | grep '\<Running\>'"
+	waitForProcess "${KATA_DEPLOY_WAIT_TIMEOUT}" 10 "$cmd"
 	# This is needed as the kata-deploy pod will be set to "Ready" when it starts running,
 	# which may cause issues like not having the node properly labeled or the artefacts