storage tests: really wait for pod to disappear

As seen in one case (https://github.com/intel/pmem-csi/issues/587), a
pod can reach the "not running" state while its ephemeral volumes are
still being torn down by kubelet and the CSI driver. The test then
returns too early, and even deleting the namespace (and thus the pod)
succeeds before NodeUnpublishVolume has really finished.

To avoid this, StopPod now waits for the pod to really disappear.
Patrick Ohly 2020-04-16 17:44:37 +02:00
parent da5e6ad347
commit 0cdd5365a1

@@ -693,8 +693,7 @@ func StopPod(c clientset.Interface, pod *v1.Pod) {
 	} else {
 		framework.Logf("Pod %s has the following logs: %s", pod.Name, body)
 	}
-	e2epod.DeletePodOrFail(c, pod.Namespace, pod.Name)
-	e2epod.WaitForPodNoLongerRunningInNamespace(c, pod.Name, pod.Namespace)
+	e2epod.DeletePodWithWait(c, pod)
 }

 func verifyPVCsPending(client clientset.Interface, pvcs []*v1.PersistentVolumeClaim) {
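
For context, a minimal sketch of the semantics the test now relies on: instead
of waiting only until the pod is "no longer running", delete the pod and poll
until the API server no longer returns it at all. This is not the actual
e2epod.DeletePodWithWait implementation; the helper name deleteAndWaitGone, the
poll interval, the timeout, and the use of the context-aware client-go API are
illustrative assumptions.

// Sketch only: deleteAndWaitGone is a hypothetical helper illustrating the
// behavior StopPod now gets from e2epod.DeletePodWithWait: delete the pod,
// then poll until the API server reports NotFound, i.e. the pod object has
// actually disappeared rather than merely stopped running.
package podutil

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// deleteAndWaitGone deletes the pod and blocks until it is gone from the API
// server. Poll interval and timeout are illustrative values.
func deleteAndWaitGone(c kubernetes.Interface, namespace, name string) error {
	err := c.CoreV1().Pods(namespace).Delete(context.TODO(), name, metav1.DeleteOptions{})
	if err != nil && !apierrors.IsNotFound(err) {
		return fmt.Errorf("deleting pod %s/%s: %v", namespace, name, err)
	}
	// Waiting only for "no longer running" (as the old code did) can return
	// while kubelet and the CSI driver are still tearing down ephemeral
	// volumes; waiting until the pod is gone avoids that race.
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := c.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil
		}
		return false, err
	})
}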