Increase provisioning test timeouts.

We've encountered flakes in our e2e infrastructure when kubelet took more than
one minute to detach a volume used by a deleted pod.

Let's increase the wait period from 1 to 3 minutes. This slows down the test
by 2 minutes, but it makes the test more stable.

In addition, when kubelet has not detached a volume within 3 minutes, let the
test wait for an additional recycle controller retry interval (10 minutes) and
hope the volume is deleted by then. This should not increase the usual test
time; it only makes the test stable when kubelet is _extremely_ slow to
release the volume.
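
The wait boils down to a poll loop: ask the API server for the PV every few
seconds, succeed as soon as it is gone, and fail once the timeout expires. The
test relies on framework.WaitForPersistentVolumeDeleted for this; the Go
snippet below is only a minimal, self-contained sketch of the same pattern,
with a hypothetical getPV callback and waitForPVDeleted helper standing in for
the real e2e client and framework code.

package main

import (
	"errors"
	"fmt"
	"time"
)

// errNotFound stands in for the "IsNotFound" API error check done by the real framework.
var errNotFound = errors.New("persistent volume not found")

// waitForPVDeleted polls getPV every interval until the volume disappears or the
// timeout expires. With interval=5s and timeout=20m the loop covers the original
// 10-minute budget plus one extra recycle controller retry interval (10 minutes).
func waitForPVDeleted(getPV func(name string) error, name string, interval, timeout time.Duration) error {
	for start := time.Now(); time.Since(start) < timeout; time.Sleep(interval) {
		err := getPV(name)
		if errors.Is(err, errNotFound) {
			return nil // the PV is gone; cleanup finished
		}
		if err != nil {
			return err // unexpected API error; fail fast
		}
		// PV still exists; keep polling.
	}
	return fmt.Errorf("persistent volume %s was not deleted within %v", name, timeout)
}

func main() {
	// Fake API call: pretend the PV disappears after three polls.
	calls := 0
	getPV := func(name string) error {
		calls++
		if calls > 3 {
			return errNotFound
		}
		return nil
	}
	// Much shorter durations than the real test (5s / 20m) so the example finishes quickly.
	if err := waitForPVDeleted(getPV, "pv-test", 10*time.Millisecond, time.Second); err != nil {
		fmt.Println("wait failed:", err)
		return
	}
	fmt.Println("PV deleted after", calls, "polls")
}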
Author: Jan Safranek
Date:   2016-04-18 13:06:09 +02:00
Commit: 3137b4cd02 (parent: 26c99fee00)


@@ -110,16 +110,16 @@ var _ = framework.KubeDescribe("Dynamic provisioning", func() {
 			// 10 minutes here. There is no way how to see if kubelet is
 			// finished with cleaning volumes. A small sleep here actually
 			// speeds up the test!
-			// One minute should be enough to clean up the pods properly.
-			// Detaching e.g. a Cinder volume takes some time.
+			// Three minutes should be enough to clean up the pods properly.
+			// We've seen GCE PD detach to take more than 1 minute.
 			By("Sleeping to let kubelet destroy all pods")
-			time.Sleep(time.Minute)
+			time.Sleep(3 * time.Minute)
 
 			By("deleting the claim")
 			framework.ExpectNoError(c.PersistentVolumeClaims(ns).Delete(claim.Name))
 
 			// Wait for the PV to get deleted too.
-			framework.ExpectNoError(framework.WaitForPersistentVolumeDeleted(c, pv.Name, 1*time.Second, 10*time.Minute))
+			framework.ExpectNoError(framework.WaitForPersistentVolumeDeleted(c, pv.Name, 5*time.Second, 20*time.Minute))
 		})
 	})
 })