e2e TCP CLOSE_WAIT test: wait until pod is ready

The e2e TCP CLOSE_WAIT test creates a server pod and then, from a
client pod, opens a connection that is closed without notifying the
server, so the server side of the connection stays in the CLOSE_WAIT
state until it times out.
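
As a schematic illustration only (not the actual test image), a
minimal standalone Go program that produces the same state: the
server accepts a connection and deliberately never closes it, so once
the client sends its FIN the server-side socket sits in CLOSE_WAIT
until a timeout reaps it.

    package main

    import (
    	"log"
    	"net"
    	"time"
    )

    func main() {
    	ln, err := net.Listen("tcp", "127.0.0.1:0")
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Server side: accept the connection and deliberately never
    	// close it, so the server never sends its own FIN.
    	go func() {
    		if _, err := ln.Accept(); err != nil {
    			log.Fatal(err)
    		}
    		select {} // block forever, holding the socket open
    	}()

    	// Client side: connect, then close. The client's FIN moves the
    	// server-side socket into CLOSE_WAIT, where it stays until a
    	// timeout fires.
    	conn, err := net.Dial("tcp", ln.Addr().String())
    	if err != nil {
    		log.Fatal(err)
    	}
    	conn.Close()

    	// Inspect with: ss -tan | grep CLOSE-WAIT
    	time.Sleep(30 * time.Second)
    }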

The current test uses a fixed timeout to wait for the server pod to
be ready. It's better to use WaitForPodsReady to wait until the pod
is actually available, which avoids flakes on busy environments like
the CI.
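
For illustration, a sketch of the pattern behind such a wait,
assuming a client-go clientset; waitForPodReady is a hypothetical
helper, not the framework's e2epod.WaitForPodsReady, which is what
the test actually calls.

    package e2esketch

    import (
    	"context"
    	"time"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForPodReady polls the pod's Ready condition instead of
    // sleeping for a fixed interval.
    func waitForPodReady(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
    		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, err // fail fast instead of burning the whole timeout
    		}
    		for _, cond := range pod.Status.Conditions {
    			if cond.Type == v1.PodReady && cond.Status == v1.ConditionTrue {
    				return true, nil
    			}
    		}
    		return false, nil // not ready yet; keep polling
    	})
    }

Polling the actual Ready condition bounds the wait by pod state
rather than by a guess about how long startup takes, which is what
makes it robust on slow CI nodes.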

It also deletes the pods once the test finishes, to avoid leaking
pods.
Antonio Ojea 2020-04-05 00:38:00 +02:00
parent 5ea2d69ccd
commit 0748a75dfb


@@ -208,9 +208,12 @@ var _ = SIGDescribe("Network", func() {
 serverNodeInfo.nodeIP,
 kubeProxyE2eImage))
 fr.PodClient().CreateSync(serverPodSpec)
+defer fr.PodClient().DeleteSync(serverPodSpec.Name, metav1.DeleteOptions{}, framework.DefaultPodDeletionTimeout)
 // The server should be listening before spawning the client pod
-<-time.After(time.Duration(2) * time.Second)
+if readyErr := e2epod.WaitForPodsReady(fr.ClientSet, fr.Namespace.Name, serverPodSpec.Name, 0); readyErr != nil {
+	framework.Failf("error waiting for server pod %s to be ready: %v", serverPodSpec.Name, readyErr)
+}
 // Connect to the server and leak the connection
 ginkgo.By(fmt.Sprintf(
 "Launching a client connection on node %v (node ip: %v, image: %v)",
@@ -218,6 +221,7 @@ var _ = SIGDescribe("Network", func() {
 clientNodeInfo.nodeIP,
 kubeProxyE2eImage))
 fr.PodClient().CreateSync(clientPodSpec)
+defer fr.PodClient().DeleteSync(clientPodSpec.Name, metav1.DeleteOptions{}, framework.DefaultPodDeletionTimeout)
 ginkgo.By("Checking /proc/net/nf_conntrack for the timeout")
 // These must be synchronized from the default values set in