Sometimes the pod has already been cleaned up by the time the test tries to grab the logs. For example:
Mar 27 16:19:38.066: INFO: Waiting for client-a-jt4tf to complete.
Mar 27 16:19:38.066: INFO: Waiting up to 5m0s for pod "client-a-jt4tf" in namespace "e2e-network-policy-c-9007" to be "success or failure"
Mar 27 16:19:38.072: INFO: Pod "client-a-jt4tf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.270302ms
Mar 27 16:19:40.078: INFO: Pod "client-a-jt4tf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01233019s
Mar 27 16:19:42.086: INFO: Pod "client-a-jt4tf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020186873s
STEP: Saw pod success
Mar 27 16:19:42.086: INFO: Pod "client-a-jt4tf" satisfied condition "success or failure"
Mar 27 16:19:42.093: FAIL: Error getting container logs: the server could not find the requested resource (get pods client-a-jt4tf)
Full Stack Trace
github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/network.checkNoConnectivity(0xc00104adc0, 0xc0016b82c0, 0xc001666400, 0xc000c32000)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/network/network_policy.go:1457 +0x2a0
github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/network.testCannotConnect(0xc00104adc0, 0xc0016b82c0, 0x55587e9, 0x8, 0xc000c32000, 0x50)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/network/network_policy.go:1406 +0x1fc
github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/network.glob..func13.2.7()
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/network/network_policy.go:285 +0x883
github.com/openshift/origin/pkg/test/ginkgo.(*TestOptions).Run(0xc001e47830, 0xc001e50b70, 0x1, 0x1, 0x0, 0x0)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/pkg/test/ginkgo/cmd_runtest.go:59 +0x41f
main.newRunTestCommand.func1(0xc00121b900, 0xc001e50b70, 0x1, 0x1, 0x0, 0x0)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:238 +0x15d
github.com/openshift/origin/vendor/github.com/spf13/cobra.(*Command).execute(0xc00121b900, 0xc001e50b30, 0x1, 0x1, 0xc00121b900, 0xc001e50b30)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/spf13/cobra/command.go:826 +0x460
github.com/openshift/origin/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc00121b180, 0x0, 0x60d2d00, 0x9887ec8)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/spf13/cobra/command.go:914 +0x2fb
github.com/openshift/origin/vendor/github.com/spf13/cobra.(*Command).Execute(...)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/spf13/cobra/command.go:864
main.main.func1(0xc00121b180, 0x0, 0x0)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:59 +0x9c
main.main()
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/cmd/openshift-tests/openshift-tests.go:60 +0x341
STEP: Cleaning up the pod client-a-jt4tf
STEP: Cleaning up the policy.
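One way to tolerate this race is to treat a NotFound error from the log request as benign rather than a test failure. The sketch below is illustrative only, not the upstream fix: the helper name podLogsIgnoringNotFound is made up, and it assumes a recent client-go where Request.DoRaw takes a context.

import (
    "context"

    v1 "k8s.io/api/core/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    "k8s.io/client-go/kubernetes"
)

// podLogsIgnoringNotFound fetches container logs, but treats a pod that has
// already been deleted as a benign race instead of failing the test.
func podLogsIgnoringNotFound(client kubernetes.Interface, namespace, podName, container string) (string, error) {
    req := client.CoreV1().Pods(namespace).GetLogs(podName, &v1.PodLogOptions{Container: container})
    body, err := req.DoRaw(context.TODO())
    if apierrors.IsNotFound(err) {
        // The pod was cleaned up before we could read its logs; skip collection.
        return "", nil
    }
    return string(body), err
}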
Currently, /readyz starts reporting failure only after ShutdownDelayDuration
elapses. We expect /readyz to start returning failure as soon as
shutdown is initiated. Failing /readyz immediately gives the load balancer a
window of ShutdownDelayDuration to detect that /readyz is red and stop sending
traffic to this server.
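A minimal sketch of the expected ordering, assuming a plain net/http server and Go 1.19+ (the apiserver's real wiring is more involved, and the 70s value below is just an illustrative ShutdownDelayDuration):

package main

import (
    "net/http"
    "sync/atomic"
    "time"
)

var shuttingDown atomic.Bool

const shutdownDelayDuration = 70 * time.Second // illustrative value

func readyz(w http.ResponseWriter, r *http.Request) {
    // Report failure as soon as shutdown has been initiated,
    // not only after ShutdownDelayDuration elapses.
    if shuttingDown.Load() {
        http.Error(w, "shutting down", http.StatusInternalServerError)
        return
    }
    w.Write([]byte("ok"))
}

func beginShutdown(srv *http.Server) {
    shuttingDown.Store(true)          // /readyz goes red immediately
    time.Sleep(shutdownDelayDuration) // window for load balancers to notice and drain traffic
    srv.Close()                       // only then stop serving
}

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/readyz", readyz)
    srv := &http.Server{Addr: ":8080", Handler: mux}
    go func() {
        time.Sleep(5 * time.Second) // simulate shutdown being initiated
        beginShutdown(srv)
    }()
    _ = srv.ListenAndServe()
}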