All queue ShutDown() methods should be safe to invoke multiple times.
```
Observed a panic: "close of closed channel" (close of closed channel)
/go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:76
/go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/asm_amd64.s:573
/usr/local/go/src/runtime/panic.go:502
/usr/local/go/src/runtime/chan.go:333
/go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/client-go/util/workqueue/delaying_queue.go:137
```
Use sync.Once to guarantee a single close.
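A minimal sketch of that shape, assuming a queue with an internal done channel (names here are illustrative, not the actual workqueue code):
```
package main

import "sync"

// queue is a hypothetical queue whose ShutDown must be safe to call
// more than once; the real delaying_queue.go differs in detail.
type queue struct {
	done     chan struct{}
	shutdown sync.Once
}

func newQueue() *queue {
	return &queue{done: make(chan struct{})}
}

// ShutDown closes the done channel exactly once, even if called
// concurrently or repeatedly, avoiding "close of closed channel".
func (q *queue) ShutDown() {
	q.shutdown.Do(func() {
		close(q.done)
	})
}

func main() {
	q := newQueue()
	q.ShutDown()
	q.ShutDown() // second call is a no-op instead of panicking
}
```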
There was some weird queuing going on. The queue was implemented by
spawning goroutines that would block on the "ticketer" until it was
their turn to write to the output channel. If N events were in the
watch when the consumer of the watch stopped reading events, N
goroutines would leak. In unit tests of the certificate manager, this
was causing ~10k goroutines to leak.
Fix it with a buffering event processor that uses only one goroutine and
cancels correctly.
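A rough sketch of that shape under assumed names (not the actual certificate manager code): a single goroutine drains the input channel into an internal buffer, forwards buffered events to the output, and exits cleanly when the stop channel is closed, even if the consumer has stopped reading.
```
package main

import "fmt"

type event struct{ name string }

// processorLoop buffers incoming events and forwards them to out,
// using one goroutine; closing stopCh cancels it without leaking.
func processorLoop(in <-chan event, out chan<- event, stopCh <-chan struct{}) {
	var buf []event
	for {
		// Only arm the send case when something is buffered;
		// a nil channel case is never selected.
		var sendCh chan<- event
		var next event
		if len(buf) > 0 {
			sendCh = out
			next = buf[0]
		}
		select {
		case e, ok := <-in:
			if !ok {
				in = nil // input closed; keep draining the buffer
				continue
			}
			buf = append(buf, e)
		case sendCh <- next:
			buf = buf[1:]
		case <-stopCh:
			return // cancellation: nothing left blocked on out
		}
	}
}

func main() {
	in := make(chan event)
	out := make(chan event)
	stop := make(chan struct{})
	go processorLoop(in, out, stop)
	in <- event{name: "add"}
	fmt.Println((<-out).name)
	close(stop)
}
```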
The pod name is missing in suggested cmd usage when executing kubectl attach
and exec:
"Defaulting container name to master.
Use 'kubectl describe pod/ -n default' to see all of the containers in this pod."
The PodName is empty in func Complete because the attached Pod isn't populated yet,
leaving the suggested cmd usage incomplete. This patch renders PodName after it is
populated.
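An illustrative (hypothetical) sketch of the ordering change, not the kubectl source: the hint string is built only after the pod has been resolved, so the pod name is no longer empty.
```
package main

import "fmt"

// suggestDescribe builds the hint shown when defaulting the container.
// In the buggy flow an equivalent string was built before the pod was
// fetched, so the pod name rendered as empty; building it after the
// pod is populated fills it in.
func suggestDescribe(podName, namespace string) string {
	return fmt.Sprintf(
		"Use 'kubectl describe pod/%s -n %s' to see all of the containers in this pod.",
		podName, namespace)
}

func main() {
	// After the pod is resolved, its name is available:
	fmt.Println(suggestDescribe("master-0", "default"))
}
```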
Add a unit test verifying that deleteValidation is retried.
Add an e2e test verifying that the webhook can intercept configmap and custom
resource deletion, and that the existing object is sent via
admissionreview.OldObject.
Update the admission integration test to verify that the existing object
is passed to the deletion admission webhook as oldObject, both in the case of an
immediate deletion and in the case of an update-on-delete.
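For context, a hedged sketch of the webhook side being exercised, using the k8s.io/api admission types but an invented ConfigMap policy: on DELETE the object being removed arrives in req.OldObject rather than req.Object.
```
package main

import (
	"encoding/json"
	"fmt"

	admissionv1 "k8s.io/api/admission/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

// validateDelete is an illustrative handler: for DELETE operations the
// existing object is delivered in req.OldObject, so that is what the
// webhook must inspect. The "protected" annotation is made up.
func validateDelete(req *admissionv1.AdmissionRequest) *admissionv1.AdmissionResponse {
	if req.Operation != admissionv1.Delete {
		return &admissionv1.AdmissionResponse{UID: req.UID, Allowed: true}
	}
	var cm corev1.ConfigMap
	if err := json.Unmarshal(req.OldObject.Raw, &cm); err != nil {
		return &admissionv1.AdmissionResponse{UID: req.UID, Allowed: false}
	}
	if cm.Annotations["example.com/protected"] == "true" {
		return &admissionv1.AdmissionResponse{UID: req.UID, Allowed: false}
	}
	return &admissionv1.AdmissionResponse{UID: req.UID, Allowed: true}
}

func main() {
	// Minimal demonstration with a hand-built delete request.
	raw, _ := json.Marshal(&corev1.ConfigMap{})
	resp := validateDelete(&admissionv1.AdmissionRequest{
		Operation: admissionv1.Delete,
		OldObject: runtime.RawExtension{Raw: raw},
	})
	fmt.Println(resp.Allowed)
}
```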
In some cases, an Update event with no "NominatedNode" present is received right
after a node ("NominatedNode") is reserved for this pod in memory.
If we update (delete and add) it, that actually un-reserves the node, since
the newPod doesn't carry pod.status.nominatedNode.
During this window, other lower-priority pods have a chance to take the space that
was reserved for the nominatedPod.
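A toy sketch of the intended behavior, with made-up types standing in for the scheduler's in-memory nomination cache: if the updated pod arrives without a nominated node, the existing reservation is preserved instead of being dropped by the delete-and-add.
```
package main

import "fmt"

// pod is a stripped-down stand-in for v1.Pod, for illustration only.
type pod struct {
	name          string
	nominatedNode string
}

// nominations maps a preempting pod to the node reserved for it,
// mimicking the scheduler's in-memory cache. Names are illustrative.
type nominations map[string]string

// update keeps the existing reservation when the updated pod carries
// no nominated node, instead of deleting and re-adding it (which
// would silently un-reserve the node).
func (n nominations) update(oldPod, newPod pod) {
	node := newPod.nominatedNode
	if node == "" {
		node = n[oldPod.name] // preserve the in-memory nomination
	}
	delete(n, oldPod.name)
	if node != "" {
		n[newPod.name] = node
	}
}

func main() {
	n := nominations{"high-prio": "node-1"}
	// Update event arrives before status carries the nominated node.
	n.update(pod{name: "high-prio"}, pod{name: "high-prio"})
	fmt.Println(n["high-prio"]) // still "node-1": space stays reserved
}
```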