mirror of
https://github.com/kubernetes/client-go.git
synced 2026-05-15 11:43:33 +00:00
When watch.Broadcaster.Shutdown() is called, it drains all queued events
and then calls closeAll(), which closes every watcher's result channel.
eventBroadcasterImpl.Shutdown() calls Broadcaster.Shutdown() first,
then calls the cancellation context's cancel() function. Between those
two steps there is a window in which the result channel is closed while
the cancellation context is still live.
Without the two-value channel receive, the goroutine in StartEventWatcher
would spin on the already-closed channel: each select iteration
immediately receives the zero-value watch.Event, the type assertion on
its Object field fails (nil interface, so ok == false), and the loop
continues burning CPU until the select eventually happens to pick the
cancelationCtx.Done() case.
Guard against this by reading the ok boolean from the channel receive:

	case watchEvent, ok := <-watcher.ResultChan():
		if !ok {
			return
		}
This is the correct and idiomatic Go pattern for a channel that may be
closed by its producer. Note that when this return path is taken, the
broadcaster has already delivered every queued event (Broadcaster.Shutdown
blocks until the distribute loop exits before closeAll runs), so no
events are silently dropped.
Add a regression test (TestStartEventWatcherExitsOnDirectShutdown) that
creates a broadcaster without an external context so that Shutdown() is
fully synchronous, starts a watcher, and verifies via goleak.VerifyNone
that the watcher goroutine exits cleanly.
Signed-off-by: Rajneesh180 <rajneeshrehsaan48@gmail.com>
Kubernetes-commit: 95c15b54069922b0a66c198a064577ea0a160694