The error returned from Until() comes solely from context cancellation,
which is expected behavior when the reflector is stopped. Logging
this as an error (or even at V(4)) creates unnecessary noise.
Kubernetes-commit: cc483208aa306b8c4078d4118cf78a10e58481ec
Move backoff documentation comments to var block for better discoverability.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Kubernetes-commit: 33ebd41b2c1abdc03beabd9ccff3428a8fd46473
Passing a context to StartWithContext enables context-aware reflector
logging. This is the main remaining source of log spam (output to stderr
instead of per-test logger) in controller unit tests.
WaitForCacheSyncWithContext takes advantage of the new cache.WaitFor +
NamedHasSynced functionality to finish "immediately" (= no virtual time
passed) in a synctest bubble. While at it, the return type gets improved so
that a failure is easier to handle.
Kubernetes-commit: 5ff323de791df88880f6e065f5de4b445e5c90ed
The main advantage is that waiting on channels creates a causal relationship
between goroutines which is visible to synctest. When a controller running in a
test's background goroutine inside a synctest bubble does a WaitFor, the test
can use synctest.Wait to wait for completion of the cache sync, without
requiring any test-specific "has controller synced" API. Without this, the test
had to poll or otherwise wait for the controller.
The polling in WaitForCacheSync moved the virtual clock forward by a random
amount, depending on how often it had to check in wait.Poll. Now tests can be
written such that all events during a test happen at a predictable time. This
will be demonstrated in a separate commit for the
pkg/controller/devicetainteviction unit test.
The benefit for normal production is immediate continuation when the last
informer is synced (not really a problem, but still...) and, more importantly,
nicer logging thanks to the names associated with the thing that is being
waited for. The caller decides whether logging is enabled or disabled and
describes what is being waited for (typically informer caches, but maybe also
event handlers or even something else entirely as long as it implements the
DoneChecker interface).
Before:
Waiting for caches to sync
Caches are synced
After:
Waiting for="cache and event handler sync"
Done waiting for="cache and event handler sync" instance="SharedIndexInformer *v1.Pod"
Done waiting for="cache and event handler sync" instance="SharedIndexInformer *v1.ResourceClaim"
Done waiting for="cache and event handler sync" instance="SharedIndexInformer *v1.ResourceSlice"
Done waiting for="cache and event handler sync" instance="SharedIndexInformer *v1.DeviceClass"
Done waiting for="cache and event handler sync" instance="SharedIndexInformer *v1alpha3.DeviceTaintRule"
Done waiting for="cache and event handler sync" instance="SharedIndexInformer *v1.ResourceClaim + event handler k8s.io/kubernetes/pkg/controller/devicetainteviction.(*Controller).Run"
Done waiting for="cache and event handler sync" instance="SharedIndexInformer *v1.Pod + event handler k8s.io/kubernetes/pkg/controller/devicetainteviction.(*Controller).Run"
Done waiting for="cache and event handler sync" instance="SharedIndexInformer *v1alpha3.DeviceTaintRule + event handler k8s.io/kubernetes/pkg/controller/devicetainteviction.(*Controller).Run"
Done waiting for="cache and event handler sync" instance="SharedIndexInformer *v1.ResourceSlice + event handler k8s.io/kubernetes/pkg/controller/devicetainteviction.(*Controller).Run"
The "SharedIndexInformer *v1.Pod" is also how this appears in metrics.
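The channel-based waiting can be sketched in a few lines. This is an illustrative reduction, not the client-go implementation: the doneChecker struct and waitForAll helper are assumptions standing in for the DoneChecker interface. It shows the two properties discussed above: blocking on channels (visible causality, no polling) and a name per instance for the log output.

```go
package main

import "fmt"

// doneChecker is a hypothetical stand-in for the DoneChecker idea: a named
// thing that signals completion by closing its done channel.
type doneChecker struct {
	name string
	done chan struct{}
}

// waitForAll blocks on each done channel in turn, logging the caller-chosen
// description and the name of each instance as it finishes.
func waitForAll(what string, checkers []*doneChecker) {
	fmt.Printf("Waiting for=%q\n", what)
	for _, c := range checkers {
		<-c.done // channel receive, not polling: causality is visible
		fmt.Printf("Done waiting for=%q instance=%q\n", what, c.name)
	}
}

func main() {
	pods := &doneChecker{name: "SharedIndexInformer *v1.Pod", done: make(chan struct{})}
	claims := &doneChecker{name: "SharedIndexInformer *v1.ResourceClaim", done: make(chan struct{})}
	close(pods.done)
	close(claims.done)
	waitForAll("cache and event handler sync", []*doneChecker{pods, claims})
}
```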
Kubernetes-commit: fdcbb6cba9a04c028b158bf66d505df7431f63fe
This improves logging and enables more informative waiting for cache sync in a
following commit. It addresses one klog.TODO in the Reflector.
The RealFIFOOptions and InformerOptions structs get extended the same way as
DeltaFIFOOptions before: a logger may be set, but it's not required. This is
not an API break.
That the name has to be passed separately is a bit annoying at first glance
because it could also be set directly on the logger through WithName, but
keeping it separate is better:
- name can be set without providing a logger
- name can be defaulted
- less code in the caller when passing through a logger and adding
the name only in the field
- last but not least, extracting the name is not supported in a portable
manner by logr
All in-tree references in production code get updated.
While at it, logging in the fifos gets updated to follow best practices: if
some code encounters an abnormal situation and then continues, it should use
utilruntime.HandleErrorWithLogger instead of normal error logging.
Existing "logger" fields get moved to the top because that is a more common
place for such a read-only field.
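The options pattern described above can be sketched like this. The sketch is illustrative only: it uses log/slog instead of klog, and the struct and function names are assumptions, not the real RealFIFOOptions/InformerOptions fields. It shows why the name stays a separate field: it can be set or defaulted without a logger being provided at all.

```go
package main

import (
	"log/slog"
	"os"
)

// FIFOOptions is a hypothetical stand-in for the options structs discussed
// above: both fields are optional, so adding them is not an API break.
type FIFOOptions struct {
	Logger *slog.Logger // optional; nil means "use a default logger"
	Name   string       // optional; defaulted independently of the logger
}

// newFIFO resolves the options, returning the logger (with the name
// attached as a field, not via WithName) and the resolved name.
func newFIFO(opts FIFOOptions) (*slog.Logger, string) {
	logger := opts.Logger
	if logger == nil {
		logger = slog.New(slog.NewTextHandler(os.Stdout, nil))
	}
	name := opts.Name
	if name == "" {
		name = "RealFIFO" // name can be defaulted without any logger
	}
	return logger.With("name", name), name
}

func main() {
	logger, _ := newFIFO(FIFOOptions{Name: "myFIFO"})
	logger.Info("fifo created")
}
```

Keeping the name in a field also sidesteps the fact that logr offers no portable way to read a name back out of a logger.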
Kubernetes-commit: 45251e5f654e6c052659d110cd721f9fbe185191
While time.Sleep is what the test needs, an arbitrary hook invocation may be
more acceptable in the production code because it is more general.
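The hook pattern looks roughly like this. The type and field names are hypothetical, chosen only to illustrate the idea: production code leaves the hook nil, while a test installs a sleeping (or otherwise delaying) function at exactly the point it needs.

```go
package main

import (
	"fmt"
	"time"
)

// processor is a hypothetical example type; beforeDistribute is an
// optional test hook invoked at the interesting point.
type processor struct {
	beforeDistribute func() // nil in production; set by tests
}

func (p *processor) distribute(item string) string {
	if p.beforeDistribute != nil {
		p.beforeDistribute() // a test can delay or block here
	}
	return "distributed " + item
}

func main() {
	prod := &processor{} // production: no hook, no sleep
	fmt.Println(prod.distribute("pod1"))

	tst := &processor{beforeDistribute: func() { time.Sleep(time.Millisecond) }}
	fmt.Println(tst.distribute("pod2"))
}
```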
Kubernetes-commit: 2ec0305d728bf5ce8f8df314a18e71aa120a00cf
In the unlikely situation that sharedProcessor.distribute was triggered by a
resync before sharedProcessor.run had a chance to start the listeners, the
sharedProcessor deadlocked: sharedProcessor.distribute held a read/write lock
on listenersLock while being blocked on the write to the listener's
channel. The listeners that would have read from those channels were never
started, because sharedProcessor.run was blocked trying to acquire a read lock on
listenersLock.
This gets fixed by releasing the read/write lock in sharedProcessor.distribute
while waiting for all listeners to be started. Because either all or no
listeners are started, the existing global listenersStarted boolean is
sufficient.
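A heavily simplified sketch of the fixed interaction, not the actual client-go code (the types and the started channel are illustrative): distribute checks the started flag under the lock, releases the lock, and only then blocks, so run can still acquire the lock and start the listeners.

```go
package main

import (
	"fmt"
	"sync"
)

// sharedProcessor is a toy model of the deadlock scenario described above.
type sharedProcessor struct {
	listenersLock    sync.RWMutex
	listenersStarted bool
	started          chan struct{} // closed once run has started all listeners
	ch               chan string   // unbuffered listener channel
}

func (p *sharedProcessor) distribute(item string) {
	p.listenersLock.RLock()
	started := p.listenersStarted
	p.listenersLock.RUnlock() // release BEFORE blocking: this is the fix
	if !started {
		<-p.started // wait for run without holding listenersLock
	}
	p.ch <- item
}

func (p *sharedProcessor) run(handled chan<- struct{}) {
	p.listenersLock.Lock() // would deadlock if distribute still held the lock
	p.listenersStarted = true
	go func() {
		fmt.Println("handled:", <-p.ch)
		close(handled)
	}()
	p.listenersLock.Unlock()
	close(p.started)
}

func main() {
	p := &sharedProcessor{started: make(chan struct{}), ch: make(chan string)}
	handled := make(chan struct{})
	go p.distribute("pod1") // resync-triggered distribute before run: the bad ordering
	p.run(handled)
	<-handled
}
```

Because either all or no listeners are started, one flag plus one channel is enough; no per-listener bookkeeping is needed.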
The TestListenerResyncPeriods test now runs twice, with and without the
artificial delay. It gets converted to use synctest, so it executes quickly
despite the time.Sleep calls and timing is deterministic. The enhanced log
output confirms that with the delay, the initial sync completes later:
=== RUN TestListenerResyncPeriods
shared_informer_test.go:236: 0s: listener3: handle: pod1
shared_informer_test.go:236: 0s: listener3: handle: pod2
shared_informer_test.go:236: 0s: listener1: handle: pod1
shared_informer_test.go:236: 0s: listener1: handle: pod2
shared_informer_test.go:236: 0s: listener2: handle: pod1
shared_informer_test.go:236: 0s: listener2: handle: pod2
shared_informer_test.go:236: 2s: listener2: handle: pod1
shared_informer_test.go:236: 2s: listener2: handle: pod2
shared_informer_test.go:236: 3s: listener3: handle: pod1
shared_informer_test.go:236: 3s: listener3: handle: pod2
--- PASS: TestListenerResyncPeriods (0.00s)
=== RUN TestListenerResyncPeriodsDelayed
shared_informer_test.go:236: 1s: listener1: handle: pod1
shared_informer_test.go:236: 1s: listener1: handle: pod2
shared_informer_test.go:236: 1s: listener2: handle: pod1
shared_informer_test.go:236: 1s: listener2: handle: pod2
shared_informer_test.go:236: 1s: listener3: handle: pod1
shared_informer_test.go:236: 1s: listener3: handle: pod2
shared_informer_test.go:236: 2s: listener2: handle: pod1
shared_informer_test.go:236: 2s: listener2: handle: pod2
shared_informer_test.go:236: 3s: listener3: handle: pod1
shared_informer_test.go:236: 3s: listener3: handle: pod2
--- PASS: TestListenerResyncPeriodsDelayed (0.00s)
Kubernetes-commit: e6ef79b2f6bb05205652e4fe48ffa523d9e3a1ec
I wasn't entirely sure whether this should return a value or a pointer to
satisfy the interface. Both work, so I benchmarked it elsewhere (REST
mapper). Mem allocs are the same (one alloc/call), but returning a value is 10%
slower when calling one method.
What I then benchmarked is whether pointer vs value receiver in the wrapper
makes a difference. Converting from value receiver (what I had before) to
pointer receiver reduced call overhead by 6%. That's because with a value
receiver, Go has to auto-generate a variant with a pointer receiver and call the
value receiver through that.
That can be seen in a debugger (call stack) and when setting breakpoints:
(dlv) b restMapperWrapper.KindForWithContext
Command failed: Location "restMapperWrapper.KindForWithContext" ambiguous: k8s.io/apimachinery/pkg/api/meta.restMapperWrapper.KindForWithContext, k8s.io/apimachinery/pkg/api/meta.(*restMapperWrapper).KindForWithContext…
Conventional wisdom is to define methods with value receivers because those can
also be called on immutable instances, making them more flexible.
But for types which will only ever be used via a pointer, I think pointer
receiver is better for the reasons above (small performance difference, easier
to debug).
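The receiver choice can be illustrated as follows. The names here are hypothetical simplifications, not the real REST mapper types: a wrapper that is only ever used through a pointer implements its interface with pointer-receiver methods, so no compiler-generated shim sits between the interface call and the method.

```go
package main

import "fmt"

// kindResolver is a stand-in interface for the sort of API discussed above.
type kindResolver interface {
	KindFor(resource string) string
}

// restMapperWrapper is only ever used via a pointer, so its method uses a
// pointer receiver; the method set of *restMapperWrapper satisfies the
// interface directly, with no auto-generated pointer-receiver variant.
type restMapperWrapper struct {
	suffix string
}

func (w *restMapperWrapper) KindFor(resource string) string {
	return resource + w.suffix
}

func main() {
	var r kindResolver = &restMapperWrapper{suffix: "Kind"}
	fmt.Println(r.KindFor("pods")) // → podsKind
}
```

With a value receiver instead, both restMapperWrapper.KindFor and the generated (*restMapperWrapper).KindFor would exist, which is exactly the ambiguity the dlv breakpoint output above shows.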
Kubernetes-commit: b21dcbcaa1ccf4995bf486afc37dc0321c5bdf0b
Signed-off-by: Min Jin <minkimzz@amazon.com>
Update staging/src/k8s.io/client-go/tools/cache/the_real_fifo.go
optimizing fifo loop
Co-authored-by: Marek Siarkowicz <marek.siarkowicz@protonmail.com>
Signed-off-by: Min Jin <minkimzz@amazon.com>
refactoring PopBatch to accept []Delta
Signed-off-by: Min Jin <minkimzz@amazon.com>
Kubernetes-commit: 611b4c1408f529de4d4e94e6dd33be2ed1df9276
The new helper returns a ListerWatcher that knows whether the provided client
explicitly does NOT support the WatchList semantics. This allows Reflectors
to adapt their behavior to client capabilities.
Kubernetes-commit: 3b93755c0c07ce898f1c2a3924adef6c3143f247