This fix allows Reflector/Informer callers to detect API errors with the standard Go errors.As unwrapping used by the apimachinery helpers. Combined with a custom WatchErrorHandler, this can be used to stop an informer that encounters specific errors, such as resource not found or forbidden.
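For illustration, a minimal sketch of such a handler; the stop-channel wiring is an assumption, while SetWatchErrorHandler, DefaultWatchErrorHandler, and the apierrors helpers are existing client-go/apimachinery APIs:

    import (
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        "k8s.io/client-go/tools/cache"
    )

    // installStopHandler shuts the informer down when the watched
    // resource is gone or access is denied.
    func installStopHandler(informer cache.SharedIndexInformer, stopCh chan struct{}) error {
        return informer.SetWatchErrorHandler(func(r *cache.Reflector, err error) {
            // errors.As-based unwrapping lets these helpers see the
            // wrapped API error.
            if apierrors.IsNotFound(err) || apierrors.IsForbidden(err) {
                close(stopCh) // ends the informer's Run(stopCh)
                return
            }
            // Keep the default logging/backoff for everything else.
            cache.DefaultWatchErrorHandler(r, err)
        })
    }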
Kubernetes-commit: 9ace604b63045ebbb066cab5e8508b51d0900a05
Currently an event recorder can send an event to a
broadcaster that is already stopped, resulting
in a panic. This ensures the broadcaster holds
a lock while it is shutting down and then forces
any senders to drop queued events following
broadcaster shutdown.
It also updates the Action, ActionOrDrop, Watch,
and WatchWithPrefix functions to return an error
when data is sent on the closed broadcaster
channel rather than panicking.
Lastly, it updates the unit tests to ensure the fix works correctly.
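For example, a minimal sketch of handling the new error returns; the nil event object is a placeholder for the demo:

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/watch"
    )

    func main() {
        b := watch.NewBroadcaster(10, watch.DropIfChannelFull)

        // Watch now returns an error instead of panicking if the
        // broadcaster has already been shut down.
        w, err := b.Watch()
        if err != nil {
            fmt.Printf("broadcaster stopped: %v\n", err)
            return
        }
        defer w.Stop()

        b.Shutdown()

        // After shutdown, ActionOrDrop surfaces an error rather than
        // sending on the closed channel and panicking.
        if queued, err := b.ActionOrDrop(watch.Added, nil); err != nil {
            fmt.Printf("event not sent: %v\n", err)
        } else if !queued {
            fmt.Println("queue full, event dropped")
        }
    }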
fixes: https://github.com/kubernetes/kubernetes/issues/108518
Signed-off-by: Andrew Stoycos <astoycos@redhat.com>
Kubernetes-commit: 6aa779f4ed3d3acdad2f2bf17fb27e11e23aabe4
* client-go: Remove unreachable return
Due to the way the switch statement is written,
the return at the end of the function can never be reached.
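A generic illustration (not the client-go code in question): when a switch is exhaustive and every branch returns, a trailing return is dead code.

    func sign(n int) string {
        switch {
        case n < 0:
            return "negative"
        case n > 0:
            return "positive"
        default:
            return "zero"
        }
        // return "" // unreachable; `go vet` reports it
    }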
Signed-off-by: Ismayil Mirzali <ismayilmirzeli@gmail.com>
* client-go: Refactor for clarity
Fixed one instance where the error message should be lowercase.
Made the fields in the struct literal explicit.
Signed-off-by: Ismayil Mirzali <ismayilmirzeli@gmail.com>
Kubernetes-commit: 75c0987de3cb9a0380873745f68dea2f0835a7a2
TransformingInformer is like a regular Informer, but allows for applying
custom transform functions on the objects received via list/watch API calls.
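For illustration, a minimal sketch of a transform function that trims memory-heavy metadata before objects reach the cache; the function name is hypothetical, the TransformFunc shape (func(interface{}) (interface{}, error)) comes from k8s.io/client-go/tools/cache:

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    // stripManagedFields is applied to every object received via
    // list/watch before it is stored in the informer's cache.
    func stripManagedFields(obj interface{}) (interface{}, error) {
        pod, ok := obj.(*v1.Pod)
        if !ok {
            return nil, fmt.Errorf("unexpected object type %T", obj)
        }
        pod = pod.DeepCopy() // never mutate the shared object in place
        pod.ManagedFields = nil
        return pod, nil
    }

Such a function can then be passed to cache.NewTransformingInformer.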
Kubernetes-commit: efd3490076c391823095b4c8bd6e847ae18eb012
If the informer's handlers are slow at processing objects, the DeltaFIFO
blocks the queue and the StreamWatchers cannot add new elements to the
queue, creating contention and causing problems such as high
memory usage.
The problem is not easy to identify from a user's perspective. Typically
you can use pprof to spot high memory usage in the StreamWatchers
or a handler consuming most of the CPU time, but users should not
have to profile the Go binary just to learn that.
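For reference, a minimal sketch of exposing pprof in a controller binary; the port is illustrative:

    package main

    import (
        "log"
        "net/http"
        _ "net/http/pprof" // registers /debug/pprof/* on the default mux
    )

    func main() {
        // Then: go tool pprof http://localhost:6060/debug/pprof/heap
        log.Fatal(http.ListenAndServe("localhost:6060", nil))
    }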
Metrics were disabled on the reflector because of memory leaks, and
monitoring the queue depth does not give a good signal either, since it
never grows large.
However, we can trace slow handlers and inform users about the problem.
Kubernetes-commit: d38c2df2c4b945bcf1f81714fc6bfd01bbd0f538
https://github.com/kubernetes/kubernetes/pull/87795 most likely
unintentionally increased the log level of "Starting reflector" and
"Stopping reflector", with the result that since Kubernetes 1.21
clients have printed those messages by default. This is undesirable; we
should use the original level 3 again.
Kubernetes-commit: fd972934e4916879b04508686302659ce82cfa75
Currently there is no way to specify the WatchListPageSize used by a Controller. This PR adds a field that can be used to specify it.
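For illustration, a minimal sketch of setting the new field; the ListerWatcher and the no-op Process func are assumptions, and the Config fields are from k8s.io/client-go/tools/cache as of this change:

    import (
        "time"

        v1 "k8s.io/api/core/v1"
        "k8s.io/client-go/tools/cache"
    )

    // newPagedController builds a Controller whose reflector issues its
    // initial LIST in pages of 500 objects.
    func newPagedController(lw cache.ListerWatcher) cache.Controller {
        fifo := cache.NewDeltaFIFOWithOptions(cache.DeltaFIFOOptions{
            KnownObjects: cache.NewStore(cache.MetaNamespaceKeyFunc),
        })
        return cache.New(&cache.Config{
            Queue:             fifo,
            ListerWatcher:     lw,
            ObjectType:        &v1.Pod{},
            FullResyncPeriod:  30 * time.Second,
            WatchListPageSize: 500, // the new field
            Process: func(obj interface{}) error {
                return nil // handle the Deltas for obj here
            },
        })
    }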
Change-Id: I241454a45dd94d3ea65a91b297f530e217f843aa
Kubernetes-commit: 43f5afe1a1dd058a2564cd3b2f330fc2a401f607
Currently, when ListAndWatch() receives a connection refused error, it is
assumed to be due to the apiserver being transiently unresponsive. In
situations where a controller runs outside the k8s cluster it is
controlling, it is more common for the controller to lose its connection
to the apiserver permanently, and it needs to back off its retries
exponentially rather than continuously spamming the logs with Watch
attempts that will never succeed.
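For reference, a minimal sketch of the error classification involved, using the existing apimachinery helper; the wrapped error is fabricated for the demo:

    package main

    import (
        "fmt"
        "syscall"

        utilnet "k8s.io/apimachinery/pkg/util/net"
    )

    func main() {
        err := fmt.Errorf("dial tcp 10.0.0.1:6443: %w", syscall.ECONNREFUSED)
        // Prints true: such errors now trigger the reflector's
        // exponential backoff instead of an immediate re-watch.
        fmt.Println(utilnet.IsConnectionRefused(err))
    }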
Kubernetes-commit: 1ff789f2bb9bf7fbb3df35977bc249c0dd019d31