odinuge: sorted out some function signature changes during
cherry-picking that caused conflicts.
(cherry picked from commit e76dff38cf74c3c8ad9ed4d3bc6e3641d9b64565)
Signed-off-by: Odin Ugedal <odin@uged.al>
Kubernetes-commit: a247e48bcd3742da7ddb43aa0b4d2f947afc3d33
Since the behavior has now changed, and the old behavior leaked objects,
this adds a new comment describing how Replace works.
Signed-off-by: Odin Ugedal <ougedal@palantir.com>
Signed-off-by: Odin Ugedal <odin@uged.al>
Kubernetes-commit: cc675a5367d9d09992d7f12b8a43a10d672370b9
This is useful both to reduce code complexity and to ensure clients
get the "newest" version of an object known when it is deleted. This is
all best-effort, but for clients it makes more sense to give them the
newest object they observed rather than an old one.
This is especially useful when an object is recreated, e.g.:
Object A with key K is in the KnownObjects store;
- a DELETE delta for A is queued with key K
- a CREATE delta for B is queued with key K
- a Replace arrives without any object with key K in it.
In this situation it's better to create a DELETE delta carrying a
DeletedFinalStateUnknown wrapping B (with this patch) than to give
the client a DeletedFinalStateUnknown wrapping A (without this patch).
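A minimal sketch of the client-side pattern this affects (the Pod type and the sample tombstone are illustrative assumptions): an OnDelete handler unwraps the tombstone and, with this patch, finds the newest observed object inside it.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/cache"
)

// onDelete shows the standard tombstone-unwrapping pattern for OnDelete
// handlers. With this patch, the tombstone's Obj is the newest object the
// client observed (B in the example above), not the stale A.
func onDelete(obj interface{}) {
	pod, ok := obj.(*v1.Pod)
	if !ok {
		tombstone, ok := obj.(cache.DeletedFinalStateUnknown)
		if !ok {
			fmt.Printf("unexpected type %T\n", obj)
			return
		}
		pod, ok = tombstone.Obj.(*v1.Pod)
		if !ok {
			fmt.Printf("tombstone contained unexpected type %T\n", tombstone.Obj)
			return
		}
	}
	fmt.Println("observed delete for pod:", pod.Name)
}

func main() {
	onDelete(cache.DeletedFinalStateUnknown{
		Key: "default/b",
		Obj: &v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "b", Namespace: "default"}},
	})
}
```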
Signed-off-by: Odin Ugedal <ougedal@palantir.com>
Signed-off-by: Odin Ugedal <odin@uged.al>
Kubernetes-commit: 4283020151ab233101e77996fd8084488057f9c2
This fixes an issue where a relist could result in a DELETED delta
with an object wrapped in a DeletedFinalStateUnknown object; on
the next relist, that object would then be wrapped inside another
DeletedFinalStateUnknown, leaving the user with a "double" layer
of DeletedFinalStateUnknowns.
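For illustration only (the key and Pod are made up), the nesting a client could previously observe after two relists looked roughly like this:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/cache"
)

func main() {
	pod := &v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "short-lived", Namespace: "default"}}

	// Before this fix, a second relist could produce a tombstone whose Obj
	// was itself the tombstone from the first relist.
	tombstone := cache.DeletedFinalStateUnknown{
		Key: "default/short-lived",
		Obj: cache.DeletedFinalStateUnknown{ // inner layer from the first relist
			Key: "default/short-lived",
			Obj: pod, // the actual last-known object
		},
	}

	// A single unwrap no longer reaches the Pod, so this assertion fails.
	_, ok := tombstone.Obj.(*v1.Pod)
	fmt.Println("single unwrap finds the Pod:", ok) // prints false
}
```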
Signed-off-by: Odin Ugedal <ougedal@palantir.com>
Signed-off-by: Odin Ugedal <odin@uged.al>
Kubernetes-commit: a818874dce54226ecc8ef384ff8b4c82aa6aaa85
This fixes a race condition when a "short lived" object
is created and the create event is still present on the queue
when a relist replaces the state. Previously that would lead to the
object being leaked.
The way this could happen is roughly:
1. a new object O is added, and the agent gets a CREATED event for it
2. the watch is terminated, and the agent runs a new list, L
3. the CREATE event for O is still on the queue to be processed
4. the informer replaces the old data in the store with L, and O is not in L
- Since O is not in the store, and not in the list L, no DELETED event
is queued
5. the CREATE event for O remains on the queue
6. the CREATE event for O is processed, re-adding O to the store
7. O is leaked; it is present in the cache but not in k8s
With this patch, step 4 above creates a DELETED event,
ensuring that the object will be removed.
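A minimal sketch of the scenario driven against DeltaFIFO directly, assuming the pre-1.27 PopProcessFunc signature; with this patch, the Replace queues a Deleted delta for O instead of leaking it:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/cache"
)

func main() {
	// KnownObjects mirrors the informer's store; it is empty because the
	// CREATE event for O was never processed before the relist.
	known := cache.NewStore(cache.MetaNamespaceKeyFunc)
	fifo := cache.NewDeltaFIFOWithOptions(cache.DeltaFIFOOptions{
		KeyFunction:  cache.MetaNamespaceKeyFunc,
		KnownObjects: known,
	})

	o := &v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "short-lived", Namespace: "default"}}

	// Steps 1-3: O's CREATE delta is queued but not yet processed.
	fifo.Add(o)

	// Step 4: the relist result L does not contain O. With this patch,
	// Replace queues a Deleted delta (DeletedFinalStateUnknown) for O.
	fifo.Replace([]interface{}{}, "1")

	// Popping now yields both the Added and the Deleted delta for O,
	// so the object is removed instead of lingering in the cache.
	fifo.Pop(func(obj interface{}) error {
		for _, d := range obj.(cache.Deltas) {
			fmt.Printf("delta: %s (%T)\n", d.Type, d.Object)
		}
		return nil
	})
}
```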
Signed-off-by: Odin Ugedal <ougedal@palantir.com>
Signed-off-by: Odin Ugedal <odin@uged.al>
Kubernetes-commit: bdc4a22309fc51f824aca41f11ee4466758ea9b0
- Run hack/update-codegen.sh
- Run hack/update-generated-device-plugin.sh
- Run hack/update-generated-protobuf.sh
- Run hack/update-generated-runtime.sh
- Run hack/update-generated-swagger-docs.sh
- Run hack/update-openapi-spec.sh
- Run hack/update-gofmt.sh
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
Kubernetes-commit: a9593d634c6a053848413e600dadbf974627515f
This fix allows Reflector/Informer callers to detect API errors using the standard Go errors.As unwrapping used by the apimachinery helper methods. Combined with a custom WatchErrorHandler, this can be used to stop an informer that encounters specific errors, such as "resource not found" or "forbidden".
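A sketch of how a caller might combine this with a custom WatchErrorHandler (the informer and cancel function are assumed, and the handler must be installed before the informer starts):

```go
package example

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/tools/cache"
)

// installStopOnFatal registers a WatchErrorHandler that cancels the
// informer's run context on NotFound/Forbidden; all other errors fall
// through to the default handler.
func installStopOnFatal(informer cache.SharedIndexInformer, cancel context.CancelFunc) error {
	return informer.SetWatchErrorHandler(func(r *cache.Reflector, err error) {
		// These helpers unwrap with errors.As, which now reaches the
		// underlying API status error.
		if apierrors.IsNotFound(err) || apierrors.IsForbidden(err) {
			cancel() // the error is terminal; stop the informer
			return
		}
		cache.DefaultWatchErrorHandler(r, err)
	})
}
```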
Kubernetes-commit: 9ace604b63045ebbb066cab5e8508b51d0900a05
Currently an event recorder can send an event to a
broadcaster that is already stopped, resulting
in a panic. This ensures the broadcaster holds
a lock while it is shutting down and then forces
any senders to drop queued events following
broadcaster shutdown.
It also updates the Action, ActionOrDrop, Watch,
and WatchWithPrefix functions to return an error
when data is sent on the closed broadcaster
channel rather than panicking.
Lastly, it updates unit tests to ensure the fix works correctly.
fixes: https://github.com/kubernetes/kubernetes/issues/108518
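A minimal sketch of the new behavior (queue length and channel behavior are arbitrary):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/watch"
)

func main() {
	b := watch.NewBroadcaster(100, watch.WaitIfChannelFull)
	b.Shutdown()

	// After this fix, Watch (like Action, ActionOrDrop, and
	// WatchWithPrefix) returns an error on a stopped broadcaster
	// instead of panicking.
	if _, err := b.Watch(); err != nil {
		fmt.Println("watch on stopped broadcaster:", err)
	}
}
```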
Signed-off-by: Andrew Stoycos <astoycos@redhat.com>
Kubernetes-commit: 6aa779f4ed3d3acdad2f2bf17fb27e11e23aabe4
* client-go: Remove unreachable return
Due to the way the switch statement is done,
the return at the end of the function will never be reached.
Signed-off-by: Ismayil Mirzali <ismayilmirzeli@gmail.com>
* client-go: Refactor for clarity
Fixed one instance where the error message should be lowercase.
Made the fields in the struct literal more explicit.
Signed-off-by: Ismayil Mirzali <ismayilmirzeli@gmail.com>
Kubernetes-commit: 75c0987de3cb9a0380873745f68dea2f0835a7a2
TransformingInformer is like a regular Informer, but allows applying
custom transform functions to the objects received via list/watch API calls.
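A sketch of the new API under the assumption of an existing ListerWatcher; the transform here strips managedFields before objects reach the store:

```go
package example

import (
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/client-go/tools/cache"
)

// newPodInformer builds a TransformingInformer whose transform drops
// managedFields before objects are cached, reducing memory usage.
func newPodInformer(lw cache.ListerWatcher) (cache.Store, cache.Controller) {
	stripManagedFields := func(obj interface{}) (interface{}, error) {
		if accessor, err := meta.Accessor(obj); err == nil {
			accessor.SetManagedFields(nil)
		}
		return obj, nil
	}
	return cache.NewTransformingInformer(
		lw,             // ListerWatcher for the resource (assumed to exist)
		&v1.Pod{},      // expected object type
		30*time.Second, // resync period
		cache.ResourceEventHandlerFuncs{},
		stripManagedFields,
	)
}
```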
Kubernetes-commit: efd3490076c391823095b4c8bd6e847ae18eb012
If the informer's handlers are slow at processing objects, the DeltaFIFO
blocks the queue and the StreamWatchers cannot add new elements to the
queue, creating contention and causing various problems, such as high
memory usage.
The problem is not easy to identify from a user perspective: typically
you can use pprof to identify high memory usage in the StreamWatchers
or some handler consuming most of the CPU time, but users should not
have to profile the Go binary just to learn that.
Metrics were disabled on the reflector because of memory leaks; monitoring
the queue depth can't give a good signal either, since it never grows large.
However, we can trace slow handlers and inform users about the problem.
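Illustrative only, not the actual client-go change: the idea is to time each handler invocation and surface slow ones, roughly like:

```go
package example

import (
	"time"

	"k8s.io/klog/v2"
)

// logIfSlow wraps a handler and warns when a single object takes longer
// than threshold to process; name and threshold are illustrative.
func logIfSlow(name string, threshold time.Duration, h func(obj interface{})) func(obj interface{}) {
	return func(obj interface{}) {
		start := time.Now()
		h(obj)
		if d := time.Since(start); d > threshold {
			klog.Warningf("handler %s took %v processing an object; slow handlers block the DeltaFIFO", name, d)
		}
	}
}
```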
Kubernetes-commit: d38c2df2c4b945bcf1f81714fc6bfd01bbd0f538
https://github.com/kubernetes/kubernetes/pull/87795 most likely
unintentionally increased the log level of "Starting reflector" and
"Stopping reflector", with the result that since Kubernetes 1.21
clients have printed those messages by default. This is undesirable;
we should use the original level 3 again.
Kubernetes-commit: fd972934e4916879b04508686302659ce82cfa75