The kube-apiserver validation expects the Count of an EventSeries to be
at least 2, otherwise it rejects the Event. There was a discrepancy
between the client and the server since the client was initializing an
EventSeries with a count of 1.
According to the original KEP, the first event emitted should have an
EventSeries set to nil and the second isomorphic event should have an
EventSeries with a count of 2. Thus, we should match the behavior
defined by the KEP and update the client.
Also, in an effort to keep old clients compatible with the servers,
we should allow Events with an EventSeries count of 1 to prevent any
unexpected rejections.
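As a hedged illustration of the counting behavior the KEP describes (the
helper name and cache handling are made up for the sketch, not the actual
client-go code):

    // recordIsomorphic updates a cached events/v1 Event when an isomorphic
    // Event is observed: the first occurrence keeps Series == nil, the second
    // occurrence starts the series at a count of 2, later ones increment it.
    package eventsketch

    import (
        "time"

        eventsv1 "k8s.io/api/events/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func recordIsomorphic(cached *eventsv1.Event) {
        now := metav1.MicroTime{Time: time.Now()}
        if cached.Series == nil {
            // Second occurrence overall: the series now covers two events.
            cached.Series = &eventsv1.EventSeries{Count: 2, LastObservedTime: now}
            return
        }
        cached.Series.Count++
        cached.Series.LastObservedTime = now
    }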
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
Kubernetes-commit: 62c9fa8fe6c2e04b1a40970e93055c2e92259b12
There was a data race in the recordToSink function that caused changes
to the events cache to be overridden if events were emitted
simultaneously via Eventf calls.
The race lies in the fact that when recording an Event, there might be
multiple calls updating the cache simultaneously. The lock period is
optimized so that, after updating the cache with the new Event, the lock
is released until the Event has been recorded on the apiserver side, and
the cache is then locked again to be updated with the new value returned
by the apiserver.
There are a few problems with this approach:
1. If two identical Events are emitted successively, the changes of the
second Event will override the first one. In code, the following
happens:
1. Eventf(ev1)
2. Eventf(ev2)
3. Lock cache
4. Set cache[getKey(ev1)] = &ev1
5. Unlock cache
6. Lock cache
7. Update cache[getKey(ev2)] = &ev1 + Series{Count: 1}
8. Unlock cache
9. Start attempting to record the first event &ev1 on the apiserver side.
This can be mitigated by recording a copy of the Event stored in the
cache instead of reusing the pointer from the cache (see the sketch
after this list).
2. When the Event has been recorded on the apiserver, the cache is
updated again with the value of the Event returned by the server.
This update will override any changes made to the cache entry while
attempting to record the new Event, since the cache was unlocked at
that time. This might lead to some inconsistencies when dealing with
EventSeries, since the count may be overridden or the client might even
try to record the first isomorphic Event multiple times.
This could be mitigated with a lock that has a larger scope, but we
shouldn't reflect the Event returned by the apiserver in the cache in
the first place, since mutation could mess with the aggregation by
either allowing users to manipulate values to update a different cache
entry or by ending up with two cache entries for the same Event.
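A minimal sketch of mitigation 1 (deep-copying the cached Event before
recording it); every name here is illustrative rather than the real
recordToSink implementation, and recordIsomorphic is the helper from the
sketch above:

    package eventsketch

    import (
        "sync"

        eventsv1 "k8s.io/api/events/v1"
    )

    type recorder struct {
        mu          sync.Mutex
        eventCache  map[string]*eventsv1.Event
        recordEvent func(*eventsv1.Event) // talks to the apiserver; injected for the sketch
    }

    func (r *recorder) recordToSink(ev *eventsv1.Event, key string) {
        r.mu.Lock()
        if prev, ok := r.eventCache[key]; ok {
            recordIsomorphic(prev) // bump Series on the cached entry
            ev = prev
        } else {
            r.eventCache[key] = ev
        }
        // Copy while still holding the lock and record the copy, never the
        // pointer, so concurrent Eventf calls cannot mutate the in-flight object.
        toSend := ev.DeepCopy()
        r.mu.Unlock()

        // Do not write the object returned by the apiserver back into the
        // cache (see point 2 above).
        r.recordEvent(toSend)
    }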
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
Kubernetes-commit: cfdd40b569d7630b9b31ddbe0557159b1f8b0f9e
When attempting to record a new Event and a new Series on the apiserver
at the same time, the patch of the Series might happen before the Event
is actually created. In that case, we handle the error and try to create
the Event. But the Event might have been created in the meantime, and
today that is treated as an error. So in order to handle that scenario,
we need to retry when a Create call for a Series results in an
AlreadyExists error.
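A hedged sketch of that retry; the sink interface is illustrative, only
the apierrors helpers are real client-go API:

    package seriessketch

    import (
        eventsv1 "k8s.io/api/events/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
    )

    type sink interface {
        Create(*eventsv1.Event) (*eventsv1.Event, error)
        Patch(*eventsv1.Event, []byte) (*eventsv1.Event, error)
    }

    // recordSeries patches the series of an existing Event, creating the
    // Event first if the patch raced ahead of the initial create.
    func recordSeries(s sink, ev *eventsv1.Event, patch []byte) (*eventsv1.Event, error) {
        recorded, err := s.Patch(ev, patch)
        if err == nil {
            return recorded, nil
        }
        if !apierrors.IsNotFound(err) {
            return nil, err
        }
        // The Event did not exist yet; try to create it.
        created, err := s.Create(ev)
        if apierrors.IsAlreadyExists(err) {
            // The Event showed up in the meantime: retry the series patch.
            return s.Patch(ev, patch)
        }
        return created, err
    }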
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
Kubernetes-commit: 92c7042e640e148f54aa112591a4550d5450a132
odinuge: sorted out some function signature changes during
cherry-picking that caused conflicts.
(cherry picked from commit e76dff38cf74c3c8ad9ed4d3bc6e3641d9b64565)
Signed-off-by: Odin Ugedal <odin@uged.al>
Kubernetes-commit: a8d2bc0ff7537bcb17e0b85333615dafd7c1e9a9
Since the behavior has now changed, and the old behavior leaked objects,
this adds a new comment about how Replace works.
Signed-off-by: Odin Ugedal <ougedal@palantir.com>
Signed-off-by: Odin Ugedal <odin@uged.al>
Kubernetes-commit: cd7deae436c328085bcb50681b06e1cc275801db
This is useful both to reduce the code complexity and to ensure clients
get the "newest" known version of an object when it is deleted. This is
all best-effort, but for clients it makes more sense to give them the
newest object they observed rather than an old one.
This is especially useful when an object is recreated, e.g.
object A with key K is in the KnownObjects store;
- DELETE delta for A is queued with key K
- CREATE delta for B is queued with key K
- Replace is called without any object with key K in it.
In this situation it is better to create a DELETE delta with a
DeletedFinalStateUnknown wrapping B (with this patch) than it is to give
the client a DeletedFinalStateUnknown wrapping A (without this patch).
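A hedged sketch of that choice; cache.DeletedFinalStateUnknown is the real
client-go type, while the surrounding selection logic is a simplification
of what Replace does:

    package deltasketch

    import "k8s.io/client-go/tools/cache"

    // tombstoneFor picks the object to wrap when key K is absent from the
    // relist: prefer the newest queued object (B above) over the stale
    // KnownObjects copy (A above).
    func tombstoneFor(key string, queuedNewest, known interface{}) cache.DeletedFinalStateUnknown {
        obj := known
        if queuedNewest != nil {
            obj = queuedNewest
        }
        return cache.DeletedFinalStateUnknown{Key: key, Obj: obj}
    }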
Signed-off-by: Odin Ugedal <ougedal@palantir.com>
Signed-off-by: Odin Ugedal <odin@uged.al>
Kubernetes-commit: 4f55d416f2e6b566eb397670b451d96712e638f1
This fixes an issue where a relist could result in a DELETED delta
with an object wrapped in a DeletedFinalStateUnknown object; then, on
the next relist, it would wrap that object inside another
DeletedFinalStateUnknown, leaving the user with a "double" layer
of DeletedFinalStateUnknowns.
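An illustrative guard for that situation (not the actual DeltaFIFO code):
if the object pending deletion is already a tombstone, reuse it instead
of wrapping it again:

    package deltasketch

    import "k8s.io/client-go/tools/cache"

    func asTombstone(key string, obj interface{}) interface{} {
        if _, ok := obj.(cache.DeletedFinalStateUnknown); ok {
            // Already wrapped by a previous relist; do not wrap it a second time.
            return obj
        }
        return cache.DeletedFinalStateUnknown{Key: key, Obj: obj}
    }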
Signed-off-by: Odin Ugedal <ougedal@palantir.com>
Signed-off-by: Odin Ugedal <odin@uged.al>
Kubernetes-commit: 8509d70d3c33a038f0b5111a5e5696c833f6685b
This fixes a race condition when a "short-lived" object
is created and the create event is still present on the queue
when a relist replaces the state. Previously that would result in the
object being leaked.
The way this could happen is roughly:
1. new object O is added, agent gets CREATED event for it
2. watch is terminated, and the agent runs a new list, L
3. CREATE event for O is still on the queue to be processed.
4. informer replaces the old data in store with L, and O is not in L
- Since O is not in the store, and not in the list L, no DELETED event
is queued
5. CREATE event for O is still on the queue to be processed.
6. CREATE event for O is processed
7. O is <leaked>; it is present in the cache but not in k8s.
With this patch, at step 4 above a DELETED event is created,
ensuring that the object will be removed.
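Since a relist can now deliver such a deletion as a DeletedFinalStateUnknown
tombstone, delete handlers should keep unwrapping it; this is the usual
client-go pattern, shown here with the core/v1 Pod type purely as an example:

    package deltasketch

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/tools/cache"
    )

    func onPodDelete(obj interface{}) {
        pod, ok := obj.(*corev1.Pod)
        if !ok {
            tombstone, ok := obj.(cache.DeletedFinalStateUnknown)
            if !ok {
                return // neither a Pod nor a tombstone; nothing to do
            }
            pod, ok = tombstone.Obj.(*corev1.Pod)
            if !ok {
                return
            }
        }
        _ = pod // clean up any state associated with the deleted Pod here
    }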
Signed-off-by: Odin Ugedal <ougedal@palantir.com>
Signed-off-by: Odin Ugedal <odin@uged.al>
Kubernetes-commit: bd4ec0acec8844bddc7780d322f8fc215d045046
* Add RedactSecrets function
* Move RedactSecrets method to existing RawBytesData case
* Update TestRedactSecrets to use new pattern of os.CreateTemp()
Kubernetes-commit: e721272d10dd6c4d85ff613182ba0eaddcec9272
Mark remotecommand.Executor as deprecated and related modifications.
Handle crash when streamer.stream panics
Add a test to verify that the stream is closed after the connection is closed
Remove blank line and update waiting time to 1s to avoid test flakes in CI.
Refine the tests of StreamExecutor according to comments.
Remove the comment about the context controlling the negotiation progress, and misc. cleanups.
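A hedged sketch of driving an exec session through the context-aware
StreamWithContext entry point; the pod name, namespace, command, and
function shape are illustrative:

    package execsketch

    import (
        "context"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/remotecommand"
    )

    func execInPod(config *rest.Config, clientset kubernetes.Interface) error {
        req := clientset.CoreV1().RESTClient().Post().
            Resource("pods").Namespace("default").Name("mypod").
            SubResource("exec").
            VersionedParams(&corev1.PodExecOptions{
                Command: []string{"sh", "-c", "echo hello"},
                Stdout:  true,
                Stderr:  true,
            }, scheme.ParameterCodec)

        exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
        if err != nil {
            return err
        }

        // Cancelling the context tears down the connection and the stream.
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()
        return exec.StreamWithContext(ctx, remotecommand.StreamOptions{
            Stdout: os.Stdout,
            Stderr: os.Stderr,
        })
    }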
Signed-off-by: arkbriar <arkbriar@gmail.com>
Kubernetes-commit: 42808c8343671e6783ba4c901dcd619bed648c3d
For correctness; technically this shouldn't be an issue since restarting a stopped processor is not supported.
Kubernetes-commit: 3a81341cfa6f7e2ca1b9bfc195c567dcdfaa4dea
To be able to implement controllers that dynamically decide
which resources to watch, it is required to be able to get rid of
dedicated watches and event handlers again. This requires the
ability to remove event handlers from SharedIndexInformers again.
Stopping an informer is not sufficient, because there might
be multiple controllers in a controller manager that independently
decide which resources to watch.
Unfortunately the ResourceEventHandler interface encourages the use of
value objects for handlers (like the ResourceEventHandlerFuncs
struct, which uses value receivers to implement the interface).
Go does not support comparison of function pointers, and therefore
such structs cannot be compared either. To be able
to remove all kinds of handlers, and to solve the problem of
multiple registrations of handlers, a registration handle is introduced.
It is returned when adding a handler and can later be used to remove
the registration again. This handle directly stores the created
listener to simplify the deletion.
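A short sketch of the registration-handle flow this introduces; the
informer setup and handler body are illustrative:

    package handlersketch

    import (
        "fmt"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
    )

    func watchTemporarily(clientset kubernetes.Interface, stop <-chan struct{}) error {
        factory := informers.NewSharedInformerFactory(clientset, 0)
        podInformer := factory.Core().V1().Pods().Informer()

        // The returned handle identifies this registration even though the
        // handler value itself is not comparable.
        reg, err := podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) { fmt.Println("added:", obj) },
        })
        if err != nil {
            return err
        }

        factory.Start(stop)
        factory.WaitForCacheSync(stop)

        // Later, when this controller no longer needs Pods, only its own
        // registration is removed; the shared informer keeps serving others.
        return podInformer.RemoveEventHandler(reg)
    }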
Kubernetes-commit: 7436af3302088c979b431856c432b95dd230f847
The lock acquired by tryAcquireOrRenew is released when the leader ends
its leadership. However, due to the cancellation of the context, the lock
may be set to an empty lock, so the Update cannot run normally, resulting
in a failure to release the lock.
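For context, a hedged sketch of the release path involved: with
ReleaseOnCancel the elector tries to update the lock record one last time
when its context is cancelled (namespace, name, and identity here are
illustrative):

    package leadersketch

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func run(ctx context.Context, clientset kubernetes.Interface, id string) {
        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Namespace: "default", Name: "example-lock"},
            Client:     clientset.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: id},
        }
        leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
            Lock:            lock,
            LeaseDuration:   15 * time.Second,
            RenewDeadline:   10 * time.Second,
            RetryPeriod:     2 * time.Second,
            ReleaseOnCancel: true, // release the lock when ctx is cancelled
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) { /* work until ctx is done */ },
                OnStoppedLeading: func() { /* leadership ended; release is attempted */ },
            },
        })
    }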
Signed-off-by: jackzhang <x_jackzhang@qq.com>
Kubernetes-commit: 8690ff6264cceb38bd81dec99bb8affcc40286a9
- Run hack/update-codegen.sh
- Run hack/update-generated-device-plugin.sh
- Run hack/update-generated-protobuf.sh
- Run hack/update-generated-runtime.sh
- Run hack/update-generated-swagger-docs.sh
- Run hack/update-openapi-spec.sh
- Run hack/update-gofmt.sh
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
Kubernetes-commit: a9593d634c6a053848413e600dadbf974627515f