Commit Graph

299 Commits

tao.yang
0447e1f9ce cleanup: omit comparison with bool constants
Signed-off-by: tao.yang <tao.yang@daocloud.io>

Kubernetes-commit: b35357b6c08f21ba0fd312536051394c2567ec79
2023-09-04 16:59:23 +08:00
Lukasz Szaszkiewicz
0f984dc7fc client-go/reflector: introduce a data consistency mechanism for the watch-list feature.
checkWatchListConsistencyIfRequested performs a data consistency check only when
the KUBE_WATCHLIST_INCONSISTENCY_DETECTOR environment variable was set during binary startup.

The consistency check is meant to be enforced only in CI, not in production.
The check ensures that the data retrieved by the watch-list API call
is exactly the same as the data received by the standard list API call.

Note that this function will panic when a data inconsistency is detected.
This is intentional because we want to catch it in CI.
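A minimal sketch of how such an env-gated check can work; the helper names below are hypothetical, only the environment variable comes from the commit:

```go
package main

import (
	"fmt"
	"os"
	"reflect"
)

// The gate is read once, at binary startup, as the commit describes.
var consistencyDetectionEnabled = os.Getenv("KUBE_WATCHLIST_INCONSISTENCY_DETECTOR") != ""

// checkListConsistency is a hypothetical stand-in for the real check: it
// compares the items returned by the watch-list call against a standard
// list call and panics on any difference, so the CI job fails loudly.
func checkListConsistency(watchListItems, listItems []string) {
	if !consistencyDetectionEnabled {
		return // never enforced in production
	}
	if !reflect.DeepEqual(watchListItems, listItems) {
		panic(fmt.Sprintf("watch-list data %v differs from list data %v", watchListItems, listItems))
	}
}

func main() {
	checkListConsistency([]string{"pod-a"}, []string{"pod-a"}) // consistent: no panic
}
```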

Kubernetes-commit: b31e7793d0d873a71c90caf8455556aa905cf88d
2023-10-06 14:26:47 +02:00
Lukasz Szaszkiewicz
1e0855a7ac reflector: fallback to the previous mode on any error
Originally we honored only apierrors.IsInvalid,
but we decided to fall back on any error
because it is better to make progress than to deadlock.
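The shape of the change, as a sketch with hypothetical function names (the real reflector code is more involved):

```go
package sketch

// watchListWithFallback illustrates the broadened fallback: previously only
// apierrors.IsInvalid errors fell back to the classic list+watch path; now
// any watch-list error does, so the reflector keeps making progress.
func watchListWithFallback(watchList, listAndWatch func() error) error {
	if err := watchList(); err != nil {
		// was: if !apierrors.IsInvalid(err) { return err }
		return listAndWatch() // fall back to the previous mode on any error
	}
	return nil
}
```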

Kubernetes-commit: 4b3915017950a114124a88c5d308bd8bfb9ec48e
2023-10-04 08:17:10 +02:00
Lukasz Szaszkiewicz
2c9d749027 reflector: close an established watcher when the StopCh was closed
Kubernetes-commit: 26f113be2fc71c7a41a59145eee5d4da9d062b9e
2023-10-03 13:49:21 +02:00
Lukasz Szaszkiewicz
d0ea06d597 cache/reflector: check the value of the initial-events-end annotation
Kubernetes-commit: 04668c00432cbd552b08eb94f829643facbd2061
2023-09-19 12:59:23 +02:00
Dr. Stefan Schimanski
00f8b3aa35 client-go: log proper 'caches populated' message, with type and source and only once
Signed-off-by: Dr. Stefan Schimanski <stefan.schimanski@gmail.com>

Kubernetes-commit: a1809ffae377f3abbae12b137073ca4c473743cd
2023-08-07 12:56:32 +02:00
John Howard
702d7378b6 informer: fix race against Run and SetTransform/SetWatchErrorHandler
`SetWatchErrorHandler` claims it will fail if Run() has already started.
But if they are called concurrently, it will actually trigger a data
race.

With this PR:
```
62702 runs so far, 0 failures (100.00% pass rate). 59.152682ms avg, 189.068387ms max, 26.623785ms min
```

Without this PR:
```
5012 runs so far, 38 failures (99.25% pass rate). 58.675502ms avg, 186.018084ms max, 29.468104ms min
```
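The fix, in sketch form, is the usual guard: SetWatchErrorHandler takes the same lock that Run takes before flipping started (simplified; the real SharedIndexInformer keeps this state under its startedLock):

```go
package main

import (
	"fmt"
	"sync"
)

type WatchErrorHandler func(err error)

type informer struct {
	mu      sync.Mutex
	started bool
	handler WatchErrorHandler
}

// SetWatchErrorHandler acquires the same lock Run acquires before setting
// started, so the two can no longer race.
func (i *informer) SetWatchErrorHandler(h WatchErrorHandler) error {
	i.mu.Lock()
	defer i.mu.Unlock()
	if i.started {
		return fmt.Errorf("informer has already started")
	}
	i.handler = h
	return nil
}

func (i *informer) Run(stopCh <-chan struct{}) {
	i.mu.Lock()
	i.started = true
	i.mu.Unlock()
	<-stopCh // run loop elided
}

func main() {
	inf := &informer{}
	_ = inf.SetWatchErrorHandler(func(err error) { fmt.Println("watch error:", err) })
}
```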

Kubernetes-commit: 35d2431b3a89c5bd693846952e9d27ce4e3a0754
2023-05-08 10:11:54 -07:00
scott
6c7d1bc996 Add WithAlloc interface and stub implementations with base benchmarks
Kubernetes-commit: b8a3bd673dc61b4d7a0ad0f54fa423e0160078cf
2023-05-27 17:57:35 -04:00
Marcel Zieba
bae10246dd Improve backoff policy in reflector.
Before, we used two separate backoff managers for List and Watch
calls; now they share a single backoff manager.
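Structurally, the change amounts to this sketch, assuming the wait.BackoffManager interface (actual field names in reflector.go differ):

```go
package sketch

import "k8s.io/apimachinery/pkg/util/wait"

// Before: separate managers meant List failures and Watch failures backed
// off independently of each other.
type reflectorBefore struct {
	listBackoff  wait.BackoffManager
	watchBackoff wait.BackoffManager
}

// After: one shared manager paces every connection attempt to the apiserver.
type reflectorAfter struct {
	backoff wait.BackoffManager // shared by List and Watch
}
```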

Kubernetes-commit: 337728b02559dec8a613fdef174f732da9cae310
2023-05-19 14:28:31 +02:00
kkkkun
3195e36899 Read ENABLE_CLIENT_GO_WATCH_LIST_ALPHA when constructing a new reflector
Signed-off-by: kkkkun <scuzk373x@gmail.com>

Kubernetes-commit: 2eed9b4143a9009b3327003ae50dbbebb44d2841
2023-05-24 21:09:53 +08:00
Daniel Smith
aa75d3bd59 lavalamp is taking a long break
Kubernetes-commit: 1ffe3f467e8b8033312b7c68943d58125fd27663
2023-05-11 16:43:38 +00:00
Mike Spreitzer
8301a60862 Added conversions to/from NamespacedName
Also renamed file to something more on-point.
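For context, the strings in question are the informer cache keys; a NamespacedName-shaped value is the structured alternative. A sketch using long-standing helpers (the ObjectName type added in this series provides such conversions natively; exact method names vary by client-go version):

```go
package sketch

import (
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/tools/cache"
)

// keyToNamespacedName converts a "namespace/name" cache key into the
// structured form. Cluster-scoped objects have an empty namespace.
func keyToNamespacedName(key string) (types.NamespacedName, error) {
	ns, name, err := cache.SplitMetaNamespaceKey(key)
	if err != nil {
		return types.NamespacedName{}, err
	}
	return types.NamespacedName{Namespace: ns, Name: name}, nil
}
```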

Kubernetes-commit: 8d92cfb13163f596feddfe1b5e734fd1ba4f6323
2023-03-22 16:55:07 -04:00
Mike Spreitzer
42799afb70 Add structured alternatives to strings in client-go/tools/cache
Kubernetes-commit: ec9515a828ec3907ffc66913947adf4e8ee7b9d6
2023-03-22 16:21:17 -04:00
Daniel Smith
c3b84f0438 Change where transformers are called.
Kubernetes-commit: e76dff38cf74c3c8ad9ed4d3bc6e3641d9b64565
2023-03-14 23:05:20 +00:00
Tom Wieczorek
9e8c663097 Properly align synctrack.SingleFileTracker struct
count is used with atomic operations, so it must be 64-bit aligned;
otherwise atomic operations will panic. Placing it at the top of the
struct guarantees that, even on 32-bit arches.
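In sketch form (the field names are illustrative; SingleFileTracker's real layout is in synctrack.go):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Bad: on 32-bit arches, count can land on a 4-byte boundary after start,
// and 64-bit atomic ops on it panic ("unaligned 64-bit atomic operation").
type trackerBad struct {
	start bool
	count int64 // not guaranteed 8-byte aligned on 32-bit platforms
}

// Good: the first word of an allocated struct is 64-bit aligned, so keeping
// the atomically accessed field at the top guarantees alignment everywhere.
type trackerGood struct {
	count int64 // placed first: always 8-byte aligned
	start bool
}

func main() {
	t := &trackerGood{}
	atomic.AddInt64(&t.count, 1)
	fmt.Println(atomic.LoadInt64(&t.count))
}
```

Since Go 1.19, sync/atomic.Int64 provides this alignment guarantee automatically.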

This fixes panics like that one observed in kube-apiserver:

    E0310 13:48:47.476124     676 runtime.go:77] Observed a panic: unaligned 64-bit atomic operation
    goroutine 141 [running]:
    k8s.io/apimachinery/pkg/util/runtime.logPanic({0x2482378, 0x2db2ff8})
    	vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x94
    k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x0})
    	vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x78
    panic({0x2482378, 0x2db2ff8})
    	/usr/local/go/src/runtime/panic.go:884 +0x218
    runtime/internal/atomic.panicUnaligned()
    	/usr/local/go/src/runtime/internal/atomic/unaligned.go:8 +0x24
    runtime/internal/atomic.Load64(0x685f794)
    	/usr/local/go/src/runtime/internal/atomic/atomic_arm.s:280 +0x14
    k8s.io/client-go/tools/cache/synctrack.(*SingleFileTracker).HasSynced(0x685f790)
    	vendor/k8s.io/client-go/tools/cache/synctrack/synctrack.go:115 +0x3c
    k8s.io/client-go/tools/cache.(*processorListener).HasSynced(0x6013e60)
    	vendor/k8s.io/client-go/tools/cache/shared_informer.go:907 +0x20
    k8s.io/client-go/tools/cache.WaitForCacheSync.func1()
    	vendor/k8s.io/client-go/tools/cache/shared_informer.go:332 +0x50
    k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2dcf274, 0x607c600})
    	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1c
    k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x2dcf274, 0x607c600}, 0x6382050)
    	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:262 +0x64
    k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x2dcf274, 0x607c600}, 0x64a6060, 0x6382050)
    	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:649 +0x11c
    k8s.io/apimachinery/pkg/util/wait.poll({0x2dcf274, 0x607c600}, 0x1, 0x64a6060, 0x6382050)
    	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:600 +0xc4
    k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x2dcf274, 0x607c600}, 0x5f5e100, 0x6382050)
    	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:551 +0x60
    k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x5f5e100, 0x6298020, 0x607c600)
    	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:542 +0x48
    k8s.io/client-go/tools/cache.WaitForCacheSync(0x607c600, {0x6298000, 0x3, 0x3})
    	vendor/k8s.io/client-go/tools/cache/shared_informer.go:329 +0x80
    k8s.io/client-go/tools/cache.WaitForNamedCacheSync({0x283c5e1, 0xf}, 0x607c600, {0x6298000, 0x3, 0x3})
    	vendor/k8s.io/client-go/tools/cache/shared_informer.go:316 +0xe8
    created by k8s.io/kubernetes/plugin/pkg/auth/authorizer/node.AddGraphEventHandlers
    	plugin/pkg/auth/authorizer/node/graph_populator.go:65 +0x5b0
    panic: unaligned 64-bit atomic operation [recovered]
    	panic: unaligned 64-bit atomic operation

    goroutine 141 [running]:
    k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x0})
    	vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:56 +0xf4
    panic({0x2482378, 0x2db2ff8})
    	/usr/local/go/src/runtime/panic.go:884 +0x218
    runtime/internal/atomic.panicUnaligned()
    	/usr/local/go/src/runtime/internal/atomic/unaligned.go:8 +0x24
    runtime/internal/atomic.Load64(0x685f794)
    	/usr/local/go/src/runtime/internal/atomic/atomic_arm.s:280 +0x14
    k8s.io/client-go/tools/cache/synctrack.(*SingleFileTracker).HasSynced(0x685f790)
    	vendor/k8s.io/client-go/tools/cache/synctrack/synctrack.go:115 +0x3c
    k8s.io/client-go/tools/cache.(*processorListener).HasSynced(0x6013e60)
    	vendor/k8s.io/client-go/tools/cache/shared_informer.go:907 +0x20
    k8s.io/client-go/tools/cache.WaitForCacheSync.func1()
    	vendor/k8s.io/client-go/tools/cache/shared_informer.go:332 +0x50
    k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2dcf274, 0x607c600})
    	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1c
    k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x2dcf274, 0x607c600}, 0x6382050)
    	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:262 +0x64
    k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x2dcf274, 0x607c600}, 0x64a6060, 0x6382050)
    	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:649 +0x11c
    k8s.io/apimachinery/pkg/util/wait.poll({0x2dcf274, 0x607c600}, 0x1, 0x64a6060, 0x6382050)
    	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:600 +0xc4
    k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x2dcf274, 0x607c600}, 0x5f5e100, 0x6382050)
    	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:551 +0x60
    k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x5f5e100, 0x6298020, 0x607c600)
    	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:542 +0x48
    k8s.io/client-go/tools/cache.WaitForCacheSync(0x607c600, {0x6298000, 0x3, 0x3})
    	vendor/k8s.io/client-go/tools/cache/shared_informer.go:329 +0x80
    k8s.io/client-go/tools/cache.WaitForNamedCacheSync({0x283c5e1, 0xf}, 0x607c600, {0x6298000, 0x3, 0x3})
    	vendor/k8s.io/client-go/tools/cache/shared_informer.go:316 +0xe8
    created by k8s.io/kubernetes/plugin/pkg/auth/authorizer/node.AddGraphEventHandlers
    	plugin/pkg/auth/authorizer/node/graph_populator.go:65 +0x5b0

Kubernetes-commit: ffcf653e0666366e6241c99d9418e830840afa0f
2023-03-13 08:37:13 +01:00
Patrick Ohly
29ad2bc1d4 client-go: shut down watch reflector as soon as stop channel closes
Without this change, leaked goroutines were sometimes reported for
test/integration/scheduler_perf. The one that caused the cleanup to be delayed
was this one:

    goleak.go:50: found unexpected goroutines:
        [Goroutine 2704 in state chan receive, 2 minutes, with k8s.io/client-go/tools/cache.(*Reflector).watch on top of the stack:
        goroutine 2704 [chan receive, 2 minutes]:
        k8s.io/client-go/tools/cache.(*Reflector).watch(0xc00453f590, {0x0, 0x0}, 0x1f?, 0xc00a128080?)
                /nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:388 +0x5b3
        k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch(0xc00453f590, 0xc006e94900)
                /nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:324 +0x3bd
        k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
                /nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:279 +0x45
        k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc007aafee0)
                /nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:157 +0x49
        k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc003e18150?, {0x75e37c0, 0xc00389c280}, 0x1, 0xc006e94900)
                /nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:158 +0xcf
        k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00453f590, 0xc006e94900)
                /nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:278 +0x257
        k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
                /nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:58 +0x3f
        k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
                /nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:75 +0x74
        created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
                /nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0xe5

watch() was stuck in an exponential backoff timeout. Logging confirmed that:

        I0309 21:14:21.756149 1572727 reflector.go:387] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://127.0.0.1:38269/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=7m47s&timeoutSeconds=467&watch=true": dial tcp 127.0.0.1:38269: connect: connection refused - backing off
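The shape of the fix is to make the backoff wait select on the stop channel as well (a sketch, not the literal reflector code):

```go
package main

import (
	"fmt"
	"time"
)

// backoffOrStop waits for the backoff timer, but returns immediately when
// stopCh closes, so a reflector in backoff no longer outlives its caller.
func backoffOrStop(backoff time.Duration, stopCh <-chan struct{}) bool {
	t := time.NewTimer(backoff)
	defer t.Stop()
	select {
	case <-stopCh:
		return false // shut down now instead of finishing the backoff
	case <-t.C:
		return true // backoff elapsed, retry the watch
	}
}

func main() {
	stopCh := make(chan struct{})
	close(stopCh)
	fmt.Println(backoffOrStop(time.Minute, stopCh)) // false: returns immediately
}
```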

Kubernetes-commit: b4751a52d53d4acc6a5ce3e796938c9a12f81fcb
2023-03-09 21:19:17 +01:00
Lukasz Szaszkiewicz
262b98af03 cache/controller: Add ENABLE_CLIENT_GO_WATCH_LIST_ALPHA
Kubernetes-commit: 966b26d55c22f7fbf20841a3a993de4f984d88db
2023-03-07 12:34:11 +01:00
Lukasz Szaszkiewicz
ac0a428e7a reflector watchlist tests
Kubernetes-commit: 194ffd779e7699c3f756f12e604b479b73f9f66c
2023-03-06 09:24:36 +01:00
Lukasz Szaszkiewicz
b33e818785 reflector: use watchlist
Kubernetes-commit: 51ec7305aa9fc2541c46c4836f5e150c818341f9
2023-03-09 14:11:59 +01:00
Lukasz Szaszkiewicz
d1c260eddc reflector: introduce watchList
Kubernetes-commit: 2fed6b8a19dde62fb6824aa9d008baedf51c2466
2023-03-09 14:08:58 +01:00
Lukasz Szaszkiewicz
ac7598edb4 reflector: allow watch method to accept a watcher
Kubernetes-commit: f6161a51e93bfa70585f1797f447f13ec1fde68b
2023-03-08 12:52:32 +01:00
Lukasz Szaszkiewicz
f694a7978b reflector: extract watch and startResyncAsync methods
Kubernetes-commit: 34fe27355b8b5acc4d29d053ed4361b4f72e147b
2023-03-02 16:26:41 +01:00
Odin Ugedal
b2a4f028d6 client-go/cache: update Replace comment to be more clear
Since the behavior has now changed, and the old behavior leaked objects,
this adds a new comment describing how Replace works.

Signed-off-by: Odin Ugedal <ougedal@palantir.com>
Signed-off-by: Odin Ugedal <odin@uged.al>

Kubernetes-commit: 27f4bcae5c52a3bb88141f940ec23d907a15cde5
2023-02-13 11:23:50 +00:00
Odin Ugedal
a01b5a2a96 client-go/cache: rewrite Replace to check queue first
This is useful both to reduce code complexity and to ensure clients
get the "newest" version of an object known when it is deleted. This is
all best-effort, but for clients it makes more sense to give them the
newest object they observed rather than an old one.

This is especially useful when an object is recreated, e.g.:

Object A with key K is in the KnownObjects store;
- DELETE delta for A is queued with key K
- CREATE delta for B is queued with key K
- Replace arrives without any object with key K in it.

In this situation it is better to create a DELETE delta carrying a
DeletedFinalStateUnknown with B (with this patch) than to give
the client a DeletedFinalStateUnknown with A (without this patch).
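On the consumer side, these tombstones surface in delete handlers, which should unwrap them; this is the long-standing client-go pattern:

```go
package sketch

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/cache"
)

// onDelete unwraps a DeletedFinalStateUnknown tombstone. When the informer
// misses the final state of an object (e.g. across a relist), the handler
// receives the last state the informer knew about; with this patch, that is
// the newest version it observed.
func onDelete(obj interface{}) {
	pod, ok := obj.(*v1.Pod)
	if !ok {
		tombstone, ok := obj.(cache.DeletedFinalStateUnknown)
		if !ok {
			fmt.Printf("unexpected type %T\n", obj)
			return
		}
		pod, ok = tombstone.Obj.(*v1.Pod)
		if !ok {
			fmt.Printf("tombstone contained unexpected type %T\n", tombstone.Obj)
			return
		}
	}
	fmt.Println("deleted:", pod.Name)
}
```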

Signed-off-by: Odin Ugedal <ougedal@palantir.com>
Signed-off-by: Odin Ugedal <odin@uged.al>

Kubernetes-commit: 7bcc3e00fc28b2548886d04639a2e352ab37fb55
2023-02-13 11:12:37 +00:00
Odin Ugedal
b250bf51ae client-go/cache: merge ReplaceMakesDeletionsForObjectsInQueue tests
Signed-off-by: Odin Ugedal <ougedal@palantir.com>
Signed-off-by: Odin Ugedal <odin@uged.al>

Kubernetes-commit: cd3e98b65c1339a8adc157175630de099a057d3f
2023-02-10 14:30:10 +00:00
Odin Ugedal
8279635aa4 client-go/cache: fix missing delete event on replace without knownObjects
This fixes an issue where a relist could result in a DELETED delta
with an object wrapped in a DeletedFinalStateUnknown object; then, on
the next relist, it would wrap that object inside another
DeletedFinalStateUnknown, leaving the user with a "double" layer
of DeletedFinalStateUnknowns.

Signed-off-by: Odin Ugedal <ougedal@palantir.com>
Signed-off-by: Odin Ugedal <odin@uged.al>

Kubernetes-commit: 0bf0546d9f75d92c801e81c9f7adf040bba64102
2023-02-10 14:16:26 +00:00
Odin Ugedal
2ded6b6eb8 client-go/cache: fix missing delete event on replace
This fixes a race condition when a "short lived" object
is created and the create event is still present in the queue
when a relist replaces the state. Previously that would lead to the
object being leaked.

The way this could happen is roughly:

1. A new object O is added; the agent gets a CREATED event for it.
2. The watch is terminated, and the agent runs a new list, L.
3. The CREATE event for O is still in the queue to be processed.
4. The informer replaces the old data in the store with L, and O is not in L.
   - Since O is neither in the store nor in the list L, no DELETED event
     is queued.
5. The CREATE event for O is still in the queue to be processed.
6. The CREATE event for O is processed.
7. O is <leaked>; it is present in the cache but not in k8s.

With this patch, at step 4 above a DELETED event is created,
ensuring that the object will be removed.
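A condensed sketch of that step-4 logic (hypothetical and simplified; the real DeltaFIFO Replace also consults knownObjects and the pending queue):

```go
package sketch

import "k8s.io/client-go/tools/cache"

// replaceSketch captures the essence of the fix: for every key the informer
// currently knows about that is absent from the new list, emit a synthetic
// DELETED delta wrapping the last known object, so nothing leaks.
func replaceSketch(known, newList map[string]interface{},
	emitDelete func(key string, tombstone cache.DeletedFinalStateUnknown)) {
	for key, obj := range known {
		if _, stillThere := newList[key]; !stillThere {
			emitDelete(key, cache.DeletedFinalStateUnknown{Key: key, Obj: obj})
		}
	}
}
```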

Signed-off-by: Odin Ugedal <ougedal@palantir.com>
Signed-off-by: Odin Ugedal <odin@uged.al>

Kubernetes-commit: 25d77218acdac2f793071add9ea878b08c7d328b
2023-02-08 14:57:23 +00:00
xuzhenglun
7685b51912 Fix bug in reflector not detecting "Too large resource version" errors from API servers before 1.17.0
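Older servers report this error without the structured cause, so detection has to fall back to matching the status message. A sketch of the broadened check (the real helper lives in reflector.go and matches a few more cases):

```go
package sketch

import (
	"strings"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isTooLargeResourceVersionError checks the structured cause set by newer
// servers first, then falls back to matching the message text emitted by
// pre-1.17 servers.
func isTooLargeResourceVersionError(err error) bool {
	if apierrors.HasStatusCause(err, metav1.CauseTypeResourceVersionTooLarge) {
		return true // structured cause, newer servers
	}
	apierr, ok := err.(apierrors.APIStatus)
	if !ok {
		return false
	}
	return strings.Contains(apierr.Status().Message, "Too large resource version")
}
```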
Kubernetes-commit: 11e5e92dc65c1caaeb248a60d90ccc8e29eb9ff0
2023-01-16 11:36:28 +08:00
Clayton Coleman
08e22c4b64 cache: Reflector should have the same injected clock as its informer
While refactoring the backoff manager to simplify and unify the code
in wait, a race condition was encountered in
TestSharedInformerWatchDisruption. The new implementation failed
because the fake clock was not propagated to the backoff managers
when the reflector was used in a controller. After ensuring the
managers, reflector, controller, and informer shared the same
clock, the test was updated to avoid the race condition by
advancing the fake clock and adding real sleeps to wait for
asynchronous propagation across the various goroutines in the controller.

Due to the deep structure of informers it is difficult to inject
hooks to avoid having to perform sleeps. At a minimum, the FakeClock
interface should allow a caller to determine the number of waiting
timers (to avoid the first sleep).

Kubernetes-commit: 91b3a81fbd916713afe215f7d701950e13a02869
2023-01-14 14:17:33 -05:00
Daniel Smith
cb28a0e57e Fix N^2 startup for webhook configurations
Add a "lazy" type to track when an update is needed. It uses a nested
locking technique to avoid extra evaluation calls.
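A simplified sketch of such a lazy cell, using a single lock (the commit's type uses a nested locking scheme to avoid extra evaluation calls):

```go
package sketch

import "sync"

// lazy caches a computed value and marks it stale instead of recomputing
// eagerly; Get re-evaluates at most once per invalidation, no matter how
// many callers ask.
type lazy[T any] struct {
	mu    sync.Mutex
	valid bool
	value T
	eval  func() T
}

func (l *lazy[T]) Invalidate() {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.valid = false // the next Get recomputes
}

func (l *lazy[T]) Get() T {
	l.mu.Lock()
	defer l.mu.Unlock()
	if !l.valid {
		l.value = l.eval()
		l.valid = true
	}
	return l.value
}
```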

Kubernetes-commit: 5a1091d88d95bd1dd5c27f2c72cee4ecb4219dda
2023-01-09 23:29:25 +00:00
Daniel Smith
5d70a118df Enable propagation of HasSynced
* Add tracker types and tests
* Modify ResourceEventHandler interface's OnAdd member
* Add additional ResourceEventHandlerDetailedFuncs struct
* Fix SharedInformer to let users track HasSynced for their handlers
* Fix in-tree controllers which weren't computing HasSynced correctly
* Deprecate the cache.Pop function
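In practice this surfaces as a per-handler sync signal. A sketch against the APIs this series added (names as of the client-go v0.27 era; verify against your version):

```go
package sketch

import (
	"fmt"

	"k8s.io/client-go/tools/cache"
)

// addHandler registers a handler and returns its registration, whose
// HasSynced reports when *this handler* has processed the initial list.
// The detailed funcs variant also exposes isInInitialList on OnAdd.
func addHandler(informer cache.SharedInformer) (cache.ResourceEventHandlerRegistration, error) {
	return informer.AddEventHandler(cache.ResourceEventHandlerDetailedFuncs{
		AddFunc: func(obj interface{}, isInInitialList bool) {
			if isInInitialList {
				fmt.Println("seen during initial sync")
			}
		},
	})
}
```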

Kubernetes-commit: 8100efc7b3122ad119ee8fa4bbbedef3b90f2e0d
2022-11-18 00:12:50 +00:00
Andy Goldstein
fbb7f087d1 reflector: refactor setting typeDescription & expectedGVK
Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>

Kubernetes-commit: 784ec157e67c86bc3383b326bbfe8ee70737aa4d
2022-12-02 12:39:58 -05:00
Andy Goldstein
37897aff8d Reflector: support logging Unstructured type
Add an annotation that can be set on the exampleType passed to
NewReflector to indicate the expected type for the Reflector. This is
useful for types such as unstructured.Unstructured, which, when used with
a dynamic informer, do not have their TypeMeta filled in.
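A sketch of the idea: give the reflector's example object a GVK so logs name the real type instead of an anonymous *unstructured.Unstructured (the exact reflector wiring is omitted; dynamic informers normally do this for you):

```go
package sketch

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// exampleObject builds an Unstructured with its GVK filled in, suitable for
// passing to NewReflector as the exampleType.
func exampleObject() *unstructured.Unstructured {
	u := &unstructured.Unstructured{}
	u.SetGroupVersionKind(schema.GroupVersionKind{
		Group: "example.com", Version: "v1", Kind: "Widget", // hypothetical CRD
	})
	return u
}
```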

Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>

Kubernetes-commit: 474fc8c5234000bce666a6b02f7ffbb295ef135f
2022-08-17 15:49:26 -04:00
Wojciech Tyczyński
5e7ba1f8d7 Minor cleanup of thread safe store
Kubernetes-commit: 72194de96f0bb5c1c1e3247a129cf9e3f11972ef
2022-11-02 13:02:29 +01:00
Wojciech Tyczyński
b69a16cf38 Refactor store index into its structure
Kubernetes-commit: 7c94ce3076a96acab4c7e88489cd596f1aad40e0
2022-05-05 17:30:24 +02:00
astraw99
f549acf3ce Fix duplicate code block of ListAll function
Kubernetes-commit: d8db1e9ba3c110012def6faa5579ad3abb71a6a6
2022-03-24 14:47:20 +08:00
Nick Turner
5870c622c7 Remove log line from expiration cache
Kubernetes-commit: e7bfd5909f5540f884a8bb914a9230c9bf7c2803
2022-10-04 11:01:37 -07:00
Lurong
59765b8784 fix typo
Kubernetes-commit: 0b2ad4cc42e727ec93011411033cac1abfb637ea
2022-09-22 14:42:41 +08:00
Alexander Zielenski
a300ae0dfe return when test is done
Kubernetes-commit: cc0b9ffbd5a4edfe7ce01293c2bf963d4335d8d6
2022-08-30 14:38:21 -07:00
Alexander Zielenski
93e5e0e8a0 hold listener lock while waiting for goroutines to finish
Kubernetes-commit: 7ce19b75a8dbca12837ed9f4c5c2828c38b82a03
2022-08-29 12:34:35 -07:00
Alexander Zielenski
e11a988e1c simplify control flow
Kubernetes-commit: ee24648300f8575f503156f45b44034054c2e49d
2022-08-29 11:52:35 -07:00
Alexander Zielenski
ac7f6579ff fix spelling
Kubernetes-commit: 7685b393c2ee8f8548d62f9dd8485df764a0c8c3
2022-08-22 17:34:02 -07:00
Alexander Zielenski
0f4a6cf319 reset listenersStarted
For correctness; technically this shouldn't be an issue, since restarting a stopped processor is not supported.

Kubernetes-commit: 3a81341cfa6f7e2ca1b9bfc195c567dcdfaa4dea
2022-08-08 14:19:37 -07:00
Alexander Zielenski
449817f7b5 add multithreaded test to shared informer
Kubernetes-commit: 8af0a31a15079725e5910d538a6ace03df2e382a
2022-08-08 13:39:18 -07:00
Alexander Zielenski
de0b7671e3 remove duplicate test
became a duplicate while refactoring the original PR

Kubernetes-commit: df9a7afa63164035f1e94c3278d470b347b697b9
2022-08-08 11:46:05 -07:00
Alexander Zielenski
0565962dd0 address review comments

Kubernetes-commit: 5f372e1867305679d8f9d8a013c5500763e5c875
2022-08-08 11:44:37 -07:00
Alexander Zielenski
5a25eb0de8 switch listeners to use a map, adapt tests
Kubernetes-commit: 063ef090e7fb5823ca18a10a83e8847eedac9599
2022-06-16 10:54:03 -07:00
Uwe Krueger
90c6a46b32 activate remove/add tests for event handlers
Kubernetes-commit: e9cd17170b1646672796e713dee6c4fb0e6693bb
2021-10-06 12:40:46 +02:00
Uwe Krueger
de4dd3aaf3 tests for invalid registration removals
Kubernetes-commit: 152b6e11af4c8c50b768630bf7724e0bb3414e9c
2021-08-22 13:27:36 +02:00
Uwe Krueger
33eff64a05 apply desired changes for handler registration
Kubernetes-commit: 92f04baac98186673c684bba419087d6285807b1
2021-08-22 13:03:18 +02:00