Commit Graph

322 Commits

Author SHA1 Message Date
Matthieu MOREL
acc5917341 fix: enable empty and len rules from testifylint on pkg package
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>

Co-authored-by: Patrick Ohly <patrick.ohly@intel.com>

Kubernetes-commit: f014b754fb5925dfbca6e27a44d0c3968b157e14
2024-06-28 21:20:13 +02:00
Karl Isenberg
48d8fc7e2e Add details to watch interface method comments
The watch.Interface design is hard to change, because doing so would
break most client-go users that perform watches. So instead of changing
the interface to be more user-friendly, this change updates the method
comments to explain the different responsibilities of the consumer
(client user) and the producer (interface implementer).
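
A minimal consumer-side sketch of that split of responsibilities (the
namespace and client setup are illustrative): the consumer drains
ResultChan and calls Stop; the producer is the one that closes the channel.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

func watchPods(ctx context.Context, client kubernetes.Interface) error {
	w, err := client.CoreV1().Pods("default").Watch(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	defer w.Stop() // consumer responsibility: always release the watch

	for event := range w.ResultChan() { // producer closes the channel when the watch ends
		switch event.Type {
		case watch.Added, watch.Modified, watch.Deleted:
			fmt.Printf("observed %s\n", event.Type)
		case watch.Error:
			return fmt.Errorf("watch error: %v", event.Object)
		}
	}
	return nil
}
```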

Kubernetes-commit: 1f35231a1d4f7b8586a7ec589c799729eeb4f7c4
2024-06-12 13:06:22 -07:00
Karl Isenberg
f29a36dfab Refactor Reflector ListAndWatch
- Extract watchWithResync to simplify ListAndWatch
- Wrap watchHandler with two variants, one for WatchList and one for
  just Watch.
- Replace a bool pointer arg with a bool arg and bool return, to
  improve readability.
- Use errors.Is to satisfy the linter.
- Use %w to wrap the store.Replace error, to allow unwrapping (both
  sketched below).
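
A small sketch of the two error-handling changes, with illustrative
names rather than the reflector's actual identifiers:

```go
package main

import (
	"errors"
	"fmt"
)

// errReplaceFailed stands in for the store.Replace error.
var errReplaceFailed = errors.New("replace failed")

func syncWith() error {
	// %w wraps the underlying error so callers can still unwrap it.
	return fmt.Errorf("unable to sync list result: %w", errReplaceFailed)
}

func main() {
	err := syncWith()
	// errors.Is walks the wrapped chain, unlike a plain == comparison.
	fmt.Println(errors.Is(err, errReplaceFailed)) // true
}
```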

Kubernetes-commit: 65fc1bb463c85a4c85e619bf7acac9503e23a253
2024-06-12 13:14:55 -07:00
Lukasz Szaszkiewicz
5347b09f1d client-go/reflector: remove reflector_data_consistency_detector_test.go
Kubernetes-commit: 0c96a00217d6e2a4c9cb1cf6c0bc719d570678fc
2024-06-11 10:42:07 +02:00
Lukasz Szaszkiewicz
b681e77bec client-go/reflector: use consistencydetector.IsDataConsistencyDetectionForWatchListEnabled
Kubernetes-commit: f6c68908ba37dbb7af602b8f88b5f395025d2384
2024-06-10 23:01:04 +02:00
Wojciech Tyczyński
b444e6c32e Implement ResilientWatchCacheInitialization
Kubernetes-commit: a8ef6e9f0104a44023162bb8229fb677ec80beb1
2024-04-29 14:19:46 +02:00
Karl Isenberg
8c9cb8838f Improve Reflector unit tests
- Add tests to confirm that Stop is always called.
- Add TODOs to show where Stop is not currently being called
  (to fix in a future PR).

Kubernetes-commit: ab5aa4762fd5206d0dbd8412d7c6f3b76533a122
2024-06-03 12:15:38 -07:00
Lukasz Szaszkiewicz
538b7809aa move checkWatchListDataConsistencyIfRequested back to client-go/tools/cache
Kubernetes-commit: cb44f83b3d500bb2ed29f7634095bc64c9b21729
2024-05-27 11:22:28 +02:00
Lukasz Szaszkiewicz
e428fc295c move client-go/tools/cache/reflector_data_consistency_detector to client-go/util/consistencydetector
Kubernetes-commit: 272dfc9d7e61481dfcdc4d6a021385d9cd85ba5f
2024-05-27 11:10:43 +02:00
Lukasz Szaszkiewicz
00036b79c4 client-go/tools/cache/reflector_data_consistency_detector: refactor unit tests
Kubernetes-commit: 18837d60ae12ac70380ce5cf21cfd8e0d221263a
2024-05-27 11:10:00 +02:00
Lukasz Szaszkiewicz
6bdde7723e client-go/consistency-detector: change the signature of checkWatchListConsistencyIfRequested
The signature of the method was tightly connected to the reflector,
making it difficult to use for anything other than a reflector.

This simple refactor makes the method more generic, as illustrated below.
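
A hypothetical before/after sketch of that decoupling; the stub types,
names, and parameters here are illustrative, not the actual client-go
signatures:

```go
package main

import "context"

// Stub types standing in for the real ones.
type Reflector struct{}
type Store interface{ List() []interface{} }
type listFn func(ctx context.Context) ([]interface{}, error)

// Before: callable only with a whole reflector in hand.
func checkBefore(stopCh <-chan struct{}, r *Reflector) {}

// After: any caller that can supply an identity, a list function, and
// a store can request the consistency check.
func checkAfter(ctx context.Context, identity string, list listFn, store Store) {}

func main() {}
```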

Kubernetes-commit: 83c7542abc8c542c01ecb67376f134b2071c5304
2024-04-22 14:01:22 +02:00
Lukasz Szaszkiewicz
4a34196022 replace ENABLE_CLIENT_GO_WATCH_LIST_ALPHA with WatchListClient gate
Kubernetes-commit: 9248cccc27fdd52a9a99fd9ad6e47ada9ee8b568
2024-04-30 08:24:53 +02:00
Lukasz Szaszkiewicz
0280901a4d client-go/reflector: warns when the bookmark event for initial events hasn't been received
Kubernetes-commit: 93960f489069a744afda1be42f82349e25d7e4d7
2024-04-29 15:18:54 +02:00
Wojciech Tyczyński
2fe05741c1 Fix race in informer transformers
Kubernetes-commit: e9f74597a8e7104b26640614e952d4453654453b
2024-04-17 11:19:08 +02:00
Stephen Kitt
841e997e33 Improve the lister function documentation
In particular, document that ListAllByNamespace delegates to ListAll
if no namespace is specified.
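
A simplified sketch of the documented delegation, modeled on (but not
copied verbatim from) client-go/tools/cache/listers.go:

```go
package cache

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// listAllByNamespaceSketch: an empty namespace means "all namespaces",
// so the call simply delegates to ListAll over the whole store.
func listAllByNamespaceSketch(indexer Indexer, namespace string, selector labels.Selector, appendFn AppendFunc) error {
	if namespace == metav1.NamespaceAll {
		return ListAll(indexer, selector, appendFn)
	}
	// ...otherwise iterate only the items stored under that namespace...
	return nil
}
```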

Signed-off-by: Stephen Kitt <skitt@redhat.com>

Kubernetes-commit: 54e899317ef46e3b70827cacee244717022db0ad
2024-04-19 17:58:02 +02:00
Wojciech Tyczyński
b9e952f4d7 Allow for configuring MinWatchTimeout in Reflector and Informer.
Kubernetes-commit: 29e38c19b853b6cc3950541b1727395acf5eb4d3
2024-04-09 09:12:54 +02:00
Wojciech Tyczyński
7da319745b Refactor informer constructors
Kubernetes-commit: 4da185a601e1f657a873dfd7e51efcc8a3b94c37
2024-04-09 09:06:38 +02:00
Lan Liang
fa185a21db cleanup: delete rand.Seed(time.Now().UnixNano()) and use the global random number generator.
see https://tip.golang.org/doc/go1.20
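
A minimal example of the cleanup; since Go 1.20 the global generator is
seeded automatically:

```go
package main

import (
	"fmt"
	"math/rand"
)

func main() {
	// Before Go 1.20 this required rand.Seed(time.Now().UnixNano());
	// since Go 1.20 the global generator is seeded automatically, so
	// the Seed call is simply deleted.
	fmt.Println(rand.Intn(100))
}
```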

Signed-off-by: Lan Liang <gcslyp@gmail.com>

Kubernetes-commit: dc992adad385ab631e4a528ee6a342ea71e7a379
2024-03-18 08:10:12 +00:00
Lukasz Szaszkiewicz
45e17fede0 client-go/cache/reflector: use metav1.InitialEventsAnnotationKey
Kubernetes-commit: a953539fb57b2ee18337f323ef15425a4d6207ed
2024-03-11 12:56:40 +01:00
Nilekh Chaudhari
02f21344ac feat: implements svm controller
Signed-off-by: Nilekh Chaudhari <1626598+nilekhc@users.noreply.github.com>

Kubernetes-commit: 9161302e7fd3a5fb055b2f2572c6e1228240bb51
2024-01-04 19:34:05 +00:00
Mike Spreitzer
61be9f118e Add DeletionHandlingObjectToName
Signed-off-by: Mike Spreitzer <mspreitz@us.ibm.com>

Kubernetes-commit: d60a25b2db52d7e180b0962e23cc06e71f36fb29
2024-01-26 13:28:06 -05:00
Lukasz Szaszkiewicz
202c415847 client-go/reflector: make UseWatchList a pointer
Until #115478 (use streaming against the etcd storage)
is resolved, the cacher needs a way to disable the streaming.

Kubernetes-commit: 41e706600aea7468f486150d951d3b8948ce89d5
2024-01-19 13:48:29 +01:00
John Howard
785e19661f client-go: allow adding indexes after informer starts
Kubernetes-commit: d96a9858d396d7f418d24ea47bdc92ef8429f707
2023-03-31 15:57:18 -07:00
tao.yang
0447e1f9ce cleanup: omit comparison with bool constants
Signed-off-by: tao.yang <tao.yang@daocloud.io>

Kubernetes-commit: b35357b6c08f21ba0fd312536051394c2567ec79
2023-09-04 16:59:23 +08:00
Lukasz Szaszkiewicz
0f984dc7fc client-go/reflector: introduce a data consistency mechanism for the watch-list feature.
checkWatchListConsistencyIfRequested performs a data consistency check only when
the KUBE_WATCHLIST_INCONSISTENCY_DETECTOR environment variable was set at binary startup.

The consistency check is meant to be enforced only in the CI, not in production.
The check ensures that the data retrieved by the watch-list API call
is exactly the same as the data received by the standard list API call.

Note that this function will panic when data inconsistency is detected.
This is intentional because we want to catch it in the CI.
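
A hedged sketch of the described behavior, not the actual client-go
code; the function and parameters here are illustrative:

```go
package main

import (
	"fmt"
	"os"
	"reflect"
)

// checkConsistency compares the items returned by the watch-list call
// with those returned by a standard list, and panics on mismatch so a
// CI job fails loudly.
func checkConsistency(fromWatchList, fromStandardList []string) {
	// Only enforced when the variable was set at binary startup.
	if os.Getenv("KUBE_WATCHLIST_INCONSISTENCY_DETECTOR") == "" {
		return
	}
	if !reflect.DeepEqual(fromWatchList, fromStandardList) {
		panic(fmt.Sprintf("watch-list data %v differs from list data %v",
			fromWatchList, fromStandardList))
	}
}
```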

Kubernetes-commit: b31e7793d0d873a71c90caf8455556aa905cf88d
2023-10-06 14:26:47 +02:00
Lukasz Szaszkiewicz
1e0855a7ac reflector: fall back to the previous mode on any error
Originally we honored only apierrors.IsInvalid,
but decided to fall back on every error,
because it is better to make progress than to deadlock (see the sketch below).
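
A hedged sketch of the new behavior (illustrative function names, not
the reflector's actual code):

```go
package main

// listWithFallback: the fallback used to be gated on
// apierrors.IsInvalid; now any error from the watch-list path sends
// the reflector back to a plain list+watch, trading strictness for
// progress.
func listWithFallback(watchList, listWatch func() error) error {
	if err := watchList(); err != nil {
		// before: if !apierrors.IsInvalid(err) { return err }
		return listWatch() // after: always fall back rather than deadlock
	}
	return nil
}
```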

Kubernetes-commit: 4b3915017950a114124a88c5d308bd8bfb9ec48e
2023-10-04 08:17:10 +02:00
Lukasz Szaszkiewicz
2c9d749027 reflector: close an established watcher when the StopCh is closed
Kubernetes-commit: 26f113be2fc71c7a41a59145eee5d4da9d062b9e
2023-10-03 13:49:21 +02:00
Lukasz Szaszkiewicz
d0ea06d597 cache/reflector: check the value of the initial-events-end annotation
Kubernetes-commit: 04668c00432cbd552b08eb94f829643facbd2061
2023-09-19 12:59:23 +02:00
Dr. Stefan Schimanski
00f8b3aa35 client-go: log proper 'caches populated' message, with type and source and only once
Signed-off-by: Dr. Stefan Schimanski <stefan.schimanski@gmail.com>

Kubernetes-commit: a1809ffae377f3abbae12b137073ca4c473743cd
2023-08-07 12:56:32 +02:00
John Howard
702d7378b6 informer: fix race against Run and SetTransform/SetWatchErrorHandler
`SetWatchErrorHandler` claims it will fail if Run() has already started.
But if they are called concurrently, it will actually trigger a data
race.

With this PR:
```
62702 runs so far, 0 failures (100.00% pass rate). 59.152682ms avg, 189.068387ms max, 26.623785ms min
```

Without this PR:
```
5012 runs so far, 38 failures (99.25% pass rate). 58.675502ms avg, 186.018084ms max, 29.468104ms min
```
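
A hedged sketch of the guard pattern behind the fix; field and method
names are illustrative:

```go
package main

import (
	"errors"
	"sync"
)

// controller: the started flag and the handler are read and written
// only under the same mutex, so SetWatchErrorHandler racing with Run
// returns an error instead of tripping the race detector.
type controller struct {
	mu      sync.Mutex
	started bool
	handler func(error)
}

func (c *controller) SetWatchErrorHandler(h func(error)) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.started {
		return errors.New("informer has already started")
	}
	c.handler = h
	return nil
}

func (c *controller) Run(stopCh <-chan struct{}) {
	c.mu.Lock()
	c.started = true
	h := c.handler // snapshot under the lock
	c.mu.Unlock()
	_ = h
	// ... the run loop uses the snapshotted handler ...
}

func main() {}
```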

Kubernetes-commit: 35d2431b3a89c5bd693846952e9d27ce4e3a0754
2023-05-08 10:11:54 -07:00
scott
6c7d1bc996 Add WithAlloc interface and stub implementations with base benchmarks
Kubernetes-commit: b8a3bd673dc61b4d7a0ad0f54fa423e0160078cf
2023-05-27 17:57:35 -04:00
Marcel Zieba
bae10246dd Improve backoff policy in reflector.
Before, we used two separate backoff managers for the List and Watch
calls; now they share a single backoff manager (sketched below).
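
A hedged sketch of the shared manager; the call shape and values mirror
the reflector's defaults but should be treated as illustrative:

```go
package main

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/utils/clock"
)

// newSharedBackoff builds one manager that is consulted by both the
// List call and the Watch retry loop, so failures in either path back
// off together instead of independently.
func newSharedBackoff() wait.BackoffManager {
	return wait.NewExponentialBackoffManager(
		800*time.Millisecond, // initial backoff
		30*time.Second,       // cap
		2*time.Minute,        // reset window
		2.0,                  // growth factor
		1.0,                  // jitter
		clock.RealClock{},
	)
}
```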

Kubernetes-commit: 337728b02559dec8a613fdef174f732da9cae310
2023-05-19 14:28:31 +02:00
kkkkun
3195e36899 Should get ENABLE_CLIENT_GO_WATCH_LIST_ALPHA when creating a new reflector
Signed-off-by: kkkkun <scuzk373x@gmail.com>

Kubernetes-commit: 2eed9b4143a9009b3327003ae50dbbebb44d2841
2023-05-24 21:09:53 +08:00
Daniel Smith
aa75d3bd59 lavalamp is taking a long break
Kubernetes-commit: 1ffe3f467e8b8033312b7c68943d58125fd27663
2023-05-11 16:43:38 +00:00
Mike Spreitzer
8301a60862 Added conversions to/from NamespacedName
Also renamed file to something more on-point.

Kubernetes-commit: 8d92cfb13163f596feddfe1b5e734fd1ba4f6323
2023-03-22 16:55:07 -04:00
Mike Spreitzer
42799afb70 Add structured alternatives to strings in client-go/tools/cache
Kubernetes-commit: ec9515a828ec3907ffc66913947adf4e8ee7b9d6
2023-03-22 16:21:17 -04:00
Daniel Smith
c3b84f0438 Change where transformers are called.
Kubernetes-commit: e76dff38cf74c3c8ad9ed4d3bc6e3641d9b64565
2023-03-14 23:05:20 +00:00
Tom Wieczorek
9e8c663097 Properly align synctrack.SingleFileTracker struct
count is used with atomic operations, so it must be 64-bit aligned;
otherwise atomic operations will panic. Having it at the top of the
struct guarantees that, even on 32-bit arches.
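
A hedged sketch of the layout rule; field names approximate the real
struct:

```go
package synctrack

import "sync/atomic"

// Go guarantees 64-bit alignment only for the first word of an
// allocated struct on 32-bit architectures, so the atomically
// accessed counter must be the first field.
type SingleFileTracker struct {
	count int64 // keep first: atomic.*64 panics on unaligned 32-bit access

	UpstreamHasSynced func() bool // later fields don't affect count's alignment
}

func (t *SingleFileTracker) pending() int64 {
	return atomic.LoadInt64(&t.count)
}
```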

This fixes panics like that one observed in kube-apiserver:

    E0310 13:48:47.476124     676 runtime.go:77] Observed a panic: unaligned 64-bit atomic operation
    goroutine 141 [running]:
    k8s.io/apimachinery/pkg/util/runtime.logPanic({0x2482378, 0x2db2ff8})
    	vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x94
    k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x0})
    	vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x78
    panic({0x2482378, 0x2db2ff8})
    	/usr/local/go/src/runtime/panic.go:884 +0x218
    runtime/internal/atomic.panicUnaligned()
    	/usr/local/go/src/runtime/internal/atomic/unaligned.go:8 +0x24
    runtime/internal/atomic.Load64(0x685f794)
    	/usr/local/go/src/runtime/internal/atomic/atomic_arm.s:280 +0x14
    k8s.io/client-go/tools/cache/synctrack.(*SingleFileTracker).HasSynced(0x685f790)
    	vendor/k8s.io/client-go/tools/cache/synctrack/synctrack.go:115 +0x3c
    k8s.io/client-go/tools/cache.(*processorListener).HasSynced(0x6013e60)
    	vendor/k8s.io/client-go/tools/cache/shared_informer.go:907 +0x20
    k8s.io/client-go/tools/cache.WaitForCacheSync.func1()
    	vendor/k8s.io/client-go/tools/cache/shared_informer.go:332 +0x50
    k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2dcf274, 0x607c600})
    	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1c
    k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x2dcf274, 0x607c600}, 0x6382050)
    	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:262 +0x64
    k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x2dcf274, 0x607c600}, 0x64a6060, 0x6382050)
    	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:649 +0x11c
    k8s.io/apimachinery/pkg/util/wait.poll({0x2dcf274, 0x607c600}, 0x1, 0x64a6060, 0x6382050)
    	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:600 +0xc4
    k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x2dcf274, 0x607c600}, 0x5f5e100, 0x6382050)
    	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:551 +0x60
    k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x5f5e100, 0x6298020, 0x607c600)
    	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:542 +0x48
    k8s.io/client-go/tools/cache.WaitForCacheSync(0x607c600, {0x6298000, 0x3, 0x3})
    	vendor/k8s.io/client-go/tools/cache/shared_informer.go:329 +0x80
    k8s.io/client-go/tools/cache.WaitForNamedCacheSync({0x283c5e1, 0xf}, 0x607c600, {0x6298000, 0x3, 0x3})
    	vendor/k8s.io/client-go/tools/cache/shared_informer.go:316 +0xe8
    created by k8s.io/kubernetes/plugin/pkg/auth/authorizer/node.AddGraphEventHandlers
    	plugin/pkg/auth/authorizer/node/graph_populator.go:65 +0x5b0
    panic: unaligned 64-bit atomic operation [recovered]
    	panic: unaligned 64-bit atomic operation

    goroutine 141 [running]:
    k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x0})
    	vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:56 +0xf4
    panic({0x2482378, 0x2db2ff8})
    	/usr/local/go/src/runtime/panic.go:884 +0x218
    runtime/internal/atomic.panicUnaligned()
    	/usr/local/go/src/runtime/internal/atomic/unaligned.go:8 +0x24
    runtime/internal/atomic.Load64(0x685f794)
    	/usr/local/go/src/runtime/internal/atomic/atomic_arm.s:280 +0x14
    k8s.io/client-go/tools/cache/synctrack.(*SingleFileTracker).HasSynced(0x685f790)
    	vendor/k8s.io/client-go/tools/cache/synctrack/synctrack.go:115 +0x3c
    k8s.io/client-go/tools/cache.(*processorListener).HasSynced(0x6013e60)
    	vendor/k8s.io/client-go/tools/cache/shared_informer.go:907 +0x20
    k8s.io/client-go/tools/cache.WaitForCacheSync.func1()
    	vendor/k8s.io/client-go/tools/cache/shared_informer.go:332 +0x50
    k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2dcf274, 0x607c600})
    	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1c
    k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x2dcf274, 0x607c600}, 0x6382050)
    	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:262 +0x64
    k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x2dcf274, 0x607c600}, 0x64a6060, 0x6382050)
    	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:649 +0x11c
    k8s.io/apimachinery/pkg/util/wait.poll({0x2dcf274, 0x607c600}, 0x1, 0x64a6060, 0x6382050)
    	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:600 +0xc4
    k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x2dcf274, 0x607c600}, 0x5f5e100, 0x6382050)
    	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:551 +0x60
    k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x5f5e100, 0x6298020, 0x607c600)
    	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:542 +0x48
    k8s.io/client-go/tools/cache.WaitForCacheSync(0x607c600, {0x6298000, 0x3, 0x3})
    	vendor/k8s.io/client-go/tools/cache/shared_informer.go:329 +0x80
    k8s.io/client-go/tools/cache.WaitForNamedCacheSync({0x283c5e1, 0xf}, 0x607c600, {0x6298000, 0x3, 0x3})
    	vendor/k8s.io/client-go/tools/cache/shared_informer.go:316 +0xe8
    created by k8s.io/kubernetes/plugin/pkg/auth/authorizer/node.AddGraphEventHandlers
    	plugin/pkg/auth/authorizer/node/graph_populator.go:65 +0x5b0

Kubernetes-commit: ffcf653e0666366e6241c99d9418e830840afa0f
2023-03-13 08:37:13 +01:00
Patrick Ohly
29ad2bc1d4 client-go: shut down watch reflector as soon as stop channel closes
Without this change, sometimes leaked goroutines were reported for
test/integration/scheduler_perf. The one that caused the cleanup to get delayed
was this one:

    goleak.go:50: found unexpected goroutines:
        [Goroutine 2704 in state chan receive, 2 minutes, with k8s.io/client-go/tools/cache.(*Reflector).watch on top of the stack:
        goroutine 2704 [chan receive, 2 minutes]:
        k8s.io/client-go/tools/cache.(*Reflector).watch(0xc00453f590, {0x0, 0x0}, 0x1f?, 0xc00a128080?)
                /nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:388 +0x5b3
        k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch(0xc00453f590, 0xc006e94900)
                /nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:324 +0x3bd
        k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
                /nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:279 +0x45
        k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc007aafee0)
                /nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:157 +0x49
        k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc003e18150?, {0x75e37c0, 0xc00389c280}, 0x1, 0xc006e94900)
                /nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:158 +0xcf
        k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00453f590, 0xc006e94900)
                /nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:278 +0x257
        k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
                /nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:58 +0x3f
        k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
                /nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:75 +0x74
        created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
                /nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0xe5

watch() was stuck in an exponential backoff timeout. Logging confirmed that:

        I0309 21:14:21.756149 1572727 reflector.go:387] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://127.0.0.1:38269/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=7m47s&timeoutSeconds=467&watch=true": dial tcp 127.0.0.1:38269: connect: connection refused - backing off
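
A hedged sketch of the stop-aware wait that replaces sleeping through
the full backoff interval (not the reflector's actual code):

```go
package main

import "time"

// waitBackoffOrStop also selects on the stop channel, so the goroutine
// exits as soon as the channel closes instead of finishing the backoff.
func waitBackoffOrStop(stopCh <-chan struct{}, backoff time.Duration) bool {
	t := time.NewTimer(backoff)
	defer t.Stop()
	select {
	case <-stopCh:
		return false // stop requested: abandon the retry immediately
	case <-t.C:
		return true // backoff elapsed: retry the watch
	}
}
```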

Kubernetes-commit: b4751a52d53d4acc6a5ce3e796938c9a12f81fcb
2023-03-09 21:19:17 +01:00
Lukasz Szaszkiewicz
262b98af03 cache/controller: Add ENABLE_CLIENT_GO_WATCH_LIST_ALPHA
Kubernetes-commit: 966b26d55c22f7fbf20841a3a993de4f984d88db
2023-03-07 12:34:11 +01:00
Lukasz Szaszkiewicz
ac0a428e7a reflector watchlist tests
Kubernetes-commit: 194ffd779e7699c3f756f12e604b479b73f9f66c
2023-03-06 09:24:36 +01:00
Lukasz Szaszkiewicz
b33e818785 reflector: use watchlist
Kubernetes-commit: 51ec7305aa9fc2541c46c4836f5e150c818341f9
2023-03-09 14:11:59 +01:00
Lukasz Szaszkiewicz
d1c260eddc reflector: introduce watchList
Kubernetes-commit: 2fed6b8a19dde62fb6824aa9d008baedf51c2466
2023-03-09 14:08:58 +01:00
Lukasz Szaszkiewicz
ac7598edb4 reflector: allow watch method to accept a watcher
Kubernetes-commit: f6161a51e93bfa70585f1797f447f13ec1fde68b
2023-03-08 12:52:32 +01:00
Lukasz Szaszkiewicz
f694a7978b reflector: extract watch and startResyncAsync methods
Kubernetes-commit: 34fe27355b8b5acc4d29d053ed4361b4f72e147b
2023-03-02 16:26:41 +01:00
Odin Ugedal
b2a4f028d6 client-go/cache: update Replace comment to be more clear
Since the behavior has now changed, and the old behavior leaked objects,
this adds a new comment describing how Replace works.

Signed-off-by: Odin Ugedal <ougedal@palantir.com>
Signed-off-by: Odin Ugedal <odin@uged.al>

Kubernetes-commit: 27f4bcae5c52a3bb88141f940ec23d907a15cde5
2023-02-13 11:23:50 +00:00
Odin Ugedal
a01b5a2a96 client-go/cache: rewrite Replace to check queue first
This is useful both to reduce the code complexity and to ensure clients
get the "newest" version of an object known when it is deleted. This is
all best-effort, but for clients it makes more sense to give them the
newest object they observed rather than an old one.

This is especially useful when an object is recreated, e.g.:

Object A with key K is in the KnownObjects store;
- DELETE delta for A is queued with key K
- CREATE delta for B is queued with key K
- Replace is called without any object with key K in it.

In this situation it is better to create a DELETE delta carrying a
DeletedFinalStateUnknown with B (with this patch) than to give the
client a DeletedFinalStateUnknown with A (without this patch), as
sketched below.
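
A hedged sketch of the resulting delta; the helper name and variables
are illustrative, while cache.Delta and cache.DeletedFinalStateUnknown
are the real client-go types:

```go
package main

import "k8s.io/client-go/tools/cache"

// deltaForRecreatedKey: on Replace, the tombstone carries B, the
// newest object queued under key K, rather than the stale A from the
// KnownObjects store.
func deltaForRecreatedKey(key string, newestQueued interface{}) cache.Delta {
	return cache.Delta{
		Type: cache.Deleted,
		Object: cache.DeletedFinalStateUnknown{
			Key: key,          // key K shared by A and B
			Obj: newestQueued, // B: the newest object the client observed
		},
	}
}
```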

Signed-off-by: Odin Ugedal <ougedal@palantir.com>
Signed-off-by: Odin Ugedal <odin@uged.al>

Kubernetes-commit: 7bcc3e00fc28b2548886d04639a2e352ab37fb55
2023-02-13 11:12:37 +00:00
Odin Ugedal
b250bf51ae client-go/cache: merge ReplaceMakesDeletionsForObjectsInQueue tests
Signed-off-by: Odin Ugedal <ougedal@palantir.com>
Signed-off-by: Odin Ugedal <odin@uged.al>

Kubernetes-commit: cd3e98b65c1339a8adc157175630de099a057d3f
2023-02-10 14:30:10 +00:00
Odin Ugedal
8279635aa4 client-go/cache: fix missing delete event on replace without knownObjects
This fixes an issue where a relist could result in a DELETED delta
with an object wrapped in a DeletedFinalStateUnknown object; on the
next relist, it would then wrap that object inside another
DeletedFinalStateUnknown, leaving the user with a "double" layer of
DeletedFinalStateUnknown wrappers.
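
A hedged sketch of the guard, not the DeltaFIFO code itself:

```go
package main

import "k8s.io/client-go/tools/cache"

// unwrapTombstone: before wrapping an object for a synthetic delete,
// unwrap any existing tombstone so relists never nest one
// DeletedFinalStateUnknown inside another.
func unwrapTombstone(obj interface{}) interface{} {
	if tombstone, ok := obj.(cache.DeletedFinalStateUnknown); ok {
		return tombstone.Obj // reuse the inner object; don't double-wrap
	}
	return obj
}
```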

Signed-off-by: Odin Ugedal <ougedal@palantir.com>
Signed-off-by: Odin Ugedal <odin@uged.al>

Kubernetes-commit: 0bf0546d9f75d92c801e81c9f7adf040bba64102
2023-02-10 14:16:26 +00:00
Odin Ugedal
2ded6b6eb8 client-go/cache: fix missing delete event on replace
This fixes a race condition when a "short-lived" object is created and
the create event is still present on the queue when a relist replaces
the state. Previously that would lead to the object being leaked.

The way this could happen is roughly:

1. A new object O is added; the agent gets a CREATED event for it.
2. The watch is terminated, and the agent runs a new list, L.
3. The CREATE event for O is still on the queue, waiting to be processed.
4. The informer replaces the old data in the store with L, and O is not in L.
   - Since O is neither in the store nor in the list L, no DELETED event
     is queued.
5. The CREATE event for O is processed.
6. O is <leaked>: it is present in the cache but not in k8s.

With this patch, in step 4 above a DELETED event is created, ensuring
that the object will be removed (see the sketch below).
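
A hedged, simplified sketch of the fix in step 4; the names are
illustrative, not DeltaFIFO's internals:

```go
package main

// syntheticDeletes: any key that is still queued but absent from the
// new list gets a synthetic DELETED delta, so a pending CREATE can no
// longer leak an object that is already gone from the server.
func syntheticDeletes(queued, newList map[string]interface{}, emitDeleted func(key string, lastSeen interface{})) {
	for key, lastSeen := range queued {
		if _, stillThere := newList[key]; !stillThere {
			emitDeleted(key, lastSeen) // wrapped in DeletedFinalStateUnknown
		}
	}
}
```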

Signed-off-by: Odin Ugedal <ougedal@palantir.com>
Signed-off-by: Odin Ugedal <odin@uged.al>

Kubernetes-commit: 25d77218acdac2f793071add9ea878b08c7d328b
2023-02-08 14:57:23 +00:00