After a CRD or an APIService was deleted, the corresponding group was
never unregistered. This left a stale entry in the root path and could
lead to a memory leak, as the groupDiscoveryHandler was never released
and handledGroups was never cleaned up.
This commit implements the cleanup: it tracks each group's usage and
unregisters a group once no versions remain for it.
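A minimal reference-counting sketch of the idea (the names below are illustrative, not the actual aggregator fields):

```go
// Sketch only: track how many versions each group still serves and drop the
// group's discovery registration when the last version is removed.
package sketch

import "sync"

type groupRegistry struct {
	mu            sync.Mutex
	groupVersions map[string]map[string]struct{} // group -> set of versions
	handledGroups map[string]struct{}            // groups with a registered discovery handler
}

func newGroupRegistry() *groupRegistry {
	return &groupRegistry{
		groupVersions: map[string]map[string]struct{}{},
		handledGroups: map[string]struct{}{},
	}
}

func (r *groupRegistry) addVersion(group, version string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.groupVersions[group] == nil {
		r.groupVersions[group] = map[string]struct{}{}
		r.handledGroups[group] = struct{}{} // first version: register the group
	}
	r.groupVersions[group][version] = struct{}{}
}

func (r *groupRegistry) removeVersion(group, version string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	delete(r.groupVersions[group], version)
	if len(r.groupVersions[group]) == 0 {
		delete(r.groupVersions, group)
		delete(r.handledGroups, group) // last version gone: unregister the group
	}
}
```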
Signed-off-by: Quan Tian <qtian@vmware.com>
* Job: Handle error returned from AddEventHandler function (see the sketch after this list)
Signed-off-by: Yuki Iwai <yuki.iwai.tz@gmail.com>
* Use an error message similar to CronJob's
Signed-off-by: Yuki Iwai <yuki.iwai.tz@gmail.com>
* Clean up error messages
Signed-off-by: Yuki Iwai <yuki.iwai.tz@gmail.com>
* Put the testing.T in second position in the args for the newControllerFromClient function
Signed-off-by: Yuki Iwai <yuki.iwai.tz@gmail.com>
* Put the testing.T in second position in the args for the newControllerFromClientWithClock function
Signed-off-by: Yuki Iwai <yuki.iwai.tz@gmail.com>
* Call t.Helper()
Signed-off-by: Yuki Iwai <yuki.iwai.tz@gmail.com>
* Put the testing.TB in second position in the args for the createJobControllerWithSharedInformers function and call tb.Helper() there
Signed-off-by: Yuki Iwai <yuki.iwai.tz@gmail.com>
* Put the testing.TB in second position in the args for the startJobControllerAndWaitForCaches function and call tb.Helper() there
Signed-off-by: Yuki Iwai <yuki.iwai.tz@gmail.com>
* Adapt TestFinalizerCleanup to the event handler error
Signed-off-by: Yuki Iwai <yuki.iwai.tz@gmail.com>
---------
Signed-off-by: Yuki Iwai <yuki.iwai.tz@gmail.com>
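A minimal sketch of what the first bullet amounts to, assuming a client-go version where AddEventHandler returns a registration and an error (simplified; not the actual Job controller code, and the error wording is illustrative):

```go
package sketch

import (
	"fmt"

	"k8s.io/client-go/tools/cache"
)

type Controller struct {
	// fields omitted
}

// NewController wires the event handlers and surfaces the AddEventHandler
// error to the caller instead of ignoring it.
func NewController(jobInformer cache.SharedIndexInformer) (*Controller, error) {
	c := &Controller{}
	if _, err := jobInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { /* enqueue */ },
		UpdateFunc: func(oldObj, newObj interface{}) { /* enqueue */ },
		DeleteFunc: func(obj interface{}) { /* enqueue */ },
	}); err != nil {
		return nil, fmt.Errorf("adding Job event handler: %w", err)
	}
	return c, nil
}
```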
TestSampleAPIServer tried to validate APIService deletion, but it deleted
and listed APIServices with a selector that matched none of them, so it
essentially validated nothing.
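A hedged sketch of the kind of check the test should perform, deleting and re-listing APIServices with a selector that actually matches the objects under test (the client wiring and selector value are illustrative, not the test's real ones):

```go
package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	aggregatorclient "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
)

// deleteAndVerify deletes all APIServices matching selector and then lists
// with the same selector to confirm the deletion actually happened.
func deleteAndVerify(ctx context.Context, client aggregatorclient.Interface, selector string) error {
	if err := client.ApiregistrationV1().APIServices().DeleteCollection(
		ctx, metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: selector}); err != nil {
		return err
	}
	remaining, err := client.ApiregistrationV1().APIServices().List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return err
	}
	if len(remaining.Items) != 0 {
		return fmt.Errorf("expected APIServices matching %q to be deleted, but %d remain", selector, len(remaining.Items))
	}
	return nil
}
```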
Signed-off-by: Quan Tian <qtian@vmware.com>
Some test cases can make nodes not ready and use DeferCleanup to bring
the nodes back online. Checking whether all nodes are ready would fail
in such cases because AfterEach runs before DeferCleanup.
Scheduling the node readiness check via DeferCleanup should solve this
issue, as the nodes would be brought back to a `Ready` state before the
check runs.
Fixes https://github.com/kubernetes/kubernetes/issues/120941
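A sketch of registering the readiness check via DeferCleanup in a Ginkgo suite (the e2enode.AllNodesReady helper lives in test/e2e/framework/node, but its exact signature and the surrounding wiring here are illustrative):

```go
package sketch

import (
	"context"
	"time"

	"github.com/onsi/ginkgo/v2"

	"k8s.io/kubernetes/test/e2e/framework"
	e2enode "k8s.io/kubernetes/test/e2e/framework/node"
)

var _ = ginkgo.Describe("node readiness after cleanup", func() {
	f := framework.NewDefaultFramework("node-readiness")

	ginkgo.BeforeEach(func() {
		// DeferCleanup callbacks run in reverse registration order, so a
		// test's own DeferCleanup (which brings nodes back online) runs
		// before this readiness check does.
		ginkgo.DeferCleanup(func(ctx context.Context) {
			framework.ExpectNoError(e2enode.AllNodesReady(ctx, f.ClientSet, 3*time.Minute))
		})
	})
})
```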
The GetNewerThan() call doesn't block until the pod status cache is updated and instead returns an empty pod status.
Hence, whenever the `SyncLoop ADD/UPDATE/RECONCILE` functions are called multiple times within a very short interval,
the kubelet issues multiple `CreateContainer` CRI calls, which results in duplicate containers being created within a given pod.
The initially created container stays `Running` while the later container keeps `Exiting`, leaving the pod in a `CrashLoopBackOff` state forever.
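An illustrative sketch of the intended blocking semantics of GetNewerThan (this mirrors the idea behind the kubelet's pod status cache, not its actual implementation):

```go
package sketch

import (
	"sync"
	"time"
)

type podStatus struct {
	modified time.Time
	data     string
}

type statusCache struct {
	mu   sync.Mutex
	cond *sync.Cond
	s    podStatus
}

func newStatusCache() *statusCache {
	c := &statusCache{}
	c.cond = sync.NewCond(&c.mu)
	return c
}

// Set records a new status and wakes any goroutine blocked in GetNewerThan.
func (c *statusCache) Set(data string, at time.Time) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.s = podStatus{modified: at, data: data}
	c.cond.Broadcast()
}

// GetNewerThan blocks until the cached status is newer than minTime, so a
// second sync of the same pod cannot observe a stale or empty status and
// start a duplicate container.
func (c *statusCache) GetNewerThan(minTime time.Time) podStatus {
	c.mu.Lock()
	defer c.mu.Unlock()
	for !c.s.modified.After(minTime) {
		c.cond.Wait()
	}
	return c.s
}
```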
Signed-off-by: Sai Ramesh Vanka <svanka@redhat.com>