Bump k8s.io/kube-openapi to pick up kubernetes/kube-openapi#579, which
moved the last ginkgo/gomega tests to stdlib testing and ran go mod
tidy, removing ginkgo/gomega from kube-openapi's go.mod.
This drops ginkgo/gomega as indirect deps from apimachinery. It also
prunes Masterminds/semver, google/pprof, and golang.org/x/tools from
client-go and other staging modules where they were only needed
through kube-openapi's ginkgo/gomega chain.
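To confirm that kind of pruning, go mod why can show that a module is no
longer needed in a module's graph (module path here is just an example):

    $ go mod why -m github.com/onsi/ginkgo/v2
    # github.com/onsi/ginkgo/v2
    (main module does not need module github.com/onsi/ginkgo/v2)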
Contributes to kubernetes/kubernetes#127888
Kubernetes-commit: 56cd74d879f1ba11aadcff95326f17a1cc2c82ef
KEP-5732: Add SchedulingConstraints to PodGroup API and use them in TopologyPlacement plugin
Kubernetes-commit: 299ab0d68a9d70b3c39d63210de47ac01d18e74b
The "Failed to update lease optimistically, falling back to slow path"
message was logged at Error level, but this is expected behavior during
normal leader election when the optimistic update encounters a conflict.
The system gracefully falls back to the slow path (Get + Update), so
this is not a real error. Downgrade to V(2) Info to reduce log noise.
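A minimal sketch of the resulting pattern, assuming klog verbosity
levels (the function and its parameters are illustrative, not the exact
client-go code):

    package main

    import (
        "context"
        "fmt"

        "k8s.io/klog/v2"
    )

    // updateLease is a hypothetical stand-in for the lease update path.
    func updateLease(ctx context.Context, optimistic, slow func(context.Context) error) error {
        if err := optimistic(ctx); err != nil {
            // A conflict here is normal leader-election behavior, so log
            // at verbosity 2 instead of Error.
            klog.V(2).Infof("Failed to update lease optimistically: %v, falling back to slow path", err)
            return slow(ctx)
        }
        return nil
    }

    func main() {
        conflict := func(context.Context) error { return fmt.Errorf("conflict") }
        getAndUpdate := func(context.Context) error { return nil }
        _ = updateLease(context.Background(), conflict, getAndUpdate)
    }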
Kubernetes-commit: 04977a0ea4592bfaa70d5095a4cfe99dd4b847e1
Add plugin to generate placements based on scheduling constraints
Co-authored-by: Antoni Zawodny <zawodny@google.com>
Kubernetes-commit: d9da8c7c4a25cee553720737fdec07006e063da1
Make the CRI streaming option a hard cut: add new staging repositories `streaming` and `cri-streaming`
Kubernetes-commit: 2bd6c7fe3cb8663804dc6e7672ff01aeebc97274
* Drop WorkloadRef field and introduce SchedulingGroup field in Pod API
* Introduce v1alpha2 Workload and PodGroup APIs, drop v1alpha1 Workload API (see the sketch after this list)
* Run hack/update-codegen.sh
* Adjust kube-scheduler code and integration tests to v1alpha2 API
* Drop v1alpha1 scheduling API group and run make update
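A rough, hypothetical sketch of the shape these bullets imply; only the
PodGroup name and SchedulingConstraints (KEP-5732) come from this
change, every other identifier below is an assumption:

    package v1alpha2

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // PodGroup sketches the v1alpha2 object; the real fields are defined
    // by KEP-5732, not by this illustration.
    type PodGroup struct {
        metav1.TypeMeta   `json:",inline"`
        metav1.ObjectMeta `json:"metadata,omitempty"`

        Spec PodGroupSpec `json:"spec,omitempty"`
    }

    type PodGroupSpec struct {
        // SchedulingConstraints carries the constraints consumed by the
        // TopologyPlacement plugin.
        SchedulingConstraints []SchedulingConstraint `json:"schedulingConstraints,omitempty"`
    }

    // SchedulingConstraint is a placeholder; TopologyKey is assumed.
    type SchedulingConstraint struct {
        TopologyKey string `json:"topologyKey,omitempty"`
    }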
---------
Co-authored-by: yongruilin <yongrlin@outlook.com>
Kubernetes-commit: 3f094dc228318b89f1fef313543b960e35ca6e3e
klog hasn't been updated in Kubernetes for a few releases. Several
enhancements have accumulated that are worth having.
Kubernetes-commit: 56e0565c113107bdea398b075aba5bdef43489ed
Update google.golang.org/protobuf to v1.36.12-0.20260120151049-f2248ac996af to prevent a file size explosion with Go 1.26
Kubernetes-commit: 77c013637cb40e1b5d2b26664dc7b297f1ff2693
When watch.Broadcaster.Shutdown() is called it drains all queued events
then calls closeAll(), which closes every watcher's result channel.
eventBroadcasterImpl.Shutdown() calls Broadcaster.Shutdown() first,
then calls the cancellation context's cancel() function. Between those
two steps there is a window in which the result channel is closed while
the cancellation context is still live.
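A compressed model of that window (the channel and context here are
stand-ins, not the real client-go internals):

    package main

    import "context"

    func main() {
        // Mirrors cancelationCtx in StartEventWatcher.
        ctx, cancel := context.WithCancel(context.Background())
        result := make(chan struct{}) // stands in for watcher.ResultChan()

        // Step 1: Broadcaster.Shutdown() drains, then closeAll() closes
        // every watcher's result channel.
        close(result)
        // <-- window: result is closed, but ctx.Done() is not yet closed.

        // Step 2: eventBroadcasterImpl.Shutdown() cancels the context.
        cancel()
        <-ctx.Done()
    }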
Without the two-value channel receive, the goroutine in StartEventWatcher
would spin on the already-closed channel: each select iteration
immediately receives the zero-value watch.Event, the type assertion
fails (nil interface, ok == false), and the loop keeps burning CPU
until select eventually picks the cancelationCtx.Done() case.
Guard against this by reading the ok boolean from the channel receive:
    case watchEvent, ok := <-watcher.ResultChan():
        if !ok {
            return
        }
This is the correct and idiomatic Go pattern for a channel that may be
closed by its producer. Note that when this return path is taken the
broadcaster has already delivered every queued event (Broadcaster.Shutdown
blocks until the distribute loop exits before closeAll runs), so no
events are silently dropped.
Add a regression test (TestStartEventWatcherExitsOnDirectShutdown) that
creates a broadcaster without an external context so Shutdown() is
fully synchronous, starts a watcher, and verifies the goroutine exits
cleanly via goleak.VerifyNone.
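A minimal sketch of the described test, assuming the tools/record API
and go.uber.org/goleak (the exact test body in the commit may differ):

    package record_test

    import (
        "testing"

        "go.uber.org/goleak"
        v1 "k8s.io/api/core/v1"
        "k8s.io/client-go/tools/record"
    )

    func TestStartEventWatcherExitsOnDirectShutdown(t *testing.T) {
        // Fail if any goroutine started below is still running at exit.
        defer goleak.VerifyNone(t)

        // No external context, so Shutdown() is fully synchronous.
        broadcaster := record.NewBroadcaster()
        broadcaster.StartEventWatcher(func(_ *v1.Event) {})
        broadcaster.Shutdown()
    }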
Signed-off-by: Rajneesh180 <rajneeshrehsaan48@gmail.com>
Kubernetes-commit: 95c15b54069922b0a66c198a064577ea0a160694
[Declarative Validation] Bring `k8s:maxLength` tag in line with OpenAPI `maxLength` validation semantics
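For illustration, a hedged sketch of the tag on a field (the struct is
hypothetical and the `=5` argument form is assumed); note that OpenAPI's
`maxLength`, per JSON Schema, measures string length in Unicode code
points rather than bytes:

    package example

    // Example is a hypothetical type showing where the tag sits.
    type Example struct {
        // +k8s:maxLength=5
        // Under OpenAPI semantics the limit counts Unicode code points,
        // so a 5-character multibyte string passes even though it is
        // more than 5 bytes of UTF-8.
        Name string `json:"name,omitempty"`
    }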
Kubernetes-commit: e08e598df07bc929679ef046418992a8205da18f