Remove PortRange for internal configuration of kube-proxy
adhering to the v1alpha2 version specifications as detailed in
https://kep.k8s.io/784.
Signed-off-by: Daman Arora <aroradaman@gmail.com>
Refactor ClusterCIDR for internal configuration of kube-proxy
adhering to the v1alpha2 version specifications as detailed in
https://kep.k8s.io/784.
Signed-off-by: Daman Arora <aroradaman@gmail.com>
Consolidate SyncPeriod and MinSyncPeriod for internal configuration
of kube-proxy adhering to the v1alpha2 version specifications as
detailed in https://kep.k8s.io/784.
Signed-off-by: Daman Arora <aroradaman@gmail.com>
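Roughly, the consolidation means the internal type carries a single pair of sync fields instead of one copy per backend. A Go sketch for illustration only; field names and placement are assumptions, not the actual v1alpha2 types in pkg/proxy/apis/config:

    package config

    import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

    // KubeProxyConfiguration (excerpt): SyncPeriod and MinSyncPeriod live at
    // the top level instead of being duplicated in the iptables/ipvs/nftables
    // sections. Illustrative only, not the real v1alpha2 type.
    type KubeProxyConfiguration struct {
        // SyncPeriod is how often the proxier does a full resync of its rules.
        SyncPeriod metav1.Duration
        // MinSyncPeriod rate-limits resyncs triggered by Service and
        // EndpointSlice changes.
        MinSyncPeriod metav1.Duration
    }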
Split running the Wardle aggregated API into a preparation and a running
phase. This allows reusing the prepared options and makes it possible to
introduce additional hooks into the server authorization flow.
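A minimal Go sketch of the split; the names (WardleServerOptions, PrepareWardleServer, preparedWardleServer) are made up for illustration, not the actual test code:

    package sample

    import "context"

    // WardleServerOptions collects flags and authn/authz settings.
    type WardleServerOptions struct{ /* ... */ }

    // preparedWardleServer holds fully completed options, ready to run.
    type preparedWardleServer struct {
        opts *WardleServerOptions
    }

    // PrepareWardleServer completes and validates the options once, so callers
    // can reuse them and hook into the authorization flow before starting.
    func PrepareWardleServer(opts *WardleServerOptions) (*preparedWardleServer, error) {
        // ... complete defaults, validate, wire up additional authz hooks ...
        return &preparedWardleServer{opts: opts}, nil
    }

    // Run starts the prepared aggregated API server and blocks until the
    // context is canceled.
    func (s *preparedWardleServer) Run(ctx context.Context) error {
        // ... start the server using s.opts ...
        <-ctx.Done()
        return nil
    }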
Some of the E2E node tests were flaky. Their timeout apparently was chosen
under the assumption that kubelet would retry immediately after a failed gRPC
call, with a factor of 2 as a safety margin. But according to commit
0449cef8fd, kubelet has a different, higher retry period of 90 seconds, which
was exactly the test timeout. The test timeout has to be higher than that.
As the tests no longer use the gRPC call timeout, it can be made private.
While at it, the name and documentation get updated.
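The relationship the fix relies on, as an illustrative Go sketch (the actual values and names live in the kubelet plugin code and the e2e_node tests):

    package example

    import "time"

    const (
        // kubeletRetryPeriod mirrors the ~90 second retry period kubelet uses
        // after a failed plugin gRPC call (see commit 0449cef8fd).
        kubeletRetryPeriod = 90 * time.Second

        // pluginRegistrationTimeout must be larger than the retry period,
        // otherwise a single failed call already exhausts the budget.
        pluginRegistrationTimeout = 2 * kubeletRetryPeriod
    )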
This fixes the message (node name and "cluster-scoped" were switched) and
simplifies the VAP:
- a single matchCondition short-circuits completely unless the request comes
  from a user we care about
- variables extract the userNodeName and objectNodeName once (using optionals
  to gracefully turn missing claims and fields into empty strings)
- only tiny, concise validations remain
Co-authored-by: Jordan Liggitt <liggitt@google.com>
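For illustration, a hedged Go sketch of the resulting policy shape using the admissionregistration/v1 types; the CEL expressions, names, and message are assumptions, not copied from the real policy:

    package example

    import (
        admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    var policy = admissionregistrationv1.ValidatingAdmissionPolicy{
        ObjectMeta: metav1.ObjectMeta{Name: "example-node-restriction"},
        Spec: admissionregistrationv1.ValidatingAdmissionPolicySpec{
            // A single matchCondition short-circuits everything unless the
            // request comes from a user we care about.
            MatchConditions: []admissionregistrationv1.MatchCondition{{
                Name:       "is-node-user",
                Expression: `request.userInfo.username.startsWith("system:node:")`,
            }},
            // Extract the node names once; optionals turn missing claims and
            // fields into empty strings instead of evaluation errors.
            Variables: []admissionregistrationv1.Variable{{
                Name:       "userNodeName",
                Expression: `request.userInfo.extra[?"authentication.kubernetes.io/node-name"].orValue([""])[0]`,
            }, {
                Name:       "objectNodeName",
                Expression: `object.?spec.?nodeName.orValue("")`,
            }},
            // What remains are tiny, readable validations.
            Validations: []admissionregistrationv1.Validation{{
                Expression: `variables.objectNodeName == "" || variables.objectNodeName == variables.userNodeName`,
                Message:    "this node may only touch objects bound to itself or cluster-scoped objects",
            }},
        },
    }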
In the API, the effect of the feature gate is that alpha fields get dropped on
create. They get preserved during updates if already set. The
PodSchedulingContext registration is *not* restricted by the feature gate.
This enables deleting stale PodSchedulingContext objects after disabling
the feature gate.
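A rough sketch of the usual drop-disabled-fields pattern that implements this in the registry strategy; the names and the gated field are illustrative, not the actual resource.k8s.io strategy code:

    package strategy

    import (
        utilfeature "k8s.io/apiserver/pkg/util/feature"
        "k8s.io/kubernetes/pkg/features"
    )

    // claimSpec is a stand-in for the real spec type; Controller is the alpha
    // field gated by DRAControlPlaneController.
    type claimSpec struct {
        Controller string
    }

    // dropDisabledAlphaFields clears gated fields on create, but keeps them on
    // update when the old object already used them.
    func dropDisabledAlphaFields(newSpec, oldSpec *claimSpec) {
        if utilfeature.DefaultFeatureGate.Enabled(features.DRAControlPlaneController) {
            return
        }
        if oldSpec != nil && oldSpec.Controller != "" {
            // Already set before the update: preserve it.
            return
        }
        newSpec.Controller = ""
    }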
The scheduler checks the new feature gate before setting up an informer for
PodSchedulingContext objects and when deciding whether it can schedule a
pod. If any claim depends on a control plane controller, the scheduler bails
out, leading to:
    Status: Pending
    ...
    Warning FailedScheduling 73s default-scheduler 0/1 nodes are available: resourceclaim depends on disabled DRAControlPlaneController feature. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
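The bail-out itself boils down to a check like the following simplified sketch (stand-in types, not the actual dynamicresources plugin code):

    package example

    import "fmt"

    // claim is a simplified stand-in for a ResourceClaim; only the field the
    // check cares about is shown.
    type claim struct {
        Name       string
        Controller string // non-empty: allocation needs a control plane controller
    }

    // checkClaims mirrors the scheduler's behavior: with the gate disabled,
    // any claim that depends on a control plane controller makes the pod
    // unschedulable.
    func checkClaims(gateEnabled bool, claims []claim) error {
        if gateEnabled {
            return nil
        }
        for _, c := range claims {
            if c.Controller != "" {
                return fmt.Errorf("resourceclaim %s depends on disabled DRAControlPlaneController feature", c.Name)
            }
        }
        return nil
    }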
The rest of the changes prepare for testing the new feature separately from
"structured parameters". The goal is to have base "dra" jobs which just enable
and test those, then "classic-dra" jobs which add DRAControlPlaneController.
The structured parameter allocation logic was written from scratch in
staging/src/k8s.io/dynamic-resource-allocation/structured where it might be
useful for out-of-tree components.
Besides the new features (amount, admin access) and the API, it now supports
backtracking when the initial device selection doesn't lead to a complete
allocation of all claims.
Co-authored-by: Ed Bartosh <eduard.bartosh@intel.com>
Co-authored-by: John Belamaric <jbelamaric@google.com>
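The backtracking idea in a nutshell, as a toy Go sketch (the real allocator in staging/src/k8s.io/dynamic-resource-allocation/structured additionally handles constraints, amounts, and admin access):

    package example

    // allocate picks one device per claim without reusing a device and
    // backtracks when an earlier pick leaves a later claim unsatisfiable.
    // candidates[i] lists the devices that could satisfy claim i.
    // Call it as: allocate(candidates, map[string]bool{}, nil)
    func allocate(candidates [][]string, used map[string]bool, result []string) ([]string, bool) {
        if len(result) == len(candidates) {
            return result, true // every claim got a device
        }
        for _, device := range candidates[len(result)] {
            if used[device] {
                continue
            }
            used[device] = true
            if full, ok := allocate(candidates, used, append(result, device)); ok {
                return full, true
            }
            // This choice didn't lead to a complete allocation: backtrack.
            used[device] = false
        }
        return nil, false
    }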
The resource claim controller is completely agnostic to the claim spec. It
doesn't care about classes or devices, so it needs no changes in 1.31 besides
the v1alpha2 -> v1alpha3 renaming from a previous commit.