Making blocking API calls during a scheduling cycle, as the DRA plugin
currently does, slows down overall scheduling, i.e. it also affects pods
which don't use DRA.
It would be easy to move the blocking calls into a goroutine and let the
scheduling cycle end with "pod unschedulable". The hard part is handling an
error when
those API calls then fail in the background. There is a solution for that
(see https://github.com/kubernetes/kubernetes/pull/120963), but it's complex.
Instead, publishing the modified PodSchedulingContext can simply be done
later. In the more common case of a pod which is ready for binding except for
its claims, that happens in PreBind, which already runs in a separate
goroutine. In the less common case that a pod cannot be scheduled, it happens
in Unreserve, which is still blocking.
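A minimal sketch of that flow, assuming the scheduler framework's PreBind and
Unreserve extension points; the receiver type and the
publishPodSchedulingContext helper are illustrative stand-ins, not the
plugin's actual code:

    import (
        "context"

        v1 "k8s.io/api/core/v1"
        "k8s.io/klog/v2"
        "k8s.io/kubernetes/pkg/scheduler/framework"
    )

    // dynamicResources stands in for the plugin struct.
    type dynamicResources struct{}

    // publishPodSchedulingContext stands in for the actual API update.
    func (pl *dynamicResources) publishPodSchedulingContext(ctx context.Context, pod *v1.Pod, nodeName string) error {
        return nil // placeholder
    }

    // PreBind already runs in its own goroutine, so a blocking API call here
    // does not stall the scheduling cycles of unrelated pods.
    func (pl *dynamicResources) PreBind(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) *framework.Status {
        if err := pl.publishPodSchedulingContext(ctx, pod, nodeName); err != nil {
            return framework.AsStatus(err)
        }
        return nil
    }

    // Unreserve is still called synchronously; an error can only be logged.
    func (pl *dynamicResources) Unreserve(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) {
        if err := pl.publishPodSchedulingContext(ctx, pod, nodeName); err != nil {
            klog.FromContext(ctx).Error(err, "publishing PodSchedulingContext", "pod", klog.KObj(pod))
        }
    }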
This moves adding a pod to ReservedFor out of the main scheduling cycle into
PreBind. There it is done concurrently in different goroutines. For claims
which were specifically allocated for a pod (the most common case), that
usually makes no difference because the claim is already reserved.
It starts to matter when that pod then cannot be scheduled for other reasons,
because then the claim gets unreserved to allow deallocating it. It also
matters for claims that are created separately and then get used multiple times
by different pods.
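As a sketch of the concurrent part (reserveClaims and reserveForPod are
hypothetical helpers, and resource.k8s.io/v1alpha2 is an assumption about the
API version in use), PreBind could fan out one goroutine per claim:

    import (
        "context"

        "golang.org/x/sync/errgroup"
        v1 "k8s.io/api/core/v1"
        resourcev1alpha2 "k8s.io/api/resource/v1alpha2"
        "k8s.io/client-go/kubernetes"
    )

    // reserveClaims adds the pod to the ReservedFor list of all of its
    // claims, with one goroutine (and thus one API request) per claim.
    func reserveClaims(ctx context.Context, clientset kubernetes.Interface, pod *v1.Pod, claims []*resourcev1alpha2.ResourceClaim) error {
        g, ctx := errgroup.WithContext(ctx)
        for _, claim := range claims {
            claim := claim // capture the loop variable (needed before Go 1.22)
            g.Go(func() error {
                // One patch request per claim; reserveForPod is sketched
                // after the next paragraph.
                return reserveForPod(ctx, clientset, pod, claim)
            })
        }
        // The first failure cancels the context of the remaining requests.
        return g.Wait()
    }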
Because multiple pods might get added to the same claim rapidly and
independently of each other, it makes sense to do all claim status updates via patching:
then it is no longer necessary to have an up-to-date copy of the claim because
the patch operation will succeed if (and only if) the patched claim is valid.
Server-side apply cannot be used for this because a client always has to send
the full list of all entries that it wants to be set, i.e. it cannot add one
entry unless it knows the full list.
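One plausible shape for such a patch, continuing the sketch above (the actual
patch sent by the plugin may differ; note that a JSON patch "add" with the
"-" index only works if status.reservedFor already exists, which real code
would also have to handle):

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
        resourcev1alpha2 "k8s.io/api/resource/v1alpha2"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // reserveForPod appends one entry to the claim's status.reservedFor
    // list. No up-to-date copy of the claim is needed; the server rejects
    // the patch if the resulting claim would be invalid.
    func reserveForPod(ctx context.Context, clientset kubernetes.Interface, pod *v1.Pod, claim *resourcev1alpha2.ResourceClaim) error {
        patch := fmt.Sprintf(`[{"op":"add","path":"/status/reservedFor/-","value":{"resource":"pods","name":%q,"uid":%q}}]`,
            pod.Name, pod.UID)
        _, err := clientset.ResourceV1alpha2().ResourceClaims(claim.Namespace).Patch(ctx,
            claim.Name, types.JSONPatchType, []byte(patch), metav1.PatchOptions{}, "status")
        return err
    }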
* put var in local
Signed-off-by: husharp <jinhao.hu@pingcap.com>
* revert gomod
Signed-off-by: husharp <jinhao.hu@pingcap.com>
---------
Signed-off-by: husharp <jinhao.hu@pingcap.com>
The previous fix changed the behavior of
EnsureAdminClusterRoleBindingImpl under the assumption that the unit
test was correct and the real-world behavior was wrong. In fact, the
real-world behavior was already correct: the unit test was expecting
the wrong result because of the difference in behavior between real
and fake clients.
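For illustration only (the test below is made up, not the actual kubeadm
test): client-go's fake clientset does not evaluate authorization, so an
error that a real API server would return, such as Forbidden, shows up in a
unit test only if it is injected explicitly:

    import (
        "context"
        "errors"
        "testing"

        rbacv1 "k8s.io/api/rbac/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/kubernetes/fake"
        clienttesting "k8s.io/client-go/testing"
    )

    func TestCreateForbidden(t *testing.T) {
        client := fake.NewSimpleClientset()
        // The fake client would accept any create; make it behave like a
        // real server that denies the request.
        client.PrependReactor("create", "clusterrolebindings",
            func(action clienttesting.Action) (bool, runtime.Object, error) {
                return true, nil, apierrors.NewForbidden(
                    schema.GroupResource{Group: rbacv1.GroupName, Resource: "clusterrolebindings"},
                    "test-binding", errors.New("denied"))
            })
        _, err := client.RbacV1().ClusterRoleBindings().Create(context.Background(),
            &rbacv1.ClusterRoleBinding{ObjectMeta: metav1.ObjectMeta{Name: "test-binding"}},
            metav1.CreateOptions{})
        if !apierrors.IsForbidden(err) {
            t.Fatalf("expected a Forbidden error, got: %v", err)
        }
    }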
In the chains associated with the forward and output hooks, the nftables
proxy will no longer install drop and reject rules for node port services
with no endpoints.
Signed-off-by: Daman Arora <aroradaman@gmail.com>
The nftables proxy will now drop traffic directed towards unallocated
ClusterIPs and reject traffic directed towards invalid ports of allocated
ClusterIPs.
Signed-off-by: Daman Arora <aroradaman@gmail.com>
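Schematically (a sketch using the knftables library that the nftables proxy
is built on; the table, chain, and set names and the service CIDR are
illustrative, not the proxy's actual rule layout):

    import (
        "context"

        "sigs.k8s.io/knftables"
    )

    func installClusterIPChecks(ctx context.Context) error {
        nft, err := knftables.New(knftables.IPv4Family, "kube-proxy")
        if err != nil {
            return err
        }
        tx := nft.NewTransaction()
        tx.Add(&knftables.Table{})
        // Set of all allocated ClusterIPs, kept in sync with Services.
        tx.Add(&knftables.Set{Name: "cluster-ips", Type: "ipv4_addr"})
        // Regular chain; wiring it into a base chain is omitted here.
        tx.Add(&knftables.Chain{Name: "cluster-ips-check"})
        // Traffic to an allocated ClusterIP that reaches this chain matched
        // no service port, i.e. the port is invalid: reject it.
        tx.Add(&knftables.Rule{Chain: "cluster-ips-check", Rule: "ip daddr @cluster-ips reject"})
        // Remaining traffic to the service CIDR is directed at an
        // unallocated ClusterIP: drop it.
        tx.Add(&knftables.Rule{Chain: "cluster-ips-check", Rule: "ip daddr 10.96.0.0/16 drop"})
        return nft.Run(ctx, tx)
    }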
The v1beta1 API had MetricsBindAddress and HealthzBindAddress fields.
They were removed in v1, but never got removed from the unversioned type
when the v1beta1 API went away.