Refactor the code that creates an internal LoadBalancer in the e2e tests for network load balancers: the provider check no longer includes "azure" and now only allows "gke" and "gce", so the test runs only when the cluster uses one of those two providers. The counterpart test is in the out-of-tree cloud provider azure.
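A minimal sketch of the updated provider gate, assuming the test uses the e2e framework's skipper helper (the function name and the test body are illustrative, not the actual test code):

```go
import (
	e2eskipper "k8s.io/kubernetes/test/e2e/framework/skipper"
)

// internalLoadBalancerTest is a placeholder for the actual e2e test body.
func internalLoadBalancerTest() {
	// Previously this gate also allowed "azure"; that coverage now lives
	// in the out-of-tree cloud-provider-azure repository.
	e2eskipper.SkipUnlessProviderIs("gke", "gce")

	// ... create the internal LoadBalancer Service and verify it ...
}
```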
The winkernel code was originally based on the iptables code but never
made use of some parts of it. (e.g., it logs a warning if you didn't
set `--cluster-cidr`, even though it doesn't actually use
`--cluster-cidr` if you do set it.)
Blocking API calls during a scheduling cycle, as the DRA plugin currently
does, slow down overall scheduling, i.e. they also affect pods which don't use DRA.
It is easy to move the blocking calls into a goroutine so that the scheduling
cycle can end immediately with "pod unschedulable". The hard part is handling
an error when those API calls then fail in the background. There is a solution
for that (see https://github.com/kubernetes/kubernetes/pull/120963), but it's complex.
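A sketch of the naive fire-and-forget variant, assuming direct access to a clientset (names and plumbing are illustrative, not the actual plugin code); it shows why the easy part is easy and where the error-handling problem appears:

```go
import (
	"context"

	resourcev1alpha2 "k8s.io/api/resource/v1alpha2"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/klog/v2"
)

// publishAsync sends the modified PodSchedulingContext in the background so
// that the scheduling cycle can return "pod unschedulable" right away.
func publishAsync(clientset kubernetes.Interface, schedulingCtx *resourcev1alpha2.PodSchedulingContext) {
	schedulingCtx = schedulingCtx.DeepCopy()
	go func() {
		_, err := clientset.ResourceV1alpha2().PodSchedulingContexts(schedulingCtx.Namespace).
			Update(context.Background(), schedulingCtx, metav1.UpdateOptions{})
		if err != nil {
			// The scheduling cycle that wanted this update has already
			// finished, so there is no caller left to report the error
			// to or to retry the operation.
			klog.ErrorS(err, "background PodSchedulingContext update failed")
		}
	}()
}
```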
Instead, the modified PodSchedulingContext can also be published later. In the
more common case of a pod which is ready for binding except for its claims,
that happens in PreBind, which already runs in a separate goroutine. In the
less common case that a pod cannot be scheduled, it happens in Unreserve,
which is still blocking.
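In terms of the scheduler framework hooks, the split looks roughly like this; the receiver type and the `publishPodSchedulingContext` helper are assumptions for the sketch, not the plugin's actual names:

```go
import (
	"context"

	v1 "k8s.io/api/core/v1"
	"k8s.io/kubernetes/pkg/scheduler/framework"
)

type dynamicResources struct {
	// clientset, informers etc. elided
}

// publishPodSchedulingContext is a hypothetical helper that sends the
// modified PodSchedulingContext to the API server.
func (pl *dynamicResources) publishPodSchedulingContext(ctx context.Context, pod *v1.Pod, nodeName string) error {
	return nil
}

// PreBind runs in the per-pod binding goroutine, so a blocking API call here
// does not stall the main scheduling cycle.
func (pl *dynamicResources) PreBind(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) *framework.Status {
	if err := pl.publishPodSchedulingContext(ctx, pod, nodeName); err != nil {
		return framework.AsStatus(err)
	}
	return nil
}

// Unreserve handles the less common "pod unschedulable" case and still
// blocks the scheduling cycle; it cannot return an error.
func (pl *dynamicResources) Unreserve(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) {
	_ = pl.publishPodSchedulingContext(ctx, pod, nodeName) // error can only be logged
}
```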
This moves adding a pod to ReservedFor out of the main scheduling cycle into
PreBind. There it is done concurrently in different goroutines. For claims
which were specifically allocated for a pod (the most common case), that
usually makes no difference because the claim is already reserved.
It starts to matter when that pod then cannot be scheduled for other reasons,
because then the claim gets unreserved to allow deallocating it. It also
matters for claims that are created separately and then get used multiple times
by different pods.
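A sketch of the concurrent update in PreBind, assuming an errgroup and the hypothetical `addPodToReservedFor` helper that is sketched after the next paragraph:

```go
import (
	"context"

	"golang.org/x/sync/errgroup"
	v1 "k8s.io/api/core/v1"
	resourcev1alpha2 "k8s.io/api/resource/v1alpha2"
	"k8s.io/client-go/kubernetes"
)

// reserveClaims adds the pod to ReservedFor of all of its claims in parallel;
// the first failure cancels the shared context, aborting the other in-flight
// calls, and the returned error fails the binding.
func reserveClaims(ctx context.Context, clientset kubernetes.Interface, pod *v1.Pod, claims []*resourcev1alpha2.ResourceClaim) error {
	g, ctx := errgroup.WithContext(ctx)
	for _, claim := range claims {
		claim := claim // capture the loop variable for the goroutine
		g.Go(func() error {
			return addPodToReservedFor(ctx, clientset, pod, claim)
		})
	}
	return g.Wait()
}
```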
Because multiple pods might get added to the same claim rapidly and
independently of each other, it makes sense to do all claim status updates via
patching: then it is no longer necessary to have an up-to-date copy of the
claim, because the patch operation will succeed if (and only if) the patched
claim is valid.
Server-side apply cannot be used for this because a client always has to send
the full list of entries that it wants to be set, i.e. it cannot add one
entry without knowing the full list.
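A sketch of such a patch, assuming that `reservedFor` merges entries by `uid` under a strategic merge patch and that sending `metadata.uid` serves as a precondition against hitting a different object with the same name (the helper name is hypothetical):

```go
import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	resourcev1alpha2 "k8s.io/api/resource/v1alpha2"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// addPodToReservedFor appends one ReservedFor entry without an up-to-date
// copy of the claim: the server merges the new entry into the existing list
// and rejects the patch if the resulting claim would be invalid.
func addPodToReservedFor(ctx context.Context, clientset kubernetes.Interface, pod *v1.Pod, claim *resourcev1alpha2.ResourceClaim) error {
	patch := fmt.Sprintf(`{"metadata":{"uid":%q},"status":{"reservedFor":[{"resource":"pods","name":%q,"uid":%q}]}}`,
		claim.UID, pod.Name, pod.UID)
	_, err := clientset.ResourceV1alpha2().ResourceClaims(claim.Namespace).Patch(ctx,
		claim.Name, types.StrategicMergePatchType, []byte(patch), metav1.PatchOptions{}, "status")
	return err
}
```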