By creating CSIStorageCapacity objects in advance, we get the
FailedScheduling pod event if (and only if!) the test is expected to
fail because of insufficient or missing capacity. We can use that as an
indicator that waiting for pod start can be stopped early. However,
because we might not get to see the event under load, we still need
the timeout.
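
A minimal sketch of the early-abort idea, assuming client-go; the
helper name and polling details are hypothetical, not the actual e2e
code:

    package sketch

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodStart polls for pod start, but bails out as soon as the
    // scheduler reports FailedScheduling for the pod.
    func waitForPodStart(ctx context.Context, c kubernetes.Interface, pod *corev1.Pod, timeout time.Duration) error {
        deadline := time.After(timeout)
        tick := time.NewTicker(2 * time.Second)
        defer tick.Stop()
        for {
            select {
            case <-deadline:
                return fmt.Errorf("pod %s did not start within %v", pod.Name, timeout)
            case <-tick.C:
                // Stop early if scheduling already failed for lack of capacity.
                events, err := c.CoreV1().Events(pod.Namespace).List(ctx, metav1.ListOptions{
                    FieldSelector: "involvedObject.name=" + pod.Name + ",reason=FailedScheduling",
                })
                if err == nil && len(events.Items) > 0 {
                    return fmt.Errorf("pod %s: %s", pod.Name, events.Items[0].Message)
                }
                p, err := c.CoreV1().Pods(pod.Namespace).Get(ctx, pod.Name, metav1.GetOptions{})
                if err == nil && p.Status.Phase == corev1.PodRunning {
                    return nil
                }
            }
        }
    }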
Setting testParameters.scName had no effect because
StorageClassTest.StorageClassName isn't used anywhere. Instead, the
storage class name is generated dynamically.
This patch fixes the region regex for listing zones.
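
An illustrative sketch of why the anchoring matters (not the actual
cloud-provider code): a naive prefix match would also pick up zones of
other regions.

    package sketch

    import "regexp"

    // zonesInRegion keeps only the zones that belong to region. Anchoring
    // the regex matters: a bare prefix match for "us-east1" would also
    // accept "us-east10-b".
    func zonesInRegion(region string, zones []string) []string {
        re := regexp.MustCompile("^" + regexp.QuoteMeta(region) + "-[a-z]$")
        var out []string
        for _, z := range zones {
            if re.MatchString(z) {
                out = append(out, z)
            }
        }
        return out
    }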
The compute client's BasePath changed to compute.googleapis.com, but the resource URIs were left as www.googleapis.com.
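
A sketch of the fixed pattern, assuming the google.golang.org/api/compute/v1
client (the helper below is illustrative):

    package sketch

    import (
        "fmt"

        compute "google.golang.org/api/compute/v1"
    )

    // diskURI builds a resource URI from the client's BasePath so it stays
    // in sync with whatever host the client uses; hardcoding
    // "https://www.googleapis.com/compute/v1/" breaks once BasePath moves
    // to compute.googleapis.com.
    func diskURI(svc *compute.Service, project, zone, disk string) string {
        return fmt.Sprintf("%sprojects/%s/zones/%s/disks/%s", svc.BasePath, project, zone, disk)
    }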
DeprecatedMightBeMasterNode() has been marked as deprecated, and we need
to find an alternative for its callers.
In NewResourceUsageGatherer(), the function was called to determine
whether the specified pods are running on master nodes, so that the
gatherer gathers those pods' resource usage.
This adds nodeHasControlPlanePods() to determine whether the specified
pods are running on nodes which host control plane pods (kube-scheduler
and kube-controller-manager), and replaces callers of
DeprecatedMightBeMasterNode() with this new function as a better way.
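
A minimal sketch of what such a helper could look like, using
client-go; the actual implementation in the e2e framework may differ:

    package sketch

    import (
        "context"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // nodeHasControlPlanePods reports whether any control plane pods
    // (kube-scheduler, kube-controller-manager) run on the given node.
    func nodeHasControlPlanePods(ctx context.Context, c kubernetes.Interface, nodeName string) (bool, error) {
        pods, err := c.CoreV1().Pods(metav1.NamespaceSystem).List(ctx, metav1.ListOptions{
            FieldSelector: "spec.nodeName=" + nodeName,
        })
        if err != nil {
            return false, err
        }
        for _, pod := range pods.Items {
            if strings.HasPrefix(pod.Name, "kube-scheduler-") ||
                strings.HasPrefix(pod.Name, "kube-controller-manager-") {
                return true, nil
            }
        }
        return false, nil
    }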
This commit adds the ability for users to specify an install hint for
their exec credential provider binary.
In the exec credential provider workflow, if the exec credential binary
does not exist, then the user will see some sort of ugly
exec: exec: "does-not-exist": executable file not found in $PATH
error message. If a user downloads a kubeconfig from somewhere, they
may not know that kubectl is trying to use a binary to obtain
credentials to authenticate to the API, and may scratch their head when
they see this error message. Furthermore, even if a user does know that
their kubeconfig is trying to run a binary, they might not know how to
obtain the binary. This install hint seeks to ease these two user pain
points.
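
A sketch of where the hint lives, using the InstallHint field on
client-go's ExecConfig; the plugin name and hint text below are made up:

    package sketch

    import (
        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    // execConfigWithHint returns an exec credential config whose
    // InstallHint can be shown to the user when the binary is missing.
    func execConfigWithHint() *clientcmdapi.ExecConfig {
        return &clientcmdapi.ExecConfig{
            APIVersion: "client.authentication.k8s.io/v1beta1",
            Command:    "example-credential-plugin", // hypothetical binary
            InstallHint: "example-credential-plugin is required to authenticate.\n" +
                "Install it with: brew install example-credential-plugin",
        }
    }

When the command cannot be found, the hint can be surfaced alongside
the "executable file not found" error instead of leaving the user to
guess.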
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
endpointSliceTracker creates a set of resource versions for each
service. The resource versions in the set can be deleted when
EndpointSlices are deleted, but the set itself and its key in the map
are never deleted, leading to a memory leak.
This patch deletes the set when the service is deleted, and stops
initializing an empty set when the "read-only" methods Has and Stale
are called.
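
A condensed sketch of the fix; field and method names are simplified
from the real tracker:

    package sketch

    import "sync"

    type tracker struct {
        mu sync.Mutex
        // service key -> (slice name -> resource version)
        versionsByService map[string]map[string]string
    }

    // Update allocates the per-service set only on the write path.
    func (t *tracker) Update(service, slice, resourceVersion string) {
        t.mu.Lock()
        defer t.mu.Unlock()
        set, ok := t.versionsByService[service]
        if !ok {
            set = map[string]string{}
            t.versionsByService[service] = set
        }
        set[slice] = resourceVersion
    }

    // DeleteService drops the whole set; without this, the map key leaks
    // even after every slice entry has been removed.
    func (t *tracker) DeleteService(service string) {
        t.mu.Lock()
        defer t.mu.Unlock()
        delete(t.versionsByService, service)
    }

    // Has is read-only and must not initialize an empty set as a side effect.
    func (t *tracker) Has(service, slice string) bool {
        t.mu.Lock()
        defer t.mu.Unlock()
        set, ok := t.versionsByService[service]
        if !ok {
            return false
        }
        _, ok = set[slice]
        return ok
    }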
Also, give ownership to pkg/scheduler/framework/plugins/volumebinding.
Signed-off-by: Aldo Culquicondor <acondor@google.com>
Change-Id: I4bd89b1745a2be0e458601056ab905bdd6692195
Remove vmodule support; add a klog v test case; some refactoring.
Update following review comments.
Add an enabled test case, and fix some nits.
Fix the enabled func.
Fix as per review comments.
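
If these refer to klog v2's Verbose.Enabled(), a hedged sketch of the
behavior under test might look like:

    package main

    import (
        "flag"
        "fmt"

        "k8s.io/klog/v2"
    )

    func main() {
        fs := flag.NewFlagSet("sketch", flag.ExitOnError)
        klog.InitFlags(fs)
        fs.Set("v", "2")

        fmt.Println(klog.V(1).Enabled()) // true:  1 <= 2
        fmt.Println(klog.V(3).Enabled()) // false: 3 >  2
    }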
Currently kube-proxy defaults the min-sync-period for
iptables to 0. However, as explained by Dan Winship,
"With minSyncPeriod: 0, you run iptables-restore 100 times.
With minSyncPeriod: 1s , you run iptables-restore once.
With minSyncPeriod: 10s , you also run iptables-restore once,
but you might have to wait 10 seconds first"
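
A toy illustration of that coalescing effect (kube-proxy's real
implementation uses a bounded-frequency runner; this only shows why a
burst collapses into one sync):

    package main

    import "fmt"

    func main() {
        // Simulate a burst of 100 service/endpoint changes arriving while
        // the proxy is waiting out minSyncPeriod.
        changes := make(chan struct{}, 100)
        for i := 0; i < 100; i++ {
            changes <- struct{}{}
        }
        close(changes)

        syncs := 0
        for range changes {
            // Drain everything else that piled up during the wait ...
            for len(changes) > 0 {
                <-changes
            }
            // ... then run iptables-restore once for the whole burst.
            syncs++
        }
        fmt.Println("iptables-restore runs:", syncs) // 1, not 100
    }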
Previously, we didn't check the contents of the result after calling out
to the plugin endpoint. This could have resulted in errors if the plugin
returned either 'nil' or an empty result. This patch fixes that.
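
A hedged sketch of the kind of check this adds, assuming the device
plugin API types; the exact call site differs:

    package sketch

    import (
        "fmt"

        pluginapi "k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1"
    )

    // validateAllocateResponse rejects nil or incomplete plugin results
    // instead of letting them blow up callers further down.
    func validateAllocateResponse(resp *pluginapi.AllocateResponse, want int) error {
        if resp == nil {
            return fmt.Errorf("plugin returned a nil allocate response")
        }
        if len(resp.ContainerResponses) != want {
            return fmt.Errorf("plugin returned %d container responses, expected %d",
                len(resp.ContainerResponses), want)
        }
        return nil
    }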
Previously, we were passing the variable 'devices' to this function,
when we should have been passing 'allocated'. This bug crept in due to
a variable name change that didn't propagate its way through the entire
function. The tests added in the previous commit would have caught this.
If no potential victims could be found, there is no need to evaluate the node
again, since its state didn't change.
It's safe to return and thus prevent scheduling from running the filter plugins
again.
NOTE:
A node that is filtered out by the filter plugins could pass them later
if something changes on that node, e.g. pods terminating on that node.
Previously, this was caught either by the normal `schedule` routine or by
`preempt` (pods are terminated when the preemption logic tries to find
the nodes and re-evaluates the filter plugins).
Actually, this shouldn't be taken care of by preemption: since the
`schedule` routine is effectively always running (its interval is
"zero"), letting `schedule` handle it frees `preempt` from work that is
irrelevant to preemption.
For the above reason, a couple of test cases, as well as the logic that
checks for the existence of victim pods, are removed, since that
situation can never happen after this change.
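
A simplified sketch of the resulting control flow (not the actual
scheduler code):

    package sketch

    type victim struct{ name string }

    // selectVictims stands in for the real victim-selection logic on a node.
    func selectVictims(node string) []victim { return nil }

    // bestCandidate walks the nodes; a node with zero victims is skipped
    // outright, because removing nothing cannot change the node's state,
    // and re-running the filter plugins on it would be wasted work.
    func bestCandidate(nodes []string) (string, []victim) {
        for _, node := range nodes {
            victims := selectVictims(node)
            if len(victims) == 0 {
                continue
            }
            return node, victims
        }
        return "", nil
    }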
Signed-off-by: Dave Chen <dave.chen@arm.com>