None of the users of the functions passed anything other than nil or an empty
map, and the implementation ignores the parameter - it seems like a candidate
for simplification.
All code must use the context from Ginkgo when doing API calls or polling for a
change; otherwise the code would not return immediately when the test gets
aborted.
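A minimal sketch of the intended pattern, assuming a framework instance f and a
recent apimachinery that provides wait.PollUntilContextTimeout; the pod name and
durations are placeholders:

ginkgo.It("waits for a pod to run", func(ctx context.Context) {
	// The ctx provided by Ginkgo is passed through to polling and API calls,
	// so the loop stops as soon as the test is aborted or times out.
	err := wait.PollUntilContextTimeout(ctx, 2*time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := f.ClientSet.CoreV1().Pods(f.Namespace.Name).Get(ctx, "test-pod", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			return pod.Status.Phase == v1.PodRunning, nil
		})
	framework.ExpectNoError(err)
})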
ginkgo.DeferCleanup has multiple advantages:
- The cleanup operation can get registered if and only if needed.
- No need to return a cleanup function that the caller must invoke.
- Automatically determines whether a context is needed, which will
simplify the introduction of context parameters.
- Ginkgo's timeline shows when it executes the cleanup operation.
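A minimal sketch of the pattern, assuming a clientset cs, a namespace ns, and a
pod spec that was just created; the cleanup only gets registered once the Create
succeeds, and DeferCleanup detects the context parameter on its own:

pod, err := cs.CoreV1().Pods(ns).Create(ctx, podSpec, metav1.CreateOptions{})
framework.ExpectNoError(err)
// Registered only because the pod actually exists; no cleanup closure is
// returned to the caller, and the ctx parameter is injected automatically.
ginkgo.DeferCleanup(func(ctx context.Context) error {
	return cs.CoreV1().Pods(ns).Delete(ctx, pod.Name, metav1.DeleteOptions{})
})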
Every ginkgo callback should return immediately when a timeout occurs or the
test run manually gets aborted with CTRL-C. To do that, they must take a ctx
parameter and pass it through to all code which might block.
This is a first automated step towards that: the additional parameter was added
with
sed -i 's/\(framework.ConformanceIt\|ginkgo.It\)\(.*\)func() {$/\1\2func(ctx context.Context) {/' \
$(git grep -l -e framework.ConformanceIt -e ginkgo.It )
$GOPATH/bin/goimports -w $(git status | grep modified: | sed -e 's/.* //')
log_test.go was left unchanged.
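The effect of the substitution on a callback, shown before and after (the test
body is illustrative):

// before
ginkgo.It("should schedule a pod", func() {
	// ... test body using context.TODO() or no context at all ...
})

// after
ginkgo.It("should schedule a pod", func(ctx context.Context) {
	// ... test body can now pass ctx through to API calls and polling ...
})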
- update all the import statements
- run hack/pin-dependency.sh to change pinned dependency versions
- run hack/update-vendor.sh to update go.mod files and the vendor directory
- update the method signatures for custom reporters
Signed-off-by: Dave Chen <dave.chen@arm.com>
Otherwise, every entry in the nodeNameToPodList[nodeName] list holds the same
reference (the loop control variable), making all the pods in the list
identical.
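A sketch of the classic range-variable aliasing issue being described (Go
versions before 1.22); variable names are illustrative:

// Broken: &pod always points at the single loop variable, so every node's
// list ends up holding multiple pointers to the last pod examined.
for _, pod := range pods.Items {
	nodeNameToPodList[pod.Spec.NodeName] = append(nodeNameToPodList[pod.Spec.NodeName], &pod)
}

// Fixed: take the address of the slice element (or copy the loop variable)
// so each appended pointer refers to a distinct pod.
for i := range pods.Items {
	pod := &pods.Items[i]
	nodeNameToPodList[pod.Spec.NodeName] = append(nodeNameToPodList[pod.Spec.NodeName], pod)
}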
The purpose of the pod created by `createBalancedPodForNodes()` is to ensure
that all nodes have equal resource requests (as seen by the scheduler). This
prevents the default scheduling behavior (which attempts to balance resource requests)
from interfering with e2e tests that exercise other priority/score plugins.
Because the scheduler only worries about requests, specifying `Limits` in this pod
is unnecessary. In fact, if the calculated "balancing" limit is too low, it can cause
the balancing pod to never start due to OOMKill errors, leading to flakes and failures.
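A hedged sketch of the resulting resource stanza for the balancing pod; the
request values (neededCPUMillis, neededMemoryBytes) are placeholders computed
elsewhere:

resources := v1.ResourceRequirements{
	Requests: v1.ResourceList{
		v1.ResourceCPU:    *resource.NewMilliQuantity(neededCPUMillis, resource.DecimalSI),
		v1.ResourceMemory: *resource.NewQuantity(neededMemoryBytes, resource.BinarySI),
	},
	// No Limits: the scheduler scores on Requests only, and a low memory limit
	// could OOMKill the balancing pod before it ever starts.
}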
The test does not clean up all the pods it creates.
Memory balancing pods are only deleted when the test namespace is deleted,
leaving them running or in a terminating state when the next test runs.
If the next test is "[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run",
it can fail.
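One way to address this, sketched here with an illustrative label selector, is
to delete the balancing pods explicitly instead of waiting for namespace
deletion:

// Assumes the balancing pods carry a dedicated label; the selector is illustrative.
err := cs.CoreV1().Pods(ns).DeleteCollection(ctx, metav1.DeleteOptions{},
	metav1.ListOptions{LabelSelector: "purpose=memory-balancing"})
framework.ExpectNoError(err)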
WaitForStableCluster() checks that all pods run on worker nodes, and the
function used to look up master nodes in order to skip checking control plane
pods.
GetMasterAndWorkerNodes() was used for getting master nodes, but its
implementation is not good because it uses DeprecatedMightBeMasterNode().
This makes WaitForStableCluster() refer to worker nodes directly to avoid
using GetMasterAndWorkerNodes().
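A minimal sketch of selecting worker nodes directly; the control-plane label key
varies between cluster setups (older clusters use node-role.kubernetes.io/master):

nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{
	// Exclude control plane nodes by label instead of guessing by node name.
	LabelSelector: "!node-role.kubernetes.io/control-plane",
})
framework.ExpectNoError(err)
workerNodes := sets.NewString()
for _, node := range nodes.Items {
	workerNodes.Insert(node.Name)
}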
WaitForPod*() are just wrapper functions for the e2epod package, and they
created an invalid dependency from the core framework to an e2e sub-framework.
So this replaces WaitForPodRunning() with the e2epod function.
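A sketch of the replacement call; depending on the Kubernetes release, the
e2epod helpers may also take a ctx as their first argument:

// Instead of the core framework wrapper, call the e2epod helper directly
// (import alias: e2epod "k8s.io/kubernetes/test/e2e/framework/pod").
err := e2epod.WaitForPodNameRunningInNamespace(c, podName, ns)
framework.ExpectNoError(err)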
This PR moves functions from test/e2e/framework/util.go to keep the e2e
core framework small and simple:
- RestartKubeProxy: Moved to the e2e network package
- CheckConnectivityToHost: Moved to the e2e network package
- RemoveAvoidPodsOffNode: Moved to the e2e scheduling package
- AddOrUpdateAvoidPodOnNode: Moved to the e2e scheduling package
- UpdateDaemonSetWithRetries: Moved to the e2e apps package
- CheckForControllerManagerHealthy: Moved to the e2e storage package
- ParseKVLines: Removed because of e9345ae5f0
- AddOrUpdateLabelOnNodeAndReturnOldValue: Removed because of ff7b07c43c
Includes the changes from #80922 (which were reverted), and updates the test to add intolerable taints to all nodes except the target node.
I think that if a pod doesn't have any tolerations, we don't prefer a node without taints over
one that has taints (see https://github.com/kubernetes/kubernetes/blob/master/pkg/scheduler/algorithm/priorities/taint_toleration.go#L29), so there is no point in testing that particular functionality. A side effect of the above is that, since we go round-robin in every scheduling cycle, sometimes we choose the first node and in the next cycle we move on to the next node (where taints are not being applied), which causes problems unnecessarily.
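A hedged sketch of tainting every node except the target using plain client-go
calls (the real test uses e2e framework helpers; the taint key, value, and node
name variables here are illustrative):

taint := v1.Taint{
	Key:    "example.com/e2e-intolerable",
	Value:  "testing",
	Effect: v1.TaintEffectNoSchedule,
}
for _, nodeName := range allNodeNames {
	if nodeName == targetNodeName {
		continue // leave the target node schedulable for the test pod
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	framework.ExpectNoError(err)
	node.Spec.Taints = append(node.Spec.Taints, taint)
	_, err = cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	framework.ExpectNoError(err)
}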