Commit Graph

494 Commits

Wei Huang
72863f65d6
Graduate PodSchedulingReadiness to beta 2023-02-17 18:45:20 -08:00
Patrick Ohly
136f89dfc5 e2e: use error wrapping with %w
The recently introduced failure handling in ExpectNoError depends on error
wrapping: if an error prefix gets added with `fmt.Errorf("foo: %v", err)`, then
ExpectNoError cannot detect that the root cause is an assertion failure and
then will add another useless "unexpected error" prefix and will not dump the
additional failure information (currently the backtrace inside the E2E
framework).

Instead of manually deciding on a case-by-case basis where %w is needed, all
error wrapping was updated automatically with

    sed -i "s/fmt.Errorf\(.*\): '*\(%s\|%v\)'*\",\(.* err)\)/fmt.Errorf\1: %w\",\3/" $(git grep -l 'fmt.Errorf' test/e2e*)

This may be unnecessary in some cases, but it's not wrong.
2023-02-06 15:39:13 +01:00
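
For illustration, a minimal standalone sketch of the difference (the assertionError type here is a hypothetical stand-in for the framework's internal failure type):

    package main

    import (
        "errors"
        "fmt"
    )

    // assertionError stands in for the framework's internal assertion failure type.
    type assertionError struct{ msg string }

    func (e *assertionError) Error() string { return e.msg }

    func main() {
        root := &assertionError{msg: "expected pod to be running"}

        // %v flattens the error into a string; the root cause can no longer be detected.
        flattened := fmt.Errorf("foo: %v", root)
        // %w keeps the chain intact, so errors.As can still find the cause.
        wrapped := fmt.Errorf("foo: %w", root)

        var target *assertionError
        fmt.Println(errors.As(flattened, &target)) // false
        fmt.Println(errors.As(wrapped, &target))   // true
    }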
Patrick Ohly
4d63e7d4d6 e2e: remove unused label filter from WaitForPodsRunningReady
None of the callers of the function passed anything other than nil or an empty
map, and the implementation ignores the parameter - it seems like a candidate for
simplification.
2023-02-06 15:39:12 +01:00
Antonio Ojea
7f5ae1c0c1
Revert "e2e: wait for pods with gomega" 2023-02-06 12:08:22 +01:00
Kubernetes Prow Robot
85aa0057c6
Merge pull request #113298 from pohly/e2e-wait-for-pods-with-gomega
e2e: wait for pods with gomega
2023-02-04 05:26:29 -08:00
Patrick Ohly
222f655062 e2e: use error wrapping with %w
The recently introduced failure handling in ExpectNoError depends on error
wrapping: if an error prefix gets added with `fmt.Errorf("foo: %v", err)`, then
ExpectNoError cannot detect that the root cause is an assertion failure and
then will add another useless "unexpected error" prefix and will not dump the
additional failure information (currently the backtrace inside the E2E
framework).

Instead of manually deciding on a case-by-case basis where %w is needed, all
error wrapping was updated automatically with

    sed -i "s/fmt.Errorf\(.*\): '*\(%s\|%v\)'*\",\(.* err)\)/fmt.Errorf\1: %w\",\3/" $(git grep -l 'fmt.Errorf' test/e2e*)

This may be unnecessary in some cases, but it's not wrong.
2023-01-31 13:01:39 +01:00
Patrick Ohly
3ebab68c8a e2e: remove unused label filter from WaitForPodsRunningReady
None of the callers of the function passed anything other than nil or an empty
map, and the implementation ignores the parameter - it seems like a candidate for
simplification.
2023-01-31 07:52:26 +01:00
David Porter
71719a6036 test: Bump timeout for runPausePod
The `runPausePod` timeout was previously 1 minute, which appears to be
too short and was causing timeouts in some tests.

Switch to `f.Timeouts.PodStartShort`, the common timeout used to wait
for pods to start, which defaults to 5 minutes.

Also refactor to remove `runPausePodWithoutTimeout` and instead rely on
`runPausePod` since we do not make the timeout customizable directly
(it can be changed via the test framework if desired).

Signed-off-by: David Porter <david@porter.me>
2023-01-30 21:27:59 -08:00
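
A rough sketch of the resulting pattern; the helper name and poll interval are illustrative, not taken from the commit:

    import (
        "context"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/kubernetes/test/e2e/framework"
    )

    // waitForPausePod polls until the pod is Running, bounded by the framework's
    // shared PodStartShort timeout instead of a hard-coded one minute.
    func waitForPausePod(ctx context.Context, f *framework.Framework, ns, name string) error {
        return wait.PollImmediate(2*time.Second, f.Timeouts.PodStartShort, func() (bool, error) {
            pod, err := f.ClientSet.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            return pod.Status.Phase == v1.PodRunning, nil
        })
    }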
Kubernetes Prow Robot
d863d04adc
Merge pull request #114580 from pohly/e2e-ginkgo-timeout-fixes
e2e ginkgo timeout fixes, III
2023-01-30 13:48:48 -08:00
Kubernetes Prow Robot
86455ae12e
Merge pull request #115094 from GCES-Kubernetes-2022-2/e2e-apps
E2e apps
2023-01-28 08:52:34 -08:00
Paulo Gonçalves Lima
d1278a0830 Fix: Improves the log for failing tests in e2e/apps.
Issue #105678
2023-01-28 02:50:32 -03:00
David Porter
b96290c08f e2e node: Update runtime class handler skip logic
There are two runtime class tests which require the container runtime
config to include explicit configuration for `test-handler`. The current
logic skips these tests in non-GCE environments. This skip is too strict,
since the tests are also skipped in node e2e environments and in other
environments such as kind, which support running them and also
configure `test-handler`.

Instead of skipping based on provider, add a new function
`NodeSupportsPreconfiguredRuntimeClassHandler` which examines the
underlying container runtime config and checks if the config includes
`test-handler`. The check is a bit brittle since it assumes container
runtime config paths, but it is a net improvement over skipping the test
entirely on non-GCE environments.

This results in the test working in the common test environments, namely
GCE kube-up, node e2e, and kind.

Signed-off-by: David Porter <david@porter.me>
2023-01-24 14:43:24 -08:00
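
A hypothetical sketch of the idea; the helper and config paths are assumptions about typical containerd/CRI-O installs, not the commit's exact code:

    // nodeHasTestHandler looks for a "test-handler" entry in the node's container
    // runtime config. runOnNode stands in for however the test runs a command on
    // the node; the config paths are assumed defaults for containerd and CRI-O.
    func nodeHasTestHandler(runOnNode func(cmd string) error) bool {
        for _, path := range []string{"/etc/containerd/config.toml", "/etc/crio/crio.conf"} {
            if err := runOnNode("grep -q test-handler " + path); err == nil {
                return true
            }
        }
        return false
    }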
Patrick Ohly
a2722ffa4a e2e: replace WithTimeout with NodeTimeout
The intent of timeout handling (for the entire "It" and not just a few calls)
becomes more obvious and simpler when using ginkgo.NodeTimeout as a decorator.
2023-01-02 10:47:00 +01:00
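
A minimal sketch of the decorator style, assuming Ginkgo v2 and an illustrative five-minute limit:

    import (
        "context"
        "time"

        "github.com/onsi/ginkgo/v2"
    )

    // The timeout applies to the whole "It" node; ctx is cancelled when it
    // expires, so every API call and poll inside should stop promptly.
    var _ = ginkgo.It("schedules a pod", ginkgo.NodeTimeout(5*time.Minute), func(ctx context.Context) {
        // ... test body using ctx ...
    })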
Patrick Ohly
1bc24630da e2e scheduling: remove redundant stopCh
The ctx.Done() channel associated with the current test can be used
instead. This is a simplification; both approaches work.
2022-12-19 10:00:00 +01:00
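
Sketched, the simplification looks roughly like this (controller is any client-go cache.Controller; the helper name is illustrative):

    import (
        "context"

        "k8s.io/client-go/tools/cache"
    )

    // startInformer reuses the Ginkgo test context's Done channel as the stop
    // channel, so no separate stopCh has to be created and closed.
    func startInformer(ctx context.Context, controller cache.Controller) {
        go controller.Run(ctx.Done())
    }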
Patrick Ohly
29d6d03a3c e2e scheduling: fix scope of context.WithTimeout
The original intent in v1.26.0 was to apply the timeout only to the following
cache.WaitForCacheSync.
2022-12-19 10:00:00 +01:00
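
A sketch of the intended scoping, with an assumed 30-second sync budget:

    import (
        "context"
        "fmt"
        "time"

        "k8s.io/client-go/tools/cache"
    )

    // waitForSync limits the timeout to the cache sync only; everything that
    // follows keeps using the caller's context.
    func waitForSync(ctx context.Context, informer cache.SharedIndexInformer) error {
        syncCtx, cancel := context.WithTimeout(ctx, 30*time.Second)
        defer cancel()
        if !cache.WaitForCacheSync(syncCtx.Done(), informer.HasSynced) {
            return fmt.Errorf("timed out waiting for caches to sync")
        }
        return nil
    }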
Patrick Ohly
2f6c4f5eab e2e: use Ginkgo context
All code must use the context from Ginkgo when doing API calls or polling for a
change; otherwise the code would not return immediately when the test gets
aborted.
2022-12-16 20:14:04 +01:00
Patrick Ohly
d4729008ef e2e: simplify test cleanup
ginkgo.DeferCleanup has multiple advantages:
- The cleanup operation can get registered if and only if needed.
- No need to return a cleanup function that the caller must invoke.
- Automatically determines whether a context is needed, which will
  simplify the introduction of context parameters.
- Ginkgo's timeline shows when it executes the cleanup operation.
2022-12-13 08:09:01 +01:00
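
A minimal sketch of the pattern; the helper name and client variables are illustrative:

    import (
        "context"

        "github.com/onsi/ginkgo/v2"
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        clientset "k8s.io/client-go/kubernetes"
        "k8s.io/kubernetes/test/e2e/framework"
    )

    // createPodWithCleanup registers the cleanup only once the pod actually
    // exists; Ginkgo supplies the context to the cleanup callback itself.
    func createPodWithCleanup(ctx context.Context, c clientset.Interface, ns string, pod *v1.Pod) *v1.Pod {
        created, err := c.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
        framework.ExpectNoError(err)
        ginkgo.DeferCleanup(func(ctx context.Context) {
            _ = c.CoreV1().Pods(ns).Delete(ctx, created.Name, metav1.DeleteOptions{})
        })
        return created
    }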
Patrick Ohly
0d73c0d0e5 e2e: fix linter errors
Adding "ctx" as parameter in the previous commit led to some linter errors
about code that overwrites "ctx" without using it.

This gets fixed by replacing context.Background or context.TODO in those code
lines with the new ctx parameter.

Two context.WithCancel calls can get removed completely because the context
automatically gets cancelled by Ginkgo when the test returns.
2022-12-10 20:35:46 +01:00
Patrick Ohly
df5d84ae81 e2e: accept context from Ginkgo
Every ginkgo callback should return immediately when a timeout occurs or the
test run manually gets aborted with CTRL-C. To do that, they must take a ctx
parameter and pass it through to all code which might block.

This is a first automated step towards that: the additional parameter got added
with

    sed -i 's/\(framework.ConformanceIt\|ginkgo.It\)\(.*\)func() {$/\1\2func(ctx context.Context) {/' \
        $(git grep -l -e framework.ConformanceIt -e ginkgo.It )
    $GOPATH/bin/goimports -w $(git status | grep modified: | sed -e 's/.* //')

log_test.go was left unchanged.
2022-12-10 19:50:18 +01:00
Wei Huang
a75c0709d0
Deflake a preemption test that may patch Node incorrectly 2022-12-07 14:30:34 -08:00
Michal Wozniak
41285a7c91 Add e2e test for job pod failure policy used to match pod disruption 2022-11-10 15:50:02 +01:00
Michal Wozniak
818e180300 Add e2e test for adding DisruptionTarget condition to the preemption victim pod 2022-11-09 09:02:40 +01:00
Wei Huang
abe0c5d5b4
E2E test for KEP Scheduling Readiness Gates 2022-11-08 12:38:21 -08:00
Stephen Heywood
bf22ad1a15 Promote limitRange e2e test to Conformance 2022-10-10 11:37:57 +13:00
Patrick Ohly
dfdf88d4fa e2e: adapt to moved code
This is the result of automatically editing source files like this:

    go install golang.org/x/tools/cmd/goimports@latest
    find ./test/e2e* -name "*.go" | xargs env PATH=$GOPATH/bin:$PATH ./e2e-framework-sed.sh

with e2e-framework-sed.sh containing this:

sed -i \
    -e "s/\(f\|fr\|\w\w*\.[fF]\w*\)\.ExecCommandInContainer(/e2epod.ExecCommandInContainer(\1, /" \
    -e "s/\(f\|fr\|\w\w*\.[fF]\w*\)\.ExecCommandInContainerWithFullOutput(/e2epod.ExecCommandInContainerWithFullOutput(\1, /" \
    -e "s/\(f\|fr\|\w\w*\.[fF]\w*\)\.ExecShellInContainer(/e2epod.ExecShellInContainer(\1, /" \
    -e "s/\(f\|fr\|\w\w*\.[fF]\w*\)\.ExecShellInPod(/e2epod.ExecShellInPod(\1, /" \
    -e "s/\(f\|fr\|\w\w*\.[fF]\w*\)\.ExecShellInPodWithFullOutput(/e2epod.ExecShellInPodWithFullOutput(\1, /" \
    -e "s/\(f\|fr\|\w\w*\.[fF]\w*\)\.ExecWithOptions(/e2epod.ExecWithOptions(\1, /" \
    -e "s/\(f\|fr\|\w\w*\.[fF]\w*\)\.MatchContainerOutput(/e2eoutput.MatchContainerOutput(\1, /" \
    -e "s/\(f\|fr\|\w\w*\.[fF]\w*\)\.PodClient(/e2epod.NewPodClient(\1, /" \
    -e "s/\(f\|fr\|\w\w*\.[fF]\w*\)\.PodClientNS(/e2epod.PodClientNS(\1, /" \
    -e "s/\(f\|fr\|\w\w*\.[fF]\w*\)\.TestContainerOutput(/e2eoutput.TestContainerOutput(\1, /" \
    -e "s/\(f\|fr\|\w\w*\.[fF]\w*\)\.TestContainerOutputRegexp(/e2eoutput.TestContainerOutputRegexp(\1, /" \
    -e "s/framework.AddOrUpdateLabelOnNode\b/e2enode.AddOrUpdateLabelOnNode/" \
    -e "s/framework.AllNodes\b/e2edebug.AllNodes/" \
    -e "s/framework.AllNodesReady\b/e2enode.AllNodesReady/" \
    -e "s/framework.ContainerResourceGatherer\b/e2edebug.ContainerResourceGatherer/" \
    -e "s/framework.ContainerResourceUsage\b/e2edebug.ContainerResourceUsage/" \
    -e "s/framework.CreateEmptyFileOnPod\b/e2eoutput.CreateEmptyFileOnPod/" \
    -e "s/framework.DefaultPodDeletionTimeout\b/e2epod.DefaultPodDeletionTimeout/" \
    -e "s/framework.DumpAllNamespaceInfo\b/e2edebug.DumpAllNamespaceInfo/" \
    -e "s/framework.DumpDebugInfo\b/e2eoutput.DumpDebugInfo/" \
    -e "s/framework.DumpNodeDebugInfo\b/e2edebug.DumpNodeDebugInfo/" \
    -e "s/framework.EtcdUpgrade\b/e2eproviders.EtcdUpgrade/" \
    -e "s/framework.EventsLister\b/e2edebug.EventsLister/" \
    -e "s/framework.ExecOptions\b/e2epod.ExecOptions/" \
    -e "s/framework.ExpectNodeHasLabel\b/e2enode.ExpectNodeHasLabel/" \
    -e "s/framework.ExpectNodeHasTaint\b/e2enode.ExpectNodeHasTaint/" \
    -e "s/framework.GCEUpgradeScript\b/e2eproviders.GCEUpgradeScript/" \
    -e "s/framework.ImagePrePullList\b/e2epod.ImagePrePullList/" \
    -e "s/framework.KubectlBuilder\b/e2ekubectl.KubectlBuilder/" \
    -e "s/framework.LocationParamGKE\b/e2eproviders.LocationParamGKE/" \
    -e "s/framework.LogSizeDataTimeseries\b/e2edebug.LogSizeDataTimeseries/" \
    -e "s/framework.LogSizeGatherer\b/e2edebug.LogSizeGatherer/" \
    -e "s/framework.LogsSizeData\b/e2edebug.LogsSizeData/" \
    -e "s/framework.LogsSizeDataSummary\b/e2edebug.LogsSizeDataSummary/" \
    -e "s/framework.LogsSizeVerifier\b/e2edebug.LogsSizeVerifier/" \
    -e "s/framework.LookForStringInLog\b/e2eoutput.LookForStringInLog/" \
    -e "s/framework.LookForStringInPodExec\b/e2eoutput.LookForStringInPodExec/" \
    -e "s/framework.LookForStringInPodExecToContainer\b/e2eoutput.LookForStringInPodExecToContainer/" \
    -e "s/framework.MasterAndDNSNodes\b/e2edebug.MasterAndDNSNodes/" \
    -e "s/framework.MasterNodes\b/e2edebug.MasterNodes/" \
    -e "s/framework.MasterUpgradeGKE\b/e2eproviders.MasterUpgradeGKE/" \
    -e "s/framework.NewKubectlCommand\b/e2ekubectl.NewKubectlCommand/" \
    -e "s/framework.NewLogsVerifier\b/e2edebug.NewLogsVerifier/" \
    -e "s/framework.NewNodeKiller\b/e2enode.NewNodeKiller/" \
    -e "s/framework.NewResourceUsageGatherer\b/e2edebug.NewResourceUsageGatherer/" \
    -e "s/framework.NodeHasTaint\b/e2enode.NodeHasTaint/" \
    -e "s/framework.NodeKiller\b/e2enode.NodeKiller/" \
    -e "s/framework.NodesSet\b/e2edebug.NodesSet/" \
    -e "s/framework.PodClient\b/e2epod.PodClient/" \
    -e "s/framework.RemoveLabelOffNode\b/e2enode.RemoveLabelOffNode/" \
    -e "s/framework.ResourceConstraint\b/e2edebug.ResourceConstraint/" \
    -e "s/framework.ResourceGathererOptions\b/e2edebug.ResourceGathererOptions/" \
    -e "s/framework.ResourceUsagePerContainer\b/e2edebug.ResourceUsagePerContainer/" \
    -e "s/framework.ResourceUsageSummary\b/e2edebug.ResourceUsageSummary/" \
    -e "s/framework.RunHostCmd\b/e2eoutput.RunHostCmd/" \
    -e "s/framework.RunHostCmdOrDie\b/e2eoutput.RunHostCmdOrDie/" \
    -e "s/framework.RunHostCmdWithFullOutput\b/e2eoutput.RunHostCmdWithFullOutput/" \
    -e "s/framework.RunHostCmdWithRetries\b/e2eoutput.RunHostCmdWithRetries/" \
    -e "s/framework.RunKubectl\b/e2ekubectl.RunKubectl/" \
    -e "s/framework.RunKubectlInput\b/e2ekubectl.RunKubectlInput/" \
    -e "s/framework.RunKubectlOrDie\b/e2ekubectl.RunKubectlOrDie/" \
    -e "s/framework.RunKubectlOrDieInput\b/e2ekubectl.RunKubectlOrDieInput/" \
    -e "s/framework.RunKubectlWithFullOutput\b/e2ekubectl.RunKubectlWithFullOutput/" \
    -e "s/framework.RunKubemciCmd\b/e2ekubectl.RunKubemciCmd/" \
    -e "s/framework.RunKubemciWithKubeconfig\b/e2ekubectl.RunKubemciWithKubeconfig/" \
    -e "s/framework.SingleContainerSummary\b/e2edebug.SingleContainerSummary/" \
    -e "s/framework.SingleLogSummary\b/e2edebug.SingleLogSummary/" \
    -e "s/framework.TimestampedSize\b/e2edebug.TimestampedSize/" \
    -e "s/framework.WaitForAllNodesSchedulable\b/e2enode.WaitForAllNodesSchedulable/" \
    -e "s/framework.WaitForSSHTunnels\b/e2enode.WaitForSSHTunnels/" \
    -e "s/framework.WorkItem\b/e2edebug.WorkItem/" \
    "$@"

for i in "$@"; do
    # Import all sub packages and let goimports figure out which of those
    # are redundant (= already imported) or not needed.
    sed -i -e '/"k8s.io.kubernetes.test.e2e.framework"/a e2edebug "k8s.io/kubernetes/test/e2e/framework/debug"' "$i"
    sed -i -e '/"k8s.io.kubernetes.test.e2e.framework"/a e2ekubectl "k8s.io/kubernetes/test/e2e/framework/kubectl"' "$i"
    sed -i -e '/"k8s.io.kubernetes.test.e2e.framework"/a e2enode "k8s.io/kubernetes/test/e2e/framework/node"' "$i"
    sed -i -e '/"k8s.io.kubernetes.test.e2e.framework"/a e2eoutput "k8s.io/kubernetes/test/e2e/framework/pod/output"' "$i"
    sed -i -e '/"k8s.io.kubernetes.test.e2e.framework"/a e2epod "k8s.io/kubernetes/test/e2e/framework/pod"' "$i"
    sed -i -e '/"k8s.io.kubernetes.test.e2e.framework"/a e2eproviders "k8s.io/kubernetes/test/e2e/framework/providers"' "$i"
    goimports -w "$i"
done
2022-10-06 08:19:47 +02:00
Stephen Heywood
12fe011f73 Create e2e test for LimitRange endpoints
e2e test validates the following 3 endpoints
- listCoreV1LimitRangeForAllNamespaces
- patchCoreV1NamespacedLimitRange
- deleteCoreV1CollectionNamespacedLimitRange
2022-09-30 07:51:05 +13:00
Stanislav Laznicka
682ee2908a
Make scheduling e2e tests run PSa-restricted pods
The "pause" pods that are being run in the scheduling tests are
sometimes launched in system namespaces. Therefore even if a test
is considered to be running on a "baseline" Pod Security admission
level, its "baseline" pods would fail to run if the global PSa
enforcement policy is set to "restricted" - the system namespaces
have no PSa labels.

The "pause" pods run by this test can actually easily run with
"restricted" security context, and so this patch turns them
into just that.
2022-07-26 14:32:51 +02:00
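
A sketch of what that means in practice; the field values follow the standard Pod Security "restricted" profile requirements rather than being copied from the commit:

    import (
        v1 "k8s.io/api/core/v1"
    )

    func boolPtr(b bool) *bool { return &b }

    // restrictedSecurityContext returns a container security context that
    // satisfies the Pod Security "restricted" profile.
    func restrictedSecurityContext() *v1.SecurityContext {
        return &v1.SecurityContext{
            AllowPrivilegeEscalation: boolPtr(false),
            RunAsNonRoot:             boolPtr(true),
            Capabilities:             &v1.Capabilities{Drop: []v1.Capability{"ALL"}},
            SeccompProfile:           &v1.SeccompProfile{Type: v1.SeccompProfileTypeRuntimeDefault},
        }
    }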
Dave Chen
fd4b5b629b Stop using the deprecated method CurrentGinkgoTestDescription
Besides, using this method might lead to a `concurrent map writes`
issue, per the discussion here: https://github.com/onsi/ginkgo/issues/970

Signed-off-by: Dave Chen <dave.chen@arm.com>
2022-07-08 10:46:11 +08:00
Dave Chen
857458cfa5 update ginkgo from v1 to v2 and gomega to 1.19.0
- update all the import statements
- run hack/pin-dependency.sh to change pinned dependency versions
- run hack/update-vendor.sh to update go.mod files and the vendor directory
- update the method signatures for custom reporters

Signed-off-by: Dave Chen <dave.chen@arm.com>
2022-07-08 10:44:46 +08:00
Zhecheng Li
b8b18a0a05 [e2e] Should spread Pods to schedulable cluster zones
Some Nodes, such as control plane ones, should not be considered
when spreading Pods.

Signed-off-by: Zhecheng Li <zhechengli@microsoft.com>
2022-06-10 16:40:56 +08:00
Lukasz Szaszkiewicz
4a7845b485 users of watchtools.NewIndexerInformerWatcher should wait for the informer to sync
Previously, users of this method were relying on the fact that a call to LIST was made.
Instead, users should use the dedicated `HasSynced` method.
2022-05-24 13:05:40 +02:00
Sergiusz Urbaniak
1495c9f2cd
test/e2e/*: default existing tests to privileged pod security policy
This is to ensure that all existing tests don't break when defaulting
the pod security policy to restricted in the e2e test framework.
2022-04-05 08:41:12 +02:00
Sergiusz Urbaniak
373c08e0c7
test/e2e/framework: configure pod security admission level for e2e tests 2022-03-28 15:42:10 +02:00
Kubernetes Prow Robot
2c91952fcf
Merge pull request #106486 from Ahmed-Aghadi/codeEnhanceNode
test/e2e/node + test/e2e/scheduling: improve checks
2022-02-28 11:17:46 -08:00
AHMED AGHADI
ff0a3009db Improve checks for test/e2e/node and test/e2e/scheduling 2022-02-28 23:44:21 +05:30
Aldo Culquicondor
3f0de6b80e
Remove skip Multi-AZ test based on provider
The test only cares whether there are multiple zones, and that is independent of the provider
2022-01-11 10:38:41 -05:00
Ciprian Hacman
a0abe5aa33 Clean up dockershim in tests
Signed-off-by: Ciprian Hacman <ciprian@hakman.dev>
2021-12-22 13:05:34 +02:00
Davanum Srinivas
9405e9b55e
Check in OWNERS modified by update-yamlfmt.sh
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
2021-12-09 21:31:26 -05:00
Tim Hockin
11a25bfeb6
De-share the Handler struct in core API (#105979)
* De-share the Handler struct in core API

An upcoming PR adds a handler that only applies on one of these paths.
Having fields that don't work seems bad.

This never should have been shared.  Lifecycle hooks are like a "write"
while probes are more like a "read". HTTPGet and TCPSocket don't really
make sense as lifecycle hooks (but I can't take that back). When we add
gRPC, it is EXPLICITLY a health check (defined by gRPC) not an arbitrary
RPC - so a probe makes sense but a hook does not.

In the future I can also see adding lifecycle hooks that don't make
sense as probes.  E.g. 'sleep' is a common lifecycle request. The only
option is `exec`, which requires having a sleep binary in your image.

* Run update scripts
2021-10-29 13:15:11 -07:00
Jan Chaloupka
b3249a1b39 e2e scheduling priorities: do not reference control loop variable
Otherwise, the nodeNameToPodList[nodeName] list will have all its references
identical (all pointing at the loop control variable),
thus making all the pods in the list identical.
2021-09-23 13:08:03 +02:00
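
The underlying Go pitfall (pre-Go 1.22 loop-variable semantics), sketched in isolation rather than with the test's actual types:

    // Taking the address of the range variable stores the same pointer for every
    // element; shadowing the variable gives each iteration its own copy.
    func podPointers(pods []v1.Pod) []*v1.Pod {
        var out []*v1.Pod
        for _, pod := range pods {
            pod := pod // shadow the loop variable before taking its address
            out = append(out, &pod)
        }
        return out
    }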
Aldo Culquicondor
be34dc95b5 Remove E2E test for NodePreferAvoidPods scheduling Score
The feature is now disabled by default. The annotation never graduated from alpha. The same behavior can be achieved with PreferNoSchedule taints.
2021-07-15 14:55:09 -04:00
Davanum Srinivas
75748c185e
enable verify-golangci-lint.sh
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
2021-07-14 08:53:33 -04:00
Davanum Srinivas
07332ad398
fix ineffassign and varcheck
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
2021-07-14 08:41:22 -04:00
Antonio Ojea
6d3fd8353c don't panic if nodeIPs are not found 2021-06-21 10:59:09 +02:00
Kubernetes Prow Robot
6bac142190
Merge pull request #102138 from damemi/balance-pods-parallel
(scheduler e2e) Create balanced pods in parallel
2021-05-27 14:04:23 -07:00
Mike Dame
36cdb72eb6 (scheduler e2e) Create balanced pods in parallel 2021-05-27 16:01:18 -04:00
Konstantin Misyutin
351f4e9c9c cleanup: remove TODO at e2e scheduling preemption test
Signed-off-by: Konstantin Misyutin <konstantin.misyutin@huawei.com>
2021-05-19 17:34:50 +08:00
Mike Dame
07029c941a Remove Limits from scheduling e2e balanced pod resources
The purpose of the pod created by `createBalancedPodForNodes()` is to ensure
that all nodes have equal resource requests (as seen by the scheduler). This
prevents the default scheduling behavior (which attempts to balance resource requests)
from interfering with e2e tests that exercise other priorities/score plugins.

Because the scheduler only worries about requests, specifying `Limits` in this pod
is unnecessary. In fact, if the calculated "balancing" limit is too low, it can cause
the balancing pod to never start due to OOMKill errors, leading to flakes and failures.
2021-04-21 15:58:00 -04:00
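
A sketch of the resulting resource spec; the actual requests are computed per node, so the values below are purely illustrative:

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    // balancedPodResources sets only Requests (what the scheduler scores on) and
    // deliberately leaves Limits empty, so a low limit cannot OOM-kill the pod.
    func balancedPodResources() v1.ResourceRequirements {
        return v1.ResourceRequirements{
            Requests: v1.ResourceList{
                v1.ResourceCPU:    resource.MustParse("200m"),
                v1.ResourceMemory: resource.MustParse("256Mi"),
            },
        }
    }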
Kubernetes Prow Robot
2147937c41
Merge pull request #100128 from ingvagabund/sig-scheduling-single-node-e2e
[sig-scheduling] SchedulerPreemption|SchedulerPredicates|SchedulerPriorities: adjust some e2e tests to run in a single node cluster scenario
2021-04-13 10:31:09 -07:00
Jan Chaloupka
bf2fc250a4 validates basic preemption works|validates lower priority pod preemption by critical pod: allocate 4/5 instead of 2/3
To run the tests in a single node cluster, create two pods consuming 2/5 of the extended resource instead of one consuming 2/3.
The low priority pod will be consuming 2/5 of the extended resource instead so in case there's only a single node,
a high priority pod consuming 2/5 of the extended resource can be still scheduled. Thus, making sure only the low priority pod
gets preempted once the preemptor pod consuming 2/5 of the extended resource gets scheduled while keeping the high priority pod untouched.
2021-04-13 09:47:28 +02:00