Repeating the same "Failed" message text doesn't add any
information. Separating with a blank line is more readable.
Before:
<failure message="Failed; Failed; Failed" type="">
...
--- FAIL: TestFrontProxyConfig/WithoutUID (64.89s) ; === RUN TestFrontProxyConfig/WithUID
After:
<failure message="Failed" type="">
...
--- FAIL: TestFrontProxyConfig/WithoutUID (64.89s)
=== RUN TestFrontProxyConfig/WithUID
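A minimal sketch of that aggregation in Go (the helper below and the way failures are collected are assumptions for illustration, not the actual JUnit writer): deduplicate the repeated messages and put each captured output on its own line instead of joining with " ; ".

```go
package main

import (
	"fmt"
	"strings"
)

// joinFailure collapses repeated failure messages into a single occurrence
// each and places the captured test outputs on separate lines instead of
// joining them with " ; ".
func joinFailure(messages, outputs []string) (message, body string) {
	seen := map[string]bool{}
	var unique []string
	for _, m := range messages {
		if !seen[m] {
			seen[m] = true
			unique = append(unique, m)
		}
	}
	return strings.Join(unique, "; "), strings.Join(outputs, "\n")
}

func main() {
	msg, body := joinFailure(
		[]string{"Failed", "Failed", "Failed"},
		[]string{
			"--- FAIL: TestFrontProxyConfig/WithoutUID (64.89s)",
			"=== RUN TestFrontProxyConfig/WithUID",
		},
	)
	fmt.Printf("<failure message=%q type=\"\">\n%s\n", msg, body)
}
```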
The Kubernetes repository contains some internal Go modules that are
not part of the global Go workspace. Because apidiff is currently
run from the root of the repository, it does not work against these
internal modules.
Instead of executing apidiff from the root, we can simply cd into the
module path that was passed in to avoid this limitation.
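A hedged sketch of the idea in Go (the real change lives in the CI tooling; the binary invocation, module path, and helper below are placeholders): run apidiff with its working directory set to the module path that was passed in, rather than the repository root.

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

// runAPIDiff executes apidiff with its working directory set to the module
// that was passed in. This is equivalent to `cd <module>` before the call
// and lets apidiff resolve internal modules that are not part of the
// global Go workspace.
func runAPIDiff(modulePath string, args ...string) error {
	cmd := exec.Command("apidiff", args...)
	cmd.Dir = modulePath
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// Placeholder module path and (omitted) arguments, purely for illustration.
	if err := runAPIDiff("./staging/src/k8s.io/apimachinery"); err != nil {
		log.Fatal(err)
	}
}
```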
When `kubeadm init phase bootstrap-token` gets invoked, it reads
the kubeconfig from disk repeatedly. This is wasteful, but, more
importantly, it blocks the use of `/dev/stdin` and other sources
of data that cannot be read repeatedly.
This change introduces a new field that caches the parsed kubeconfig.
When a new clientset is requested, it is built from this pre-parsed
kubeconfig, so the code no longer reaches out to disk.
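A minimal sketch of the caching approach, assuming client-go's clientcmd and kubernetes packages; the Provider type and its methods are invented for illustration and are not the actual kubeadm code.

```go
package kubeconfigcache

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// Provider reads the kubeconfig exactly once and caches the parsed result,
// so every clientset is built from the cached config instead of re-reading
// the file.
type Provider struct {
	path   string
	config *clientcmdapi.Config // parsed kubeconfig, cached after first load
}

func NewProvider(path string) *Provider {
	return &Provider{path: path}
}

// Clientset converts the cached kubeconfig into a clientset; the source is
// only read on the first call.
func (p *Provider) Clientset() (kubernetes.Interface, error) {
	if p.config == nil {
		cfg, err := clientcmd.LoadFromFile(p.path)
		if err != nil {
			return nil, err
		}
		p.config = cfg
	}
	restCfg, err := toRESTConfig(p.config)
	if err != nil {
		return nil, err
	}
	return kubernetes.NewForConfig(restCfg)
}

func toRESTConfig(cfg *clientcmdapi.Config) (*rest.Config, error) {
	return clientcmd.NewDefaultClientConfig(*cfg, &clientcmd.ConfigOverrides{}).ClientConfig()
}
```

Loading the kubeconfig exactly once is also what makes sources like `/dev/stdin` usable, since they cannot be read a second time.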
Previously, ValidateNodeSelector did not check that labels are valid. Now
it does for resource.k8s.io, regardless of whether an object was already
created with invalid labels in an earlier Kubernetes release. Theoretically
this is a breaking change and could cause problems during an upgrade, but
that is highly unlikely in practice.
In contrast to node affinity, DRA does not ignore parse errors
(it uses NewNodeSelector, not NewLazyErrorNodeSelector), so invalid labels
would have been caught instead of being silently ignored.
Even if some object has invalid labels, this only affects an alpha -> beta
upgrade, which isn't guaranteed to work seamlessly.
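A hedged illustration of that difference using k8s.io/component-helpers/scheduling/corev1/nodeaffinity; the selector below is made up, and the exact error-reporting behavior of the lazy variant may differ in detail.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/component-helpers/scheduling/corev1/nodeaffinity"
)

func main() {
	// A node selector whose match expression uses an invalid label key.
	selector := &v1.NodeSelector{
		NodeSelectorTerms: []v1.NodeSelectorTerm{{
			MatchExpressions: []v1.NodeSelectorRequirement{{
				Key:      "not a valid label key!",
				Operator: v1.NodeSelectorOpExists,
			}},
		}},
	}

	// DRA-style usage: parse errors are reported immediately.
	if _, err := nodeaffinity.NewNodeSelector(selector); err != nil {
		fmt.Println("NewNodeSelector:", err)
	}

	// Node-affinity-style usage: parse errors are deferred until Match.
	lazy := nodeaffinity.NewLazyErrorNodeSelector(selector)
	matched, err := lazy.Match(&v1.Node{})
	fmt.Println("LazyErrorNodeSelector.Match:", matched, err)
}
```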
Note that these tests will now take more time to run, as they rely on
scale-up and scale-down to prepare the test case and restore the cluster
state.
Remove all the GKE- and GCE-specific tests, including:
- GPUs
- volumes (no way to provision volumes without provider specific
infrastructure)
- scale up/down from/to 0
- tests checking what happens after breaking nodes (no way to simulate
temporary network failure without provider assumptions)
Remove the scalability tests that were not being run and were unmaintained.
Update the autoscaler version used by the tests.
Update the autoscaler status parsing logic so that the tests pass with the
newer autoscaler version.
We have had a zero-flake policy for a long time now (> 1 year,
https://github.com/kubernetes/community/pull/7538); however, there are
still some places that tolerate flakes and retry.
Tolerating flakes does not help, to the point that when we have to make a
hard decision it creates more uncertainty.
No matter how, we should always be able to deal with flakes:
- if the software or algorithm is racy, we need to work to make it
  deterministic
- if it is deterministic, the test must be deterministic
- if the test is deterministic but depends on the environment, then we
  work on making the environment deterministic