When log messages get stripped out of the failure text, the original text gets
stored as <system-out>. That part then got lost when reducing tests. Instead of
dropping it, it needs to be joined from all failed tests. The same applies to
<system-err>, although that isn't used yet.
Repeating the same "Failed" message text doesn't add any
information. Separating the joined output with a blank line is more
readable; a sketch of this joining follows the example below.
Before:
<failure message="Failed; Failed; Failed" type="">
...
--- FAIL: TestFrontProxyConfig/WithoutUID (64.89s) ; === RUN TestFrontProxyConfig/WithUID
After:
<failure message="Failed" type="">
...
--- FAIL: TestFrontProxyConfig/WithoutUID (64.89s)
=== RUN TestFrontProxyConfig/WithUID
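The joining could look roughly like the following sketch. The type and field
names (junitTestCase, FailureMessage, SystemOut, reduceFailures) are
illustrative assumptions, not the actual types used by the test tooling:

```go
package main

import "strings"

// junitTestCase is an assumed, simplified representation of one test case.
type junitTestCase struct {
	FailureMessage string // content of the <failure message="..."> attribute
	SystemOut      string // content of <system-out>
}

// reduceFailures merges several failed test cases into one: failure messages
// are deduplicated, while every <system-out> block is kept.
func reduceFailures(failed []junitTestCase) junitTestCase {
	var messages, outputs []string
	seen := map[string]bool{}
	for _, tc := range failed {
		if !seen[tc.FailureMessage] {
			seen[tc.FailureMessage] = true
			messages = append(messages, tc.FailureMessage)
		}
		if tc.SystemOut != "" {
			outputs = append(outputs, tc.SystemOut)
		}
	}
	return junitTestCase{
		// "Failed; Failed; Failed" collapses to a single "Failed".
		FailureMessage: strings.Join(messages, "; "),
		// A blank line between the outputs of different tests is easier
		// to read than joining them with " ; ".
		SystemOut: strings.Join(outputs, "\n\n"),
	}
}
```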
The kubernetes repository contains some internal Go modules that are
not part of the global Go workspace. Because apidiff is currently
run from the root of the repository, it does not work against these
internal modules.
Instead of executing apidiff from the root, we can simply cd into the
passed module path to avoid this limitation.
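A minimal sketch of that idea, assuming apidiff is invoked as an external
command (the real invocation lives in the repository's scripts and may
differ):

```go
package main

import "os/exec"

// runAPIDiff runs apidiff from inside the module directory instead of the
// repository root, which is equivalent to "cd <modulePath>" before the call
// and lets Go resolve modules that are not part of the workspace.
func runAPIDiff(modulePath string, args ...string) ([]byte, error) {
	cmd := exec.Command("apidiff", args...)
	cmd.Dir = modulePath
	return cmd.CombinedOutput()
}
```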
When `kubeadm init phase bootstrap-token` gets invoked, it reads
the kubeconfig from disk repeatedly. This is wasteful, but, more
importantly, it blocks the use of `/dev/stdin` and other sources
of data that cannot be read repeatedly.
This change introduces a new field that caches the parsed kubeconfig.
When a new clientset is requested, it is built from this pre-parsed
kubeconfig, so the code no longer reaches out to disk.
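A minimal sketch of the caching idea using client-go; the struct and method
names (initData, newInitData, Client) are illustrative assumptions, not the
actual kubeadm code:

```go
package main

import (
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// initData caches the kubeconfig that was parsed once at startup.
type initData struct {
	kubeConfig *clientcmdapi.Config // parsed once, reused for every clientset
}

// newInitData reads and parses the kubeconfig a single time, so the source
// (a regular file, /dev/stdin, ...) only has to be readable once.
func newInitData(path string) (*initData, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	cfg, err := clientcmd.Load(b)
	if err != nil {
		return nil, err
	}
	return &initData{kubeConfig: cfg}, nil
}

// Client builds a clientset from the cached kubeconfig without going back
// to disk.
func (d *initData) Client() (kubernetes.Interface, error) {
	restCfg, err := clientcmd.NewDefaultClientConfig(*d.kubeConfig, &clientcmd.ConfigOverrides{}).ClientConfig()
	if err != nil {
		return nil, err
	}
	return kubernetes.NewForConfig(restCfg)
}
```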
Previously, ValidateNodeSelector did not check that labels are valid. Now it
does for resource.k8s.io, regardless of whether an object was already created
with invalid labels in an earlier Kubernetes release. Theoretically this is a
breaking change and could cause problems during an upgrade, but that is highly
unlikely in practice.
In contrast to node affinity, DRA does not ignore parse errors (it uses
NewNodeSelector, not NewLazyErrorNodeSelector), so invalid labels would
have been found instead of being silently ignored.
Even if some object has invalid labels, this only affects an alpha -> beta
upgrade, which isn't guaranteed to work seamlessly.
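The kind of check being added could look like the helper below; this is an
illustrative sketch built on apimachinery's validation utilities, not the
actual resource.k8s.io validation code:

```go
package validation

import (
	utilvalidation "k8s.io/apimachinery/pkg/util/validation"
	"k8s.io/apimachinery/pkg/util/validation/field"
)

// validateSelectorLabels checks that a label key and its values in a node
// selector requirement are syntactically valid, regardless of whether the
// object was originally created by an older release.
func validateSelectorLabels(key string, values []string, fldPath *field.Path) field.ErrorList {
	var allErrs field.ErrorList
	for _, msg := range utilvalidation.IsQualifiedName(key) {
		allErrs = append(allErrs, field.Invalid(fldPath.Child("key"), key, msg))
	}
	for i, value := range values {
		for _, msg := range utilvalidation.IsValidLabelValue(value) {
			allErrs = append(allErrs, field.Invalid(fldPath.Child("values").Index(i), value, msg))
		}
	}
	return allErrs
}
```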
We have had a zero-flake policy for a long time now (> 1 year), see
https://github.com/kubernetes/community/pull/7538. However, there are still
some places that tolerate flakes and retry.
Tolerating and retrying flakes does not help, to the point that when we have
to take a hard decision it creates more uncertainty.
It does not matter how, we should always be able to deal with flakes:
- if the software or algorithm is racy, we need to work on making it deterministic
- if the software is deterministic, the test must be deterministic
- if the test is deterministic but depends on the environment, then we work on making the environment deterministic