The WaitFor* refactoring in 07c34eb400 had an oversight in which timeout parameter
gets passed to WaitForAllPodsCondition() from WaitForPodsWithLabelRunningReady(),
so the calls to WaitForPodsWithLabelRunningReady() ended up ignoring the
user-provided timeout. Fix that.
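A minimal sketch of the timeout plumbing involved; the helpers below are stand-ins for the real test/e2e/framework functions, not their actual signatures:

```go
package main

import (
	"fmt"
	"time"
)

// defaultTimeout stands in for the package-level default that was effectively
// used instead of the caller's timeout before the fix.
const defaultTimeout = 5 * time.Minute

// waitForAllPodsCondition is a stand-in for WaitForAllPodsCondition(); only
// the timeout plumbing matters for this illustration.
func waitForAllPodsCondition(conditionDesc string, timeout time.Duration) error {
	fmt.Printf("waiting up to %v for pods to be %s\n", timeout, conditionDesc)
	return nil
}

// waitForPodsWithLabelRunningReady mirrors WaitForPodsWithLabelRunningReady():
// the user-provided timeout must be forwarded, not silently replaced.
func waitForPodsWithLabelRunningReady(timeout time.Duration) error {
	// Before the fix (conceptually): waitForAllPodsCondition("running and ready", defaultTimeout)
	return waitForAllPodsCondition("running and ready", timeout)
}

func main() {
	_ = waitForPodsWithLabelRunningReady(30 * time.Second)
}
```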
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
Use the group resource instead of objectType in watch cache metrics,
because otherwise all CustomResources are grouped together as
*unstructured.Unstructured instead of getting one entry per type.
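Roughly, the difference this makes for the metric label (the "widgets"/"example.com" group resource is a made-up example):

```go
package main

import (
	"fmt"
	"reflect"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// Every CustomResource is decoded into the same Go type...
	obj := &unstructured.Unstructured{}
	fmt.Println(reflect.TypeOf(obj).String()) // "*unstructured.Unstructured" for all CRs

	// ...so labelling metrics by object type collapses all CRs into one series.
	// Labelling by group resource keeps them distinct:
	gr := schema.GroupResource{Group: "example.com", Resource: "widgets"}
	fmt.Println(gr.String()) // "widgets.example.com"
}
```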
Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
All CustomResources are treated as *unstructured.Unstructured, leading
the watch cache to log anything related to CRs as Unstructured. This
change uses the schema.GroupResource instead of the object type for all
type-related log messages in the watch cache, resulting in distinct output
for each CR type.
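For the log messages this amounts to formatting the stored schema.GroupResource rather than the object's Go type; a sketch with an illustrative message and group resource, not the watch cache's actual log lines:

```go
package main

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/klog/v2"
)

func main() {
	event := &unstructured.Unstructured{}
	groupResource := schema.GroupResource{Group: "example.com", Resource: "widgets"}

	// Old style: "%T" prints "*unstructured.Unstructured" for every CR.
	klog.Infof("watch cache for %T: processing event", event)

	// New style: the group resource yields distinct output per CR type,
	// e.g. "widgets.example.com".
	klog.Infof("watch cache for %v: processing event", groupResource)
	klog.Flush()
}
```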
Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
Followup on https://github.com/kubernetes/kubernetes/pull/111846. This
particular test was left out of that PR because it started failing once
it was enabled, and it was desirable to merge
https://github.com/kubernetes/kubernetes/pull/111846 irrespective of
this particular test.
The failure in the test was caused by the
`createFSGroupRequestPreHook` mock CSI driver hook function assuming
that the request object passed to it is an instance of the respective
struct, when it is actually a pointer to it. As a result the hook
function never did its job, and so the test failed.
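The gist of the fix, using a stand-in request type rather than the real CSI protobuf messages:

```go
package main

import "fmt"

// NodeStageVolumeRequest stands in for the corresponding CSI request message;
// the mock driver's pre-hook receives it as a pointer, not as a value.
type NodeStageVolumeRequest struct{ VolumeId string }

// buggyHook matches on the value type, so the case never fires for the
// pointer that is actually passed in and the hook silently does nothing.
func buggyHook(request interface{}) bool {
	switch request.(type) {
	case NodeStageVolumeRequest:
		return true
	default:
		return false
	}
}

// fixedHook matches on the pointer type and works as intended.
func fixedHook(request interface{}) bool {
	switch request.(type) {
	case *NodeStageVolumeRequest:
		return true
	default:
		return false
	}
}

func main() {
	req := &NodeStageVolumeRequest{VolumeId: "vol-1"}
	fmt.Println(buggyHook(req), fixedHook(req)) // false true
}
```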
If the user passes "--proxy-mode ipvs", and it is not possible to use
IPVS, then error out rather than falling back to iptables.
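A sketch of the behavior change; the types and helpers below (Proxier, ipvsAvailable, etc.) are illustrative stand-ins, not kube-proxy's actual code:

```go
package main

import (
	"errors"
	"fmt"
)

// Proxier, ipvsAvailable, newIPVSProxier and newIPTablesProxier are
// illustrative stand-ins for kube-proxy internals.
type Proxier interface{ SyncLoop() }

func ipvsAvailable() bool { return false } // e.g. required kernel modules missing

func newIPVSProxier() (Proxier, error)     { return nil, errors.New("not reached in this sketch") }
func newIPTablesProxier() (Proxier, error) { return nil, nil }

func newProxier(mode string) (Proxier, error) {
	if mode == "ipvs" {
		if !ipvsAvailable() {
			// Previously: silently fall back to newIPTablesProxier() here.
			// Now: fail hard, because the user explicitly asked for IPVS.
			return nil, fmt.Errorf("the IPVS proxier was requested but cannot be used")
		}
		return newIPVSProxier()
	}
	return newIPTablesProxier()
}

func main() {
	if _, err := newProxier("ipvs"); err != nil {
		fmt.Println("error:", err)
	}
}
```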
There was never any good reason to be doing fallback; this was
presumably erroneously added to parallel the iptables-to-userspace
fallback (which only existed because we had wanted iptables to be the
default but not all systems could support it).
In particular, if the user passed configuration options for ipvs, then
they presumably *didn't* pass configuration options for iptables, and
so even if the iptables proxy is able to run, it is likely to be
misconfigured.
Back when iptables was first made the default, there were
theoretically some users who wouldn't have been able to support it due
to having an old /sbin/iptables. But kube-proxy no longer does the
things that didn't work with old iptables, and we removed that check a
long time ago. There is also a check for a new-enough kernel version,
but it's checking for a feature which was added in kernel 3.6, and no
one could possibly be running Kubernetes with a kernel that old. So
the fallback code never actually falls back any more, and it should
just be removed.
This was implemented partly in server.go and partly in
server_others.go even though even the parts in server.go were totally
Linux-specific. Simplify things by putting it all in server_others.go
and getting rid of some unnecessary abstraction.
Fixes instances of #98213 (to ultimately complete #98213, linting is
required).
This commit fixes a few instances of a common mistake made when writing
parallel subtests or Ginkgo tests (basically any test in which the test
closure is dynamically created in a loop and the loop doesn't wait for
the test closure to complete).
I'm developing a very specific linter that detects this kind of mistake,
and these are the only violations it found in this repo (it's not
airtight, so there may be more).
In the case of Ginkgo tests, without this fix, only the last entry of
the looped-over slice is actually tested. In the case of parallel tests
I think it's the same problem, though maybe a bit different; if I
understand correctly it depends on the execution speed.
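The shape of the mistake and of the fix looks roughly like this (a generic illustration, not one of the affected tests):

```go
package example

import (
	"strings"
	"testing"
)

func TestUpper(t *testing.T) {
	testCases := []struct{ name, in, want string }{
		{name: "lower", in: "abc", want: "ABC"},
		{name: "mixed", in: "aBc", want: "ABC"},
	}
	for _, tc := range testCases {
		tc := tc // without this copy, every parallel subtest closure would
		// observe the last element of testCases once the loop has finished
		t.Run(tc.name, func(t *testing.T) {
			t.Parallel() // returns immediately; the loop does not wait for the closure
			if got := strings.ToUpper(tc.in); got != tc.want {
				t.Errorf("ToUpper(%q) = %q, want %q", tc.in, got, tc.want)
			}
		})
	}
}
```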
Waiting for the CI to confirm that the tests still pass even after this
fix. Since this is likely the first time those test cases are actually
executed, they may be buggy, or may be testing code that is buggy.
Another instance of this is in `test/e2e/storage/csi_mock_volume.go`, but
it is still failing, so it has been left out of this commit and will be
addressed in a separate one.