Fixes instances of #98213 (linting is required to ultimately complete #98213).
This commit fixes a few instances of a common mistake made when writing
parallel subtests or Ginkgo tests (basically any test in which the test
closure is dynamically created in a loop and the loop doesn't wait for
the closure to complete); a sketch of the pattern is shown below.
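A minimal sketch of the mistake and its fix, using a hypothetical table-driven test (the names `TestSomething` and `tc` are illustrative, not taken from this repo): before Go 1.22 the loop variable is reused across iterations, so a closure that outruns the loop only sees the last value unless it captures a copy.

```go
package example

import "testing"

func TestSomething(t *testing.T) {
	cases := []struct{ name, input string }{
		{name: "first", input: "a"},
		{name: "second", input: "b"},
	}
	for _, tc := range cases {
		tc := tc // capture a per-iteration copy; without it, parallel subtests all observe the last entry
		t.Run(tc.name, func(t *testing.T) {
			t.Parallel() // returns immediately; the closure body runs after the loop has moved on
			_ = tc.input // safe: refers to this iteration's copy
		})
	}
}
```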
I'm developing a very specific linter that detects this kind of mistake,
and these are the only violations it found in this repo (the linter is
not airtight, so there may be more).
In the case of Ginkgo tests, without this fix only the last entry
produced by the loop is actually tested. In the case of parallel tests I
think the problem is the same, although, if I understand correctly,
which entries are affected depends on execution speed.
Waiting for the CI to confirm the tests still pass after this fix: since
this is likely the first time those test cases are actually executed,
they may be buggy or may be testing code that is buggy.
Another instance of this is in `test/e2e/storage/csi_mock_volume.go`; it
is still failing, so it has been left out of this commit and will be
addressed in a separate one.
To preserve loose coupling, all kubectl commands need to be passed a
`RESTClientGetter` instead of a `cmdutil.Factory`.
This PR removes `cmdutil.Factory` usage in the `cluster-info` command
and instead passes `RESTClientGetter`.
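A hedged sketch of the idea, assuming an illustrative constructor and options struct (only `genericclioptions.RESTClientGetter` and `genericclioptions.IOStreams` are real types from k8s.io/cli-runtime): the command depends on the narrow getter rather than the full `cmdutil.Factory`.

```go
package clusterinfo

import (
	"github.com/spf13/cobra"

	"k8s.io/cli-runtime/pkg/genericclioptions"
)

// ClusterInfoOptions is an illustrative options struct; the real one may differ.
type ClusterInfoOptions struct {
	IOStreams genericclioptions.IOStreams
}

// NewCmdClusterInfo shows the narrower dependency: the command only needs a
// RESTClientGetter (ToRESTConfig, ToRESTMapper, ...), not a cmdutil.Factory.
func NewCmdClusterInfo(restClientGetter genericclioptions.RESTClientGetter, ioStreams genericclioptions.IOStreams) *cobra.Command {
	o := &ClusterInfoOptions{IOStreams: ioStreams}
	return &cobra.Command{
		Use:   "cluster-info",
		Short: "Display cluster information",
		RunE: func(cmd *cobra.Command, args []string) error {
			// Build only what is needed from the getter.
			config, err := restClientGetter.ToRESTConfig()
			if err != nil {
				return err
			}
			_ = config // illustrative: the real command uses this to build clients
			_ = o
			return nil
		},
	}
}
```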
The functionality provided by the finalURLTemplate is still used by
certain external projects to track the request latency for requests
performed to kube-apiserver.
Using a template of the URL, instead of the URL itself, prevents an
explosion of label cardinality in the exposed metrics, since it
aggregates URLs so that requests following the same pattern are reported
as the same.
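A simplified, hypothetical sketch of the idea (this is not the client-go implementation, and `templateRequestPath` is an invented helper): concrete namespace and pod names are replaced with placeholders, so requests that differ only in those values share a single metrics label value.

```go
package main

import (
	"fmt"
	"strings"
)

// templateRequestPath collapses per-object path segments into placeholders.
// It only handles namespaces and pods, as an oversimplified illustration.
func templateRequestPath(path string) string {
	parts := strings.Split(strings.Trim(path, "/"), "/")
	for i := range parts {
		if i > 0 && parts[i-1] == "namespaces" {
			parts[i] = "{namespace}" // e.g. "kube-system" -> "{namespace}"
		}
		if i > 0 && parts[i-1] == "pods" {
			parts[i] = "{name}" // e.g. "coredns-12345" -> "{name}"
		}
	}
	return "/" + strings.Join(parts, "/")
}

func main() {
	// Both requests collapse into the same low-cardinality label value:
	// /api/v1/namespaces/{namespace}/pods/{name}
	fmt.Println(templateRequestPath("/api/v1/namespaces/kube-system/pods/coredns-12345"))
	fmt.Println(templateRequestPath("/api/v1/namespaces/default/pods/nginx-67890"))
}
```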
This reverts commit bebf5a608f.
Signed-off-by: André Martins <aanm90@gmail.com>
Currently `kubectl apply` determines the correct patch type for a given
GVK by trying to register its schema; if that succeeds, it uses
strategic-merge-patch.
But the OpenAPI endpoint already stores which patch types are supported
for each GVK. This PR queries the OpenAPI endpoint to retrieve the patch
type, if OpenAPI is enabled. If it is not enabled, patch type
determination falls back to the conventional registration method.
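A hedged sketch of the decision described above, assuming hypothetical helpers (`openAPIResources`, `SupportsStrategicMergePatch`, `isRegistered` are invented names, not kubectl's actual abstractions); only the patch-type constants and GVK type are real apimachinery APIs.

```go
package example

import (
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
)

// openAPIResources is a hypothetical stand-in for the parsed OpenAPI document.
type openAPIResources interface {
	// SupportsStrategicMergePatch reports whether the given GVK advertises
	// application/strategic-merge-patch+json for its PATCH operation.
	SupportsStrategicMergePatch(gvk schema.GroupVersionKind) bool
}

// patchTypeFor prefers the OpenAPI document when it is available, and falls
// back to the conventional scheme-registration check otherwise.
func patchTypeFor(resources openAPIResources, isRegistered func(schema.GroupVersionKind) bool, gvk schema.GroupVersionKind) types.PatchType {
	if resources != nil {
		if resources.SupportsStrategicMergePatch(gvk) {
			return types.StrategicMergePatchType
		}
		return types.MergePatchType
	}
	// OpenAPI disabled: fall back to checking the scheme, as before.
	if isRegistered(gvk) {
		return types.StrategicMergePatchType
	}
	return types.MergePatchType
}
```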
Services which fail to sync as a result of a triggered node event are
currently never retried. The commit before this one gave an example of when
such services used to be retried, but that was neither efficient nor fully
correct. Moving to a workqueue for node events is a more modern approach to
syncing nodes, and placing any service keys that fail on the service
workqueue fixes the re-sync problem.
Also, now that we are using a node workqueue with a single goroutine servicing
items from that queue, we no longer need the `nodeSyncLock`, so that is cleaned
up from the controller as well.
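A minimal sketch of the pattern, assuming illustrative names (`Controller`, `nodeQueue`, `serviceQueue`, `syncNodes` are invented stand-ins, not the real service controller types): a single worker drains the node queue, and service keys whose sync fails are re-queued on the service workqueue so they get retried.

```go
package servicecontroller

import (
	"k8s.io/client-go/util/workqueue"
)

// Controller is an illustrative stand-in for the service controller.
type Controller struct {
	nodeQueue    workqueue.RateLimitingInterface
	serviceQueue workqueue.RateLimitingInterface
}

// runNodeWorker is the single goroutine draining the node queue; because only
// one worker exists, no nodeSyncLock-style mutex is needed.
func (c *Controller) runNodeWorker() {
	for c.processNextNodeItem() {
	}
}

func (c *Controller) processNextNodeItem() bool {
	key, quit := c.nodeQueue.Get()
	if quit {
		return false
	}
	defer c.nodeQueue.Done(key)

	// Re-sync all services against the current node set; any service that
	// failed goes back on the service workqueue and is retried from there.
	for _, svcKey := range c.syncNodes() {
		c.serviceQueue.AddRateLimited(svcKey)
	}
	c.nodeQueue.Forget(key)
	return true
}

// syncNodes is a hypothetical helper: it re-syncs every service against the
// current set of nodes and returns the keys of services that failed to sync.
func (c *Controller) syncNodes() []string { return nil }
```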
It dawned on me that `needsFullSync` can never be false. `needsFullSync` was
used to compare the set of nodes that existed the last time the node event
handler was triggered with the current set of nodes for this run. However, if
`triggerNodeSync` gets called, it is always because the set of nodes has
changed, either due to a condition changing on one node or a node being
added/removed. If `needsFullSync` can never be false, then a lot of things in
the service sync path were just spurious, for example `servicesToRetry` and
`knownHosts`. Essentially: if we ever need to `triggerNodeSync`, then the set
of nodes has somehow changed and we always need to re-sync all services.
Before this patch series there was a possibility for `needsFullSync` to be set
to false: `shouldSyncNode` and the predicates used to list nodes were not
aligned, specifically for unschedulable nodes. This means we could have been
triggered by a change to the schedulable state without actually computing any
diff between the old and new node sets. In that case, whenever there was a
change in schedulable state, we would just re-sync the service updates that
might have failed the last time we synced. But I believe this to be an
overlooked coincidence rather than something actually intended.
To preserve loose coupling, all kubectl commands need to be passed a
`RESTClientGetter` instead of a `cmdutil.Factory`.
This PR removes `cmdutil.Factory` usage and instead passes
`RESTClientGetter`, along with the required changes in the unit tests.
The priority is determined as follows (see the comparator sketch after the
list):
P0: A ClusterCIDR with a higher number of matching labels has the highest
priority.
P1: A ClusterCIDR whose cidrSet has fewer allocatable Pod CIDRs has
higher priority.
P2: A ClusterCIDR with a PerNodeMaskSize yielding fewer IPs has higher
priority.
P3: A ClusterCIDR whose label has a lower alphanumeric value has higher
priority.
P4: A ClusterCIDR whose cidrSet has a smaller IP address value has
higher priority.
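A hedged comparator sketch of the tie-breaking order above, using hypothetical field names (`MatchingLabels`, `AllocatableCIDRs`, `PerNodeMaskIPs`, `LabelValue`, `FirstIP`) rather than the real ClusterCIDR types:

```go
package clustercidr

// clusterCIDRCandidate is an illustrative, simplified view of a ClusterCIDR
// for ordering purposes; the real controller works on richer types.
type clusterCIDRCandidate struct {
	MatchingLabels   int    // P0: more matching labels wins
	AllocatableCIDRs int    // P1: fewer allocatable Pod CIDRs wins
	PerNodeMaskIPs   int    // P2: fewer IPs per node mask wins
	LabelValue       string // P3: lower alphanumeric label value wins
	FirstIP          string // P4: smaller IP address value wins (string compare is a simplification)
}

// less reports whether a has higher priority than b, applying P0..P4 in order.
func less(a, b clusterCIDRCandidate) bool {
	if a.MatchingLabels != b.MatchingLabels {
		return a.MatchingLabels > b.MatchingLabels // P0
	}
	if a.AllocatableCIDRs != b.AllocatableCIDRs {
		return a.AllocatableCIDRs < b.AllocatableCIDRs // P1
	}
	if a.PerNodeMaskIPs != b.PerNodeMaskIPs {
		return a.PerNodeMaskIPs < b.PerNodeMaskIPs // P2
	}
	if a.LabelValue != b.LabelValue {
		return a.LabelValue < b.LabelValue // P3
	}
	return a.FirstIP < b.FirstIP // P4
}
```

In practice, such a comparison would be plugged into a sort over the candidate ClusterCIDRs, with the first element being the one selected.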