sig-storage tests that delete pods need to wait for owned resources to
also be cleaned up before returning when resources such as ephemeral
inline volumes are in use. This was previously implemented by modifying
the pod delete call of the e2e framework, which negatively impacted
other tests. That change was reverted, and the logic has now been moved
to StopPodAndDependents, which is local to the sig-storage tests.
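As a minimal sketch of that idea (the helper name, polling intervals,
and use of client-go below are illustrative assumptions, not the actual
StopPodAndDependents implementation): delete the pod, then poll until
both the pod and any objects it owns, such as the PVCs backing
ephemeral inline volumes, are gone.

    // Illustrative only: delete a pod and wait until it and its owned
    // PVCs have disappeared.
    package storage

    import (
    	"context"
    	"time"

    	v1 "k8s.io/api/core/v1"
    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    func stopPodAndDependents(ctx context.Context, c kubernetes.Interface, pod *v1.Pod) error {
    	err := c.CoreV1().Pods(pod.Namespace).Delete(ctx, pod.Name, metav1.DeleteOptions{})
    	if err != nil && !apierrors.IsNotFound(err) {
    		return err
    	}
    	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
    		// The pod itself must be gone.
    		if _, err := c.CoreV1().Pods(pod.Namespace).Get(ctx, pod.Name, metav1.GetOptions{}); !apierrors.IsNotFound(err) {
    			return false, nil
    		}
    		// So must every PVC owned by it (e.g. ephemeral inline volumes).
    		pvcs, err := c.CoreV1().PersistentVolumeClaims(pod.Namespace).List(ctx, metav1.ListOptions{})
    		if err != nil {
    			return false, nil
    		}
    		for _, pvc := range pvcs.Items {
    			for _, owner := range pvc.OwnerReferences {
    				if owner.UID == pod.UID {
    					return false, nil
    				}
    			}
    		}
    		return true, nil
    	})
    }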
Signed-off-by: hasheddan <georgedanielmangum@gmail.com>
Azure's cloud provider VMSS VMs API accesses are mediated through
a single cache holding and refreshing all VMSS together.
Because of that we hit the VMSSVM.List API more often than we should:
an instance's cache miss or expiration should only require a single
VMSS re-list, while it is currently O(n) in the number of attached
Scale Sets.
Under hard pressure (clusters with many attached VMSS that can't all
be listed in one sequence of successive API calls) the controller
manager might get stuck trying to re-list everything from scratch,
aborting the whole operation, then retrying and re-triggering
API rate limits, affecting the whole Subscription.
This patch replaces the global VMSS VMs cache with per-VMSS VMs caches.
Refreshes (VMSS VMs lists) are scoped to the single relevant VMSS; under
severe throttling the various caches can be incrementally refreshed.
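A rough sketch of the per-VMSS caching shape (hypothetical type and
field names, not the cloud provider's actual code): each scale set gets
its own entry and refresh timestamp, so a cache miss only re-lists that
one VMSS, and a throttled refresh can fall back to the stale entry
instead of failing the whole operation.

    // Illustrative only: one VM cache entry per scale set.
    package azure

    import (
    	"sync"
    	"time"
    )

    type vmssVMCacheEntry struct {
    	vms     map[string]string // instanceID -> nodeName, simplified for illustration
    	refresh time.Time
    }

    type perVMSSCache struct {
    	mu      sync.Mutex
    	ttl     time.Duration
    	entries map[string]*vmssVMCacheEntry // keyed by scale set name
    	list    func(scaleSet string) (map[string]string, error)
    }

    func newPerVMSSCache(ttl time.Duration, list func(string) (map[string]string, error)) *perVMSSCache {
    	return &perVMSSCache{ttl: ttl, entries: map[string]*vmssVMCacheEntry{}, list: list}
    }

    // getVMs returns the cached VMs of one scale set, re-listing only
    // that scale set when its entry is missing or expired.
    func (c *perVMSSCache) getVMs(scaleSet string) (map[string]string, error) {
    	c.mu.Lock()
    	defer c.mu.Unlock()

    	entry, ok := c.entries[scaleSet]
    	if ok && time.Since(entry.refresh) < c.ttl {
    		return entry.vms, nil
    	}
    	vms, err := c.list(scaleSet) // VMSSVM.List scoped to a single VMSS
    	if err != nil {
    		if ok {
    			// Under throttling, keep serving the stale entry rather than failing.
    			return entry.vms, nil
    		}
    		return nil, err
    	}
    	c.entries[scaleSet] = &vmssVMCacheEntry{vms: vms, refresh: time.Now()}
    	return vms, nil
    }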
Signed-off-by: Benjamin Pineau <benjamin.pineau@datadoghq.com>
Clamp cpu.shares to the maximum value allowed by the kernel.
This is not an issue when using cgroupfs, as the kernel will make sure
the value is in range and clamp it automatically, but systemd has an
additional check that prevents the cgroup from being created when the
value is out of range.
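For illustration, the clamping itself amounts to something like the
sketch below; the constants match the 2..262144 range the kernel
accepts for cpu.shares, while the function name is hypothetical and not
the actual kubelet code path.

    // Illustrative only: keep cpu.shares inside the range the kernel
    // allows, so the systemd driver does not refuse to create the cgroup.
    package cm

    const (
    	minShares uint64 = 2
    	maxShares uint64 = 262144 // 1 << 18, the kernel's maximum
    )

    func clampCPUShares(shares uint64) uint64 {
    	if shares < minShares {
    		return minShares
    	}
    	if shares > maxShares {
    		return maxShares
    	}
    	return shares
    }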
Closes: https://github.com/kubernetes/kubernetes/issues/92855
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Previously, it was possible for reusable CPUs and reusable devices (i.e.
those previously consumed by init containers) to not be reused by
subsequent init containers or app containers if the TopologyManager was
enabled. This would happen because hint generation for the
TopologyManager did not take the reusable devices into account when
calculating its hints.
As such, it would sometimes:
1) Generate a hint for a different NUMA node, causing the CPUs and
devices to be allocated from that node instead of the one where the
reusable devices live; or
2) End up thinking there were not enough CPUs or devices to allocate
and throw a TopologyAffinity admission error.
This patch fixes the issue by ensuring that reusable CPUs and devices
are considered as part of TopologyHint generation. This functionality
is difficult to unit test since it spans multiple components, but an
e2e test will be added in a subsequent patch to exercise it.
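To illustrate the idea only (names and structure here are hypothetical
and do not mirror the device manager's actual hint generation code):
the candidate count per NUMA node now includes the reusable devices
alongside the free ones, so hints prefer the node where the reusable
devices live.

    // Illustrative only: count reusable devices when generating
    // per-NUMA hints.
    package devicemanager

    // device is a simplified stand-in for a schedulable device with
    // NUMA affinity.
    type device struct {
    	id   string
    	numa int
    }

    // hintCandidates reports, per NUMA node, whether the request could
    // be satisfied there, counting both free and reusable devices.
    func hintCandidates(free, reusable []device, request int) map[int]bool {
    	perNode := map[int]int{}
    	for _, d := range free {
    		perNode[d.numa]++
    	}
    	for _, d := range reusable { // reusable devices count too
    		perNode[d.numa]++
    	}
    	hints := map[int]bool{}
    	for node, count := range perNode {
    		hints[node] = count >= request
    	}
    	return hints
    }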