Calling WaitForPodTerminatedInNamespace after testFlexVolume is useless because
the client pod that it waits for always gets deleted by testVolumeClient:
0fcc3dbd55/test/e2e/framework/volume/fixtures.go (L541-L546)
Worse, because WaitForPodTerminatedInNamespace treats "not found" as "must keep
polling", the affected tests always kept waiting for the full 5 minutes:
Kubernetes e2e suite: [It] [sig-storage] Flexvolumes should be mountable when non-attachable 6m4s
The only reason these tests passed is that WaitForPodTerminatedInNamespace
used to return the "not found" API error. That is not guaranteed and is about
to change.
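To make the pitfall concrete, here is a minimal sketch of a wait helper that treats "not found" as terminal instead of retriable; `waitForPodTerminated` is an illustrative stand-in, not the framework's actual helper, and the interval/timeout values are assumptions:
```
import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodTerminated is an illustrative stand-in for the framework helper.
func waitForPodTerminated(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				// The pod is already gone (e.g. deleted by testVolumeClient);
				// stop immediately instead of waiting out the full timeout.
				return true, nil
			}
			if err != nil {
				return false, err
			}
			return pod.Status.Phase == v1.PodSucceeded || pod.Status.Phase == v1.PodFailed, nil
		})
}
```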
The `runPausePod` timeout was previously 1 minute, which appears to be too
short and was causing timeouts in some tests.
Switch to `f.Timeouts.PodStartShort`, the common timeout used to wait for pods
to start, which defaults to 5 minutes.
Also refactor to remove `runPausePodWithoutTimeout` and rely on `runPausePod`
instead, since we do not make the timeout directly customizable
(it can be changed via the test framework if desired).
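For context, a sketch of what the consolidated helper can look like; `createPausePod` and `pausePodConfig` stand in for the test's existing helpers, and exact e2e helper signatures vary between releases:
```
func runPausePod(f *framework.Framework, conf pausePodConfig) *v1.Pod {
	pod := createPausePod(f, conf)
	// Wait with the framework-wide short pod-start timeout (default 5 min)
	// instead of a hard-coded 1-minute deadline.
	framework.ExpectNoError(e2epod.WaitTimeoutForPodRunningInNamespace(
		f.ClientSet, pod.Name, pod.Namespace, f.Timeouts.PodStartShort))
	pod, err := f.ClientSet.CoreV1().Pods(pod.Namespace).Get(
		context.TODO(), pod.Name, metav1.GetOptions{})
	framework.ExpectNoError(err)
	return pod
}
```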
Signed-off-by: David Porter <david@porter.me>
Adds integration tests for the following scenarios with
MultiCIDRRangeAllocator enabled (a minimal setup sketch follows the list):
- ClusterCIDR is released when an associated node is deleted.
- ClusterCIDR is deleted while a node is still associated: validate the
  finalizer behavior and make sure the deleted ClusterCIDR is cleaned up
  once the associated node is deleted.
- ClusterCIDR marked as terminating due to deletion must not be used for
allocating Pod CIDRs to new nodes.
- Tie-break behavior when multiple ClusterCIDRs are eligible to
  allocate Pod CIDRs to a node.
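As mentioned above, a minimal setup sketch for such tests, assuming the networking.k8s.io/v1alpha1 ClusterCIDR API; the helper name, label key, and CIDR values are illustrative:
```
import (
	v1 "k8s.io/api/core/v1"
	networkingv1alpha1 "k8s.io/api/networking/v1alpha1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// makeClusterCIDR builds a ClusterCIDR that only matches nodes carrying the
// given label value, letting a test steer which range a node allocates from.
func makeClusterCIDR(name, ipv4 string, perNodeHostBits int32, labelValue string) *networkingv1alpha1.ClusterCIDR {
	return &networkingv1alpha1.ClusterCIDR{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: networkingv1alpha1.ClusterCIDRSpec{
			PerNodeHostBits: perNodeHostBits,
			IPv4:            ipv4,
			NodeSelector: &v1.NodeSelector{
				NodeSelectorTerms: []v1.NodeSelectorTerm{{
					MatchExpressions: []v1.NodeSelectorRequirement{{
						Key:      "node.kubernetes.io/group",
						Operator: v1.NodeSelectorOpIn,
						Values:   []string{labelValue},
					}},
				}},
			},
		},
	}
}
```
A test can then create two such objects with overlapping selectors and assert which range a new node's Pod CIDR comes from, or delete one while a node still holds its CIDR to exercise the finalizer path.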
Fixes the deletion of a ClusterCIDR object when a Node is associated with it
(i.e. has Pod CIDRs allocated from this ClusterCIDR). Currently the
ClusterCIDR finalizer is never cleaned up, because no reconciliation
happens after the associated Node has been deleted. This commit fixes
the issue by adding work items from all events to a worker queue and
reconciling until the delete is successful.
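The fix follows the standard client-go workqueue pattern; a sketch with illustrative names (`rangeAllocator` and `syncClusterCIDR` are stand-ins for the controller's actual type and reconcile function):
```
type rangeAllocator struct {
	// queue holds keys for ClusterCIDRs (and node events mapped to them)
	// that need reconciliation.
	queue workqueue.RateLimitingInterface
}

// processNextItem pulls one key off the queue and reconciles it. Failures
// are requeued with rate limiting, so a ClusterCIDR whose finalizer cannot
// be removed yet (a node still holds Pod CIDRs from it) is retried until
// the delete finally succeeds.
func (r *rangeAllocator) processNextItem(ctx context.Context) bool {
	key, quit := r.queue.Get()
	if quit {
		return false
	}
	defer r.queue.Done(key)

	if err := r.syncClusterCIDR(ctx, key.(string)); err != nil {
		r.queue.AddRateLimited(key)
		return true
	}
	r.queue.Forget(key)
	return true
}
```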
Using a `sync.Pool` to re-use the buffers that have to escape into the
predicate significantly reduces the number of allocations needed to run
the SchemaHas method, as shown by the following benchstat comparison:
```
> benchstat old.bench new.bench
name         old time/op    new time/op    delta
SchemaHas-8    11.0ms ± 0%     2.1ms ± 1%   -80.60%  (p=0.008 n=5+5)

name         old alloc/op   new alloc/op   delta
SchemaHas-8    37.5MB ± 0%     0.0MB ± 0%  -100.00%  (p=0.008 n=5+5)

name         old allocs/op  new allocs/op  delta
SchemaHas-8     73.1k ± 0%      0.0k       -100.00%  (p=0.008 n=5+5)
```
SchemaHas has to let the apiextensions.JSONSchemaProps pointer escape to the
heap, since it calls an arbitrary predicate function that could keep a copy
of the pointer (it shouldn't, but the compiler can't know that). This escape
results in a fair amount of memory allocation when installing many CRDs in a
row (about 3% of the memory allocated happens in this method).
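One way the pooling can look (a sketch, assuming predicates honor the contract of not retaining the pointer; `schemaHasRecurse` is an illustrative name): copy each visited schema into a pooled value, so the only pointer that escapes into the predicate is the reusable pooled one.
```
import (
	"sync"

	"k8s.io/apiextensions-apiserver/pkg/apis/apiextensions"
)

var schemaPool = sync.Pool{
	New: func() any { return new(apiextensions.JSONSchemaProps) },
}

// schemaHasRecurse copies *s into a pooled buffer before handing it to
// SchemaHas, so each visited node reuses the same heap object instead of
// forcing a fresh allocation. This is only safe because predicates must
// not retain the pointer after they return.
func schemaHasRecurse(s *apiextensions.JSONSchemaProps, pred func(s *apiextensions.JSONSchemaProps) bool) bool {
	if s == nil {
		return false
	}
	schema := schemaPool.Get().(*apiextensions.JSONSchemaProps)
	defer schemaPool.Put(schema)
	*schema = *s
	return SchemaHas(schema, pred)
}
```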