govmomi is the vSphere client library used by the vSphere cloud provider
and storage plugin. A bug in the SOAP client prevented storage classes
that use vSphere storage policies (aka SPBM) from working.
This bumps our dependency on vmware/govmomi from v0.20.1 to v0.20.3 to
pick up the fix in vmware/govmomi#1498.
Here are all the changes in the release:
https://github.com/vmware/govmomi/compare/v0.20.1...v0.20.3
Sometimes volume creation can succeed just as the request times out,
causing k8s to interpret it as a failure. When the request is retried,
we want it to succeed. In vSphere, however, the second create request
failed with "already exists" and never recovered.
This adds a check to the in-tree vsphere storage plugin: before
attempting to create a VMDK, the plugin first checks whether it already
exists.
Tested: manual only )-:
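For illustration, a minimal sketch of the check-before-create pattern, assuming hypothetical diskExists/createDisk helpers and a simplified VolumeSpec; the real plugin routes this through its own disk managers.

```go
package vspherevolume

import (
	"context"
	"fmt"
)

// VolumeSpec, diskExists, and createDisk are illustrative stand-ins, not the
// plugin's actual types or helpers.
type VolumeSpec struct {
	CapacityKB int
	DiskFormat string
}

func diskExists(ctx context.Context, vmdkPath string) (bool, error) { return false, nil }

func createDisk(ctx context.Context, vmdkPath string, spec VolumeSpec) (string, error) {
	return vmdkPath, nil
}

// CreateVolumeIdempotent checks for the VMDK before creating it, so a retry
// after a timed-out-but-actually-successful create returns success instead
// of failing with "already exists".
func CreateVolumeIdempotent(ctx context.Context, vmdkPath string, spec VolumeSpec) (string, error) {
	exists, err := diskExists(ctx, vmdkPath)
	if err != nil {
		return "", fmt.Errorf("checking for existing VMDK %q: %w", vmdkPath, err)
	}
	if exists {
		// A previous request already created the disk; treat this as success.
		return vmdkPath, nil
	}
	return createDisk(ctx, vmdkPath, spec)
}
```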
When I describe network policies, the output often says that pods are
isolated for egress connectivity because the policy that applies to them
has no egress rules. However, that only causes isolation if the Egress
policy type is explicitly set; otherwise the policy allows egress
traffic. The converse also holds: if only the Egress type is set,
describe incorrectly reports the pods as isolated for ingress traffic.
This PR fixes this by inferring the directions a policy applies to from
its PolicyTypes; if a policy does not apply to a direction, e.g. egress,
we print 'Not affecting egress traffic'.
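A minimal sketch of the direction inference using the standard networking/v1 types; the output strings here are illustrative except for the quoted 'Not affecting egress traffic' line, and the real describer handles ingress symmetrically.

```go
package describe

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
)

// appliesToEgress reports whether the policy affects egress at all, based on
// spec.policyTypes rather than on whether any egress rules are present.
func appliesToEgress(p *networkingv1.NetworkPolicy) bool {
	for _, t := range p.Spec.PolicyTypes {
		if t == networkingv1.PolicyTypeEgress {
			return true
		}
	}
	return false
}

func describeEgress(p *networkingv1.NetworkPolicy) {
	if !appliesToEgress(p) {
		fmt.Println("Not affecting egress traffic")
		return
	}
	if len(p.Spec.Egress) == 0 {
		// Egress is listed in policyTypes but there are no rules, so the
		// selected pods really are isolated for egress.
		fmt.Println("Allowing egress traffic: <none> (selected pods are isolated for egress connectivity)")
		return
	}
	// ...print the individual egress rules...
}
```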
- Added a test in thread_safe_store_test exercising the new behavior of deleting an index's backing string set once its size drops to zero (see the sketch after this list).
- Inverted the logic nesting in TestThreadSafeStoreDeleteRemovesEmptySetsFromIndex.
- Added a test case for an index that still has elements after a delete, where the set is expected to remain.
- Fixed the date.
- Addressed review nits from awprice.
- Fixed Bazel files.
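A minimal sketch of the behavior under test, assuming the index is shaped like client-go's map[string]sets.String; names are simplified and this is not the store's actual method.

```go
package cache

import "k8s.io/apimachinery/pkg/util/sets"

// deleteKeyFromIndex removes key from the set stored under indexValue and
// deletes the backing set from the index once it becomes empty, so empty
// string sets do not accumulate after deletes.
func deleteKeyFromIndex(index map[string]sets.String, indexValue, key string) {
	set, ok := index[indexValue]
	if !ok {
		return
	}
	set.Delete(key)
	if set.Len() == 0 {
		delete(index, indexValue)
	}
}
```

The added test case above covers the other branch: after deleting one element the set still has members, so it must stay in the index.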
b.N is adjusted by pkg/testing using an internal heuristic:
> The benchmark function must run the target code b.N times. During
> benchmark execution, b.N is adjusted until the benchmark function
> lasts long enough to be timed reliably.
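For reference, a conventional benchmark uses b.N only as the iteration count, e.g.:

```go
package bench

import "testing"

func doAuthenticate() {} // hypothetical unit of work being measured

func BenchmarkAuthenticate(b *testing.B) {
	// The testing package picks b.N; the body should do one unit of work
	// per iteration and not derive other parameters from b.N.
	for i := 0; i < b.N; i++ {
		doAuthenticate()
	}
}
```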
Using b.N to seed other parameters makes the benchmark behavior
difficult to reason about. Before this change, the thread count in the
CachedTokenAuthenticator benchmark was always 5000, and the batch size was
almost always 1 when I ran it locally. The SimpleCache and StripedCache
benchmarks had similarly strange scaling.
After modifying the CachedTokenAuthenticator benchmark to derive only the
iteration count from b.N, the batch chan became a point of contention and
I wasn't able to see any significant CPU consumption. This was fixed by
using ParallelBench to do the batching rather than a chan.
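As a sketch of the chan-free approach, assuming ParallelBench boils down to the standard b.RunParallel mechanism; the helper name and workload below are illustrative.

```go
package bench

import (
	"sync/atomic"
	"testing"
)

var tokensChecked int64

// authenticateOnce stands in for a single cached token authentication.
func authenticateOnce() {
	atomic.AddInt64(&tokensChecked, 1)
}

func BenchmarkAuthenticateParallel(b *testing.B) {
	// RunParallel splits b.N iterations across worker goroutines and feeds
	// each one through its own testing.PB, so there is no shared chan for
	// the workers to contend on.
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			authenticateOnce()
		}
	})
}
```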