Fixes the argument order used when calling testing.NewUpdateSubresourceAction
within the fake scale client. The swapped arguments caused the generated Action
to carry its Namespace and Subresource values in each other's fields.
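For reference, a minimal sketch of the corrected call order, assuming the
k8s.io/client-go/testing signature (resource, subresource, namespace, object);
the GVR and object here are illustrative:

```go
package main

import (
	"fmt"

	autoscalingv1 "k8s.io/api/autoscaling/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	clienttesting "k8s.io/client-go/testing"
)

func main() {
	gvr := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}
	scale := &autoscalingv1.Scale{}

	// Correct order: the subresource ("scale") comes before the namespace
	// ("default"). Because both are plain strings, swapping them compiles
	// fine but yields an action whose Namespace and Subresource fields
	// hold each other's values.
	action := clienttesting.NewUpdateSubresourceAction(gvr, "scale", "default", scale)
	fmt.Println(action.GetNamespace(), action.GetSubresource()) // default scale
}
```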
The test steps are as follows (a minimal sketch in Go follows the list):
1. Write some data
2. Take a snapshot
3. Write more data
4. Create a new volume from snapshot
5. Validate that the data is the old data
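A self-contained sketch of that flow; every helper here is a stand-in for the
real e2e framework calls, reduced to just enough state to show the control flow:

```go
package snapshot_test

import "testing"

// Stand-ins for the real e2e framework calls: a single "volume" and a
// snapshot store, just enough to exercise the five steps above.
var (
	volume    string
	snapshots = map[string]string{}
)

func writeData(data string)                  { volume = data }
func takeSnapshot(name string)               { snapshots[name] = volume }
func restoreFromSnapshot(name string) string { return snapshots[name] }

func TestSnapshotRestore(t *testing.T) {
	writeData("old")                        // 1. write some data
	takeSnapshot("snap")                    // 2. take a snapshot
	writeData("new")                        // 3. write more data
	restored := restoreFromSnapshot("snap") // 4. create a new volume from the snapshot
	if restored != "old" {                  // 5. validate the data is the old data
		t.Errorf("restored %q, want %q", restored, "old")
	}
}
```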
1. Use a Ginkgo BeforeEach to do common setup
2. Use a volume resource to create the SC, PV, and PVC, and handle their cleanup
3. Add SnapshotResource to handle creation and cleanup of the VS, VSC, and VSClass
4. Add a test pattern for the deletion policy: Delete vs Retain
5. Use the test pattern to determine test behaviour
6. Add a test pattern for preprovisioned snapshots (not implemented)
These changes consolidate common setup steps and stop resource
leaks by waiting for objects to be deleted. A sketch of what such a
test pattern could look like follows.
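The names below are illustrative, not the actual framework types; the point is
that one shared test body runs per pattern and branches on the pattern's fields:

```go
package storage

// DeletionPolicy mirrors the VolumeSnapshotClass deletion policy values.
type DeletionPolicy string

const (
	DeletionPolicyDelete DeletionPolicy = "Delete"
	DeletionPolicyRetain DeletionPolicy = "Retain"
)

// SnapshotTestPattern drives the shared test body: the same steps run for
// each pattern, and setup and assertions branch on the fields below.
type SnapshotTestPattern struct {
	Name           string
	DeletionPolicy DeletionPolicy
	Preprovisioned bool // pattern exists, but the test body is not implemented yet
}

var patterns = []SnapshotTestPattern{
	{Name: "dynamic snapshot (delete policy)", DeletionPolicy: DeletionPolicyDelete},
	{Name: "dynamic snapshot (retain policy)", DeletionPolicy: DeletionPolicyRetain},
	{Name: "pre-provisioned snapshot", DeletionPolicy: DeletionPolicyDelete, Preprovisioned: true},
}
```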
The iptables monitor was using iptables -L to list the chains,
without the -n option, so it was trying to do reverse DNS lookups.
A side effect was that it held the xtables lock during those lookups,
so other components could not use iptables.
We can use -S instead of -L -n to avoid this, since we only want
to check that the chain exists.
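A sketch of the resulting check; the table and chain name here are illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
)

// chainExists runs `iptables -w -t <table> -S <chain>`. -S prints rules in
// iptables-save format and never resolves addresses, so no reverse DNS
// lookups happen and the xtables lock is released immediately. The command
// fails if the chain does not exist, which is all we need to know.
func chainExists(table, chain string) bool {
	return exec.Command("iptables", "-w", "-t", table, "-S", chain).Run() == nil
}

func main() {
	fmt.Println(chainExists("mangle", "CANARY-CHAIN"))
}
```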
By creating CSIStorageCapacity objects in advance, we get the
FailedScheduling pod event if (and only if!) the test is expected to
fail because of insufficient or missing capacity. We can use that as an
indicator that waiting for pod start can be stopped early. However,
because we might not get to see the event under load, we still need
the timeout.
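One way to detect that early-exit condition, as a sketch assuming a client-go
clientset; the function name and selector values are illustrative:

```go
package capacity

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hasFailedScheduling reports whether a FailedScheduling event has been
// recorded for the pod. With CSIStorageCapacity objects created in advance,
// such an event signals that waiting for pod start can stop early; under
// load the event may never be observed, so callers still need a timeout.
func hasFailedScheduling(ctx context.Context, c kubernetes.Interface, ns, pod string) (bool, error) {
	events, err := c.CoreV1().Events(ns).List(ctx, metav1.ListOptions{
		FieldSelector: "involvedObject.name=" + pod + ",reason=FailedScheduling",
	})
	if err != nil {
		return false, err
	}
	return len(events.Items) > 0, nil
}
```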
Setting testParameters.scName had no effect because
StorageClassTest.StorageClassName isn't used anywhere. Instead, the
storage class name is generated dynamically.
This patch fixes the region regex used for listing zones.
The compute client's BasePath changed to compute.googleapis.com, but resource URIs were left as www.googleapis.com.
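A hedged illustration of the kind of expression involved (this is not the
exact regex from the patch): the match must extract the region name from a
resource URI under either host.

```go
package main

import (
	"fmt"
	"regexp"
)

// regionRE pulls the region name out of a GCE resource URI, accepting both
// hosts, since the client BasePath moved to compute.googleapis.com while
// some resource URIs still carry www.googleapis.com.
var regionRE = regexp.MustCompile(`https://(?:www|compute)\.googleapis\.com/compute/v1/projects/[^/]+/regions/([^/]+)`)

func regionFromURI(uri string) string {
	if m := regionRE.FindStringSubmatch(uri); m != nil {
		return m[1]
	}
	return ""
}

func main() {
	fmt.Println(regionFromURI("https://www.googleapis.com/compute/v1/projects/p/regions/us-central1"))
	fmt.Println(regionFromURI("https://compute.googleapis.com/compute/v1/projects/p/regions/us-central1"))
}
```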
DeprecatedMightBeMasterNode() has been marked as deprecated, and callers of
the function need an alternative.
In NewResourceUsageGatherer(), the function was called to determine whether
the specified pods were running on master nodes, so that the gatherer could
gather those pods' resource usage.
This adds nodeHasControlPlanePods() to determine whether the specified pods
are running on nodes that host control plane pods (kube-scheduler
and kube-controller-manager), and replaces callers of DeprecatedMightBeMasterNode()
with this new function as the better approach. A sketch of the idea follows.
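This is a sketch of the idea, not necessarily the exact implementation: a node
counts as control plane if kube-scheduler or kube-controller-manager pods run
on it. The prefix match assumes static pod names suffixed with the node name.

```go
package e2e

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeHasControlPlanePods reports whether the node runs control plane pods,
// checked here by pod name prefix in kube-system. Static pods are named
// <component>-<nodeName>, hence the prefix match rather than equality.
func nodeHasControlPlanePods(ctx context.Context, c kubernetes.Interface, nodeName string) (bool, error) {
	pods, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + nodeName,
	})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		if strings.HasPrefix(p.Name, "kube-scheduler-") ||
			strings.HasPrefix(p.Name, "kube-controller-manager-") {
			return true, nil
		}
	}
	return false, nil
}
```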