TestDynamicProvisioning had multiple ways of choosing additional
checks:
- the PvCheck callback
- the builtin write/read check controlled by a boolean
- the snapshot testing
Complicating matters further, that builtin write/read test had been
made more customizable with the new fields `NodeSelector` and
`ExpectUnschedulable`, which were only set by one particular test (see
https://github.com/kubernetes/kubernetes/pull/70941).
That is confusing and will only get more confusing when adding more
checks in the future. Therefore the write/read check is now a separate
function that must be enabled explicitly by tests that want to run it.
The snapshot checking is also defined only for the snapshot test.
The test that expects unschedulable pods now also checks for that
particular situation itself. Instead of testing with two pods that
both fail to start (the behavior inherited from the write/read check),
only a single unschedulable pod is created.
Because the node name, node selector and `ExpectUnschedulable` fields
were only used for checking, `StorageClassTest` can be simplified by
removing all of them.
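As a rough sketch of the new opt-in pattern (a self-contained toy
example; all names below are made up, not the actual identifiers from
this change):

    package main

    import "fmt"

    // storageClassTest no longer carries check-specific fields such as
    // NodeSelector or ExpectUnschedulable.
    type storageClassTest struct {
        name string
    }

    // testDynamicProvisioning only provisions and returns the volume.
    func testDynamicProvisioning(t storageClassTest) string {
        fmt.Println("provisioning volume for", t.name)
        return "pv-1"
    }

    // pvWriteReadCheck is a separate function now; tests that want the
    // write/read check have to call it explicitly.
    func pvWriteReadCheck(pv string) {
        fmt.Println("running write/read check on", pv)
    }

    func main() {
        pv := testDynamicProvisioning(storageClassTest{name: "default"})
        pvWriteReadCheck(pv)
    }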
Expect(err).NotTo(HaveOccurred()) is an anti-pattern in Ginkgo testing
because the resulting failure doesn't explain what failed (see
https://github.com/kubernetes/kubernetes/issues/34059). We now avoid
it by making the check function itself responsible for checking errors
and by including more information in those checks.
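Gomega's assertions take optional description arguments (a standard
part of its API) that get included in the failure message;
schematically (pod is just an illustration, and the usual
ginkgo/gomega dot-imports are assumed):

    // Anti-pattern: on failure, the report only says that some error
    // occurred, with no context.
    Expect(err).NotTo(HaveOccurred())

    // Better: say what the test was doing when it failed.
    Expect(err).NotTo(HaveOccurred(),
        "while waiting for pod %s to run", pod.Name)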
When the provisioning test gets stuck, the log fills up with messages
about waiting for a certain pod to run. The pod names are now
pvc-[volume-tester|snapshot]-[writer|reader] plus a random suffix
appended by Kubernetes, which makes it easier to see where the test is
stuck.
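One standard way to get such names is the Kubernetes GenerateName
mechanism (whether the tests use it or build the names themselves is
an implementation detail); roughly:

    // The API server appends a random suffix to GenerateName,
    // producing e.g. "pvc-volume-tester-writer-7x2bk".
    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            GenerateName: "pvc-volume-tester-writer-",
        },
        // ... spec omitted ...
    }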
There is no need to check for empty strings; we can directly
initialize structs with the value. The end result is the same when the
value is empty (an empty string in the struct).
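In Go the guard is redundant because assigning an empty string is
indistinguishable from leaving the field at its zero value; an
illustrative example (the type and field here are invented):

    // Before: guard against empty values.
    class := &storageClass{}
    if provisioner != "" {
        class.Provisioner = provisioner
    }

    // After: direct initialization. If provisioner is "", the field
    // simply stays the empty string, exactly as before.
    class = &storageClass{Provisioner: provisioner}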
This addresses the two remaining change requests from
https://github.com/kubernetes/kubernetes/pull/69036:
- replace "csi-hostpath-v0" name check with capability
check (cleaner that way)
- add feature tag to "should create snapshot with defaults" because
that is an alpha feature
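A sketch of the capability-based skip (the capability identifier is an
assumption here, not necessarily the real one in the tree):

    // Before: brittle, version-specific name check.
    if dInfo.Name == "csi-hostpath-v0" {
        framework.Skipf("Driver %q does not support snapshots - skipping", dInfo.Name)
    }

    // After: ask the driver what it declares it can do.
    if !dInfo.Capabilities[CapSnapshot] {
        framework.Skipf("Driver %q does not support snapshots - skipping", dInfo.Name)
    }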
Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
Even if snapshots are supported by the driver interface, the driver or
suite might still want to skip a particular test, so those checks
still need to be executed.
Currently JoinConfigFileAndDefaultsToInternalConfig does a couple of
different things depending on its parameters. It:
- loads a versioned JoinConfiguration from a YAML file.
- returns a defaulted JoinConfiguration, allowing for some overrides.
In order to make code more manageable, the following steps are taken:
- Introduce LoadJoinConfigurationFromFile, which loads a versioned
JoinConfiguration from a YAML file, defaults it (both dynamically and
statically), converts it to an internal JoinConfiguration and validates it.
- Introduce DefaultedJoinConfiguration, which returns a defaulted (both
dynamically and statically) and validated internal JoinConfiguration.
The possibility of overriding defaults via a versioned JoinConfiguration
is retained.
- Re-implement JoinConfigFileAndDefaultsToInternalConfig to use
LoadJoinConfigurationFromFile and DefaultedJoinConfiguration (see the
sketch after this list).
- Replace some calls to JoinConfigFileAndDefaultsToInternalConfig with calls to
either LoadJoinConfigurationFromFile or DefaultedJoinConfiguration where
appropriate.
- Rename JoinConfigFileAndDefaultsToInternalConfig to the more appropriate name
LoadOrDefaultJoinConfiguration.
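Schematically, the result looks like this (signatures and package
aliases simplified; not the literal code):

    // LoadOrDefaultJoinConfiguration is now a thin dispatcher over the
    // two new helpers.
    func LoadOrDefaultJoinConfiguration(cfgPath string, versionedCfg *kubeadmapiv1.JoinConfiguration) (*kubeadmapi.JoinConfiguration, error) {
        if cfgPath != "" {
            // Load from YAML, default (dynamically and statically),
            // convert to the internal type and validate.
            return LoadJoinConfigurationFromFile(cfgPath)
        }
        // Return a defaulted and validated internal configuration,
        // honoring overrides from the versioned config.
        return DefaultedJoinConfiguration(versionedCfg)
    }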
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
The previous version of the CudaVectorAdd test image can still be used
in our testing. A later change will extend the existing GPU e2e tests
to run pods with two containers: one with CudaVectorAdd version 1 and
the other with version 2, so that we can test both CUDA versions.
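The planned two-container pod could look something like this (a sketch
with invented variable names for the two image references):

    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{GenerateName: "cuda-vector-add-"},
        Spec: v1.PodSpec{
            Containers: []v1.Container{
                // Both CUDA versions exercised in a single pod.
                {Name: "cuda-vector-add-v1", Image: cudaVectorAddImageV1},
                {Name: "cuda-vector-add-v2", Image: cudaVectorAddImageV2},
            },
        },
    }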
`elbv2.AddTags` doesn't seem to support assigning the same set of
tags to multiple resources at once, leading to the following error:
Error adding tags after modifying load balancer targets:
"ValidationError: Only one resource can be tagged at a time"
This can happen when using an AWS NLB with multiple listeners pointing
to different node ports.
When k8s creates an NLB, it creates a target group per listener and
installs security group ingress rules allowing the traffic to reach
the k8s nodes.
Unfortunately, if those target groups are not tagged, k8s will not
manage them, thinking it is not the owner.
This small change assigns tags one resource at a time instead of
batching them as before.
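Sketched with the aws-sdk-go elbv2 API (variable names invented, error
handling abbreviated):

    // Tag the resources one at a time; tagging several ARNs in a
    // single AddTags call fails validation.
    for _, arn := range resourceARNs {
        _, err := elbv2Client.AddTags(&elbv2.AddTagsInput{
            ResourceArns: []*string{arn},
            Tags:         tags,
        })
        if err != nil {
            return fmt.Errorf("error adding tags to %q: %v", aws.StringValue(arn), err)
        }
    }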
Signed-off-by: Brice Figureau <brice@daysofwonder.com>