A bug was discovered in the `enforceRequirements` func for `upgrade plan`.
If a command line argument that specifies the target Kubernetes version is
supplied, the `ClusterConfiguration` returned by `enforceRequirements` will
have its `KubernetesVersion` field set to the new version.
If no version was specified, the returned `KubernetesVersion` points to the
currently installed one.
This remained undetected for a couple of reasons:
- It's only `upgrade plan` that allows the version command line argument to
be optional (in `upgrade apply` it's mandatory)
- Prior to 1.19, the implementation of `upgrade plan` did not make use of the
`KubernetesVersion` returned by `enforceRequirements`.
`upgrade plan` supports this optional command line argument to enable
air-gapped setups (as not specifying a version on the command line will end up
looking for the latest version over the Internet).
Hence, the only option is to make `enforceRequirements` consistent in the
`upgrade plan` case and always return the currently installed version in the
`KubernetesVersion` field.
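Below is a minimal sketch of the intended behavior, using hypothetical names
and a simplified type rather than the actual kubeadm code:

```go
package sketch

// ClusterConfiguration is a simplified stand-in for kubeadm's type; only the
// field relevant to this fix is shown.
type ClusterConfiguration struct {
	KubernetesVersion string
}

// enforceRequirementsSketch illustrates the fix: for "upgrade plan" the
// returned configuration always carries the currently installed version,
// regardless of any target version passed on the command line.
func enforceRequirementsSketch(isPlan bool, requestedVersion, installedVersion string) *ClusterConfiguration {
	cfg := &ClusterConfiguration{}
	if isPlan || requestedVersion == "" {
		// "upgrade plan" (and the no-argument case) reports the installed version.
		cfg.KubernetesVersion = installedVersion
		return cfg
	}
	// "upgrade apply" keeps the explicitly requested target version.
	cfg.KubernetesVersion = requestedVersion
	return cfg
}
```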
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
Current FakeClock::Reset only successfully resets the timer and
returns true when the timer has neither fired nor been stopped.
This is incorrect behavior, as real clocks allow for resetting
in both of those situations (as long as the channel has been drained).
This is useful in tests that use a fake clock to test
wait.BackoffManager, as the timer resets after each iteration of
Backoff() after the timer has already fired.
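A simplified sketch of the corrected semantics follows; the type is
hypothetical and not the real FakeClock implementation. Reset re-arms the
timer even after it has fired or been stopped, and only the return value
reflects whether it was still active, mirroring time.Timer.Reset:

```go
package sketch

import "time"

// fakeTimer is a simplified stand-in used to illustrate the Reset semantics.
type fakeTimer struct {
	target  time.Time      // when the timer is due to fire
	fired   bool           // the timer has already fired
	stopped bool           // the timer has been stopped
	ch      chan time.Time // drained by the caller before Reset, as with time.Timer
}

// Reset re-arms the timer regardless of whether it has fired or been stopped
// and returns whether the timer was still active before the call.
func (t *fakeTimer) Reset(now time.Time, d time.Duration) bool {
	wasActive := !t.fired && !t.stopped
	t.target = now.Add(d)
	t.fired = false
	t.stopped = false
	return wasActive
}
```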
A check that verifies that kubeadm does not "upgrade" to an older release was
overly optimized by skipping the upgrade if the new version is the same as
the old one. This somewhat makes sense, but it means that changes to any of
the etcd fields in the ClusterConfiguration won't be applied if the etcd
version is not changed.
Hence, this simple change ensures that the upgrade is done even when no version
change takes place.
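A hypothetical sketch of the adjusted check, using a simplified numeric
version for brevity; only a downgrade to an older release is rejected, while
re-applying the same version is allowed so that other etcd configuration
changes still take effect:

```go
package sketch

import "fmt"

// shouldUpgradeEtcd rejects only downgrades; equal versions are upgraded
// anyway so that non-version etcd fields from the ClusterConfiguration are
// still applied.
func shouldUpgradeEtcd(currentVersion, desiredVersion int) (bool, error) {
	if desiredVersion < currentVersion {
		return false, fmt.Errorf("refusing to downgrade etcd from version %d to %d", currentVersion, desiredVersion)
	}
	// desiredVersion >= currentVersion: proceed even when the versions match.
	return true, nil
}
```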
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
When trying to delete a load balancer after all Azure resources, including
the resource group, have already been deleted, we currently get an error
because the resource group can no longer be found.
Change the EnsureLoadBalancerDeleted function to not fail when the resource
group has already been deleted.
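A hypothetical illustration of the behavior change (the helper names are
made up; the real code inspects the Azure API error): if the resource group
is already gone, deleting the load balancer is treated as a no-op success
instead of an error.

```go
package sketch

import "strings"

// ensureLoadBalancerDeletedSketch treats a missing resource group as success:
// the group (and with it the load balancer) no longer exists, so there is
// nothing left to delete.
func ensureLoadBalancerDeletedSketch(deleteLB func() error) error {
	if err := deleteLB(); err != nil {
		if isResourceGroupNotFound(err) {
			return nil
		}
		return err
	}
	return nil
}

// isResourceGroupNotFound is a stand-in helper; real code would check the
// Azure error code rather than the message text.
func isResourceGroupNotFound(err error) bool {
	return strings.Contains(err.Error(), "ResourceGroupNotFound")
}
```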
Annotates the optional field `StabilizationWindowSeconds` with
`omitempty` such that it will be omitted when creating resources where
the field is not set.
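An illustrative excerpt of the kind of change (the enclosing struct name is
shortened to the one field): the optional pointer field carries the
`omitempty` JSON tag so it is dropped from serialized objects when unset.

```go
package sketch

// HPAScalingRules shows only the field touched by this change.
type HPAScalingRules struct {
	StabilizationWindowSeconds *int32 `json:"stabilizationWindowSeconds,omitempty"`
}
```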
Signed-off-by: Mikkel Oscar Lyderik Larsen <mikkel.larsen@zalando.de>
Since the SCTP module verification tests were added, their result may be affected by
running the SCTPConnectivity tests. For this reason, the SCTPConnectivity tests are now
marked as disruptive.
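A hypothetical example of how such an e2e spec is flagged (the test names
below are made up): adding the "[Disruptive]" tag to the spec name keeps it
out of the default, non-disruptive test runs.

```go
package sketch

import "github.com/onsi/ginkgo"

// The "[Disruptive]" tag in the spec name is what excludes these tests from
// the default e2e runs.
var _ = ginkgo.Describe("SCTPConnectivity [Disruptive]", func() {
	ginkgo.It("should allow SCTP traffic between pods", func() {
		// test body elided
	})
})
```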
Signed-off-by: Federico Paolinelli <fpaoline@redhat.com>