This PR removes the drive letter assignment during volume format and
mount, because drive letters can run out and cause failures during
mount. Instead, it uses the volume ID to mount the target directory.
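
For illustration, a minimal Go sketch of the approach, shelling out to
the standard Windows mountvol command to create a volume mount point at
the target directory; mountByVolumeID is a hypothetical helper, not the
actual code in this PR:

    package mounter

    import (
        "fmt"
        "os/exec"
    )

    // mountByVolumeID binds a volume (\\?\Volume{GUID}\) to a target
    // directory, avoiding the limited pool of drive letters (A-Z).
    func mountByVolumeID(volumeID, targetDir string) error {
        // mountvol <dir> <volume> creates a volume mount point at <dir>.
        out, err := exec.Command("mountvol", targetDir, volumeID).CombinedOutput()
        if err != nil {
            return fmt.Errorf("mountvol failed: %v, output: %q", err, string(out))
        }
        return nil
    }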
Introduce an optional sensitiveOptions parameter that allows sensitive
mount options to be passed separately from the normal mount options,
and ensure that the sensitiveOptions are never logged.
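
A minimal sketch of the idea (the function mirrors the shape of the new
MountSensitive API, but the body here is illustrative): sensitive
options travel in their own slice, and only a masked form of them ever
reaches the log.

    package mounter

    import (
        "log"
        "strings"
    )

    // maskedMountOptions renders normal options verbatim and replaces
    // each sensitive option with a placeholder.
    func maskedMountOptions(options, sensitiveOptions []string) string {
        all := make([]string, 0, len(options)+len(sensitiveOptions))
        all = append(all, options...)
        for range sensitiveOptions {
            all = append(all, "<masked>")
        }
        return strings.Join(all, ",")
    }

    // MountSensitive accepts sensitive options separately and never
    // logs them in the clear.
    func MountSensitive(source, target, fstype string, options, sensitiveOptions []string) error {
        log.Printf("mounting %s at %s (%s) with options [%s]",
            source, target, fstype, maskedMountOptions(options, sensitiveOptions))
        // ... perform the actual mount with options + sensitiveOptions ...
        return nil
    }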
Mount can fail for a variety of reasons, and the caller might want to
know why the mount failed. Untyped, string-based errors do not provide
enough granularity to make that determination.
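
A typed error lets callers branch on the failure mode instead of
parsing strings; the type and constant names below are illustrative,
not necessarily the exact set introduced here.

    package mounter

    import (
        "errors"
        "fmt"
    )

    // MountErrorType distinguishes mount failure modes.
    type MountErrorType string

    const (
        FilesystemMismatch MountErrorType = "FilesystemMismatch"
        FormatFailed       MountErrorType = "FormatFailed"
        UnknownMountError  MountErrorType = "UnknownMountError"
    )

    // MountError carries a machine-checkable type plus a message.
    type MountError struct {
        Type    MountErrorType
        Message string
    }

    func (e MountError) Error() string {
        return fmt.Sprintf("%s: %s", e.Type, e.Message)
    }

    // IsFilesystemMismatch shows the caller-side check.
    func IsFilesystemMismatch(err error) bool {
        var me MountError
        return errors.As(err, &me) && me.Type == FilesystemMismatch
    }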
Currently, FakeClock::Reset only successfully resets the timer and
returns true when the timer has neither fired nor been stopped. This is
incorrect behavior, as real clocks allow resetting in both of those
situations (as long as the channel has been drained). This matters for
tests that use a fake clock with wait.BackoffManager, where the timer
is reset on each iteration of Backoff() after it has already fired.
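
A sketch of the test pattern this change enables; the import path may
differ by release (the fake clock later moved to
k8s.io/utils/clock/testing):

    package clocktest

    import (
        "testing"
        "time"

        "k8s.io/apimachinery/pkg/util/clock"
    )

    func TestResetAfterTimerFires(t *testing.T) {
        fc := clock.NewFakeClock(time.Now())
        timer := fc.NewTimer(time.Second)

        fc.Step(time.Second) // fire the timer
        <-timer.C()          // drain the channel, as a real clock requires

        // With this change, Reset re-arms the timer even though it has
        // already fired, matching real clock semantics; this is the
        // pattern wait.BackoffManager relies on between iterations.
        timer.Reset(time.Second)
        fc.Step(time.Second)
        <-timer.C()
    }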
Pinning the kube-controller-manager and kube-scheduler kubeconfig files
to point to the control-plane-endpoint can be problematic during
immutable upgrades if one of these components ends up contacting an N-1
kube-apiserver:
https://kubernetes.io/docs/setup/release/version-skew-policy/#kube-controller-manager-kube-scheduler-and-cloud-controller-manager
For example, the components can send a request for a non-existent API
version.
Instead of using the CPE for these components, use the LocalAPIEndpoint.
This guarantees that the components talk to the local kube-apiserver,
which should be the same version, unless the user has explicitly
patched the manifests.
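
Roughly, the kubeconfig generation derives the server URL from the
local endpoint instead of the CPE. The types below are minimal
stand-ins that mirror kubeadm's field names; they are not the real
kubeadm API package.

    package main

    import "fmt"

    // APIEndpoint and InitConfiguration mimic the kubeadm types.
    type APIEndpoint struct {
        AdvertiseAddress string
        BindPort         int32
    }

    type InitConfiguration struct {
        LocalAPIEndpoint     APIEndpoint
        ControlPlaneEndpoint string
    }

    // serverURLForLocalComponents returns the URL written into the
    // kube-controller-manager and kube-scheduler kubeconfigs: always
    // the local kube-apiserver, never the ControlPlaneEndpoint.
    func serverURLForLocalComponents(cfg *InitConfiguration) string {
        return fmt.Sprintf("https://%s:%d",
            cfg.LocalAPIEndpoint.AdvertiseAddress, cfg.LocalAPIEndpoint.BindPort)
    }

    func main() {
        cfg := &InitConfiguration{
            LocalAPIEndpoint:     APIEndpoint{AdvertiseAddress: "192.168.0.10", BindPort: 6443},
            ControlPlaneEndpoint: "cluster.example.com:6443",
        }
        fmt.Println(serverURLForLocalComponents(cfg)) // https://192.168.0.10:6443
    }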
A check that verifies that kubeadm does not "upgrade" to an older
release was overly optimized: it skipped the upgrade when the new
version is the same as the old one. That somewhat makes sense, but it
means that changes to any of the etcd fields in the ClusterConfiguration
are not applied unless the etcd version changes.
Hence, this simple change ensures that the upgrade is performed even
when no version change takes place.
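
The corrected check, sketched with the semver helpers from
k8s.io/apimachinery (canApplyUpgrade is a hypothetical name): only a
strictly older target version is refused, so an equal version still
goes through the upgrade path.

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/util/version"
    )

    // canApplyUpgrade refuses only a downgrade; an equal version still
    // proceeds so ClusterConfiguration changes (e.g. the etcd fields)
    // are applied.
    func canApplyUpgrade(current, target *version.Version) error {
        if target.LessThan(current) {
            return fmt.Errorf("cannot downgrade from %s to %s", current, target)
        }
        return nil
    }

    func main() {
        cur := version.MustParseSemantic("1.19.0")
        tgt := version.MustParseSemantic("1.19.0")
        fmt.Println(canApplyUpgrade(cur, tgt)) // <nil>: equal versions still upgrade
    }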
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
When trying to delete a load balancer after all resources in Azure,
including the resource group, have been deleted, we currently get an
error because the resource group can no longer be found.
Change the EnsureLoadBalancerDeleted function to not fail when the
resource group has already been deleted.
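
Sketched below with a simplified stand-in for the Azure SDK's detailed
error type: a 404 for the resource group is treated as a successful
deletion, since a missing group implies the load balancer is gone too.

    package main

    import (
        "errors"
        "fmt"
        "net/http"
    )

    // azureError is a simplified stand-in for the SDK's detailed error.
    type azureError struct {
        StatusCode int
        Message    string
    }

    func (e *azureError) Error() string { return e.Message }

    // ignoreResourceGroupNotFound converts "resource group not found"
    // into success for deletion paths.
    func ignoreResourceGroupNotFound(err error) error {
        var ae *azureError
        if errors.As(err, &ae) && ae.StatusCode == http.StatusNotFound {
            return nil
        }
        return err
    }

    func main() {
        err := &azureError{StatusCode: http.StatusNotFound, Message: "ResourceGroupNotFound"}
        fmt.Println(ignoreResourceGroupNotFound(err)) // <nil>
    }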