When the client misses a delete event from the watcher, it will use the last state of the pod in the informer cache to produce a delete event. At that point, it's not clear if the pod was in the queues or the cache, so we should issue a deletion in both.
The pod could be assumed, so deleting assumed pods from the cache needs to work as well.
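To illustrate, a minimal sketch of the intended handling (the Cache/Queue interfaces and the handler name are stand-ins for illustration, not the scheduler's real types):

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/client-go/tools/cache"
    )

    // Cache and Queue stand in for the scheduler's internal cache and
    // scheduling queue; the real interfaces are larger.
    type Cache interface{ RemovePod(pod *v1.Pod) error }
    type Queue interface{ Delete(pod *v1.Pod) error }

    // onPodDelete handles a delete event from the informer. When the watcher
    // missed the real delete, the informer hands over a DeletedFinalStateUnknown
    // tombstone built from the last cached state. Because it is unknown whether
    // such a pod is still in the scheduling queue or in the (possibly assumed)
    // cache, it is removed from both.
    func onPodDelete(obj interface{}, c Cache, q Queue) {
        var pod *v1.Pod
        switch t := obj.(type) {
        case *v1.Pod:
            pod = t
        case cache.DeletedFinalStateUnknown:
            var ok bool
            if pod, ok = t.Obj.(*v1.Pod); !ok {
                return // unexpected tombstone content; a real handler would log this
            }
        default:
            return
        }
        _ = q.Delete(pod)    // remove it from the queue in case it was still pending
        _ = c.RemovePod(pod) // remove it from the cache, even if it was only assumed
    }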
Change-Id: I11ce9785de603924fc121fe2fa6ed5cb1e16922f
The addition of output/v1alpha2 made conversion-gen require
an explicit converter for:
kubeadm/v1beta2.BootstrapToken -> bootstraptoken/v1.BootstrapToken.
Add this converter under kubeadm/v1beta2.
Use the converter in output/v1alpha1.
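A hand-written converter of roughly the following shape satisfies conversion-gen; the function name follows its Convert_<src>_<Type>_To_<dst>_<Type> convention, but the import path and field list below are assumptions for illustration only:

    package v1beta2

    import (
        "k8s.io/apimachinery/pkg/conversion"

        // Assumed import path for the bootstraptoken/v1 API group.
        bootstraptokenv1 "k8s.io/kubernetes/cmd/kubeadm/app/apis/bootstraptoken/v1"
    )

    // Convert_v1beta2_BootstrapToken_To_v1_BootstrapToken copies a
    // kubeadm/v1beta2 BootstrapToken into its bootstraptoken/v1 counterpart.
    // The structs are assumed to have matching fields; adjust if the real
    // types differ.
    func Convert_v1beta2_BootstrapToken_To_v1_BootstrapToken(in *BootstrapToken, out *bootstraptokenv1.BootstrapToken, s conversion.Scope) error {
        out.Description = in.Description
        out.TTL = in.TTL
        out.Expires = in.Expires
        out.Usages = in.Usages
        out.Groups = in.Groups
        if in.Token != nil {
            out.Token = &bootstraptokenv1.BootstrapTokenString{ID: in.Token.ID, Secret: in.Token.Secret}
        }
        return nil
    }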
At first glance it looks like the test covers two pods sharing the same volume,
but the goal is actually the opposite: two pods with the same inline volume
definition should get separate volumes.
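For illustration, a sketch of that situation (assumed helper name, minimal and not fully valid claim spec): both pods use an identical inline generic ephemeral volume definition, yet each pod ends up with its own PVC because the claim name is derived from the pod name.

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // podWithInlineVolume builds a pod that declares one generic ephemeral
    // volume called "data". The inline definition is identical for every pod.
    func podWithInlineVolume(podName string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: podName},
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{{
                    Name: "data",
                    VolumeSource: corev1.VolumeSource{
                        Ephemeral: &corev1.EphemeralVolumeSource{
                            VolumeClaimTemplate: &corev1.PersistentVolumeClaimTemplate{
                                Spec: corev1.PersistentVolumeClaimSpec{
                                    AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
                                },
                            },
                        },
                    },
                }},
            },
        }
    }

    // Two pods built from the same definition still get separate PVCs, because
    // the PVC name is derived from the pod: "pod-a-data" and "pod-b-data".
    var podA, podB = podWithInlineVolume("pod-a"), podWithInlineVolume("pod-b")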
In various places, log messages were emitted as part of validation or even
before it (for example, by cli.PrintFlags). Those log messages did not use the
final logging configuration, for example they used text output instead of JSON
or did not run at the final verbosity. The latter became more obvious after
moving the setup of verbosity into logs.Options.Apply, because PrintFlags then
never printed anything anymore.
In order to force applications to deal with logging as soon as possible, the
Options.Validate and Options.Apply methods are now private. Applications should
use the new Options.ValidateAndApply directly after parsing.
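A sketch of the intended call site in a command, assuming Options.AddFlags takes a pflag.FlagSet and ValidateAndApply reports problems as an error (both assumptions):

    package main

    import (
        "fmt"
        "os"

        "github.com/spf13/pflag"
        "k8s.io/component-base/logs"
    )

    func main() {
        o := logs.NewOptions()
        fs := pflag.NewFlagSet("example", pflag.ExitOnError)
        o.AddFlags(fs)
        _ = fs.Parse(os.Args[1:])

        // Validate and apply the logging configuration immediately after flag
        // parsing, before anything else gets a chance to emit log messages
        // with a half-configured setup.
        if err := o.ValidateAndApply(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }

        // ... the rest of the command runs with the final logging configuration ...
    }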
These three options are the ones from logs.AddFlags that are not deprecated.
Therefore it makes sense to also make them available via the configuration file
support in the one command that currently supports it (the kubelet).
Long-term, all commands should use LoggingConfiguration, either with a
configuration file (as in kubelet) or via flags (kube-scheduler,
kube-apiserver, kube-controller-manager).
Short-term, both approaches have to be supported. Because the majority of
commands only use logs.AddFlags, that function continues to register the flags
by default and leaves that to Options.AddFlags only when explicitly requested.
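A rough sketch of the two registration paths (the option for skipping the logging-configuration flags is an assumed name, hedged accordingly):

    package main

    import (
        "github.com/spf13/pflag"
        "k8s.io/component-base/logs"
    )

    func registerFlags(fs *pflag.FlagSet, managesLoggingConfig bool) *logs.Options {
        if !managesLoggingConfig {
            // Default path, used by the majority of commands: logs.AddFlags
            // keeps registering everything, including the logging
            // configuration flags.
            logs.AddFlags(fs)
            return nil
        }

        // Opt-in path for commands that own a LoggingConfiguration themselves
        // (for example via a configuration file): skip those flags in AddFlags
        // and register them through Options.AddFlags instead. The option name
        // below is an assumption.
        logs.AddFlags(fs, logs.SkipLoggingConfigurationFlags())
        o := logs.NewOptions()
        o.AddFlags(fs)
        return o
    }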
A drive-by bug fix is included for log flushing: the periodic flushing only
called klog.Flush and therefore missed explicitly flushing the newer logr
backend. This bug was never present in any released version of Kubernetes, so
the fix is not submitted as a separate PR.
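The gist of the drive-by fix, as a hypothetical sketch (the flusher interface and how the logr sink is reached are assumptions, not the actual component-base code):

    package sketch

    import (
        "github.com/go-logr/logr"
        "k8s.io/klog/v2"
    )

    // flusher is assumed to be implemented by log sinks that buffer their
    // output, such as a buffered JSON backend.
    type flusher interface{ Flush() }

    // flushAll is what the periodic flush daemon is meant to call: it flushes
    // klog's own buffers and, in addition, any configured logr sink that
    // buffers. Calling only klog.Flush, as before, silently skipped the logr
    // backend.
    func flushAll(sink logr.LogSink) {
        klog.Flush()
        if f, ok := sink.(flusher); ok {
            f.Flush()
        }
    }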
This feature graduated to GA in v1.11 and is always enabled, so there is no
longer a need to check whether it is enabled.
Signed-off-by: Konstantin Misyutin <konstantin.misyutin@huawei.com>
With Kubernetes 1.19.X no longer supported, there are no supported branches using bazel. Developers should not have errant bazel files; catching them will be left to reviewers.