This improves the performance of text formatting and of ktesting.
Because ktesting no longer buffers messages by default, one unit
test needs to ask for that explicitly.
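A minimal sketch of that opt-in, assuming `BufferLogs` is the ktesting config option that re-enables capture (the exact option name and package layout here are assumptions, not a verbatim excerpt):

```go
package example_test // illustrative

import (
	"testing"

	"k8s.io/klog/v2/ktesting"
)

func TestWithBufferedOutput(t *testing.T) {
	// Capturing log entries in memory is now opt-in; a test that wants
	// to inspect the captured output has to enable it explicitly.
	logger := ktesting.NewLogger(t, ktesting.NewConfig(ktesting.BufferLogs(true)))
	logger.Info("this entry is captured for later inspection")
}
```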
The code as it stands works, but it is still complicated, and previous
versions had race
conditions (https://github.com/kubernetes/kubernetes/issues/108040). The
test now works without modifying global state. The individual test cases
could run in parallel, but this isn't done because they already complete
quickly (2 seconds).
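For illustration, the shape this enables - each case gets its own ktesting logger instead of mutating shared klog flags, so `t.Parallel()` could be added later if needed (test name and inputs below are placeholders):

```go
package example_test // illustrative

import (
	"testing"

	"k8s.io/klog/v2/ktesting"
)

func TestFormatting(t *testing.T) {
	testcases := map[string]string{ /* inputs elided */ }
	for name, input := range testcases {
		input := input
		t.Run(name, func(t *testing.T) {
			// Per-case logger and context: no global klog state is
			// touched, so the cases are independent of each other.
			_, ctx := ktesting.NewTestContext(t)
			_ = ctx
			_ = input
		})
	}
}
```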
A number of race conditions exist when pods are terminated early in
their lifecycle, because components in the kubelet that need to know
"no running containers" or "containers can't be started from now on"
were relying on outdated state.
Only the pod worker knows whether containers are being started for
a given pod, which is required to know when a pod is "terminated"
(no running containers, none coming). Move that responsibility and the
podKiller function into the pod workers, and have everything that was
killing the pod go through the UpdatePod loop. Split syncPod into
three phases - setup, terminate containers, and cleanup pod - and
have transitions between those methods be visible to other
components. After this change, to kill a pod you tell the pod worker
to UpdatePod({UpdateType: SyncPodKill, Pod: pod}).
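As a sketch of that call path (written as if inside the kubelet package so the pod worker types resolve; the helper itself is illustrative, only the UpdatePod call shape comes from the change described above):

```go
package kubelet // sketch

import (
	v1 "k8s.io/api/core/v1"
	kubetypes "k8s.io/kubernetes/pkg/kubelet/types"
)

// requestPodKill hands the kill request to the pod worker instead of
// stopping containers directly; the worker then drives the
// terminating -> terminated transition itself.
func requestPodKill(podWorkers PodWorkers, pod *v1.Pod) {
	podWorkers.UpdatePod(UpdatePodOptions{
		UpdateType: kubetypes.SyncPodKill,
		Pod:        pod,
	})
}
```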
Several places in the kubelet were incorrect about whether they
were handling terminating (should stop running, might still have
containers) or terminated (no running containers) pods. The pod worker
now exposes methods that allow other loops to know when to set up or
tear down resources based on the state of the pod. These methods remove
the possibility of race conditions by ensuring that a single component
is responsible for knowing each pod's allowed state, while other
components simply check, by UID, whether a pod is within that window.
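For example, a cleanup loop that previously tracked this state itself now asks the pod worker. The sketch below illustrates the kind of UID-keyed queries described above; the method and helper names are assumptions, not an exact API listing:

```go
package kubelet // sketch

import (
	"k8s.io/apimachinery/pkg/types"
)

// cleanupIfTerminated asks the pod worker whether a UID is still inside
// the "may have running containers" window instead of keeping its own
// copy of that state.
func cleanupIfTerminated(podWorkers PodWorkers, uid types.UID) {
	if podWorkers.CouldHaveRunningContainers(uid) {
		// Still terminating: leave volumes, cgroups, and pod
		// directories alone for now.
		return
	}
	// Terminated: safe to tear down resources for this UID.
	removePodResources(uid)
}

// removePodResources is a stand-in for the actual background cleanup.
func removePodResources(uid types.UID) { /* volumes, cgroups, directories */ }
```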
Removing containers no longer blocks final pod deletion in the
API server; it is handled as background cleanup. Node shutdown
no longer marks pods as failed, as they can be restarted in the
next step.
See https://docs.google.com/document/d/1Pic5TPntdJnYfIpBeZndDelM-AbS4FN9H2GTLFhoJ04/edit# for details
- Change the feature gate from alpha to beta and enable it by default
- Update a few of the unit tests because the feature gate is now enabled
  by default
- Small refactor in `nodeshutdown_manager` which adds a `featureEnabled`
  function (which checks the feature gate and that
  `kubeletConfig.ShutdownGracePeriod > 0`); see the sketch after this list
- Use `featureEnabled()` to exit early from the shutdown manager when the
  feature is disabled
- Update kubelet config defaulting to be explicit that
`ShutdownGracePeriod` and `ShutdownGracePeriodCriticalPods` default to
zero and update the godoc comments.
- Update defaults and add the featureGate tag in the API config godoc.
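A rough sketch of that helper, assuming the standard kubelet feature-gate plumbing; the receiver type and field name are stand-ins for the real manager struct:

```go
package nodeshutdown // sketch

import (
	"time"

	utilfeature "k8s.io/apiserver/pkg/util/feature"
	"k8s.io/kubernetes/pkg/features"
)

type managerStub struct {
	shutdownGracePeriodRequested time.Duration
}

// featureEnabled gates all shutdown-manager work: the feature gate must be
// on and the operator must have asked for a non-zero grace period.
func (m *managerStub) featureEnabled() bool {
	return utilfeature.DefaultFeatureGate.Enabled(features.GracefulNodeShutdown) &&
		m.shutdownGracePeriodRequested > 0
}
```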
With this feature now in beta and the feature gate enabled by default,
enabling graceful shutdown only requires configuring
`ShutdownGracePeriod` and `ShutdownGracePeriodCriticalPods` in the
kubelet config. If not configured, they default to zero, and
graceful shutdown is effectively disabled.
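In practice these are set in the kubelet configuration file; expressed against the config API types, an example looks like the following (the 30s/10s split is arbitrary, not a recommendation):

```go
package example // illustrative

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletconfig "k8s.io/kubernetes/pkg/kubelet/apis/config"
)

// exampleGracefulShutdownConfig enables graceful node shutdown with a 30s
// total grace period, of which the last 10s are reserved for critical pods.
func exampleGracefulShutdownConfig() kubeletconfig.KubeletConfiguration {
	return kubeletconfig.KubeletConfiguration{
		ShutdownGracePeriod:             metav1.Duration{Duration: 30 * time.Second},
		ShutdownGracePeriodCriticalPods: metav1.Duration{Duration: 10 * time.Second},
	}
}
```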
Implements KEP 2000, Graceful Node Shutdown:
https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2000-graceful-node-shutdown
* Add new FeatureGate `GracefulNodeShutdown` to control
enabling/disabling the feature
* Add two new KubeletConfiguration options
* `ShutdownGracePeriod` and `ShutdownGracePeriodCriticalPods`
* Add a new package, `nodeshutdown`, that implements the node shutdown
  manager
* The node shutdown manager uses the systemd inhibit package to
  create a systemd inhibitor lock, monitor for node shutdown events, and
  gracefully terminate pods upon a node shutdown (sketched below).
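The real manager wraps this in the inhibit helper package; the condensed sketch below goes straight at logind's D-Bus API with github.com/godbus/dbus/v5 to show the two moving parts it relies on: a "delay" inhibitor lock and the PrepareForShutdown signal. Helper names are illustrative and error handling is omitted.

```go
package nodeshutdown // sketch

import (
	"github.com/godbus/dbus/v5"
)

// acquireShutdownDelayLock takes a "delay" inhibitor lock so that logind
// waits (up to its configured InhibitDelayMaxSec) before completing a
// shutdown, giving the kubelet time to terminate pods.
func acquireShutdownDelayLock(conn *dbus.Conn) dbus.UnixFD {
	logind := conn.Object("org.freedesktop.login1", "/org/freedesktop/login1")
	var fd dbus.UnixFD
	logind.Call("org.freedesktop.login1.Manager.Inhibit", 0,
		"shutdown", "kubelet", "Kubelet needs time to handle node shutdown", "delay").Store(&fd)
	return fd
}

// watchForShutdown subscribes to logind's PrepareForShutdown signal; when it
// fires with true, the manager gracefully terminates pods and then releases
// the inhibitor lock by closing the returned file descriptor.
func watchForShutdown(conn *dbus.Conn) <-chan *dbus.Signal {
	conn.AddMatchSignal(
		dbus.WithMatchInterface("org.freedesktop.login1.Manager"),
		dbus.WithMatchMember("PrepareForShutdown"),
	)
	ch := make(chan *dbus.Signal, 1)
	conn.Signal(ch)
	return ch
}
```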