From 822b2d791c6ffe4ff41e32a34d60402526caf804 Mon Sep 17 00:00:00 2001
From: Mayank Kumar
Date: Tue, 20 Sep 2016 21:26:04 -0700
Subject: [PATCH] fix drain help and make consistent with doc

---
 pkg/kubectl/cmd/drain.go | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/pkg/kubectl/cmd/drain.go b/pkg/kubectl/cmd/drain.go
index 4b3e362f89c..a8c34f99231 100644
--- a/pkg/kubectl/cmd/drain.go
+++ b/pkg/kubectl/cmd/drain.go
@@ -127,13 +127,13 @@ var (
 	Drain node in preparation for maintenance.
 
 	The given node will be marked unschedulable to prevent new pods from arriving.
-	Then drain deletes all pods except mirror pods (which cannot be deleted through
+	The 'drain' deletes all pods except mirror pods (which cannot be deleted through
 	the API server). If there are DaemonSet-managed pods, drain will not proceed
 	without --ignore-daemonsets, and regardless it will not delete any
 	DaemonSet-managed pods, because those pods would be immediately replaced by the
 	DaemonSet controller, which ignores unschedulable markings. If there are any
-	pods that are neither mirror pods nor managed--by ReplicationController,
-	ReplicaSet, DaemonSet or Job--, then drain will not delete any pods unless you
+	pods that are neither mirror pods nor managed by ReplicationController,
+	ReplicaSet, DaemonSet or Job, then drain will not delete any pods unless you
 	use --force.
 
 	When you are ready to put the node back into service, use kubectl uncordon, which
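The help text touched by this patch describes the drain/uncordon maintenance workflow. As a rough sketch of that workflow (assuming a reachable cluster and a hypothetical node named `node-1`, which is not part of the patch), it looks like:

```shell
# Hypothetical maintenance workflow; "node-1" is an assumed node name.
# Mark the node unschedulable and delete its pods; DaemonSet-managed pods
# are left in place, which is why --ignore-daemonsets is needed for drain
# to proceed when such pods exist:
kubectl drain node-1 --ignore-daemonsets

# ...perform maintenance on the machine...

# Mark the node schedulable again so new pods may be placed on it:
kubectl uncordon node-1
```

Per the help text above, pods that are neither mirror pods nor managed by a ReplicationController, ReplicaSet, DaemonSet, or Job would additionally require `--force` to be deleted.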