From 1291b1697cb019042c3953eaa32e90456417db18 Mon Sep 17 00:00:00 2001
From: Chao Xu
Date: Mon, 18 May 2015 10:47:47 -0700
Subject: [PATCH] update docs/cluster_management.md to v1beta3

---
 docs/cluster_management.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/cluster_management.md b/docs/cluster_management.md
index 7b11fd7643c..59d78a29857 100644
--- a/docs/cluster_management.md
+++ b/docs/cluster_management.md
@@ -44,7 +44,7 @@ pods are replicated, upgrades can be done without special coordination.
 
 If you want more control over the upgrading process, you may use the following workflow:
   1. Mark the node to be rebooted as unschedulable:
-    `kubectl update nodes $NODENAME --patch='{"apiVersion": "v1beta1", "unschedulable": true}'`.
+    `kubectl update nodes $NODENAME --patch='{"apiVersion": "v1beta3", "spec": {"unschedulable": true}}'`.
     This keeps new pods from landing on the node while you are trying to get them off.
   1. Get the pods off the machine, via any of the following strategies:
     1. wait for finite-duration pods to complete
@@ -53,7 +53,7 @@ If you want more control over the upgrading process, you may use the following w
     1. for pods with no replication controller, you need to bring up a new copy of the pod, and assuming it is not part of a service, redirect clients to it.
   1. Work on the node
   1. Make the node schedulable again:
-    `kubectl update nodes $NODENAME --patch='{"apiVersion": "v1beta1", "unschedulable": false}'`.
+    `kubectl update nodes $NODENAME --patch='{"apiVersion": "v1beta3", "spec": {"unschedulable": false}}'`.
     If you deleted the node's VM instance and created a new one, then a new schedulable node resource will
     be created automatically when you create a new VM instance (if you're using a cloud provider that supports
     node discovery; currently this is only GCE, not including CoreOS on GCE using kube-register). See [Node](node.md).
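
The two commands touched by this patch form a cordon/uncordon cycle around node maintenance. For reference, a minimal sketch of that full workflow under the v1beta3 API; the node name `node-1` is a hypothetical example, and the pod-draining step between the two commands is elided:

```sh
# Hypothetical example: name of the node being taken down for maintenance.
NODENAME=node-1

# Mark the node unschedulable so no new pods land on it
# (note the v1beta3 "spec" wrapper introduced by this patch).
kubectl update nodes $NODENAME --patch='{"apiVersion": "v1beta3", "spec": {"unschedulable": true}}'

# ...get the pods off the machine and work on the node...

# Mark the node schedulable again once maintenance is done.
kubectl update nodes $NODENAME --patch='{"apiVersion": "v1beta3", "spec": {"unschedulable": false}}'
```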