Merge pull request #16704 from janetkuo/doc-kubectl-edit

Auto commit by PR queue bot

commit 20a99af00e
You’ve deployed your application and exposed it via a service. Now what? Kubernetes provides a number of tools to help you manage your application deployment, including scaling and updating.

- [Using labels effectively](#using-labels-effectively)
- [Canary deployments](#canary-deployments)
- [Updating labels](#updating-labels)
- [Updating annotations](#updating-annotations)
- [Scaling your application](#scaling-your-application)
- [Updating your application without a service outage](#updating-your-application-without-a-service-outage)
- [In-place updates of resources](#in-place-updates-of-resources)
  - [kubectl patch](#kubectl-patch)
  - [kubectl edit](#kubectl-edit)
- [Using configuration files](#using-configuration-files)
- [Disruptive updates](#disruptive-updates)
- [What's next?](#whats-next)

Resource creation isn’t the only operation that `kubectl` can perform in bulk. It can also extract resource names from configuration files in order to perform other operations, in particular to delete the same resources you created:

```console
$ kubectl delete -f ./nginx/
replicationcontrollers "my-nginx" deleted
services "my-nginx-svc" deleted
```

In the case of just two resources, it’s also easy to specify both on the command line using the resource/name syntax:
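
A minimal sketch using the two resources above:

```console
$ kubectl delete replicationcontrollers/my-nginx services/my-nginx-svc
```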

For larger numbers of resources, you can use labels to filter them; the selector is specified with `-l`:

```console
$ kubectl delete all -lapp=nginx
replicationcontrollers "my-nginx" deleted
services "my-nginx-svc" deleted
```

Because `kubectl` outputs resource names in the same syntax it accepts, it’s easy to chain operations using `$()` or `xargs`:

```console
$ kubectl get $(kubectl create -f ./nginx/ -o name | grep my-nginx)
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR    REPLICAS
my-nginx     nginx          nginx      app=nginx   2
NAME           LABELS      SELECTOR    IP(S)     PORT(S)
```
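
The same chaining works with `xargs`; for instance, a roughly equivalent sketch:

```console
$ kubectl create -f ./nginx/ -o name | grep my-nginx | xargs kubectl get
```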

The labels allow us to slice and dice our resources along any dimension specified by a label:

```console
$ kubectl create -f ./guestbook-fe.yaml -f ./redis-master.yaml -f ./redis-slave.yaml
replicationcontrollers "guestbook-fe" created
replicationcontrollers "guestbook-redis-master" created
replicationcontrollers "guestbook-redis-slave" created
$ kubectl get pods -Lapp -Ltier -Lrole
NAME                           READY     STATUS    RESTARTS   AGE       APP         TIER       ROLE
guestbook-fe-4nlpb             1/1       Running   0          1m        guestbook   frontend   <none>
guestbook-fe-ght6d             1/1       Running   0          1m        guestbook   frontend   <none>
guestbook-fe-jpy62             1/1       Running   0          1m        guestbook   frontend   <none>
guestbook-redis-master-5pg3b   1/1       Running   0          1m        guestbook   backend    master
guestbook-redis-slave-2q2yf    1/1       Running   0          1m        guestbook   backend    slave
guestbook-redis-slave-qgazl    1/1       Running   0          1m        guestbook   backend    slave
my-nginx-divi2                 1/1       Running   0          29m       nginx       <none>     <none>
my-nginx-o0ef1                 1/1       Running   0          29m       nginx       <none>     <none>
$ kubectl get pods -lapp=guestbook,role=slave
NAME                          READY     STATUS    RESTARTS   AGE
guestbook-redis-slave-2q2yf   1/1       Running   0          3m
```

## Updating labels

Sometimes existing pods and other resources need to be relabeled before creating new resources. This can be done with `kubectl label`. For example:

```console
$ kubectl label pods -lapp=nginx tier=fe
pod "my-nginx-v4-9gw19" labeled
pod "my-nginx-v4-hayza" labeled
pod "my-nginx-v4-mde6m" labeled
pod "my-nginx-v4-sh6m8" labeled
pod "my-nginx-v4-wfof4" labeled
$ kubectl get pods -lapp=nginx -Ltier
NAME                READY     STATUS    RESTARTS   AGE       TIER
my-nginx-v4-9gw19   1/1       Running   0          15m       fe
my-nginx-v4-sh6m8   1/1       Running   0          19m       fe
my-nginx-v4-wfof4   1/1       Running   0          16m       fe
```
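
To change the value of a label that is already set, add `--overwrite`; a sketch using one of the pods above:

```console
$ kubectl label --overwrite pods my-nginx-v4-9gw19 tier=frontend
```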

For more information, please see the [labels](labels.md) and [kubectl label](kubectl/kubectl_label.md) documents.

## Updating annotations

Sometimes you want to attach annotations to resources. Annotations are arbitrary non-identifying metadata for retrieval by API clients such as tools and libraries. This can be done with `kubectl annotate`. For example:

```console
$ kubectl annotate pods my-nginx-v4-9gw19 description='my frontend running nginx'
$ kubectl get pods my-nginx-v4-9gw19 -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    description: my frontend running nginx
...
```
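
To remove an annotation, append a `-` to its key; a sketch using the same pod:

```console
$ kubectl annotate pods my-nginx-v4-9gw19 description-
```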

For more information, please see the [annotations](annotations.md) and [kubectl annotate](kubectl/kubectl_annotate.md) documents.

## Scaling your application

When load on your application grows or shrinks, it’s easy to scale with `kubectl`. For instance, to increase the number of nginx replicas from 2 to 3, do:

```console
$ kubectl scale rc my-nginx --replicas=3
replicationcontroller "my-nginx" scaled
$ kubectl get pods -lapp=nginx
NAME             READY     STATUS    RESTARTS   AGE
my-nginx-1jgkf   1/1       Running   0          3m
my-nginx-divi2   1/1       Running   0          1h
my-nginx-o0ef1   1/1       Running   0          1h
```

To have the system automatically choose the number of nginx replicas as needed, ranging from 1 to 3, do:

```console
$ kubectl autoscale rc my-nginx --min=1 --max=3
replicationcontroller "my-nginx" autoscaled
$ kubectl get pods -lapp=nginx
NAME             READY     STATUS    RESTARTS   AGE
my-nginx-1jgkf   1/1       Running   0          3m
my-nginx-divi2   1/1       Running   0          3m
$ kubectl get horizontalpodautoscaler
NAME      REFERENCE                           TARGET    CURRENT     MINPODS   MAXPODS   AGE
nginx     ReplicationController/nginx/scale   80%       <waiting>   1         3         1m
```
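
The target shown above is CPU utilization; `kubectl autoscale` also accepts an explicit `--cpu-percent` flag. A sketch (the value here is an assumption):

```console
$ kubectl autoscale rc my-nginx --min=1 --max=3 --cpu-percent=80
```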

For more information, please see the [kubectl scale](kubectl/kubectl_scale.md), [kubectl autoscale](kubectl/kubectl_autoscale.md) and [horizontal pod autoscaler](horizontal-pod-autoscaling/README.md) documents.

## Updating your application without a service outage

You’ll eventually need to update your deployed application, typically by specifying a new image or image tag, as in the canary deployment scenario above. `kubectl` supports several update operations, each of which is applicable to different scenarios.

To update to version 1.9.1, you can use `kubectl rolling-update --image`:

```console
$ kubectl rolling-update my-nginx --image=nginx:1.9.1
Created my-nginx-ccba8fbd8cc8160970f63f9a2696fc46
```

In another window, you can see that `kubectl` added a `deployment` label to the pods, whose value is a hash of the configuration, to distinguish the new pods from the old:

```console
$ kubectl get pods -lapp=nginx -Ldeployment
NAME                                              READY     STATUS    RESTARTS   AGE       DEPLOYMENT
my-nginx-1jgkf                                    1/1       Running   0          1h        2d1d7a8f682934a254002b56404b813e
my-nginx-ccba8fbd8cc8160970f63f9a2696fc46-k156z   1/1       Running   0          1m        ccba8fbd8cc8160970f63f9a2696fc46
my-nginx-ccba8fbd8cc8160970f63f9a2696fc46-v95yh   1/1       Running   0          35s       ccba8fbd8cc8160970f63f9a2696fc46
my-nginx-divi2                                    1/1       Running   0          2h        2d1d7a8f682934a254002b56404b813e
my-nginx-q6all                                    1/1       Running   0
```

`kubectl rolling-update` prints its progress as it runs:

```console
Scaling up my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 from 0 to 3, scaling down my-nginx from 3 to 0 (keep 3 pods available, don't exceed 4 pods)
Scaling my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 up to 1
Scaling my-nginx down to 2
Scaling my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 up to 2
Scaling my-nginx down to 1
Scaling my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 up to 3
Scaling my-nginx down to 0
Update succeeded. Deleting old controller: my-nginx
Renaming my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 to my-nginx
replicationcontroller "my-nginx" rolling updated
```
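
The pace of the rollout can be tuned with `--update-period`, the time to wait between updating pods (the default is on the order of a minute). A sketch:

```console
$ kubectl rolling-update my-nginx --image=nginx:1.9.1 --update-period=10s
```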
If you encounter a problem, you can stop the rolling update midway and revert to the previous version using `--rollback`:

```console
$ kubectl rolling-update my-nginx --rollback
Setting "my-nginx" replicas to 1
Continuing update with existing controller my-nginx.
Scaling up nginx from 1 to 1, scaling down my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
Scaling my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 down to 0
Update succeeded. Deleting my-nginx-ccba8fbd8cc8160970f63f9a2696fc46
replicationcontroller "my-nginx" rolling updated
```

This is one example where the immutability of containers is a huge asset.
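
If you need to change more than just the image (for example, to update the command, ports, or resource requirements), you can instead write a new replication controller configuration, `./nginx-rc.yaml`, with a new name and a distinguishing label value. A minimal sketch (the `deployment: v4` label and the image tag are assumptions):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx-v4
spec:
  replicas: 5
  selector:
    app: nginx
    deployment: v4
  template:
    metadata:
      labels:
        app: nginx
        deployment: v4
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
        ports:
        - containerPort: 80
```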

Then roll it out:

```console
$ kubectl rolling-update my-nginx -f ./nginx-rc.yaml
Created my-nginx-v4
Scaling up my-nginx-v4 from 0 to 5, scaling down my-nginx from 4 to 0 (keep 4 pods available, don't exceed 5 pods)
Scaling my-nginx-v4 up to 1
Scaling my-nginx down to 3
Scaling my-nginx-v4 up to 2
Scaling my-nginx down to 2
Scaling my-nginx-v4 up to 3
Scaling my-nginx down to 1
Scaling my-nginx-v4 up to 4
Scaling my-nginx down to 0
Scaling my-nginx-v4 up to 5
Update succeeded. Deleting old controller: my-nginx
replicationcontroller "my-nginx-v4" rolling updated
```

You can also run the [update demo](update-demo/) to see a visual representation of the rolling update process.
## In-place updates of resources

Sometimes it’s necessary to make narrow, non-disruptive updates to resources you’ve created. For instance, you might want to update the image of one of your pod’s containers.

### kubectl patch

Suppose you want to fix a typo in a container’s image name. One way to do that is with `kubectl patch`:

```console
# Suppose you have a pod with a container named "nginx" whose image is "nignx" (a typo).
# Use the container name "nginx" as the key to update the image from "nignx" to "nginx".
$ kubectl get pod my-nginx-1jgkf -o yaml
apiVersion: v1
kind: Pod
...
spec:
  containers:
  - image: nignx
    name: nginx
...
$ kubectl patch pod my-nginx-1jgkf -p '{"spec":{"containers":[{"name":"nginx","image":"nginx"}]}}'
"my-nginx-1jgkf" patched
$ kubectl get pod my-nginx-1jgkf -o yaml
apiVersion: v1
kind: Pod
...
spec:
  containers:
  - image: nginx
    name: nginx
...
```

The patch is specified using JSON. It isn’t limited to the image; for example, the description added earlier with `kubectl annotate` could also have been set by patching the controller’s annotations:

```console
$ kubectl patch rc my-nginx-v4 -p '{"metadata": {"annotations": {"description": "my frontend running nginx"}}}'
$ kubectl get rc my-nginx-v4 -o yaml
apiVersion: v1
kind: ReplicationController
metadata:
  annotations:
    description: my frontend running nginx
...
```

The system ensures that you don’t clobber changes made by other users or components by confirming that the `resourceVersion` doesn’t differ from the version you edited. If you want to update regardless of other changes, remove the `resourceVersion` field when you edit the resource. However, if you do this, don’t use your original configuration file as the source since additional fields most likely were set in the live state.

For more information, please see the [kubectl patch](kubectl/kubectl_patch.md) document.

### kubectl edit

Alternatively, you may also update resources with `kubectl edit`:

```console
$ kubectl edit pod my-nginx-1jgkf
```

This is equivalent to first `get` the resource, edit it in a text editor, and then `replace` it with the updated version:

```console
$ kubectl get pod my-nginx-1jgkf -o yaml > /tmp/nginx.yaml
$ vi /tmp/nginx.yaml
# make some edits, then save the file
$ kubectl replace -f /tmp/nginx.yaml
pod "my-nginx-1jgkf" replaced
$ rm /tmp/nginx.yaml
```

This allows you to make more significant changes more easily. Note that you can specify the editor with your `EDITOR` or `KUBE_EDITOR` environment variables.
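
For example, a sketch using an alternative editor:

```console
$ KUBE_EDITOR="nano" kubectl edit pod my-nginx-1jgkf
```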

For more information, please see the [kubectl edit](kubectl/kubectl_edit.md) document.

## Using configuration files
A more disciplined alternative to patch and edit is `kubectl apply`.
With apply, you can keep a set of configuration files in source control, where they can be maintained and versioned along with the code for the resources they configure. Then, when you're ready to push configuration changes to the cluster, you can run `kubectl apply`.
This command will compare the version of the configuration that you're pushing with the previous version and apply the changes you've made, without overwriting any automated changes to properties you haven't specified.

```console
$ kubectl apply -f ./nginx-rc.yaml
replicationcontroller "my-nginx-v4" configured
```

As shown in the example above, the configuration used with `kubectl apply` is the same as the one used with `kubectl replace`. However, instead of deleting the existing resource and replacing it with a new one, `kubectl apply` modifies the configuration of the existing resource.
Note that `kubectl apply` attaches an annotation to the resource in order to determine the changes to the configuration since the previous invocation. When it's invoked, `kubectl apply` does a three-way diff between the previous configuration, the provided input and the current configuration of the resource, in order to determine how to modify the resource.
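
In current versions of `kubectl`, this is the `kubectl.kubernetes.io/last-applied-configuration` annotation; a sketch of how you might inspect it (annotation name assumed):

```console
$ kubectl get rc my-nginx-v4 -o yaml | grep -A 1 last-applied-configuration
```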
Currently, resources are created without this annotation, so the first invocation of `kubectl apply` will fall back to a two-way diff between the provided input and the current configuration of the resource. During this first invocation, it cannot detect the deletion of properties set when the resource was created. For this reason, it will not remove them.
All subsequent calls to `kubectl apply`, and other commands that modify the configuration, such as `kubectl replace` and `kubectl edit`, will update the annotation, allowing subsequent calls to `kubectl apply` to detect and perform deletions using a three-way diff.
## Disruptive updates
In some cases, you may need to update resource fields that cannot be updated once initialized. For such changes you can use `kubectl replace --force`, which deletes and re-creates the resource:

```console
$ kubectl replace -f ./nginx-rc.yaml --force
replicationcontrollers "my-nginx-v4" replaced
```

## What's next?