Merge pull request #11071 from bgrant0607/bugfix1

First draft of deployment-oriented sections of the user guide.
This commit is contained in:
Rohit Jnagal 2015-07-10 14:54:03 -07:00
commit 03fc86044e
6 changed files with 805 additions and 0 deletions

# Kubernetes User Guide: Managing Applications: Configuring and launching containers
## Configuration in Kubernetes
In addition to the imperative-style commands, such as `kubectl run` and `kubectl expose`, described [elsewhere](quick-start.md), Kubernetes supports declarative configuration. Configuration files are often preferable to imperative commands, since they can be checked into version control and changes to the files can be code reviewed, which is especially important for more complex configurations, and produces a more robust, reliable, and auditable system.
In the declarative style, all configuration is stored in YAML or JSON configuration files, using Kubernetes's API resource schemas as the configuration schemas. `kubectl` can create, update, delete, and get API resources. The `apiVersion` (currently “v1”), resource `kind`, and resource `name` are used by `kubectl` to construct the appropriate API path to invoke for the specified operation.
## Launching a container using a configuration file
Kubernetes executes containers in [*Pods*](../../docs/pods.md). A pod containing a simple Hello World container can be specified in YAML as follows:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec: # specification of the pod's contents
  restartPolicy: Never
  containers:
  - name: hello
    image: "ubuntu:14.04"
    command: ["/bin/echo", "hello", "world"]
```
The value of `metadata.name`, `hello-world`, will be the name of the pod resource created, and must be unique within the cluster, whereas `containers[0].name` is just a nickname for the container within that pod. `image` is the name of the Docker image, which Kubernetes expects to be able to pull from a registry, the [Docker Hub](https://registry.hub.docker.com/) by default.
`restartPolicy: Never` indicates that we just want to run the container once and then terminate the pod.
The [`command`](../../docs/containers.md#containers-and-commands) overrides the Docker container's `Entrypoint`. Command arguments (corresponding to Docker's `Cmd`) may be specified using `args`, as follows:
```yaml
command: ["/bin/echo"]
args: ["hello","world"]
```
This pod can be created using the `create` command:
```bash
$ kubectl create -f hello-world.yaml
pods/hello-world
```
`kubectl` prints the resource type and name of the resource created when successful.
## Validating configuration
If you're not sure you specified the resource correctly, you can ask `kubectl` to validate it for you:
```bash
$ kubectl create -f hello-world.yaml --validate
```
Let's say you specified `entrypoint` instead of `command`. You'd see output as follows:
```
I0709 06:33:05.600829 14160 schema.go:126] unknown field: entrypoint
I0709 06:33:05.600988 14160 schema.go:129] this may be a false alarm, see https://github.com/GoogleCloudPlatform/kubernetes/issues/6842
pods/hello-world
```
`kubectl create --validate` currently warns about problems it detects, but creates the resource anyway, unless a required field is absent or a field value is invalid. Unknown API fields are ignored, so be careful. This pod was created, but without a `command`; the field is optional, since the image may specify an `Entrypoint`.
## Environment variables and variable expansion
Kubernetes [does not automatically run commands in a shell](https://github.com/GoogleCloudPlatform/kubernetes/wiki/User-FAQ#use-of-environment-variables-on-the-command-line) (not all images contain shells). If you would like to run your command in a shell, such as to expand environment variables (specified using `env`), you could do the following:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec: # specification of the pod's contents
  restartPolicy: Never
  containers:
  - name: hello
    image: "ubuntu:14.04"
    env:
    - name: MESSAGE
      value: "hello world"
    command: ["/bin/sh", "-c"]
    args: ["/bin/echo \"${MESSAGE}\""]
```
However, a shell isn't necessary just to expand environment variables. Kubernetes will do it for you if you use [`$(ENVVAR)` syntax](../../docs/design/expansion.md):
```yaml
command: ["/bin/echo"]
args: ["$(MESSAGE)"]
```
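Putting it together, a complete pod using this expansion might look as follows (a minimal sketch combining the examples above):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec: # specification of the pod's contents
  restartPolicy: Never
  containers:
  - name: hello
    image: "ubuntu:14.04"
    env:
    - name: MESSAGE
      value: "hello world"
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]
```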
## Viewing pod status
You can see the pod you created (actually all of your cluster's pods) using the `get` command.
If you're quick, it will look as follows:
```bash
$ kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
hello-world   0/1       Pending   0          0s
```
Initially, a newly created pod is unscheduled -- no node has been selected to run it. Scheduling happens after creation, but is fast, so you normally shouldn't see pods in an unscheduled state unless there's a problem.
After the pod has been scheduled, the image may need to be pulled to the node on which it was scheduled, if it hadn't been pulled already. After a few seconds, you should see the container running:
```bash
$ kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
hello-world   1/1       Running   0          5s
```
The `READY` column shows how many containers in the pod are running.
Almost immediately after it starts running, the container will terminate, since `echo` exits as soon as it has printed its output. `kubectl` shows that the container is no longer running and displays the exit status:
```bash
$ kubectl get pods
NAME          READY     STATUS       RESTARTS   AGE
hello-world   0/1       ExitCode:0   0          15s
```
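If a pod stays `Pending` or fails to start, `kubectl describe` shows its configuration and recent events (scheduling, image pulls, and so on), which usually point at the cause:
```bash
$ kubectl describe pod hello-world
```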
## Viewing pod output
You probably want to see the output of the command you ran. As with [`docker logs`](https://docs.docker.com/userguide/usingdocker/), `kubectl logs` will show you the output:
```bash
$ kubectl logs hello-world
hello world
```
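For a long-running container, you can stream the log instead of taking a snapshot, analogous to `tail -f` (assuming the `-f` flag of `kubectl logs` is available in your client version):
```bash
$ kubectl logs -f hello-world
```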
## Deleting pods
When you're done looking at the output, you should delete the pod:
```bash
$ kubectl delete pod hello-world
pods/hello-world
```
As with `create`, `kubectl` prints the resource type and name of the resource deleted when successful.
You can also use the resource/name format to specify the pod:
```bash
$ kubectl delete pods/hello-world
pods/hello-world
```
Terminated pods aren't currently deleted automatically, so that you can observe their final status; be sure to clean up your dead pods.
On the other hand, containers and their logs are eventually deleted automatically in order to free up disk space on the nodes.
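For example, one way to sweep up terminated pods with standard shell tools (a sketch; names and statuses will vary, so review the list before deleting):
```bash
$ kubectl get pods | grep ExitCode | awk '{print $1}' | xargs kubectl delete pods
```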

# Kubernetes User Guide: Managing Applications: Deploying continuously running applications
You previously read about how to quickly deploy a simple replicated application using [`kubectl run`](quick-start.md) and how to configure and launch single-run containers using [pods](configuring-containers.md). Here, you'll use the configuration-based approach to deploy a continuously running, replicated application.
## Launching a set of replicas using a configuration file
Kubernetes creates and manages sets of replicated containers (actually, replicated [Pods](../../docs/pods.md)) using [*Replication Controllers*](../../docs/replication-controller.md).
A replication controller simply ensures that a specified number of pod "replicas" are running at any one time. If there are too many, it will kill some. If there are too few, it will start more. It's analogous to Google Compute Engine's [Instance Group Manager](https://cloud.google.com/compute/docs/instance-groups/manager/) or AWS's [Auto-scaling Group](http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingGroup.html) (with no scaling policies).
The replication controller created to run nginx by `kubectl run` in the [Quick start](quick-start.md) could be specified using YAML as follows:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```
Some differences compared to specifying just a pod are that the `kind` is `ReplicationController`, the number of `replicas` desired is specified, and the pod specification is under the `template` field. The names of the pods don't need to be specified explicitly because they are generated from the name of the replication controller.
This replication controller can be created using `create`, just as with pods:
```bash
$ kubectl create -f nginx-rc.yaml
replicationcontrollers/my-nginx
```
Unlike in the case where you directly create pods, a replication controller replaces pods that are deleted or terminated for any reason, such as in the case of node failure. For this reason, we recommend that you use a replication controller for a continuously running application even if your application requires only a single pod, in which case you can omit `replicas` and it will default to a single replica.
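A minimal single-replica controller might look as follows (a sketch; the name and label are illustrative, and chosen so its selector doesn't overlap with `my-nginx`):
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx-single
spec:
  # replicas defaults to 1 when omitted
  template:
    metadata:
      labels:
        app: nginx-single
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```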
## Viewing replication controller status
You can view the replication controller you created using `get`:
```bash
$ kubectl get rc
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR    REPLICAS
my-nginx     nginx          nginx      app=nginx   2
```
This tells you that your controller will ensure that you have two nginx replicas.
You can see those replicas using `get`, just as with pods you created directly:
```bash
$ kubectl get pods
NAME             READY     STATUS    RESTARTS   AGE
my-nginx-065jq   1/1       Running   0          51s
my-nginx-buaiq   1/1       Running   0          51s
```
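You can also inspect a controller's configuration and recent events, such as pod creations, with `kubectl describe` (output varies):
```bash
$ kubectl describe rc my-nginx
```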
## Deleting replication controllers
When you want to kill your application, delete your replication controller, as in the [Quick start](quick-start.md):
```bash
$ kubectl delete rc my-nginx
replicationcontrollers/my-nginx
```
By default, this also deletes the pods managed by the replication controller. If there are a large number of pods, this may take a while to complete. If you want to leave the pods running, specify `--cascade=false`.
If you try to delete the pods before deleting the replication controller, it will just replace them, as it is supposed to do.
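For example, to delete only the controller and then remove the orphaned pods by label:
```bash
$ kubectl delete rc my-nginx --cascade=false
replicationcontrollers/my-nginx
$ kubectl delete pods -lapp=nginx
```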
## Labels
Kubernetes uses user-defined key-value attributes called [*labels*](../../docs/labels.md) to categorize and identify sets of resources, such as pods and replication controllers. The example above specified a single label in the pod template, with key `app` and value `nginx`. All pods created carry that label, which can be viewed using `-L`:
```bash
$ kubectl get pods -L app
NAME             READY     STATUS    RESTARTS   AGE       APP
my-nginx-afv12   0/1       Running   0          3s        nginx
my-nginx-lg99z   0/1       Running   0          3s        nginx
```
The labels from the pod template are copied to the replication controller's labels by default, as well -- all resources in Kubernetes support labels:
```bash
$ kubectl get rc my-nginx -L app
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR    REPLICAS   APP
my-nginx     nginx          nginx      app=nginx   2          nginx
```
More importantly, the pod template's labels are used to create a [`selector`](../../docs/labels.md#label-selectors) that will match pods carrying those labels. You can see this field by requesting it using the [Go template output format of `kubectl get`](../../docs/kubectl_get.md):
```bash
$ kubectl get rc my-nginx -o template --template="{{.spec.selector}}"
map[app:nginx]
```
You could also specify the `selector` explicitly, such as if you wanted to specify labels in the pod template that you didn't want to select on, but you should ensure that the selector will match the labels of the pods created from the pod template, and that it won't match pods created by other replication controllers. The most straightforward way to ensure the latter is to create a unique label value for the replication controller, and to specify it in both the pod template's labels and in the selector.
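Such a controller might look like the following sketch, where `deployment: v1` is an illustrative unique value specified in both places:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx
spec:
  replicas: 2
  selector:
    app: nginx
    deployment: v1
  template:
    metadata:
      labels:
        app: nginx
        deployment: v1
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```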

# Kubernetes User Guide: Managing Applications: Managing deployments
You've deployed your application and exposed it via a service. Now what? Kubernetes provides a number of tools to help you manage your application deployment, including scaling and updating. Among the features we'll discuss in more depth are [configuration files](configuring-containers.md#configuration-in-kubernetes) and [labels](deploying-applications.md#labels).
## Organizing resource configurations
Many applications require multiple resources to be created, such as a Replication Controller and a Service. Management of multiple resources can be simplified by grouping them together in the same file (separated by `---` in YAML). For example:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
```
Multiple resources can be created the same way as a single resource:
```bash
$ kubectl create -f nginx-app.yaml
replicationcontrollers/my-nginx
services/my-nginx-svc
```
`kubectl create` also accepts multiple `-f` arguments:
```bash
$ kubectl create -f nginx-rc.yaml -f nginx-svc.yaml
```
And a directory can be specified rather than or in addition to individual files:
```bash
$ kubectl create -f nginx/
```
`kubectl` will read any files with suffixes `.yaml`, `.yml`, or `.json`.
It is a recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same directory. If the tiers of your application bind to each other using DNS, you can then simply deploy all of the components of your stack en masse.
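For example, a layout like the following (file names are illustrative):
```
nginx/
  nginx-rc.yaml
  nginx-svc.yaml
```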
A URL can also be specified as a configuration source, which is handy for deploying directly from configuration files checked into GitHub:
```bash
$ kubectl create -f https://raw.githubusercontent.com/GoogleCloudPlatform/kubernetes/master/examples/replication.yaml
replicationcontrollers/nginx
```
## Bulk operations in kubectl
Resource creation isn't the only operation that `kubectl` can perform in bulk. It can also extract resource names from configuration files in order to perform other operations, in particular to delete the same resources you created:
```bash
$ kubectl delete -f nginx/
replicationcontrollers/my-nginx
services/my-nginx-svc
```
In the case of just two resources, it's also easy to specify both on the command line using the resource/name syntax:
```bash
$ kubectl delete replicationcontrollers/my-nginx services/my-nginx-svc
```
For larger numbers of resources, one can use labels to filter resources. The selector is specified using `-l`:
```bash
$ kubectl delete all -lapp=nginx
replicationcontrollers/my-nginx
services/my-nginx-svc
```
Because `kubectl` outputs resource names in the same syntax it accepts, it's easy to chain operations using `$()` or `xargs`:
```bash
$ kubectl get $(kubectl create -f nginx/ | grep my-nginx)
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR    REPLICAS
my-nginx     nginx          nginx      app=nginx   2
NAME           LABELS      SELECTOR    IP(S)          PORT(S)
my-nginx-svc   app=nginx   app=nginx   10.0.152.174   80/TCP
```
## Using labels effectively
The examples we've used so far apply at most a single label to any resource. There are many scenarios where multiple labels should be used to distinguish sets from one another.
For instance, different applications would use different values for the `app` label, but a multi-tier application, such as the [guestbook example](../../examples/guestbook/), would additionally need to distinguish each tier. The frontend could carry the following labels:
```yaml
labels:
  app: guestbook
  tier: frontend
```
while the Redis master and slave would have different `tier` labels, and perhaps even an additional `role` label:
```yaml
labels:
  app: guestbook
  tier: backend
  role: master
```
and
```yaml
labels:
  app: guestbook
  tier: backend
  role: slave
```
The labels allow us to slice and dice our resources along any dimension specified by a label:
```bash
$ kubectl create -f guestbook-fe.yaml -f redis-master.yaml -f redis-slave.yaml
replicationcontrollers/guestbook-fe
replicationcontrollers/guestbook-redis-master
replicationcontrollers/guestbook-redis-slave
$ kubectl get pods -Lapp -Ltier -Lrole
NAME                           READY     STATUS    RESTARTS   AGE       APP         TIER       ROLE
guestbook-fe-4nlpb             1/1       Running   0          1m        guestbook   frontend   <n/a>
guestbook-fe-ght6d             1/1       Running   0          1m        guestbook   frontend   <n/a>
guestbook-fe-jpy62             1/1       Running   0          1m        guestbook   frontend   <n/a>
guestbook-redis-master-5pg3b   1/1       Running   0          1m        guestbook   backend    master
guestbook-redis-slave-2q2yf    1/1       Running   0          1m        guestbook   backend    slave
guestbook-redis-slave-qgazl    1/1       Running   0          1m        guestbook   backend    slave
my-nginx-divi2                 1/1       Running   0          29m       nginx       <n/a>      <n/a>
my-nginx-o0ef1                 1/1       Running   0          29m       nginx       <n/a>      <n/a>
$ kubectl get pods -lapp=guestbook,role=slave
NAME                          READY     STATUS    RESTARTS   AGE
guestbook-redis-slave-2q2yf   1/1       Running   0          3m
guestbook-redis-slave-qgazl   1/1       Running   0          3m
```
## Canary deployments
Another scenario where multiple labels are needed is to distinguish deployments of different releases or configurations of the same component. For example, it is common practice to deploy a *canary* of a new application release (specified via image tag) side by side with the previous release so that the new release can receive live production traffic before fully rolling it out. For instance, a new release of the guestbook frontend might carry the following labels:
```yaml
labels:
  app: guestbook
  tier: frontend
  track: canary
```
and the primary, stable release would have a different value of the `track` label, so that the sets of pods controlled by the two replication controllers would not overlap:
```yaml
labels:
  app: guestbook
  tier: frontend
  track: stable
```
The frontend service would span both sets of replicas by selecting the common subset of their labels, omitting the `track` label:
```yaml
selector:
  app: guestbook
  tier: frontend
```
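Assembled from the fragments above, the complete frontend service might look like this sketch (the service name is illustrative):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: guestbook-fe
spec:
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
```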
## Updating labels
Sometimes existing pods and other resources need to be relabeled before creating new resources. This can be done with `kubectl label`. For example:
```bash
$ kubectl label pods -lapp=nginx tier=fe
NAME                READY     STATUS    RESTARTS   AGE
my-nginx-v4-9gw19   1/1       Running   0          14m
NAME                READY     STATUS    RESTARTS   AGE
my-nginx-v4-hayza   1/1       Running   0          13m
NAME                READY     STATUS    RESTARTS   AGE
my-nginx-v4-mde6m   1/1       Running   0          17m
NAME                READY     STATUS    RESTARTS   AGE
my-nginx-v4-sh6m8   1/1       Running   0          18m
NAME                READY     STATUS    RESTARTS   AGE
my-nginx-v4-wfof4   1/1       Running   0          16m
$ kubectl get pods -lapp=nginx -Ltier
NAME                READY     STATUS    RESTARTS   AGE       TIER
my-nginx-v4-9gw19   1/1       Running   0          15m       fe
my-nginx-v4-hayza   1/1       Running   0          14m       fe
my-nginx-v4-mde6m   1/1       Running   0          18m       fe
my-nginx-v4-sh6m8   1/1       Running   0          19m       fe
my-nginx-v4-wfof4   1/1       Running   0          16m       fe
```
## Scaling your application
When load on your application grows or shrinks, it's easy to scale with `kubectl`. For instance, to increase the number of nginx replicas from 2 to 3, do:
```bash
$ kubectl scale rc my-nginx --replicas=3
scaled
$ kubectl get pods -lapp=nginx
NAME             READY     STATUS    RESTARTS   AGE
my-nginx-1jgkf   1/1       Running   0          3m
my-nginx-divi2   1/1       Running   0          1h
my-nginx-o0ef1   1/1       Running   0          1h
```
## Updating your application without a service outage
Eventually you'll need to update your deployed application, typically by specifying a new image or image tag, as in the canary deployment scenario above. `kubectl` supports several update operations, each of which is applicable to different scenarios.
To update a service without an outage, `kubectl` supports what is called [“rolling update”](../../docs/kubectl_rolling-update.md), which updates one pod at a time, rather than taking down the entire service at the same time.
Let's say you were running version 1.7.9 of nginx:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```
To update to version 1.9.1, you can use [`kubectl rolling-update --image`](../../docs/design/simple-rolling-update.md):
```bash
$ kubectl rolling-update my-nginx --image=nginx:1.9.1
Creating my-nginx-ccba8fbd8cc8160970f63f9a2696fc46
```
In another window, you can see that `kubectl` added a `deployment` label to the pods, whose value is a hash of the configuration, to distinguish the new pods from the old:
```bash
$ kubectl get pods -lapp=nginx -Ldeployment
NAME                                              READY     STATUS    RESTARTS   AGE       DEPLOYMENT
my-nginx-1jgkf                                    1/1       Running   0          1h        2d1d7a8f682934a254002b56404b813e
my-nginx-ccba8fbd8cc8160970f63f9a2696fc46-k156z   1/1       Running   0          1m        ccba8fbd8cc8160970f63f9a2696fc46
my-nginx-ccba8fbd8cc8160970f63f9a2696fc46-v95yh   1/1       Running   0          35s       ccba8fbd8cc8160970f63f9a2696fc46
my-nginx-divi2                                    1/1       Running   0          2h        2d1d7a8f682934a254002b56404b813e
my-nginx-o0ef1                                    1/1       Running   0          2h        2d1d7a8f682934a254002b56404b813e
my-nginx-q6all                                    1/1       Running   0          8m        2d1d7a8f682934a254002b56404b813e
```
`kubectl rolling-update` reports progress as it runs:
```bash
Updating my-nginx replicas: 4, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 1
At end of loop: my-nginx replicas: 4, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 1
At beginning of loop: my-nginx replicas: 3, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 2
Updating my-nginx replicas: 3, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 2
At end of loop: my-nginx replicas: 3, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 2
At beginning of loop: my-nginx replicas: 2, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 3
Updating my-nginx replicas: 2, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 3
At end of loop: my-nginx replicas: 2, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 3
At beginning of loop: my-nginx replicas: 1, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 4
Updating my-nginx replicas: 1, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 4
At end of loop: my-nginx replicas: 1, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 4
At beginning of loop: my-nginx replicas: 0, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 5
Updating my-nginx replicas: 0, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 5
At end of loop: my-nginx replicas: 0, my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 replicas: 5
Update succeeded. Deleting old controller: my-nginx
Renaming my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 to my-nginx
my-nginx
```
If you encounter a problem, you can stop the rolling update midway and revert to the previous version using `--rollback`:
```bash
$ kubectl rolling-update my-nginx --image=nginx:1.9.1 --rollback
Found existing update in progress (my-nginx-ccba8fbd8cc8160970f63f9a2696fc46), resuming.
Found desired replicas. Continuing update with existing controller my-nginx.
Stopping my-nginx-02ca3e87d8685813dbe1f8c164a46f02 replicas: 1 -> 0
Update succeeded. Deleting my-nginx-ccba8fbd8cc8160970f63f9a2696fc46
my-nginx
```
This is one example where the immutability of containers is a huge asset.
If you need to update more than just the image (e.g., command arguments, environment variables), you can create a new replication controller, with a new name and distinguishing label value, such as:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx-v4
spec:
  replicas: 5
  selector:
    app: nginx
    deployment: v4
  template:
    metadata:
      labels:
        app: nginx
        deployment: v4
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.2
        args: ["nginx", "-T"]
        ports:
        - containerPort: 80
```
and roll it out:
```bash
$ kubectl rolling-update my-nginx -f nginx-rc.yaml
Creating my-nginx-v4
At beginning of loop: my-nginx replicas: 4, my-nginx-v4 replicas: 1
Updating my-nginx replicas: 4, my-nginx-v4 replicas: 1
At end of loop: my-nginx replicas: 4, my-nginx-v4 replicas: 1
At beginning of loop: my-nginx replicas: 3, my-nginx-v4 replicas: 2
Updating my-nginx replicas: 3, my-nginx-v4 replicas: 2
At end of loop: my-nginx replicas: 3, my-nginx-v4 replicas: 2
At beginning of loop: my-nginx replicas: 2, my-nginx-v4 replicas: 3
Updating my-nginx replicas: 2, my-nginx-v4 replicas: 3
At end of loop: my-nginx replicas: 2, my-nginx-v4 replicas: 3
At beginning of loop: my-nginx replicas: 1, my-nginx-v4 replicas: 4
Updating my-nginx replicas: 1, my-nginx-v4 replicas: 4
At end of loop: my-nginx replicas: 1, my-nginx-v4 replicas: 4
At beginning of loop: my-nginx replicas: 0, my-nginx-v4 replicas: 5
Updating my-nginx replicas: 0, my-nginx-v4 replicas: 5
At end of loop: my-nginx replicas: 0, my-nginx-v4 replicas: 5
Update succeeded. Deleting my-nginx
my-nginx-v4
```
You can also run the [update demo](../../examples/update-demo/) to see a visual representation of the rolling update process.
## In-place updates of resources
Sometimes it's necessary to make narrow, non-disruptive updates to resources you've created. For instance, you might want to add an [annotation](../../docs/annotations.md) with a description of your object. That's easiest to do with `kubectl patch`:
```bash
$ kubectl patch rc my-nginx-v4 -p '{"metadata": {"annotations": {"description": "my frontend running nginx"}}}'
my-nginx-v4
$ kubectl get rc my-nginx-v4 -o yaml
apiVersion: v1
kind: ReplicationController
metadata:
  annotations:
    description: my frontend running nginx
...
```
The patch is specified using JSON.
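Other writable fields can be patched the same way; for instance, a sketch of resizing the controller (equivalent in effect to `kubectl scale`):
```bash
$ kubectl patch rc my-nginx-v4 -p '{"spec": {"replicas": 3}}'
my-nginx-v4
```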
For more significant changes, you can `get` the resource, edit it, and then `replace` the resource with the updated version:
```bash
$ export TMP=/tmp/nginx.yaml
$ kubectl get rc my-nginx-v4 -o yaml > $TMP
$ emacs $TMP
$ kubectl replace -f $TMP
replicationcontrollers/my-nginx-v4
$ rm $TMP
```
The system ensures that you don't clobber changes made by other users or components by confirming that the `resourceVersion` doesn't differ from the version you edited. If you want to update regardless of other changes, remove the `resourceVersion` field when you edit the resource. However, if you do this, don't use your original configuration file as the source, since additional fields most likely were set in the live state.
## Disruptive updates
In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a replication controller. To change such fields, use `replace --force`, which deletes and re-creates the resource. In this case, you can simply modify your original configuration file:
```bash
$ kubectl replace -f nginx-rc.yaml --force
replicationcontrollers/my-nginx-v4
replicationcontrollers/my-nginx-v4
```

# Kubernetes User Guide: Managing Applications: Prerequisites
To deploy and manage applications on Kubernetes, you'll use the Kubernetes command-line tool, [kubectl](../../docs/kubectl.md). It can be found in the release tar bundle, or can be built from source from GitHub. Ensure that it is executable and in your path.
In order for kubectl to find and access the Kubernetes cluster, it needs a [kubeconfig file](../../docs/kubeconfig-file.md), which is created automatically when creating a cluster using kube-up.sh (see the [getting started guides](../../docs/getting-started-guides/) for more about creating clusters). If you need access to a cluster you didn't create, see the [Sharing Cluster Access document](../../docs/sharing-clusters.md).
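To verify that kubectl is installed and can reach your cluster, run a quick sanity check (output varies by cluster):
```bash
$ kubectl version
$ kubectl cluster-info
```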

# Kubernetes User Guide: Managing Applications: Working with pods and containers in production
You've seen [how to configure and deploy pods and containers](configuring-containers.md), using some of the most common configuration parameters. This section dives into additional features that are especially useful for running applications in production.
## Persistent storage
The container file system only lives as long as the container does, so when a container crashes and restarts, changes to the filesystem will be lost and the container will restart from a clean slate. To access more persistent storage, outside the container file system, you need a [*volume*](../../docs/volumes.md). This is especially important for stateful applications, such as key-value stores and databases.
For example, [Redis](http://redis.io/) is a key-value cache and store, which we use in the [guestbook](../../examples/guestbook/) and other examples. We can add a volume to it to store persistent data as follows:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis
spec:
  template:
    metadata:
      labels:
        app: redis
        tier: backend
    spec:
      # Provision a fresh volume for the pod
      volumes:
      - name: data
        emptyDir: {}
      containers:
      - name: redis
        image: kubernetes/redis:v1
        ports:
        - containerPort: 6379
        # Mount the volume into the pod
        volumeMounts:
        - mountPath: /redis-master-data
          name: data # must match the name of the volume, above
```
`emptyDir` volumes live for the lifespan of the [pod](../../docs/pods.md), which is longer than the lifespan of any one container, so if the container fails and is restarted, our storage will live on.
In addition to the local disk storage provided by `emptyDir`, Kubernetes supports many different network-attached storage solutions, including PD on GCE and EBS on EC2, which are preferred for critical data, and will handle details such as mounting and unmounting the devices on the nodes. See [the volumes doc](../../docs/volumes.md) for more details.
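For example, to keep the Redis data on a GCE persistent disk instead of node-local storage, the `volumes` section above could be replaced with something like this sketch, where `my-data-disk` is a placeholder for a disk you have already created in the same zone:
```yaml
volumes:
- name: data
  gcePersistentDisk:
    # This GCE PD must already exist.
    pdName: my-data-disk
    fsType: ext4
```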
## Distributing credentials
Many applications need credentials, such as passwords, OAuth tokens, and TLS keys, to authenticate with other applications, databases, and services. Storing these credentials in container images or environment variables is less than ideal, since the credentials can then be copied by anyone with access to the image, pod/container specification, host file system, or host Docker daemon.
Kubernetes provides a mechanism, called [*secrets*](../../docs/secrets.md), that facilitates delivery of sensitive credentials to applications. A `Secret` is a simple resource containing a map of data. For instance, a simple secret with a username and password might look as follows:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  password: dmFsdWUtMg0K
  username: dmFsdWUtMQ0K
```
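The values under `data` are base64-encoded. For example, you could produce them as follows (here encoding the string `value-1`; the values above were encoded with trailing newline characters, hence the different suffix):
```bash
$ echo -n "value-1" | base64
dmFsdWUtMQ==
```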
As with other resources, this secret can be instantiated using `create` and can be viewed with `get`:
```bash
$ kubectl create -f secret.yaml
secrets/mysecret
$ kubectl get secrets
NAME                  TYPE                                  DATA
default-token-v9pyz   kubernetes.io/service-account-token   2
mysecret              Opaque                                2
```
To use the secret, you need to reference it in a pod or pod template. The `secret` volume source enables you to mount it as an in-memory directory into your containers.
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis
spec:
  template:
    metadata:
      labels:
        app: redis
        tier: backend
    spec:
      volumes:
      - name: data
        emptyDir: {}
      - name: supersecret
        secret:
          secretName: mysecret
      containers:
      - name: redis
        image: kubernetes/redis:v1
        ports:
        - containerPort: 6379
        # Mount the volume into the pod
        volumeMounts:
        - mountPath: /redis-master-data
          name: data # must match the name of the volume, above
        - mountPath: /var/run/secrets/super
          name: supersecret
```
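Inside the container, each key of the secret's `data` map appears as a file under the mount path; a sketch of what the application would see:
```bash
# From within the redis container:
$ ls /var/run/secrets/super
password  username
$ cat /var/run/secrets/super/username
```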
For more details, see the [secrets document](../../docs/secrets.md), [example](../../examples/secrets/) and [design doc](../../docs/design/secrets.md).
## Authenticating with a private image registry
Secrets can also be used to pass [image registry credentials](../../docs/images.md#using-a-private-registry).
First, create a `.dockercfg` file, such as by running `docker login <registry.domain>`.
Then put the resulting `.dockercfg` file into a [secret resource](../../docs/secrets.md). For example:
```bash
$ docker login
Username: janedoe
Password: ●●●●●●●●●●●
Email: jdoe@example.com
WARNING: login credentials saved in /Users/jdoe/.dockercfg.
Login Succeeded
$ echo $(cat ~/.dockercfg)
{ "https://index.docker.io/v1/": { "auth": "ZmFrZXBhc3N3b3JkMTIK", "email": "jdoe@example.com" } }
$ cat ~/.dockercfg | base64
eyAiaHR0cHM6Ly9pbmRleC5kb2NrZXIuaW8vdjEvIjogeyAiYXV0aCI6ICJabUZyWlhCaGMzTjNiM0prTVRJSyIsICJlbWFpbCI6ICJqZG9lQGV4YW1wbGUuY29tIiB9IH0K
$ cat > image-pull-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
data:
  .dockercfg: eyAiaHR0cHM6Ly9pbmRleC5kb2NrZXIuaW8vdjEvIjogeyAiYXV0aCI6ICJabUZyWlhCaGMzTjNiM0prTVRJSyIsICJlbWFpbCI6ICJqZG9lQGV4YW1wbGUuY29tIiB9IH0K
type: kubernetes.io/dockercfg
EOF
$ kubectl create -f image-pull-secret.yaml
secrets/myregistrykey
```
Now, you can create pods which reference that secret by adding an `imagePullSecrets`
section to a pod definition.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: foo
    image: janedoe/awesomeapp:v1
  imagePullSecrets:
  - name: myregistrykey
```

# Kubernetes User Guide: Managing Applications: Quick start
This guide will help you get oriented to Kubernetes and running your first containers on the cluster.
## Launching a simple application
Once your application is packaged into a container and pushed to an image registry, you're ready to deploy it to Kubernetes.
For example, [nginx](http://wiki.nginx.org/Main) is a popular HTTP server, with a [pre-built container on Docker hub](https://registry.hub.docker.com/_/nginx/). The [`kubectl run`](../../docs/kubectl_run.md) command below will create two nginx replicas, listening on port 80.
```bash
$ kubectl run my-nginx --image=nginx --replicas=2 --port=80
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR       REPLICAS
my-nginx     my-nginx       nginx      run=my-nginx   2
```
You can check that they are running with:
```bash
$ kubectl get po
NAME             READY     STATUS    RESTARTS   AGE
my-nginx-l8n3i   1/1       Running   0          29m
my-nginx-q7jo3   1/1       Running   0          29m
```
Kubernetes will ensure that your application keeps running, by automatically restarting containers that fail, spreading containers across nodes, and recreating containers on new nodes when nodes fail.
## Exposing your application to the Internet
Through integration with some cloud providers (for example Google Compute Engine and AWS EC2), Kubernetes enables you to request that it provision a public IP address for your application. To do this, run:
```bash
$ kubectl expose rc my-nginx --port=80 --type=LoadBalancer
NAME       LABELS         SELECTOR       IP(S)     PORT(S)
my-nginx   run=my-nginx   run=my-nginx             80/TCP
```
To find the public IP address assigned to your application, execute:
```bash
$ kubectl get svc my-nginx -o json | grep \"ip\"
"ip": "130.111.122.213"
```
In order to access your nginx landing page, you also have to make sure that traffic from external IPs is allowed. Do this by opening a [firewall to allow traffic on port 80](../../docs/services-firewalls.md).
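Once the address is allocated and the firewall is open, you should be able to fetch the nginx welcome page, substituting the IP from the previous step:
```bash
$ curl http://130.111.122.213
```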
## Killing the application
To kill the application and delete its containers and public IP address, do:
```bash
$ kubectl delete rc my-nginx
replicationcontrollers/my-nginx
$ kubectl delete svc my-nginx
services/my-nginx
```