Remove all docs which are moving to http://kubernetes.github.io

All .md files are now only a pointer to where they likely live on the new
site.

All other files are untouched.
Eric Paris
2016-03-04 12:49:17 -05:00
parent a20258efae
commit f334fc4179
134 changed files with 136 additions and 23433 deletions

@@ -32,198 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes 101 - Kubectl CLI and Pods
For Kubernetes 101, we will cover kubectl, pods, volumes, and multiple containers.
In order for the kubectl usage examples to work, make sure you have an examples directory locally, either from [a release](https://github.com/kubernetes/kubernetes/releases) or [the source](https://github.com/kubernetes/kubernetes).
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Kubernetes 101 - Kubectl CLI and Pods](#kubernetes-101---kubectl-cli-and-pods)
  - [Kubectl CLI](#kubectl-cli)
  - [Pods](#pods)
    - [Pod Definition](#pod-definition)
    - [Pod Management](#pod-management)
    - [Volumes](#volumes)
      - [Volume Types](#volume-types)
    - [Multiple Containers](#multiple-containers)
  - [What's Next?](#whats-next)
<!-- END MUNGE: GENERATED_TOC -->
## Kubectl CLI
The easiest way to interact with Kubernetes is via the [kubectl](../kubectl/kubectl.md) command-line interface.
For more info about kubectl, including its usage, commands, and parameters, see the [kubectl CLI reference](../kubectl/kubectl.md).
If you haven't installed and configured kubectl, finish the [prerequisites](../prereqs.md) before continuing.
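A quick way to confirm that kubectl is installed and can reach your cluster is to query the client and server versions and the cluster endpoints:
```sh
$ kubectl version
$ kubectl cluster-info
```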
## Pods
In Kubernetes, a group of one or more containers is called a _pod_. Containers in a pod are deployed together, and are started, stopped, and replicated as a group.
See [pods](../../../docs/user-guide/pods.md) for more details.
#### Pod Definition
The simplest pod definition describes the deployment of a single container. For example, an nginx web server pod might be defined as follows:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
```
A pod definition is a declaration of a _desired state_. Desired state is a very important concept in the Kubernetes model. Many things present a desired state to the system, and it is Kubernetes' responsibility to make sure that the current state matches the desired state. For example, when you create a Pod, you declare that you want the containers in it to be running. If the containers happen to not be running (e.g. because of a program failure), Kubernetes will continue to (re-)create them for you in order to drive them to the desired state. This process continues until the Pod is deleted.
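For instance, if the nginx process inside the pod exits, the kubelet notices and restarts the container; the pod's `RESTARTS` count reflects this (the output below is purely illustrative):
```console
$ kubectl get pod nginx
NAME      READY     STATUS    RESTARTS   AGE
nginx     1/1       Running   1          5m
```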
See the [design document](../../design/README.md) for more details.
#### Pod Management
Create a pod containing an nginx server ([pod-nginx.yaml](pod-nginx.yaml)):
```sh
$ kubectl create -f docs/user-guide/walkthrough/pod-nginx.yaml
```
List all pods:
```sh
$ kubectl get pods
```
On most providers, the pod IPs are not externally accessible. The easiest way to test that the pod is working is to create a busybox pod and exec commands on it remotely. See the [command execution documentation](../kubectl/kubectl_exec.md) for details.
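One way to do this is sketched below; the `busybox` pod name and the `sleep` command are only for illustration, and the exact `kubectl run` flags and behavior vary between releases (you can equally create the busybox pod from a small manifest):
```sh
# start a throwaway busybox pod (name and sleep duration are illustrative)
$ kubectl run busybox --image=busybox --restart=Never -- sleep 3600
# fetch the nginx pod's front page from inside the cluster
$ kubectl exec busybox -- wget -q -O - http://$(kubectl get pod nginx -o go-template={{.status.podIP}})
# clean up
$ kubectl delete pod busybox
```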
Provided the pod IP is accessible, you should be able to access its http endpoint with curl on port 80:
```sh
$ curl http://$(kubectl get pod nginx -o go-template={{.status.podIP}})
```
Delete the pod by name:
```sh
$ kubectl delete pod nginx
```
#### Volumes
That's great for a simple static web server, but what about persistent storage?
The container file system only lives as long as the container does. So if your app's state needs to survive relocation, reboots, and crashes, you'll need to configure some persistent storage.
For this example, we'll create a Redis pod with a named volume and a volume mount that defines the path at which to mount the volume inside the container.
1. Define a volume:

   ```yaml
   volumes:
   - name: redis-persistent-storage
     emptyDir: {}
   ```

2. Define a volume mount within a container definition:

   ```yaml
   volumeMounts:
   # name must match the volume name below
   - name: redis-persistent-storage
     # mount path within the container
     mountPath: /data/redis
   ```
Example Redis pod definition with a persistent storage volume ([pod-redis.yaml](pod-redis.yaml)):
<!-- BEGIN MUNGE: EXAMPLE pod-redis.yaml -->
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: redis-persistent-storage
      mountPath: /data/redis
  volumes:
  - name: redis-persistent-storage
    emptyDir: {}
```
[Download example](pod-redis.yaml?raw=true)
<!-- END MUNGE: EXAMPLE pod-redis.yaml -->
Notes:
- The `volumeMounts` `name` is a reference to a specific `volumes` `name`.
- The `volumeMounts` `mountPath` is the path to mount the volume within the container.
##### Volume Types
- **EmptyDir**: Creates a new directory that will exist as long as the Pod is running on the node, but it can persist across container failures and restarts.
- **HostPath**: Mounts an existing directory on the node's file system (e.g. `/var/logs`); see the sketch after this list.
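For instance, a `hostPath` volume that exposes a directory from the node might look like the following fragment (the volume name and path are illustrative):
```yaml
volumes:
- name: node-logs
  hostPath:
    path: /var/logs
```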
See [volumes](../../../docs/user-guide/volumes.md) for more details.
#### Multiple Containers
_Note:
The examples below are syntactically correct, but some of the images (e.g. kubernetes/git-monitor) don't exist yet. We're working on turning these into working examples._
So far we have only run a single container in each pod, but often you want two different containers that work together. An example of this would be a web server and a helper job that polls a git repository for new updates:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: www
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: /srv/www
      name: www-data
      readOnly: true
  - name: git-monitor
    image: kubernetes/git-monitor
    env:
    - name: GIT_REPO
      value: http://github.com/some/repo.git
    volumeMounts:
    - mountPath: /data
      name: www-data
  volumes:
  - name: www-data
    emptyDir: {}
```
Note that we have also added a volume here. In this case, the volume is mounted into both containers. It is marked `readOnly` in the web server's case, since it doesn't need to write to the directory.
Finally, we have also introduced an environment variable to the `git-monitor` container, which allows us to parameterize that container with the particular git repository that we want to track.
## What's Next?
Continue on to [Kubernetes 201](k8s201.md), or see the [guestbook example](../../../examples/guestbook/README.md) for a complete application.
This file has moved to: http://kubernetes.github.io/docs/user-guide/walkthrough/README/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

@@ -32,295 +32,7 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes 201 - Labels, Replication Controllers, Services and Health Checking
If you went through [Kubernetes 101](README.md), you learned about kubectl, pods, volumes, and multiple containers.
For Kubernetes 201, we will pick up where 101 left off and cover some slightly more advanced topics in Kubernetes, related to productionizing, deploying, and scaling your application.
In order for the kubectl usage examples to work, make sure you have an examples directory locally, either from [a release](https://github.com/kubernetes/kubernetes/releases) or [the source](https://github.com/kubernetes/kubernetes).
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Kubernetes 201 - Labels, Replication Controllers, Services and Health Checking](#kubernetes-201---labels-replication-controllers-services-and-health-checking)
  - [Labels](#labels)
  - [Replication Controllers](#replication-controllers)
    - [Replication Controller Management](#replication-controller-management)
  - [Services](#services)
    - [Service Management](#service-management)
  - [Health Checking](#health-checking)
    - [Process Health Checking](#process-health-checking)
    - [Application Health Checking](#application-health-checking)
  - [What's Next?](#whats-next)
<!-- END MUNGE: GENERATED_TOC -->
## Labels
Having already learned about Pods and how to create them, you may be struck by an urge to create many, many pods. Please do! But eventually you will need a system to organize these pods into groups. The system for achieving this in Kubernetes is Labels. Labels are key-value pairs that are attached to each object in Kubernetes. Label selectors can be passed along with a RESTful `list` request to the apiserver to retrieve a list of objects which match that label selector.
To add a label, add a labels section under metadata in the pod definition:
```yaml
labels:
  app: nginx
```
For example, here is the nginx pod definition with labels ([pod-nginx-with-label.yaml](pod-nginx-with-label.yaml)):
<!-- BEGIN MUNGE: EXAMPLE pod-nginx-with-label.yaml -->
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
```
[Download example](pod-nginx-with-label.yaml?raw=true)
<!-- END MUNGE: EXAMPLE pod-nginx-with-label.yaml -->
Create the labeled pod ([pod-nginx-with-label.yaml](pod-nginx-with-label.yaml)):
```console
$ kubectl create -f docs/user-guide/walkthrough/pod-nginx-with-label.yaml
```
List all pods with the label `app=nginx`:
```console
$ kubectl get pods -l app=nginx
```
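The same label-selector query can also be issued directly against the apiserver's RESTful `list` endpoint. For example, with `kubectl proxy` serving the API locally (the port and the `default` namespace below are illustrative):
```console
$ kubectl proxy --port=8001 &
$ curl 'http://localhost:8001/api/v1/namespaces/default/pods?labelSelector=app%3Dnginx'
```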
For more information, see [Labels](../labels.md).
Labels are a core concept used by two additional Kubernetes building blocks: Replication Controllers and Services.
## Replication Controllers
OK, now you know how to make awesome, multi-container, labeled pods, and you want to use them to build an application. You might be tempted to just start building a whole bunch of individual pods, but if you do that, a whole host of operational concerns pops up. For example: how will you scale the number of pods up or down, and how will you ensure that all pods are homogeneous?

Replication controllers are the objects that answer these questions. A replication controller combines a template for pod creation (a "cookie-cutter" if you will) and a number of desired replicas into a single Kubernetes object. The replication controller also contains a label selector that identifies the set of objects managed by the replication controller. The replication controller constantly measures the size of this set relative to the desired size, and takes action by creating or deleting pods.
For example, here is a replication controller that instantiates two nginx pods ([replication-controller.yaml](replication-controller.yaml)):
<!-- BEGIN MUNGE: EXAMPLE replication-controller.yaml -->
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-controller
spec:
  replicas: 2
  # selector identifies the set of Pods that this
  # replication controller is responsible for managing
  selector:
    app: nginx
  # podTemplate defines the 'cookie cutter' used for creating
  # new pods when necessary
  template:
    metadata:
      labels:
        # Important: these labels need to match the selector above
        # The api server enforces this constraint.
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```
[Download example](replication-controller.yaml?raw=true)
<!-- END MUNGE: EXAMPLE replication-controller.yaml -->
#### Replication Controller Management
Create an nginx replication controller ([replication-controller.yaml](replication-controller.yaml)):
```console
$ kubectl create -f docs/user-guide/walkthrough/replication-controller.yaml
```
List all replication controllers:
```console
$ kubectl get rc
```
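To change the desired number of replicas, you can edit the replication controller or use `kubectl scale`; for example (the replica count here is illustrative):
```console
$ kubectl scale rc nginx-controller --replicas=3
```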
Delete the replication controller by name:
```console
$ kubectl delete rc nginx-controller
```
For more information, see [Replication Controllers](../replication-controller.md).
## Services
Once you have a replicated set of pods, you need an abstraction that enables connectivity between the layers of your application. For example, if you have a replication controller managing your backend jobs, you don't want to have to reconfigure your front-ends whenever you re-scale your backends. Likewise, if the pods in your backends are scheduled (or rescheduled) onto different machines, you shouldn't need to reconfigure your front-ends. In Kubernetes, the service abstraction achieves these goals. A service provides a way to refer to a set of pods (selected by labels) with a single static IP address. It may also provide load balancing, if supported by the provider.
For example, here is a service that balances across the pods created in the previous nginx replication controller example ([service.yaml](service.yaml)):
<!-- BEGIN MUNGE: EXAMPLE service.yaml -->
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
  - port: 8000 # the port that this service should serve on
    # the container on each pod to connect to, can be a name
    # (e.g. 'www') or a number (e.g. 80)
    targetPort: 80
    protocol: TCP
  # just like the selector in the replication controller,
  # but this time it identifies the set of pods to load balance
  # traffic to.
  selector:
    app: nginx
```
[Download example](service.yaml?raw=true)
<!-- END MUNGE: EXAMPLE service.yaml -->
#### Service Management
Create an nginx service ([service.yaml](service.yaml)):
```console
$ kubectl create -f docs/user-guide/walkthrough/service.yaml
```
List all services:
```console
$ kubectl get services
```
On most providers, the service IPs are not externally accessible. The easiest way to test that the service is working is to create a busybox pod and exec commands on it remotely. See the [command execution documentation](../kubectl/kubectl_exec.md) for details.
Provided the service IP is accessible, you should be able to access its http endpoint with curl on the service port (8000 in this example):
```console
$ export SERVICE_IP=$(kubectl get service nginx-service -o go-template='{{.spec.clusterIP}}')
$ export SERVICE_PORT=$(kubectl get service nginx-service -o go-template='{{(index .spec.ports 0).port}}')
$ curl http://${SERVICE_IP}:${SERVICE_PORT}
```
Delete the service by name:
```console
$ kubectl delete service nginx-service
```
When created, each service is assigned a unique IP address. This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the service, and know that communication to the service will be automatically load-balanced out to some pod that is a member of the set identified by the label selector in the Service.
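For example, from inside any pod in the same namespace you can reach the service by its cluster IP and port, or by name if the cluster DNS add-on is running; a sketch, assuming a throwaway `busybox` pod like the one used in Kubernetes 101:
```console
$ kubectl exec busybox -- wget -q -O - http://nginx-service:8000
```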
For more information, see [Services](../services.md).
## Health Checking
When I write code it never crashes, right? Sadly the [Kubernetes issues list](https://github.com/kubernetes/kubernetes/issues) indicates otherwise...
Rather than trying to write bug-free code, a better approach is to use a management system to perform periodic health checking
and repair of your application. That way a system outside of your application itself is responsible for monitoring the
application and taking action to fix it. It's important that the system be outside of the application, since if
your application fails and the health checking agent is part of your application, it may fail as well and you'll never know.
In Kubernetes, the health check monitor is the Kubelet agent.
#### Process Health Checking
The simplest form of health-checking is just process level health checking. The Kubelet constantly asks the Docker daemon
if the container process is still running, and if not, the container process is restarted. In all of the Kubernetes examples
you have run so far, this health checking was actually already enabled. It's on for every single container that runs in
Kubernetes.
#### Application Health Checking
However, in many cases this low-level health checking is insufficient. Consider, for example, the following code:
```go
lockOne := sync.Mutex{}
lockTwo := sync.Mutex{}

go func() {
	lockOne.Lock()
	lockTwo.Lock()
	// ... do some work ...
}()

lockTwo.Lock()
lockOne.Lock()
```
This is a classic example of a problem in computer science known as ["Deadlock"](https://en.wikipedia.org/wiki/Deadlock). From Docker's perspective your application is
still operating and the process is still running, but from your application's perspective your code is locked up and will never respond correctly.
To address this problem, Kubernetes supports user implemented application health-checks. These checks are performed by the
Kubelet to ensure that your application is operating correctly for a definition of "correctly" that _you_ provide.
Currently, there are three types of application health checks that you can choose from:
* HTTP Health Checks - The Kubelet will call a web hook. If it returns an HTTP status code between 200 and 399, it is considered a success; otherwise it is a failure. See health check examples [here](../liveness/).
* Container Exec - The Kubelet will execute a command inside your container. If the command exits with status 0, it is considered a success. See health check examples [here](../liveness/).
* TCP Socket - The Kubelet will attempt to open a socket to your container. If it can establish a connection, the container is considered healthy; if it can't, it is considered a failure. (Configuration sketches for the exec and TCP socket probes follow the HTTP example below.)
In all cases, if the Kubelet discovers a failure the container is restarted.
The container health checks are configured in the `livenessProbe` section of your container config. There you can also specify an `initialDelaySeconds` that is a grace period from when the container is started to when health checks are performed, to enable your container to perform any necessary initialization.
Here is an example config for a pod with an HTTP health check ([pod-with-http-healthcheck.yaml](pod-with-http-healthcheck.yaml)):
<!-- BEGIN MUNGE: EXAMPLE pod-with-http-healthcheck.yaml -->
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-healthcheck
spec:
  containers:
  - name: nginx
    image: nginx
    # defines the health checking
    livenessProbe:
      # an http probe
      httpGet:
        path: /_status/healthz
        port: 80
      # length of time to wait for a pod to initialize
      # after pod startup, before applying health checking
      initialDelaySeconds: 30
      timeoutSeconds: 1
    ports:
    - containerPort: 80
```
[Download example](pod-with-http-healthcheck.yaml?raw=true)
<!-- END MUNGE: EXAMPLE pod-with-http-healthcheck.yaml -->
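For comparison, here are sketches of the other two probe types as container-spec fragments; the command, file path, port, and delay are purely illustrative and are not taken from the example files above:
```yaml
# Container Exec probe: healthy while the command exits with status 0
livenessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
  initialDelaySeconds: 15

# TCP Socket probe: healthy while a connection to the port can be opened
livenessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 15
```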
For more information about health checking, see [Container Probes](../pod-states.md#container-probes).
## What's Next?
For a complete application see the [guestbook example](../../../examples/guestbook/).
This file has moved to: http://kubernetes.github.io/docs/user-guide/walkthrough/k8s201/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->