Merge pull request #6505 from brendandburns/hyperkube

Docker multi-node
Vish Kannan 2015-04-08 11:08:39 -07:00
commit ffffbb7edf
11 changed files with 431 additions and 3 deletions


@@ -5,5 +5,6 @@ RUN apt-get -yy -q install iptables
 COPY hyperkube /hyperkube
 RUN chmod a+rx /hyperkube
+COPY master-multi.json /etc/kubernetes/manifests-multi/master.json
 COPY master.json /etc/kubernetes/manifests/master.json


@@ -0,0 +1,45 @@
{
  "apiVersion": "v1beta3",
  "kind": "Pod",
  "metadata": {"name": "k8s-master"},
  "spec": {
    "hostNetwork": true,
    "containers": [
      {
        "name": "controller-manager",
        "image": "gcr.io/google_containers/hyperkube:v0.14.1",
        "command": [
          "/hyperkube",
          "controller-manager",
          "--master=127.0.0.1:8080",
          "--machines=127.0.0.1",
          "--sync_nodes=true",
          "--v=2"
        ]
      },
      {
        "name": "apiserver",
        "image": "gcr.io/google_containers/hyperkube:v0.14.1",
        "command": [
          "/hyperkube",
          "apiserver",
          "--portal_net=10.0.0.1/24",
          "--address=0.0.0.0",
          "--etcd_servers=http://127.0.0.1:4001",
          "--cluster_name=kubernetes",
          "--v=2"
        ]
      },
      {
        "name": "scheduler",
        "image": "gcr.io/google_containers/hyperkube:v0.14.1",
        "command": [
          "/hyperkube",
          "scheduler",
          "--master=127.0.0.1:8080",
          "--v=2"
        ]
      }
    ]
  }
}


@ -1,7 +1,7 @@
{ {
"apiVersion": "v1beta3", "apiVersion": "v1beta3",
"kind": "Pod", "kind": "Pod",
"metadata": {"name":"nginx"}, "metadata": {"name":"k8s-master"},
"spec":{ "spec":{
"hostNetwork": true, "hostNetwork": true,
"containers":[ "containers":[


@@ -25,7 +25,8 @@ Vmware | CoreOS | CoreOS | flannel | [docs](../../docs/getting
 Azure | Saltstack | Ubuntu | OpenVPN | [docs](../../docs/getting-started-guides/azure.md) | Community (@jeffmendoza) |
 Bare-metal | custom | Ubuntu | _none_ | [docs](../../docs/getting-started-guides/ubuntu_single_node.md) | Community (@jainvipin) |
 Bare-metal | custom | Ubuntu Cluster | flannel | [docs](../../docs/getting-started-guides/ubuntu_multinodes_cluster.md) | Community (@resouer @WIZARD-CXY) | use k8s version 0.12.0
-Docker | custom | N/A | local | [docs](docker.md) | Project (@brendandburns) | Tested @ 0.14.1 |
+Docker Single Node | custom | N/A | local | [docs](docker.md) | Project (@brendandburns) | Tested @ 0.14.1 |
+Docker Multi Node | Flannel | N/A | local | [docs](docker-multinode.md) | Project (@brendandburns) | Tested @ 0.14.1 |
 Local | | | _none_ | [docs](../../docs/getting-started-guides/locally.md) | Community (@preillyme) |
 Ovirt | | | | [docs](../../docs/getting-started-guides/ovirt.md) | Inactive |
 Rackspace | CoreOS | CoreOS | Rackspace | [docs](../../docs/getting-started-guides/rackspace.md) | Inactive |


@@ -0,0 +1,43 @@
### Running Multi-Node Kubernetes Using Docker
_Note_:
These instructions are significantly more advanced than the [single node](docker.md) instructions. If you are
just starting to explore Kubernetes, we recommend that you start there.
## Table of Contents
* [Overview](#overview)
* [Installing the master node](#master-node)
* [Installing a worker node](#adding-a-worker-node)
* [Testing your cluster](#testing-your-cluster)
## Overview
This guide will set up a 2-node kubernetes cluster, consisting of a _master_ node which hosts the API server and orchestrates work
and a _worker_ node which receives work from the master. You can repeat the process of adding worker nodes an arbitrary number of
times to create larger clusters.
Here's a diagram of what the final result will look like:
![Kubernetes Multi Node on Docker](k8s-docker.png)
### Bootstrap Docker
This guide uses a pattern of running two instances of the Docker daemon:
1) A _bootstrap_ Docker instance which is used to start system daemons like ```flanneld``` and ```etcd```
2) A _main_ Docker instance which is used for the Kubernetes infrastructure and user's scheduled containers
This pattern is necessary because the ```flannel``` daemon is responsible for setting up and managing the network that interconnects
all of the Docker containers created by Kubernetes. To achieve this, it must run outside of the _main_ Docker daemon. However,
it is still useful to use containers for deployment and management, so we create a simpler _bootstrap_ daemon to achieve this.
## Master Node
The first step in the process is to initialize the master node.
See [here](docker-multinode/master.md) for detailed instructions.
## Adding a worker node
Once your master is up and running you can add one or more workers on different machines.
See [here](docker-multinode/worker.md) for detailed instructions.
## Testing your cluster
Once your cluster has been created, you can [test it out](docker-multinode/testing.md).


@@ -0,0 +1,143 @@
## Installing a Kubernetes Master Node via Docker
We'll begin by setting up the master node. For the purposes of illustration, we'll assume that the IP of this machine is ```${MASTER_IP}```.
There are two main phases to installing the master:
* [Setting up ```flanneld``` and ```etcd```](#setting-up-flanneld-and-etcd)
* [Starting the Kubernetes master components](#starting-the-kubernetes-master)
## Setting up flanneld and etcd
### Set up a bootstrap Docker instance
We're going to use ```flannel``` to set up networking between Docker daemons. Flannel itself (and etcd on which it relies) will run inside of
Docker containers themselves. To achieve this, we need a separate "bootstrap" instance of the Docker daemon. This daemon will be started with
```--iptables=false``` so that it can only run containers with ```--net=host```. That's sufficient to bootstrap our system.
Run:
```sh
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```
_Important Note_:
If you are running this on a long-running system, rather than just experimenting, you should run the bootstrap Docker instance under something like SysV init, upstart, or systemd so that it is restarted
across reboots and failures.
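As a sketch of what that might look like under systemd (the unit name, file location, and binary path are illustrative assumptions; the flags mirror the command above):
```
# /etc/systemd/system/docker-bootstrap.service -- illustrative name and location
[Unit]
Description=Bootstrap Docker daemon for flannel and etcd
After=network.target

[Service]
# Same flags as the manual command above; adjust the docker binary path for your system
ExecStart=/usr/bin/docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap
Restart=always

[Install]
WantedBy=multi-user.target
```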
### Startup etcd for flannel and the API server to use
Run:
```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host -d kubernetes/etcd:2.0.5.1 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
```
Next, you need to set a CIDR range for flannel. This CIDR should be chosen to be non-overlapping with any existing network you are using:
```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host kubernetes/etcd:2.0.5.1 etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
```
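To double-check that the configuration was stored, you can read the key back through the same bootstrap daemon (an optional sanity check, not a required step):
```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host kubernetes/etcd:2.0.5.1 etcdctl get /coreos.com/network/config
```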
### Set up Flannel on the master node
Flannel is a network abstraction layer built by CoreOS; we will use it to provide simplified networking between our pods of containers.
Flannel re-configures the bridge that Docker uses for networking. As a result, we need to stop Docker, reconfigure its networking, and then restart Docker.
#### Bring down Docker
To re-configure Docker to use flannel, we need to take Docker down, run flannel, and then restart Docker.
Shutting down Docker is system-dependent; it may be:
```sh
sudo /etc/init.d/docker stop
```
or
```sh
sudo systemctl stop docker
```
or it may be something else.
#### Run flannel
Now run flanneld itself:
```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privileged -v /dev/net:/dev/net quay.io/coreos/flannel:0.3.0
```
The previous command should have printed a really long hash; copy this hash.
Now get the subnet settings from flannel:
```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
```
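The output should look something like the following; the exact subnet and MTU values will differ (these are illustrative):
```
FLANNEL_SUBNET=10.1.42.1/24
FLANNEL_MTU=1472
```
Take note of ```FLANNEL_SUBNET``` and ```FLANNEL_MTU```; you will need them in the next step.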
#### Edit the docker configuration
You now need to edit the Docker configuration to activate the new flags. Again, this is system-specific.
It may be in ```/etc/default/docker``` or ```/etc/systemd/service/docker.service```, or it may be elsewhere.
Regardless, you need to add the following to the Docker command line:
```sh
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
```
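For example, on a system that reads ```/etc/default/docker```, the edit might look like this (subnet and MTU values copied from your ```subnet.env``` output above; illustrative):
```sh
# /etc/default/docker -- illustrative values, use the ones flannel gave you
DOCKER_OPTS="--bip=10.1.42.1/24 --mtu=1472"
```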
#### Remove the existing Docker bridge
Docker creates a bridge named ```docker0``` by default. You need to remove this:
```sh
sudo /sbin/ifconfig docker0 down
sudo brctl delbr docker0
```
You may need to install the ```bridge-utils``` package for the ```brctl``` binary.
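On a Debian-based system, for example, that would be (assuming ```apt-get```):
```sh
sudo apt-get install bridge-utils
```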
#### Restart Docker
Again, this is system-dependent; it may be:
```sh
sudo /etc/init.d/docker start
```
or it may be:
```sh
sudo systemctl start docker
```
## Starting the Kubernetes Master
Now that your networking is set up, you can start up Kubernetes. As in the single-node case, we will use the "main" instance of the Docker daemon for the Kubernetes components.
```sh
sudo docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.14.1 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests-multi
```
### Also run the service proxy
```sh
sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.14.1 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
```
### Test it out
At this point, you should have a functioning 1-node cluster. Let's test it out!
Download the kubectl binary
([OS X](http://storage.googleapis.com/kubernetes-release/release/v0.14.1/bin/darwin/amd64/kubectl))
([linux](http://storage.googleapis.com/kubernetes-release/release/v0.14.1/bin/linux/amd64/kubectl))
List the nodes
```sh
kubectl get nodes
```
This should print:
```
NAME LABELS STATUS
127.0.0.1 <none> Ready
```
If the status of the node is ```NotReady``` or ```Unknown```, please check that all of the containers you created are successfully running.
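One way to check is to list the containers under both daemons (a suggested diagnostic, not an original step):
```sh
# Containers under the main daemon: kubelet, proxy, and the master pod's containers
sudo docker ps
# Containers under the bootstrap daemon: etcd and flannel
sudo docker -H unix:///var/run/docker-bootstrap.sock ps
```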
If all else fails, ask questions on IRC at #google-containers.
### Next steps
Move on to [adding one or more workers](worker.md)


@@ -0,0 +1,58 @@
## Testing your Kubernetes cluster
To validate that your node(s) have been added, run:
```sh
kubectl get nodes
```
That should show something like:
```
NAME LABELS STATUS
10.240.99.26 <none> Ready
127.0.0.1 <none> Ready
```
If the status of any node is ```Unknown``` or ```NotReady```, your cluster is broken; double-check that all containers are running properly, and if all else fails, contact us on IRC at
```#google-containers``` for advice.
### Run an application
```sh
kubectl -s http://localhost:8080 run-container nginx --image=nginx --port=80
```
Now run ```docker ps```; you should see nginx running. You may need to wait a few minutes for the image to be pulled.
### Expose it as a service:
```sh
kubectl expose rc nginx --port=80
```
This should print:
```
NAME LABELS SELECTOR IP PORT(S)
nginx <none> run-container=nginx <ip-addr> 80/TCP
```
Hit the webserver:
```sh
curl <insert-ip-from-above-here>
```
Note that you will need to run this curl command on your boot2docker VM if you are running on OS X.
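For example, with a default boot2docker setup, something like:
```sh
boot2docker ssh
curl <insert-ip-from-above-here>
```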
### Scaling
Now try to scale up the nginx you created before:
```sh
kubectl resize rc nginx --replicas=3
```
And list the pods
```sh
kubectl get pods
```
You should see pods landing on the newly added machine.


@@ -0,0 +1,132 @@
## Adding a Kubernetes worker node via Docker
These instructions are very similar to the master set-up, but they are duplicated for clarity.
You need to repeat these instructions for each node you want to join the cluster.
We will assume that the IP address of this node is ```${NODE_IP}``` and you have the IP address of the master in ```${MASTER_IP}``` that you created in the [master instructions](master.md).
For each worker node, there are three steps:
* [Set up ```flanneld``` on the worker node](#set-up-flanneld-on-the-worker-node)
* [Start kubernetes on the worker node](#start-kubernetes-on-the-worker-node)
* [Add the worker to the cluster](#add-the-node-to-the-cluster)
### Set up Flanneld on the worker node
As before, the Flannel daemon is going to provide network connectivity.
#### Set up a bootstrap Docker instance
As before, we need a second instance of the Docker daemon running to bootstrap the flannel networking.
Run:
```sh
sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
```
_Important Note_:
If you are running this on a long-running system, rather than just experimenting, you should run the bootstrap Docker instance under something like SysV init, upstart, or systemd so that it is restarted
across reboots and failures.
#### Bring down Docker
To re-configure Docker to use flannel, we need to take Docker down, run flannel, and then restart Docker.
Shutting down Docker is system-dependent; it may be:
```sh
sudo /etc/init.d/docker stop
```
or
```sh
sudo systemctl stop docker
```
or it may be something else.
#### Run flannel
Now run flanneld itself. This call is slightly different from the one above, since we point it at the etcd instance on the master.
```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privileged -v /dev/net:/dev/net quay.io/coreos/flannel:0.3.0 /opt/bin/flanneld --etcd-endpoints=http://${MASTER_IP}:4001
```
The previous command should have printed a really long hash; copy this hash.
Now get the subnet settings from flannel:
```sh
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
```
#### Edit the docker configuration
You now need to edit the Docker configuration to activate the new flags. Again, this is system-specific.
It may be in ```/etc/default/docker``` or ```/etc/systemd/service/docker.service```, or it may be elsewhere.
Regardless, you need to add the following to the Docker command line:
```sh
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
```
#### Remove the existing Docker bridge
Docker creates a bridge named ```docker0``` by default. You need to remove this:
```sh
sudo /sbin/ifconfig docker0 down
sudo brctl delbr docker0
```
You may need to install the ```bridge-utils``` package for the ```brctl``` binary.
#### Restart Docker
Again, this is system-dependent; it may be:
```sh
sudo /etc/init.d/docker start
```
or it may be:
```sh
sudo systemctl start docker
```
### Start Kubernetes on the worker node
#### Run the kubelet
Again, this is similar to the above, but ```--api_servers``` now points to the master we set up at the beginning.
```sh
sudo docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.14.1 /hyperkube kubelet --api_servers=http://${MASTER_IP}:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=$(hostname -i)
```
#### Run the service proxy
The service proxy provides load-balancing between groups of containers defined by Kubernetes ```Services```.
```sh
sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.14.1 /hyperkube proxy --master=http://${MASTER_IP}:8080 --v=2
```
### Add the node to the cluster
On the master you created above, create a file named ```node.yaml``` and make its contents:
```yaml
apiVersion: v1beta1
externalID: ${NODE_IP}
hostIP: ${NODE_IP}
id: ${NODE_IP}
kind: Node
resources:
capacity:
# Adjust these to match your node
cpu: "1"
memory: 3892043776
```
Make the API call to add the node. You should do this on the master node that you created above; otherwise, you need to add ```-s=http://${MASTER_IP}:8080``` to point ```kubectl``` at the master.
```sh
./kubectl create -f node.yaml
```
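You can then verify that the node was registered by listing the nodes again (on the master, or with ```-s=http://${MASTER_IP}:8080``` as noted above):
```sh
kubectl get nodes
```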
### Next steps
Move on to [testing your cluster](testing.md) or [add another node](#adding-a-kubernetes-worker-node-via-docker)


@@ -2,6 +2,9 @@
 The following instructions show you how to set up a simple, single node kubernetes cluster using Docker.
+Here's a diagram of what the final result will look like:
+![Kubernetes Single Node on Docker](k8s-singlenode-docker.png)
 ### Step One: Run etcd
 ```sh
 docker run --net=host -d kubernetes/etcd:2.0.5.1 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
@@ -74,3 +77,5 @@ Note that you will need run this curl command on your boot2docker VM if you are
 ### A note on turning down your cluster
 Many of these containers run under the management of the ```kubelet``` binary, which attempts to keep containers running, even if they fail. So, in order to turn down
 the cluster, you need to first kill the kubelet container, and then any other containers.
+You may use ```docker ps -a | awk '{print $1}' | xargs docker kill```; note that this kills _all_ containers running under Docker, so use with caution.

Binary file not shown (new image, 51 KiB)

Binary file not shown (new image, 31 KiB)