Merge pull request #16204 from caseydavenport/docs-update-v1.1

Auto commit by PR queue bot
<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

Bare Metal Kubernetes with Calico Networking
------------------------------------------------

## Introduction

This document describes how to deploy Kubernetes with Calico networking on _bare metal_ Ubuntu. For more information on Project Calico, visit [projectcalico.org](http://projectcalico.org) and the [calico-docker repository](https://github.com/projectcalico/calico-docker).

To install Calico on an existing Kubernetes cluster, or for more information on deploying Calico with Kubernetes in a number of other environments, take a look at our supported [deployment guides](https://github.com/projectcalico/calico-docker/tree/master/docs/kubernetes).

This guide will set up a simple Kubernetes cluster with a single Kubernetes master and two Kubernetes nodes. We will start the following processes with systemd:

On the Master:
- `etcd`
- `kube-apiserver`
- `kube-controller-manager`
- `kube-scheduler`
- `kubelet`
- `calico-node`

On each Node:
- `kubelet`
- `kube-proxy`
- `calico-node`

## Prerequisites

1. This guide uses `systemd` for process management. Ubuntu 15.04 supports systemd natively, as do a number of other Linux distributions.
2. All machines should have Docker >= 1.7.0 installed.
   - To install Docker on Ubuntu, follow [these instructions](https://docs.docker.com/installation/ubuntulinux/)
3. All machines should have connectivity to each other and the internet.
4. This demo assumes that none of the hosts have been configured with any Kubernetes or Calico software.

## Setup Master

Download the `calico-kubernetes` repository, which contains the necessary configuration for this guide.

```
wget https://github.com/projectcalico/calico-kubernetes/archive/master.tar.gz
tar -xvf master.tar.gz
```

### Setup environment variables for systemd services on Master

Many of the sample systemd services provided rely on environment variables on a per-node basis. Here we'll edit those environment variables and move them into place.

1.) Copy the network-environment-template from the `master` directory for editing.

```
cp calico-kubernetes-master/config/master/network-environment-template network-environment
```

2.) Edit `network-environment` to represent your current host's settings.
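As a rough illustration, this file defines per-host variables consumed by the systemd services. The variable names below are hypothetical stand-ins, not the authoritative names from the template; use the names the template itself defines:

```
# Hypothetical illustration only -- keep the variable names
# defined in the repository's network-environment-template.

# This host's IPv4 address (assumed name):
DEFAULT_IPV4=172.18.18.101

# Location of the Kubernetes apiserver (assumed name):
KUBERNETES_MASTER=172.18.18.101:8080
```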

3.) Move the `network-environment` into `/etc`

```
sudo mv -f network-environment /etc
```

### Install Kubernetes on Master

We'll use the `kubelet` to bootstrap the Kubernetes master processes as containers.

1.) Download and install the `kubelet` and `kubectl` binaries.

```
# Get the Kubernetes Release.
wget https://github.com/kubernetes/kubernetes/releases/download/v1.1.0/kubernetes.tar.gz

# Extract the Kubernetes binaries.
tar -xf kubernetes.tar.gz
tar -xf kubernetes/server/kubernetes-server-linux-amd64.tar.gz

# Install the `kubelet` and `kubectl` binaries.
sudo cp -f kubernetes/server/bin/kubelet /usr/bin
sudo cp -f kubernetes/server/bin/kubectl /usr/bin
```


2.) Install the `kubelet` systemd unit file and start the `kubelet`.

```
# Install the unit file
sudo cp -f calico-kubernetes-master/config/master/kubelet.service /etc/systemd

# Enable the unit file so that it runs on boot
sudo systemctl enable /etc/systemd/kubelet.service

# Start the `kubelet` service
sudo systemctl start kubelet.service
```

3.) Start the other Kubernetes master services using the provided manifest.

```
# Install the provided manifest
sudo mkdir -p /etc/kubernetes/manifests
sudo cp -f calico-kubernetes-master/config/master/kubernetes-master.manifest /etc/kubernetes/manifests
```

You should see the `apiserver`, `controller-manager` and `scheduler` containers running. It may take some time to download the Docker images; you can check whether the containers are running using `docker ps`.

### Install Calico on Master

We need to install Calico on the master so that the master can route to the pods in our Kubernetes cluster.

First, start the etcd instance used by Calico. We'll run this as a static Kubernetes pod. Before we install it, we'll need to edit the manifest. Open `calico-kubernetes-master/config/master/calico-etcd.manifest` and replace all instances of `<PRIVATE_IPV4>` with your master's IP address. Then, copy the file to the `/etc/kubernetes/manifests` directory.

```
sudo cp -f calico-kubernetes-master/config/master/calico-etcd.manifest /etc/kubernetes/manifests
```

> Note: For simplicity, in this demonstration we are using a single instance of etcd. In a production deployment a distributed etcd cluster is recommended for redundancy.

Now, install Calico. We'll need the `calicoctl` tool to do this.

```
# Install the `calicoctl` binary
wget https://github.com/projectcalico/calico-docker/releases/download/v0.9.0/calicoctl
chmod +x calicoctl
sudo mv calicoctl /usr/bin

# Fetch the calico/node container
sudo docker pull calico/node:v0.9.0

# Install, enable, and start the Calico service
sudo cp -f calico-kubernetes-master/config/master/calico-node.service /etc/systemd
sudo systemctl enable /etc/systemd/calico-node.service
sudo systemctl start calico-node.service
```

> Note: `calico-node` may take a few minutes on first boot while it downloads the `calico/node` Docker image.

## Setup Nodes

The following steps should be run on each Kubernetes node.

### Configure environment variables for the `kubelet` process

1.) Download the `calico-kubernetes` repository, which contains the necessary configuration for this guide.

```
wget https://github.com/projectcalico/calico-kubernetes/archive/master.tar.gz
tar -xvf master.tar.gz
```

2.) Copy the network-environment-template from the `node` directory.

```
cp calico-kubernetes-master/config/node/network-environment-template network-environment
```

3.) Edit `network-environment` to represent this node's settings.

4.) Move `network-environment` into `/etc`

```
sudo mv -f network-environment /etc
```

### Configure Docker on the Node

#### Create the veth

Instead of using Docker's default bridge (`docker0`), we will configure a new bridge, `cbr0`, with the desired IP range.

```
sudo apt-get install -y bridge-utils
sudo brctl addbr cbr0
sudo ifconfig cbr0 up
sudo ifconfig cbr0 <IP>/24
```

> Replace \<IP\> with the subnet for this host's containers. Example topology:

Node     | cbr0 IP
-------- | --------------
node-1   | 192.168.1.1/24
node-2   | 192.168.2.1/24
node-X   | 192.168.X.1/24

#### Start docker on cbr0

The Docker daemon must be started and told to use the already configured `cbr0` bridge instead of the usual `docker0`, with Docker's IP masquerading and iptables manipulation disabled.

1.) Edit the Ubuntu 15.04 Docker systemd unit at `/lib/systemd/system/docker.service`

2.) Find the line that reads `ExecStart=/usr/bin/docker -d -H fd://` and append the following flags: `--bridge=cbr0 --iptables=false --ip-masq=false`
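After the edit, the `ExecStart` line should look like this (the flags are exactly those listed above):

```
ExecStart=/usr/bin/docker -d -H fd:// --bridge=cbr0 --iptables=false --ip-masq=false
```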

3.) Reload systemctl and restart docker.

```
sudo systemctl daemon-reload
sudo systemctl restart docker
```

### Install Calico on the Node

We'll install Calico using the provided `calico-node.service` systemd unit file.

```
# Install the `calicoctl` binary
wget https://github.com/projectcalico/calico-docker/releases/download/v0.9.0/calicoctl
chmod +x calicoctl
sudo mv calicoctl /usr/bin

# Fetch the calico/node container
sudo docker pull calico/node:v0.9.0

# Install, enable, and start the Calico service
sudo cp -f calico-kubernetes-master/config/node/calico-node.service /etc/systemd
sudo systemctl enable /etc/systemd/calico-node.service
sudo systemctl start calico-node.service
```

> The `calico-node` service will automatically fetch the kubernetes-calico plugin binary and install it on the host system.

2.) Use `calicoctl` to add an IP pool. We must specify the IP and port that the master's etcd is listening on.

**NOTE: This step only needs to be performed once per Kubernetes deployment, as it covers the IP ranges used by all nodes.**

```
ETCD_AUTHORITY=<MASTER_IP>:4001 calicoctl pool add 192.168.0.0/16
```
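To verify that the pool was configured, you can list the pools known to Calico, pointing at the same etcd instance as above:

```
ETCD_AUTHORITY=<MASTER_IP>:4001 calicoctl pool show
```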

### Install Kubernetes on the Node

1.) Download and install the Kubernetes binaries.

```
# Get the Kubernetes Release.
wget https://github.com/kubernetes/kubernetes/releases/download/v1.1.0/kubernetes.tar.gz

# Extract the Kubernetes binaries.
tar -xf kubernetes.tar.gz
tar -xf kubernetes/server/kubernetes-server-linux-amd64.tar.gz

# Install the `kubelet` and `kube-proxy` binaries.
sudo cp -f kubernetes/server/bin/kubelet /usr/bin
sudo cp -f kubernetes/server/bin/kube-proxy /usr/bin
```

2.) Install the `kubelet` and `kube-proxy` systemd unit files.

```
# Install the unit files
sudo cp -f calico-kubernetes-master/config/node/kubelet.service /etc/systemd
sudo cp -f calico-kubernetes-master/config/node/kube-proxy.service /etc/systemd

# Enable the unit files so that they run on boot
sudo systemctl enable /etc/systemd/kubelet.service
sudo systemctl enable /etc/systemd/kube-proxy.service

# Start the services
sudo systemctl start kubelet.service
sudo systemctl start kube-proxy.service
```

## Install the DNS Addon

Most Kubernetes deployments will require the DNS addon for service discovery.

The config repository for this guide comes with manifest files to start the DNS addon. To install DNS, do the following on your Master node.

Replace `<MASTER_IP>` in `calico-kubernetes-master/config/master/dns/skydns-rc.yaml` with your Master's IP address. Then, create `skydns-rc.yaml` and `skydns-svc.yaml` using `kubectl create -f <FILE>`.
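For example, run the following on the master, assuming the extracted repository is in the current directory and that `skydns-svc.yaml` lives alongside `skydns-rc.yaml` (an assumption; adjust the path if your copy differs):

```
kubectl create -f calico-kubernetes-master/config/master/dns/skydns-rc.yaml
kubectl create -f calico-kubernetes-master/config/master/dns/skydns-svc.yaml
```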

## Launch other Services With Calico-Kubernetes

At this point, you have a fully functioning cluster running on Kubernetes with a master and two nodes networked with Calico. You can now follow any of the [standard documentation](../../examples/) to set up other services on your cluster.

## Connectivity to outside the cluster

Because containers in this guide have private `192.168.0.0/16` IPs, you will need NAT to allow connectivity between containers and the internet. However, in a production data center deployment, NAT is not always necessary, since Calico can peer with the data center's border routers over BGP.

### NAT on the nodes

The simplest method for enabling connectivity from containers to the internet is to use an `iptables` masquerade rule. This is the standard mechanism recommended in the [Kubernetes GCE environment](../../docs/admin/networking.md#google-compute-engine-gce).

We need to NAT traffic that has a destination outside of the cluster. Cluster-internal traffic includes the Kubernetes master/nodes and the traffic within the container IP subnet. A suitable masquerade chain would follow the pattern below, replacing the following variables:
- `CONTAINER_SUBNET`: The cluster-wide subnet from which container IPs are chosen. Run `ETCD_AUTHORITY=127.0.0.1:6666 calicoctl pool show` on the Kubernetes master to find your configured container subnet.
- `KUBERNETES_HOST_SUBNET`: The subnet from which Kubernetes node / master IP addresses have been chosen.
- `HOST_INTERFACE`: The interface on the Kubernetes node which is used for external connectivity. The above example uses `eth0`.
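A minimal sketch of such a masquerade chain follows. This is illustrative only, not the exact chain from the config repository: the chain name and example subnet values are assumptions, and you should substitute the variables described above:

```
# Example values -- substitute your own (these are assumptions for illustration)
CONTAINER_SUBNET=192.168.0.0/16
KUBERNETES_HOST_SUBNET=172.18.18.0/24
HOST_INTERFACE=eth0

# Create a dedicated NAT chain (the name CALICO-MASQ is illustrative)
sudo iptables -t nat -N CALICO-MASQ

# Cluster-internal destinations are returned un-NATed
sudo iptables -t nat -A CALICO-MASQ -d $CONTAINER_SUBNET -j RETURN
sudo iptables -t nat -A CALICO-MASQ -d $KUBERNETES_HOST_SUBNET -j RETURN

# Everything else leaving via the host interface is masqueraded
sudo iptables -t nat -A CALICO-MASQ -o $HOST_INTERFACE -j MASQUERADE

# Send container-sourced traffic through the chain
sudo iptables -t nat -A POSTROUTING -s $CONTAINER_SUBNET -j CALICO-MASQ
```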

This chain should be applied on the master and all nodes.

### NAT at the border router

In a data center environment, it is recommended to configure Calico to peer with the border routers over BGP. This means that the container IPs will be routable anywhere in the data center, and so NAT is not needed on the nodes (though it may be enabled at the data center edge to allow outbound-only internet connectivity).

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->