Updates to fedora ansible guide.

Avesh Agarwal 2015-07-19 11:53:44 -04:00
parent e3521a8e74
commit ace7b96d2a

Configuring kubernetes on Fedora via Ansible offers a simple way to quickly create a clustered environment with little effort.
- [Prerequisites](#prerequisites)
- [Architecture of the cluster](#architecture-of-the-cluster)
- [Setting up ansible access to your nodes](#setting-up-ansible-access-to-your-nodes)
- [Setting up the cluster](#setting-up-the-cluster)
- [Testing and using your new cluster](#testing-and-using-your-new-cluster)
## Prerequisites
1. Host able to run ansible and able to clone the following repo: [kubernetes](https://github.com/GoogleCloudPlatform/kubernetes.git)
2. A Fedora 21+ host to act as cluster master
3. As many Fedora 21+ hosts as you would like to act as cluster nodes
The hosts can be virtual or bare metal. Ansible will take care of the rest of the configuration for you - configuring networking, installing packages, handling the firewall, etc. This example will use one master and two nodes.
## Architecture of the cluster
A Kubernetes cluster requires etcd, a master, and n nodes, so we will create a cluster with three hosts, for example:
```console
master,etcd = kube-master.example.com
node1 = kube-node-01.example.com
node2 = kube-node-02.example.com
```
**Make sure your local machine has**
- ansible (must be 1.9.0+)
- git
- python-netaddr
If not, install them:
```sh
yum install -y ansible git python-netaddr
```
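Ansible must be 1.9.0 or newer for these playbooks, so it is worth confirming what yum actually installed. A quick check (exact output will vary):

```sh
ansible --version          # should report 1.9.0 or later
python -c 'import netaddr' # exits quietly if python-netaddr is installed
```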
**Now clone down the kubernetes repository**
```sh
git clone https://github.com/GoogleCloudPlatform/kubernetes.git
cd kubernetes/contrib/ansible
```
**Tell ansible about each machine and its role in your cluster**
Get the IP addresses or resolvable hostnames of the master and nodes. Add those to the `~/kubernetes/contrib/ansible/inventory` file on the host running Ansible.
```console
[masters]
kube-master.example.com
[etcd]
kube-master.example.com
[nodes]
kube-node-01.example.com
kube-node-02.example.com
```
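At this point you can ask ansible to contact every host in the inventory using its built-in `ping` module. This is only a sanity check that the inventory parses and the hosts are reachable; it may fail until ssh access is configured in the next section, which is expected:

```sh
ansible -i inventory all -m ping
```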
## Setting up ansible access to your nodes
If you are already running on a machine which has passwordless ssh access to the kube-master and kube-node-{01,02} nodes, and 'sudo' privileges, simply set the value of `ansible_ssh_user` in `~/kubernetes/contrib/ansible/group_vars/all.yml` to the username you use to ssh to the nodes (e.g. `fedora`), and proceed to the next step.
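For example, if your hosts were provisioned from cloud images with a `fedora` user that has passwordless sudo, the setting would simply be:

```yaml
ansible_ssh_user: fedora
```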
*Otherwise*, set up ssh on the machines like so (you will need to know the root password for all machines in the cluster).
edit: ~/kubernetes/contrib/ansible/group_vars/all.yml
```yaml
ansible_ssh_user: root
```
**Configuring ssh access to the cluster**
If you already have ssh access to every machine using ssh public keys you may skip to [setting up the cluster](#setting-up-the-cluster)
Make sure your local machine (root) has an ssh key pair; if not, generate one:
```sh
echo "password" > ~/rootpassword
ssh-keygen
```
Copy the ssh public key to **all** nodes in the cluster
```sh
for node in kube-master.example.com kube-node-01.example.com kube-node-02.example.com; do
ssh-copy-id ${node}
done
```
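Once the key is on every host, logging in should no longer prompt for a password. A quick way to verify, reusing the hostnames from the example inventory:

```sh
for node in kube-master.example.com kube-node-01.example.com kube-node-02.example.com; do
  ssh ${node} hostname
done
```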
## Setting up the cluster
Although the default values of the variables in `~/kubernetes/contrib/ansible/group_vars/all.yml` should be good enough, change them as needed.
edit: ~/kubernetes/contrib/ansible/group_vars/all.yml
**Configure the IP addresses used for services**
Each kubernetes service gets its own IP address. These are not real IPs. You need only select a range of IPs which are not in use elsewhere in your environment.
```yaml
kube_service_addresses: 10.254.0.0/16
```
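Any unused private block works here; the only requirement is that it not collide with addresses already routed in your environment (or with the flannel overlay below). For example, this illustrative alternative:

```yaml
kube_service_addresses: 172.30.0.0/16
```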
**Managing flannel**
Modify `flannel_subnet`, `flannel_prefix` and `flannel_host_prefix` only if the defaults are not appropriate for your cluster.
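For orientation, a sketch of how the three variables fit together in `all.yml`; the values below are illustrative only, describing an overlay network carved from a /12 in which each host receives a /24 for its pods:

```yaml
flannel_subnet: 172.16.0.0
flannel_prefix: 12
flannel_host_prefix: 24
```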
**Managing add-on services in your cluster**
Set `cluster_logging` to false or true (default) to disable or enable logging with elasticsearch.
```yaml
cluster_logging: true
```
Set `cluster_monitoring` to true (default) or false to enable or disable cluster monitoring with heapster and influxdb.
```yaml
cluster_monitoring: true
```
Set `dns_setup` to true (recommended) or false to enable or disable the whole DNS configuration.
```yaml
dns_setup: true
```
**Tell ansible to get to work!**
This will finally set up your whole kubernetes cluster for you.
```sh
cd ~/kubernetes/contrib/ansible/
./setup.sh
```
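Ansible playbooks are generally idempotent, so if a run fails partway through it is safe to fix the problem and run the script again. Assuming `setup.sh` is a thin wrapper that forwards its arguments to `ansible-playbook` (worth confirming by reading the script), you can also pass flags through, e.g. for verbose output:

```sh
./setup.sh -vv
```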
## Testing and using your new cluster
That's all there is to it. It's really that easy. At this point you should have a functioning kubernetes cluster.
**Show kubernetes nodes**
Run the following on the kube-master:
```sh
kubectl get nodes
```
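Illustrative output (the exact columns vary by kubernetes version); both nodes should show up and report Ready:

```console
NAME                       LABELS   STATUS
kube-node-01.example.com   <none>   Ready
kube-node-02.example.com   <none>   Ready
```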
**Show services running on masters and nodes**
```sh
systemctl | grep -i kube
```
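Roughly what to expect: on the master, units for the apiserver, controller-manager and scheduler; on each node, the kubelet and proxy. For example, on a node (illustrative output):

```console
kube-proxy.service    loaded active running   Kubernetes Kube-Proxy Server
kubelet.service       loaded active running   Kubernetes Kubelet Server
```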
**Show firewall rules on the masters and nodes**
```sh
iptables -nvL
```
**Create /tmp/apache.json on the master with the following contents and deploy pod**
```json
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "fedoraapache",
    "labels": {
      "name": "fedoraapache"
    }
  },
  "spec": {
    "containers": [
      {
        "name": "fedoraapache",
        "image": "fedora/apache",
        "ports": [
          {
            "hostPort": 80,
            "containerPort": 80
          }
        ]
      }
    ]
  }
}
```
```sh
kubectl create -f /tmp/apache.json
```
**Check where the pod was created**
```sh
kubectl get pods
```
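The pod (named `fedoraapache` in the example above) will sit in a pending state while the node pulls the container image; once it reports Running you can move on. Illustrative output (column layout varies by version):

```console
NAME           READY   STATUS    RESTARTS   AGE
fedoraapache   1/1     Running   0          1m
```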
**Check Docker status on nodes**
```sh
docker ps
docker images
```
**After the pod is 'Running', check web server access on the node**
```sh
curl http://localhost
```