Improve markdown highlighting in fedora getting started guides.

This commit is contained in:
Alex Robinson 2015-07-18 19:01:59 -07:00
parent 72106a2de3
commit bfd85cefad
3 changed files with 59 additions and 56 deletions

View File

@ -58,7 +58,7 @@ Ansible will take care of the rest of the configuration for you - configuring ne
A Kubernetes cluster requires etcd, a master, and n nodes, so we will create a cluster with three hosts, for example:
```
```console
fed1 (master,etcd) = 192.168.121.205
fed2 (node) = 192.168.121.84
fed3 (node) = 192.168.121.116
@ -71,7 +71,7 @@ A Kubernetes cluster requires etcd, a master, and n nodes, so we will create a c
**then we just clone down the kubernetes-ansible repository**
```
```sh
yum install -y ansible git
git clone https://github.com/eparis/kubernetes-ansible.git
cd kubernetes-ansible
@ -83,7 +83,7 @@ Get the IP addresses from the master and nodes. Add those to the `inventory` fi
We will set the kube_ip_addr to '10.254.0.[1-3]' for now. The reason we do this is explained later; it might work for you as a default.
```
```console
[masters]
192.168.121.205
@ -103,7 +103,7 @@ If you already are running on a machine which has passwordless ssh access to the
edit: group_vars/all.yml
```
```yaml
ansible_ssh_user: root
```
@ -115,7 +115,7 @@ If you already have ssh access to every machine using ssh public keys you may sk
The password file should contain the root password for every machine in the cluster. It will be used to lay down your ssh public key. Make sure each machine's sshd config allows password logins from root.
```
```sh
echo "password" > ~/rootpassword
```
@ -123,7 +123,7 @@ echo "password" > ~/rootpassword
After this is completed, ansible should be able to ssh into any of the machines you're configuring.
```
```sh
ansible-playbook -i inventory ping.yml # This will look like it fails, that's ok
```
@ -131,7 +131,7 @@ ansible-playbook -i inventory ping.yml # This will look like it fails, that's ok
Again, you can skip this step if your ansible machine has ssh access to the nodes you are going to use in the kubernetes cluster.
```
```sh
ansible-playbook -i inventory keys.yml
```
@ -147,7 +147,7 @@ The IP address pool used to assign addresses to pods for each node is the `kube_
For this example, as shown earlier, we can do something like this...
```
```console
[minions]
192.168.121.84 kube_ip_addr=10.254.0.1
192.168.121.116 kube_ip_addr=10.254.0.2
@ -163,7 +163,7 @@ Flannel is a cleaner mechanism to use, and is the recommended choice.
Currently, you essentially have to (1) update group_vars/all.yml, and then (2) run
```
```sh
ansible-playbook -i inventory flannel.yml
```
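Once the playbook finishes you can sanity-check a node; this is just a quick verification sketch, assuming the playbook installs and starts flanneld on each node:

```sh
# Confirm the flanneld service is running on a node
systemctl status flanneld

# A flannel interface (flannel0 or flannel.1, depending on the backend) should have an address
ip -4 addr | grep -i flannel
```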
@ -172,7 +172,7 @@ ansible-playbook -i inventory flannel.yml
On EACH node, make sure NetworkManager is installed and the "NetworkManager" service is running, then you can run the network manager playbook...
```
```sh
ansible-playbook -i inventory ./old-network-config/hack-network.yml
```
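The playbook above assumes NetworkManager is already installed and running on every node; if you need to check or set that up first, something like the following (standard Fedora commands, not part of the playbook) should do:

```sh
# Install NetworkManager if missing, then make sure it is enabled and active
yum -y install NetworkManager
systemctl enable NetworkManager
systemctl start NetworkManager
systemctl status NetworkManager
```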
@ -184,7 +184,7 @@ Each kubernetes service gets its own IP address. These are not real IPs. You n
edit: group_vars/all.yml
```
```yaml
kube_service_addresses: 10.254.0.0/16
```
@ -192,7 +192,7 @@ kube_service_addresses: 10.254.0.0/16
This will finally set up your whole kubernetes cluster for you.
```
```sh
ansible-playbook -i inventory setup.yml
```
@ -203,19 +203,19 @@ That's all there is to it. It's really that easy. At this point you should hav
**Show services running on masters and nodes.**
```
```sh
systemctl | grep -i kube
```
**Show firewall rules on the masters and nodes.**
```
```sh
iptables -nvL
```
**Create the following apache.json file and deploy the pod to a node.**
```
```sh
cat << EOF > apache.json
{
"kind": "Pod",
@ -251,7 +251,7 @@ EOF
**Check where the pod was created.**
```
```sh
kubectl get pods
```
@ -263,14 +263,14 @@ If you see 172 in the IP fields, networking was not setup correctly, and you may
**Check Docker status on node.**
```
```sh
docker ps
docker images
```
**After the pod is 'Running', check web server access on the node.**
```
```sh
curl http://localhost
```
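If cluster networking is set up correctly, the pod's IP address (as shown by `kubectl get pods`) should typically also be reachable from the other hosts; a hypothetical check, with the placeholder replaced by the actual pod IP:

```sh
# <pod-ip> is a placeholder for the IP reported by `kubectl get pods`
curl http://<pod-ip>
```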

View File

@ -65,26 +65,26 @@ fed-node = 192.168.121.65
* The [--enablerepo=updates-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) directive in the yum command below ensures that the most recent pre-release Kubernetes version is installed. This should be a more recent version than the Fedora "stable" release for Kubernetes that you would get without the directive.
* If you want the very latest Kubernetes release [you can download and yum install the RPM directly from Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the yum install command below.
```
```sh
yum -y install --enablerepo=updates-testing kubernetes
```
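To confirm which package version was actually installed (for example, to check that the updates-testing build was pulled in), you can query rpm:

```sh
# Show the installed kubernetes package version
rpm -q kubernetes
```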
* Install etcd and iptables
```
```sh
yum -y install etcd iptables
```
* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS). Make sure that communication works between fed-master and fed-node by using a utility such as ping.
```
```sh
echo "192.168.121.9 fed-master
192.168.121.65 fed-node" >> /etc/hosts
```
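To confirm that name resolution and connectivity work in both directions, a quick check might be:

```sh
# On fed-master
ping -c 3 fed-node

# On fed-node
ping -c 3 fed-master
```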
* Edit /etc/kubernetes/config which will be the same on all hosts (master and node) to contain:
```
```sh
# Comma separated list of nodes in the etcd cluster
KUBE_MASTER="--master=http://fed-master:8080"
@ -100,7 +100,7 @@ KUBE_ALLOW_PRIV="--allow_privileged=false"
* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers. Please note that iptables-services does not exist on a default Fedora Server install.
```
```sh
systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld
```
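You can verify afterwards that no firewall manager is still active, for example:

```sh
# Should report "inactive" (or "unknown" if the unit is not installed)
systemctl is-active firewalld
```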
@ -109,7 +109,7 @@ systemctl stop iptables-services firewalld
* Edit /etc/kubernetes/apiserver to appear as such. The service_cluster_ip_range IP addresses must be an unused block of addresses that is not used anywhere else. They do not need to be routed or assigned to anything.
```
```sh
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
@ -125,13 +125,13 @@ KUBE_API_ARGS=""
* Edit /etc/etcd/etcd.conf so that etcd listens on all available IP addresses instead of only 127.0.0.1. Otherwise you will get errors like "connection refused".
```
```sh
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"
```
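Once etcd has been started (next step), you can check from the node that it is reachable on a non-loopback address; this sketch assumes the default client port 4001 used elsewhere in this guide:

```sh
# Run from fed-node; should return the etcd version rather than "connection refused"
curl -s http://fed-master:4001/version
```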
* Start the appropriate services on master:
```
```sh
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES
systemctl enable $SERVICES
@ -159,7 +159,7 @@ done
Now create a node object internally in your kubernetes cluster by running:
```
```console
$ kubectl create -f ./node.json
$ kubectl get nodes
@ -180,7 +180,7 @@ a kubernetes node (fed-node) below.
* Edit /etc/kubernetes/kubelet to appear as such:
```
```sh
###
# kubernetes kubelet (node) config
@ -199,7 +199,7 @@ KUBELET_API_SERVER="--api_servers=http://fed-master:8080"
* Start the appropriate services on the node (fed-node).
```
```sh
for SERVICES in kube-proxy kubelet docker; do
systemctl restart $SERVICES
systemctl enable $SERVICES
@ -209,7 +209,7 @@ done
* Check to make sure that the cluster can now see fed-node on fed-master, and that its status changes to _Ready_.
```
```console
kubectl get nodes
NAME LABELS STATUS
fed-node name=fed-node-label Ready
@ -219,8 +219,8 @@ fed-node name=fed-node-label Ready
To delete _fed-node_ from your kubernetes cluster, run the following on fed-master (please do not actually run it; it is shown for information only):
```
$ kubectl delete -f ./node.json
```sh
kubectl delete -f ./node.json
```
*You should be finished!*

View File

@ -55,7 +55,7 @@ This document describes how to deploy kubernetes on multiple hosts to set up a m
* Configure flannel by creating a `flannel-config.json` in your current directory on fed-master. flannel provides udp and vxlan among other overlay networking backend options. In this guide, we choose the kernel-based vxlan backend. The contents of the json are:
```
```json
{
"Network": "18.16.0.0/16",
"SubnetLen": 24,
@ -70,14 +70,14 @@ This document describes how to deploy kubernetes on multiple hosts to set up a m
* Add the configuration to the etcd server on fed-master.
```
# etcdctl set /coreos.com/network/config < flannel-config.json
```sh
etcdctl set /coreos.com/network/config < flannel-config.json
```
* Verify the key exists in the etcd server on fed-master.
```
# etcdctl get /coreos.com/network/config
```sh
etcdctl get /coreos.com/network/config
```
## Node Setup
@ -86,7 +86,7 @@ This document describes how to deploy kubernetes on multiple hosts to set up a m
* Edit the flannel configuration file /etc/sysconfig/flanneld as follows:
```
```sh
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
@ -104,23 +104,23 @@ FLANNEL_OPTIONS=""
* Enable the flannel service.
```
# systemctl enable flanneld
```sh
systemctl enable flanneld
```
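Which of the next two steps applies depends on whether docker is already running; you can check with, for example:

```sh
# Prints "active" if docker is running, "inactive" otherwise
systemctl is-active docker
```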
* If docker is not running, then starting the flannel service is enough, and you can skip the next step.
```
# systemctl start flanneld
```sh
systemctl start flanneld
```
* If docker is already running, then stop docker, delete the docker bridge (docker0), start flanneld, and start docker again as follows. Alternatively, just reboot the system (`systemctl reboot`).
```
# systemctl stop docker
# ip link delete docker0
# systemctl start flanneld
# systemctl start docker
```sh
systemctl stop docker
ip link delete docker0
systemctl start flanneld
systemctl start docker
```
***
@ -129,7 +129,7 @@ FLANNEL_OPTIONS=""
* Now check the interfaces on the nodes. Notice there is now a flannel.1 interface, and the IP addresses of the docker0 and flannel.1 interfaces are in the same network. Docker0 is assigned a subnet (18.16.29.0/24 as shown below) on each kubernetes node out of the IP range configured above. A working output should look like this:
```
```console
# ip -4 a|grep inet
inet 127.0.0.1/8 scope host lo
inet 192.168.122.77/24 brd 192.168.122.255 scope global dynamic eth0
@ -139,8 +139,11 @@ FLANNEL_OPTIONS=""
* From any node in the cluster, check the cluster members by issuing a query to the etcd server via curl (only partial output is shown using `grep -E "\{|\}|key|value"`). If you set up a cluster with 1 master and 3 nodes, you should see one block for each node showing the subnets they have been assigned. You can associate those subnets with each node by the MAC address (VtepMAC) and IP address (Public IP) listed in the output.
```sh
curl -s http://fed-master:4001/v2/keys/coreos.com/network/subnets | python -mjson.tool
```
# curl -s http://fed-master:4001/v2/keys/coreos.com/network/subnets | python -mjson.tool
```json
{
"node": {
"key": "/coreos.com/network/subnets",
@ -162,7 +165,7 @@ FLANNEL_OPTIONS=""
* From all nodes, review the `/run/flannel/subnet.env` file. This file was generated automatically by flannel.
```
```console
# cat /run/flannel/subnet.env
FLANNEL_SUBNET=18.16.29.1/24
FLANNEL_MTU=1450
@ -173,35 +176,35 @@ FLANNEL_IPMASQ=false
* Issue the following commands on any 2 nodes:
```
#docker run -it fedora:latest bash
```console
# docker run -it fedora:latest bash
bash-4.3#
```
* This will place you inside the container. Install the iproute and iputils packages to get the ip and ping utilities. Due to a [bug](https://bugzilla.redhat.com/show_bug.cgi?id=1142311), you must modify the capabilities of the ping binary to work around the "Operation not permitted" error.
```
```console
bash-4.3# yum -y install iproute iputils
bash-4.3# setcap cap_net_raw-ep /usr/bin/ping
```
* Now note the IP address on the first node:
```
```console
bash-4.3# ip -4 a l eth0 | grep inet
inet 18.16.29.4/24 scope global eth0
```
* And also note the IP address on the other node:
```
```console
bash-4.3# ip a l eth0 | grep inet
inet 18.16.90.4/24 scope global eth0
```
* Now ping from the first node to the other node:
```
```console
bash-4.3# ping 18.16.90.4
PING 18.16.90.4 (18.16.90.4) 56(84) bytes of data.
64 bytes from 18.16.90.4: icmp_seq=1 ttl=62 time=0.275 ms