Mirror of https://github.com/k3s-io/kubernetes.git (synced 2025-08-02 00:07:50 +00:00)

Merge pull request #11542 from a-robinson/docs

Improve getting started guide and user guide docs syntax highlighting

Commit: fc69110f56
@ -105,7 +105,7 @@ For more complete applications, please look in the [examples directory](../../ex
|
||||
|
||||
## Tearing down the cluster
|
||||
|
||||
```
|
||||
```sh
|
||||
cluster/kube-down.sh
|
||||
```
|
||||
|
||||
|
@ -72,7 +72,7 @@ gpgcheck=0
|
||||
|
||||
* Install kubernetes on all hosts - centos-{master,minion}. This will also pull in etcd, docker, and cadvisor.
|
||||
|
||||
```
|
||||
```sh
|
||||
yum -y install --enablerepo=virt7-testing kubernetes
|
||||
```
|
||||
|
||||
@ -82,27 +82,27 @@ If you do not get etcd-0.4.6-7 installed with virt7-testing repo,
|
||||
|
||||
In the current virt7-testing repo, the etcd package is updated which causes service failure. To avoid this,
|
||||
|
||||
```
|
||||
```sh
|
||||
yum erase etcd
|
||||
```
|
||||
|
||||
It will uninstall the current available etcd package
|
||||
|
||||
```
|
||||
```sh
|
||||
yum install http://cbs.centos.org/kojifiles/packages/etcd/0.4.6/7.el7.centos/x86_64/etcd-0.4.6-7.el7.centos.x86_64.rpm
|
||||
yum -y install --enablerepo=virt7-testing kubernetes
|
||||
```
|
||||
|
||||
* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS)
|
||||
|
||||
```
|
||||
```sh
|
||||
echo "192.168.121.9 centos-master
|
||||
192.168.121.65 centos-minion" >> /etc/hosts
|
||||
```
|
||||
|
||||
* Edit /etc/kubernetes/config which will be the same on all hosts to contain:
|
||||
|
||||
```
|
||||
```sh
|
||||
# Comma separated list of nodes in the etcd cluster
|
||||
KUBE_ETCD_SERVERS="--etcd_servers=http://centos-master:4001"
|
||||
|
||||
@ -118,7 +118,7 @@ KUBE_ALLOW_PRIV="--allow_privileged=false"
|
||||
|
||||
* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers
|
||||
|
||||
```
|
||||
```sh
|
||||
systemctl disable iptables-services firewalld
|
||||
systemctl stop iptables-services firewalld
|
||||
```
|
||||
@ -127,7 +127,7 @@ systemctl stop iptables-services firewalld
|
||||
|
||||
* Edit /etc/kubernetes/apiserver to appear as such:
|
||||
|
||||
```
|
||||
```sh
|
||||
# The address on the local server to listen to.
|
||||
KUBE_API_ADDRESS="--address=0.0.0.0"
|
||||
|
||||
@ -149,7 +149,7 @@ KUBE_API_ARGS=""
|
||||
|
||||
* Start the appropriate services on master:
|
||||
|
||||
```
|
||||
```sh
|
||||
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
|
||||
systemctl restart $SERVICES
|
||||
systemctl enable $SERVICES
|
||||
@ -163,7 +163,7 @@ done
|
||||
|
||||
* Edit /etc/kubernetes/kubelet to appear as such:
|
||||
|
||||
```
|
||||
```sh
|
||||
# The address for the info server to serve on
|
||||
KUBELET_ADDRESS="--address=0.0.0.0"
|
||||
|
||||
@ -179,7 +179,7 @@ KUBELET_ARGS=""
|
||||
|
||||
* Start the appropriate services on node (centos-minion).
|
||||
|
||||
```
|
||||
```sh
|
||||
for SERVICES in kube-proxy kubelet docker; do
|
||||
systemctl restart $SERVICES
|
||||
systemctl enable $SERVICES
|
||||
@ -191,8 +191,8 @@ done
|
||||
|
||||
* Check to make sure the cluster can see the node (on centos-master)
|
||||
|
||||
```
|
||||
kubectl get nodes
|
||||
```console
|
||||
$ kubectl get nodes
|
||||
NAME LABELS STATUS
|
||||
centos-minion <none> Ready
|
||||
```
|
||||
|
@ -56,7 +56,7 @@ In this guide I will demonstrate how to deploy a Kubernetes cluster to Azure clo
|
||||
|
||||
To get started, you need to checkout the code:
|
||||
|
||||
```
|
||||
```sh
|
||||
git clone https://github.com/GoogleCloudPlatform/kubernetes
|
||||
cd kubernetes/docs/getting-started-guides/coreos/azure/
|
||||
```
|
||||
@ -65,13 +65,13 @@ You will need to have [Node.js installed](http://nodejs.org/download/) on you ma
|
||||
|
||||
First, you need to install some of the dependencies with
|
||||
|
||||
```
|
||||
```sh
|
||||
npm install
|
||||
```
|
||||
|
||||
Now, all you need to do is:
|
||||
|
||||
```
|
||||
```sh
|
||||
./azure-login.js -u <your_username>
|
||||
./create-kubernetes-cluster.js
|
||||
```
|
||||
@ -82,7 +82,7 @@ This script will provision a cluster suitable for production use, where there is
|
||||
|
||||
Once the creation of Azure VMs has finished, you should see the following:
|
||||
|
||||
```
|
||||
```console
|
||||
...
|
||||
azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_1c1496016083b4_ssh_conf <hostname>`
|
||||
azure_wrapper/info: The hosts in this deployment are:
|
||||
@ -92,7 +92,7 @@ azure_wrapper/info: Saved state into `./output/kube_1c1496016083b4_deployment.ym
|
||||
|
||||
Let's login to the master node like so:
|
||||
|
||||
```
|
||||
```sh
|
||||
ssh -F ./output/kube_1c1496016083b4_ssh_conf kube-00
|
||||
```
|
||||
|
||||
@ -100,7 +100,7 @@ ssh -F ./output/kube_1c1496016083b4_ssh_conf kube-00
|
||||
|
||||
Check there are 2 nodes in the cluster:
|
||||
|
||||
```
|
||||
```console
|
||||
core@kube-00 ~ $ kubectl get nodes
|
||||
NAME LABELS STATUS
|
||||
kube-01 environment=production Ready
|
||||
@ -111,7 +111,7 @@ kube-02 environment=production Ready
|
||||
|
||||
Let's follow the Guestbook example now:
|
||||
|
||||
```
|
||||
```sh
|
||||
cd guestbook-example
|
||||
kubectl create -f examples/guestbook/redis-master-controller.yaml
|
||||
kubectl create -f examples/guestbook/redis-master-service.yaml
|
||||
@ -123,7 +123,7 @@ kubectl create -f examples/guestbook/frontend-service.yaml
|
||||
|
||||
You need to wait for the pods to get deployed, run the following and wait for `STATUS` to change from `Unknown`, through `Pending` to `Running`.
|
||||
|
||||
```
|
||||
```sh
|
||||
kubectl get pods --watch
|
||||
```
|
||||
|
||||
@ -131,7 +131,7 @@ kubectl get pods --watch
|
||||
|
||||
Eventually you should see:
|
||||
|
||||
```
|
||||
```console
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
frontend-8anh8 1/1 Running 0 1m
|
||||
frontend-8pq5r 1/1 Running 0 1m
|
||||
@ -149,14 +149,14 @@ You will need to open another terminal window on your machine and go to the same
|
||||
|
||||
First, lets set the size of new VMs:
|
||||
|
||||
```
|
||||
```sh
|
||||
export AZ_VM_SIZE=Large
|
||||
```
|
||||
|
||||
Now, run scale script with state file of the previous deployment and number of nodes to add:
|
||||
|
||||
```
|
||||
./scale-kubernetes-cluster.js ./output/kube_1c1496016083b4_deployment.yml 2
|
||||
```console
|
||||
core@kube-00 ~ $ ./scale-kubernetes-cluster.js ./output/kube_1c1496016083b4_deployment.yml 2
|
||||
...
|
||||
azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_8f984af944f572_ssh_conf <hostname>`
|
||||
azure_wrapper/info: The hosts in this deployment are:
|
||||
@ -175,7 +175,7 @@ azure_wrapper/info: Saved state into `./output/kube_8f984af944f572_deployment.ym
|
||||
|
||||
Back on `kube-00`:
|
||||
|
||||
```
|
||||
```console
|
||||
core@kube-00 ~ $ kubectl get nodes
|
||||
NAME LABELS STATUS
|
||||
kube-01 environment=production Ready
|
||||
@ -188,7 +188,7 @@ You can see that two more nodes joined happily. Let's scale the number of Guestb
|
||||
|
||||
First, double-check how many replication controllers there are:
|
||||
|
||||
```
|
||||
```console
|
||||
core@kube-00 ~ $ kubectl get rc
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 3
|
||||
@ -198,7 +198,7 @@ redis-slave slave kubernetes/redis-slave:v2 name=r
|
||||
|
||||
As there are 4 nodes, let's scale proportionally:
|
||||
|
||||
```
|
||||
```console
|
||||
core@kube-00 ~ $ kubectl scale --replicas=4 rc redis-slave
|
||||
scaled
|
||||
core@kube-00 ~ $ kubectl scale --replicas=4 rc frontend
|
||||
@ -207,7 +207,7 @@ scaled
|
||||
|
||||
Check what you have now:
|
||||
|
||||
```
|
||||
```console
|
||||
core@kube-00 ~ $ kubectl get rc
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
frontend php-redis kubernetes/example-guestbook-php-redis:v2 name=frontend 4
|
||||
@ -217,7 +217,7 @@ redis-slave slave kubernetes/redis-slave:v2 name=r
|
||||
|
||||
You now will have more instances of front-end Guestbook apps and Redis slaves; and, if you look up all pods labeled `name=frontend`, you should see one running on each node.
|
||||
|
||||
```
|
||||
```console
|
||||
core@kube-00 ~/guestbook-example $ kubectl get pods -l name=frontend
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
frontend-8anh8 1/1 Running 0 3m
|
||||
@ -244,7 +244,7 @@ You should probably try deploy other [example apps](../../../../examples/) or wr
|
||||
|
||||
If you don't wish care about the Azure bill, you can tear down the cluster. It's easy to redeploy it, as you can see.
|
||||
|
||||
```
|
||||
```sh
|
||||
./destroy-cluster.js ./output/kube_8f984af944f572_deployment.yml
|
||||
```
|
||||
|
||||
|
@ -52,14 +52,14 @@ Use the [master.yaml](cloud-configs/master.yaml) and [node.yaml](cloud-configs/n
|
||||
|
||||
#### Provision the Master
|
||||
|
||||
```
|
||||
```sh
|
||||
aws ec2 create-security-group --group-name kubernetes --description "Kubernetes Security Group"
|
||||
aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 22 --cidr 0.0.0.0/0
|
||||
aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 80 --cidr 0.0.0.0/0
|
||||
aws ec2 authorize-security-group-ingress --group-name kubernetes --source-security-group-name kubernetes
|
||||
```
|
||||
|
||||
```
|
||||
```sh
|
||||
aws ec2 run-instances \
|
||||
--image-id <ami_image_id> \
|
||||
--key-name <keypair> \
|
||||
@ -71,7 +71,7 @@ aws ec2 run-instances \
|
||||
|
||||
#### Capture the private IP address
|
||||
|
||||
```
|
||||
```sh
|
||||
aws ec2 describe-instances --instance-id <master-instance-id>
|
||||
```
|
||||
|
||||
@ -81,7 +81,7 @@ Edit `node.yaml` and replace all instances of `<master-private-ip>` with the pri
|
||||
|
||||
#### Provision worker nodes
|
||||
|
||||
```
|
||||
```sh
|
||||
aws ec2 run-instances \
|
||||
--count 1 \
|
||||
--image-id <ami_image_id> \
|
||||
@ -98,7 +98,7 @@ aws ec2 run-instances \
|
||||
|
||||
#### Provision the Master
|
||||
|
||||
```
|
||||
```sh
|
||||
gcloud compute instances create master \
|
||||
--image-project coreos-cloud \
|
||||
--image <gce_image_id> \
|
||||
@ -110,7 +110,7 @@ gcloud compute instances create master \
|
||||
|
||||
#### Capture the private IP address
|
||||
|
||||
```
|
||||
```sh
|
||||
gcloud compute instances list
|
||||
```
|
||||
|
||||
@ -120,7 +120,7 @@ Edit `node.yaml` and replace all instances of `<master-private-ip>` with the pri
|
||||
|
||||
#### Provision worker nodes
|
||||
|
||||
```
|
||||
```sh
|
||||
gcloud compute instances create node1 \
|
||||
--image-project coreos-cloud \
|
||||
--image <gce_image_id> \
|
||||
@ -140,7 +140,7 @@ run `gcloud compute ssh master --ssh-flag="-R 8080:127.0.0.1:8080"`.
|
||||
|
||||
#### Create the master config-drive
|
||||
|
||||
```
|
||||
```sh
|
||||
mkdir -p /tmp/new-drive/openstack/latest/
|
||||
cp master.yaml /tmp/new-drive/openstack/latest/user_data
|
||||
hdiutil makehybrid -iso -joliet -joliet-volume-name "config-2" -joliet -o master.iso /tmp/new-drive
|
||||
@ -158,7 +158,7 @@ Edit `node.yaml` and replace all instances of `<master-private-ip>` with the pri
|
||||
|
||||
#### Create the node config-drive
|
||||
|
||||
```
|
||||
```sh
|
||||
mkdir -p /tmp/new-drive/openstack/latest/
|
||||
cp node.yaml /tmp/new-drive/openstack/latest/user_data
|
||||
hdiutil makehybrid -iso -joliet -joliet-volume-name "config-2" -joliet -o node.iso /tmp/new-drive
|
||||
|
@ -67,7 +67,7 @@ across reboots and failures.
|
||||
|
||||
Run:
|
||||
|
||||
```
|
||||
```sh
|
||||
sudo docker -H unix:///var/run/docker-bootstrap.sock run --net=host -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
|
||||
```
|
||||
|
||||
@ -114,7 +114,7 @@ The previous command should have printed a really long hash, copy this hash.
|
||||
|
||||
Now get the subnet settings from flannel:
|
||||
|
||||
```
|
||||
```sh
|
||||
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
|
||||
```
|
||||
|
||||
@ -185,7 +185,7 @@ kubectl get nodes
|
||||
|
||||
This should print:
|
||||
|
||||
```
|
||||
```console
|
||||
NAME LABELS STATUS
|
||||
127.0.0.1 kubernetes.io/hostname=127.0.0.1 Ready
|
||||
```
|
||||
|
@ -41,7 +41,7 @@ kubectl get nodes
|
||||
|
||||
That should show something like:
|
||||
|
||||
```
|
||||
```console
|
||||
NAME LABELS STATUS
|
||||
10.240.99.26 kubernetes.io/hostname=10.240.99.26 Ready
|
||||
127.0.0.1 kubernetes.io/hostname=127.0.0.1 Ready
|
||||
@ -66,7 +66,7 @@ kubectl expose rc nginx --port=80
|
||||
|
||||
This should print:
|
||||
|
||||
```
|
||||
```console
|
||||
NAME LABELS SELECTOR IP PORT(S)
|
||||
nginx <none> run=nginx <ip-addr> 80/TCP
|
||||
```
|
||||
|
@ -97,7 +97,7 @@ The previous command should have printed a really long hash, copy this hash.
|
||||
|
||||
Now get the subnet settings from flannel:
|
||||
|
||||
```
|
||||
```sh
|
||||
sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from-above-here> cat /run/flannel/subnet.env
|
||||
```
|
||||
|
||||
|
@ -100,7 +100,7 @@ kubectl get nodes
|
||||
|
||||
This should print:
|
||||
|
||||
```
|
||||
```console
|
||||
NAME LABELS STATUS
|
||||
127.0.0.1 <none> Ready
|
||||
```
|
||||
@ -123,7 +123,7 @@ kubectl expose rc nginx --port=80
|
||||
|
||||
This should print:
|
||||
|
||||
```
|
||||
```console
|
||||
NAME LABELS SELECTOR IP PORT(S)
|
||||
nginx <none> run=nginx <ip-addr> 80/TCP
|
||||
```
|
||||
|
@ -58,7 +58,7 @@ Ansible will take care of the rest of the configuration for you - configuring ne
|
||||
|
||||
A Kubernetes cluster requires etcd, a master, and n nodes, so we will create a cluster with three hosts, for example:
|
||||
|
||||
```
|
||||
```console
|
||||
fed1 (master,etcd) = 192.168.121.205
|
||||
fed2 (node) = 192.168.121.84
|
||||
fed3 (node) = 192.168.121.116
|
||||
@ -71,7 +71,7 @@ A Kubernetes cluster requires etcd, a master, and n nodes, so we will create a c
|
||||
|
||||
**then we just clone down the kubernetes-ansible repository**
|
||||
|
||||
```
|
||||
```sh
|
||||
yum install -y ansible git
|
||||
git clone https://github.com/eparis/kubernetes-ansible.git
|
||||
cd kubernetes-ansible
|
||||
@ -83,7 +83,7 @@ Get the IP addresses from the master and nodes. Add those to the `inventory` fi
|
||||
|
||||
We will set the kube_ip_addr to '10.254.0.[1-3]', for now. The reason we do this is explained later... It might work for you as a default.
|
||||
|
||||
```
|
||||
```console
|
||||
[masters]
|
||||
192.168.121.205
|
||||
|
||||
@ -103,7 +103,7 @@ If you already are running on a machine which has passwordless ssh access to the
|
||||
|
||||
edit: group_vars/all.yml
|
||||
|
||||
```
|
||||
```yaml
|
||||
ansible_ssh_user: root
|
||||
```
|
||||
|
||||
@ -115,7 +115,7 @@ If you already have ssh access to every machine using ssh public keys you may sk
|
||||
|
||||
The password file should contain the root password for every machine in the cluster. It will be used in order to lay down your ssh public key. Make sure your machines sshd-config allows password logins from root.
|
||||
|
||||
```
|
||||
```sh
|
||||
echo "password" > ~/rootpassword
|
||||
```
|
||||
|
||||
@ -123,7 +123,7 @@ echo "password" > ~/rootpassword
|
||||
|
||||
After this is completed, ansible is now enabled to ssh into any of the machines you're configuring.
|
||||
|
||||
```
|
||||
```sh
|
||||
ansible-playbook -i inventory ping.yml # This will look like it fails, that's ok
|
||||
```
|
||||
|
||||
@ -131,7 +131,7 @@ ansible-playbook -i inventory ping.yml # This will look like it fails, that's ok
|
||||
|
||||
Again, you can skip this step if your ansible machine has ssh access to the nodes you are going to use in the kubernetes cluster.
|
||||
|
||||
```
|
||||
```sh
|
||||
ansible-playbook -i inventory keys.yml
|
||||
```
|
||||
|
||||
@ -147,7 +147,7 @@ The IP address pool used to assign addresses to pods for each node is the `kube_
|
||||
|
||||
For this example, as shown earlier, we can do something like this...
|
||||
|
||||
```
|
||||
```console
|
||||
[minions]
|
||||
192.168.121.84 kube_ip_addr=10.254.0.1
|
||||
192.168.121.116 kube_ip_addr=10.254.0.2
|
||||
@ -163,7 +163,7 @@ Flannel is a cleaner mechanism to use, and is the recommended choice.
|
||||
|
||||
Currently, you essentially have to (1) update group_vars/all.yml, and then (2) run
|
||||
|
||||
```
|
||||
```sh
|
||||
ansible-playbook -i inventory flannel.yml
|
||||
```
|
||||
|
||||
@ -172,7 +172,7 @@ ansible-playbook -i inventory flannel.yml
|
||||
On EACH node, make sure NetworkManager is installed, and the service "NetworkManager" is running, then you can run
|
||||
the network manager playbook...
|
||||
|
||||
```
|
||||
```sh
|
||||
ansible-playbook -i inventory ./old-network-config/hack-network.yml
|
||||
```
|
||||
|
||||
@ -184,7 +184,7 @@ Each kubernetes service gets its own IP address. These are not real IPs. You n
|
||||
|
||||
edit: group_vars/all.yml
|
||||
|
||||
```
|
||||
```yaml
|
||||
kube_service_addresses: 10.254.0.0/16
|
||||
```
|
||||
|
||||
@ -192,7 +192,7 @@ kube_service_addresses: 10.254.0.0/16
|
||||
|
||||
This will finally setup your whole kubernetes cluster for you.
|
||||
|
||||
```
|
||||
```sh
|
||||
ansible-playbook -i inventory setup.yml
|
||||
```
|
||||
|
||||
@ -203,19 +203,19 @@ That's all there is to it. It's really that easy. At this point you should hav
|
||||
|
||||
**Show services running on masters and nodes.**
|
||||
|
||||
```
|
||||
```sh
|
||||
systemctl | grep -i kube
|
||||
```
|
||||
|
||||
**Show firewall rules on the masters and nodes.**
|
||||
|
||||
```
|
||||
```sh
|
||||
iptables -nvL
|
||||
```
|
||||
|
||||
**Create the following apache.json file and deploy pod to node.**
|
||||
|
||||
```
|
||||
```sh
|
||||
cat << EOF > apache.json
|
||||
{
|
||||
"kind": "Pod",
|
||||
@ -251,7 +251,7 @@ EOF
|
||||
|
||||
**Check where the pod was created.**
|
||||
|
||||
```
|
||||
```sh
|
||||
kubectl get pods
|
||||
```
|
||||
|
||||
@ -263,14 +263,14 @@ If you see 172 in the IP fields, networking was not setup correctly, and you may
|
||||
|
||||
**Check Docker status on node.**
|
||||
|
||||
```
|
||||
```sh
|
||||
docker ps
|
||||
docker images
|
||||
```
|
||||
|
||||
**After the pod is 'Running' Check web server access on the node.**
|
||||
|
||||
```
|
||||
```sh
|
||||
curl http://localhost
|
||||
```
|
||||
|
||||
|
@ -65,26 +65,26 @@ fed-node = 192.168.121.65
|
||||
* The [--enablerepo=update-testing](https://fedoraproject.org/wiki/QA:Updates_Testing) directive in the yum command below will ensure that the most recent Kubernetes version that is scheduled for pre-release will be installed. This should be a more recent version than the Fedora "stable" release for Kubernetes that you would get without adding the directive.
|
||||
* If you want the very latest Kubernetes release [you can download and yum install the RPM directly from Fedora Koji](http://koji.fedoraproject.org/koji/packageinfo?packageID=19202) instead of using the yum install command below.
|
||||
|
||||
```
|
||||
```sh
|
||||
yum -y install --enablerepo=updates-testing kubernetes
|
||||
```
|
||||
|
||||
* Install etcd and iptables
|
||||
|
||||
```
|
||||
```sh
|
||||
yum -y install etcd iptables
|
||||
```
|
||||
|
||||
* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS). Make sure that communication works between fed-master and fed-node by using a utility such as ping.
|
||||
|
||||
```
|
||||
```sh
|
||||
echo "192.168.121.9 fed-master
|
||||
192.168.121.65 fed-node" >> /etc/hosts
|
||||
```
|
||||
|
||||
* Edit /etc/kubernetes/config which will be the same on all hosts (master and node) to contain:
|
||||
|
||||
```
|
||||
```sh
|
||||
# Comma separated list of nodes in the etcd cluster
|
||||
KUBE_MASTER="--master=http://fed-master:8080"
|
||||
|
||||
@ -100,7 +100,7 @@ KUBE_ALLOW_PRIV="--allow_privileged=false"
|
||||
|
||||
* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers. Please note that iptables-services does not exist on default fedora server install.
|
||||
|
||||
```
|
||||
```sh
|
||||
systemctl disable iptables-services firewalld
|
||||
systemctl stop iptables-services firewalld
|
||||
```
|
||||
@ -109,7 +109,7 @@ systemctl stop iptables-services firewalld
|
||||
|
||||
* Edit /etc/kubernetes/apiserver to appear as such. The service_cluster_ip_range IP addresses must be an unused block of addresses, not used anywhere else. They do not need to be routed or assigned to anything.
|
||||
|
||||
```
|
||||
```sh
|
||||
# The address on the local server to listen to.
|
||||
KUBE_API_ADDRESS="--address=0.0.0.0"
|
||||
|
||||
@ -125,13 +125,13 @@ KUBE_API_ARGS=""
|
||||
|
||||
* Edit /etc/etcd/etcd.conf,let the etcd to listen all the ip instead of 127.0.0.1, if not, you will get the error like "connection refused"
|
||||
|
||||
```
|
||||
```sh
|
||||
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"
|
||||
```
|
||||
|
||||
* Start the appropriate services on master:
|
||||
|
||||
```
|
||||
```sh
|
||||
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
|
||||
systemctl restart $SERVICES
|
||||
systemctl enable $SERVICES
|
||||
@ -159,7 +159,7 @@ done
|
||||
|
||||
Now create a node object internally in your kubernetes cluster by running:
|
||||
|
||||
```
|
||||
```console
|
||||
$ kubectl create -f ./node.json
|
||||
|
||||
$ kubectl get nodes
|
||||
@ -180,7 +180,7 @@ a kubernetes node (fed-node) below.
|
||||
|
||||
* Edit /etc/kubernetes/kubelet to appear as such:
|
||||
|
||||
```
|
||||
```sh
|
||||
###
|
||||
# kubernetes kubelet (node) config
|
||||
|
||||
@ -199,7 +199,7 @@ KUBELET_API_SERVER="--api_servers=http://fed-master:8080"
|
||||
|
||||
* Start the appropriate services on the node (fed-node).
|
||||
|
||||
```
|
||||
```sh
|
||||
for SERVICES in kube-proxy kubelet docker; do
|
||||
systemctl restart $SERVICES
|
||||
systemctl enable $SERVICES
|
||||
@ -209,7 +209,7 @@ done
|
||||
|
||||
* Check to make sure now the cluster can see the fed-node on fed-master, and its status changes to _Ready_.
|
||||
|
||||
```
|
||||
```console
|
||||
kubectl get nodes
|
||||
NAME LABELS STATUS
|
||||
fed-node name=fed-node-label Ready
|
||||
@ -219,8 +219,8 @@ fed-node name=fed-node-label Ready
|
||||
|
||||
To delete _fed-node_ from your kubernetes cluster, one should run the following on fed-master (Please do not do it, it is just for information):
|
||||
|
||||
```
|
||||
$ kubectl delete -f ./node.json
|
||||
```sh
|
||||
kubectl delete -f ./node.json
|
||||
```
|
||||
|
||||
*You should be finished!*
|
||||
|
@ -55,7 +55,7 @@ This document describes how to deploy kubernetes on multiple hosts to set up a m
|
||||
|
||||
* Configure flannel by creating a `flannel-config.json` in your current directory on fed-master. flannel provides udp and vxlan among other overlay networking backend options. In this guide, we choose kernel based vxlan backend. The contents of the json are:
|
||||
|
||||
```
|
||||
```json
|
||||
{
|
||||
"Network": "18.16.0.0/16",
|
||||
"SubnetLen": 24,
|
||||
@ -70,14 +70,14 @@ This document describes how to deploy kubernetes on multiple hosts to set up a m
|
||||
|
||||
* Add the configuration to the etcd server on fed-master.
|
||||
|
||||
```
|
||||
# etcdctl set /coreos.com/network/config < flannel-config.json
|
||||
```sh
|
||||
etcdctl set /coreos.com/network/config < flannel-config.json
|
||||
```
|
||||
|
||||
* Verify the key exists in the etcd server on fed-master.
|
||||
|
||||
```
|
||||
# etcdctl get /coreos.com/network/config
|
||||
```sh
|
||||
etcdctl get /coreos.com/network/config
|
||||
```
|
||||
|
||||
## Node Setup
|
||||
@ -86,7 +86,7 @@ This document describes how to deploy kubernetes on multiple hosts to set up a m
|
||||
|
||||
* Edit the flannel configuration file /etc/sysconfig/flanneld as follows:
|
||||
|
||||
```
|
||||
```sh
|
||||
# Flanneld configuration options
|
||||
|
||||
# etcd url location. Point this to the server where etcd runs
|
||||
@ -104,23 +104,23 @@ FLANNEL_OPTIONS=""
|
||||
|
||||
* Enable the flannel service.
|
||||
|
||||
```
|
||||
# systemctl enable flanneld
|
||||
```sh
|
||||
systemctl enable flanneld
|
||||
```
|
||||
|
||||
* If docker is not running, then starting flannel service is enough and skip the next step.
|
||||
|
||||
```
|
||||
# systemctl start flanneld
|
||||
```sh
|
||||
systemctl start flanneld
|
||||
```
|
||||
|
||||
* If docker is already running, then stop docker, delete docker bridge (docker0), start flanneld and restart docker as follows. Another alternative is to just reboot the system (`systemctl reboot`).
|
||||
|
||||
```
|
||||
# systemctl stop docker
|
||||
# ip link delete docker0
|
||||
# systemctl start flanneld
|
||||
# systemctl start docker
|
||||
```sh
|
||||
systemctl stop docker
|
||||
ip link delete docker0
|
||||
systemctl start flanneld
|
||||
systemctl start docker
|
||||
```
|
||||
|
||||
***
|
||||
@ -129,7 +129,7 @@ FLANNEL_OPTIONS=""
|
||||
|
||||
* Now check the interfaces on the nodes. Notice there is now a flannel.1 interface, and the ip addresses of docker0 and flannel.1 interfaces are in the same network. You will notice that docker0 is assigned a subnet (18.16.29.0/24 as shown below) on each kubernetes node out of the IP range configured above. A working output should look like this:
|
||||
|
||||
```
|
||||
```console
|
||||
# ip -4 a|grep inet
|
||||
inet 127.0.0.1/8 scope host lo
|
||||
inet 192.168.122.77/24 brd 192.168.122.255 scope global dynamic eth0
|
||||
@ -139,8 +139,11 @@ FLANNEL_OPTIONS=""
|
||||
|
||||
* From any node in the cluster, check the cluster members by issuing a query to etcd server via curl (only partial output is shown using `grep -E "\{|\}|key|value"`). If you set up a 1 master and 3 nodes cluster, you should see one block for each node showing the subnets they have been assigned. You can associate those subnets to each node by the MAC address (VtepMAC) and IP address (Public IP) that is listed in the output.
|
||||
|
||||
```sh
|
||||
curl -s http://fed-master:4001/v2/keys/coreos.com/network/subnets | python -mjson.tool
|
||||
```
|
||||
# curl -s http://fed-master:4001/v2/keys/coreos.com/network/subnets | python -mjson.tool
|
||||
|
||||
```json
|
||||
{
|
||||
"node": {
|
||||
"key": "/coreos.com/network/subnets",
|
||||
@ -162,7 +165,7 @@ FLANNEL_OPTIONS=""
|
||||
|
||||
* From all nodes, review the `/run/flannel/subnet.env` file. This file was generated automatically by flannel.
|
||||
|
||||
```
|
||||
```console
|
||||
# cat /run/flannel/subnet.env
|
||||
FLANNEL_SUBNET=18.16.29.1/24
|
||||
FLANNEL_MTU=1450
|
||||
@ -173,35 +176,35 @@ FLANNEL_IPMASQ=false
|
||||
|
||||
* Issue the following commands on any 2 nodes:
|
||||
|
||||
```
|
||||
#docker run -it fedora:latest bash
|
||||
```console
|
||||
# docker run -it fedora:latest bash
|
||||
bash-4.3#
|
||||
```
|
||||
|
||||
* This will place you inside the container. Install iproute and iputils packages to install ip and ping utilities. Due to a [bug](https://bugzilla.redhat.com/show_bug.cgi?id=1142311), it is required to modify capabilities of ping binary to work around "Operation not permitted" error.
|
||||
|
||||
```
|
||||
```console
|
||||
bash-4.3# yum -y install iproute iputils
|
||||
bash-4.3# setcap cap_net_raw-ep /usr/bin/ping
|
||||
```
|
||||
|
||||
* Now note the IP address on the first node:
|
||||
|
||||
```
|
||||
```console
|
||||
bash-4.3# ip -4 a l eth0 | grep inet
|
||||
inet 18.16.29.4/24 scope global eth0
|
||||
```
|
||||
|
||||
* And also note the IP address on the other node:
|
||||
|
||||
```
|
||||
```console
|
||||
bash-4.3# ip a l eth0 | grep inet
|
||||
inet 18.16.90.4/24 scope global eth0
|
||||
```
|
||||
|
||||
* Now ping from the first node to the other node:
|
||||
|
||||
```
|
||||
```console
|
||||
bash-4.3# ping 18.16.90.4
|
||||
PING 18.16.90.4 (18.16.90.4) 56(84) bytes of data.
|
||||
64 bytes from 18.16.90.4: icmp_seq=1 ttl=62 time=0.275 ms
|
||||
|
@ -138,13 +138,13 @@ potential issues with client/server version skew.
|
||||
|
||||
Once `kubectl` is in your path, you can use it to look at your cluster. E.g., running:
|
||||
|
||||
```shell
|
||||
```console
|
||||
$ kubectl get --all-namespaces services
|
||||
```
|
||||
|
||||
should show a set of [services](../user-guide/services.md) that look something like this:
|
||||
|
||||
```shell
|
||||
```console
|
||||
NAMESPACE NAME LABELS SELECTOR IP(S) PORT(S)
|
||||
default kubernetes component=apiserver,provider=kubernetes <none> 10.0.0.1 443/TCP
|
||||
kube-system kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS k8s-app=kube-dns 10.0.0.10 53/UDP
|
||||
@ -159,7 +159,7 @@ kube-system monitoring-influxdb kubernetes.io/cluster-service=true,kubernete
|
||||
Similarly, you can take a look at the set of [pods](../user-guide/pods.md) that were created during cluster startup.
|
||||
You can do this via the
|
||||
|
||||
```shell
|
||||
```console
|
||||
$ kubectl get --all-namespaces pods
|
||||
```
|
||||
|
||||
@ -167,7 +167,7 @@ command.
|
||||
|
||||
You'll see a list of pods that looks something like this (the name specifics will be different):
|
||||
|
||||
```shell
|
||||
```console
|
||||
NAMESPACE NAME READY STATUS RESTARTS AGE
|
||||
kube-system fluentd-cloud-logging-kubernetes-minion-63uo 1/1 Running 0 14m
|
||||
kube-system fluentd-cloud-logging-kubernetes-minion-c1n9 1/1 Running 0 14m
|
||||
|
@ -143,7 +143,7 @@ No pods will be available before starting a container:
|
||||
|
||||
We'll follow the aws-coreos example. Create a pod manifest: `pod.json`
|
||||
|
||||
```
|
||||
```json
|
||||
{
|
||||
"apiVersion": "v1",
|
||||
"kind": "Pod",
|
||||
|
@ -68,7 +68,7 @@ Getting started with libvirt CoreOS
|
||||
|
||||
You can test it with the following command:
|
||||
|
||||
```
|
||||
```sh
|
||||
virsh -c qemu:///system pool-list
|
||||
```
|
||||
|
||||
@ -76,7 +76,7 @@ If you have access error messages, please read https://libvirt.org/acl.html and
|
||||
|
||||
In short, if your libvirt has been compiled with Polkit support (ex: Arch, Fedora 21), you can create `/etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules` as follows to grant full access to libvirt to `$USER`
|
||||
|
||||
```
|
||||
```sh
|
||||
sudo /bin/sh -c "cat - > /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules" << EOF
|
||||
polkit.addRule(function(action, subject) {
|
||||
if (action.id == "org.libvirt.unix.manage" &&
|
||||
@ -91,11 +91,11 @@ EOF
|
||||
|
||||
If your libvirt has not been compiled with Polkit (ex: Ubuntu 14.04.1 LTS), check the permissions on the libvirt unix socket:
|
||||
|
||||
```
|
||||
ls -l /var/run/libvirt/libvirt-sock
|
||||
```console
|
||||
$ ls -l /var/run/libvirt/libvirt-sock
|
||||
srwxrwx--- 1 root libvirtd 0 févr. 12 16:03 /var/run/libvirt/libvirt-sock
|
||||
|
||||
usermod -a -G libvirtd $USER
|
||||
$ usermod -a -G libvirtd $USER
|
||||
# $USER needs to logout/login to have the new group be taken into account
|
||||
```
|
||||
|
||||
@ -109,7 +109,7 @@ As we’re using the `qemu:///system` instance of libvirt, qemu will run with a
|
||||
|
||||
If your `$HOME` is world readable, everything is fine. If your $HOME is private, `cluster/kube-up.sh` will fail with an error message like:
|
||||
|
||||
```
|
||||
```console
|
||||
error: Cannot access storage file '$HOME/.../kubernetes/cluster/libvirt-coreos/libvirt_storage_pool/kubernetes_master.img' (as uid:99, gid:78): Permission denied
|
||||
```
|
||||
|
||||
@ -122,7 +122,7 @@ In order to fix that issue, you have several possibilities:
|
||||
|
||||
On Arch:
|
||||
|
||||
```
|
||||
```sh
|
||||
setfacl -m g:kvm:--x ~
|
||||
```
|
||||
|
||||
@ -132,7 +132,7 @@ By default, the libvirt-coreos setup will create a single kubernetes master and
|
||||
|
||||
To start your local cluster, open a shell and run:
|
||||
|
||||
```shell
|
||||
```sh
|
||||
cd kubernetes
|
||||
|
||||
export KUBERNETES_PROVIDER=libvirt-coreos
|
||||
@ -150,8 +150,8 @@ The `KUBE_PUSH` environment variable may be set to specify which kubernetes bina
|
||||
|
||||
You can check that your machines are there and running with:
|
||||
|
||||
```
|
||||
virsh -c qemu:///system list
|
||||
```console
|
||||
$ virsh -c qemu:///system list
|
||||
Id Name State
|
||||
----------------------------------------------------
|
||||
15 kubernetes_master running
|
||||
@ -162,7 +162,7 @@ virsh -c qemu:///system list
|
||||
|
||||
You can check that the kubernetes cluster is working with:
|
||||
|
||||
```
|
||||
```console
|
||||
$ kubectl get nodes
|
||||
NAME LABELS STATUS
|
||||
192.168.10.2 <none> Ready
|
||||
@ -178,13 +178,13 @@ The IPs to connect to the nodes are 192.168.10.2 and onwards.
|
||||
|
||||
Connect to `kubernetes_master`:
|
||||
|
||||
```
|
||||
```sh
|
||||
ssh core@192.168.10.1
|
||||
```
|
||||
|
||||
Connect to `kubernetes_minion-01`:
|
||||
|
||||
```
|
||||
```sh
|
||||
ssh core@192.168.10.2
|
||||
```
|
||||
|
||||
@ -192,37 +192,37 @@ ssh core@192.168.10.2
|
||||
|
||||
All of the following commands assume you have set `KUBERNETES_PROVIDER` appropriately:
|
||||
|
||||
```
|
||||
```sh
|
||||
export KUBERNETES_PROVIDER=libvirt-coreos
|
||||
```
|
||||
|
||||
Bring up a libvirt-CoreOS cluster of 5 nodes
|
||||
|
||||
```
|
||||
```sh
|
||||
NUM_MINIONS=5 cluster/kube-up.sh
|
||||
```
|
||||
|
||||
Destroy the libvirt-CoreOS cluster
|
||||
|
||||
```
|
||||
```sh
|
||||
cluster/kube-down.sh
|
||||
```
|
||||
|
||||
Update the libvirt-CoreOS cluster with a new Kubernetes release produced by `make release` or `make release-skip-tests`:
|
||||
|
||||
```
|
||||
```sh
|
||||
cluster/kube-push.sh
|
||||
```
|
||||
|
||||
Update the libvirt-CoreOS cluster with the locally built Kubernetes binaries produced by `make`:
|
||||
|
||||
```
|
||||
```sh
|
||||
KUBE_PUSH=local cluster/kube-push.sh
|
||||
```
|
||||
|
||||
Interact with the cluster
|
||||
|
||||
```
|
||||
```sh
|
||||
kubectl ...
|
||||
```
|
||||
|
||||
@ -232,7 +232,7 @@ kubectl ...
|
||||
|
||||
Build the release tarballs:
|
||||
|
||||
```
|
||||
```sh
|
||||
make release
|
||||
```
|
||||
|
||||
@ -242,19 +242,19 @@ Install libvirt
|
||||
|
||||
On Arch:
|
||||
|
||||
```
|
||||
```sh
|
||||
pacman -S qemu libvirt
|
||||
```
|
||||
|
||||
On Ubuntu 14.04.1:
|
||||
|
||||
```
|
||||
```sh
|
||||
aptitude install qemu-system-x86 libvirt-bin
|
||||
```
|
||||
|
||||
On Fedora 21:
|
||||
|
||||
```
|
||||
```sh
|
||||
yum install qemu libvirt
|
||||
```
|
||||
|
||||
@ -264,13 +264,13 @@ Start the libvirt daemon
|
||||
|
||||
On Arch:
|
||||
|
||||
```
|
||||
```sh
|
||||
systemctl start libvirtd
|
||||
```
|
||||
|
||||
On Ubuntu 14.04.1:
|
||||
|
||||
```
|
||||
```sh
|
||||
service libvirt-bin start
|
||||
```
|
||||
|
||||
@ -280,7 +280,7 @@ Fix libvirt access permission (Remember to adapt `$USER`)
|
||||
|
||||
On Arch and Fedora 21:
|
||||
|
||||
```
|
||||
```sh
|
||||
cat > /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules <<EOF
|
||||
polkit.addRule(function(action, subject) {
|
||||
if (action.id == "org.libvirt.unix.manage" &&
|
||||
@ -295,7 +295,7 @@ EOF
|
||||
|
||||
On Ubuntu:
|
||||
|
||||
```
|
||||
```sh
|
||||
usermod -a -G libvirtd $USER
|
||||
```
|
||||
|
||||
|
@ -75,7 +75,7 @@ You need [go](https://golang.org/doc/install) at least 1.3+ in your path, please
|
||||
|
||||
In a separate tab of your terminal, run the following (since one needs sudo access to start/stop kubernetes daemons, it is easier to run the entire script as root):
|
||||
|
||||
```
|
||||
```sh
|
||||
cd kubernetes
|
||||
hack/local-up-cluster.sh
|
||||
```
|
||||
@ -93,7 +93,7 @@ Your cluster is running, and you want to start running containers!
|
||||
|
||||
You can now use any of the cluster/kubectl.sh commands to interact with your local setup.
|
||||
|
||||
```
|
||||
```sh
|
||||
cluster/kubectl.sh get pods
|
||||
cluster/kubectl.sh get services
|
||||
cluster/kubectl.sh get replicationcontrollers
|
||||
@ -123,7 +123,7 @@ However you cannot view the nginx start page on localhost. To verify that nginx
|
||||
|
||||
You can control the specifications of a pod via a user defined manifest, and reach nginx through your browser on the port specified therein:
|
||||
|
||||
```
|
||||
```sh
|
||||
cluster/kubectl.sh create -f docs/user-guide/pod.yaml
|
||||
```
|
||||
|
||||
@ -149,7 +149,7 @@ You are running a single node setup. This has the limitation of only supporting
|
||||
|
||||
#### I changed Kubernetes code, how do I run it?
|
||||
|
||||
```
|
||||
```sh
|
||||
cd kubernetes
|
||||
hack/build-go.sh
|
||||
hack/local-up-cluster.sh
|
||||
|
@ -83,18 +83,18 @@ ssh jclouds@${ip_address_of_master_node}
|
||||
Build Kubernetes-Mesos.
|
||||
|
||||
```bash
|
||||
$ git clone https://github.com/GoogleCloudPlatform/kubernetes
|
||||
$ cd kubernetes
|
||||
$ export KUBERNETES_CONTRIB=mesos
|
||||
$ make
|
||||
git clone https://github.com/GoogleCloudPlatform/kubernetes
|
||||
cd kubernetes
|
||||
export KUBERNETES_CONTRIB=mesos
|
||||
make
|
||||
```
|
||||
|
||||
Set some environment variables.
|
||||
The internal IP address of the master may be obtained via `hostname -i`.
|
||||
|
||||
```bash
|
||||
$ export KUBERNETES_MASTER_IP=$(hostname -i)
|
||||
$ export KUBERNETES_MASTER=http://${KUBERNETES_MASTER_IP}:8888
|
||||
export KUBERNETES_MASTER_IP=$(hostname -i)
|
||||
export KUBERNETES_MASTER=http://${KUBERNETES_MASTER_IP}:8888
|
||||
```
|
||||
|
||||
### Deploy etcd
|
||||
@ -102,10 +102,10 @@ $ export KUBERNETES_MASTER=http://${KUBERNETES_MASTER_IP}:8888
|
||||
Start etcd and verify that it is running:
|
||||
|
||||
```bash
|
||||
$ sudo docker run -d --hostname $(uname -n) --name etcd -p 4001:4001 -p 7001:7001 quay.io/coreos/etcd:v2.0.12
|
||||
sudo docker run -d --hostname $(uname -n) --name etcd -p 4001:4001 -p 7001:7001 quay.io/coreos/etcd:v2.0.12
|
||||
```
|
||||
|
||||
```bash
|
||||
```console
|
||||
$ sudo docker ps
|
||||
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
|
||||
fd7bac9e2301 quay.io/coreos/etcd:v2.0.12 "/etcd" 5s ago Up 3s 2379/tcp, 2380/... etcd
|
||||
@ -124,19 +124,19 @@ If connectivity is OK, you will see an output of the available keys in etcd (if
|
||||
Update your PATH to more easily run the Kubernetes-Mesos binaries:
|
||||
|
||||
```bash
|
||||
$ export PATH="$(pwd)/_output/local/go/bin:$PATH"
|
||||
export PATH="$(pwd)/_output/local/go/bin:$PATH"
|
||||
```
|
||||
|
||||
Identify your Mesos master: depending on your Mesos installation this is either a `host:port` like `mesos_master:5050` or a ZooKeeper URL like `zk://zookeeper:2181/mesos`.
|
||||
In order to let Kubernetes survive Mesos master changes, the ZooKeeper URL is recommended for production environments.
|
||||
|
||||
```bash
|
||||
$ export MESOS_MASTER=<host:port or zk:// url>
|
||||
export MESOS_MASTER=<host:port or zk:// url>
|
||||
```
|
||||
|
||||
Create a cloud config file `mesos-cloud.conf` in the current directory with the following contents:
|
||||
|
||||
```bash
|
||||
```console
|
||||
$ cat <<EOF >mesos-cloud.conf
|
||||
[mesos-cloud]
|
||||
mesos-master = ${MESOS_MASTER}
|
||||
@ -145,7 +145,7 @@ EOF
|
||||
|
||||
Now start the kubernetes-mesos API server, controller manager, and scheduler on the master node:
|
||||
|
||||
```bash
|
||||
```console
|
||||
$ km apiserver \
|
||||
--address=${KUBERNETES_MASTER_IP} \
|
||||
--etcd-servers=http://${KUBERNETES_MASTER_IP}:4001 \
|
||||
@ -175,7 +175,7 @@ $ km scheduler \
|
||||
Disown your background jobs so that they'll stay running if you log out.
|
||||
|
||||
```bash
|
||||
$ disown -a
|
||||
disown -a
|
||||
```
|
||||
|
||||
#### Validate KM Services
|
||||
@ -188,12 +188,12 @@ export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
|
||||
|
||||
Interact with the kubernetes-mesos framework via `kubectl`:
|
||||
|
||||
```bash
|
||||
```console
|
||||
$ kubectl get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
```
|
||||
|
||||
```bash
|
||||
```console
|
||||
# NOTE: your service IPs will likely differ
|
||||
$ kubectl get services
|
||||
NAME LABELS SELECTOR IP(S) PORT(S)
|
||||
@ -211,6 +211,9 @@ Write a JSON pod description to a local file:
|
||||
|
||||
```bash
|
||||
$ cat <<EOPOD >nginx.yaml
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
@ -226,7 +229,7 @@ EOPOD
|
||||
|
||||
Send the pod description to Kubernetes using the `kubectl` CLI:
|
||||
|
||||
```bash
|
||||
```console
|
||||
$ kubectl create -f ./nginx.yaml
|
||||
pods/nginx
|
||||
```
|
||||
@ -234,7 +237,7 @@ pods/nginx
|
||||
Wait a minute or two while `dockerd` downloads the image layers from the internet.
|
||||
We can use the `kubectl` interface to monitor the status of our pod:
|
||||
|
||||
```bash
|
||||
```console
|
||||
$ kubectl get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
nginx 1/1 Running 0 14s
|
||||
@ -295,6 +298,9 @@ To check that the new DNS service in the cluster works, we start a busybox pod a
|
||||
|
||||
```bash
|
||||
cat <<EOF >busybox.yaml
|
||||
```
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
@ -326,7 +332,7 @@ kubectl exec busybox -- nslookup kubernetes
|
||||
|
||||
If everything works fine, you will get this output:
|
||||
|
||||
```
|
||||
```console
|
||||
Server: 10.10.10.10
|
||||
Address 1: 10.10.10.10
|
||||
|
||||
|
@ -49,13 +49,13 @@ We still have [a bunch of work](https://github.com/GoogleCloudPlatform/kubernete
|
||||
|
||||
To start the `rkt metadata service`, you can simply run:
|
||||
|
||||
```shell
|
||||
```console
|
||||
$ sudo rkt metadata-service
|
||||
```
|
||||
|
||||
If you want the service to be running as a systemd service, then:
|
||||
|
||||
```shell
|
||||
```console
|
||||
$ sudo systemd-run rkt metadata-service
|
||||
```
|
||||
|
||||
@ -66,7 +66,7 @@ We still have [a bunch of work](https://github.com/GoogleCloudPlatform/kubernete
|
||||
|
||||
To use rkt as the container runtime, you just need to set the environment variable `CONTAINER_RUNTIME`:
|
||||
|
||||
```shell
|
||||
```console
|
||||
$ export CONTAINER_RUNTIME=rkt
|
||||
$ hack/local-up-cluster.sh
|
||||
```
|
||||
@ -75,7 +75,7 @@ $ hack/local-up-cluster.sh
|
||||
|
||||
To use rkt as the container runtime for your CoreOS cluster on GCE, you need to specify the OS distribution, project, image:
|
||||
|
||||
```shell
|
||||
```console
|
||||
$ export KUBE_OS_DISTRIBUTION=coreos
|
||||
$ export KUBE_GCE_MINION_IMAGE=<image_id>
|
||||
$ export KUBE_GCE_MINION_PROJECT=coreos-cloud
|
||||
@ -84,13 +84,13 @@ $ export KUBE_CONTAINER_RUNTIME=rkt
|
||||
|
||||
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
|
||||
|
||||
```shell
|
||||
```console
|
||||
$ export KUBE_RKT_VERSION=0.5.6
|
||||
```
|
||||
|
||||
Then you can launch the cluster by:
|
||||
|
||||
```shell
|
||||
```console
|
||||
$ kube-up.sh
|
||||
```
|
||||
|
||||
@ -100,7 +100,7 @@ Note that we are still working on making all containerized the master components
|
||||
|
||||
To use rkt as the container runtime for your CoreOS cluster on AWS, you need to specify the provider and OS distribution:
|
||||
|
||||
```shell
|
||||
```console
|
||||
$ export KUBERNETES_PROVIDER=aws
|
||||
$ export KUBE_OS_DISTRIBUTION=coreos
|
||||
$ export KUBE_CONTAINER_RUNTIME=rkt
|
||||
@ -108,19 +108,19 @@ $ export KUBE_CONTAINER_RUNTIME=rkt
|
||||
|
||||
You can optionally choose the version of rkt used by setting `KUBE_RKT_VERSION`:
|
||||
|
||||
```shell
|
||||
```console
|
||||
$ export KUBE_RKT_VERSION=0.5.6
|
||||
```
|
||||
|
||||
You can optionally choose the CoreOS channel by setting `COREOS_CHANNEL`:
|
||||
|
||||
```shell
|
||||
```console
|
||||
$ export COREOS_CHANNEL=stable
|
||||
```
|
||||
|
||||
Then you can launch the cluster by:
|
||||
|
||||
```shell
|
||||
```console
|
||||
$ kube-up.sh
|
||||
```
|
||||
|
||||
|
@ -67,6 +67,25 @@ steps that existing cluster setup scripts are making.
|
||||
- [kubelet](#kubelet)
|
||||
- [kube-proxy](#kube-proxy)
|
||||
- [Networking](#networking)
|
||||
- [Other](#other)
|
||||
- [Using Configuration Management](#using-configuration-management)
|
||||
- [Bootstrapping the Cluster](#bootstrapping-the-cluster)
|
||||
- [etcd](#etcd)
|
||||
- [Apiserver](#apiserver)
|
||||
- [Apiserver pod template](#apiserver-pod-template)
|
||||
- [Starting Apiserver](#starting-apiserver)
|
||||
- [Scheduler](#scheduler)
|
||||
- [Controller Manager](#controller-manager)
|
||||
- [Logging](#logging)
|
||||
- [Monitoring](#monitoring)
|
||||
- [DNS](#dns)
|
||||
- [Troubleshooting](#troubleshooting)
|
||||
- [Running validate-cluster](#running-validate-cluster)
|
||||
- [Inspect pods and services](#inspect-pods-and-services)
|
||||
- [Try Examples](#try-examples)
|
||||
- [Running the Conformance Test](#running-the-conformance-test)
|
||||
- [Networking](#networking)
|
||||
- [Getting Help](#getting-help)
|
||||
|
||||
<!-- END MUNGE: GENERATED_TOC -->
|
||||
|
||||
@ -309,7 +328,7 @@ many distinct files to make:
|
||||
You can make the files by copying the `$HOME/.kube/config`, by following the code
|
||||
in `cluster/gce/configure-vm.sh` or by using the following template:
|
||||
|
||||
```
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Config
|
||||
users:
|
||||
@ -356,7 +375,7 @@ If you previously had Docker installed on a node without setting Kubernetes-spec
|
||||
options, you may have a Docker-created bridge and iptables rules. You may want to remove these
|
||||
as follows before proceeding to configure Docker for Kubernetes.
|
||||
|
||||
```
|
||||
```sh
|
||||
iptables -t nat -F
|
||||
ifconfig docker0 down
|
||||
brctl delbr docker0
|
||||
@ -462,7 +481,11 @@ because of how this is used later.
|
||||
If you have turned off Docker's IP masquerading to allow pods to talk to each
|
||||
other, then you may need to do masquerading just for destination IPs outside
|
||||
the cluster network. For example:
|
||||
```iptables -w -t nat -A POSTROUTING -o eth0 -j MASQUERADE \! -d ${CLUSTER_SUBNET}```
|
||||
|
||||
```sh
|
||||
iptables -w -t nat -A POSTROUTING -o eth0 -j MASQUERADE \! -d ${CLUSTER_SUBNET}
|
||||
```
|
||||
|
||||
This will rewrite the source address from
|
||||
the PodIP to the Node IP for traffic bound outside the cluster, and kernel
|
||||
[connection tracking](http://www.iptables.info/en/connection-state.html)
|
||||
@ -635,7 +658,7 @@ Place the completed pod template into the kubelet config dir
|
||||
|
||||
Next, verify that kubelet has started a container for the apiserver:
|
||||
|
||||
```
|
||||
```console
|
||||
$ sudo docker ps | grep apiserver:
|
||||
5783290746d5 gcr.io/google_containers/kube-apiserver:e36bf367342b5a80d7467fd7611ad873 "/bin/sh -c '/usr/lo'" 10 seconds ago Up 9 seconds k8s_kube-apiserver.feb145e7_kube-apiserver-kubernetes-master_default_eaebc600cf80dae59902b44225f2fc0a_225a4695
|
||||
|
||||
@ -643,7 +666,7 @@ $ sudo docker ps | grep apiserver:
|
||||
|
||||
Then try to connect to the apiserver:
|
||||
|
||||
```
|
||||
```console
|
||||
|
||||
$ echo $(curl -s http://localhost:8080/healthz)
|
||||
ok
|
||||
|
@ -87,7 +87,7 @@ An example cluster is listed as below:
|
||||
|
||||
First configure the cluster information in cluster/ubuntu/config-default.sh, below is a simple sample.
|
||||
|
||||
```
|
||||
```sh
|
||||
export nodes="vcap@10.10.103.250 vcap@10.10.103.162 vcap@10.10.103.223"
|
||||
|
||||
export roles="ai i i"
|
||||
@ -123,7 +123,7 @@ After all the above variable being set correctly. We can use below command in cl
|
||||
|
||||
The scripts is automatically scp binaries and config files to all the machines and start the k8s service on them. The only thing you need to do is to type the sudo password when promoted. The current machine name is shown below like. So you will not type in the wrong password.
|
||||
|
||||
```
|
||||
```console
|
||||
|
||||
Deploying minion on machine 10.10.103.223
|
||||
|
||||
@ -142,7 +142,7 @@ You can also use `kubectl` command to see if the newly created k8s is working co
|
||||
|
||||
For example, use `$ kubectl get nodes` to see if all your nodes are in ready status. It may take some time for the nodes ready to use like below.
|
||||
|
||||
```
|
||||
```console
|
||||
|
||||
NAME LABELS STATUS
|
||||
|
||||
@ -164,7 +164,7 @@ After the previous parts, you will have a working k8s cluster, this part will te
|
||||
|
||||
The configuration of dns is configured in cluster/ubuntu/config-default.sh.
|
||||
|
||||
```
|
||||
```sh
|
||||
|
||||
ENABLE_CLUSTER_DNS=true
|
||||
|
||||
@ -182,7 +182,7 @@ The `DNS_REPLICAS` describes how many dns pod running in the cluster.
|
||||
|
||||
After all the above variable have been set. Just type the below command
|
||||
|
||||
```
|
||||
```console
|
||||
|
||||
$ cd cluster/ubuntu
|
||||
|
||||
@ -218,7 +218,7 @@ Please try:
|
||||
|
||||
2. Check `/etc/default/etcd`, as we do not have much input validation, a right config should be like:
|
||||
|
||||
```
|
||||
```sh
|
||||
ETCD_OPTS="-name infra1 -initial-advertise-peer-urls <http://ip_of_this_node:2380> -listen-peer-urls <http://ip_of_this_node:2380> -initial-cluster-token etcd-cluster-1 -initial-cluster infra1=<http://ip_of_this_node:2380>,infra2=<http://ip_of_another_node:2380>,infra3=<http://ip_of_another_node:2380> -initial-cluster-state new"
|
||||
```
|
||||
|
||||
|
@ -117,8 +117,8 @@ The master node instantiates the Kubernetes master components as pods on the mac
|
||||
|
||||
To view the service status and/or logs on the kubernetes-master:
|
||||
|
||||
```sh
|
||||
vagrant ssh master
|
||||
```console
|
||||
[vagrant@kubernetes-master ~] $ vagrant ssh master
|
||||
[vagrant@kubernetes-master ~] $ sudo su
|
||||
|
||||
[root@kubernetes-master ~] $ systemctl status kubelet
|
||||
@ -134,8 +134,8 @@ vagrant ssh master
|
||||
|
||||
To view the services on any of the nodes:
|
||||
|
||||
```sh
|
||||
vagrant ssh minion-1
|
||||
```console
|
||||
[vagrant@kubernetes-master ~] $ vagrant ssh minion-1
|
||||
[vagrant@kubernetes-master ~] $ sudo su
|
||||
|
||||
[root@kubernetes-master ~] $ systemctl status kubelet
|
||||
@ -172,7 +172,7 @@ Once your Vagrant machines are up and provisioned, the first thing to do is to c
|
||||
|
||||
You may need to build the binaries first, you can do this with ```make```
|
||||
|
||||
```sh
|
||||
```console
|
||||
$ ./cluster/kubectl.sh get nodes
|
||||
|
||||
NAME LABELS
|
||||
@ -187,6 +187,9 @@ When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script w
|
||||
|
||||
```sh
|
||||
cat ~/.kubernetes_vagrant_auth
|
||||
```
|
||||
|
||||
```json
|
||||
{ "User": "vagrant",
|
||||
"Password": "vagrant",
|
||||
"CAFile": "/home/k8s_user/.kubernetes.vagrant.ca.crt",
|
||||
@ -205,7 +208,7 @@ You should now be set to use the `cluster/kubectl.sh` script. For example try to
|
||||
|
||||
Your cluster is running, you can list the nodes in your cluster:
|
||||
|
||||
```sh
|
||||
```console
|
||||
$ ./cluster/kubectl.sh get nodes
|
||||
|
||||
NAME LABELS
|
||||
@ -219,7 +222,7 @@ Now start running some containers!
|
||||
You can now use any of the `cluster/kube-*.sh` commands to interact with your VM machines.
|
||||
Before starting a container there will be no pods, services and replication controllers.
|
||||
|
||||
```sh
|
||||
```console
|
||||
$ ./cluster/kubectl.sh get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
|
||||
@ -232,13 +235,13 @@ CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
|
||||
|
||||
Start a container running nginx with a replication controller and three replicas
|
||||
|
||||
```sh
|
||||
```console
|
||||
$ ./cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80
|
||||
```
|
||||
|
||||
When listing the pods, you will see that three containers have been started and are in Waiting state:
|
||||
|
||||
```sh
|
||||
```console
|
||||
$ ./cluster/kubectl.sh get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
my-nginx-5kq0g 0/1 Pending 0 10s
|
||||
@ -248,7 +251,7 @@ my-nginx-xql4j 0/1 Pending 0 10s
|
||||
|
||||
You need to wait for the provisioning to complete, you can monitor the nodes by doing:
|
||||
|
||||
```sh
|
||||
```console
|
||||
$ vagrant ssh minion-1 -c 'sudo docker images'
|
||||
kubernetes-minion-1:
|
||||
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
|
||||
@ -259,7 +262,7 @@ kubernetes-minion-1:
|
||||
|
||||
Once the docker image for nginx has been downloaded, the container will start and you can list it:
|
||||
|
||||
```sh
|
||||
```console
|
||||
$ vagrant ssh minion-1 -c 'sudo docker ps'
|
||||
kubernetes-minion-1:
|
||||
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
|
||||
@ -271,7 +274,7 @@ kubernetes-minion-1:
|
||||
|
||||
Going back to listing the pods, services and replicationcontrollers, you now have:
|
||||
|
||||
```sh
|
||||
```console
|
||||
$ ./cluster/kubectl.sh get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
my-nginx-5kq0g 1/1 Running 0 1m
|
||||
@ -290,7 +293,7 @@ We did not start any services, hence there are none listed. But we see three rep
|
||||
Check the [guestbook](../../examples/guestbook/README.md) application to learn how to create a service.
|
||||
You can already play with scaling the replicas with:
|
||||
|
||||
```sh
|
||||
```console
|
||||
$ ./cluster/kubectl.sh scale rc my-nginx --replicas=2
|
||||
$ ./cluster/kubectl.sh get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
@ -325,6 +328,9 @@ After using kubectl.sh make sure that the correct credentials are set:
|
||||
|
||||
```sh
|
||||
cat ~/.kubernetes_vagrant_auth
|
||||
```
|
||||
|
||||
```json
|
||||
{
|
||||
"User": "vagrant",
|
||||
"Password": "vagrant"
|
||||
|
@ -50,7 +50,7 @@ containers to be injected with the name and namespace of the pod the container i
|
||||
Use the [`examples/downward-api/dapi-pod.yaml`](dapi-pod.yaml) file to create a Pod with a container that consumes the
|
||||
downward API.
|
||||
|
||||
```shell
|
||||
```console
|
||||
$ kubectl create -f docs/user-guide/downward-api/dapi-pod.yaml
|
||||
```
|
||||
|
||||
@ -59,7 +59,7 @@ $ kubectl create -f docs/user-guide/downward-api/dapi-pod.yaml
|
||||
This pod runs the `env` command in a container that consumes the downward API. You can grep
|
||||
through the pod logs to see that the pod was injected with the correct values:
|
||||
|
||||
```shell
|
||||
```console
|
||||
$ kubectl logs dapi-test-pod | grep POD_
|
||||
2015-04-30T20:22:18.568024817Z POD_NAME=dapi-test-pod
|
||||
2015-04-30T20:22:18.568087688Z POD_NAMESPACE=default
|
||||
|
@ -71,7 +71,7 @@ This example will work in a custom namespace to demonstrate the concepts involve
|
||||
|
||||
Let's create a new namespace called limit-example:
|
||||
|
||||
```shell
|
||||
```console
|
||||
$ kubectl create -f docs/user-guide/limitrange/namespace.yaml
|
||||
namespaces/limit-example
|
||||
$ kubectl get namespaces
|
||||
@ -84,14 +84,14 @@ Step 2: Apply a limit to the namespace
|
||||
-----------------------------------------
|
||||
Let's create a simple limit in our namespace.
|
||||
|
||||
```shell
|
||||
```console
|
||||
$ kubectl create -f docs/user-guide/limitrange/limits.yaml --namespace=limit-example
|
||||
limitranges/mylimits
|
||||
```
|
||||
|
||||
Let's describe the limits that we have imposed in our namespace.
|
||||
|
||||
```shell
|
||||
```console
|
||||
$ kubectl describe limits mylimits --namespace=limit-example
|
||||
Name: mylimits
|
||||
Type Resource Min Max Default
@ -123,7 +123,7 @@ of creation explaining why.
Let's first spin up a replication controller that creates a single container pod to demonstrate
how default values are applied to each pod.

```shell
```console
$ kubectl run nginx --image=nginx --replicas=1 --namespace=limit-example
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
nginx nginx nginx run=nginx 1
@ -132,6 +132,9 @@ POD IP CONTAINER(S) IMAGE(S) HOST LABELS S
nginx-ykj4j 10.246.1.3 10.245.1.3/ run=nginx Running About a minute
nginx nginx Running 54 seconds
$ kubectl get pods nginx-ykj4j --namespace=limit-example -o yaml | grep resources -C 5
```

```yaml
containers:
- capabilities: {}
image: nginx
@ -149,17 +152,20 @@ Note that our nginx container has picked up the namespace default cpu and memory

Let's create a pod that exceeds our allowed limits by having it have a container that requests 3 cpu cores.

```shell
```console
$ kubectl create -f docs/user-guide/limitrange/invalid-pod.yaml --namespace=limit-example
Error from server: Pod "invalid-pod" is forbidden: Maximum CPU usage per pod is 2, but requested 3
```
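
For reference, the `invalid-pod.yaml` that triggers this error only needs a container whose CPU limit exceeds the 2-core pod maximum; a minimal sketch (the image is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: invalid-pod
spec:
  containers:
    - name: kubernetes-serve-hostname
      image: gcr.io/google_containers/serve_hostname
      resources:
        limits:
          # Asks for 3 CPU cores, above the 2-core pod maximum enforced by mylimits
          cpu: "3"
```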

Let's create a pod that falls within the allowed limit boundaries.

```shell
```console
$ kubectl create -f docs/user-guide/limitrange/valid-pod.yaml --namespace=limit-example
pods/valid-pod
$ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 5 resources
```

```yaml
containers:
- capabilities: {}
image: gcr.io/google_containers/serve_hostname
@ -179,7 +185,7 @@ Step 4: Cleanup
----------------------------
To remove the resources used by this example, you can just delete the limit-example namespace.

```shell
```console
$ kubectl delete namespace limit-example
namespaces/limit-example
$ kubectl get namespaces

@ -37,7 +37,7 @@ This example shows two types of pod [health checks](../production-pods.md#livene

The [exec-liveness.yaml](exec-liveness.yaml) demonstrates the container execution check.

```
```yaml
livenessProbe:
exec:
command:
@ -51,7 +51,7 @@ Kubelet executes the command `cat /tmp/health` in the container and reports fail

Note that the container removes the `/tmp/health` file after 10 seconds,

```
```sh
echo ok > /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600
```

@ -60,7 +60,7 @@ so when Kubelet executes the health check 15 seconds (defined by initialDelaySec

The [http-liveness.yaml](http-liveness.yaml) demonstrates the HTTP check.

```
```yaml
livenessProbe:
httpGet:
path: /healthz
@ -77,15 +77,15 @@ This [guide](../walkthrough/k8s201.md#health-checking) has more information on h

To show the health check is actually working, first create the pods:

```
# kubectl create -f docs/user-guide/liveness/exec-liveness.yaml
# kubectl create -f docs/user-guide/liveness/http-liveness.yaml
```console
$ kubectl create -f docs/user-guide/liveness/exec-liveness.yaml
$ kubectl create -f docs/user-guide/liveness/http-liveness.yaml
```

Check the status of the pods once they are created:

```
# kubectl get pods
```console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
[...]
liveness-exec 1/1 Running 0 13s
@ -94,8 +94,8 @@ liveness-http 1/1 Running 0

Check the status half a minute later, you will see the container restart count being incremented:

```
# kubectl get pods
```console
$ kubectl get pods
mwielgus@mwielgusd:~/test/k2/kubernetes/examples/liveness$ kubectl get pods
NAME READY STATUS RESTARTS AGE
[...]
@ -105,8 +105,8 @@ liveness-http 1/1 Running 1

At the bottom of the *kubectl describe* output there are messages indicating that the liveness probes have failed, and the containers have been killed and recreated.

```
# kubectl describe pods liveness-exec
```console
$ kubectl describe pods liveness-exec
[...]
Sat, 27 Jun 2015 13:43:03 +0200 Sat, 27 Jun 2015 13:44:34 +0200 4 {kubelet kubernetes-minion-6fbi} spec.containers{liveness} unhealthy Liveness probe failed: cat: can't open '/tmp/health': No such file or directory
Sat, 27 Jun 2015 13:44:44 +0200 Sat, 27 Jun 2015 13:44:44 +0200 1 {kubelet kubernetes-minion-6fbi} spec.containers{liveness} killing Killing with docker id 65b52d62c635
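
The probe snippets above are cut off by the hunk boundaries; a complete pod with an exec liveness probe, using the 15 second `initialDelaySeconds` mentioned above, might look like the following sketch (the image and the timeout value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
  labels:
    test: liveness
spec:
  containers:
    - name: liveness
      image: gcr.io/google_containers/busybox
      args:
        - /bin/sh
        - -c
        - echo ok > /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600
      livenessProbe:
        exec:
          command:
            - cat
            - /tmp/health
        # Give the container time to start before the first probe
        initialDelaySeconds: 15
        timeoutSeconds: 1
```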

@ -58,7 +58,7 @@ services, and replication controllers used by the cluster.

Assuming you have a fresh cluster, you can introspect the available namespace's by doing the following:

```shell
```console
$ kubectl get namespaces
NAME LABELS
default <none>
@ -83,7 +83,7 @@ Let's create two new namespaces to hold our work.

Use the file [`namespace-dev.json`](namespace-dev.json) which describes a development namespace:

```js
```json
{
"kind": "Namespace",
"apiVersion": "v1",
@ -98,19 +98,19 @@ Use the file [`namespace-dev.json`](namespace-dev.json) which describes a develo

Create the development namespace using kubectl.

```shell
```console
$ kubectl create -f docs/user-guide/namespaces/namespace-dev.json
```

And then lets create the production namespace using kubectl.

```shell
```console
$ kubectl create -f docs/user-guide/namespaces/namespace-prod.json
```

To be sure things are right, let's list all of the namespaces in our cluster.

```shell
```console
$ kubectl get namespaces
NAME LABELS STATUS
default <none> Active
@ -129,7 +129,7 @@ To demonstrate this, let's spin up a simple replication controller and pod in th

We first check what is the current context:

```shell
```yaml
apiVersion: v1
clusters:
- cluster:
@ -158,7 +158,7 @@ users:

The next step is to define a context for the kubectl client to work in each namespace. The value of "cluster" and "user" fields are copied from the current context.

```shell
```console
$ kubectl config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
$ kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
```
@ -168,14 +168,17 @@ wish to work against.

Let's switch to operate in the development namespace.

```shell
```console
$ kubectl config use-context dev
```

You can verify your current context by doing the following:

```shell
```console
$ kubectl config view
```

```yaml
apiVersion: v1
clusters:
- cluster:
@ -216,13 +219,13 @@ At this point, all requests we make to the Kubernetes cluster from the command l

Let's create some content.

```shell
```console
$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2
```

We have just created a replication controller whose replica size is 2 that is running the pod called snowflake with a basic container that just serves the hostname.

```shell
```console
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
snowflake snowflake kubernetes/serve_hostname run=snowflake 2
@ -237,13 +240,13 @@ And this is great, developers are able to do what they want, and they do not hav

Let's switch to the production namespace and show how resources in one namespace are hidden from the other.

```shell
```console
$ kubectl config use-context prod
```

The production namespace should be empty.

```shell
```console
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS

@ -253,7 +256,7 @@ NAME READY STATUS RESTARTS AGE

Production likes to run cattle, so let's create some cattle pods.

```shell
```console
$ kubectl run cattle --image=kubernetes/serve_hostname --replicas=5

$ kubectl get rc

@ -53,17 +53,15 @@ support local storage on the host at this time. There is no guarantee your pod



```

// this will be nginx's webroot
```console
# This will be nginx's webroot
$ mkdir /tmp/data01
$ echo 'I love Kubernetes storage!' > /tmp/data01/index.html

```

PVs are created by posting them to the API server.

```
```console
$ kubectl create -f docs/user-guide/persistent-volumes/volumes/local-01.yaml
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON
pv0001 type=local 10737418240 RWO Available
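
The `local-01.yaml` definition is not part of this hunk; given the listing above (a 10GB volume labeled `type=local`, backed by the `/tmp/data01` directory populated earlier), it corresponds roughly to this sketch:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    # The directory populated above with index.html
    path: /tmp/data01
```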
@ -76,7 +74,7 @@ They just know they can rely on their claim to storage and can manage its lifecy

Claims must be created in the same namespace as the pods that use them.

```
```console
$ kubectl create -f docs/user-guide/persistent-volumes/claims/claim-01.yaml
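
A claim along the lines of `claim-01.yaml` is just a request for storage; a sketch (the requested size is illustrative; the name matches the `default/myclaim-1` binding shown in the next hunk):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      # Any size up to the 10Gi capacity of pv0001 can be requested
      storage: 3Gi
```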

@ -101,7 +99,7 @@ pv0001 type=local 10737418240 RWO Bound default/myclaim-1

Claims are used as volumes in pods. Kubernetes uses the claim to look up its bound PV. The PV is then exposed to the pod.

```
```console
$ kubectl create -f docs/user-guide/persistent-volumes/simpletest/pod.yaml
$ kubectl get pods
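
The `simpletest/pod.yaml` referenced here mounts the claim as a volume; a minimal sketch (the mount path and container are illustrative, chosen so nginx would serve the index.html written to the volume earlier):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        # Mount the claimed storage where nginx looks for its webroot
        - name: mypd
          mountPath: /usr/share/nginx/html
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim-1
```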
@ -120,11 +118,9 @@ kubernetes component=apiserver,provider=kubernetes <none>
You should be able to query your service endpoint and see what content nginx is serving. A "forbidden" error might mean you
need to disable SELinux (setenforce 0).

```

curl 10.0.0.241:3000
```console
$ curl 10.0.0.241:3000
I love Kubernetes storage!

```

Hopefully this simple guide is enough to get you started with PersistentVolumes. If you have any questions, join

@ -42,7 +42,7 @@ This example will work in a custom namespace to demonstrate the concepts involve

Let's create a new namespace called quota-example:

```shell
```console
$ kubectl create -f docs/user-guide/resourcequota/namespace.yaml
$ kubectl get namespaces
NAME LABELS STATUS
@ -62,7 +62,7 @@ and API resources (pods, services, etc.) that a namespace may consume.

Let's create a simple quota in our namespace:

```shell
```console
$ kubectl create -f docs/user-guide/resourcequota/quota.yaml --namespace=quota-example
```
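
The `quota.yaml` contents are not shown in the diff; a ResourceQuota of the kind described here caps both compute resources and object counts, along these lines (all values illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota
spec:
  hard:
    # Compute resources the namespace may consume in total
    cpu: "20"
    memory: 1Gi
    # Object counts the namespace may hold
    pods: "10"
    services: "5"
    replicationcontrollers: "20"
    resourcequotas: "1"
```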

@ -72,7 +72,7 @@ in the namespace until the quota usage has been calculated. This should happen
You can describe your current quota usage to see what resources are being consumed in your
namespace.

```
```console
$ kubectl describe quota quota --namespace=quota-example
Name: quota
Namespace: quota-example
@ -97,7 +97,7 @@ cpu and memory by creating an nginx container.

To demonstrate, lets create a replication controller that runs nginx:

```shell
```console
$ kubectl run nginx --image=nginx --replicas=1 --namespace=quota-example
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
nginx nginx nginx run=nginx 1
@ -105,14 +105,14 @@ nginx nginx nginx run=nginx 1

Now let's look at the pods that were created.

```shell
```console
$ kubectl get pods --namespace=quota-example
NAME READY STATUS RESTARTS AGE
```

What happened? I have no pods! Let's describe the replication controller to get a view of what is happening.

```shell
```console
kubectl describe rc nginx --namespace=quota-example
Name: nginx
Image(s): nginx
@ -130,7 +130,7 @@ do not specify any memory usage.

So let's set some default limits for the amount of cpu and memory a pod can consume:

```shell
```console
$ kubectl create -f docs/user-guide/resourcequota/limits.yaml --namespace=quota-example
limitranges/limits
$ kubectl describe limits limits --namespace=quota-example
@ -148,7 +148,7 @@ amount of cpu and memory per container will be applied as part of admission cont
Now that we have applied default limits for our namespace, our replication controller should be able to
create its pods.

```shell
```console
$ kubectl get pods --namespace=quota-example
NAME READY STATUS RESTARTS AGE
nginx-t9cap 1/1 Running 0 49s
@ -156,8 +156,8 @@ nginx-t9cap 1/1 Running 0 49s

And if we print out our quota usage in the namespace:

```shell
kubectl describe quota quota --namespace=quota-example
```console
$ kubectl describe quota quota --namespace=quota-example
Name: quota
Namespace: default
Resource Used Hard

@ -47,13 +47,13 @@ A secret contains a set of named byte arrays.

Use the [`examples/secrets/secret.yaml`](secret.yaml) file to create a secret:

```shell
```console
$ kubectl create -f docs/user-guide/secrets/secret.yaml
```

You can use `kubectl` to see information about the secret:

```shell
```console
$ kubectl get secrets
NAME TYPE DATA
test-secret Opaque 2
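
The `secret.yaml` used above is elided; a two-entry Opaque secret consistent with the `test-secret Opaque 2` listing and the `data-1` value shown later looks like this sketch (values are base64 encoded "value-1" and "value-2"):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
type: Opaque
data:
  # base64("value-1") and base64("value-2")
  data-1: dmFsdWUtMQ==
  data-2: dmFsdWUtMg==
```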
@ -78,14 +78,14 @@ consumes it.

Use the [`examples/secrets/secret-pod.yaml`](secret-pod.yaml) file to create a Pod that consumes the secret.

```shell
```console
$ kubectl create -f docs/user-guide/secrets/secret-pod.yaml
```

This pod runs a binary that displays the content of one of the pieces of secret data in the secret
volume:

```shell
```console
$ kubectl logs secret-test-pod
2015-04-29T21:17:24.712206409Z content of file "/etc/secret-volume/data-1": value-1
```
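
The `secret-pod.yaml` side is similar: a `secret` volume referencing `test-secret`, mounted at the `/etc/secret-volume` path seen in the log line above. A sketch (the image and its command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
      volumeMounts:
        # The secret's keys appear as files under this mount point
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
  volumes:
    - name: secret-volume
      secret:
        secretName: test-secret
  restartPolicy: Never
```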

@ -55,7 +55,7 @@ This example demonstrates the usage of Kubernetes to perform a [rolling update](

This example assumes that you have forked the repository and [turned up a Kubernetes cluster](../../../docs/getting-started-guides/):

```bash
```console
$ cd kubernetes
$ ./cluster/kube-up.sh
```
@ -67,9 +67,8 @@ This can sometimes spew to the output so you could also run it in a different te
Kubernetes repository. Otherwise you will get "404 page not found" errors as the paths will not match. You can find more information about `kubectl proxy`
[here](../../../docs/user-guide/kubectl/kubectl_proxy.md).

```
```console
$ kubectl proxy --www=examples/update-demo/local/ &
+ kubectl proxy --www=examples/update-demo/local/
I0218 15:18:31.623279 67480 proxy.go:36] Starting to serve on localhost:8001
```

@ -79,7 +78,7 @@ Now visit the the [demo website](http://localhost:8001/static). You won't see a

Now we will turn up two replicas of an [image](../images.md). They all serve on internal port 80.

```bash
```console
$ kubectl create -f docs/user-guide/update-demo/nautilus-rc.yaml
```

@ -89,7 +88,7 @@ After pulling the image from the Docker Hub to your worker nodes (which may take

Now we will increase the number of replicas from two to four:

```bash
```console
$ kubectl scale rc update-demo-nautilus --replicas=4
```

@ -99,7 +98,7 @@ If you go back to the [demo website](http://localhost:8001/static/index.html) yo

We will now update the docker image to serve a different image by doing a rolling update to a new Docker image.

```bash
```console
$ kubectl rolling-update update-demo-nautilus --update-period=10s -f docs/user-guide/update-demo/kitten-rc.yaml
```

@ -114,7 +113,7 @@ But if the replica count had been specified, the final replica count of the new

### Step Five: Bring down the pods

```bash
```console
$ kubectl stop rc update-demo-kitten
```

@ -124,14 +123,14 @@ This first stops the replication controller by turning the target number of repl

To turn down a Kubernetes cluster:

```bash
```console
$ ./cluster/kube-down.sh
```

Kill the proxy running in the background:
After you are done running this demo make sure to kill it:

```bash
```console
$ jobs
[1]+ Running ./kubectl proxy --www=local/ &
$ kill %1
@ -142,7 +141,7 @@ $ kill %1

If you want to build your own docker images, you can set `$DOCKER_HUB_USER` to your Docker user id and run the included shell script. It can take a few minutes to download/upload stuff.

```bash
```console
$ export DOCKER_HUB_USER=my-docker-id
$ ./examples/update-demo/build-images.sh
```

@ -86,13 +86,13 @@ spec:

Create the labeled pod ([pod-nginx-with-label.yaml](pod-nginx-with-label.yaml)):

```sh
```console
$ kubectl create -f docs/user-guide/walkthrough/pod-nginx-with-label.yaml
```

List all pods with the label `app=nginx`:

```sh
```console
$ kubectl get pods -l app=nginx
```

@ -139,19 +139,19 @@ spec:

Create an nginx replication controller ([replication-controller.yaml](replication-controller.yaml)):

```sh
```console
$ kubectl create -f docs/user-guide/walkthrough/replication-controller.yaml
```

List all replication controllers:

```sh
```console
$ kubectl get rc
```

Delete the replication controller by name:

```sh
```console
$ kubectl delete rc nginx-controller
```

@ -187,13 +187,13 @@ spec:

Create an nginx service ([service.yaml](service.yaml)):

```sh
```console
$ kubectl create -f docs/user-guide/walkthrough/service.yaml
```

List all services:

```sh
```console
$ kubectl get services
```

@ -201,7 +201,7 @@ On most providers, the service IPs are not externally accessible. The easiest wa

Provided the service IP is accessible, you should be able to access its http endpoint with curl on port 80:

```sh
```console
$ export SERVICE_IP=$(kubectl get service nginx-service -o=template -t={{.spec.clusterIP}})
$ export SERVICE_PORT=$(kubectl get service nginx-service -o=template '-t={{(index .spec.ports 0).port}}')
$ curl http://${SERVICE_IP}:${SERVICE_PORT}
@ -209,7 +209,7 @@ $ curl http://${SERVICE_IP}:${SERVICE_PORT}

To delete the service by name:

```sh
```console
$ kubectl delete service nginx-controller
```