Purge cluster/kubectl.sh from nearly all docs.

Mark cluster/kubectl.sh as deprecated.
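In practice the migration this commit asks for looks like the following; a hedged sketch, assuming a release-tarball layout with per-platform binaries under `platforms/` (that path is an assumption, not something this commit specifies):

```sh
# Before: the deprecated wrapper picked and ran the right client binary for you.
cluster/kubectl.sh get pods

# After: put the prebuilt kubectl on your PATH and call it directly.
export PATH="$PATH:$PWD/kubernetes/platforms/linux/amd64"   # assumed tarball layout
kubectl help config   # the new deprecation notice points here for cluster configuration
kubectl get pods
```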
@@ -34,7 +34,7 @@ The `release.sh` script will build a release.  It will build binaries, run tests
 
 The main output is a tar file: `kubernetes.tar.gz`.  This includes:
 * Cross compiled client utilities.
-* Script (`cluster/kubectl.sh`) for picking and running the right client binary based on platform.
+* Script (`kubectl`) for picking and running the right client binary based on platform.
 * Examples
 * Cluster deployment scripts for various clouds
 * Tar file containing all server binaries

@@ -18,6 +18,17 @@ set -o errexit
 set -o nounset
 set -o pipefail
 
+echo "-=-=-=-=-=-=-=-=-=-="
+echo "NOTE:"
+echo "kubectl.sh is deprecated and will be removed soon."
+echo "Please replace all usage with calls to the kubectl"
+echo "binary and ensure that it is in your PATH."
+echo ""
+echo "Please see 'kubectl help config' for more details"
+echo "about configuring kubectl for your cluster."
+echo "-=-=-=-=-=-=-=-=-=-="
+
+
 KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
 source "${KUBE_ROOT}/cluster/kube-env.sh"
 UTILS=${KUBE_ROOT}/cluster/${KUBERNETES_PROVIDER}/util.sh

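The hunk only shows the top of the script; the rest of the wrapper is what the notice makes redundant. A minimal sketch of what such a platform-picking wrapper typically does (the `_output` path and the amd64-only assumption are illustrative, not the script's actual tail):

```sh
# Hypothetical sketch only: locate a locally built kubectl and delegate to it.
host_os=$(uname | tr '[:upper:]' '[:lower:]')   # e.g. "linux" or "darwin"
host_arch="amd64"                               # assumption: amd64 builds only
kubectl="${KUBE_ROOT}/_output/local/bin/${host_os}/${host_arch}/kubectl"
exec "${kubectl}" "$@"                          # pass all arguments through
```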
@@ -110,7 +110,7 @@ spec:
 
 --------------------------------------------------
 
-cluster/kubectl.sh get pv
+kubectl get pv
 
 NAME                LABELS              CAPACITY            ACCESSMODES         STATUS              CLAIM
 pv0001              map[]               10737418240         RWO                 Pending
@@ -140,7 +140,7 @@ spec:
 
 --------------------------------------------------
 
-cluster/kubectl.sh get pvc
+kubectl get pvc
 
 
 NAME                LABELS              STATUS              VOLUME
@@ -155,13 +155,13 @@ myclaim-1           map[]               pending
 
 ```
 
-cluster/kubectl.sh get pv
+kubectl get pv
 
 NAME                LABELS              CAPACITY            ACCESSMODES         STATUS              CLAIM
 pv0001              map[]               10737418240         RWO                 Bound               myclaim-1 / f4b3d283-c0ef-11e4-8be4-80e6500a981e
 
 
-cluster/kubectl.sh get pvc
+kubectl get pvc
 
 NAME                LABELS              STATUS              VOLUME
 myclaim-1           map[]               Bound               b16e91d6-c0ef-11e4-8be4-80e6500a981e
@@ -205,7 +205,7 @@ When a claim holder is finished with their data, they can delete their claim.
 
 ```
 
-cluster/kubectl.sh delete pvc myclaim-1
+kubectl delete pvc myclaim-1
 
 ```

@@ -33,7 +33,9 @@ spec:
 ```
 Note that we omit the labels and the selector fields of the replication controller, because they will be populated from the labels field of the pod template by default.
 
-```./cluster/kubectl.sh create -f controller.yaml```
+```
+kubectl create -f controller.yaml
+```
 
 This will spin up 24 instances of the test.  They will run to completion, then exit, and the kubelet will restart them, accumulating more and more runs of the test.
 You can examine the recent runs of the test by calling ```docker ps -a``` and looking for tasks that exited with non-zero exit codes. Unfortunately, docker ps -a only keeps around the exit status of the last 15-20 containers with the same image, so you have to check them frequently.
@@ -52,7 +54,7 @@ grep "Exited ([^0])" output.txt
 Eventually you will have sufficient runs for your purposes. At that point you can stop and delete the replication controller by running:
 
 ```sh
-./cluster/kubectl.sh stop replicationcontroller flakecontroller
+kubectl stop replicationcontroller flakecontroller
 ```
 
 If you do a final check for flakes with ```docker ps -a```, ignore tasks that exited -1, since that's what happens when you stop the replication controller.

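Because `docker ps -a` only keeps the last 15-20 exits per image, the check above has to be repeated. A hedged convenience loop built from the doc's own grep (the interval and output file are assumptions):

```sh
# Hypothetical polling helper: append failed-run lines before docker forgets them.
while true; do
  docker ps -a | grep "Exited ([^0])" >> output.txt   # same pattern the doc greps for
  sleep 60                                            # assumption: once a minute suffices
done
```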
@@ -116,7 +116,7 @@ virsh -c qemu:///system list
 You can check that the kubernetes cluster is working with:
 
 ```
-$ ./cluster/kubectl.sh get minions
+$ kubectl get nodes
 NAME                LABELS              STATUS
 192.168.10.2        <none>              Ready
 192.168.10.3        <none>              Ready
@@ -173,7 +173,7 @@ KUBE_PUSH=local cluster/kube-push.sh
 Interact with the cluster
 
 ```
-cluster/kubectl.sh
+kubectl ...
 ```
 
 ### Troubleshooting

@@ -34,11 +34,6 @@ $ export CONTAINER_RUNTIME=rkt
 $ hack/local-up-cluster.sh
 ```
 
-After this, you can launch some pods in another terminal:
-```shell
-$ cluster/kubectl.sh create -f example/pod.yaml
-```
-
 ### CoreOS cluster on GCE
 
 To use rkt as the container runtime for your CoreOS cluster on GCE, you need to specify the OS distribution, project, image:
@@ -88,6 +83,10 @@ $ kube-up.sh
 Note: CoreOS is not supported as the master using the automated launch
 scripts. The master node is always Ubuntu.
 
+### Getting started with your cluster
+See [a simple nginx example](../../examples/simple-nginx.md) to try out your new cluster.
+
+For more complete applications, please look in the [examples directory](../../examples).
 
 
 []()

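The linked nginx walkthrough amounts to a couple of commands; a sketch from that era's docs, hedged because the `run-container` name and flags may differ from the linked page:

```sh
# Hedged smoke test for a new cluster, per the "simple nginx" example.
kubectl run-container my-nginx --image=nginx --replicas=2 --port=80   # assumed command name
kubectl get pods           # watch the two replicas come up
kubectl stop rc my-nginx   # clean up when done
```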
@@ -7,7 +7,7 @@ Kubernetes has an extensible user interface with default functionality that desc
 Assuming that you have a cluster running locally at `localhost:8080`, as described [here](getting-started-guides/locally.md), you can run the UI against it with kubectl:
 
 ```sh
-cluster/kubectl.sh proxy --www=www/app --www-prefix=/
+kubectl proxy --www=www/app --www-prefix=/
 ```
 
 You should now be able to access it by visiting [localhost:8001](http://localhost:8001/).

@@ -56,8 +56,6 @@ To start the service, run:
 $ kubectl create -f examples/celery-rabbitmq/rabbitmq-service.yaml
 ```
 
-**NOTE**: If you're running Kubernetes from source, you can use `cluster/kubectl.sh` instead of `kubectl`.
-
 This service allows other pods to connect to the rabbitmq. To them, it will be seen as available on port 5672, although the service is routing the traffic to the container (also via port 5672).
 
 

@@ -16,14 +16,14 @@ $ hack/dev-build-and-up.sh
 We'll see how cluster DNS works across multiple [namespaces](../../docs/namespaces.md), first we need to create two namespaces:
 
 ```shell
-$ cluster/kubectl.sh create -f examples/cluster-dns/namespace-dev.yaml
-$ cluster/kubectl.sh create -f examples/cluster-dns/namespace-prod.yaml
+$ kubectl create -f examples/cluster-dns/namespace-dev.yaml
+$ kubectl create -f examples/cluster-dns/namespace-prod.yaml
 ```
 
 Now list all namespaces:
 
 ```shell
-$ cluster/kubectl.sh get namespaces
+$ kubectl get namespaces
 NAME          LABELS             STATUS
 default       <none>             Active
 development   name=development   Active
@@ -33,8 +33,8 @@ production    name=production    Active
 For kubectl client to work with each namespace, we define two contexts:
 
 ```shell
-$ cluster/kubectl.sh config set-context dev --namespace=development --cluster=${CLUSTER_NAME} --user=${USER_NAME}
-$ cluster/kubectl.sh config set-context prod --namespace=production --cluster=${CLUSTER_NAME} --user=${USER_NAME}
+$ kubectl config set-context dev --namespace=development --cluster=${CLUSTER_NAME} --user=${USER_NAME}
+$ kubectl config set-context prod --namespace=production --cluster=${CLUSTER_NAME} --user=${USER_NAME}
 ```
 
 ### Step Two: Create backend replication controller in each namespace
@@ -42,14 +42,14 @@ $ cluster/kubectl.sh config set-context prod --namespace=production --cluster=${
 Use the file [`examples/cluster-dns/dns-backend-rc.yaml`](dns-backend-rc.yaml) to create a backend server [replication controller](../../docs/replication-controller.md) in each namespace.
 
 ```shell
-$ cluster/kubectl.sh config use-context dev
-$ cluster/kubectl.sh create -f examples/cluster-dns/dns-backend-rc.yaml
+$ kubectl config use-context dev
+$ kubectl create -f examples/cluster-dns/dns-backend-rc.yaml
 ```
 
 Once that's up you can list the pod in the cluster:
 
 ```shell
-$ cluster/kubectl.sh get rc
+$ kubectl get rc
 CONTROLLER    CONTAINER(S)   IMAGE(S)              SELECTOR           REPLICAS
 dns-backend   dns-backend    ddysher/dns-backend   name=dns-backend   1
 ```
@@ -57,9 +57,9 @@ dns-backend   dns-backend    ddysher/dns-backend   name=dns-backend   1
 Now repeat the above commands to create a replication controller in prod namespace:
 
 ```shell
-$ cluster/kubectl.sh config use-context prod
-$ cluster/kubectl.sh create -f examples/cluster-dns/dns-backend-rc.yaml
-$ cluster/kubectl.sh get rc
+$ kubectl config use-context prod
+$ kubectl create -f examples/cluster-dns/dns-backend-rc.yaml
+$ kubectl get rc
 CONTROLLER    CONTAINER(S)   IMAGE(S)              SELECTOR           REPLICAS
 dns-backend   dns-backend    ddysher/dns-backend   name=dns-backend   1
 ```
@@ -70,14 +70,14 @@ Use the file [`examples/cluster-dns/dns-backend-service.yaml`](dns-backend-servi
 a [service](../../docs/services.md) for the backend server.
 
 ```shell
-$ cluster/kubectl.sh config use-context dev
-$ cluster/kubectl.sh create -f examples/cluster-dns/dns-backend-service.yaml
+$ kubectl config use-context dev
+$ kubectl create -f examples/cluster-dns/dns-backend-service.yaml
 ```
 
 Once that's up you can list the service in the cluster:
 
 ```shell
-$ cluster/kubectl.sh get service dns-backend
+$ kubectl get service dns-backend
 NAME          LABELS    SELECTOR           IP(S)          PORT(S)
 dns-backend   <none>    name=dns-backend   10.0.236.129   8000/TCP
 ```
@@ -85,9 +85,9 @@ dns-backend   <none>    name=dns-backend   10.0.236.129   8000/TCP
 Again, repeat the same process for prod namespace:
 
 ```shell
-$ cluster/kubectl.sh config use-context prod
-$ cluster/kubectl.sh create -f examples/cluster-dns/dns-backend-service.yaml
-$ cluster/kubectl.sh get service dns-backend
+$ kubectl config use-context prod
+$ kubectl create -f examples/cluster-dns/dns-backend-service.yaml
+$ kubectl get service dns-backend
 NAME          LABELS    SELECTOR           IP(S)         PORT(S)
 dns-backend   <none>    name=dns-backend   10.0.35.246   8000/TCP
 ```
@@ -97,14 +97,14 @@ dns-backend   <none>    name=dns-backend   10.0.35.246   8000/TCP
 Use the file [`examples/cluster-dns/dns-frontend-pod.yaml`](dns-frontend-pod.yaml) to create a client [pod](../../docs/pods.md) in dev namespace. The client pod will make a connection to backend and exit. Specifically, it tries to connect to address `http://dns-backend.development.kubernetes.local:8000`.
 
 ```shell
-$ cluster/kubectl.sh config use-context dev
-$ cluster/kubectl.sh create -f examples/cluster-dns/dns-frontend-pod.yaml
+$ kubectl config use-context dev
+$ kubectl create -f examples/cluster-dns/dns-frontend-pod.yaml
 ```
 
 Once that's up you can list the pod in the cluster:
 
 ```shell
-$ cluster/kubectl.sh get pods dns-frontend
+$ kubectl get pods dns-frontend
 POD            IP           CONTAINER(S)   IMAGE(S)               HOST                                    LABELS              STATUS    CREATED     MESSAGE
 dns-frontend   10.244.2.9                                         kubernetes-minion-sswf/104.154.55.211   name=dns-frontend   Running   3 seconds
                             dns-frontend   ddysher/dns-frontend                                                               Running   2 seconds
@@ -113,7 +113,7 @@ dns-frontend   10.244.2.9                                         kubernetes-min
 Wait until the pod succeeds, then we can see the output from the client pod:
 
 ```shell
-$ cluster/kubectl.sh log dns-frontend
+$ kubectl log dns-frontend
 2015-05-07T20:13:54.147664936Z 10.0.236.129
 2015-05-07T20:13:54.147721290Z Send request to: http://dns-backend.development.kubernetes.local:8000
 2015-05-07T20:13:54.147733438Z <Response [200]>
@@ -123,9 +123,9 @@ $ cluster/kubectl.sh log dns-frontend
 Please refer to the [source code](./images/frontend/client.py) about the logs. First line prints out the ip address associated with the service in dev namespace; remaining lines print out our request and server response. If we switch to prod namespace with the same pod config, we'll see the same result, i.e. dns will resolve across namespace.
 
 ```shell
-$ cluster/kubectl.sh config use-context prod
-$ cluster/kubectl.sh create -f examples/cluster-dns/dns-frontend-pod.yaml
-$ cluster/kubectl.sh log dns-frontend
+$ kubectl config use-context prod
+$ kubectl create -f examples/cluster-dns/dns-frontend-pod.yaml
+$ kubectl log dns-frontend
 2015-05-07T20:13:54.147664936Z 10.0.236.129
 2015-05-07T20:13:54.147721290Z Send request to: http://dns-backend.development.kubernetes.local:8000
 2015-05-07T20:13:54.147733438Z <Response [200]>

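The client pod runs once and exits, so re-running it in the same namespace needs a delete first; a small hedged addition to the steps above, using only commands already shown in this example:

```sh
# Re-run the DNS client in the current namespace.
kubectl delete pod dns-frontend
kubectl create -f examples/cluster-dns/dns-frontend-pod.yaml
kubectl log dns-frontend   # repeat until the pod has finished
```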
@@ -13,9 +13,8 @@ Currently, you can look at:
 
 Example from command line (the DNS lookup looks better from a web browser):
 ```
-$ alias kctl=../../../cluster/kubectl.sh
-$ kctl create -f pod.json
-$ kctl proxy &
+$ kubectl create -f pod.json
+$ kubectl proxy &
 Starting to serve on localhost:8001
 
 $ curl localhost:8001/api/v1beta3/proxy/namespaces/default/pods/explorer:8080/vars/

@@ -18,12 +18,12 @@ Use the file `examples/guestbook-go/redis-master-controller.json` to create a [r
 Create the redis master replication controller in your Kubernetes cluster using the `kubectl` CLI:
 
 ```shell
-$ cluster/kubectl.sh create -f examples/guestbook-go/redis-master-controller.json
+$ kubectl create -f examples/guestbook-go/redis-master-controller.json
 ```
 
 Once that's up you can list the replication controllers in the cluster:
 ```shell
-$ cluster/kubectl.sh get rc
+$ kubectl get rc
 CONTROLLER                             CONTAINER(S)            IMAGE(S)                            SELECTOR                     REPLICAS
 redis-master-controller                redis-master            gurpartap/redis                     name=redis,role=master       1
 ```
@@ -31,7 +31,7 @@ redis-master-controller                redis-master            gurpartap/redis
 List pods in cluster to verify the master is running. You'll see a single redis master pod. It will also display the machine that the pod is running on once it gets placed (may take up to thirty seconds).
 
 ```shell
-$ cluster/kubectl.sh get pods
+$ kubectl get pods
 POD                  IP           CONTAINER(S)   IMAGE(S)          HOST                                    LABELS                   STATUS    CREATED     MESSAGE
 redis-master-y06lj   10.244.3.4                                    kubernetes-minion-bz1p/104.154.61.231   name=redis,role=master   Running   8 seconds
                                   redis-master   gurpartap/redis                                                                    Running   3 seconds
@@ -55,9 +55,9 @@ A Kubernetes '[service](../../docs/services.md)' is a named load balancer that p
 The pod that you created in Step One has the label `name=redis` and `role=master`. The selector field of the service determines which pods will receive the traffic sent to the service.  Use the file `examples/guestbook-go/redis-master-service.json` to create the service in the `kubectl` cli:
 
 ```shell
-$ cluster/kubectl.sh create -f examples/guestbook-go/redis-master-service.json
+$ kubectl create -f examples/guestbook-go/redis-master-service.json
 
-$ cluster/kubectl.sh get services
+$ kubectl get services
 NAME           LABELS                   SELECTOR                 IP(S)         PORT(S)
 redis-master   name=redis,role=master   name=redis,role=master   10.0.11.173   6379/TCP
 ```
@@ -70,9 +70,9 @@ Although the redis master is a single pod, the redis read slaves are a 'replicat
 Use the file `examples/guestbook-go/redis-slave-controller.json` to create the replication controller:
 
 ```shell
-$ cluster/kubectl.sh create -f examples/guestbook-go/redis-slave-controller.json
+$ kubectl create -f examples/guestbook-go/redis-slave-controller.json
 
-$ cluster/kubectl.sh get rc
+$ kubectl get rc
 CONTROLLER     CONTAINER(S)   IMAGE(S)          SELECTOR                 REPLICAS
 redis-master   redis-master   gurpartap/redis   name=redis,role=master   1
 redis-slave    redis-slave    gurpartap/redis   name=redis,role=slave    2
@@ -87,7 +87,7 @@ redis-server --slaveof redis-master 6379
 Once that's up you can list the pods in the cluster, to verify that the master and slaves are running:
 
 ```shell
-$ cluster/kubectl.sh get pods
+$ kubectl get pods
 POD                  IP           CONTAINER(S)   IMAGE(S)          HOST                                     LABELS                   STATUS    CREATED      MESSAGE
 redis-master-y06lj   10.244.3.4                                    kubernetes-minion-bz1p/104.154.61.231    name=redis,role=master   Running   5 minutes
                                   redis-master   gurpartap/redis                                                                     Running   5 minutes
@@ -108,9 +108,9 @@ This time the selector for the service is `name=redis,role=slave`, because that
 Now that you have created the service specification, create it in your cluster with the `kubectl` CLI:
 
 ```shell
-$ cluster/kubectl.sh create -f examples/guestbook-go/redis-slave-service.json
+$ kubectl create -f examples/guestbook-go/redis-slave-service.json
 
-$ cluster/kubectl.sh get services
+$ kubectl get services
 NAME           LABELS                   SELECTOR                 IP(S)         PORT(S)
 redis-master   name=redis,role=master   name=redis,role=master   10.0.11.173   6379/TCP
 redis-slave    name=redis,role=slave    name=redis,role=slave    10.0.234.24   6379/TCP
@@ -123,9 +123,9 @@ This is a simple Go net/http ([negroni](https://github.com/codegangsta/negroni)
 The pod is described in the file `examples/guestbook-go/guestbook-controller.json`. Using this file, you can turn up your guestbook with:
 
 ```shell
-$ cluster/kubectl.sh create -f examples/guestbook-go/guestbook-controller.json
+$ kubectl create -f examples/guestbook-go/guestbook-controller.json
 
-$ cluster/kubectl.sh get replicationControllers
+$ kubectl get replicationControllers
 CONTROLLER     CONTAINER(S)   IMAGE(S)                  SELECTOR                 REPLICAS
 guestbook      guestbook      kubernetes/guestbook:v2   name=guestbook           3
 redis-master   redis-master   gurpartap/redis           name=redis,role=master   1
@@ -157,9 +157,9 @@ You will see a single redis master pod, two redis slaves, and three guestbook po
 Just like the others, you want a service to group your guestbook pods.  The service specification for the guestbook is in `examples/guestbook-go/guestbook-service.json`.  There's a twist this time - because we want it to be externally visible, we set the `createExternalLoadBalancer` flag on the service.
 
 ```shell
-$ cluster/kubectl.sh create -f examples/guestbook-go/guestbook-service.json
+$ kubectl create -f examples/guestbook-go/guestbook-service.json
 
-$ cluster/kubectl.sh get services
+$ kubectl get services
 NAME           LABELS                   SELECTOR                 IP(S)          PORT(S)
 guestbook      name=guestbook           name=guestbook           10.0.114.109   3000/TCP
 redis-master   name=redis,role=master   name=redis,role=master   10.0.11.173    6379/TCP
@@ -169,7 +169,7 @@ redis-slave    name=redis,role=slave    name=redis,role=slave    10.0.234.24
 To play with the service itself, find the external IP of the load balancer:
 
 ```shell
-$ cluster/kubectl.sh get services guestbook -o template --template='{{(index .status.loadBalancer.ingress 0).ip}}'
+$ kubectl get services guestbook -o template --template='{{(index .status.loadBalancer.ingress 0).ip}}'
 104.154.63.66$
 ```
 and then visit port 3000 of that IP address e.g. `http://104.154.63.66:3000`.
@@ -190,7 +190,7 @@ For details about limiting traffic to specific sources, see the [GCE firewall do
 
 You should delete the service which will remove any associated resources that were created e.g. load balancers, forwarding rules and target pools. All the resources (replication controllers and service) can be deleted with a single command:
 ```shell
-$ cluster/kubectl.sh delete -f examples/guestbook-go
+$ kubectl delete -f examples/guestbook-go
 guestbook-controller
 guestbook
 redis-master-controller

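The template output above prints the bare IP with the shell prompt glued to it; a hedged way to capture it and hit the guestbook from the shell (port 3000 per the service definition):

```sh
# Capture the load balancer IP and fetch the guestbook page.
IP=$(kubectl get services guestbook -o template --template='{{(index .status.loadBalancer.ingress 0).ip}}')
curl "http://${IP}:3000/"
```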
@@ -12,8 +12,6 @@ The web front end interacts with the redis master via javascript redis API calls
 
 This example requires a kubernetes cluster.  See the [Getting Started guides](../../docs/getting-started-guides) for how to get started.
 
-If you are running from source, replace commands such as `kubectl` below with calls to `cluster/kubectl.sh`.
-
 ### Step One: Fire up the redis master
 
 Note: This redis-master is *not* highly available.  Making it highly available would be a very interesting, but intricate exercise - redis doesn't actually support multi-master deployments at the time of this writing, so high availability would be a somewhat tricky thing to implement, and might involve periodic serialization to disk, and so on.
@@ -252,7 +250,7 @@ The service specification for the slaves is in `examples/guestbook/redis-slave-s
 }
 ```
 
-This time the selector for the service is `name=redis-slave`, because that identifies the pods running redis slaves. It may also be helpful to set labels on your service itself as we've done here to make it easy to locate them with the `cluster/kubectl.sh get services -l "label=value"` command.
+This time the selector for the service is `name=redis-slave`, because that identifies the pods running redis slaves. It may also be helpful to set labels on your service itself as we've done here to make it easy to locate them with the `kubectl get services -l "label=value"` command.
 
 Now that you have created the service specification, create it in your cluster by running:
 
@@ -431,7 +429,7 @@ redis-slave             name=redis-slave                          name=redis-sla
 
 ### A few Google Container Engine specifics for playing around with the services.
 
-In GCE, `cluster/kubectl.sh` automatically creates forwarding rule for services with `createExternalLoadBalancer`.
+In GCE, `kubectl` automatically creates forwarding rule for services with `createExternalLoadBalancer`.
 
 ```shell
 $ gcloud compute forwarding-rules list

@@ -19,13 +19,15 @@ Once you have installed iSCSI initiator and new Kubernetes, you can create a pod
 
 Once your pod is created, run it on the Kubernetes master:
 
-    #cluster/kubectl.sh create -f your_new_pod.json
+```console
+kubectl create -f your_new_pod.json
+```
 
 Here is my command and output:
 
 ```console
-# cluster/kubectl.sh create -f examples/iscsi/iscsi.json
-# cluster/kubectl.sh get pods
+# kubectl create -f examples/iscsi/iscsi.json
+# kubectl get pods
 POD       IP            CONTAINER(S)   IMAGE(S)           HOST                                    LABELS    STATUS    CREATED         MESSAGE
 iscsipd   10.244.3.14                                     kubernetes-minion-bz1p/104.154.61.231   <none>    Running   About an hour
                         iscsipd-rw     kubernetes/pause                                                     Running   About an hour

@@ -26,7 +26,7 @@ services, and replication controllers used by the cluster.
 Assuming you have a fresh cluster, you can introspect the available namespaces by doing the following:
 
 ```shell
-$ cluster/kubectl.sh get namespaces
+$ kubectl get namespaces
 NAME                LABELS
 default             <none>
 ```
@@ -66,19 +66,19 @@ Use the file [`examples/kubernetes-namespaces/namespace-dev.json`](namespace-dev
 Create the development namespace using kubectl.
 
 ```shell
-$ cluster/kubectl.sh create -f examples/kubernetes-namespaces/namespace-dev.json
+$ kubectl create -f examples/kubernetes-namespaces/namespace-dev.json
 ```
 
 And then let's create the production namespace using kubectl.
 
 ```shell
-$ cluster/kubectl.sh create -f examples/kubernetes-namespaces/namespace-prod.json
+$ kubectl create -f examples/kubernetes-namespaces/namespace-prod.json
 ```
 
 To be sure things are right, let's list all of the namespaces in our cluster.
 
 ```shell
-$ cluster/kubectl.sh get namespaces
+$ kubectl get namespaces
 NAME          LABELS             STATUS
 default       <none>             Active
 development   name=development   Active
@@ -126,8 +126,8 @@ users:
 The next step is to define a context for the kubectl client to work in each namespace. The value of "cluster" and "user" fields are copied from the current context.
 
 ```shell
-$ cluster/kubectl.sh config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
-$ cluster/kubectl.sh config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
+$ kubectl config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
+$ kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
 ```
 
 The above commands provided two request contexts you can alternate against depending on what namespace you
@@ -136,13 +136,13 @@ wish to work against.
 Let's switch to operate in the development namespace.
 
 ```shell
-$ cluster/kubectl.sh config use-context dev
+$ kubectl config use-context dev
 ```
 
 You can verify your current context by doing the following:
 
 ```shell
-$ cluster/kubectl.sh config view
+$ kubectl config view
 apiVersion: v1
 clusters:
 - cluster:
@@ -184,17 +184,17 @@ At this point, all requests we make to the Kubernetes cluster from the command l
 Let's create some content.
 
 ```shell
-$ cluster/kubectl.sh run snowflake --image=kubernetes/serve_hostname --replicas=2
+$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2
 ```
 
 We have just created a replication controller whose replica size is 2 that is running the pod called snowflake with a basic container that just serves the hostname.
 
 ```shell
-cluster/kubectl.sh get rc
+kubectl get rc
 CONTROLLER          CONTAINER(S)        IMAGE(S)                    SELECTOR        REPLICAS
 snowflake           snowflake           kubernetes/serve_hostname   run=snowflake   2
 
-$ cluster/kubectl.sh get pods
+$ kubectl get pods
 POD               IP           CONTAINER(S)   IMAGE(S)                    HOST                                   LABELS                    STATUS    CREATED         MESSAGE
 snowflake-mbrfi   10.244.2.4                                              kubernetes-minion-ilqx/104.197.8.214   run=snowflake             Running   About an hour
                                snowflake      kubernetes/serve_hostname                                                                    Running   About an hour
@@ -207,29 +207,29 @@ And this is great, developers are able to do what they want, and they do not hav
 Let's switch to the production namespace and show how resources in one namespace are hidden from the other.
 
 ```shell
-$ cluster/kubectl.sh config use-context prod
+$ kubectl config use-context prod
 ```
 
 The production namespace should be empty.
 
 ```shell
-$ cluster/kubectl.sh get rc
+$ kubectl get rc
 CONTROLLER          CONTAINER(S)        IMAGE(S)            SELECTOR            REPLICAS
 
-$ cluster/kubectl.sh get pods
+$ kubectl get pods
 POD                 IP                  CONTAINER(S)        IMAGE(S)            HOST                LABELS              STATUS          CREATED       MESSAGE
 ```
 
 Production likes to run cattle, so let's create some cattle pods.
 
 ```shell
-$ cluster/kubectl.sh run cattle --image=kubernetes/serve_hostname --replicas=5
+$ kubectl run cattle --image=kubernetes/serve_hostname --replicas=5
 
-$ cluster/kubectl.sh get rc
+$ kubectl get rc
 CONTROLLER          CONTAINER(S)        IMAGE(S)                    SELECTOR               REPLICAS
 cattle              cattle              kubernetes/serve_hostname   run=cattle             5
 
-$ cluster/kubectl.sh get pods
+$ kubectl get pods
 POD            IP           CONTAINER(S)   IMAGE(S)                    HOST                                    LABELS                 STATUS    CREATED         MESSAGE
 cattle-1kyvj   10.244.0.4                                              kubernetes-minion-7s1y/23.236.54.97     run=cattle             Running   About an hour
                             cattle         kubernetes/serve_hostname                                                                  Running   About an hour

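A hedged cleanup for the walkthrough above, switching contexts to delete what each namespace owns (controller names from the example; `stop rc` mirrors the `stop replicationcontroller` usage elsewhere in this commit):

```sh
# Tear down the demo content in each namespace.
kubectl config use-context dev
kubectl stop rc snowflake   # removes the controller and its pods
kubectl config use-context prod
kubectl stop rc cattle
```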
@@ -36,13 +36,13 @@ This [guide](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examp
 ## Get your hands dirty
 To show the health check is actually working, first create the pods:
 ```
-# cluster/kubectl.sh create -f exec-liveness.yaml
-# cluster/kubectl.sh create -f http-liveness.yaml
+# kubectl create -f exec-liveness.yaml
+# kubectl create -f http-liveness.yaml
 ```
 
 Check the status of the pods once they are created:
 ```
-# cluster/kubectl.sh get pods
+# kubectl get pods
 POD             IP           CONTAINER(S)   IMAGE(S)                            HOST                                     LABELS          STATUS    CREATED     MESSAGE
 liveness-exec   10.244.3.7                                                      kubernetes-minion-f08h/130.211.122.180   test=liveness   Running   3 seconds
                              liveness       gcr.io/google_containers/busybox                                                             Running   2 seconds
@@ -52,7 +52,7 @@ liveness-http   10.244.0.8
 
 Check the status half a minute later, you will see the termination messages:
 ```
-# cluster/kubectl.sh get pods
+# kubectl get pods
 POD             IP           CONTAINER(S)   IMAGE(S)                            HOST                                     LABELS          STATUS    CREATED      MESSAGE
 liveness-exec   10.244.3.7                                                      kubernetes-minion-f08h/130.211.122.180   test=liveness   Running   34 seconds
                              liveness       gcr.io/google_containers/busybox                                                             Running   3 seconds    last termination: exit code 137
@@ -63,7 +63,7 @@ The termination messages indicate that the liveness probes have failed, and the
 
 You can also see the container restart count being incremented by running `kubectl describe`.
 ```
-# cluster/kubectl.sh describe pods liveness-exec | grep "Restart Count"
+# kubectl describe pods liveness-exec | grep "Restart Count"
 Restart Count:      8
 ```

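To watch the restart count climb as the probes keep failing, a hedged loop around the doc's own describe-and-grep (the interval is an arbitrary assumption):

```sh
# Poll the restart count; each liveness failure kills and restarts the container.
while true; do
  kubectl describe pods liveness-exec | grep "Restart Count"
  sleep 30
done
```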
| @@ -21,7 +21,7 @@ gcloud config set project <project-name> | |||||||
|  |  | ||||||
| Next, grab the Kubernetes [release binary](https://github.com/GoogleCloudPlatform/kubernetes/releases) and start up a Kubernetes cluster: | Next, grab the Kubernetes [release binary](https://github.com/GoogleCloudPlatform/kubernetes/releases) and start up a Kubernetes cluster: | ||||||
| ``` | ``` | ||||||
| $ <kubernetes>/cluster/kube-up.sh | $ cluster/kube-up.sh | ||||||
| ``` | ``` | ||||||
| where `<kubernetes>` is the path to your Kubernetes installation. | where `<kubernetes>` is the path to your Kubernetes installation. | ||||||
|  |  | ||||||
| @@ -104,14 +104,14 @@ Note that we've defined a volume mount for `/var/lib/mysql`, and specified a vol | |||||||
| Once you've edited the file to set your database password, create the pod as follows, where `<kubernetes>` is the path to your Kubernetes installation: | Once you've edited the file to set your database password, create the pod as follows, where `<kubernetes>` is the path to your Kubernetes installation: | ||||||
|  |  | ||||||
| ```shell | ```shell | ||||||
| $ <kubernetes>/cluster/kubectl.sh create -f mysql.yaml | $ kubectl create -f mysql.yaml | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
| It may take a short period before the new pod reaches the `Running` state. | It may take a short period before the new pod reaches the `Running` state. | ||||||
| List all pods to see the status of this new pod and the cluster node that it is running on: | List all pods to see the status of this new pod and the cluster node that it is running on: | ||||||
|  |  | ||||||
| ```shell | ```shell | ||||||
| $ <kubernetes>/cluster/kubectl.sh get pods | $ kubectl get pods | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
|  |  | ||||||
| @@ -120,7 +120,7 @@ $ <kubernetes>/cluster/kubectl.sh get pods | |||||||
| You can take a look at the logs for a pod by using `kubectl.sh log`.  For example: | You can take a look at the logs for a pod by using `kubectl.sh log`.  For example: | ||||||
|  |  | ||||||
| ```shell | ```shell | ||||||
| $ <kubernetes>/cluster/kubectl.sh log mysql | $ kubectl log mysql | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
| If you want to do deeper troubleshooting, e.g. if it seems a container is not staying up, you can also ssh in to the node that a pod is running on.  There, you can run `sudo -s`, then `docker ps -a` to see all the containers.  You can then inspect the logs of containers that have exited, via `docker logs <container_id>`.  (You can also find some relevant logs under `/var/log`, e.g. `docker.log` and `kubelet.log`). | If you want to do deeper troubleshooting, e.g. if it seems a container is not staying up, you can also ssh in to the node that a pod is running on.  There, you can run `sudo -s`, then `docker ps -a` to see all the containers.  You can then inspect the logs of containers that have exited, via `docker logs <container_id>`.  (You can also find some relevant logs under `/var/log`, e.g. `docker.log` and `kubelet.log`). | ||||||
| @@ -153,13 +153,13 @@ spec: | |||||||
| Start the service like this: | Start the service like this: | ||||||
|  |  | ||||||
| ```shell | ```shell | ||||||
| $ <kubernetes>/cluster/kubectl.sh create -f mysql-service.yaml | $ kubectl create -f mysql-service.yaml | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
| You can see what services are running via: | You can see what services are running via: | ||||||
|  |  | ||||||
| ```shell | ```shell | ||||||
| $ <kubernetes>/cluster/kubectl.sh get services | $ kubectl get services | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
|  |  | ||||||
| @@ -203,14 +203,14 @@ spec: | |||||||
| Create the pod: | Create the pod: | ||||||
|  |  | ||||||
| ```shell | ```shell | ||||||
| $ <kubernetes>/cluster/kubectl.sh create -f wordpress.yaml | $ kubectl create -f wordpress.yaml | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
| And list the pods to check that the status of the new pod changes | And list the pods to check that the status of the new pod changes | ||||||
| to `Running`.  As above, this might take a minute. | to `Running`.  As above, this might take a minute. | ||||||
|  |  | ||||||
| ```shell | ```shell | ||||||
| $ <kubernetes>/cluster/kubectl.sh get pods | $ kubectl get pods | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
| ### Start the WordPress service | ### Start the WordPress service | ||||||
| @@ -242,13 +242,13 @@ Note also that we've set the service port to 80.  We'll return to that shortly. | |||||||
| Start the service: | Start the service: | ||||||
|  |  | ||||||
| ```shell | ```shell | ||||||
| $ <kubernetes>/cluster/kubectl.sh create -f wordpress-service.yaml | $ kubectl create -f wordpress-service.yaml | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
| and see it in the list of services: | and see it in the list of services: | ||||||
|  |  | ||||||
| ```shell | ```shell | ||||||
| $ <kubernetes>/cluster/kubectl.sh get services | $ kubectl get services | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
| Then, find the external IP for your WordPress service by listing the forwarding rules for your project: | Then, find the external IP for your WordPress service by listing the forwarding rules for your project: | ||||||
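|  |  | ||||||
| One likely form of that listing, assuming the `gcloud` CLI (a sketch; the output depends on your project): | One likely form of that listing, assuming the `gcloud` CLI (a sketch; the output depends on your project): | ||||||
|  |  | ||||||
| ```shell | ```shell | ||||||
| $ gcloud compute forwarding-rules list | $ gcloud compute forwarding-rules list | ||||||
| ``` | ``` | ||||||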
| @@ -285,8 +285,8 @@ Set up your WordPress blog and play around with it a bit.  Then, take down its p | |||||||
| If you are just experimenting, you can take down and bring up only the pods: | If you are just experimenting, you can take down and bring up only the pods: | ||||||
|  |  | ||||||
| ```shell | ```shell | ||||||
| $ <kubernetes>/cluster/kubectl.sh delete -f wordpress.yaml | $ kubectl delete -f wordpress.yaml | ||||||
| $ <kubernetes>/cluster/kubectl.sh delete -f mysql.yaml | $ kubectl delete -f mysql.yaml | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
| When you restart the pods again (using the `create` operation as described above), their services will pick up the new pods based on their labels. | When you restart the pods again (using the `create` operation as described above), their services will pick up the new pods based on their labels. | ||||||
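|  |  | ||||||
| A quick way to verify that match, assuming the pods carry `name=mysql` and `name=wordpress` labels as in this guide's yaml files: | A quick way to verify that match, assuming the pods carry `name=mysql` and `name=wordpress` labels as in this guide's yaml files: | ||||||
|  |  | ||||||
| ```shell | ```shell | ||||||
| $ kubectl get pods -l name=mysql | $ kubectl get pods -l name=mysql | ||||||
| $ kubectl get pods -l name=wordpress | $ kubectl get pods -l name=wordpress | ||||||
| ``` | ``` | ||||||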
| @@ -296,7 +296,7 @@ If you want to shut down the entire app installation, you can delete the service | |||||||
| If you are ready to turn down your Kubernetes cluster altogether, run: | If you are ready to turn down your Kubernetes cluster altogether, run: | ||||||
|  |  | ||||||
| ```shell | ```shell | ||||||
| $ <kubernetes>/cluster/kube-down.sh | $ cluster/kube-down.sh | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
|  |  | ||||||
|   | |||||||
| @@ -29,8 +29,8 @@ PVs are created by posting them to the API server. | |||||||
|  |  | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
| cluster/kubectl.sh create -f examples/persistent-volumes/volumes/local-01.yaml | kubectl create -f examples/persistent-volumes/volumes/local-01.yaml | ||||||
| cluster/kubectl.sh get pv | kubectl get pv | ||||||
|  |  | ||||||
| NAME        LABELS       CAPACITY            ACCESSMODES         STATUS              CLAIM | NAME        LABELS       CAPACITY            ACCESSMODES         STATUS              CLAIM | ||||||
| pv0001      map[]        10737418240         RWO                 Available                             | pv0001      map[]        10737418240         RWO                 Available                             | ||||||
| @@ -46,8 +46,8 @@ Claims must be created in the same namespace as the pods that use them. | |||||||
|  |  | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
| cluster/kubectl.sh create -f examples/persistent-volumes/claims/claim-01.yaml | kubectl create -f examples/persistent-volumes/claims/claim-01.yaml | ||||||
| cluster/kubectl.sh get pvc | kubectl get pvc | ||||||
|  |  | ||||||
| NAME                LABELS              STATUS              VOLUME | NAME                LABELS              STATUS              VOLUME | ||||||
| myclaim-1           map[]                                    | myclaim-1           map[]                                    | ||||||
| @@ -56,13 +56,13 @@ myclaim-1           map[] | |||||||
| # A background process will attempt to match this claim to a volume. | # A background process will attempt to match this claim to a volume. | ||||||
| # The eventual state of your claim will look something like this: | # The eventual state of your claim will look something like this: | ||||||
|  |  | ||||||
| cluster/kubectl.sh get pvc | kubectl get pvc | ||||||
|  |  | ||||||
| NAME        LABELS    STATUS    VOLUME                                                           | NAME        LABELS    STATUS    VOLUME                                                           | ||||||
| myclaim-1   map[]     Bound     f5c3a89a-e50a-11e4-972f-80e6500a981e     | myclaim-1   map[]     Bound     f5c3a89a-e50a-11e4-972f-80e6500a981e     | ||||||
|  |  | ||||||
|  |  | ||||||
| cluster/kubectl.sh get pv | kubectl get pv | ||||||
|  |  | ||||||
| NAME                LABELS              CAPACITY            ACCESSMODES         STATUS    CLAIM | NAME                LABELS              CAPACITY            ACCESSMODES         STATUS    CLAIM | ||||||
| pv0001              map[]               10737418240         RWO                 Bound     myclaim-1 / 6bef4c40-e50b-11e4-972f-80e6500a981e           | pv0001              map[]               10737418240         RWO                 Bound     myclaim-1 / 6bef4c40-e50b-11e4-972f-80e6500a981e           | ||||||
| @@ -75,16 +75,16 @@ Claims are used as volumes in pods.  Kubernetes uses the claim to look up its bo | |||||||
|  |  | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
| cluster/kubectl.sh create -f examples/persistent-volumes/simpletest/pod.yaml | kubectl create -f examples/persistent-volumes/simpletest/pod.yaml | ||||||
|  |  | ||||||
| cluster/kubectl.sh get pods | kubectl get pods | ||||||
|  |  | ||||||
| POD       IP           CONTAINER(S)   IMAGE(S)   HOST                  LABELS    STATUS    CREATED | POD       IP           CONTAINER(S)   IMAGE(S)   HOST                  LABELS    STATUS    CREATED | ||||||
| mypod     172.17.0.2   myfrontend     nginx      127.0.0.1/127.0.0.1   <none>    Running   12 minutes | mypod     172.17.0.2   myfrontend     nginx      127.0.0.1/127.0.0.1   <none>    Running   12 minutes | ||||||
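|  |  | ||||||
| # For reference, mypod consumes the claim as a volume in its yaml. | # For reference, mypod consumes the claim as a volume in its yaml. | ||||||
| # A hypothetical sketch (these field names follow the persistentVolumeClaim | # A hypothetical sketch (these field names follow the persistentVolumeClaim | ||||||
| # volume source and may differ from this release's example file): | # volume source and may differ from this release's example file): | ||||||
| #   volumes: | #   volumes: | ||||||
| #     - name: mypd | #     - name: mypd | ||||||
| #       persistentVolumeClaim: | #       persistentVolumeClaim: | ||||||
| #         claimName: myclaim-1 | #         claimName: myclaim-1 | ||||||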
|  |  | ||||||
|  |  | ||||||
| cluster/kubectl.sh create -f examples/persistent-volumes/simpletest/service.json | kubectl create -f examples/persistent-volumes/simpletest/service.json | ||||||
| cluster/kubectl.sh get services | kubectl get services | ||||||
|  |  | ||||||
| NAME              LABELS                                    SELECTOR            IP           PORT(S) | NAME              LABELS                                    SELECTOR            IP           PORT(S) | ||||||
| frontendservice   <none>                                    name=frontendhttp   10.0.0.241   3000/TCP | frontendservice   <none>                                    name=frontendhttp   10.0.0.241   3000/TCP | ||||||
|   | |||||||
| @@ -66,13 +66,13 @@ To start Phabricator server use the file [`examples/phabricator/phabricator-cont | |||||||
| Create the phabricator pod in your Kubernetes cluster by running: | Create the phabricator pod in your Kubernetes cluster by running: | ||||||
|  |  | ||||||
| ```shell | ```shell | ||||||
| $ cluster/kubectl.sh create -f examples/phabricator/phabricator-controller.json | $ kubectl create -f examples/phabricator/phabricator-controller.json | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
| Once that's up you can list the pods in the cluster, to verify that it is running: | Once that's up you can list the pods in the cluster, to verify that it is running: | ||||||
|  |  | ||||||
| ```shell | ```shell | ||||||
| cluster/kubectl.sh get pods | kubectl get pods | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
| You'll see a single phabricator pod. It will also display the machine that the pod is running on once it gets placed (may take up to thirty seconds): | You'll see a single phabricator pod. It will also display the machine that the pod is running on once it gets placed (may take up to thirty seconds): | ||||||
| @@ -99,7 +99,7 @@ CONTAINER ID        IMAGE                             COMMAND     CREATED | |||||||
| If you read the logs of the phabricator container you will notice the following error message: | If you read the logs of the phabricator container you will notice the following error message: | ||||||
|  |  | ||||||
| ```bash | ```bash | ||||||
| $ cluster/kubectl.sh log phabricator-controller-02qp4 | $ kubectl log phabricator-controller-02qp4 | ||||||
| [...] | [...] | ||||||
| Raw MySQL Error: Attempt to connect to root@173.194.252.142 failed with error | Raw MySQL Error: Attempt to connect to root@173.194.252.142 failed with error | ||||||
| #2013: Lost connection to MySQL server at 'reading initial communication | #2013: Lost connection to MySQL server at 'reading initial communication | ||||||
| @@ -152,7 +152,7 @@ To automate this process and make sure that a proper host is authorized even if | |||||||
| To create the pod run: | To create the pod run: | ||||||
|  |  | ||||||
| ```shell | ```shell | ||||||
| $ cluster/kubectl.sh create -f examples/phabricator/authenticator-controller.json | $ kubectl create -f examples/phabricator/authenticator-controller.json | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
|  |  | ||||||
| @@ -199,7 +199,7 @@ Use the file [`examples/phabricator/phabricator-service.json`](phabricator-servi | |||||||
| To create the service run: | To create the service run: | ||||||
|  |  | ||||||
| ```shell | ```shell | ||||||
| $ cluster/kubectl.sh create -f examples/phabricator/phabricator-service.json | $ kubectl create -f examples/phabricator/phabricator-service.json | ||||||
| phabricator | phabricator | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
|   | |||||||
| @@ -23,7 +23,7 @@ Once you have installed Ceph and new Kubernetes, you can create a pod based on m | |||||||
| If a Ceph authentication secret is provided, the secret should first be base64 encoded, then the encoded string is placed in a secret yaml. An example yaml is provided [here](secret/ceph-secret.yaml). Then post the secret through ```kubectl``` with the following command. | If a Ceph authentication secret is provided, the secret should first be base64 encoded, then the encoded string is placed in a secret yaml. An example yaml is provided [here](secret/ceph-secret.yaml). Then post the secret through ```kubectl``` with the following command. | ||||||
|  |  | ||||||
| ```console | ```console | ||||||
|     # cluster/kubectl.sh create -f examples/rbd/secret/ceph-secret.yaml |     # kubectl create -f examples/rbd/secret/ceph-secret.yaml | ||||||
| ```	 | ```	 | ||||||
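|  |  | ||||||
| A hedged sketch of producing that encoded string, assuming a standard Ceph admin key (the client name is a placeholder): | A hedged sketch of producing that encoded string, assuming a standard Ceph admin key (the client name is a placeholder): | ||||||
|  |  | ||||||
| ```console | ```console | ||||||
|     # ceph auth get-key client.admin | base64 |     # ceph auth get-key client.admin | base64 | ||||||
| ``` | ``` | ||||||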
|  |  | ||||||
| # Get started | # Get started | ||||||
| @@ -31,8 +31,8 @@ If Ceph authentication secret is provided, the secret should be first be base64 | |||||||
| Here are my commands: | Here are my commands: | ||||||
|  |  | ||||||
| ```console | ```console | ||||||
|     # cluster/kubectl.sh create -f examples/rbd/v1beta3/rbd.json |     # kubectl create -f examples/rbd/v1beta3/rbd.json | ||||||
|     # cluster/kubectl.sh get pods |     # kubectl get pods | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
| On the Kubernetes host, I got the following in the `mount` output | On the Kubernetes host, I got the following in the `mount` output | ||||||
|   | |||||||
| @@ -11,8 +11,8 @@ This example will work in a custom namespace to demonstrate the concepts involve | |||||||
| Let's create a new namespace called quota-example: | Let's create a new namespace called quota-example: | ||||||
|  |  | ||||||
| ```shell | ```shell | ||||||
| $ cluster/kubectl.sh create -f namespace.yaml | $ kubectl create -f namespace.yaml | ||||||
| $ cluster/kubectl.sh get namespaces | $ kubectl get namespaces | ||||||
| NAME            LABELS             STATUS | NAME            LABELS             STATUS | ||||||
| default         <none>             Active | default         <none>             Active | ||||||
| quota-example   <none>             Active | quota-example   <none>             Active | ||||||
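|  |  | ||||||
| # For reference, a hypothetical sketch of what namespace.yaml might contain | # For reference, a hypothetical sketch of what namespace.yaml might contain | ||||||
| # (the apiVersion is an assumption; the example's actual file may differ): | # (the apiVersion is an assumption; the example's actual file may differ): | ||||||
| #   apiVersion: v1 | #   apiVersion: v1 | ||||||
| #   kind: Namespace | #   kind: Namespace | ||||||
| #   metadata: | #   metadata: | ||||||
| #     name: quota-example | #     name: quota-example | ||||||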
| @@ -31,7 +31,7 @@ and API resources (pods, services, etc.) that a namespace may consume. | |||||||
| Let's create a simple quota in our namespace: | Let's create a simple quota in our namespace: | ||||||
|  |  | ||||||
| ```shell | ```shell | ||||||
| $ cluster/kubectl.sh create -f quota.yaml --namespace=quota-example | $ kubectl create -f quota.yaml --namespace=quota-example | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
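| For reference, a hypothetical sketch of what `quota.yaml` could contain; the resource names and values here are illustrative, and the apiVersion is an assumption rather than the example's actual file: | For reference, a hypothetical sketch of what `quota.yaml` could contain; the resource names and values here are illustrative, and the apiVersion is an assumption rather than the example's actual file: | ||||||
|  |  | ||||||
| ```shell | ```shell | ||||||
| $ cat <<EOF > quota.yaml | $ cat <<EOF > quota.yaml | ||||||
| apiVersion: v1    # an assumption; adjust to your cluster's API version | apiVersion: v1    # an assumption; adjust to your cluster's API version | ||||||
| kind: ResourceQuota | kind: ResourceQuota | ||||||
| metadata: | metadata: | ||||||
|   name: quota |   name: quota | ||||||
| spec: | spec: | ||||||
|   hard: |   hard: | ||||||
|     cpu: "20" |     cpu: "20" | ||||||
|     memory: 1Gi |     memory: 1Gi | ||||||
|     pods: "10" |     pods: "10" | ||||||
|     services: "5" |     services: "5" | ||||||
| EOF | EOF | ||||||
| ``` | ``` | ||||||
|  |  | ||||||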
| Once your quota is applied to a namespace, the system will restrict any creation of content | Once your quota is applied to a namespace, the system will restrict any creation of content | ||||||
| @@ -41,7 +41,7 @@ You can describe your current quota usage to see what resources are being consum | |||||||
| namespace. | namespace. | ||||||
|  |  | ||||||
| ``` | ``` | ||||||
| $ cluster/kubectl.sh describe quota quota --namespace=quota-example | $ kubectl describe quota quota --namespace=quota-example | ||||||
| Name:     quota | Name:     quota | ||||||
| Resource    Used  Hard | Resource    Used  Hard | ||||||
| --------    ----  ---- | --------    ----  ---- | ||||||
| @@ -65,7 +65,7 @@ cpu and memory by creating an nginx container. | |||||||
| To demonstrate, let's create a replication controller that runs nginx: | To demonstrate, let's create a replication controller that runs nginx: | ||||||
|  |  | ||||||
| ```shell | ```shell | ||||||
| $ cluster/kubectl.sh run nginx --image=nginx --replicas=1 --namespace=quota-example | $ kubectl run nginx --image=nginx --replicas=1 --namespace=quota-example | ||||||
| CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR    REPLICAS | CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR    REPLICAS | ||||||
| nginx        nginx          nginx      run=nginx   1 | nginx        nginx          nginx      run=nginx   1 | ||||||
| ``` | ``` | ||||||
| @@ -73,14 +73,14 @@ nginx        nginx          nginx      run=nginx   1 | |||||||
| Now let's look at the pods that were created. | Now let's look at the pods that were created. | ||||||
|  |  | ||||||
| ```shell | ```shell | ||||||
| $ cluster/kubectl.sh get pods --namespace=quota-example | $ kubectl get pods --namespace=quota-example | ||||||
| POD       IP        CONTAINER(S)   IMAGE(S)   HOST      LABELS    STATUS    CREATED   MESSAGE | POD       IP        CONTAINER(S)   IMAGE(S)   HOST      LABELS    STATUS    CREATED   MESSAGE | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
| What happened?  I have no pods!  Let's describe the replication controller to get a view of what is happening. | What happened?  I have no pods!  Let's describe the replication controller to get a view of what is happening. | ||||||
|  |  | ||||||
| ```shell | ```shell | ||||||
| cluster/kubectl.sh describe rc nginx --namespace=quota-example | kubectl describe rc nginx --namespace=quota-example | ||||||
| Name:   nginx | Name:   nginx | ||||||
| Image(s): nginx | Image(s): nginx | ||||||
| Selector: run=nginx | Selector: run=nginx | ||||||
| @@ -98,9 +98,9 @@ do not specify any memory usage. | |||||||
| So let's set some default limits for the amount of cpu and memory a pod can consume: | So let's set some default limits for the amount of cpu and memory a pod can consume: | ||||||
|  |  | ||||||
| ```shell | ```shell | ||||||
| $ cluster/kubectl.sh create -f limits.yaml --namespace=quota-example | $ kubectl create -f limits.yaml --namespace=quota-example | ||||||
| limitranges/limits | limitranges/limits | ||||||
| $ cluster/kubectl.sh describe limits limits --namespace=quota-example | $ kubectl describe limits limits --namespace=quota-example | ||||||
| Name:   limits | Name:   limits | ||||||
| Type    Resource  Min Max Default | Type    Resource  Min Max Default | ||||||
| ----    --------  --- --- --- | ----    --------  --- --- --- | ||||||
| @@ -115,7 +115,7 @@ Now that we have applied default limits for our namespace, our replication contr | |||||||
| create its pods. | create its pods. | ||||||
|  |  | ||||||
| ```shell | ```shell | ||||||
| $ cluster/kubectl.sh get pods --namespace=quota-example | $ kubectl get pods --namespace=quota-example | ||||||
| POD           IP         CONTAINER(S)   IMAGE(S)   HOST                    LABELS      STATUS    CREATED     MESSAGE | POD           IP         CONTAINER(S)   IMAGE(S)   HOST                    LABELS      STATUS    CREATED     MESSAGE | ||||||
| nginx-t40zm   10.0.0.2                             10.245.1.3/10.245.1.3   run=nginx   Running   2 minutes | nginx-t40zm   10.0.0.2                             10.245.1.3/10.245.1.3   run=nginx   Running   2 minutes | ||||||
|                          nginx          nginx                                          Running   2 minutes |                          nginx          nginx                                          Running   2 minutes | ||||||
| @@ -124,7 +124,7 @@ nginx-t40zm   10.0.0.2                             10.245.1.3/10.245.1.3   run=n | |||||||
| And if we print out our quota usage in the namespace: | And if we print out our quota usage in the namespace: | ||||||
|  |  | ||||||
| ```shell | ```shell | ||||||
| cluster/kubectl.sh describe quota quota --namespace=quota-example | kubectl describe quota quota --namespace=quota-example | ||||||
| Name:     quota | Name:     quota | ||||||
| Resource    Used    Hard | Resource    Used    Hard | ||||||
| --------    ----    ---- | --------    ----    ---- | ||||||
|   | |||||||
| @@ -31,8 +31,8 @@ $ ./cluster/kube-up.sh | |||||||
| You can use bash job control to run this in the background (note that you must use the default port -- 8001 -- for the following demonstration to work properly).  This can sometimes spew output, so you may prefer to run it in a different terminal. | You can use bash job control to run this in the background (note that you must use the default port -- 8001 -- for the following demonstration to work properly).  This can sometimes spew output, so you may prefer to run it in a different terminal. | ||||||
|  |  | ||||||
| ``` | ``` | ||||||
| $ ./cluster/kubectl.sh proxy --www=examples/update-demo/local/ & | $ ./kubectl proxy --www=examples/update-demo/local/ & | ||||||
| + ./cluster/kubectl.sh proxy --www=examples/update-demo/local/ | + ./kubectl proxy --www=examples/update-demo/local/ | ||||||
| I0218 15:18:31.623279   67480 proxy.go:36] Starting to serve on localhost:8001 | I0218 15:18:31.623279   67480 proxy.go:36] Starting to serve on localhost:8001 | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
| @@ -42,7 +42,7 @@ Now visit the [demo website](http://localhost:8001/static).  You won't see a | |||||||
| Now we will turn up two replicas of an image.  They all serve on internal port 80. | Now we will turn up two replicas of an image.  They all serve on internal port 80. | ||||||
|  |  | ||||||
| ```bash | ```bash | ||||||
| $ ./cluster/kubectl.sh create -f examples/update-demo/nautilus-rc.yaml | $ ./kubectl create -f examples/update-demo/nautilus-rc.yaml | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
| After pulling the image from the Docker Hub to your worker nodes (which may take a minute or so) you'll see a couple of squares in the UI detailing the pods that are running along with the image that they are serving up.  A cute little nautilus. | After pulling the image from the Docker Hub to your worker nodes (which may take a minute or so) you'll see a couple of squares in the UI detailing the pods that are running along with the image that they are serving up.  A cute little nautilus. | ||||||
| @@ -52,7 +52,7 @@ After pulling the image from the Docker Hub to your worker nodes (which may take | |||||||
| Now we will increase the number of replicas from two to four: | Now we will increase the number of replicas from two to four: | ||||||
|  |  | ||||||
| ```bash | ```bash | ||||||
| $ ./cluster/kubectl.sh scale rc update-demo-nautilus --replicas=4 | $ ./kubectl scale rc update-demo-nautilus --replicas=4 | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
| If you go back to the [demo website](http://localhost:8001/static/index.html) you should eventually see four boxes, one for each pod. | If you go back to the [demo website](http://localhost:8001/static/index.html) you should eventually see four boxes, one for each pod. | ||||||
| @@ -61,7 +61,7 @@ If you go back to the [demo website](http://localhost:8001/static/index.html) yo | |||||||
| We will now update the pods to serve a different image by doing a rolling update to a new Docker image. | We will now update the pods to serve a different image by doing a rolling update to a new Docker image. | ||||||
|  |  | ||||||
| ```bash | ```bash | ||||||
| $ ./cluster/kubectl.sh rolling-update update-demo-nautilus --update-period=10s -f examples/update-demo/kitten-rc.yaml | $ ./kubectl rolling-update update-demo-nautilus --update-period=10s -f examples/update-demo/kitten-rc.yaml | ||||||
| ``` | ``` | ||||||
| The rolling-update command in kubectl will do two things: | The rolling-update command in kubectl will do two things: | ||||||
|  |  | ||||||
| @@ -73,7 +73,7 @@ Watch the [demo website](http://localhost:8001/static/index.html), it will updat | |||||||
| ### Step Five: Bring down the pods | ### Step Five: Bring down the pods | ||||||
|  |  | ||||||
| ```bash | ```bash | ||||||
| $ ./cluster/kubectl.sh stop rc update-demo-kitten | $ ./kubectl stop rc update-demo-kitten | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
| This will first 'stop' the replication controller by turning the target number of replicas to 0.  It'll then delete that controller. | This will first 'stop' the replication controller by turning the target number of replicas to 0.  It'll then delete that controller. | ||||||
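|  |  | ||||||
| Roughly the same effect, as a sketch in two explicit steps: | Roughly the same effect, as a sketch in two explicit steps: | ||||||
|  |  | ||||||
| ```bash | ```bash | ||||||
| $ ./kubectl scale rc update-demo-kitten --replicas=0 | $ ./kubectl scale rc update-demo-kitten --replicas=0 | ||||||
| $ ./kubectl delete rc update-demo-kitten | $ ./kubectl delete rc update-demo-kitten | ||||||
| ``` | ``` | ||||||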
| @@ -91,9 +91,9 @@ After you are done running this demo make sure to kill it: | |||||||
|  |  | ||||||
| ```bash | ```bash | ||||||
| $ jobs | $ jobs | ||||||
| [1]+  Running                 ./cluster/kubectl.sh proxy --www=local/ & | [1]+  Running                 ./kubectl proxy --www=local/ & | ||||||
| $ kill %1 | $ kill %1 | ||||||
| [1]+  Terminated: 15          ./cluster/kubectl.sh proxy --www=local/ | [1]+  Terminated: 15          ./kubectl proxy --www=local/ | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
| ### Updating the Docker images | ### Updating the Docker images | ||||||
|   | |||||||
| @@ -9,7 +9,7 @@ scaling. | |||||||
| Having already learned about Pods and how to create them, you may be struck by an urge to create many, many pods.  Please do!  But eventually you will need a system to organize these pods into groups.  The system for achieving this in Kubernetes is Labels.  Labels are key-value pairs that are attached to each object in Kubernetes.  Label selectors can be passed along with a RESTful ```list``` request to the apiserver to retrieve a list of objects which match that label selector.  For example: | Having already learned about Pods and how to create them, you may be struck by an urge to create many, many pods.  Please do!  But eventually you will need a system to organize these pods into groups.  The system for achieving this in Kubernetes is Labels.  Labels are key-value pairs that are attached to each object in Kubernetes.  Label selectors can be passed along with a RESTful ```list``` request to the apiserver to retrieve a list of objects which match that label selector.  For example: | ||||||
|  |  | ||||||
| ```sh | ```sh | ||||||
| cluster/kubectl.sh get pods -l name=nginx | kubectl get pods -l name=nginx | ||||||
| ``` | ``` | ||||||
|  |  | ||||||
| This lists all pods whose `name` label matches 'nginx'.  Labels are discussed in detail [elsewhere](http://docs.k8s.io/labels.md), but they are a core concept for two additional Kubernetes building blocks: Replication Controllers and Services. | This lists all pods whose `name` label matches 'nginx'.  Labels are discussed in detail [elsewhere](http://docs.k8s.io/labels.md), but they are a core concept for two additional Kubernetes building blocks: Replication Controllers and Services. | ||||||
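|  |  | ||||||
| To make this concrete, a small sketch (the label keys and values are illustrative): | To make this concrete, a small sketch (the label keys and values are illustrative): | ||||||
|  |  | ||||||
| ```sh | ```sh | ||||||
| # Labels live in an object's metadata, e.g. in a pod's yaml: | # Labels live in an object's metadata, e.g. in a pod's yaml: | ||||||
| #   metadata: | #   metadata: | ||||||
| #     labels: | #     labels: | ||||||
| #       name: nginx | #       name: nginx | ||||||
| #       env: prod | #       env: prod | ||||||
| # A selector can then match on one or more of them at once: | # A selector can then match on one or more of them at once: | ||||||
| kubectl get pods -l name=nginx,env=prod | kubectl get pods -l name=nginx,env=prod | ||||||
| ``` | ``` | ||||||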
|   | |||||||