Merge pull request #10941 from caesarxuchao/getting-started-output

update the kubectl output in getting-started-guides
This commit is contained in:
Victor Marmol 2015-07-08 15:55:44 -07:00
commit a599d80343
8 changed files with 38 additions and 39 deletions
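Across these guides the change is the same: `kubectl get minions` becomes `kubectl get nodes`, and pod listings move from the old `POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS` columns to the newer `NAME READY STATUS RESTARTS AGE` layout. A minimal sketch of reading the new layout follows; the sample rows are taken from the updated guestbook guide below, while the awk one-liner is purely illustrative and not part of this diff:

```shell
# New-format `kubectl get pods` output (columns: NAME READY STATUS RESTARTS AGE).
# Sample rows copied from the updated guestbook guide in this PR.
pods='NAME                 READY     STATUS    RESTARTS   AGE
frontend-8anh8       1/1       Running   0          1m
redis-master-u0my3   1/1       Running   0          1m'

# Scripts that scraped the old POD/.../STATUS layout should now read
# column 1 for the pod name and column 3 for its status (illustrative awk).
echo "$pods" | awk 'NR > 1 && $3 == "Running" { print $1 }'
# prints:
# frontend-8anh8
# redis-master-u0my3
```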


@@ -156,7 +156,7 @@ done
 * Check to make sure the cluster can see the minion (on centos-master)
 ```
-kubectl get minions
+kubectl get nodes
 NAME            LABELS    STATUS
 centos-minion   <none>    Ready
 ```


@@ -91,14 +91,13 @@ kubectl get pods --watch
 Eventually you should see:
 ```
-POD                            IP          CONTAINER(S)   IMAGE(S)                                 HOST                  LABELS                                       STATUS
-frontend-controller-0133o      10.2.1.14   php-redis      kubernetes/example-guestbook-php-redis   kube-01/172.18.0.13   name=frontend,uses=redisslave,redis-master   Running
-frontend-controller-ls6k1      10.2.3.10   php-redis      kubernetes/example-guestbook-php-redis   <unassigned>          name=frontend,uses=redisslave,redis-master   Running
-frontend-controller-oh43e      10.2.2.15   php-redis      kubernetes/example-guestbook-php-redis   kube-02/172.18.0.14   name=frontend,uses=redisslave,redis-master   Running
-redis-master                   10.2.1.3    master         redis                                    kube-01/172.18.0.13   name=redis-master                            Running
-redis-slave-controller-fplln   10.2.2.3    slave          brendanburns/redis-slave                 kube-02/172.18.0.14   name=redisslave,uses=redis-master            Running
-redis-slave-controller-gziey   10.2.1.4    slave          brendanburns/redis-slave                 kube-01/172.18.0.13   name=redisslave,uses=redis-master            Running
+NAME                 READY     STATUS    RESTARTS   AGE
+frontend-8anh8       1/1       Running   0          1m
+frontend-8pq5r       1/1       Running   0          1m
+frontend-v7tbq       1/1       Running   0          1m
+redis-master-u0my3   1/1       Running   0          1m
+redis-slave-4eznf    1/1       Running   0          1m
+redis-slave-hf40f    1/1       Running   0          1m
 ```
 ## Scaling
@@ -170,11 +169,11 @@ You now will have more instances of front-end Guestbook apps and Redis slaves; a
 ```
 core@kube-00 ~/guestbook-example $ kubectl get pods -l name=frontend
-POD                         IP          CONTAINER(S)   IMAGE(S)                                 HOST                  LABELS                                       STATUS
-frontend-controller-0133o   10.2.1.19   php-redis      kubernetes/example-guestbook-php-redis   kube-01/172.18.0.13   name=frontend,uses=redisslave,redis-master   Running
-frontend-controller-i7hvs   10.2.4.5    php-redis      kubernetes/example-guestbook-php-redis   kube-04/172.18.0.21   name=frontend,uses=redisslave,redis-master   Running
-frontend-controller-ls6k1   10.2.3.18   php-redis      kubernetes/example-guestbook-php-redis   kube-03/172.18.0.20   name=frontend,uses=redisslave,redis-master   Running
-frontend-controller-oh43e   10.2.2.22   php-redis      kubernetes/example-guestbook-php-redis   kube-02/172.18.0.14   name=frontend,uses=redisslave,redis-master   Running
+NAME             READY     STATUS    RESTARTS   AGE
+frontend-8anh8   1/1       Running   0          3m
+frontend-8pq5r   1/1       Running   0          3m
+frontend-oz8uo   1/1       Running   0          51s
+frontend-v7tbq   1/1       Running   0          3m
 ```
 ## Exposing the app to the outside world


@@ -32,7 +32,7 @@ Deploy a CoreOS running Kubernetes environment. This particular guild is made to
 3. Update the DHCP config to reflect the host needing deployment
 4. Setup nodes to deploy CoreOS creating a etcd cluster.
 5. Have no access to the public [etcd discovery tool](https://discovery.etcd.io/).
-6. Installing the CoreOS slaves to become Kubernetes minions.
+6. Installing the CoreOS slaves to become Kubernetes nodes.
 ## This Guides variables
 | Node Description | MAC | IP |
@@ -649,7 +649,7 @@ Check system status of services on a minion node:
 List Kubernetes
     kubectl get pods
-    kubectl get minions
+    kubectl get nodes
 Kill all pods:


@@ -102,7 +102,7 @@ KUBERNETES_MASTER to point at the ip of `kubernetes-master/0`.
 No pods will be available before starting a container:
     kubectl get pods
-    POD       CONTAINER(S)   IMAGE(S)   HOST      LABELS    STATUS
+    NAME      READY     STATUS    RESTARTS   AGE
     kubectl get replicationcontrollers
     CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR   REPLICAS


@@ -41,7 +41,7 @@ viewer should be running soon after the cluster comes to life.
 ```
 $ kubectl get pods
-NAME                                           READY     REASON    RESTARTS   AGE
+NAME                                           READY     STATUS    RESTARTS   AGE
 elasticsearch-logging-v1-78nog                 1/1       Running   0          2h
 elasticsearch-logging-v1-nj2nb                 1/1       Running   0          2h
 fluentd-elasticsearch-kubernetes-minion-5oq0   1/1       Running   0          2h


@@ -6,7 +6,7 @@ Cluster level logging for Kubernetes allows us to collect logs which persist bey
 ```
 $ kubectl get pods
-NAME                                           READY     REASON    RESTARTS   AGE
+NAME                                           READY     STATUS    RESTARTS   AGE
 fluentd-cloud-logging-kubernetes-minion-0f64   1/1       Running   0          32m
 fluentd-cloud-logging-kubernetes-minion-27gf   1/1       Running   0          32m
 fluentd-cloud-logging-kubernetes-minion-pk22   1/1       Running   0          31m
@@ -45,7 +45,7 @@ This pod specification has one container which runs a bash script when the conta
 We can observe the running pod:
 ```
 $ kubectl get pods
-NAME                                           READY     REASON    RESTARTS   AGE
+NAME                                           READY     STATUS    RESTARTS   AGE
 counter                                        1/1       Running   0          5m
 fluentd-cloud-logging-kubernetes-minion-0f64   1/1       Running   0          55m
 fluentd-cloud-logging-kubernetes-minion-27gf   1/1       Running   0          55m


@@ -137,7 +137,7 @@ Interact with the kubernetes-mesos framework via `kubectl`:
 ```bash
 $ kubectl get pods
-NAME      READY     REASON    RESTARTS   AGE
+NAME      READY     STATUS    RESTARTS   AGE
 ```

 ```bash
@@ -183,7 +183,7 @@ We can use the `kubectl` interface to monitor the status of our pod:
 ```bash
 $ kubectl get pods
-NAME      READY     REASON    RESTARTS   AGE
+NAME      READY     STATUS    RESTARTS   AGE
 nginx     1/1       Running   0          14s
 ```


@@ -181,13 +181,13 @@ Before starting a container there will be no pods, services and replication cont
 ```sh
 $ ./cluster/kubectl.sh get pods
-NAME      IMAGE(S)   HOST      LABELS    STATUS
+NAME      READY     STATUS    RESTARTS   AGE
 $ ./cluster/kubectl.sh get services
-NAME      LABELS    SELECTOR   IP        PORT
+NAME      LABELS    SELECTOR   IP(S)     PORT(S)
 $ ./cluster/kubectl.sh get replicationcontrollers
-NAME      IMAGE(S   SELECTOR   REPLICAS
+CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR   REPLICAS
 ```
 Start a container running nginx with a replication controller and three replicas
@@ -200,10 +200,10 @@ When listing the pods, you will see that three containers have been started and
 ```sh
 $ ./cluster/kubectl.sh get pods
-NAME                                   IMAGE(S)   HOST                    LABELS         STATUS
-781191ff-3ffe-11e4-9036-0800279696e1   nginx      10.245.2.4/10.245.2.4   name=myNginx   Waiting
-7813c8bd-3ffe-11e4-9036-0800279696e1   nginx      10.245.2.2/10.245.2.2   name=myNginx   Waiting
-78140853-3ffe-11e4-9036-0800279696e1   nginx      10.245.2.3/10.245.2.3   name=myNginx   Waiting
+NAME             READY     STATUS    RESTARTS   AGE
+my-nginx-5kq0g   0/1       Pending   0          10s
+my-nginx-gr3hh   0/1       Pending   0          10s
+my-nginx-xql4j   0/1       Pending   0          10s
 ```
 You need to wait for the provisioning to complete, you can monitor the nodes by doing:
@@ -233,17 +233,17 @@ Going back to listing the pods, services and replicationcontrollers, you now hav
 ```sh
 $ ./cluster/kubectl.sh get pods
-NAME                                   IMAGE(S)   HOST                    LABELS         STATUS
-781191ff-3ffe-11e4-9036-0800279696e1   nginx      10.245.2.4/10.245.2.4   name=myNginx   Running
-7813c8bd-3ffe-11e4-9036-0800279696e1   nginx      10.245.2.2/10.245.2.2   name=myNginx   Running
-78140853-3ffe-11e4-9036-0800279696e1   nginx      10.245.2.3/10.245.2.3   name=myNginx   Running
+NAME             READY     STATUS    RESTARTS   AGE
+my-nginx-5kq0g   1/1       Running   0          1m
+my-nginx-gr3hh   1/1       Running   0          1m
+my-nginx-xql4j   1/1       Running   0          1m
 $ ./cluster/kubectl.sh get services
-NAME      LABELS    SELECTOR   IP        PORT
+NAME      LABELS    SELECTOR   IP(S)     PORT(S)
 $ ./cluster/kubectl.sh get replicationcontrollers
-NAME      IMAGE(S   SELECTOR       REPLICAS
-myNginx   nginx     name=my-nginx  3
+CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR       REPLICAS
+my-nginx     my-nginx       nginx      run=my-nginx   3
 ```
 We did not start any services, hence there are none listed. But we see three replicas displayed properly.
@@ -253,9 +253,9 @@ You can already play with scaling the replicas with:
 ```sh
 $ ./cluster/kubectl.sh scale rc my-nginx --replicas=2
 $ ./cluster/kubectl.sh get pods
-NAME                                   IMAGE(S)   HOST                    LABELS         STATUS
-7813c8bd-3ffe-11e4-9036-0800279696e1   nginx      10.245.2.2/10.245.2.2   name=myNginx   Running
-78140853-3ffe-11e4-9036-0800279696e1   nginx      10.245.2.3/10.245.2.3   name=myNginx   Running
+NAME             READY     STATUS    RESTARTS   AGE
+my-nginx-5kq0g   1/1       Running   0          2m
+my-nginx-gr3hh   1/1       Running   0          2m
 ```
 Congratulations!
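One practical upside of the new listing shown in the final hunk: verifying a `scale` operation no longer requires parsing UID-style pod names. A hypothetical check against the new columns (sample rows are from the hunk above; the grep one-liner is illustrative and not part of this diff):

```shell
# Count my-nginx replicas from new-format `kubectl get pods` output.
# Sample rows copied from the scaled-down listing in this PR.
pods='NAME             READY     STATUS    RESTARTS   AGE
my-nginx-5kq0g   1/1       Running   0          2m
my-nginx-gr3hh   1/1       Running   0          2m'

# Replica pods now share the controller-name prefix, so counting is a grep away.
echo "$pods" | grep -c '^my-nginx-'
# prints 2
```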