Update output of kubectl in examples

Chao Xu 2015-07-07 22:52:52 -07:00
parent 127fe8d4a5
commit f8047aa635
6 changed files with 63 additions and 73 deletions


@@ -20,21 +20,17 @@ support local storage on the host at this time. There is no guarantee your pod
```
// this will be nginx's webroot
-mkdir /tmp/data01
-echo 'I love Kubernetes storage!' > /tmp/data01/index.html
+$ mkdir /tmp/data01
+$ echo 'I love Kubernetes storage!' > /tmp/data01/index.html
```
PVs are created by posting them to the API server.
```
-kubectl create -f examples/persistent-volumes/volumes/local-01.yaml
-kubectl get pv
-NAME      LABELS   CAPACITY      ACCESSMODES   STATUS      CLAIM
-pv0001    map[]    10737418240   RWO           Available
+$ kubectl create -f examples/persistent-volumes/volumes/local-01.yaml
+$ kubectl get pv
+NAME      LABELS       CAPACITY      ACCESSMODES   STATUS      CLAIM     REASON
+pv0001    type=local   10737418240   RWO           Available
```
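For reference, a PV of the shape shown above can be written as a manifest along these lines. This is a hedged sketch, not the verbatim contents of local-01.yaml: the capacity and label are read off the `kubectl get pv` output (10737418240 bytes is 10Gi), everything else is assumed.

```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0001
  labels:
    type: local          # shows up in the LABELS column above
spec:
  capacity:
    storage: 10Gi        # reported as 10737418240 bytes
  accessModes:
    - ReadWriteOnce      # shown as RWO
  hostPath:
    path: /tmp/data01    # the webroot directory created earlier
```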
## Requesting storage
@@ -46,9 +42,9 @@ Claims must be created in the same namespace as the pods that use them.
```
-kubectl create -f examples/persistent-volumes/claims/claim-01.yaml
-kubectl get pvc
+$ kubectl create -f examples/persistent-volumes/claims/claim-01.yaml
+$ kubectl get pvc
NAME        LABELS    STATUS    VOLUME
myclaim-1   map[]
@@ -56,17 +52,13 @@ myclaim-1 map[]
# A background process will attempt to match this claim to a volume.
# The eventual state of your claim will look something like this:
-kubectl get pvc
-NAME        LABELS    STATUS    VOLUME
-myclaim-1   map[]     Bound     f5c3a89a-e50a-11e4-972f-80e6500a981e
-kubectl get pv
-NAME      LABELS   CAPACITY      ACCESSMODES   STATUS   CLAIM
-pv0001    map[]    10737418240   RWO           Bound    myclaim-1 / 6bef4c40-e50b-11e4-972f-80e6500a981e
+$ kubectl get pvc
+NAME        LABELS    STATUS    VOLUME
+myclaim-1   map[]     Bound     pv0001
+$ kubectl get pv
+NAME      LABELS       CAPACITY      ACCESSMODES   STATUS   CLAIM               REASON
+pv0001    type=local   10737418240   RWO           Bound    default/myclaim-1
```
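The claim itself is a small manifest. A hedged sketch of what claim-01.yaml plausibly contains (the requested size is a placeholder, not the verbatim file; any request the 10Gi PV can satisfy would bind the same way):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi   # placeholder; must fit within the PV's 10Gi capacity
```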
## Using your claim as a volume
@@ -74,19 +66,15 @@ pv0001 map[] 10737418240 RWO
Claims are used as volumes in pods. Kubernetes uses the claim to look up its bound PV. The PV is then exposed to the pod.
```
-kubectl create -f examples/persistent-volumes/simpletest/pod.yaml
-kubectl get pods
-POD       IP           CONTAINER(S)   IMAGE(S)   HOST                  LABELS   STATUS    CREATED
-mypod     172.17.0.2   myfrontend     nginx      127.0.0.1/127.0.0.1   <none>   Running   12 minutes
-kubectl create -f examples/persistent-volumes/simpletest/service.json
-kubectl get services
-NAME              LABELS                                    SELECTOR            IP           PORT(S)
+$ kubectl create -f examples/persistent-volumes/simpletest/pod.yaml
+$ kubectl get pods
+NAME      READY     STATUS    RESTARTS   AGE
+mypod     1/1       Running   0          1h
+$ kubectl create -f examples/persistent-volumes/simpletest/service.json
+$ kubectl get services
+NAME              LABELS                                    SELECTOR            IP(S)        PORT(S)
frontendservice   <none>                                    name=frontendhttp   10.0.0.241   3000/TCP
kubernetes        component=apiserver,provider=kubernetes   <none>              10.0.0.2     443/TCP
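To make the claim-to-volume wiring concrete: the pod names the claim under `spec.volumes` and mounts it like any other volume. A hedged sketch, consistent with the pod.yaml fragment in the next file (the volume name and mount path are illustrative, not taken from simpletest/pod.yaml):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"   # nginx webroot, illustrative
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim-1                   # the claim bound above
```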


@@ -7,7 +7,7 @@ metadata:
spec:
  containers:
    - name: myfrontend
-      image: dockerfile/nginx
+      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"


@@ -78,8 +78,8 @@ kubectl get pods
You'll see a single phabricator pod. It will also display the machine that the pod is running on once it gets placed (may take up to thirty seconds):
```
-POD                            IP            CONTAINER(S)   IMAGE(S)                   HOST                                                        LABELS             STATUS
-phabricator-controller-02qp4   10.244.1.34   phabricator    fgrzadkowski/phabricator   kubernetes-minion-2.c.myproject.internal/130.211.141.151   name=phabricator   Running
+NAME                           READY     STATUS    RESTARTS   AGE
+phabricator-controller-9vy68   1/1       Running   0          1m
```
If you ssh to that machine, you can run `docker ps` to see the actual pod:
@@ -203,7 +203,7 @@ phabricator
To play with the service itself, find the external IP of the load balancer:
```shell
-$ kubectl get services guestbook -o template --template='{{(index .status.loadBalancer.ingress 0).ip}}'
+$ kubectl get services phabricator -o template --template='{{(index .status.loadBalancer.ingress 0).ip}}{{"\n"}}'
```
and then visit port 80 of that IP address.
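For instance, capturing the IP and hitting port 80 from a shell (a sketch reusing the template query above; assumes the load balancer has finished provisioning):

```shell
$ IP=$(kubectl get services phabricator -o template --template='{{(index .status.loadBalancer.ingress 0).ip}}')
$ curl "http://$IP/"   # phabricator answers on port 80
```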


@@ -42,17 +42,18 @@ namespace.
```
$ kubectl describe quota quota --namespace=quota-example
-Name:                     quota
-Resource                  Used    Hard
---------                  ----    ----
-cpu                       0m      20
-memory                    0m      1Gi
-persistentvolumeclaims    0m      10
-pods                      0m      10
-replicationcontrollers    0m      20
-resourcequotas            1       1
-secrets                   1       10
-services                  0m      5
+Name:                     quota
+Namespace:                quota-example
+Resource                  Used    Hard
+--------                  ----    ----
+cpu                       0       20
+memory                    0       1Gi
+persistentvolumeclaims    0       10
+pods                      0       10
+replicationcontrollers    0       20
+resourcequotas            1       1
+secrets                   1       10
+services                  0       5
```
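The Hard column above maps one-to-one onto a ResourceQuota spec. A hedged sketch of what quota.yaml plausibly contains (limits read directly off the output; the surrounding structure assumed):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota
spec:
  hard:                          # each entry matches the Hard column above
    cpu: "20"
    memory: 1Gi
    persistentvolumeclaims: "10"
    pods: "10"
    replicationcontrollers: "20"
    resourcequotas: "1"
    secrets: "10"
    services: "5"
```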
## Step 3: Applying default resource limits
@@ -74,7 +75,7 @@ Now let's look at the pods that were created.
```shell
$ kubectl get pods --namespace=quota-example
-POD       IP        CONTAINER(S)   IMAGE(S)   HOST      LABELS    STATUS    CREATED   MESSAGE
+NAME      READY     STATUS    RESTARTS   AGE
```
What happened? I have no pods! Let's describe the replication controller to get a view of what is happening.
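That command would look something like the following; the rc name `nginx` is an assumption, inferred from the `nginx-*` pod name that appears later in this walkthrough:

```shell
$ kubectl describe rc nginx --namespace=quota-example
```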
@@ -101,11 +102,12 @@ So let's set some default limits for the amount of cpu and memory a pod can consume
$ kubectl create -f limits.yaml --namespace=quota-example
limitranges/limits
$ kubectl describe limits limits --namespace=quota-example
-Name:        limits
-Type         Resource   Min   Max   Default
-----         --------   ---   ---   ---
-Container    cpu        -     -     100m
-Container    memory     -     -     512Mi
+Name:        limits
+Namespace:   quota-example
+Type         Resource   Min   Max   Default
+----         --------   ---   ---   ---
+Container    memory     -     -     512Mi
+Container    cpu        -     -     100m
```
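A hedged sketch of what limits.yaml plausibly contains, matching the Default column above (values from the output, structure assumed):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limits
spec:
  limits:
    - type: Container
      default:          # applied when a pod specifies no limits of its own
        cpu: 100m
        memory: 512Mi
```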
Now any time a pod is created in this namespace, if it has not specified any resource limits, the default
@@ -116,26 +118,26 @@ create its pods.
```shell
$ kubectl get pods --namespace=quota-example
-POD           IP         CONTAINER(S)   IMAGE(S)   HOST                    LABELS      STATUS    CREATED     MESSAGE
-nginx-t40zm   10.0.0.2                             10.245.1.3/10.245.1.3   run=nginx   Running   2 minutes
-                         nginx          nginx                                          Running   2 minutes
+NAME          READY     STATUS    RESTARTS   AGE
+nginx-t9cap   1/1       Running   0          49s
```
And if we print out our quota usage in the namespace:
```shell
kubectl describe quota quota --namespace=quota-example
-Name:                     quota
-Resource                  Used        Hard
---------                  ----        ----
-cpu                       100m        20
-memory                    536870912   1Gi
-persistentvolumeclaims    0m          10
-pods                      1           10
-replicationcontrollers    1           20
-resourcequotas            1           1
-secrets                   1           10
-services                  0m          5
+Name:                     quota
+Namespace:                default
+Resource                  Used        Hard
+--------                  ----        ----
+cpu                       100m        20
+memory                    536870912   1Gi
+persistentvolumeclaims    0           10
+pods                      1           10
+replicationcontrollers    1           20
+resourcequotas            1           1
+secrets                   1           10
+services                  0           5
```
You can now see the pod that was created is consuming explicit amounts of resources, and the usage is being


@@ -46,7 +46,7 @@ $ kubectl create -f examples/spark/spark-master-service.json
```shell
$ kubectl get pods
-NAME           READY     REASON    RESTARTS   AGE
+NAME           READY     STATUS    RESTARTS   AGE
[...]
spark-master   1/1       Running   0          25s
@@ -97,7 +97,7 @@ $ kubectl create -f examples/spark/spark-worker-controller.json
```shell
$ kubectl get pods
-NAME                            READY     REASON    RESTARTS   AGE
+NAME                            READY     STATUS    RESTARTS   AGE
[...]
spark-master                    1/1       Running   0          14m
spark-worker-controller-hifwi   1/1       Running   0          33s


@@ -52,15 +52,15 @@ before proceeding.
```shell
$ kubectl get pods
-POD         IP             CONTAINER(S)   IMAGE(S)          HOST                        LABELS           STATUS
-zookeeper   192.168.86.4   zookeeper      mattf/zookeeper   172.18.145.8/172.18.145.8   name=zookeeper   Running
+NAME        READY     STATUS    RESTARTS   AGE
+zookeeper   1/1       Running   0          43s
```
### Check to see if ZooKeeper is accessible
```shell
$ kubectl get services
-NAME         LABELS                                    SELECTOR         IP               PORT
+NAME         LABELS                                    SELECTOR         IP(S)            PORT(S)
kubernetes   component=apiserver,provider=kubernetes   <none>           10.254.0.2       443
zookeeper    name=zookeeper                            name=zookeeper   10.254.139.141   2181
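A quick way to confirm ZooKeeper is actually answering on that service IP is its four-letter health check (IP taken from the table above; `imok` is the expected reply):

```shell
$ echo ruok | nc 10.254.139.141 2181; echo
imok
```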
@@ -94,7 +94,7 @@ Ensure that the Nimbus service is running and functional.
```shell
$ kubectl get services
-NAME         LABELS                                    SELECTOR         IP               PORT
+NAME         LABELS                                    SELECTOR         IP(S)            PORT(S)
kubernetes   component=apiserver,provider=kubernetes   <none>           10.254.0.2       443
zookeeper    name=zookeeper                            name=zookeeper   10.254.139.141   2181
nimbus       name=nimbus                               name=nimbus      10.254.115.208   6627