Merge pull request #18177 from eosrei/1111-doc-machine-names

Auto commit by PR queue bot
Commit 4a22f2f5f5 by k8s-merge-robot, 2015-12-10 21:42:08 -08:00
13 changed files with 92 additions and 92 deletions


@@ -49,17 +49,17 @@ The configuration files are just standard pod definitions in json or yaml format
For example, this is how to start a simple web server as a static pod:
1. Choose a node where we want to run the static pod. In this example, it's `my-node1`.
```console
[joe@host ~] $ ssh my-node1
```
2. Choose a directory, say `/etc/kubelet.d`, and place a web server pod definition there, e.g. `/etc/kubelet.d/static-web.yaml`:
```console
[root@my-node1 ~] $ mkdir /etc/kubelet.d/
[root@my-node1 ~] $ cat <<EOF >/etc/kubelet.d/static-web.yaml
apiVersion: v1
kind: Pod
metadata:
@@ -88,7 +88,7 @@ For example, this is how to start a simple web server as a static pod:
3. Restart kubelet. On Fedora 21, this is:
```console
[root@my-node1 ~] $ systemctl restart kubelet
```
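The restart command is init-system specific; the `systemctl` line above applies to any systemd-based distribution, while on sysvinit-style systems the equivalent would presumably be something like this (an assumption, not from the original document):

```console
[root@my-node1 ~] $ service kubelet restart
```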
## Pods created via HTTP
@@ -100,9 +100,9 @@ Kubelet periodically downloads a file specified by `--manifest-url=<URL>` argument
When kubelet starts, it automatically starts all pods defined in the directory specified by the `--config=` argument or at the URL specified by `--manifest-url=`, i.e. our static-web. (It may take some time to pull the nginx image, be patient…):
```console
[joe@my-node1 ~] $ docker ps
CONTAINER ID   IMAGE          COMMAND   CREATED         STATUS         NAMES
f6d05272b57e   nginx:latest   "nginx"   8 minutes ago   Up 8 minutes   k8s_web.6f802af4_static-web-fk-node1_default_67e24ed9466ba55986d120c867395f3c_378e5f3c
```
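For reference, these are the two ways a kubelet can be pointed at static-pod sources, per the flags mentioned above; a minimal sketch, assuming illustrative paths and URL and omitting any other required flags (only `--config=` and `--manifest-url=` come from the text):

```console
# Serve static pods from a local manifest directory (our example):
[root@my-node1 ~] $ kubelet --config=/etc/kubelet.d/
# ...or poll a manifest URL instead:
[root@my-node1 ~] $ kubelet --manifest-url=http://example.com/static-web.yaml
```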
If we look at our Kubernetes API server (running on host `my-master`), we see that a new mirror-pod was created there too:
@@ -111,7 +111,7 @@ If we look at our Kubernetes API server (running on host `my-master`), we see that
```console
[joe@host ~] $ ssh my-master
[joe@my-master ~] $ kubectl get pods
POD                   IP           CONTAINER(S)   IMAGE(S)   HOST                      LABELS        STATUS    CREATED      MESSAGE
static-web-my-node1   172.17.0.3                             my-node1/192.168.100.71   role=myrole   Running   11 minutes
                                   web            nginx                                              Running   11 minutes
```
@@ -120,20 +120,20 @@ Labels from the static pod are propagated into the mirror-pod and can be used as
Notice that we cannot delete the pod via the API server (e.g. with the [`kubectl`](../user-guide/kubectl/kubectl.md) command); kubelet simply won't remove it.
```console
[joe@my-master ~] $ kubectl delete pod static-web-my-node1
pods/static-web-my-node1
[joe@my-master ~] $ kubectl get pods
POD                   IP           CONTAINER(S)   IMAGE(S)   HOST                      ...
static-web-my-node1   172.17.0.3                             my-node1/192.168.100.71   ...
```
Back on our `my-node1` host, we can try to stop the container manually and see that kubelet automatically restarts it after a while:
```console
[joe@host ~] $ ssh my-node1
[joe@my-node1 ~] $ docker stop f6d05272b57e
[joe@my-node1 ~] $ sleep 20
[joe@my-node1 ~] $ docker ps
CONTAINER ID   IMAGE          COMMAND                CREATED         ...
5b920cbaf8b1   nginx:latest   "nginx -g 'daemon of   2 seconds ago   ...
```
@@ -143,13 +143,13 @@ CONTAINER ID IMAGE COMMAND CREATED ...
The running kubelet periodically scans the configured directory (`/etc/kubelet.d` in our example) for changes, and adds or removes pods as files appear or disappear in this directory.
```console
[joe@my-node1 ~] $ mv /etc/kubelet.d/static-web.yaml /tmp
[joe@my-node1 ~] $ sleep 20
[joe@my-node1 ~] $ docker ps
// no nginx container is running
[joe@my-node1 ~] $ mv /tmp/static-web.yaml /etc/kubelet.d/
[joe@my-node1 ~] $ sleep 20
[joe@my-node1 ~] $ docker ps
CONTAINER ID   IMAGE          COMMAND                CREATED          ...
e7a62e3427f1   nginx:latest   "nginx -g 'daemon of   27 seconds ago   ...
```


@@ -119,17 +119,17 @@ Sample kubectl output
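The excerpt does not include the command that produced this sample; output of this shape would come from listing events, roughly as in this sketch (the exact flags are an assumption):

```console
$ kubectl get events
```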
```console
FIRSTSEEN                         LASTSEEN                          COUNT   NAME                                         KIND       SUBOBJECT                           REASON             SOURCE                                                 MESSAGE
Thu, 12 Feb 2015 01:13:02 +0000   Thu, 12 Feb 2015 01:13:02 +0000   1       kubernetes-node-4.c.saad-dev-vms.internal    Minion                                         starting           {kubelet kubernetes-node-4.c.saad-dev-vms.internal}   Starting kubelet.
Thu, 12 Feb 2015 01:13:09 +0000   Thu, 12 Feb 2015 01:13:09 +0000   1       kubernetes-node-1.c.saad-dev-vms.internal    Minion                                         starting           {kubelet kubernetes-node-1.c.saad-dev-vms.internal}   Starting kubelet.
Thu, 12 Feb 2015 01:13:09 +0000   Thu, 12 Feb 2015 01:13:09 +0000   1       kubernetes-node-3.c.saad-dev-vms.internal    Minion                                         starting           {kubelet kubernetes-node-3.c.saad-dev-vms.internal}   Starting kubelet.
Thu, 12 Feb 2015 01:13:09 +0000   Thu, 12 Feb 2015 01:13:09 +0000   1       kubernetes-node-2.c.saad-dev-vms.internal    Minion                                         starting           {kubelet kubernetes-node-2.c.saad-dev-vms.internal}   Starting kubelet.
Thu, 12 Feb 2015 01:13:05 +0000   Thu, 12 Feb 2015 01:13:12 +0000   4       monitoring-influx-grafana-controller-0133o   Pod                                            failedScheduling   {scheduler }                                           Error scheduling: no nodes available to schedule pods
Thu, 12 Feb 2015 01:13:05 +0000   Thu, 12 Feb 2015 01:13:12 +0000   4       elasticsearch-logging-controller-fplln       Pod                                            failedScheduling   {scheduler }                                           Error scheduling: no nodes available to schedule pods
Thu, 12 Feb 2015 01:13:05 +0000   Thu, 12 Feb 2015 01:13:12 +0000   4       kibana-logging-controller-gziey              Pod                                            failedScheduling   {scheduler }                                           Error scheduling: no nodes available to schedule pods
Thu, 12 Feb 2015 01:13:05 +0000   Thu, 12 Feb 2015 01:13:12 +0000   4       skydns-ls6k1                                 Pod                                            failedScheduling   {scheduler }                                           Error scheduling: no nodes available to schedule pods
Thu, 12 Feb 2015 01:13:05 +0000   Thu, 12 Feb 2015 01:13:12 +0000   4       monitoring-heapster-controller-oh43e         Pod                                            failedScheduling   {scheduler }                                           Error scheduling: no nodes available to schedule pods
Thu, 12 Feb 2015 01:13:20 +0000   Thu, 12 Feb 2015 01:13:20 +0000   1       kibana-logging-controller-gziey              BoundPod   implicitly required container POD   pulled             {kubelet kubernetes-node-4.c.saad-dev-vms.internal}   Successfully pulled image "kubernetes/pause:latest"
Thu, 12 Feb 2015 01:13:20 +0000   Thu, 12 Feb 2015 01:13:20 +0000   1       kibana-logging-controller-gziey              Pod                                            scheduled          {scheduler }                                           Successfully assigned kibana-logging-controller-gziey to kubernetes-node-4.c.saad-dev-vms.internal
```
This demonstrates how what would have been 20 separate entries (five pods, each failing to schedule four times, as the COUNT column shows) is compressed down to 5 entries.


@@ -80,9 +80,9 @@ You can use this script to automate checking for failures, assuming your cluster
```sh
# Collect `docker ps -a` output from each node into output.txt.
echo "" > output.txt
for i in {1..4}; do
  echo "Checking kubernetes-node-${i}"
  echo "kubernetes-node-${i}:" >> output.txt
  gcloud compute ssh "kubernetes-node-${i}" --command="sudo docker ps -a" >> output.txt
done
# Report containers that exited with a non-zero status code.
grep "Exited ([^0])" output.txt
```
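If your nodes use a different naming prefix or count, the same idea generalizes; a hedged sketch where `NODE_PREFIX` and `NODE_COUNT` are illustrative parameters, not part of the original script:

```sh
NODE_PREFIX="kubernetes-node"   # assumed node naming scheme
NODE_COUNT=4                    # number of nodes in the cluster
: > output.txt                  # truncate the report file
for i in $(seq 1 "${NODE_COUNT}"); do
  echo "Checking ${NODE_PREFIX}-${i}"
  echo "${NODE_PREFIX}-${i}:" >> output.txt
  gcloud compute ssh "${NODE_PREFIX}-${i}" --command="sudo docker ps -a" >> output.txt
done
# Containers that exited with a non-zero status indicate failures.
grep "Exited ([^0])" output.txt
```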


@@ -181,9 +181,9 @@ $ virsh -c qemu:///system list
```sh
 Id    Name                   State
----------------------------------------------------
 15    kubernetes_master      running
 16    kubernetes_node-01     running
 17    kubernetes_node-02     running
 18    kubernetes_node-03     running
```
You can check that the Kubernetes cluster is working with:
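The excerpt cuts off before showing the command itself; a plausible sketch (an assumption rather than part of the original text):

```console
$ kubectl get nodes
```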
@@ -208,7 +208,7 @@ Connect to `kubernetes_master`:
```sh
ssh core@192.168.10.1
```

Connect to `kubernetes_node-01`:

```sh
ssh core@192.168.10.2
```


@@ -77,10 +77,10 @@ $ kubectl get pods --namespace=kube-system
NAME                                         READY   REASON    RESTARTS   AGE
elasticsearch-logging-v1-78nog               1/1     Running   0          2h
elasticsearch-logging-v1-nj2nb               1/1     Running   0          2h
fluentd-elasticsearch-kubernetes-node-5oq0   1/1     Running   0          2h
fluentd-elasticsearch-kubernetes-node-6896   1/1     Running   0          2h
fluentd-elasticsearch-kubernetes-node-l1ds   1/1     Running   0          2h
fluentd-elasticsearch-kubernetes-node-lz9j   1/1     Running   0          2h
kibana-logging-v1-bhpo8                      1/1     Running   0          2h
kube-dns-v3-7r1l9                            3/3     Running   0          2h
monitoring-heapster-v4-yl332                 1/1     Running   1          2h


@@ -41,10 +41,10 @@ logging and DNS resolution for names of Kubernetes services:
```console
$ kubectl get pods --namespace=kube-system
NAME                                         READY   REASON    RESTARTS   AGE
fluentd-cloud-logging-kubernetes-node-0f64   1/1     Running   0          32m
fluentd-cloud-logging-kubernetes-node-27gf   1/1     Running   0          32m
fluentd-cloud-logging-kubernetes-node-pk22   1/1     Running   0          31m
fluentd-cloud-logging-kubernetes-node-20ej   1/1     Running   0          31m
kube-dns-v3-pk22                             3/3     Running   0          32m
monitoring-heapster-v1-20ej                  0/1     Running   9          32m
```
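Since the fluentd pods are expected to run one per node, a quick sanity check is to compare the pod count with the node count; a sketch assuming the `fluentd-cloud-logging-` name prefix shown above:

```console
$ kubectl get pods --namespace=kube-system | grep -c fluentd-cloud-logging-
$ kubectl get nodes | tail -n +2 | wc -l
```

The two numbers should match.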


@@ -68,7 +68,7 @@ user via a periodically refreshing interface similar to `top` on Unix-like
systems. This info could let users assign resource limits more efficiently.
```
$ kubectl top kubernetes-node-abcd
POD                         CPU          MEM
monitoring-heapster-abcde   0.12 cores   302 MB
kube-ui-v1-nd7in            0.07 cores   130 MB
```


@@ -228,7 +228,7 @@ on the pod you are interested in:
Name:        simmemleak-hra99
Namespace:   default
Image(s):    saadali/simmemleak
Node:        kubernetes-node-tf0f/10.240.216.66
Labels:      name=simmemleak
Status:      Running
Reason:
@@ -254,11 +254,11 @@ Conditions:
  Ready   False
Events:
  FirstSeen                         LastSeen                          Count   From                             SubobjectPath                       Reason      Message
  Tue, 07 Jul 2015 12:53:51 -0700   Tue, 07 Jul 2015 12:53:51 -0700   1       {scheduler }                                                         scheduled   Successfully assigned simmemleak-hra99 to kubernetes-node-tf0f
  Tue, 07 Jul 2015 12:53:51 -0700   Tue, 07 Jul 2015 12:53:51 -0700   1       {kubelet kubernetes-node-tf0f}   implicitly required container POD   pulled      Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine
  Tue, 07 Jul 2015 12:53:51 -0700   Tue, 07 Jul 2015 12:53:51 -0700   1       {kubelet kubernetes-node-tf0f}   implicitly required container POD   created     Created with docker id 6a41280f516d
  Tue, 07 Jul 2015 12:53:51 -0700   Tue, 07 Jul 2015 12:53:51 -0700   1       {kubelet kubernetes-node-tf0f}   implicitly required container POD   started     Started with docker id 6a41280f516d
  Tue, 07 Jul 2015 12:53:51 -0700   Tue, 07 Jul 2015 12:53:51 -0700   1       {kubelet kubernetes-node-tf0f}   spec.containers{simmemleak}         created     Created with docker id 87348f12526a
```
The `Restart Count: 5` indicates that the `simmemleak` container in this pod was terminated and restarted 5 times.
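To pull that field out without scanning the whole description, one option is plain text filtering, as a sketch (the pod name comes from the excerpt above):

```console
$ kubectl describe pod simmemleak-hra99 | grep "Restart Count"
```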


@@ -88,8 +88,8 @@ This makes it accessible from any node in your cluster. Check the nodes the pod
```console
$ kubectl create -f ./nginxrc.yaml
$ kubectl get pods -l app=nginx -o wide
my-nginx-6isf4   1/1   Running   0   2h   e2e-test-beeps-node-93ly
my-nginx-t26zt   1/1   Running   0   2h   e2e-test-beeps-node-93ly
```
Check your pods' IPs:
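The excerpt ends before the original example; one way to list the IPs, as a hedged sketch using plain text filtering rather than whatever the source document showed:

```console
$ kubectl get pods -l app=nginx -o yaml | grep podIP
```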


@@ -93,7 +93,7 @@ We can retrieve a lot more information about each of these pods using `kubectl describe`:
$ kubectl describe pod my-nginx-gy1ij
Name:       my-nginx-gy1ij
Image(s):   nginx
Node:       kubernetes-node-y3vk/10.240.154.168
Labels:     app=nginx
Status:     Running
Reason:
@@ -115,13 +115,13 @@ Conditions:
  Ready   True
Events:
  FirstSeen                         LastSeen                          Count   From                             SubobjectPath                       Reason      Message
  Thu, 09 Jul 2015 15:32:58 -0700   Thu, 09 Jul 2015 15:32:58 -0700   1       {scheduler }                                                         scheduled   Successfully assigned my-nginx-gy1ij to kubernetes-node-y3vk
  Thu, 09 Jul 2015 15:32:58 -0700   Thu, 09 Jul 2015 15:32:58 -0700   1       {kubelet kubernetes-node-y3vk}   implicitly required container POD   pulled      Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine
  Thu, 09 Jul 2015 15:32:58 -0700   Thu, 09 Jul 2015 15:32:58 -0700   1       {kubelet kubernetes-node-y3vk}   implicitly required container POD   created     Created with docker id cd1644065066
  Thu, 09 Jul 2015 15:32:58 -0700   Thu, 09 Jul 2015 15:32:58 -0700   1       {kubelet kubernetes-node-y3vk}   implicitly required container POD   started     Started with docker id cd1644065066
  Thu, 09 Jul 2015 15:33:06 -0700   Thu, 09 Jul 2015 15:33:06 -0700   1       {kubelet kubernetes-node-y3vk}   spec.containers{nginx}              pulled      Successfully pulled image "nginx"
  Thu, 09 Jul 2015 15:33:06 -0700   Thu, 09 Jul 2015 15:33:06 -0700   1       {kubelet kubernetes-node-y3vk}   spec.containers{nginx}              created     Created with docker id 56d7a7b14dac
  Thu, 09 Jul 2015 15:33:07 -0700   Thu, 09 Jul 2015 15:33:07 -0700   1       {kubelet kubernetes-node-y3vk}   spec.containers{nginx}              started     Started with docker id 56d7a7b14dac
```
Here you can see configuration information about the container(s) and Pod (labels, resource requirements, etc.), as well as status information about the container(s) and Pod (state, readiness, restart count, events, etc.).
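The yaml fragment below presumably comes from dumping the full pod object; a sketch of the kind of command involved (`-o yaml` is used the same way for nodes later in this section, while the pod name is taken from the excerpt above):

```console
$ kubectl get pod my-nginx-gy1ij -o yaml
```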
@@ -231,7 +231,7 @@ spec:
      name: default-token-zkhkk
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: kubernetes-node-u619
  restartPolicy: Always
  serviceAccountName: default
  volumes:
@@ -266,14 +266,14 @@ Sometimes when debugging it can be useful to look at the status of a node -- for
```console
$ kubectl get nodes
NAME                   LABELS                                        STATUS
kubernetes-node-861h   kubernetes.io/hostname=kubernetes-node-861h   NotReady
kubernetes-node-bols   kubernetes.io/hostname=kubernetes-node-bols   Ready
kubernetes-node-st6x   kubernetes.io/hostname=kubernetes-node-st6x   Ready
kubernetes-node-unaj   kubernetes.io/hostname=kubernetes-node-unaj   Ready

$ kubectl describe node kubernetes-node-861h
Name:                kubernetes-node-861h
Labels:              kubernetes.io/hostname=kubernetes-node-861h
CreationTimestamp:   Fri, 10 Jul 2015 14:32:29 -0700
Conditions:
  Type   Status   LastHeartbeatTime   LastTransitionTime   Reason   Message
@@ -295,28 +295,28 @@ Pods: (0 in total)
  Namespace   Name
Events:
  FirstSeen                         LastSeen                          Count   From                             SubobjectPath   Reason         Message
  Fri, 10 Jul 2015 14:32:28 -0700   Fri, 10 Jul 2015 14:32:28 -0700   1       {kubelet kubernetes-node-861h}                   NodeNotReady   Node kubernetes-node-861h status is now: NodeNotReady
  Fri, 10 Jul 2015 14:32:30 -0700   Fri, 10 Jul 2015 14:32:30 -0700   1       {kubelet kubernetes-node-861h}                   NodeNotReady   Node kubernetes-node-861h status is now: NodeNotReady
  Fri, 10 Jul 2015 14:33:00 -0700   Fri, 10 Jul 2015 14:33:00 -0700   1       {kubelet kubernetes-node-861h}                   starting       Starting kubelet.
  Fri, 10 Jul 2015 14:33:02 -0700   Fri, 10 Jul 2015 14:33:02 -0700   1       {kubelet kubernetes-node-861h}                   NodeReady      Node kubernetes-node-861h status is now: NodeReady
  Fri, 10 Jul 2015 14:35:15 -0700   Fri, 10 Jul 2015 14:35:15 -0700   1       {controllermanager }                             NodeNotReady   Node kubernetes-node-861h status is now: NodeNotReady

$ kubectl get node kubernetes-node-861h -o yaml
apiVersion: v1
kind: Node
metadata:
  creationTimestamp: 2015-07-10T21:32:29Z
  labels:
    kubernetes.io/hostname: kubernetes-node-861h
  name: kubernetes-node-861h
  resourceVersion: "757"
  selfLink: /api/v1/nodes/kubernetes-node-861h
  uid: 2a69374e-274b-11e5-a234-42010af0d969
spec:
  externalID: "15233045891481496305"
  podCIDR: 10.244.0.0/24
  providerID: gce://striped-torus-760/us-central1-b/kubernetes-node-861h
status:
  addresses:
  - address: 10.240.115.55


@@ -120,7 +120,7 @@ $ kubectl describe pods frontend
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager

###### Auto generated by spf13/cobra on 3-Dec-2015

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/kubectl/kubectl_describe.md?pixel)]()


@@ -107,10 +107,10 @@ At the bottom of the *kubectl describe* output there are messages indicating that
```console
$ kubectl describe pods liveness-exec
[...]
Sat, 27 Jun 2015 13:43:03 +0200   Sat, 27 Jun 2015 13:44:34 +0200   4   {kubelet kubernetes-node-6fbi}   spec.containers{liveness}   unhealthy   Liveness probe failed: cat: can't open '/tmp/health': No such file or directory
Sat, 27 Jun 2015 13:44:44 +0200   Sat, 27 Jun 2015 13:44:44 +0200   1   {kubelet kubernetes-node-6fbi}   spec.containers{liveness}   killing     Killing with docker id 65b52d62c635
Sat, 27 Jun 2015 13:44:44 +0200   Sat, 27 Jun 2015 13:44:44 +0200   1   {kubelet kubernetes-node-6fbi}   spec.containers{liveness}   created     Created with docker id ed6bb004ee10
Sat, 27 Jun 2015 13:44:44 +0200   Sat, 27 Jun 2015 13:44:44 +0200   1   {kubelet kubernetes-node-6fbi}   spec.containers{liveness}   started     Started with docker id ed6bb004ee10
```
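For context, events like these come from an exec liveness probe that runs `cat /tmp/health` inside the container. A minimal hedged sketch of such a pod definition follows (the image, timings, and health-file handling are illustrative assumptions), written as a heredoc in the same style as the static-pod example earlier:

```console
$ cat <<EOF > liveness-exec.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    # Create the health file, then remove it after 30s so the probe starts failing.
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
EOF
$ kubectl create -f ./liveness-exec.yaml
```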