Purge cluster/kubectl.sh from nearly all docs.

Mark cluster/kubectl.sh as deprecated.
Author: Brendan Burns
Date:   2015-06-05 14:50:11 -07:00
Parent: 6a979704b7
Commit: 9e198a6ed9

22 changed files with 149 additions and 140 deletions

View File

@@ -110,7 +110,7 @@ spec:
--------------------------------------------------
-cluster/kubectl.sh get pv
+kubectl get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM
pv0001 map[] 10737418240 RWO Pending
@@ -140,7 +140,7 @@ spec:
--------------------------------------------------
-cluster/kubectl.sh get pvc
+kubectl get pvc
NAME LABELS STATUS VOLUME
@@ -155,13 +155,13 @@ myclaim-1 map[] pending
```
-cluster/kubectl.sh get pv
+kubectl get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM
pv0001 map[] 10737418240 RWO Bound myclaim-1 / f4b3d283-c0ef-11e4-8be4-80e6500a981e
-cluster/kubectl.sh get pvc
+kubectl get pvc
NAME LABELS STATUS VOLUME
myclaim-1 map[] Bound b16e91d6-c0ef-11e4-8be4-80e6500a981e
@@ -205,7 +205,7 @@ When a claim holder is finished with their data, they can delete their claim.
```
-cluster/kubectl.sh delete pvc myclaim-1
+kubectl delete pvc myclaim-1
```
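For reference, a claim like `myclaim-1` above would come from a manifest along these lines (a minimal sketch, assuming the v1 API shape and a hypothetical 3Gi request):
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim-1
spec:
  accessModes:
    - ReadWriteOnce      # RWO, matching the access mode shown in the listings above
  resources:
    requests:
      storage: 3Gi
```
It would be submitted with `kubectl create -f claim.yaml` (filename hypothetical) before the `kubectl get pvc` steps above.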

View File

@@ -33,7 +33,9 @@ spec:
```
Note that we omit the labels and the selector fields of the replication controller, because they will be populated from the labels field of the pod template by default.
-```./cluster/kubectl.sh create -f controller.yaml```
+```
+kubectl create -f controller.yaml
+```
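For context, a minimal sketch of what `controller.yaml` might contain (the real manifest is elided above; the image name is hypothetical):
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: flakecontroller
spec:
  replicas: 24
  # labels and selector are omitted; they default from the pod template's labels
  template:
    metadata:
      labels:
        name: flakecontroller
    spec:
      containers:
      - name: flake
        image: example.com/your-flaky-test   # hypothetical test image
```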
This will spin up 24 instances of the test. They will run to completion, then exit, and the kubelet will restart them, accumulating more and more runs of the test.
You can examine the recent runs of the test by calling ```docker ps -a``` and looking for tasks that exited with non-zero exit codes. Unfortunately, ```docker ps -a``` only keeps around the exit status of the last 15-20 containers with the same image, so you have to check them frequently.
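Putting those two steps together, a quick scan for flaky runs (a sketch using only the commands this doc already relies on):
```sh
# Dump recent container statuses, then keep only non-zero exits
# (same grep pattern this doc uses below).
docker ps -a > output.txt
grep "Exited ([^0])" output.txt
```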
@@ -52,7 +54,7 @@ grep "Exited ([^0])" output.txt
Eventually you will have sufficient runs for your purposes. At that point you can stop and delete the replication controller by running:
```sh
-./cluster/kubectl.sh stop replicationcontroller flakecontroller
+kubectl stop replicationcontroller flakecontroller
```
If you do a final check for flakes with ```docker ps -a```, ignore tasks that exited -1, since that's what happens when you stop the replication controller.

View File

@@ -116,7 +116,7 @@ virsh -c qemu:///system list
You can check that the Kubernetes cluster is working with:
```
-$ ./cluster/kubectl.sh get minions
+$ kubectl get nodes
NAME LABELS STATUS
192.168.10.2 <none> Ready
192.168.10.3 <none> Ready
@@ -173,7 +173,7 @@ KUBE_PUSH=local cluster/kube-push.sh
Interact with the cluster
```
-cluster/kubectl.sh
+kubectl ...
```
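Typical first commands to try (illustrative examples, not part of this diff):
```sh
kubectl get nodes
kubectl get pods
kubectl get services
```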
### Troubleshooting

View File

@@ -34,11 +34,6 @@ $ export CONTAINER_RUNTIME=rkt
$ hack/local-up-cluster.sh
```
-After this, you can launch some pods in another terminal:
-```shell
-$ cluster/kubectl.sh create -f example/pod.yaml
-```
### CoreOS cluster on GCE
To use rkt as the container runtime for your CoreOS cluster on GCE, you need to specify the OS distribution, project, image:
@@ -88,6 +83,10 @@ $ kube-up.sh
Note: CoreOS is not supported as the master using the automated launch
scripts. The master node is always Ubuntu.
+### Getting started with your cluster
+See [a simple nginx example](../../examples/simple-nginx.md) to try out your new cluster.
+For more complete applications, please look in the [examples directory](../../examples).
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/rkt/README.md?pixel)]()

View File

@@ -7,7 +7,7 @@ Kubernetes has an extensible user interface with default functionality that desc
Assuming that you have a cluster running locally at `localhost:8080`, as described [here](getting-started-guides/locally.md), you can run the UI against it with kubectl:
```sh
-cluster/kubectl.sh proxy --www=www/app --www-prefix=/
+kubectl proxy --www=www/app --www-prefix=/
```
You should now be able to access it by visiting [localhost:8001](http://localhost:8001/).
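As a quick sanity check from another shell (assuming `kubectl proxy`'s default port of 8001):
```sh
curl http://localhost:8001/
```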