diff --git a/docs/getting-started-guides/binary_release.md b/docs/getting-started-guides/binary_release.md
index 0fa09cd840f..b59ca5ceb97 100644
--- a/docs/getting-started-guides/binary_release.md
+++ b/docs/getting-started-guides/binary_release.md
@@ -1,6 +1,6 @@
 ## Getting a Binary Release
 
-You can either build a release from sources or download a pre-built release. If you don't plan on developing Kubernetes itself, we suggest a pre-built release.
+You can either build a release from sources or download a pre-built release. If you do not plan on developing Kubernetes itself, we suggest a pre-built release.
 
 ### Prebuilt Binary Release
 
diff --git a/docs/getting-started-guides/juju.md b/docs/getting-started-guides/juju.md
index 01290833e01..c6146956f62 100644
--- a/docs/getting-started-guides/juju.md
+++ b/docs/getting-started-guides/juju.md
@@ -128,9 +128,9 @@ Get info on the pod:
 
     kubectl get pods
 
-To test the hello app, we'll need to locate which minion is hosting
+To test the hello app, we need to locate which minion is hosting
 the container. Better tooling for using juju to introspect container
-is in the works but for let'suse `juju run` and `juju status` to find
+is in the works, but for now we can use `juju run` and `juju status` to find
 our hello app.
 
 Exit out of our ssh session and run:
@@ -161,17 +161,14 @@ We can add minion units like so:
 
     juju add-unit docker # creates unit docker/2, kubernetes/2, docker-flannel/2
 
+## Launch the "k8petstore" example app
 
-## Launch the "petstore" example app
-
-The petstore example is available as a
+The [k8petstore example](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/k8petstore) is available as a
 [juju action](https://jujucharms.com/docs/devel/actions).
 
     juju action do kubernetes-master/0
 
-
-Note: this example includes curl statements to exercise the app.
-
+Note: this example includes curl statements to exercise the app, which automatically generate "petstore" transactions written to redis and allow you to visualize the throughput in your browser.
 
 ## Tear down cluster
 
@@ -181,7 +178,6 @@ or
 
     juju destroy-environment --force `juju env`
 
-
 ## More Info
 
 Kubernetes Bundle on Github
diff --git a/docs/getting-started-guides/locally.md b/docs/getting-started-guides/locally.md
index e486f03fc05..6fcbe1183b7 100644
--- a/docs/getting-started-guides/locally.md
+++ b/docs/getting-started-guides/locally.md
@@ -69,7 +69,7 @@ cluster/kubectl.sh get replicationcontrollers
 
 Note the difference between a [container](http://docs.k8s.io/containers.md)
 and a [pod](http://docs.k8s.io/pods.md). Since you only asked for the former, kubernetes will create a wrapper pod for you.
-However you can't view the nginx start page on localhost. To verify that nginx is running you need to run `curl` within the docker container (try `docker exec`).
+However you cannot view the nginx start page on localhost. To verify that nginx is running you need to run `curl` within the docker container (try `docker exec`).
 
 You can control the specifications of a pod via a user defined manifest, and reach nginx through your browser on the port specified therein:
 
@@ -81,11 +81,12 @@ Congratulations!
 
 ### Troubleshooting
 
-#### I can't reach service IPs on the network.
+#### I cannot reach service IPs on the network.
 
 Some firewall software that uses iptables may not interact well with
-kubernetes. If you're having trouble around networking, try disabling any
-firewall or other iptables-using systems, first.
+kubernetes. If you have trouble around networking, try disabling any
+firewall or other iptables-using systems, first. Also, you can check
+if SELinux is blocking anything by running a command such as `journalctl --since yesterday | grep avc`.
 
 By default the IP range for service cluster IPs is 10.0.*.* - depending on
 your docker installation, this may conflict with IPs for containers. If you find
diff --git a/docs/getting-started-guides/rackspace.md b/docs/getting-started-guides/rackspace.md
index 7cdf423c3c5..db0421ce7b0 100644
--- a/docs/getting-started-guides/rackspace.md
+++ b/docs/getting-started-guides/rackspace.md
@@ -33,7 +33,7 @@ The current cluster design is inspired by:
 
 There is a specific `cluster/rackspace` directory with the scripts for the following steps:
 1. A cloud network will be created and all instances will be attached to this network. - flanneld uses this network for next hop routing. These routes allow the containers running on each node to communicate with one another on this private network.
-2. A SSH key will be created and uploaded if needed. This key must be used to ssh into the machines since we won't capture the password.
+2. A SSH key will be created and uploaded if needed. This key must be used to ssh into the machines (we do not capture the password).
 3. The master server and additional nodes will be created via the `nova` CLI. A `cloud-config.yaml` is generated and provided as user-data with the entire configuration for the systems.
 4. We then boot as many nodes as defined via `$NUM_MINIONS`.
 
diff --git a/docs/getting-started-guides/vagrant.md b/docs/getting-started-guides/vagrant.md
index b94b75af413..381c90f27d6 100644
--- a/docs/getting-started-guides/vagrant.md
+++ b/docs/getting-started-guides/vagrant.md
@@ -196,7 +196,7 @@ kubernetes-minion-1:
     CONTAINER ID        IMAGE                     COMMAND                CREATED             STATUS              PORTS                    NAMES
     dbe79bf6e25b        nginx:latest              "nginx"                21 seconds ago      Up 19 seconds                                k8s--mynginx.8c5b8a3a--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--fcfa837f
     fa0e29c94501        kubernetes/pause:latest   "/pause"               8 minutes ago       Up 8 minutes        0.0.0.0:8080->80/tcp     k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b
-    aa2ee3ed844a        google/cadvisor:latest    "/usr/bin/cadvisor -   38 minutes ago      Up 38 minutes                                k8s--cadvisor.9e90d182--cadvisor_-_agent.file--4626b3a2
+    aa2ee3ed844a        google/cadvisor:latest    "/usr/bin/cadvisor"    38 minutes ago      Up 38 minutes                                k8s--cadvisor.9e90d182--cadvisor_-_agent.file--4626b3a2
     65a3a926f357        kubernetes/pause:latest   "/pause"               39 minutes ago      Up 39 minutes       0.0.0.0:4194->8080/tcp   k8s--net.c5ba7f0e--cadvisor_-_agent.file--342fd561
 ```
@@ -270,7 +270,7 @@ If this is your first time creating the cluster, the kubelet on each minion sche
 
 To set up a vagrant cluster for hacking, follow the [vagrant developer guide](../devel/developer-guides/vagrant.md).
 
-#### I have brought Vagrant up but the nodes won't validate!
+#### I have brought Vagrant up but the nodes cannot validate!
 
 Log on to one of the nodes (`vagrant ssh minion-1`) and inspect the salt minion log (`sudo cat /var/log/salt/minion`).
 
@@ -299,7 +299,7 @@ export KUBERNETES_MINION_MEMORY=2048
 ```
 
 #### I ran vagrant suspend and nothing works!
-```vagrant suspend``` seems to mess up the network. It's not supported at this time.
+```vagrant suspend``` seems to mess up the network. This is not supported at this time.
 
 
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/vagrant.md?pixel)]()
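For the verification step mentioned in the locally.md hunk above (running `curl` within the docker container via `docker exec`), a minimal sketch of what that check might look like follows. It assumes the locally started nginx container appears in `docker ps` with "nginx" in its line and that the image provides `curl`; neither assumption comes from the guide itself.

```sh
# Find the ID of the nginx container started by the local cluster (name may vary).
CONTAINER=$(docker ps | grep nginx | awk '{print $1}' | head -n 1)

# The start page is not published on localhost, so fetch it from inside the container.
docker exec "$CONTAINER" curl -s http://localhost:80
```

If the image lacks `curl`, substituting `wget -qO- http://localhost:80` (if present) or inspecting `docker logs "$CONTAINER"` are alternatives.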