diff --git a/docs/design/secrets.md b/docs/design/secrets.md
index a6d2591f93c..c1643a9d182 100644
--- a/docs/design/secrets.md
+++ b/docs/design/secrets.md
@@ -261,7 +261,7 @@ Kubelet volume plugin API will be changed so that a volume plugin receives the s
 a volume along with the volume spec. This will allow volume plugins to implement setting the security context of volumes they manage.
 
-## Community work:
+## Community work
 
 Several proposals / upstream patches are notable as background for this proposal:
diff --git a/docs/design/security.md b/docs/design/security.md
index 1d1373d2138..90dc3237188 100644
--- a/docs/design/security.md
+++ b/docs/design/security.md
@@ -32,7 +32,7 @@ While Kubernetes today is not primarily a multi-tenant system, the long term evo
 
 ## Use cases
 
-### Roles:
+### Roles
 
 We define "user" as a unique identity accessing the Kubernetes API server, which may be a human or an automated process.
 
 Human users fall into the following categories:
@@ -46,7 +46,7 @@ Automated process users fall into the following categories:
 2. k8s infrastructure user - the user that kubernetes infrastructure components use to perform cluster functions with clearly defined roles
 
-### Description of roles:
+### Description of roles
 
 * Developers:
   * write pod specs.
diff --git a/docs/getting-started-guides/docker-multinode/testing.md b/docs/getting-started-guides/docker-multinode/testing.md
index 99005e1b28f..d795621ad0b 100644
--- a/docs/getting-started-guides/docker-multinode/testing.md
+++ b/docs/getting-started-guides/docker-multinode/testing.md
@@ -37,7 +37,7 @@ kubectl -s http://localhost:8080 run nginx --image=nginx --port=80
 
 now run ```docker ps``` you should see nginx running. You may need to wait a few minutes for the image to get pulled.
 
-### Expose it as a service:
+### Expose it as a service
 
 ```sh
 kubectl expose rc nginx --port=80
 ```
diff --git a/docs/getting-started-guides/docker-multinode/worker.md b/docs/getting-started-guides/docker-multinode/worker.md
index a3d51036860..119b5e9a5dc 100644
--- a/docs/getting-started-guides/docker-multinode/worker.md
+++ b/docs/getting-started-guides/docker-multinode/worker.md
@@ -26,7 +26,7 @@ For each worker node, there are three steps:
 ### Set up Flanneld on the worker node
 
 As before, the Flannel daemon is going to provide network connectivity.
 
-#### Set up a bootstrap docker:
+#### Set up a bootstrap docker
 
 As previously, we need a second instance of the Docker daemon running to bootstrap the flannel networking. Run:
diff --git a/docs/getting-started-guides/docker.md b/docs/getting-started-guides/docker.md
index fc58483a67a..aaded1662b9 100644
--- a/docs/getting-started-guides/docker.md
+++ b/docs/getting-started-guides/docker.md
@@ -88,7 +88,7 @@ kubectl -s http://localhost:8080 run-container nginx --image=nginx --port=80
 
 now run ```docker ps``` you should see nginx running. You may need to wait a few minutes for the image to get pulled.
 
-### Expose it as a service:
+### Expose it as a service
 
 ```sh
 kubectl expose rc nginx --port=80
 ```
diff --git a/docs/getting-started-guides/rackspace.md b/docs/getting-started-guides/rackspace.md
index ad899c278f4..6ff3ce54490 100644
--- a/docs/getting-started-guides/rackspace.md
+++ b/docs/getting-started-guides/rackspace.md
@@ -64,7 +64,7 @@ There is a specific `cluster/rackspace` directory with the scripts for the follo
 3. The master server and additional nodes will be created via the `nova` CLI. A `cloud-config.yaml` is generated and provided as user-data with the entire configuration for the systems.
 4. We then boot as many nodes as defined via `$NUM_MINIONS`.
 
-## Some notes:
+## Some notes
 
 - The scripts expect `eth2` to be the cloud network that the containers will communicate across.
 - A number of the items in `config-default.sh` are overridable via environment variables.
 - For older versions please either:
diff --git a/docs/proposals/high-availability.md b/docs/proposals/high-availability.md
index e7f77288e57..4525f709077 100644
--- a/docs/proposals/high-availability.md
+++ b/docs/proposals/high-availability.md
@@ -56,7 +56,7 @@ There is a short window after a new master acquires the lease, during which data
 5. When the API server makes the corresponding write to etcd, it includes it in a transaction that does a compare-and-swap on the "current master" entry (old value == new value == host:port and sequence number from the replica that sent the mutating operation). This basically guarantees that if we elect the new master, all transactions coming from the old master will fail. You can think of this as the master attaching a "precondition" of its belief about who is the latest master.
 
-## Open Questions:
+## Open Questions
 
 * Is there a desire to keep track of all nodes for a specific component type?
diff --git a/docs/service_accounts_admin.md b/docs/service_accounts_admin.md
index 41992d5a4c0..eb299f9dce3 100644
--- a/docs/service_accounts_admin.md
+++ b/docs/service_accounts_admin.md
@@ -90,7 +90,7 @@ $ kubectl create -f secret.json
 $ kubectl describe secret mysecretname
 ```
 
-#### To delete/invalidate a service account token:
+#### To delete/invalidate a service account token
 
 ```
 kubectl delete secret mysecretname
 ```
diff --git a/docs/user-guide/connecting-applications.md b/docs/user-guide/connecting-applications.md
index 6d44fc977a2..376ef20d1a9 100644
--- a/docs/user-guide/connecting-applications.md
+++ b/docs/user-guide/connecting-applications.md
@@ -97,7 +97,7 @@ You should now be able to curl the nginx Service on `10.0.208.159:80` from any n
 
 Kubernetes supports 2 primary modes of finding a Service - environment variables and DNS.
 The former works out of the box while the latter requires the [kube-dns cluster addon](../../cluster/addons/dns/README.md).
 
-### Environment Variables:
+### Environment Variables
 
 When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. This introduces an ordering problem. To see why, inspect the environment of your running nginx pods:
 
 ```shell
 $ kubectl exec my-nginx-6isf4 -- printenv | grep SERVICE
@@ -119,7 +119,7 @@ KUBERNETES_SERVICE_HOST=10.0.0.1
 NGINXSVC_SERVICE_PORT=80
 ```
 
-### DNS:
+### DNS
 
 Kubernetes offers a DNS cluster addon Service that uses skydns to automatically assign dns names to other Services. You can check if it’s running on your cluster:
 
 ```shell
 $ kubectl get services kube-dns --namespace=kube-system
diff --git a/docs/volumes.md b/docs/volumes.md
index a0006c1e40f..5a3d03aa165 100644
--- a/docs/volumes.md
+++ b/docs/volumes.md
@@ -206,7 +206,7 @@ aws ec2 create-volume --availability-zone eu-west-1a --size 10 --volume-type gp2
 
 Make sure the zone matches the zone you brought up your cluster in. (And also check that the size and EBS volume type are suitable for your use!)
 
-#### AWS EBS Example configuration:
+#### AWS EBS Example configuration
 
 ```yaml
 apiVersion: v1