From 3364a32dc6ad8559724249fa5dc9e549cee00139 Mon Sep 17 00:00:00 2001 From: Alex Robinson Date: Sun, 19 Jul 2015 05:43:48 +0000 Subject: [PATCH] Improve admin docs syntax highlighting. --- docs/admin/cluster-large.md | 16 ++++++++-------- docs/admin/cluster-management.md | 2 +- docs/admin/cluster-troubleshooting.md | 2 +- docs/admin/etcd.md | 2 +- docs/admin/high-availability.md | 8 ++++---- docs/admin/networking.md | 6 +++--- docs/admin/node.md | 4 ++-- docs/admin/resource-quota.md | 2 +- docs/admin/salt.md | 6 +++--- docs/admin/service-accounts-admin.md | 10 ++++++---- 10 files changed, 30 insertions(+), 28 deletions(-) diff --git a/docs/admin/cluster-large.md b/docs/admin/cluster-large.md index 853e28272ee..7baf753f28e 100644 --- a/docs/admin/cluster-large.md +++ b/docs/admin/cluster-large.md @@ -68,14 +68,14 @@ To prevent memory leaks or other resource issues in [cluster addons](../../clust For example: -```YAML - containers: - - image: gcr.io/google_containers/heapster:v0.15.0 - name: heapster - resources: - limits: - cpu: 100m - memory: 200Mi +```yaml +containers: + - image: gcr.io/google_containers/heapster:v0.15.0 + name: heapster + resources: + limits: + cpu: 100m + memory: 200Mi ``` These limits, however, are based on data collected from addons running on 4-node clusters (see [#10335](https://github.com/GoogleCloudPlatform/kubernetes/issues/10335#issuecomment-117861225)). The addons consume a lot more resources when running on large deployment clusters (see [#5880](https://github.com/GoogleCloudPlatform/kubernetes/issues/5880#issuecomment-113984085)). So, if a large cluster is deployed without adjusting these values, the addons may continuously get killed because they keep hitting the limits. diff --git a/docs/admin/cluster-management.md b/docs/admin/cluster-management.md index bfe93922d97..66849ea6447 100644 --- a/docs/admin/cluster-management.md +++ b/docs/admin/cluster-management.md @@ -62,7 +62,7 @@ KUBE_API_VERSIONS env var controls the API versions that are supported in the cl You can use the kube-version-change utility to convert config files between different API versions. -``` +```console $ hack/build-go.sh cmd/kube-version-change $ _output/local/go/bin/kube-version-change -i myPod.v1beta3.yaml -o myPod.v1.yaml ``` diff --git a/docs/admin/cluster-troubleshooting.md b/docs/admin/cluster-troubleshooting.md index 744241033fd..aa5cdefbd00 100644 --- a/docs/admin/cluster-troubleshooting.md +++ b/docs/admin/cluster-troubleshooting.md @@ -44,7 +44,7 @@ The first thing to debug in your cluster is if your nodes are all registered cor Run -``` +```sh kubectl get nodes ``` diff --git a/docs/admin/etcd.md b/docs/admin/etcd.md index f37827e84fe..dfe13f0ba7f 100644 --- a/docs/admin/etcd.md +++ b/docs/admin/etcd.md @@ -73,7 +73,7 @@ To test whether `etcd` is running correctly, you can try writing a value to a test key. On your master VM (or somewhere with firewalls configured such that you can talk to your cluster's etcd), try: -``` +```sh curl -fs -X PUT "http://${host}:${port}/v2/keys/_test" ``` diff --git a/docs/admin/high-availability.md b/docs/admin/high-availability.md index aae0fb6cf5c..12b722d9c85 100644 --- a/docs/admin/high-availability.md +++ b/docs/admin/high-availability.md @@ -141,13 +141,13 @@ for ```${NODE_IP}``` on each machine. Once you copy this into all three nodes, you should have a clustered etcd set up. 
You can validate with -``` +```sh etcdctl member list ``` and -``` +```sh etcdctl cluster-health ``` @@ -179,7 +179,7 @@ Once you have replicated etcd set up correctly, we will also install the apiserv First you need to create the initial log file, so that Docker mounts a file instead of a directory: -``` +```sh touch /var/log/kube-apiserver.log ``` @@ -231,7 +231,7 @@ In the future, we expect to more tightly integrate this lease-locking into the s First, create empty log files on each node, so that Docker will mount the files not make new directories: -``` +```sh touch /var/log/kube-scheduler.log touch /var/log/kube-controller-manager.log ``` diff --git a/docs/admin/networking.md b/docs/admin/networking.md index 18f72eb1829..d9de18bfbdf 100644 --- a/docs/admin/networking.md +++ b/docs/admin/networking.md @@ -155,7 +155,7 @@ on that subnet, and is passed to docker's `--bridge` flag. We start Docker with: -``` +```sh DOCKER_OPTS="--bridge=cbr0 --iptables=false --ip-masq=false" ``` @@ -172,14 +172,14 @@ masquerade (aka SNAT - to make it seem as if packets came from the `Node` itself) traffic that is bound for IPs outside the GCE project network (10.0.0.0/8). -``` +```sh iptables -t nat -A POSTROUTING ! -d 10.0.0.0/8 -o eth0 -j MASQUERADE ``` Lastly we enable IP forwarding in the kernel (so the kernel will process packets for bridged containers): -``` +```sh sysctl net.ipv4.ip_forward=1 ``` diff --git a/docs/admin/node.md b/docs/admin/node.md index 64f47e42e9e..22d908287a8 100644 --- a/docs/admin/node.md +++ b/docs/admin/node.md @@ -209,7 +209,7 @@ node, but will not affect any existing pods on the node. This is useful as a preparatory step before a node reboot, etc. For example, to mark a node unschedulable, run this command: -``` +```sh kubectl replace nodes 10.1.2.3 --patch='{"apiVersion": "v1", "unschedulable": true}' ``` @@ -228,7 +228,7 @@ processes not in containers. If you want to explicitly reserve resources for non-Pod processes, you can create a placeholder pod. Use the following template: -``` +```yaml apiVersion: v1 kind: Pod metadata: diff --git a/docs/admin/resource-quota.md b/docs/admin/resource-quota.md index 21bec1eb75b..8b6031aaf10 100644 --- a/docs/admin/resource-quota.md +++ b/docs/admin/resource-quota.md @@ -89,7 +89,7 @@ This means the resource must have a fully-qualified name (i.e. mycompany.org/shi Kubectl supports creating, updating, and viewing quotas -``` +```console $ kubectl namespace myspace $ cat < quota.json { diff --git a/docs/admin/salt.md b/docs/admin/salt.md index 5807c281171..566a07e595f 100644 --- a/docs/admin/salt.md +++ b/docs/admin/salt.md @@ -45,7 +45,7 @@ The **salt-minion** service runs on the kubernetes-master node and each kubernet Each salt-minion service is configured to interact with the **salt-master** service hosted on the kubernetes-master via the **master.conf** file [(except on GCE)](#standalone-salt-configuration-on-gce). -``` +```console [root@kubernetes-master] $ cat /etc/salt/minion.d/master.conf master: kubernetes-master ``` @@ -66,7 +66,7 @@ All remaining sections that refer to master/minion setups should be ignored for Security is not enabled on the salt-master, and the salt-master is configured to auto-accept incoming requests from minions. It is not recommended to use this security configuration in production environments without deeper study. (In some environments this isn't as bad as it might sound if the salt master port isn't externally accessible and you trust everyone on your network.) 
-``` +```console [root@kubernetes-master] $ cat /etc/salt/master.d/auto-accept.conf open_mode: True auto_accept: True @@ -78,7 +78,7 @@ Each minion in the salt cluster has an associated configuration that instructs t An example file is presented below using the Vagrant based environment. -``` +```console [root@kubernetes-master] $ cat /etc/salt/minion.d/grains.conf grains: etcd_servers: $MASTER_IP diff --git a/docs/admin/service-accounts-admin.md b/docs/admin/service-accounts-admin.md index defa272a006..ff4cbad808f 100644 --- a/docs/admin/service-accounts-admin.md +++ b/docs/admin/service-accounts-admin.md @@ -93,7 +93,7 @@ account. To create additional API tokens for a service account, create a secret of type `ServiceAccountToken` with an annotation referencing the service account, and the controller will update it with a generated token: -``` +```json secret.json: { "kind": "Secret", @@ -105,14 +105,16 @@ secret.json: } "type": "kubernetes.io/service-account-token" } +``` -$ kubectl create -f ./secret.json -$ kubectl describe secret mysecretname +```sh +kubectl create -f ./secret.json +kubectl describe secret mysecretname ``` #### To delete/invalidate a service account token -``` +```sh kubectl delete secret mysecretname ```
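
For anyone extending the same convention to other docs, a minimal before/after sketch (the `kubectl get nodes` command is reused from the cluster-troubleshooting hunk above): bare fences gain a language hint such as `sh`, `console`, `yaml`, or `json`, which is what lets the rendered Markdown pick up syntax highlighting.

Before, with no language hint, the block renders as plain text:

```
kubectl get nodes
```

After, tagged as a shell command:

```sh
kubectl get nodes
```

A rough way to audit for fences that are still untagged (not part of this patch; the pattern also matches closing fences, so expect some noise in the output):

```sh
git grep -n '^```$' -- docs/
```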