From 4c308f070304ef4c387b8ae8638d234c82b0d7be Mon Sep 17 00:00:00 2001 From: Alex Robinson Date: Sun, 19 Jul 2015 08:46:02 +0000 Subject: [PATCH 1/2] Improve design docs syntax highlighting. --- docs/design/admission_control_limit_range.md | 4 +- .../admission_control_resource_quota.md | 4 +- docs/design/clustering/README.md | 4 +- docs/design/event_compression.md | 2 +- docs/design/namespaces.md | 14 +++---- docs/design/networking.md | 2 +- docs/design/persistent-storage.md | 38 +++++++------------ docs/design/resources.md | 17 +++++---- docs/design/simple-rolling-update.md | 2 +- 9 files changed, 39 insertions(+), 48 deletions(-) diff --git a/docs/design/admission_control_limit_range.md b/docs/design/admission_control_limit_range.md index ccdb44d8870..48a7880f6a0 100644 --- a/docs/design/admission_control_limit_range.md +++ b/docs/design/admission_control_limit_range.md @@ -128,7 +128,7 @@ The server is updated to be aware of **LimitRange** objects. The constraints are only enforced if the kube-apiserver is started as follows: -``` +```console $ kube-apiserver -admission_control=LimitRanger ``` @@ -140,7 +140,7 @@ kubectl is modified to support the **LimitRange** resource. For example, -```shell +```console $ kubectl namespace myspace $ kubectl create -f docs/user-guide/limitrange/limits.yaml $ kubectl get limits diff --git a/docs/design/admission_control_resource_quota.md b/docs/design/admission_control_resource_quota.md index 99d5431a157..a3781d645d8 100644 --- a/docs/design/admission_control_resource_quota.md +++ b/docs/design/admission_control_resource_quota.md @@ -140,7 +140,7 @@ The server is updated to be aware of **ResourceQuota** objects. The quota is only enforced if the kube-apiserver is started as follows: -``` +```console $ kube-apiserver -admission_control=ResourceQuota ``` @@ -167,7 +167,7 @@ kubectl is modified to support the **ResourceQuota** resource. 
For example, -``` +```console $ kubectl namespace myspace $ kubectl create -f docs/user-guide/resourcequota/quota.yaml $ kubectl get quota diff --git a/docs/design/clustering/README.md b/docs/design/clustering/README.md index 53649a31b49..d02b7d50e2a 100644 --- a/docs/design/clustering/README.md +++ b/docs/design/clustering/README.md @@ -34,7 +34,7 @@ This directory contains diagrams for the clustering design doc. This depends on the `seqdiag` [utility](http://blockdiag.com/en/seqdiag/index.html). Assuming you have a non-borked python install, this should be installable with -```bash +```sh pip install seqdiag ``` @@ -44,7 +44,7 @@ Just call `make` to regenerate the diagrams. If you are on a Mac or your pip install is messed up, you can easily build with docker. -``` +```sh make docker ``` diff --git a/docs/design/event_compression.md b/docs/design/event_compression.md index 29e659170fd..3b988048aaf 100644 --- a/docs/design/event_compression.md +++ b/docs/design/event_compression.md @@ -90,7 +90,7 @@ Each binary that generates events: Sample kubectl output -``` +```console FIRSTSEEN LASTSEEN COUNT NAME KIND SUBOBJECT REASON SOURCE MESSAGE Thu, 12 Feb 2015 01:13:02 +0000 Thu, 12 Feb 2015 01:13:02 +0000 1 kubernetes-minion-4.c.saad-dev-vms.internal Minion starting {kubelet kubernetes-minion-4.c.saad-dev-vms.internal} Starting kubelet. Thu, 12 Feb 2015 01:13:09 +0000 Thu, 12 Feb 2015 01:13:09 +0000 1 kubernetes-minion-1.c.saad-dev-vms.internal Minion starting {kubelet kubernetes-minion-1.c.saad-dev-vms.internal} Starting kubelet. diff --git a/docs/design/namespaces.md b/docs/design/namespaces.md index 1f1a767c6c4..da3bb2c5b0b 100644 --- a/docs/design/namespaces.md +++ b/docs/design/namespaces.md @@ -74,7 +74,7 @@ The Namespace provides a unique scope for: A *Namespace* defines a logically named group for multiple *Kind*s of resources. 
-``` +```go type Namespace struct { TypeMeta `json:",inline"` ObjectMeta `json:"metadata,omitempty"` @@ -125,7 +125,7 @@ See [Admission control: Resource Quota](admission_control_resource_quota.md) Upon creation of a *Namespace*, the creator may provide a list of *Finalizer* objects. -``` +```go type FinalizerName string // These are internal finalizers to Kubernetes, must be qualified name unless defined here @@ -154,7 +154,7 @@ set by default. A *Namespace* may exist in the following phases. -``` +```go type NamespacePhase string const( NamespaceActive NamespacePhase = "Active" @@ -262,7 +262,7 @@ to take part in Namespace termination. OpenShift creates a Namespace in Kubernetes -``` +```json { "apiVersion":"v1", "kind": "Namespace", @@ -287,7 +287,7 @@ own storage associated with the "development" namespace unknown to Kubernetes. User deletes the Namespace in Kubernetes, and Namespace now has following state: -``` +```json { "apiVersion":"v1", "kind": "Namespace", @@ -312,7 +312,7 @@ and begins to terminate all of the content in the namespace that it knows about. success, it executes a *finalize* action that modifies the *Namespace* by removing *kubernetes* from the list of finalizers: -``` +```json { "apiVersion":"v1", "kind": "Namespace", @@ -340,7 +340,7 @@ from the list of finalizers. This results in the following state: -``` +```json { "apiVersion":"v1", "kind": "Namespace", diff --git a/docs/design/networking.md b/docs/design/networking.md index d7822d4d85f..b1d5a460101 100644 --- a/docs/design/networking.md +++ b/docs/design/networking.md @@ -131,7 +131,7 @@ differentiate it from `docker0`) is set up outside of Docker proper. 
Example of GCE's advanced routing rules: -``` +```sh gcloud compute routes add "${MINION_NAMES[$i]}" \ --project "${PROJECT}" \ --destination-range "${MINION_IP_RANGES[$i]}" \ diff --git a/docs/design/persistent-storage.md b/docs/design/persistent-storage.md index 3e9edd3ef81..9b0cd0d768e 100644 --- a/docs/design/persistent-storage.md +++ b/docs/design/persistent-storage.md @@ -127,7 +127,7 @@ Events that communicate the state of a mounted volume are left to the volume plu An administrator provisions storage by posting PVs to the API. Various way to automate this task can be scripted. Dynamic provisioning is a future feature that can maintain levels of PVs. -``` +```yaml POST: kind: PersistentVolume @@ -140,15 +140,13 @@ spec: persistentDisk: pdName: "abc123" fsType: "ext4" +``` --------------------------------------------------- - -kubectl get pv +```console +$ kubectl get pv NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON pv0001 map[] 10737418240 RWO Pending - - ``` #### Users request storage @@ -157,9 +155,9 @@ A user requests storage by posting a PVC to the API. Their request contains the The user must be within a namespace to create PVCs. -``` - +```yaml POST: + kind: PersistentVolumeClaim apiVersion: v1 metadata: @@ -170,15 +168,13 @@ spec: resources: requests: storage: 3 +``` --------------------------------------------------- - -kubectl get pvc - +```console +$ kubectl get pvc NAME LABELS STATUS VOLUME myclaim-1 map[] pending - ``` @@ -186,9 +182,8 @@ myclaim-1 map[] pending The ```PersistentVolumeClaimBinder``` attempts to find an available volume that most closely matches the user's request. If one exists, they are bound by putting a reference on the PV to the PVC. Requests can go unfulfilled if a suitable match is not found. 
-``` - -kubectl get pv +```console +$ kubectl get pv NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON pv0001 map[] 10737418240 RWO Bound myclaim-1 / f4b3d283-c0ef-11e4-8be4-80e6500a981e @@ -198,8 +193,6 @@ kubectl get pvc NAME LABELS STATUS VOLUME myclaim-1 map[] Bound b16e91d6-c0ef-11e4-8be4-80e6500a981e - - ``` #### Claim usage @@ -208,7 +201,7 @@ The claim holder can use their claim as a volume. The ```PersistentVolumeClaimV The claim holder owns the claim and its data for as long as the claim exists. The pod using the claim can be deleted, but the claim remains in the user's namespace. It can be used again and again by many pods. -``` +```yaml POST: kind: Pod @@ -229,17 +222,14 @@ spec: accessMode: ReadWriteOnce claimRef: name: myclaim-1 - ``` #### Releasing a claim and Recycling a volume When a claim holder is finished with their data, they can delete their claim. -``` - -kubectl delete pvc myclaim-1 - +```console +$ kubectl delete pvc myclaim-1 ``` The ```PersistentVolumeClaimBinder``` will reconcile this by removing the claim reference from the PV and change the PVs status to 'Released'. diff --git a/docs/design/resources.md b/docs/design/resources.md index 055c5d86ed5..7bcce84a86c 100644 --- a/docs/design/resources.md +++ b/docs/design/resources.md @@ -89,7 +89,7 @@ Both users and a number of system components, such as schedulers, (horizontal) a Resource requirements for a container or pod should have the following form: -``` +```yaml resourceRequirementSpec: [ request: [ cpu: 2.5, memory: "40Mi" ], limit: [ cpu: 4.0, memory: "99Mi" ], @@ -103,7 +103,7 @@ Where: Total capacity for a node should have a similar structure: -``` +```yaml resourceCapacitySpec: [ total: [ cpu: 12, memory: "128Gi" ] ] @@ -159,15 +159,16 @@ rather than decimal ones: "64MiB" rather than "64MB". A resource type may have an associated read-only ResourceType structure, that contains metadata about the type. 
For example:

-```
+```yaml
 resourceTypes: [
   "kubernetes.io/memory": [
     isCompressible: false, ...
   ]
   "kubernetes.io/cpu": [
-    isCompressible: true, internalScaleExponent: 3, ...
+    isCompressible: true,
+    internalScaleExponent: 3, ...
   ]
-  "kubernetes.io/disk-space": [ ... }
+  "kubernetes.io/disk-space": [ ... ]
 ]
 ```
@@ -195,7 +196,7 @@ Because resource usage and related metrics change continuously, need to be track
 
 Singleton values for observed and predicted future usage will rapidly prove inadequate, so we will support the following structure for extended usage information:
 
-```
+```yaml
 resourceStatus: [
   usage: [ cpu: , memory: ],
   maxusage: [ cpu: , memory: ],
   predicted: [ cpu: , memory: ],
 ]
 ```
 
 where a `` or `` structure looks like this:
 
-```
+```yaml
 {
   mean: # arithmetic mean
   max: # maximum value
   stddev: # standard deviation
   samples: # <integer> number of datapoints
   percentiles: [ # map from %iles to values
     "10": ,
     "50": ,
     "90": ,
     "95": ,
     "99": ,
     "99.9": <99.9th-percentile-value>,
     ...
   ]
- }
+}
 ```
 
 All parts of this structure are optional, although we strongly encourage including quantities for 50, 90, 95, 99, 99.5, and 99.9 percentiles. _[In practice, it will be important to include additional info such as the length of the time window over which the averages are calculated, the confidence level, and information-quality metrics such as the number of dropped or discarded data points.]_
diff --git a/docs/design/simple-rolling-update.md b/docs/design/simple-rolling-update.md
index 80bc656666d..f5ef348ab51 100644
--- a/docs/design/simple-rolling-update.md
+++ b/docs/design/simple-rolling-update.md
@@ -62,7 +62,7 @@ To facilitate recovery in the case of a crash of the updating process itself, we
 
 Recovery is achieved by issuing the same command again:
 
-```
+```sh
 kubectl rolling-update foo [foo-v2] --image=myimage:v2
 ```

From 4182c3b3941044447bfda006c63c5145eb9c103c Mon Sep 17 00:00:00 2001
From: Alex Robinson
Date: Sun, 19 Jul 2015 08:54:49 +0000
Subject: [PATCH 2/2] Improve devel docs syntax highlighting.
--- docs/devel/api-conventions.md | 2 +- docs/devel/api_changes.md | 8 ++-- docs/devel/cherry-picks.md | 2 +- docs/devel/developer-guides/vagrant.md | 34 +++++++------- docs/devel/development.md | 62 +++++++++++++------------- docs/devel/flaky-tests.md | 2 +- docs/devel/getting-builds.md | 4 +- docs/devel/profiling.md | 14 +++--- docs/devel/releasing.md | 26 +++++------ 9 files changed, 77 insertions(+), 77 deletions(-) diff --git a/docs/devel/api-conventions.md b/docs/devel/api-conventions.md index c2d71078a80..64509dae4cb 100644 --- a/docs/devel/api-conventions.md +++ b/docs/devel/api-conventions.md @@ -524,7 +524,7 @@ The status object is encoded as JSON and provided as the body of the response. **Example:** -``` +```console $ curl -v -k -H "Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc" https://10.240.122.184:443/api/v1/namespaces/default/pods/grafana > GET /api/v1/namespaces/default/pods/grafana HTTP/1.1 diff --git a/docs/devel/api_changes.md b/docs/devel/api_changes.md index 7a0418e83b2..d8e20014e3f 100644 --- a/docs/devel/api_changes.md +++ b/docs/devel/api_changes.md @@ -284,8 +284,8 @@ Once all the necessary manually written conversions are added, you need to regenerate auto-generated ones. To regenerate them: - run -``` - $ hack/update-generated-conversions.sh +```sh +hack/update-generated-conversions.sh ``` If running the above script is impossible due to compile errors, the easiest @@ -359,8 +359,8 @@ an example to illustrate your change. Make sure you update the swagger API spec by running: -```shell -$ hack/update-swagger-spec.sh +```sh +hack/update-swagger-spec.sh ``` The API spec changes should be in a commit separate from your other changes. diff --git a/docs/devel/cherry-picks.md b/docs/devel/cherry-picks.md index 7ed63d088ac..c36741c42e2 100644 --- a/docs/devel/cherry-picks.md +++ b/docs/devel/cherry-picks.md @@ -40,7 +40,7 @@ Kubernetes projects. 
Any contributor can propose a cherry pick of any pull request, like so: -``` +```sh hack/cherry_pick_pull.sh upstream/release-3.14 98765 ``` diff --git a/docs/devel/developer-guides/vagrant.md b/docs/devel/developer-guides/vagrant.md index e704bf3bd52..c1e02ff407e 100644 --- a/docs/devel/developer-guides/vagrant.md +++ b/docs/devel/developer-guides/vagrant.md @@ -86,8 +86,8 @@ vagrant ssh minion-3 To view the service status and/or logs on the kubernetes-master: -```sh -vagrant ssh master +```console +$ vagrant ssh master [vagrant@kubernetes-master ~] $ sudo systemctl status kube-apiserver [vagrant@kubernetes-master ~] $ sudo journalctl -r -u kube-apiserver @@ -100,8 +100,8 @@ vagrant ssh master To view the services on any of the nodes: -```sh -vagrant ssh minion-1 +```console +$ vagrant ssh minion-1 [vagrant@kubernetes-minion-1] $ sudo systemctl status docker [vagrant@kubernetes-minion-1] $ sudo journalctl -r -u docker [vagrant@kubernetes-minion-1] $ sudo systemctl status kubelet @@ -135,7 +135,7 @@ Once your Vagrant machines are up and provisioned, the first thing to do is to c You may need to build the binaries first, you can do this with ```make``` -```sh +```console $ ./cluster/kubectl.sh get nodes NAME LABELS STATUS @@ -182,8 +182,8 @@ Interact with the cluster When using the vagrant provider in Kubernetes, the `cluster/kubectl.sh` script will cache your credentials in a `~/.kubernetes_vagrant_auth` file so you will not be prompted for them in the future. -```sh -cat ~/.kubernetes_vagrant_auth +```console +$ cat ~/.kubernetes_vagrant_auth { "User": "vagrant", "Password": "vagrant" "CAFile": "/home/k8s_user/.kubernetes.vagrant.ca.crt", @@ -202,7 +202,7 @@ You should now be set to use the `cluster/kubectl.sh` script. For example try to Your cluster is running, you can list the nodes in your cluster: -```sh +```console $ ./cluster/kubectl.sh get nodes NAME LABELS STATUS @@ -216,7 +216,7 @@ Now start running some containers! 
You can now use any of the cluster/kube-*.sh commands to interact with your VM machines. Before starting a container there will be no pods, services and replication controllers. -``` +```console $ cluster/kubectl.sh get pods NAME READY STATUS RESTARTS AGE @@ -229,7 +229,7 @@ CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS Start a container running nginx with a replication controller and three replicas -``` +```console $ cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80 CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS my-nginx my-nginx nginx run=my-nginx 3 @@ -237,7 +237,7 @@ my-nginx my-nginx nginx run=my-nginx 3 When listing the pods, you will see that three containers have been started and are in Waiting state: -``` +```console $ cluster/kubectl.sh get pods NAME READY STATUS RESTARTS AGE my-nginx-389da 1/1 Waiting 0 33s @@ -247,7 +247,7 @@ my-nginx-nyj3x 1/1 Waiting 0 33s You need to wait for the provisioning to complete, you can monitor the minions by doing: -```sh +```console $ sudo salt '*minion-1' cmd.run 'docker images' kubernetes-minion-1: REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE @@ -257,7 +257,7 @@ kubernetes-minion-1: Once the docker image for nginx has been downloaded, the container will start and you can list it: -```sh +```console $ sudo salt '*minion-1' cmd.run 'docker ps' kubernetes-minion-1: CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES @@ -267,7 +267,7 @@ kubernetes-minion-1: Going back to listing the pods, services and replicationcontrollers, you now have: -``` +```console $ cluster/kubectl.sh get pods NAME READY STATUS RESTARTS AGE my-nginx-389da 1/1 Running 0 33s @@ -286,7 +286,7 @@ We did not start any services, hence there are none listed. But we see three rep Check the [guestbook](../../../examples/guestbook/README.md) application to learn how to create a service. 
You can already play with scaling the replicas with: -```sh +```console $ ./cluster/kubectl.sh scale rc my-nginx --replicas=2 $ ./cluster/kubectl.sh get pods NAME READY STATUS RESTARTS AGE @@ -327,8 +327,8 @@ rm ~/.kubernetes_vagrant_auth After using kubectl.sh make sure that the correct credentials are set: -```sh -cat ~/.kubernetes_vagrant_auth +```console +$ cat ~/.kubernetes_vagrant_auth { "User": "vagrant", "Password": "vagrant" diff --git a/docs/devel/development.md b/docs/devel/development.md index 6822ab5e18b..bb2330513f6 100644 --- a/docs/devel/development.md +++ b/docs/devel/development.md @@ -58,40 +58,40 @@ Below, we outline one of the more common git workflows that core developers use. The commands below require that you have $GOPATH set ([$GOPATH docs](https://golang.org/doc/code.html#GOPATH)). We highly recommend you put kubernetes' code into your GOPATH. Note: the commands below will not work if there is more than one directory in your `$GOPATH`. -``` -$ mkdir -p $GOPATH/src/github.com/GoogleCloudPlatform/ -$ cd $GOPATH/src/github.com/GoogleCloudPlatform/ +```sh +mkdir -p $GOPATH/src/github.com/GoogleCloudPlatform/ +cd $GOPATH/src/github.com/GoogleCloudPlatform/ # Replace "$YOUR_GITHUB_USERNAME" below with your github username -$ git clone https://github.com/$YOUR_GITHUB_USERNAME/kubernetes.git -$ cd kubernetes -$ git remote add upstream 'https://github.com/GoogleCloudPlatform/kubernetes.git' +git clone https://github.com/$YOUR_GITHUB_USERNAME/kubernetes.git +cd kubernetes +git remote add upstream 'https://github.com/GoogleCloudPlatform/kubernetes.git' ``` ### Create a branch and make changes -``` -$ git checkout -b myfeature +```sh +git checkout -b myfeature # Make your code changes ``` ### Keeping your development fork in sync -``` -$ git fetch upstream -$ git rebase upstream/master +```sh +git fetch upstream +git rebase upstream/master ``` Note: If you have write access to the main repository at github.com/GoogleCloudPlatform/kubernetes, you 
should modify your git configuration so that you can't accidentally push to upstream: -``` +```sh git remote set-url --push upstream no_push ``` ### Commiting changes to your fork -``` -$ git commit -$ git push -f origin myfeature +```sh +git commit +git push -f origin myfeature ``` ### Creating a pull request @@ -114,7 +114,7 @@ directly from mercurial. 2) Create a new GOPATH for your tools and install godep: -``` +```sh export GOPATH=$HOME/go-tools mkdir -p $GOPATH go get github.com/tools/godep @@ -122,7 +122,7 @@ go get github.com/tools/godep 3) Add $GOPATH/bin to your path. Typically you'd add this to your ~/.profile: -``` +```sh export GOPATH=$HOME/go-tools export PATH=$PATH:$GOPATH/bin ``` @@ -133,7 +133,7 @@ Here's a quick walkthrough of one way to use godeps to add or update a Kubernete 1) Devote a directory to this endeavor: -``` +```sh export KPATH=$HOME/code/kubernetes mkdir -p $KPATH/src/github.com/GoogleCloudPlatform/kubernetes cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes @@ -143,7 +143,7 @@ git clone https://path/to/your/fork . 2) Set up your GOPATH. -``` +```sh # Option A: this will let your builds see packages that exist elsewhere on your system. export GOPATH=$KPATH:$GOPATH # Option B: This will *not* let your local builds see packages that exist elsewhere on your system. @@ -153,14 +153,14 @@ export GOPATH=$KPATH 3) Populate your new GOPATH. -``` +```sh cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes godep restore ``` 4) Next, you can either add a new dependency or update an existing one. -``` +```sh # To add a new dependency, do: cd $KPATH/src/github.com/GoogleCloudPlatform/kubernetes go get path/to/dependency @@ -185,28 +185,28 @@ Please send dependency updates in separate commits within your PR, for easier re Before committing any changes, please link/copy these hooks into your .git directory. This will keep you from accidentally committing non-gofmt'd go code. 
-```
+```sh
 cd kubernetes/.git/hooks/
 ln -s ../../hooks/pre-commit .
 ```
 
 ## Unit tests
 
-```
+```sh
 cd kubernetes
 hack/test-go.sh
 ```
 
 Alternatively, you could also run:
 
-```
+```sh
 cd kubernetes
 godep go test ./...
 ```
 
 If you only want to run unit tests in one package, you could run ``godep go test`` under the package directory. For example, the following commands will run all unit tests in package kubelet:
 
-```
+```console
 $ cd kubernetes # step into kubernetes' directory.
 $ cd pkg/kubelet
 $ godep go test
@@ -221,7 +221,7 @@ Currently, collecting coverage is only supported for the Go unit tests.
 
 To run all unit tests and generate an HTML coverage report, run the following:
 
-```
+```sh
 cd kubernetes
 KUBE_COVER=y hack/test-go.sh
 ```
@@ -230,7 +230,7 @@ At the end of the run, the HTML report will be generated with the path printe
 
 To run tests and collect coverage in only one package, pass its relative path under the `kubernetes` directory as an argument, for example:
 
-```
+```sh
 cd kubernetes
 KUBE_COVER=y hack/test-go.sh pkg/kubectl
 ```
@@ -243,7 +243,7 @@ Coverage results for the project can also be viewed on [Coveralls](https://cover
 
 You need an [etcd](https://github.com/coreos/etcd/releases/tag/v2.0.0) in your path; please make sure it is installed and in your ``$PATH``.
 
-```
+```sh
 cd kubernetes
 hack/test-integration.sh
 ```
@@ -252,14 +252,14 @@
 You can run an end-to-end test which will bring up a master and two nodes, perform some tests, and then tear everything down. Make sure you have followed the getting started steps for your chosen cloud platform (which might involve changing the `KUBERNETES_PROVIDER` environment variable to something other than "gce").
-```
+```sh
 cd kubernetes
 hack/e2e-test.sh
 ```
 
 Pressing control-C should result in an orderly shutdown, but if something goes wrong and you still have some VMs running you can force a cleanup with this command:
 
-```
+```sh
 go run hack/e2e.go --down
 ```
 
@@ -332,7 +332,7 @@ See [conformance-test.sh](../../hack/conformance-test.sh).
 
 ## Regenerating the CLI documentation
 
-```
+```sh
 hack/run-gendocs.sh
 ```
diff --git a/docs/devel/flaky-tests.md b/docs/devel/flaky-tests.md
index 1e7f5fcb1d2..1568baed817 100644
--- a/docs/devel/flaky-tests.md
+++ b/docs/devel/flaky-tests.md
@@ -69,7 +69,7 @@ spec:
 
 Note that we omit the labels and the selector fields of the replication controller, because they will be populated from the labels field of the pod template by default.
 
-```
+```sh
 kubectl create -f ./controller.yaml
 ```
diff --git a/docs/devel/getting-builds.md b/docs/devel/getting-builds.md
index 4c92a446764..4265b77a6d4 100644
--- a/docs/devel/getting-builds.md
+++ b/docs/devel/getting-builds.md
@@ -35,7 +35,7 @@ Documentation for other releases can be found at
 
 You can use [hack/get-build.sh](../../hack/get-build.sh), or use it as a reference on how to get the most recent builds with curl. With `get-build.sh` you can grab the most recent stable build, the most recent release candidate, or the most recent build to pass our ci and gce e2e tests (essentially a nightly build).
 
-```
+```console
 usage:
   ./hack/get-build.sh [stable|release|latest|latest-green]
 
@@ -47,7 +47,7 @@ You can also use the gsutil tool to explore the Google Cloud Storage release bucket.
Here are some examples:

-```
+```sh
 gsutil cat gs://kubernetes-release/ci/latest.txt # output the latest ci version number
 gsutil cat gs://kubernetes-release/ci/latest-green.txt # output the latest ci version number that passed gce e2e
 gsutil ls gs://kubernetes-release/ci/v0.20.0-29-g29a55cc/ # list the contents of a ci release
diff --git a/docs/devel/profiling.md b/docs/devel/profiling.md
index d36885dd697..36bbfbae452 100644
--- a/docs/devel/profiling.md
+++ b/docs/devel/profiling.md
@@ -43,10 +43,10 @@ Go comes with inbuilt 'net/http/pprof' profiling library and profiling web servi
 
 TL;DR: Add lines:
 
-```
-  m.mux.HandleFunc("/debug/pprof/", pprof.Index)
-  m.mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
-  m.mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
+```go
+m.mux.HandleFunc("/debug/pprof/", pprof.Index)
+m.mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
+m.mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
 ```
 
 to the init(c *Config) method in 'pkg/master/master.go' and import the 'net/http/pprof' package.
@@ -57,13 +57,13 @@ In most use cases to use profiler service it's enough to do 'import _ net/http/p
 
 Even when running the profiler, I found it not really straightforward to use 'go tool pprof' with it. The problem is that at least for dev purposes certificates generated for APIserver are not signed by anyone trusted and because secureServer serves only secure traffic it isn't straightforward to connect to the service. The best workaround I found is by creating an ssh tunnel from the kubernetes_master open unsecured port to some external server, and using this server as a proxy. To save everyone looking for correct ssh flags, it is done by running:
 
-```
-  ssh kubernetes_master -L:localhost:8080
+```sh
+ssh kubernetes_master -L:localhost:8080
 ```
 
 or an analogous one for your Cloud provider. Afterwards you can e.g.
run

-```
+```sh
 go tool pprof http://localhost:/debug/pprof/profile
 ```
diff --git a/docs/devel/releasing.md b/docs/devel/releasing.md
index 65db081d05a..9950e6e4f05 100644
--- a/docs/devel/releasing.md
+++ b/docs/devel/releasing.md
@@ -65,7 +65,7 @@ to make sure they're solid around then as well.
 
 Once you find some greens, you can find the Git hash for a build by looking at the "Console Log", then look for `githash=`. You should see a line like:
 
-```
+```console
 + githash=v0.20.2-322-g974377b
 ```
 
@@ -80,7 +80,7 @@ oncall.
 
 Before proceeding to the next step:
 
-```
+```sh
 export BRANCHPOINT=v0.20.2-322-g974377b
 ```
 
@@ -230,11 +230,11 @@ present.
 
 We are using `pkg/version/base.go` as the source of versioning in absence of information from git. Here is a sample of that file's contents:
 
-```
- var (
-     gitVersion   string = "v0.4-dev" // version from git, output of $(git describe)
-     gitCommit    string = "" // sha1 from git, output of $(git rev-parse HEAD)
- )
+```go
+var (
+    gitVersion   string = "v0.4-dev" // version from git, output of $(git describe)
+    gitCommit    string = "" // sha1 from git, output of $(git rev-parse HEAD)
+)
 ```
 
 This means a build with `go install` or `go get` or a build from a tarball will
@@ -313,14 +313,14 @@ projects seem to live with that and it does not really become a large problem.
 
 As an example, Docker commit a327d9b91edf has a `v1.1.1-N-gXXX` label but it is not present in Docker `v1.2.0`:
 
-```
-  $ git describe a327d9b91edf
-  v1.1.1-822-ga327d9b91edf
-
-  $ git log --oneline v1.2.0..a327d9b91edf
-  a327d9b91edf Fix data space reporting from Kb/Mb to KB/MB
-
-  (Non-empty output here means the commit is not present on v1.2.0.)
+```console
+$ git describe a327d9b91edf
+v1.1.1-822-ga327d9b91edf
+
+$ git log --oneline v1.2.0..a327d9b91edf
+a327d9b91edf Fix data space reporting from Kb/Mb to KB/MB
+
+(Non-empty output here means the commit is not present on v1.2.0.)
 ```
 
 ## Release Notes