diff --git a/docs/design/admission_control_limit_range.md b/docs/design/admission_control_limit_range.md
index ccdb44d8870..48a7880f6a0 100644
--- a/docs/design/admission_control_limit_range.md
+++ b/docs/design/admission_control_limit_range.md
@@ -128,7 +128,7 @@ The server is updated to be aware of **LimitRange** objects.
 
 The constraints are only enforced if the kube-apiserver is started as follows:
 
-```
+```console
 $ kube-apiserver -admission_control=LimitRanger
 ```
 
@@ -140,7 +140,7 @@ kubectl is modified to support the **LimitRange** resource.
 
 For example,
 
-```shell
+```console
 $ kubectl namespace myspace
 $ kubectl create -f docs/user-guide/limitrange/limits.yaml
 $ kubectl get limits
diff --git a/docs/design/admission_control_resource_quota.md b/docs/design/admission_control_resource_quota.md
index 99d5431a157..a3781d645d8 100644
--- a/docs/design/admission_control_resource_quota.md
+++ b/docs/design/admission_control_resource_quota.md
@@ -140,7 +140,7 @@ The server is updated to be aware of **ResourceQuota** objects.
 
 The quota is only enforced if the kube-apiserver is started as follows:
 
-```
+```console
 $ kube-apiserver -admission_control=ResourceQuota
 ```
 
@@ -167,7 +167,7 @@ kubectl is modified to support the **ResourceQuota** resource.
 
 For example,
 
-```
+```console
 $ kubectl namespace myspace
 $ kubectl create -f docs/user-guide/resourcequota/quota.yaml
 $ kubectl get quota
diff --git a/docs/design/clustering/README.md b/docs/design/clustering/README.md
index 53649a31b49..d02b7d50e2a 100644
--- a/docs/design/clustering/README.md
+++ b/docs/design/clustering/README.md
@@ -34,7 +34,7 @@ This directory contains diagrams for the clustering design doc.
 
 This depends on the `seqdiag` [utility](http://blockdiag.com/en/seqdiag/index.html). Assuming you have a non-borked python install, this should be installable with
 
-```bash
+```sh
 pip install seqdiag
 ```
 
@@ -44,7 +44,7 @@ Just call `make` to regenerate the diagrams.
 
 If you are on a Mac or your pip install is messed up, you can easily build with docker.
 
-```
+```sh
 make docker
 ```
 
diff --git a/docs/design/event_compression.md b/docs/design/event_compression.md
index 29e659170fd..3b988048aaf 100644
--- a/docs/design/event_compression.md
+++ b/docs/design/event_compression.md
@@ -90,7 +90,7 @@ Each binary that generates events:
 
 Sample kubectl output
 
-```
+```console
 FIRSTSEEN                         LASTSEEN                          COUNT   NAME                                          KIND     SUBOBJECT   REASON     SOURCE                                                   MESSAGE
 Thu, 12 Feb 2015 01:13:02 +0000   Thu, 12 Feb 2015 01:13:02 +0000   1       kubernetes-minion-4.c.saad-dev-vms.internal   Minion               starting   {kubelet kubernetes-minion-4.c.saad-dev-vms.internal}   Starting kubelet.
 Thu, 12 Feb 2015 01:13:09 +0000   Thu, 12 Feb 2015 01:13:09 +0000   1       kubernetes-minion-1.c.saad-dev-vms.internal   Minion               starting   {kubelet kubernetes-minion-1.c.saad-dev-vms.internal}   Starting kubelet.
diff --git a/docs/design/namespaces.md b/docs/design/namespaces.md
index 1f1a767c6c4..da3bb2c5b0b 100644
--- a/docs/design/namespaces.md
+++ b/docs/design/namespaces.md
@@ -74,7 +74,7 @@ The Namespace provides a unique scope for:
 
 A *Namespace* defines a logically named group for multiple *Kind*s of resources.
 
-```
+```go
 type Namespace struct {
   TypeMeta   `json:",inline"`
   ObjectMeta `json:"metadata,omitempty"`
@@ -125,7 +125,7 @@ See [Admission control: Resource Quota](admission_control_resource_quota.md)
 
 Upon creation of a *Namespace*, the creator may provide a list of *Finalizer* objects.
 
-```
+```go
 type FinalizerName string
 
 // These are internal finalizers to Kubernetes, must be qualified name unless defined here
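The finalizer semantics this hunk and the walkthrough further down rely on are compact enough to sketch. The following Go fragment is illustrative only — `finalize` and `canDelete` are names invented for this note, not functions from the tree — but it captures the two rules at work: each agent removes only its own token, and deletion from storage is gated on the list reaching empty.

```go
package main

import "fmt"

// FinalizerName mirrors the type introduced in the hunk above.
type FinalizerName string

const (
	FinalizerKubernetes FinalizerName = "kubernetes"
	// An external agent's token, as in the OpenShift walkthrough below.
	FinalizerOrigin FinalizerName = "openshift.com/origin"
)

// finalize removes one agent's token and leaves everyone else's intact.
func finalize(finalizers []FinalizerName, done FinalizerName) []FinalizerName {
	out := make([]FinalizerName, 0, len(finalizers))
	for _, f := range finalizers {
		if f != done {
			out = append(out, f)
		}
	}
	return out
}

// canDelete reports whether the Namespace may be removed from storage.
func canDelete(finalizers []FinalizerName) bool {
	return len(finalizers) == 0
}

func main() {
	fs := []FinalizerName{FinalizerOrigin, FinalizerKubernetes}
	fs = finalize(fs, FinalizerKubernetes) // Kubernetes finished purging content
	fmt.Println(canDelete(fs))             // false: origin has not finalized yet
	fs = finalize(fs, FinalizerOrigin)
	fmt.Println(canDelete(fs)) // true: storage may now delete the Namespace
}
```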
@@ -154,7 +154,7 @@ set by default.
 
 A *Namespace* may exist in the following phases.
 
-```
+```go
 type NamespacePhase string
 const(
   NamespaceActive NamespacePhase = "Active"
@@ -262,7 +262,7 @@ to take part in Namespace termination.
 
 OpenShift creates a Namespace in Kubernetes
 
-```
+```json
 {
   "apiVersion":"v1",
   "kind": "Namespace",
@@ -287,7 +287,7 @@ own storage associated with the "development" namespace unknown to Kubernetes.
 
 User deletes the Namespace in Kubernetes, and the Namespace now has the following state:
 
-```
+```json
 {
   "apiVersion":"v1",
   "kind": "Namespace",
@@ -312,7 +312,7 @@ and begins to terminate all of the content in the namespace that it knows about.
 success, it executes a *finalize* action that modifies the *Namespace* by removing *kubernetes* from the list of finalizers:
 
-```
+```json
 {
   "apiVersion":"v1",
   "kind": "Namespace",
@@ -340,7 +340,7 @@ from the list of finalizers.
 
 This results in the following state:
 
-```
+```json
 {
   "apiVersion":"v1",
   "kind": "Namespace",
diff --git a/docs/design/networking.md b/docs/design/networking.md
index d7822d4d85f..b1d5a460101 100644
--- a/docs/design/networking.md
+++ b/docs/design/networking.md
@@ -131,7 +131,7 @@ differentiate it from `docker0`) is set up outside of Docker proper.
 
 Example of GCE's advanced routing rules:
 
-```
+```sh
 gcloud compute routes add "${MINION_NAMES[$i]}" \
   --project "${PROJECT}" \
   --destination-range "${MINION_IP_RANGES[$i]}" \
diff --git a/docs/design/persistent-storage.md b/docs/design/persistent-storage.md
index 3e9edd3ef81..9b0cd0d768e 100644
--- a/docs/design/persistent-storage.md
+++ b/docs/design/persistent-storage.md
@@ -127,7 +127,7 @@ Events that communicate the state of a mounted volume are left to the volume plu
 
 An administrator provisions storage by posting PVs to the API. Various ways to automate this task can be scripted. Dynamic provisioning is a future feature that can maintain levels of PVs.
 
-```
+```yaml
 POST:
 
 kind: PersistentVolume
@@ -140,15 +140,13 @@ spec:
   persistentDisk:
     pdName: "abc123"
     fsType: "ext4"
+```
 
----------------------------------------------------
-
-kubectl get pv
+```console
+$ kubectl get pv
 NAME      LABELS   CAPACITY      ACCESSMODES   STATUS    CLAIM   REASON
 pv0001    map[]    10737418240   RWO           Pending
-
-
 ```
 
 #### Users request storage
@@ -157,9 +155,9 @@ A user requests storage by posting a PVC to the API. Their request contains the
 
 The user must be within a namespace to create PVCs.
 
-```
-
+```yaml
 POST:
+
 kind: PersistentVolumeClaim
 apiVersion: v1
 metadata:
@@ -170,15 +168,13 @@ spec:
   resources:
     requests:
       storage: 3
+```
 
----------------------------------------------------
-
-kubectl get pvc
-
+```console
+$ kubectl get pvc
 NAME        LABELS   STATUS    VOLUME
 myclaim-1   map[]    pending
-
 ```
 
@@ -186,9 +182,8 @@ myclaim-1 map[] pending
 
 The ```PersistentVolumeClaimBinder``` attempts to find an available volume that most closely matches the user's request. If one exists, the volume and the claim are bound by placing a reference to the PVC on the PV. Requests can go unfulfilled if a suitable match is not found.
 
-```
-
-kubectl get pv
+```console
+$ kubectl get pv
 NAME      LABELS   CAPACITY      ACCESSMODES   STATUS   CLAIM                                              REASON
 pv0001    map[]    10737418240   RWO           Bound    myclaim-1 / f4b3d283-c0ef-11e4-8be4-80e6500a981e
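The binder behavior described at the end of the hunk above ("find an available volume that most closely matches the request") is worth pinning down with an example. This Go sketch implements one plausible reading — the smallest unbound volume with sufficient capacity — under simplified assumptions: the types are stand-ins rather than the real API objects, and access-mode matching is omitted for brevity.

```go
package main

import "fmt"

// volume is a simplified stand-in for a PersistentVolume.
type volume struct {
	name     string
	capacity int64  // bytes
	claimRef string // empty while the volume is still available
}

// match returns the smallest unbound volume that satisfies the requested
// capacity, or nil when the request must go unfulfilled for now.
func match(pvs []volume, requested int64) *volume {
	var best *volume
	for i := range pvs {
		pv := &pvs[i]
		if pv.claimRef != "" || pv.capacity < requested {
			continue // already bound, or too small
		}
		if best == nil || pv.capacity < best.capacity {
			best = pv
		}
	}
	return best
}

func main() {
	pvs := []volume{
		{name: "pv0001", capacity: 10737418240}, // the 10Gi volume from the example
		{name: "pv0002", capacity: 5368709120},
	}
	// "storage: 3" from the claim example above; binding puts a reference
	// to the claim on the chosen volume.
	if pv := match(pvs, 3); pv != nil {
		pv.claimRef = "myclaim-1"
		fmt.Println("bound", pv.name)
	}
}
```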
@@ -198,8 +193,6 @@ kubectl get pvc
 NAME        LABELS   STATUS   VOLUME
 myclaim-1   map[]    Bound    b16e91d6-c0ef-11e4-8be4-80e6500a981e
-
-
 ```
 
 #### Claim usage
@@ -208,7 +201,7 @@ The claim holder can use their claim as a volume. The ```PersistentVolumeClaimV
 
 The claim holder owns the claim and its data for as long as the claim exists. The pod using the claim can be deleted, but the claim remains in the user's namespace. It can be used again and again by many pods.
 
-```
+```yaml
 POST:
 
 kind: Pod
@@ -229,17 +222,14 @@ spec:
     accessMode: ReadWriteOnce
     claimRef:
       name: myclaim-1
-
 ```
 
 #### Releasing a claim and Recycling a volume
 
 When a claim holder is finished with their data, they can delete their claim.
 
-```
-
-kubectl delete pvc myclaim-1
-
+```console
+$ kubectl delete pvc myclaim-1
 ```
 
 The ```PersistentVolumeClaimBinder``` will reconcile this by removing the claim reference from the PV and changing the PV's status to 'Released'.
diff --git a/docs/design/resources.md b/docs/design/resources.md
index 055c5d86ed5..7bcce84a86c 100644
--- a/docs/design/resources.md
+++ b/docs/design/resources.md
@@ -89,7 +89,7 @@ Both users and a number of system components, such as schedulers, (horizontal) a
 
 Resource requirements for a container or pod should have the following form:
 
-```
+```yaml
 resourceRequirementSpec: [
   request: [ cpu: 2.5, memory: "40Mi" ],
   limit: [ cpu: 4.0, memory: "99Mi" ],
@@ -103,7 +103,7 @@ Where:
 
 Total capacity for a node should have a similar structure:
 
-```
+```yaml
 resourceCapacitySpec: [
   total: [ cpu: 12, memory: "128Gi" ]
 ]
@@ -159,15 +159,16 @@ rather than decimal ones: "64MiB" rather than "64MB".
 
 A resource type may have an associated read-only ResourceType structure, that contains metadata about the type. For example:
 
-```
+```yaml
 resourceTypes: [
   "kubernetes.io/memory": [
     isCompressible: false, ...
   ]
   "kubernetes.io/cpu": [
-    isCompressible: true, internalScaleExponent: 3, ...
+    isCompressible: true,
+    internalScaleExponent: 3, ...
   ]
-  "kubernetes.io/disk-space": [ ... }
+  "kubernetes.io/disk-space": [ ... ]
 ]
 ```
 
@@ -195,7 +196,7 @@ Because resource usage and related metrics change continuously, need to be track
 
 Singleton values for observed and predicted future usage will rapidly prove inadequate, so we will support the following structure for extended usage information:
 
-```
+```yaml
 resourceStatus: [
   usage: [ cpu: <CPU-info>, memory: <memory-info> ],
   maxusage: [ cpu: <CPU-info>, memory: <memory-info> ],
@@ -205,7 +206,7 @@ resourceStatus: [
 
 where a `<CPU-info>` or `<memory-info>` structure looks like this:
 
-```
+```yaml
 {
     mean: <value>  # arithmetic mean
     max: <value>   # maximum value
@@ -218,7 +219,7 @@ where a `<CPU-info>` or `<memory-info>` structure looks like this:
       "99.9": <99.9th-percentile-value>,
       ...
     ]
-  }
+}
 ```
 
 All parts of this structure are optional, although we strongly encourage including quantities for 50, 90, 95, 99, 99.5, and 99.9 percentiles. _[In practice, it will be important to include additional info such as the length of the time window over which the averages are calculated, the confidence level, and information-quality metrics such as the number of dropped or discarded data points.]_
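Since the design encourages reporting specific percentiles but does not say how to compute them, a hedged sketch may help implementers see the shape of the work. The function name and the nearest-rank percentile method below are this note's choices, not the document's; it simply fills the mean, max, and percentile fields of the structure above from raw samples.

```go
package main

import (
	"fmt"
	"sort"
)

// summarize computes the mean, max, and encouraged percentiles for one
// resource from raw usage samples (assumes at least one sample).
// Percentile selection uses the nearest-rank method; the design does not
// mandate any particular method.
func summarize(samples []float64) (mean, max float64, pct map[string]float64) {
	sorted := append([]float64(nil), samples...)
	sort.Float64s(sorted)
	var sum float64
	for _, s := range sorted {
		sum += s
	}
	mean = sum / float64(len(sorted))
	max = sorted[len(sorted)-1]
	pct = make(map[string]float64)
	for _, p := range []float64{50, 90, 95, 99, 99.5, 99.9} {
		rank := int(p/100*float64(len(sorted)) + 0.5)
		if rank < 1 {
			rank = 1
		}
		if rank > len(sorted) {
			rank = len(sorted)
		}
		pct[fmt.Sprintf("%g", p)] = sorted[rank-1]
	}
	return mean, max, pct
}

func main() {
	mean, max, pct := summarize([]float64{1.2, 3.4, 2.8, 0.9, 4.1})
	fmt.Println(mean, max, pct["50"], pct["99.9"])
}
```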
diff --git a/docs/design/simple-rolling-update.md b/docs/design/simple-rolling-update.md
index 80bc656666d..f5ef348ab51 100644
--- a/docs/design/simple-rolling-update.md
+++ b/docs/design/simple-rolling-update.md
@@ -62,7 +62,7 @@ To facilitate recovery in the case of a crash of the updating process itself, we
 
 Recovery is achieved by issuing the same command again:
 
-```
+```sh
 kubectl rolling-update foo [foo-v2] --image=myimage:v2
 ```
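Re-issuing the same command is only a recovery strategy if every step of the update checks current state before acting. The Go sketch below illustrates that idempotent-step pattern under deliberately simplified assumptions — replicas move in one jump rather than one at a time, and cluster state is an in-memory map — so it is a reading of the design's recovery property, not kubectl's implementation.

```go
package main

import "fmt"

// step is one phase of the rolling update. Each phase first checks whether
// it already happened, which is what makes re-running the whole sequence
// after a crash safe.
type step struct {
	name string
	done func(state map[string]int) bool
	act  func(state map[string]int)
}

func run(state map[string]int, steps []step) {
	for _, s := range steps {
		if s.done(state) {
			continue // performed before the crash: skip, don't redo
		}
		s.act(state)
		fmt.Println("performed:", s.name)
	}
}

func main() {
	// Replica counts by controller name; a crash left us mid-update.
	state := map[string]int{"foo": 2, "foo-v2": 1}
	steps := []step{
		{"create foo-v2",
			func(s map[string]int) bool { _, ok := s["foo-v2"]; return ok },
			func(s map[string]int) { s["foo-v2"] = 0 }},
		{"shift replicas from foo to foo-v2",
			func(s map[string]int) bool { return s["foo"] == 0 },
			func(s map[string]int) { s["foo-v2"] += s["foo"]; s["foo"] = 0 }},
		{"delete foo",
			func(s map[string]int) bool { _, ok := s["foo"]; return !ok },
			func(s map[string]int) { delete(s, "foo") }},
	}
	run(state, steps)  // "issuing the same command again" re-runs all steps
	fmt.Println(state) // map[foo-v2:3]: only the unfinished steps ran
}
```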