diff --git a/examples/examples_test.go b/examples/examples_test.go
index 2c037b61bbd..db02faa8e9b 100644
--- a/examples/examples_test.go
+++ b/examples/examples_test.go
@@ -252,7 +252,8 @@ func TestExampleObjectSchemas(t *testing.T) {
 		},
 		"../examples/limitrange": {
 			"invalid-pod": &api.Pod{},
-			"limit-range": &api.LimitRange{},
+			"limits":      &api.LimitRange{},
+			"namespace":   &api.Namespace{},
 			"valid-pod":   &api.Pod{},
 		},
 		"../examples/logging-demo": {
diff --git a/examples/limitrange/README.md b/examples/limitrange/README.md
index 34541109bd0..869d00effe5 100644
--- a/examples/limitrange/README.md
+++ b/examples/limitrange/README.md
@@ -1,4 +1,166 @@
-Please refer to this [doc](https://github.com/GoogleCloudPlatform/kubernetes/blob/620af168920b773ade28e27211ad684903a1db21/docs/design/admission_control_limit_range.md#kubectl).
+Limit Range
+========================================
+By default, pods run with unbounded CPU and memory limits. This means that any pod in the
+system is able to consume as much CPU and memory as is available on the node that executes it.
+
+Users may want to impose restrictions on the amount of resources a single pod in the system may consume
+for a variety of reasons.
+
+For example:
+
+1. Each node in the cluster has 2GB of memory. The cluster operator does not want to accept pods
+that require more than 2GB of memory, since no node in the cluster can satisfy the requirement. To prevent a
+pod from being permanently unschedulable, the operator instead chooses to reject pods that exceed 2GB
+of memory as part of admission control.
+2. A cluster is shared by two communities in an organization that run production and development workloads
+respectively. Production workloads may consume up to 8GB of memory, but development workloads may consume up
+to 512MB of memory. The cluster operator creates a separate namespace for each workload and applies limits to
+each namespace.
+3. Users may create a pod which consumes resources just below the capacity of a machine. The leftover space
+may be too small to be useful, but big enough for the waste to be costly over the entire cluster. As a result,
+the cluster operator may want to require that a pod consume at least 20% of the memory and cpu of the
+average node size in order to provide for more uniform scheduling and to limit waste.
+
+This example demonstrates how limits can be applied to a Kubernetes namespace to control
+min/max resource limits per pod. In addition, this example demonstrates how you can
+apply default resource limits to pods in the absence of an end-user specified value.
+
+For a detailed description of the Kubernetes resource model, see [Resources](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/resources.md).
+
+Step 0: Prerequisites
+-----------------------------------------
+This example requires a running Kubernetes cluster. See the [Getting Started guides](../../docs/getting-started-guides) for how to get started.
+
+Change to the `examples/limitrange` directory if you're not already there.
+
+Step 1: Create a namespace
+-----------------------------------------
+This example will work in a custom namespace to demonstrate the concepts involved.
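+
+For reference, the `namespace.yaml` manifest used below is minimal; its full content also
+appears later in this diff:
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: limit-example
+```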
+
+Let's create a new namespace called limit-example:
+
+```shell
+$ kubectl create -f namespace.yaml
+namespaces/limit-example
+$ kubectl get namespaces
+NAME            LABELS    STATUS
+default                   Active
+limit-example             Active
+```
+
+Step 2: Apply a limit to the namespace
+-----------------------------------------
+Let's create a simple limit in our namespace.
+
+```shell
+$ kubectl create -f limits.yaml --namespace=limit-example
+limitranges/mylimits
+```
+
+Let's describe the limits that we have imposed in our namespace.
+
+```shell
+$ kubectl describe limits mylimits --namespace=limit-example
+Name:       mylimits
+Type        Resource    Min     Max     Default
+----        --------    ---     ---     ---
+Pod         memory      6Mi     1Gi     -
+Pod         cpu         250m    2       -
+Container   memory      6Mi     1Gi     100Mi
+Container   cpu         250m    2       250m
+```
+
+In this scenario, we have said the following:
+
+1. The total memory usage of a pod across all of its containers must fall between 6Mi and 1Gi.
+2. The total cpu usage of a pod across all of its containers must fall between 250m and 2 cores.
+3. A container in a pod may consume between 6Mi and 1Gi of memory. If the container does not
+specify an explicit memory limit, it will get the default of 100Mi of memory.
+4. A container in a pod may consume between 250m and 2 cores of cpu. If the container does
+not specify an explicit cpu limit, it will get the default of 250m of cpu.
+
+Step 3: Enforcing limits at point of creation
+-----------------------------------------
+The limits enumerated in a namespace are only enforced when a pod is created or updated in
+the cluster. If you change the limits to a different value range, the change does not affect pods that
+were previously created in the namespace.
+
+If a resource (cpu or memory) is being restricted by a limit, the user will get an error at the time
+of creation explaining why.
+
+Let's first spin up a replication controller that creates a single container pod to demonstrate
+how default values are applied to each pod.
+
+```shell
+$ kubectl run nginx --image=nginx --replicas=1 --namespace=limit-example
+CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR    REPLICAS
+nginx        nginx          nginx      run=nginx   1
+$ kubectl get pods --namespace=limit-example
+POD           IP           CONTAINER(S)   IMAGE(S)   HOST          LABELS      STATUS    CREATED          MESSAGE
+nginx-ykj4j   10.246.1.3                             10.245.1.3/   run=nginx   Running   About a minute
+                           nginx          nginx                                Running   54 seconds
+$ kubectl get pods nginx-ykj4j --namespace=limit-example -o yaml | grep resources -C 5
+  containers:
+  - capabilities: {}
+    image: nginx
+    imagePullPolicy: IfNotPresent
+    name: nginx
+    resources:
+      limits:
+        cpu: 250m
+        memory: 100Mi
+    terminationMessagePath: /dev/termination-log
+    volumeMounts:
+```
+
+Note that our nginx container has picked up the namespace default cpu and memory resource limits.
+
+Let's create a pod that exceeds our allowed limits with a container that requests 3 cpu cores.
+
+```shell
+$ kubectl create -f invalid-pod.yaml --namespace=limit-example
+Error from server: Pod "invalid-pod" is forbidden: Maximum CPU usage per pod is 2, but requested 3
+```
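+
+For reference, `invalid-pod.yaml` (added in full later in this diff) declares a single
+container requesting 3 cores, which exceeds the 2-core pod maximum:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: invalid-pod
+spec:
+  containers:
+  - name: kubernetes-serve-hostname
+    image: gcr.io/google_containers/serve_hostname
+    resources:
+      limits:
+        cpu: "3"
+        memory: 100Mi
+```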
+
+Let's create a pod that falls within the allowed limit boundaries.
+
+```shell
+$ kubectl create -f valid-pod.yaml --namespace=limit-example
+pods/valid-pod
+$ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 5 resources
+  containers:
+  - capabilities: {}
+    image: gcr.io/google_containers/serve_hostname
+    imagePullPolicy: IfNotPresent
+    name: kubernetes-serve-hostname
+    resources:
+      limits:
+        cpu: "1"
+        memory: 512Mi
+    securityContext:
+      capabilities: {}
+```
+
+Note that this pod specifies explicit resource limits, so it did not pick up the namespace default values.
+
+Step 4: Cleanup
+----------------------------
+To remove the resources used by this example, you can just delete the limit-example namespace.
+
+```shell
+$ kubectl delete namespace limit-example
+namespaces/limit-example
+$ kubectl get namespaces
+NAME      LABELS    STATUS
+default             Active
+```
+
+Summary
+----------------------------
+Cluster operators that want to restrict the amount of resources a single container or pod may consume
+are able to define allowable ranges per Kubernetes namespace. In the absence of any explicit limits,
+the Kubernetes system is able to apply default resource limits if desired in order to constrain the
+amount of resources a pod consumes on a node.
+
+[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/limitrange/README.md?pixel)]()
diff --git a/examples/limitrange/invalid-pod.json b/examples/limitrange/invalid-pod.json
deleted file mode 100644
index 1ab85f00738..00000000000
--- a/examples/limitrange/invalid-pod.json
+++ /dev/null
@@ -1,22 +0,0 @@
-{
-  "apiVersion":"v1",
-  "kind": "Pod",
-  "metadata": {
-    "name": "invalid-pod",
-    "labels": {
-      "name": "invalid-pod"
-    }
-  },
-  "spec": {
-    "containers": [{
-      "name": "kubernetes-serve-hostname",
-      "image": "gcr.io/google_containers/serve_hostname",
-      "resources": {
-        "limits": {
-          "cpu": "10m",
-          "memory": "5Mi"
-        }
-      }
-    }]
-  }
-}
diff --git a/examples/limitrange/invalid-pod.yaml b/examples/limitrange/invalid-pod.yaml
new file mode 100644
index 00000000000..b63f25debab
--- /dev/null
+++ b/examples/limitrange/invalid-pod.yaml
@@ -0,0 +1,12 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  name: invalid-pod
+spec:
+  containers:
+  - name: kubernetes-serve-hostname
+    image: gcr.io/google_containers/serve_hostname
+    resources:
+      limits:
+        cpu: "3"
+        memory: 100Mi
diff --git a/examples/limitrange/limit-range.json b/examples/limitrange/limit-range.json
deleted file mode 100644
index 23b5bf8a7c8..00000000000
--- a/examples/limitrange/limit-range.json
+++ /dev/null
@@ -1,37 +0,0 @@
-{
-  "apiVersion": "v1",
-  "kind": "LimitRange",
-  "metadata": {
-    "name": "limits"
-  },
-  "spec": {
-    "limits": [
-      {
-        "type": "Pod",
-        "max": {
-          "memory": "1Gi",
-          "cpu": "2"
-        },
-        "min": {
-          "memory": "6Mi",
-          "cpu": "250m"
-        }
-      },
-      {
-        "type": "Container",
-        "max": {
-          "memory": "1Gi",
-          "cpu": "2"
-        },
-        "min": {
-          "memory": "6Mi",
-          "cpu": "250m"
-        },
-        "default": {
-          "memory": "6Mi",
-          "cpu": "250m"
-        }
-      }
-    ]
-  }
-}
diff --git a/examples/limitrange/limits.yaml b/examples/limitrange/limits.yaml
new file mode 100644
index 00000000000..ebd63dd5af8
--- /dev/null
+++ b/examples/limitrange/limits.yaml
@@ -0,0 +1,23 @@
+apiVersion: v1
+kind: LimitRange
+metadata:
+  name: mylimits
+spec:
+  limits:
+  - max:
+      cpu: "2"
+      memory: 1Gi
+    min:
+      cpu: 250m
+      memory: 6Mi
+    type: Pod
+  - default:
+      cpu: 250m
+      memory: 100Mi
+    max:
+      cpu: "2"
+      memory: 1Gi
+    min:
+      cpu: 250m
+      memory: 6Mi
+    type: Container
diff --git a/examples/limitrange/namespace.yaml b/examples/limitrange/namespace.yaml
new file mode 100644
index 00000000000..200a894b0b5
--- /dev/null
+++ b/examples/limitrange/namespace.yaml
@@ -0,0 +1,4 @@
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: limit-example
diff --git a/examples/limitrange/valid-pod.json b/examples/limitrange/valid-pod.json
deleted file mode 100644
index 4ffcf86fc19..00000000000
--- a/examples/limitrange/valid-pod.json
+++ /dev/null
@@ -1,22 +0,0 @@
-{
-  "apiVersion":"v1",
-  "kind": "Pod",
-  "metadata": {
-    "name": "valid-pod",
-    "labels": {
-      "name": "valid-pod"
-    }
-  },
-  "spec": {
-    "containers": [{
-      "name": "kubernetes-serve-hostname",
-      "image": "gcr.io/google_containers/serve_hostname",
-      "resources": {
-        "limits": {
-          "cpu": "1",
-          "memory": "6Mi"
-        }
-      }
-    }]
-  }
-}
diff --git a/examples/limitrange/valid-pod.yaml b/examples/limitrange/valid-pod.yaml
new file mode 100644
index 00000000000..c1ec54183be
--- /dev/null
+++ b/examples/limitrange/valid-pod.yaml
@@ -0,0 +1,14 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  name: valid-pod
+  labels:
+    name: valid-pod
+spec:
+  containers:
+  - name: kubernetes-serve-hostname
+    image: gcr.io/google_containers/serve_hostname
+    resources:
+      limits:
+        cpu: "1"
+        memory: 512Mi
diff --git a/hack/test-cmd.sh b/hack/test-cmd.sh
index 2fd0a42046f..7d08e78d740 100755
--- a/hack/test-cmd.sh
+++ b/hack/test-cmd.sh
@@ -225,7 +225,7 @@ runTests() {
   # Pre-condition: no POD is running
   kube::test::get_object_assert pods "{{range.items}}{{$id_field}}:{{end}}" ''
   # Command
-  kubectl create "${kube_flags[@]}" -f examples/limitrange/valid-pod.json
+  kubectl create "${kube_flags[@]}" -f examples/limitrange/valid-pod.yaml
   # Post-condition: valid-pod POD is running
   kubectl get "${kube_flags[@]}" pods -o json
   kube::test::get_object_assert pods "{{range.items}}{{$id_field}}:{{end}}" 'valid-pod:'
@@ -258,7 +258,7 @@ runTests() {
   # Pre-condition: valid-pod POD is running
   kube::test::get_object_assert pods "{{range.items}}{{$id_field}}:{{end}}" 'valid-pod:'
   # Command
-  kubectl delete -f examples/limitrange/valid-pod.json "${kube_flags[@]}"
+  kubectl delete -f examples/limitrange/valid-pod.yaml "${kube_flags[@]}"
   # Post-condition: no POD is running
   kube::test::get_object_assert pods "{{range.items}}{{$id_field}}:{{end}}" ''

@@ -266,7 +266,7 @@ runTests() {
   # Pre-condition: no POD is running
   kube::test::get_object_assert pods "{{range.items}}{{$id_field}}:{{end}}" ''
   # Command
-  kubectl create -f examples/limitrange/valid-pod.json "${kube_flags[@]}"
+  kubectl create -f examples/limitrange/valid-pod.yaml "${kube_flags[@]}"
   # Post-condition: valid-pod POD is running
   kube::test::get_object_assert pods "{{range.items}}{{$id_field}}:{{end}}" 'valid-pod:'

@@ -282,7 +282,7 @@ runTests() {
   # Pre-condition: no POD is running
   kube::test::get_object_assert pods "{{range.items}}{{$id_field}}:{{end}}" ''
   # Command
-  kubectl create -f examples/limitrange/valid-pod.json "${kube_flags[@]}"
+  kubectl create -f examples/limitrange/valid-pod.yaml "${kube_flags[@]}"
   # Post-condition: valid-pod POD is running
   kube::test::get_object_assert pods "{{range.items}}{{$id_field}}:{{end}}" 'valid-pod:'

@@ -314,7 +314,7 @@ runTests() {
   # Pre-condition: no POD is running
   kube::test::get_object_assert pods "{{range.items}}{{$id_field}}:{{end}}" ''
   # Command
-  kubectl create -f examples/limitrange/valid-pod.json "${kube_flags[@]}"
+  kubectl create -f examples/limitrange/valid-pod.yaml "${kube_flags[@]}"
   kubectl create -f examples/redis/redis-proxy.yaml "${kube_flags[@]}"
   # Post-condition: valid-pod and redis-proxy PODs are running
"{{range.items}}{{$id_field}}:{{end}}" 'redis-proxy:valid-pod:' @@ -331,7 +331,7 @@ runTests() { # Pre-condition: no POD is running kube::test::get_object_assert pods "{{range.items}}{{$id_field}}:{{end}}" '' # Command - kubectl create -f examples/limitrange/valid-pod.json "${kube_flags[@]}" + kubectl create -f examples/limitrange/valid-pod.yaml "${kube_flags[@]}" kubectl create -f examples/redis/redis-proxy.yaml "${kube_flags[@]}" # Post-condition: valid-pod and redis-proxy PODs are running kube::test::get_object_assert pods "{{range.items}}{{$id_field}}:{{end}}" 'redis-proxy:valid-pod:' @@ -348,7 +348,7 @@ runTests() { # Pre-condition: no POD is running kube::test::get_object_assert pods "{{range.items}}{{$id_field}}:{{end}}" '' # Command - kubectl create -f examples/limitrange/valid-pod.json "${kube_flags[@]}" + kubectl create -f examples/limitrange/valid-pod.yaml "${kube_flags[@]}" # Post-condition: valid-pod POD is running kube::test::get_object_assert pods "{{range.items}}{{$id_field}}:{{end}}" 'valid-pod:' @@ -372,7 +372,7 @@ runTests() { # Pre-condition: no POD is running kube::test::get_object_assert pods "{{range.items}}{{$id_field}}:{{end}}" '' # Command - kubectl create -f examples/limitrange/valid-pod.json "${kube_flags[@]}" + kubectl create -f examples/limitrange/valid-pod.yaml "${kube_flags[@]}" # Post-condition: valid-pod POD is running kube::test::get_object_assert pods "{{range.items}}{{$id_field}}:{{end}}" 'valid-pod:' @@ -438,7 +438,7 @@ runTests() { # Pre-condition: no POD is running kube::test::get_object_assert 'pods --namespace=other' "{{range.items}}{{$id_field}}:{{end}}" '' # Command - kubectl create "${kube_flags[@]}" --namespace=other -f examples/limitrange/valid-pod.json + kubectl create "${kube_flags[@]}" --namespace=other -f examples/limitrange/valid-pod.yaml # Post-condition: valid-pod POD is running kube::test::get_object_assert 'pods --namespace=other' "{{range.items}}{{$id_field}}:{{end}}" 'valid-pod:' @@ -630,7 +630,7 @@ __EOF__ # Post-condition: service exists kube::test::get_object_assert 'service frontend-2' "{{$port_field}}" '443' # Command - kubectl create -f examples/limitrange/valid-pod.json "${kube_flags[@]}" + kubectl create -f examples/limitrange/valid-pod.yaml "${kube_flags[@]}" kubectl expose pod valid-pod --port=444 --name=frontend-3 "${kube_flags[@]}" # Post-condition: service exists kube::test::get_object_assert 'service frontend-3' "{{$port_field}}" '444'