rename resize to scale
@@ -20,9 +20,9 @@ kubectl_logs.md
 kubectl_namespace.md
 kubectl_port-forward.md
 kubectl_proxy.md
-kubectl_resize.md
 kubectl_rolling-update.md
 kubectl_run.md
+kubectl_scale.md
 kubectl_stop.md
 kubectl_update.md
 kubectl_version.md
@@ -189,7 +189,7 @@ These are verbs which change the fundamental type of data returned (watch return

 Two additional verbs `redirect` and `proxy` provide access to cluster resources as described in [accessing-the-cluster.md](accessing-the-cluster.md).

-When resources wish to expose alternative actions that are closely coupled to a single resource, they should do so using new sub-resources. An example is allowing automated processes to update the "status" field of a Pod. The `/pods` endpoint only allows updates to "metadata" and "spec", since those reflect end-user intent. An automated process should be able to modify status for users to see by sending an updated Pod kind to the server to the "/pods/<name>/status" endpoint - the alternate endpoint allows different rules to be applied to the update, and access to be appropriately restricted. Likewise, some actions like "stop" or "resize" are best represented as REST sub-resources that are POSTed to. The POST action may require a simple kind to be provided if the action requires parameters, or function without a request body.
+When resources wish to expose alternative actions that are closely coupled to a single resource, they should do so using new sub-resources. An example is allowing automated processes to update the "status" field of a Pod. The `/pods` endpoint only allows updates to "metadata" and "spec", since those reflect end-user intent. An automated process should be able to modify status for users to see by sending an updated Pod kind to the server to the "/pods/<name>/status" endpoint - the alternate endpoint allows different rules to be applied to the update, and access to be appropriately restricted. Likewise, some actions like "stop" or "scale" are best represented as REST sub-resources that are POSTed to. The POST action may require a simple kind to be provided if the action requires parameters, or function without a request body.

 TODO: more documentation of Watch
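As an illustrative aside on the sub-resource pattern described in the changed paragraph, the following minimal Go sketch shows a client POSTing a simple kind to a scale-style sub-resource. The server address, the endpoint path, and the payload shape here are invented for illustration only and are not the actual API surface introduced by this commit.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical sub-resource endpoint: an action closely coupled to a single
	// resource is exposed as a sub-resource that clients POST to. The path below
	// is an assumption made for this sketch.
	url := "http://localhost:8080/api/v1beta3/replicationcontrollers/my-nginx/scale"

	// A simple kind carrying the action's parameters, as the paragraph above allows.
	body := []byte(`{"kind":"Scale","replicas":3}`)

	resp, err := http.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("server responded with status:", resp.Status)
}
```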
@@ -243,10 +243,10 @@ myNginx nginx name=my-nginx 3

 We did not start any services, hence there are none listed. But we see three replicas displayed properly.
 Check the [guestbook](../../examples/guestbook/README.md) application to learn how to create a service.
-You can already play with resizing the replicas with:
+You can already play with scaling the replicas with:

 ```sh
-$ ./cluster/kubectl.sh resize rc my-nginx --replicas=2
+$ ./cluster/kubectl.sh scale rc my-nginx --replicas=2
 $ ./cluster/kubectl.sh get pods
 NAME IMAGE(S) HOST LABELS STATUS
 7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Running
@@ -88,7 +88,7 @@ redis-slave-controller-gziey 10.2.1.4 slave brendanburns/redis

 ## Scaling

-Two single-core minions are certainly not enough for a production system of today, and, as you can see, there is one _unassigned_ pod. Let's resize the cluster by adding a couple of bigger nodes.
+Two single-core minions are certainly not enough for a production system of today, and, as you can see, there is one _unassigned_ pod. Let's scale the cluster by adding a couple of bigger nodes.

 You will need to open another terminal window on your machine and go to the same working directory (e.g. `~/Workspace/weave-demos/coreos-azure`).
@@ -96,9 +96,9 @@ First, lets set the size of new VMs:
 ```
 export AZ_VM_SIZE=Large
 ```
-Now, run resize script with state file of the previous deployment and number of minions to add:
+Now, run scale script with state file of the previous deployment and number of minions to add:
 ```
-./resize-kubernetes-cluster.js ./output/kubernetes_1c1496016083b4_deployment.yml 2
+./scale-kubernetes-cluster.js ./output/kubernetes_1c1496016083b4_deployment.yml 2
 ...
 azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kubernetes_8f984af944f572_ssh_conf <hostname>`
 azure_wrapper/info: The hosts in this deployment are:
@@ -124,7 +124,7 @@ kube-03 environment=production Ready
 kube-04 environment=production Ready
 ```

-You can see that two more minions joined happily. Let's resize the number of Guestbook instances now.
+You can see that two more minions joined happily. Let's scale the number of Guestbook instances now.

 First, double-check how many replication controllers there are:
@@ -134,12 +134,12 @@ CONTROLLER CONTAINER(S) IMAGE(S)
 frontend-controller php-redis kubernetes/example-guestbook-php-redis name=frontend 3
 redis-slave-controller slave brendanburns/redis-slave name=redisslave 2
 ```
-As there are 4 minions, let's resize proportionally:
+As there are 4 minions, let's scale proportionally:
 ```
-core@kube-00 ~ $ kubectl resize --replicas=4 rc redis-slave-controller
-resized
-core@kube-00 ~ $ kubectl resize --replicas=4 rc frontend-controller
-resized
+core@kube-00 ~ $ kubectl scale --replicas=4 rc redis-slave-controller
+scaled
+core@kube-00 ~ $ kubectl scale --replicas=4 rc frontend-controller
+scaled
 ```
 Check what you have now:
 ```
@@ -182,7 +182,7 @@ If you don't wish care about the Azure bill, you can tear down the cluster. It's
 ./destroy-cluster.js ./output/kubernetes_8f984af944f572_deployment.yml
 ```

-> Note: make sure to use the _latest state file_, as after resizing there is a new one.
+> Note: make sure to use the _latest state file_, as after scaling there is a new one.

 By the way, with the scripts shown, you can deploy multiple clusters, if you like :)
@@ -46,7 +46,7 @@ Note that you will need run this curl command on your boot2docker VM if you are
 Now try to scale up the nginx you created before:

 ```sh
-kubectl resize rc nginx --replicas=3
+kubectl scale rc nginx --replicas=3
 ```

 And list the pods
@@ -219,10 +219,10 @@ myNginx nginx name=my-nginx 3

 We did not start any services, hence there are none listed. But we see three replicas displayed properly.
 Check the [guestbook](../../examples/guestbook/README.md) application to learn how to create a service.
-You can already play with resizing the replicas with:
+You can already play with scaling the replicas with:

 ```sh
-$ ./cluster/kubectl.sh resize rc my-nginx --replicas=2
+$ ./cluster/kubectl.sh scale rc my-nginx --replicas=2
 $ ./cluster/kubectl.sh get pods
 NAME IMAGE(S) HOST LABELS STATUS
 7813c8bd-3ffe-11e4-9036-0800279696e1 nginx 10.245.2.2/10.245.2.2 name=myNginx Running
@@ -58,9 +58,9 @@ kubectl
 * [kubectl namespace](kubectl_namespace.md) - SUPERCEDED: Set and view the current Kubernetes namespace
 * [kubectl port-forward](kubectl_port-forward.md) - Forward one or more local ports to a pod.
 * [kubectl proxy](kubectl_proxy.md) - Run a proxy to the Kubernetes API server
-* [kubectl resize](kubectl_resize.md) - Set a new size for a Replication Controller.
 * [kubectl rolling-update](kubectl_rolling-update.md) - Perform a rolling update of the given ReplicationController.
 * [kubectl run](kubectl_run.md) - Run a particular image on the cluster.
+* [kubectl scale](kubectl_scale.md) - Set a new size for a Replication Controller.
 * [kubectl stop](kubectl_stop.md) - Gracefully shut down a resource by id or filename.
 * [kubectl update](kubectl_update.md) - Update a resource by filename or stdin.
 * [kubectl version](kubectl_version.md) - Print the client and server version information.
@@ -1,4 +1,4 @@
-## kubectl resize
+## kubectl scale

 Set a new size for a Replication Controller.

@@ -7,32 +7,32 @@ Set a new size for a Replication Controller.

 Set a new size for a Replication Controller.

-Resize also allows users to specify one or more preconditions for the resize action.
+Scale also allows users to specify one or more preconditions for the scale action.
 If --current-replicas or --resource-version is specified, it is validated before the
-resize is attempted, and it is guaranteed that the precondition holds true when the
-resize is sent to the server.
+scale is attempted, and it is guaranteed that the precondition holds true when the
+scale is sent to the server.

 ```
-kubectl resize [--resource-version=version] [--current-replicas=count] --replicas=COUNT RESOURCE ID
+kubectl scale [--resource-version=version] [--current-replicas=count] --replicas=COUNT RESOURCE ID
 ```

 ### Examples

 ```
-// Resize replication controller named 'foo' to 3.
-$ kubectl resize --replicas=3 replicationcontrollers foo
+// Scale replication controller named 'foo' to 3.
+$ kubectl scale --replicas=3 replicationcontrollers foo

-// If the replication controller named foo's current size is 2, resize foo to 3.
-$ kubectl resize --current-replicas=2 --replicas=3 replicationcontrollers foo
+// If the replication controller named foo's current size is 2, scale foo to 3.
+$ kubectl scale --current-replicas=2 --replicas=3 replicationcontrollers foo
 ```

 ### Options

 ```
---current-replicas=-1: Precondition for current size. Requires that the current size of the replication controller match this value in order to resize.
--h, --help=false: help for resize
+--current-replicas=-1: Precondition for current size. Requires that the current size of the replication controller match this value in order to scale.
+-h, --help=false: help for scale
 --replicas=-1: The new desired number of replicas. Required.
---resource-version="": Precondition for resource version. Requires that the current resource version match this value in order to resize.
+--resource-version="": Precondition for resource version. Requires that the current resource version match this value in order to scale.
 ```

 ### Options inherited from parent commands
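As a rough sketch of the precondition semantics documented in the hunk above (a supplied --current-replicas or --resource-version is validated before the scale is sent to the server), the following Go snippet shows the kind of check a client could perform. The `rcState` type and the helper function are invented for illustration and are not part of this commit or of kubectl's implementation.

```go
package main

import "fmt"

// rcState is a hypothetical, minimal view of a replication controller.
type rcState struct {
	Name            string
	Replicas        int
	ResourceVersion string
}

// scaleWithPreconditions mirrors the documented behaviour: when a current-replicas
// or resource-version precondition is supplied it is validated first, so the new
// size is only applied if the precondition holds.
func scaleWithPreconditions(rc *rcState, newReplicas, currentReplicas int, resourceVersion string) error {
	if currentReplicas >= 0 && rc.Replicas != currentReplicas {
		return fmt.Errorf("precondition failed: current replicas %d, expected %d", rc.Replicas, currentReplicas)
	}
	if resourceVersion != "" && rc.ResourceVersion != resourceVersion {
		return fmt.Errorf("precondition failed: resource version %q, expected %q", rc.ResourceVersion, resourceVersion)
	}
	rc.Replicas = newReplicas // in the real command this update is sent to the server
	return nil
}

func main() {
	rc := &rcState{Name: "foo", Replicas: 2, ResourceVersion: "42"}
	// Equivalent in spirit to: kubectl scale --current-replicas=2 --replicas=3 replicationcontrollers foo
	if err := scaleWithPreconditions(rc, 3, 2, ""); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("foo scaled to", rc.Replicas)
}
```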
@@ -69,4 +69,4 @@ $ kubectl resize --current-replicas=2 --replicas=3 replicationcontrollers foo

 ###### Auto generated by spf13/cobra at 2015-05-21 10:33:11.185268791 +0000 UTC

 []()
@@ -8,7 +8,7 @@ Gracefully shut down a resource by id or filename.
 Gracefully shut down a resource by id or filename.

 Attempts to shut down and delete a resource that supports graceful termination.
-If the resource is resizable it will be resized to 0 before deletion.
+If the resource is scalable it will be scaled to 0 before deletion.

 ```
 kubectl stop (-f FILENAME | RESOURCE (ID | -l label | --all))
|
@@ -19,9 +19,9 @@ kubectl-logs.1
|
||||
kubectl-namespace.1
|
||||
kubectl-port-forward.1
|
||||
kubectl-proxy.1
|
||||
kubectl-resize.1
|
||||
kubectl-rolling-update.1
|
||||
kubectl-run.1
|
||||
kubectl-scale.1
|
||||
kubectl-stop.1
|
||||
kubectl-update.1
|
||||
kubectl-version.1
|
||||
|
@@ -3,12 +3,12 @@

 .SH NAME
 .PP
-kubectl resize \- Set a new size for a Replication Controller.
+kubectl scale \- Set a new size for a Replication Controller.


 .SH SYNOPSIS
 .PP
-\fBkubectl resize\fP [OPTIONS]
+\fBkubectl scale\fP [OPTIONS]


 .SH DESCRIPTION
@@ -16,20 +16,20 @@ kubectl resize \- Set a new size for a Replication Controller.
 Set a new size for a Replication Controller.

 .PP
-Resize also allows users to specify one or more preconditions for the resize action.
+Scale also allows users to specify one or more preconditions for the scale action.
 If \-\-current\-replicas or \-\-resource\-version is specified, it is validated before the
-resize is attempted, and it is guaranteed that the precondition holds true when the
-resize is sent to the server.
+scale is attempted, and it is guaranteed that the precondition holds true when the
+scale is sent to the server.


 .SH OPTIONS
 .PP
 \fB\-\-current\-replicas\fP=\-1
-Precondition for current size. Requires that the current size of the replication controller match this value in order to resize.
+Precondition for current size. Requires that the current size of the replication controller match this value in order to scale.

 .PP
 \fB\-h\fP, \fB\-\-help\fP=false
-help for resize
+help for scale

 .PP
 \fB\-\-replicas\fP=\-1
@@ -37,7 +37,7 @@ resize is sent to the server.

 .PP
 \fB\-\-resource\-version\fP=""
-Precondition for resource version. Requires that the current resource version match this value in order to resize.
+Precondition for resource version. Requires that the current resource version match this value in order to scale.


 .SH OPTIONS INHERITED FROM PARENT COMMANDS
@@ -143,11 +143,11 @@ resize is sent to the server.
 .RS

 .nf
-// Resize replication controller named 'foo' to 3.
-$ kubectl resize \-\-replicas=3 replicationcontrollers foo
+// Scale replication controller named 'foo' to 3.
+$ kubectl scale \-\-replicas=3 replicationcontrollers foo

-// If the replication controller named foo's current size is 2, resize foo to 3.
-$ kubectl resize \-\-current\-replicas=2 \-\-replicas=3 replicationcontrollers foo
+// If the replication controller named foo's current size is 2, scale foo to 3.
+$ kubectl scale \-\-current\-replicas=2 \-\-replicas=3 replicationcontrollers foo

 .fi
 .RE
@@ -17,7 +17,7 @@ Gracefully shut down a resource by id or filename.

 .PP
 Attempts to shut down and delete a resource that supports graceful termination.
-If the resource is resizable it will be resized to 0 before deletion.
+If the resource is scalable it will be scaled to 0 before deletion.


 .SH OPTIONS
@@ -124,7 +124,7 @@ Find more information at

 .SH SEE ALSO
 .PP
-\fBkubectl\-get(1)\fP, \fBkubectl\-describe(1)\fP, \fBkubectl\-create(1)\fP, \fBkubectl\-update(1)\fP, \fBkubectl\-delete(1)\fP, \fBkubectl\-namespace(1)\fP, \fBkubectl\-logs(1)\fP, \fBkubectl\-rolling\-update(1)\fP, \fBkubectl\-resize(1)\fP, \fBkubectl\-exec(1)\fP, \fBkubectl\-port\-forward(1)\fP, \fBkubectl\-proxy(1)\fP, \fBkubectl\-run(1)\fP, \fBkubectl\-stop(1)\fP, \fBkubectl\-expose(1)\fP, \fBkubectl\-label(1)\fP, \fBkubectl\-config(1)\fP, \fBkubectl\-cluster\-info(1)\fP, \fBkubectl\-api\-versions(1)\fP, \fBkubectl\-version(1)\fP,
+\fBkubectl\-get(1)\fP, \fBkubectl\-describe(1)\fP, \fBkubectl\-create(1)\fP, \fBkubectl\-update(1)\fP, \fBkubectl\-delete(1)\fP, \fBkubectl\-namespace(1)\fP, \fBkubectl\-logs(1)\fP, \fBkubectl\-rolling\-update(1)\fP, \fBkubectl\-scale(1)\fP, \fBkubectl\-exec(1)\fP, \fBkubectl\-port\-forward(1)\fP, \fBkubectl\-proxy(1)\fP, \fBkubectl\-run(1)\fP, \fBkubectl\-stop(1)\fP, \fBkubectl\-expose(1)\fP, \fBkubectl\-label(1)\fP, \fBkubectl\-config(1)\fP, \fBkubectl\-cluster\-info(1)\fP, \fBkubectl\-api\-versions(1)\fP, \fBkubectl\-version(1)\fP,


 .SH HISTORY
@@ -1,18 +1,18 @@
 ## Abstract
 Auto-scaling is a data-driven feature that allows users to increase or decrease capacity as needed by controlling the
 number of pods deployed within the system automatically.

 ## Motivation

 Applications experience peaks and valleys in usage. In order to respond to increases and decreases in load, administrators
 scale their applications by adding computing resources. In the cloud computing environment this can be
 done automatically based on statistical analysis and thresholds.

 ### Goals

 * Provide a concrete proposal for implementing auto-scaling pods within Kubernetes
 * Implementation proposal should be in line with current discussions in existing issues:
-  * Resize verb - [1629](https://github.com/GoogleCloudPlatform/kubernetes/issues/1629)
+  * Scale verb - [1629](https://github.com/GoogleCloudPlatform/kubernetes/issues/1629)
   * Config conflicts - [Config](https://github.com/GoogleCloudPlatform/kubernetes/blob/c7cb991987193d4ca33544137a5cb7d0292cf7df/docs/config.md#automated-re-configuration-processes)
   * Rolling updates - [1353](https://github.com/GoogleCloudPlatform/kubernetes/issues/1353)
   * Multiple scalable types - [1624](https://github.com/GoogleCloudPlatform/kubernetes/issues/1624)
@@ -20,45 +20,45 @@ done automatically based on statistical analysis and thresholds.
 ## Constraints and Assumptions

 * This proposal is for horizontal scaling only. Vertical scaling will be handled in [issue 2072](https://github.com/GoogleCloudPlatform/kubernetes/issues/2072)
 * `ReplicationControllers` will not know about the auto-scaler, they are the target of the auto-scaler. The `ReplicationController` responsibilities are
   constrained to only ensuring that the desired number of pods are operational per the [Replication Controller Design](http://docs.k8s.io/replication-controller.md#responsibilities-of-the-replication-controller)
 * Auto-scalers will be loosely coupled with data gathering components in order to allow a wide variety of input sources
-* Auto-scalable resources will support a resize verb ([1629](https://github.com/GoogleCloudPlatform/kubernetes/issues/1629))
+* Auto-scalable resources will support a scale verb ([1629](https://github.com/GoogleCloudPlatform/kubernetes/issues/1629))
   such that the auto-scaler does not directly manipulate the underlying resource.
 * Initially, most thresholds will be set by application administrators. It should be possible for an autoscaler to be
   written later that sets thresholds automatically based on past behavior (CPU used vs incoming requests).
 * The auto-scaler must be aware of user defined actions so it does not override them unintentionally (for instance someone
   explicitly setting the replica count to 0 should mean that the auto-scaler does not try to scale the application up)
 * It should be possible to write and deploy a custom auto-scaler without modifying existing auto-scalers
 * Auto-scalers must be able to monitor multiple replication controllers while only targeting a single scalable
-  object (for now a ReplicationController, but in the future it could be a job or any resource that implements resize)
+  object (for now a ReplicationController, but in the future it could be a job or any resource that implements scale)

 ## Use Cases

 ### Scaling based on traffic

 The current, most obvious, use case is scaling an application based on network traffic like requests per second. Most
 applications will expose one or more network endpoints for clients to connect to. Many of those endpoints will be load
 balanced or situated behind a proxy - the data from those proxies and load balancers can be used to estimate client to
 server traffic for applications. This is the primary, but not sole, source of data for making decisions.

 Within Kubernetes a [kube proxy](http://docs.k8s.io/services.md#ips-and-portals)
 running on each node directs service requests to the underlying implementation.

 While the proxy provides internal inter-pod connections, there will be L3 and L7 proxies and load balancers that manage
 traffic to backends. OpenShift, for instance, adds a "route" resource for defining external to internal traffic flow.
 The "routers" are HAProxy or Apache load balancers that aggregate many different services and pods and can serve as a
 data source for the number of backends.

 ### Scaling based on predictive analysis

 Scaling may also occur based on predictions of system state like anticipated load, historical data, etc. Hand in hand
 with scaling based on traffic, predictive analysis may be used to determine anticipated system load and scale the application automatically.

 ### Scaling based on arbitrary data

 Administrators may wish to scale the application based on any number of arbitrary data points such as job execution time or
 duration of active sessions. There are any number of reasons an administrator may wish to increase or decrease capacity which
 means the auto-scaler must be a configurable, extensible component.

 ## Specification
@@ -68,23 +68,23 @@ In order to facilitate talking about auto-scaling the following definitions are
 * `ReplicationController` - the first building block of auto scaling. Pods are deployed and scaled by a `ReplicationController`.
 * kube proxy - The proxy handles internal inter-pod traffic, an example of a data source to drive an auto-scaler
 * L3/L7 proxies - A routing layer handling outside to inside traffic requests, an example of a data source to drive an auto-scaler
-* auto-scaler - scales replicas up and down by using the `resize` endpoint provided by scalable resources (`ReplicationController`)
+* auto-scaler - scales replicas up and down by using the `scale` endpoint provided by scalable resources (`ReplicationController`)


 ### Auto-Scaler

 The Auto-Scaler is a state reconciler responsible for checking data against configured scaling thresholds
-and calling the `resize` endpoint to change the number of replicas. The scaler will
+and calling the `scale` endpoint to change the number of replicas. The scaler will
 use a client/cache implementation to receive watch data from the data aggregators and respond to them by
 scaling the application. Auto-scalers are created and defined like other resources via REST endpoints and belong to the
 namespace just as a `ReplicationController` or `Service`.

 Since an auto-scaler is a durable object it is best represented as a resource.

 ```go
 //The auto scaler interface
 type AutoScalerInterface interface {
-    //ScaleApplication adjusts a resource's replica count. Calls resize endpoint.
+    //ScaleApplication adjusts a resource's replica count. Calls scale endpoint.
     //Args to this are based on what the endpoint
     //can support. See https://github.com/GoogleCloudPlatform/kubernetes/issues/1629
     ScaleApplication(num int) error
@@ -95,162 +95,162 @@ Since an auto-scaler is a durable object it is best represented as a resource.
     TypeMeta
     //common construct
     ObjectMeta

     //Spec defines the configuration options that drive the behavior for this auto-scaler
     Spec AutoScalerSpec

     //Status defines the current status of this auto-scaler.
     Status AutoScalerStatus
 }

 type AutoScalerSpec struct {
     //AutoScaleThresholds holds a collection of AutoScaleThresholds that drive the auto scaler
     AutoScaleThresholds []AutoScaleThreshold

     //Enabled turns auto scaling on or off
     Enabled boolean

     //MaxAutoScaleCount defines the max replicas that the auto scaler can use.
     //This value must be greater than 0 and >= MinAutoScaleCount
     MaxAutoScaleCount int

     //MinAutoScaleCount defines the minimum number replicas that the auto scaler can reduce to,
     //0 means that the application is allowed to idle
     MinAutoScaleCount int

-    //TargetSelector provides the resizeable target(s). Right now this is a ReplicationController
-    //in the future it could be a job or any resource that implements resize.
+    //TargetSelector provides the scalable target(s). Right now this is a ReplicationController
+    //in the future it could be a job or any resource that implements scale.
     TargetSelector map[string]string

     //MonitorSelector defines a set of capacity that the auto-scaler is monitoring
     //(replication controllers). Monitored objects are used by thresholds to examine
     //statistics. Example: get statistic X for object Y to see if threshold is passed
     MonitorSelector map[string]string
 }

 type AutoScalerStatus struct {
     // TODO: open for discussion on what meaningful information can be reported in the status
     // The status may return the replica count here but we may want more information
     // such as if the count reflects a threshold being passed
 }


 //AutoScaleThresholdInterface abstracts the data analysis from the auto-scaler
 //example: scale by 1 (Increment) when RequestsPerSecond (Type) pass
 //comparison (Comparison) of 50 (Value) for 30 seconds (Duration)
 type AutoScaleThresholdInterface interface {
     //called by the auto-scaler to determine if this threshold is met or not
     ShouldScale() boolean
 }


 //AutoScaleThreshold is a single statistic used to drive the auto-scaler in scaling decisions
 type AutoScaleThreshold struct {
     // Type is the type of threshold being used, intention or value
     Type AutoScaleThresholdType

     // ValueConfig holds the config for value based thresholds
     ValueConfig AutoScaleValueThresholdConfig

     // IntentionConfig holds the config for intention based thresholds
     IntentionConfig AutoScaleIntentionThresholdConfig
 }

 // AutoScaleIntentionThresholdConfig holds configuration for intention based thresholds
 // an intention based threshold defines no increment, the scaler will adjust by 1 accordingly
 // and maintain once the intention is reached. Also, no selector is defined, the intention
 // should dictate the selector used for statistics. Same for duration although we
 // may want a configurable duration later so intentions are more customizable.
 type AutoScaleIntentionThresholdConfig struct {
     // Intent is the lexicon of what intention is requested
     Intent AutoScaleIntentionType

     // Value is intention dependent in terms of above, below, equal and represents
     // the value to check against
     Value float
 }

 // AutoScaleValueThresholdConfig holds configuration for value based thresholds
 type AutoScaleValueThresholdConfig struct {
     //Increment determines how the auto-scaler should scale up or down (positive number to
     //scale up based on this threshold negative number to scale down by this threshold)
     Increment int
     //Selector represents the retrieval mechanism for a statistic value from statistics
     //storage. Once statistics are better defined the retrieval mechanism may change.
     //Ultimately, the selector returns a representation of a statistic that can be
     //compared against the threshold value.
     Selector map[string]string
     //Duration is the time lapse after which this threshold is considered passed
     Duration time.Duration
     //Value is the number at which, after the duration is passed, this threshold is considered
     //to be triggered
     Value float
     //Comparison component to be applied to the value.
     Comparison string
 }

 // AutoScaleThresholdType is either intention based or value based
 type AutoScaleThresholdType string

 // AutoScaleIntentionType is a lexicon for intentions such as "cpu-utilization",
 // "max-rps-per-endpoint"
 type AutoScaleIntentionType string
 ```
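To make the control flow implied by these interfaces concrete, here is a small hypothetical sketch of a reconciliation loop that evaluates configured thresholds and calls the scale endpoint. The `scalable` and `threshold` interfaces, the fake replication controller, and the clamping against min/max counts are assumptions made purely for illustration; they are not part of the proposal's API.

```go
package main

import "fmt"

// scalable is a hypothetical stand-in for anything exposing the scale endpoint
// (today a ReplicationController, later possibly a job or other resource).
type scalable interface {
	Replicas() int
	ScaleApplication(num int) error
}

// threshold is a hypothetical value-based threshold: ShouldScale reports whether
// the observed statistic has passed the configured value, and Increment says by
// how much to adjust (negative to scale down), mirroring AutoScaleThreshold above.
type threshold interface {
	ShouldScale() bool
	Increment() int
}

// reconcile performs one pass of the auto-scaler: check each threshold and, if it
// fires, call the scale endpoint with the adjusted count clamped to [min, max].
func reconcile(target scalable, thresholds []threshold, min, max int) error {
	for _, t := range thresholds {
		if !t.ShouldScale() {
			continue
		}
		desired := target.Replicas() + t.Increment()
		if desired < min {
			desired = min
		}
		if desired > max {
			desired = max
		}
		if desired == target.Replicas() {
			continue
		}
		if err := target.ScaleApplication(desired); err != nil {
			return fmt.Errorf("scaling to %d replicas: %v", desired, err)
		}
	}
	return nil
}

// fakeRC is a toy scalable used only to show the loop in action.
type fakeRC struct{ replicas int }

func (f *fakeRC) Replicas() int                { return f.replicas }
func (f *fakeRC) ScaleApplication(n int) error { f.replicas = n; return nil }

// rpsThreshold fires when requests-per-second exceed a value, as in the example
// comment on AutoScaleThresholdInterface ("scale by 1 when RequestsPerSecond pass 50").
type rpsThreshold struct {
	rps, value float64
	increment  int
}

func (r rpsThreshold) ShouldScale() bool { return r.rps > r.value }
func (r rpsThreshold) Increment() int    { return r.increment }

func main() {
	rc := &fakeRC{replicas: 3}
	up := rpsThreshold{rps: 72, value: 50, increment: 1}
	if err := reconcile(rc, []threshold{up}, 0, 10); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("replicas now:", rc.replicas)
}
```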

 #### Boundary Definitions
 The `AutoScaleThreshold` definitions provide the boundaries for the auto-scaler. By defining comparisons that form a range
 along with positive and negative increments you may define bi-directional scaling. For example the upper bound may be
 specified as "when requests per second rise above 50 for 30 seconds scale the application up by 1" and a lower bound may
 be specified as "when requests per second fall below 25 for 30 seconds scale the application down by 1 (implemented by using -1)".
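A sketch of how the bi-directional boundaries just described might be expressed with the value-based threshold configuration from the specification. The selector keys and literal field values are illustrative assumptions, and the proposal's pseudo-types (`float`, `boolean`) are rendered here with concrete Go types so the snippet compiles.

```go
package main

import (
	"fmt"
	"time"
)

// Minimal local mirror of the proposal's value-based threshold config; the real
// proposal uses pseudo-types like float, replaced here with float64.
type AutoScaleValueThresholdConfig struct {
	Increment  int               // positive scales up, negative scales down
	Selector   map[string]string // how the statistic is retrieved; keys are illustrative only
	Duration   time.Duration     // how long the condition must hold
	Value      float64           // the statistic value being compared against
	Comparison string            // comparison operator applied to the value
}

func main() {
	// Upper bound: "when requests per second rise above 50 for 30 seconds
	// scale the application up by 1".
	upper := AutoScaleValueThresholdConfig{
		Increment:  1,
		Selector:   map[string]string{"statistic": "requests-per-second"},
		Duration:   30 * time.Second,
		Value:      50,
		Comparison: ">",
	}

	// Lower bound: "when requests per second fall below 25 for 30 seconds
	// scale the application down by 1" (a negative increment).
	lower := AutoScaleValueThresholdConfig{
		Increment:  -1,
		Selector:   map[string]string{"statistic": "requests-per-second"},
		Duration:   30 * time.Second,
		Value:      25,
		Comparison: "<",
	}

	fmt.Printf("upper bound: %+v\n", upper)
	fmt.Printf("lower bound: %+v\n", lower)
}
```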

 ### Data Aggregator

 This section has intentionally been left empty. I will defer to folks who have more experience gathering and analyzing
 time series statistics.

 Data aggregation is opaque to the auto-scaler resource. The auto-scaler is configured to use `AutoScaleThresholds`
 that know how to work with the underlying data in order to know if an application must be scaled up or down. Data aggregation
 must feed a common data structure to ease the development of `AutoScaleThreshold`s but it does not matter to the
 auto-scaler whether this occurs in a push or pull implementation, whether or not the data is stored at a granular level,
 or what algorithm is used to determine the final statistics value. Ultimately, the auto-scaler only requires that a statistic
 resolves to a value that can be checked against a configured threshold.

 Of note: If the statistics gathering mechanisms can be initialized with a registry other components storing statistics can
 potentially piggyback on this registry.

 ### Multi-target Scaling Policy
-If multiple resizable targets satisfy the `TargetSelector` criteria the auto-scaler should be configurable as to which
-target(s) are resized. To begin with, if multiple targets are found the auto-scaler will scale the largest target up
+If multiple scalable targets satisfy the `TargetSelector` criteria the auto-scaler should be configurable as to which
+target(s) are scaled. To begin with, if multiple targets are found the auto-scaler will scale the largest target up
 or down as appropriate. In the future this may be more configurable.

 ### Interactions with a deployment

 In a deployment it is likely that multiple replication controllers must be monitored. For instance, in a [rolling deployment](http://docs.k8s.io/replication-controller.md#rolling-updates)
 there will be multiple replication controllers, with one scaling up and another scaling down. This means that an
 auto-scaler must be aware of the entire set of capacity that backs a service so it does not fight with the deployer. `AutoScalerSpec.MonitorSelector`
 is what provides this ability. By using a selector that spans the entire service the auto-scaler can monitor capacity
 of multiple replication controllers and check that capacity against the `AutoScalerSpec.MaxAutoScaleCount` and
 `AutoScalerSpec.MinAutoScaleCount` while still only targeting a specific set of `ReplicationController`s with `TargetSelector`.

 In the course of a deployment it is up to the deployment orchestration to decide how to manage the labels
 on the replication controllers if it needs to ensure that only specific replication controllers are targeted by
 the auto-scaler. By default, the auto-scaler will scale the largest replication controller that meets the target label
 selector criteria.

 During deployment orchestration the auto-scaler may be making decisions to scale its target up or down. In order to prevent
 the scaler from fighting with a deployment process that is scaling one replication controller up and scaling another one
 down the deployment process must assume that the current replica count may be changed by objects other than itself and
 account for this in the scale up or down process. Therefore, the deployment process may no longer target an exact number
 of instances to be deployed. It must be satisfied that the replica count for the deployment meets or exceeds the number
 of requested instances.

 Auto-scaling down in a deployment scenario is a special case. In order for the deployment to complete successfully the
 deployment orchestration must ensure that the desired number of instances that are supposed to be deployed has been met.
 If the auto-scaler is trying to scale the application down (due to no traffic, or other statistics) then the deployment
 process and auto-scaler are fighting to increase and decrease the count of the targeted replication controller. In order
 to prevent this, deployment orchestration should notify the auto-scaler that a deployment is occurring. This will
 temporarily disable negative decrement thresholds until the deployment process is completed. It is more important for
 an auto-scaler to be able to grow capacity during a deployment than to shrink the number of instances precisely.
@@ -36,7 +36,7 @@ The replication controller simply ensures that the desired number of pods matche

 The replication controller is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](https://github.com/GoogleCloudPlatform/kubernetes/issues/492)), which would change its `replicas` field. We will not add scheduling policies (e.g., [spreading](https://github.com/GoogleCloudPlatform/kubernetes/issues/367#issuecomment-48428019)) to replication controller. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](https://github.com/GoogleCloudPlatform/kubernetes/issues/170)).

-The replication controller is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, stop, resize, rolling-update) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing replication controllers, auto-scalers, services, scheduling policies, canaries, etc.
+The replication controller is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, stop, scale, rolling-update) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing replication controllers, auto-scalers, services, scheduling policies, canaries, etc.

 ## Common usage patterns
@@ -52,7 +52,7 @@ Replication controller makes it easy to scale the number of replicas up or down,

 Replication controller is designed to facilitate rolling updates to a service by replacing pods one-by-one.

-As explained in [#1353](https://github.com/GoogleCloudPlatform/kubernetes/issues/1353), the recommended approach is to create a new replication controller with 1 replica, resize the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures.
+As explained in [#1353](https://github.com/GoogleCloudPlatform/kubernetes/issues/1353), the recommended approach is to create a new replication controller with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures.

 Ideally, the rolling update controller would take application readiness into account, and would ensure that a sufficient number of pods were productively serving at any given time.
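A hypothetical sketch of the +1/-1 rolling-update loop recommended in the hunk above. The `controller` type and the `scale` helper stand in for the corresponding kubectl operations and are invented here purely for illustration.

```go
package main

import "fmt"

// controller is a toy model of a replication controller's desired replica count.
type controller struct {
	name     string
	replicas int
}

// scale stands in for "kubectl scale --replicas=N rc <name>".
func scale(c *controller, replicas int) { c.replicas = replicas }

// rollingUpdate follows the recommended approach: start the new controller at 1
// replica, then scale the new one up (+1) and the old one down (-1) step by step,
// and finally delete the old controller once it reaches 0 replicas.
func rollingUpdate(oldRC, newRC *controller, desired int) {
	scale(newRC, 1) // create the new controller with 1 replica
	for newRC.replicas < desired || oldRC.replicas > 0 {
		if newRC.replicas < desired {
			scale(newRC, newRC.replicas+1) // +1 on the new controller
		}
		if oldRC.replicas > 0 {
			scale(oldRC, oldRC.replicas-1) // -1 on the old controller
		}
	}
	// the old controller has reached 0 replicas and can now be deleted
	fmt.Printf("deleting %s; %s now has %d replicas\n", oldRC.name, newRC.name, newRC.replicas)
}

func main() {
	rollingUpdate(&controller{name: "frontend-v1", replicas: 3}, &controller{name: "frontend-v2"}, 3)
}
```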