Merge pull request #8602 from caesarxuchao/rC-to-rc

in docs, update replicationController to replicationcontroller
Dawn Chen 2015-05-21 09:49:35 -07:00
commit fdb44a9ad2
14 changed files with 26 additions and 26 deletions


@@ -41,7 +41,7 @@ The scheduler binds unscheduled pods to nodes via the `/binding` API. The schedu
All other cluster-level functions are currently performed by the Controller Manager. For instance, `Endpoints` objects are created and updated by the endpoints controller, and nodes are discovered, managed, and monitored by the node controller. These could eventually be split into separate components to make them independently pluggable.
- The [`replicationController`](../replication-controller.md) is a mechanism that is layered on top of the simple [`pod`](../pods.md) API. We eventually plan to port it to a generic plug-in mechanism, once one is implemented.
+ The [`replicationcontroller`](../replication-controller.md) is a mechanism that is layered on top of the simple [`pod`](../pods.md) API. We eventually plan to port it to a generic plug-in mechanism, once one is implemented.
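
To make that layering concrete, here is a hedged sketch of the replication logic written against the public pod API; the label selector, desired count, manifest file, and `-l` flag support are assumptions for illustration, not the controller manager's actual implementation:

```sh
# Illustrative only: a naive version of the replication algorithm, driven
# entirely through the public pod API via kubectl.
DESIRED=3
while true; do
  # Count pods that match the controller's label selector.
  OBSERVED=$(kubectl get pods -l name=my-nginx | grep -c my-nginx)
  if [ "$OBSERVED" -lt "$DESIRED" ]; then
    kubectl create -f pod.json   # pod.json is a hypothetical pod manifest
  fi
  sleep 5
done
```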


@@ -184,7 +184,7 @@ NAME IMAGE(S) HOST LABELS STATUS
$ cluster/kubectl.sh get services
NAME LABELS SELECTOR IP PORT
- $ cluster/kubectl.sh get replicationControllers
+ $ cluster/kubectl.sh get replicationcontrollers
NAME IMAGE(S) SELECTOR REPLICAS
```
@@ -224,7 +224,7 @@ kubernetes-minion-1:
fa0e29c94501 kubernetes/pause:latest "/pause" 8 minutes ago Up 8 minutes 0.0.0.0:8080->80/tcp k8s--net.a90e7ce4--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1.etcd--7813c8bd_-_3ffe_-_11e4_-_9036_-_0800279696e1--baf5b21b
```
- Going back to listing the pods, services and replicationControllers, you now have:
+ Going back to listing the pods, services and replicationcontrollers, you now have:
```
$ cluster/kubectl.sh get pods
@@ -236,7 +236,7 @@ NAME IMAGE(S) HOST
$ cluster/kubectl.sh get services
NAME LABELS SELECTOR IP PORT
- $ cluster/kubectl.sh get replicationControllers
+ $ cluster/kubectl.sh get replicationcontrollers
NAME IMAGE(S) SELECTOR REPLICAS
myNginx nginx name=my-nginx 3
```
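
Since the controller continuously reconciles the observed pod count against the desired one, deleting one replica is a quick way to watch it heal; the pod name below is a placeholder for one of the names in your `get pods` output:

```sh
# Delete one of the three replicas; the replication controller notices the
# shortfall and starts a replacement (pod name is hypothetical).
cluster/kubectl.sh delete pod my-nginx-abc12
cluster/kubectl.sh get pods   # a fresh replica appears shortly
```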


@@ -89,7 +89,7 @@ No pods will be available before starting a container:
kubectl get pods
POD CONTAINER(S) IMAGE(S) HOST LABELS STATUS
- kubectl get replicationControllers
+ kubectl get replicationcontrollers
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
We'll follow the aws-coreos example. Create a pod manifest: `pod.json`
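
The example's actual manifest lives in the aws-coreos guide; as a minimal stand-in, here is a sketch assuming the v1beta3-era schema (field names and the nginx image are assumptions, not the guide's contents):

```sh
# Write a minimal pod manifest (v1beta3-era schema assumed) and create it.
cat > pod.json <<'EOF'
{
  "kind": "Pod",
  "apiVersion": "v1beta3",
  "metadata": { "name": "my-nginx", "labels": { "name": "my-nginx" } },
  "spec": {
    "containers": [
      { "name": "nginx", "image": "nginx", "ports": [{ "containerPort": 80 }] }
    ]
  }
}
EOF
kubectl create -f pod.json
```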


@@ -46,7 +46,7 @@ You can now use any of the cluster/kubectl.sh commands to interact with your loc
```
cluster/kubectl.sh get pods
cluster/kubectl.sh get services
- cluster/kubectl.sh get replicationControllers
+ cluster/kubectl.sh get replicationcontrollers
cluster/kubectl.sh run-container my-nginx --image=nginx --replicas=2 --port=80
@@ -61,7 +61,7 @@ cluster/kubectl.sh run-container my-nginx --image=nginx --replicas=2 --port=80
## introspect kubernetes!
cluster/kubectl.sh get pods
cluster/kubectl.sh get services
- cluster/kubectl.sh get replicationControllers
+ cluster/kubectl.sh get replicationcontrollers
```
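
`run-container` above created a replication controller named `my-nginx`; a natural next step is to change its replica count. A hedged sketch, assuming this era's `resize` subcommand (later superseded by `scale`):

```sh
# Grow the controller created by run-container from 2 to 3 replicas.
cluster/kubectl.sh resize --replicas=3 replicationcontrollers my-nginx
cluster/kubectl.sh get pods   # a third my-nginx pod appears
```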


@@ -157,7 +157,7 @@ NAME IMAGE(S) HOST LABELS STATUS
$ ./cluster/kubectl.sh get services
NAME LABELS SELECTOR IP PORT
- $ ./cluster/kubectl.sh get replicationControllers
+ $ ./cluster/kubectl.sh get replicationcontrollers
NAME IMAGE(S) SELECTOR REPLICAS
```
@@ -200,7 +200,7 @@ kubernetes-minion-1:
65a3a926f357 kubernetes/pause:latest "/pause" 39 minutes ago Up 39 minutes 0.0.0.0:4194->8080/tcp k8s--net.c5ba7f0e--cadvisor_-_agent.file--342fd561
```
- Going back to listing the pods, services and replicationControllers, you now have:
+ Going back to listing the pods, services and replicationcontrollers, you now have:
```sh
$ ./cluster/kubectl.sh get pods
@@ -212,7 +212,7 @@ NAME IMAGE(S) HOST
$ ./cluster/kubectl.sh get services
NAME LABELS SELECTOR IP PORT
- $ ./cluster/kubectl.sh get replicationControllers
+ $ ./cluster/kubectl.sh get replicationcontrollers
NAME IMAGE(S) SELECTOR REPLICAS
myNginx nginx name=my-nginx 3
```
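
One behavior worth knowing at this point: in this era, deleting a replication controller does not delete its pods, so tearing the example down takes two steps (resize to zero, then delete). A hedged sketch, assuming the `resize` subcommand is present:

```sh
# Deleting the controller alone would leave the three pods running,
# so drop the replica count to zero before removing it.
./cluster/kubectl.sh resize --replicas=0 replicationcontrollers myNginx
./cluster/kubectl.sh delete replicationcontrollers myNginx
```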


@@ -24,7 +24,7 @@ kubectl get [(-o|--output=)json|yaml|template|...] (RESOURCE [NAME] | RESOURCE/N
$ kubectl get pods
// List a single replication controller with specified NAME in ps output format.
- $ kubectl get replicationController web
+ $ kubectl get replicationcontroller web
// List a single pod in JSON output format.
$ kubectl get -o json pod web-pod-13je7
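
Alongside JSON output, the `-o template` form in the usage line takes a Go template; a hedged sketch, where the `currentState.status` field path is an assumption about the pre-v1 API shape:

```sh
# Print only a pod's status via a Go template (field path assumed; adjust
# for your build's API version).
kubectl get -o template --template='{{.currentState.status}}' pod web-pod-13je7
```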


@@ -83,13 +83,13 @@ LIST and WATCH operations may specify label selectors to filter the sets of obje
- _equality-based_ requirements: `?label-selector=key1%3Dvalue1,key2%3Dvalue2`
- _set-based_ requirements: `?label-selector=key+in+%28value1%2Cvalue2%29%2Ckey2+notin+%28value3%29`
- Kubernetes also currently supports two objects that use label selectors to keep track of their members, `service`s and `replicationController`s:
+ Kubernetes also currently supports two objects that use label selectors to keep track of their members, `service`s and `replicationcontroller`s:
- `service`: A [service](/docs/services.md) is a configuration unit for the proxies that run on every worker node. It is named and points to one or more pods.
- - `replicationController`: A [replication controller](/docs/replication-controller.md) ensures that a specified number of pod "replicas" are running at any one time.
+ - `replicationcontroller`: A [replication controller](/docs/replication-controller.md) ensures that a specified number of pod "replicas" are running at any one time.
- The set of pods that a `service` targets is defined with a label selector. Similarly, the population of pods that a `replicationController` is monitoring is also defined with a label selector. For management convenience and consistency, `services` and `replicationControllers` may themselves have labels and would generally carry the labels their corresponding pods have in common.
+ The set of pods that a `service` targets is defined with a label selector. Similarly, the population of pods that a `replicationcontroller` is monitoring is also defined with a label selector. For management convenience and consistency, `services` and `replicationcontrollers` may themselves have labels and would generally carry the labels their corresponding pods have in common.
- Sets identified by labels could be overlapping (think Venn diagrams). For instance, a service might target all pods with `"tier": "frontend"` and `"environment" : "prod"`. Now say you have 10 replicated pods that make up this tier. But you want to be able to 'canary' a new version of this component. You could set up a `replicationController` (with `replicas` set to 9) for the bulk of the replicas with labels `"tier" : "frontend"` and `"environment" : "prod"` and `"track" : "stable"` and another `replicationController` (with `replicas` set to 1) for the canary with labels `"tier" : "frontend"` and `"environment" : "prod"` and `"track" : "canary"`. Now the service is covering both the canary and non-canary pods. But you can mess with the `replicationControllers` separately to test things out, monitor the results, etc.
+ Sets identified by labels could be overlapping (think Venn diagrams). For instance, a service might target all pods with `"tier": "frontend"` and `"environment" : "prod"`. Now say you have 10 replicated pods that make up this tier. But you want to be able to 'canary' a new version of this component. You could set up a `replicationcontroller` (with `replicas` set to 9) for the bulk of the replicas with labels `"tier" : "frontend"` and `"environment" : "prod"` and `"track" : "stable"` and another `replicationcontroller` (with `replicas` set to 1) for the canary with labels `"tier" : "frontend"` and `"environment" : "prod"` and `"track" : "canary"`. Now the service is covering both the canary and non-canary pods. But you can mess with the `replicationcontrollers` separately to test things out, monitor the results, etc.
Note that the superset described in the previous example is also heterogeneous. In long-lived, highly available, horizontally scaled, distributed, continuously evolving service applications, heterogeneity is inevitable, due to canaries, incremental rollouts, live reconfiguration, simultaneous updates and auto-scaling, hardware upgrades, and so on.
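
Decoded, the two query strings above read `key1=value1,key2=value2` and `key in (value1,value2),key2 notin (value3)`. A hedged sketch of issuing such a LIST directly; the apiserver address, port, and path prefix are placeholders for your deployment:

```sh
# LIST pods whose labels satisfy an equality-based selector
# (host, port, and API version are assumptions).
curl "http://localhost:8080/api/v1beta3/pods?label-selector=environment%3Dprod%2Ctier%3Dfrontend"
```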


@@ -9,7 +9,7 @@ kube-apiserver \- Provides the API for kubernetes orchestration.
# DESCRIPTION
- The **kubernetes** API server validates and configures data for 3 types of objects: pods, services, and replicationControllers. Beyond just servicing REST operations, the API Server does two other things as well: 1. Schedules pods to worker nodes. Right now the scheduler is very simple. 2. Synchronizes pod information (where they are, what ports they are exposing) with the service configuration.
+ The **kubernetes** API server validates and configures data for 3 types of objects: pods, services, and replicationcontrollers. Beyond just servicing REST operations, the API Server does two other things as well: 1. Schedules pods to worker nodes. Right now the scheduler is very simple. 2. Synchronizes pod information (where they are, what ports they are exposing) with the service configuration.
The kube-apiserver has several options.
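
For illustration, a hedged invocation using a few era-typical flags; the flag spellings and values are assumptions, so consult this page's OPTIONS section for the authoritative list:

```sh
# Minimal illustrative invocation (flag names and values assumed for this era).
kube-apiserver \
  --etcd_servers=http://127.0.0.1:4001 \
  --address=0.0.0.0 \
  --port=8080 \
  --portal_net=10.0.0.0/16
```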


@@ -9,7 +9,7 @@ kube-controller-manager \- Enforces kubernetes services.
# DESCRIPTION
- The **kubernetes** controller manager is really a service that is layered on top of the simple pod API. To enforce this layering, the logic for the replicationController is actually broken out into another server. This server watches etcd for changes to replicationController objects and then uses the public Kubernetes API to implement the replication algorithm.
+ The **kubernetes** controller manager is really a service that is layered on top of the simple pod API. To enforce this layering, the logic for the replicationcontroller is actually broken out into another server. This server watches etcd for changes to replicationcontroller objects and then uses the public Kubernetes API to implement the replication algorithm.
The kube-controller-manager has several options.
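
Again for illustration, a hedged invocation; the flag name and address are assumptions for this era:

```sh
# Point the controller manager at the apiserver whose public API it uses
# (flag name and value assumed; see OPTIONS for the authoritative list).
kube-controller-manager --master=127.0.0.1:8080
```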


@@ -10,7 +10,7 @@ kube\-apiserver \- Provides the API for kubernetes orchestration.
.SH DESCRIPTION
.PP
- The \fBkubernetes\fP API server validates and configures data for 3 types of objects: pods, services, and replicationControllers. Beyond just servicing REST operations, the API Server does two other things as well: 1. Schedules pods to worker nodes. Right now the scheduler is very simple. 2. Synchronizes pod information (where they are, what ports they are exposing) with the service configuration.
+ The \fBkubernetes\fP API server validates and configures data for 3 types of objects: pods, services, and replicationcontrollers. Beyond just servicing REST operations, the API Server does two other things as well: 1. Schedules pods to worker nodes. Right now the scheduler is very simple. 2. Synchronizes pod information (where they are, what ports they are exposing) with the service configuration.
.PP
The kube\-apiserver has several options.


@@ -10,7 +10,7 @@ kube\-controller\-manager \- Enforces kubernetes services.
.SH DESCRIPTION
.PP
- The \fBkubernetes\fP controller manager is really a service that is layered on top of the simple pod API. To enforce this layering, the logic for the replicationController is actually broken out into another server. This server watches etcd for changes to replicationController objects and then uses the public Kubernetes API to implement the replication algorithm.
+ The \fBkubernetes\fP controller manager is really a service that is layered on top of the simple pod API. To enforce this layering, the logic for the replicationcontroller is actually broken out into another server. This server watches etcd for changes to replicationcontroller objects and then uses the public Kubernetes API to implement the replication algorithm.
.PP
The kube\-controller\-manager has several options.


@@ -166,7 +166,7 @@ of the \-\-template flag, you can filter the attributes of the fetched resource(s)
$ kubectl get pods
// List a single replication controller with specified NAME in ps output format.
- $ kubectl get replicationController web
+ $ kubectl get replicationcontroller web
// List a single pod in JSON output format.
$ kubectl get \-o json pod web\-pod\-13je7


@@ -4,7 +4,7 @@
A _replication controller_ ensures that a specified number of pod "replicas" are running at any one time. If there are too many, it will kill some. If there are too few, it will start more. Unlike pods created directly by a user, pods maintained by a replication controller are replaced if they are deleted or terminated for any reason, such as node failure or disruptive node maintenance (e.g., a kernel upgrade). For this reason, we recommend that you use a replication controller even if your application requires only a single pod. Think of it as a process supervisor, only one that supervises multiple pods across multiple nodes instead of individual processes on a single node. A replication controller delegates local container restarts to an agent on the node (e.g., the Kubelet or Docker).
- As discussed in [life of a pod](pod-states.md), `replicationController` is *only* appropriate for pods with `RestartPolicy = Always`. `ReplicationController` should refuse to instantiate any pod that has a different restart policy. As discussed in [issue #503](https://github.com/GoogleCloudPlatform/kubernetes/issues/503#issuecomment-50169443), we expect other types of controllers to be added to Kubernetes to handle other types of workloads, such as build/test and batch workloads, in the future.
+ As discussed in [life of a pod](pod-states.md), `replicationcontroller` is *only* appropriate for pods with `RestartPolicy = Always`. `ReplicationController` should refuse to instantiate any pod that has a different restart policy. As discussed in [issue #503](https://github.com/GoogleCloudPlatform/kubernetes/issues/503#issuecomment-50169443), we expect other types of controllers to be added to Kubernetes to handle other types of workloads, such as build/test and batch workloads, in the future.
A replication controller will never terminate on its own, but it isn't expected to be as long-lived as services. Services may be composed of pods controlled by multiple replication controllers, and it is expected that many replication controllers may be created and destroyed over the lifetime of a service. Both services themselves and their clients should remain oblivious to the replication controllers that maintain the pods of the services.
@@ -12,7 +12,7 @@ A replication controller will never terminate on its own, but it isn't expected
### Pod template
- A replication controller creates new pods from a template, which is currently inline in the `replicationController` object, but which we plan to extract into its own resource [#170](https://github.com/GoogleCloudPlatform/kubernetes/issues/170).
+ A replication controller creates new pods from a template, which is currently inline in the `replicationcontroller` object, but which we plan to extract into its own resource [#170](https://github.com/GoogleCloudPlatform/kubernetes/issues/170).
Rather than specifying the current desired state of all replicas, pod templates are like cookie cutters. Once a cookie has been cut, the cookie has no relationship to the cutter. There is no quantum entanglement. Subsequent changes to the template or even switching to a new template has no direct effect on the pods already created. Similarly, pods created by a replication controller may subsequently be updated directly. This is in deliberate contrast to pods, which do specify the current desired state of all containers belonging to the pod. This approach radically simplifies system semantics and increases the flexibility of the primitive, as demonstrated by the use cases explained below.
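
To make "inline" concrete, here is a minimal sketch of a controller object with its pod template embedded, assuming the v1beta3-era schema (all names and field values are illustrative assumptions):

```sh
# A replication controller whose pod template is inline under spec.template
# (v1beta3-era schema assumed).
cat > rc.json <<'EOF'
{
  "kind": "ReplicationController",
  "apiVersion": "v1beta3",
  "metadata": { "name": "my-nginx" },
  "spec": {
    "replicas": 3,
    "selector": { "name": "my-nginx" },
    "template": {
      "metadata": { "labels": { "name": "my-nginx" } },
      "spec": {
        "containers": [
          { "name": "nginx", "image": "nginx", "ports": [{ "containerPort": 80 }] }
        ]
      }
    }
  }
}
EOF
kubectl create -f rc.json
```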
@@ -20,11 +20,11 @@ Pods created by a replication controller are intended to be fungible and semanti
### Labels
- The population of pods that a `replicationController` is monitoring is defined with a [label selector](labels.md), which creates a loosely coupled relationship between the controller and the pods controlled, in contrast to pods, which are more tightly coupled. We deliberately chose not to represent the set of pods controlled using a fixed-length array of pod specifications, because our experience is that that approach increases complexity of management operations, for both clients and the system.
+ The population of pods that a `replicationcontroller` is monitoring is defined with a [label selector](labels.md), which creates a loosely coupled relationship between the controller and the pods controlled, in contrast to pods, which are more tightly coupled. We deliberately chose not to represent the set of pods controlled using a fixed-length array of pod specifications, because our experience is that that approach increases complexity of management operations, for both clients and the system.
The replication controller should verify that the pods created from the specified template have labels that match its label selector. Though it isn't verified yet, you should also ensure that only one replication controller controls any given pod, by ensuring that the label selectors of replication controllers do not target overlapping sets.
- Note that `replicationControllers` may themselves have labels and would generally carry the labels their corresponding pods have in common, but these labels do not affect the behavior of the replication controllers.
+ Note that `replicationcontrollers` may themselves have labels and would generally carry the labels their corresponding pods have in common, but these labels do not affect the behavior of the replication controllers.
Pods may be removed from a replication controller's target set by changing their labels. This technique may be used to remove pods from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed).
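
A hedged sketch of that technique, assuming this build ships `kubectl label` (the pod name and label values are placeholders):

```sh
# Relabel one pod so it no longer matches the controller's selector; the
# controller starts a replacement, and the original is left for debugging.
kubectl label pod my-nginx-abc12 --overwrite name=my-nginx-debug
```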
@@ -62,7 +62,7 @@ The two replication controllers would need to create pods with at least one diff
In addition to running multiple releases of an application while a rolling update is in progress, it's common to run multiple releases for an extended period of time, or even continuously, using multiple release tracks. The tracks would be differentiated by labels.
- For instance, a service might target all pods with `tier in (frontend), environment in (prod)`. Now say you have 10 replicated pods that make up this tier. But you want to be able to 'canary' a new version of this component. You could set up a `replicationController` with `replicas` set to 9 for the bulk of the replicas, with labels `tier=frontend, environment=prod, track=stable`, and another `replicationController` with `replicas` set to 1 for the canary, with labels `tier=frontend, environment=prod, track=canary`. Now the service is covering both the canary and non-canary pods. But you can mess with the `replicationControllers` separately to test things out, monitor the results, etc.
+ For instance, a service might target all pods with `tier in (frontend), environment in (prod)`. Now say you have 10 replicated pods that make up this tier. But you want to be able to 'canary' a new version of this component. You could set up a `replicationcontroller` with `replicas` set to 9 for the bulk of the replicas, with labels `tier=frontend, environment=prod, track=stable`, and another `replicationcontroller` with `replicas` set to 1 for the canary, with labels `tier=frontend, environment=prod, track=canary`. Now the service is covering both the canary and non-canary pods. But you can mess with the `replicationcontrollers` separately to test things out, monitor the results, etc.
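
A hedged sketch of inspecting the two tracks separately; `-l` flag support is an assumption for this build, and the label values come from the example above:

```sh
# The service spans both tracks; these queries look at each track alone.
kubectl get pods -l tier=frontend,environment=prod,track=stable
kubectl get pods -l track=canary
```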


@@ -40,7 +40,7 @@ of the --template flag, you can filter the attributes of the fetched resource(s)
$ kubectl get pods
// List a single replication controller with specified NAME in ps output format.
- $ kubectl get replicationController web
+ $ kubectl get replicationcontroller web
// List a single pod in JSON output format.
$ kubectl get -o json pod web-pod-13je7