upgrade example/walkthrough to v1beta3

Chao Xu
2015-05-07 17:55:36 -07:00
parent 738f403eea
commit ab356c9fa0
14 changed files with 184 additions and 451 deletions

Lists all pods whose name label matches 'nginx'. Labels are discussed in detail [elsewhere](http://docs.k8s.io/labels.md).
OK, now you have an awesome, multi-container, labelled pod, and you want to use it to build an application. You might be tempted to just start building a whole bunch of individual pods, but if you do that, a whole host of operational concerns pop up. For example: how will you scale the number of pods up or down, and how will you ensure that all pods are homogeneous?
Replication controllers are the objects that answer these questions. A replication controller combines a template for pod creation (a "cookie-cutter" if you will) with a number of desired replicas in a single API object. The replication controller also contains a label selector that identifies the set of objects it manages. The replication controller constantly measures the size of this set relative to the desired size, and takes action by creating or deleting pods. The design of replication controllers is discussed in detail [elsewhere](http://docs.k8s.io/replication-controller.md).
An example replication controller that instantiates two pods running nginx looks like:
```yaml
apiVersion: v1beta3
kind: ReplicationController
metadata:
  name: nginx-controller
spec:
  replicas: 2
  # selector identifies the set of Pods that this
  # replication controller is responsible for managing
  selector:
    name: nginx
  # template defines the 'cookie cutter' used for creating
  # new pods when necessary
  template:
    metadata:
      labels:
        # Important: these labels need to match the selector above
        # The api server enforces this constraint.
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```
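To try this out, you could save the config above to a file and hand it to kubectl; `replication-controller.yaml` below is just an illustrative filename:

```sh
# create the replication controller defined above
kubectl create -f replication-controller.yaml

# list the pods it manages, selecting on the name=nginx label
kubectl get pods -l name=nginx
```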
### Services
Once you have a replicated set of pods, you need an abstraction that enables connectivity between the layers of your application. For example, if you have a replication controller managing your backend jobs, you don't want to have to reconfigure your front-ends whenever you re-scale your backends. Likewise, if the pods in your backends are scheduled (or rescheduled) onto different machines, you shouldn't have to reconfigure your front-ends. In Kubernetes, the Service API object achieves these goals. A Service combines an IP address and a label selector to form a simple, static rallying point for connecting to a micro-service in your application.
For example, here is a service that balances across the pods created in the previous nginx replication controller example:
```yaml
apiVersion: v1beta3
kind: Service
metadata:
  # must be a DNS compatible name
  name: nginx-example
spec:
  ports:
  - port: 8000 # the port that this service should serve on
    # the container on each pod to connect to, can be a name
    # (e.g. 'www') or a number (e.g. 80)
    targetPort: 80
    protocol: TCP
  # just like the selector in the replication controller,
  # but this time it identifies the set of pods to load balance
  # traffic to.
  selector:
    name: nginx
```
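As a sketch of how you might create this service and find the address it was assigned (`service.yaml` is an illustrative filename):

```sh
# create the service defined above
kubectl create -f service.yaml

# list services, including the IP address assigned to nginx-example
kubectl get services
```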
When created, each service is assigned a unique IP address. This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the service, and know that communication to the service will be automatically load-balanced out to some pod that is a member of the set identified by the label selector in the Service. Services are described in detail [elsewhere](http://docs.k8s.io/services.md).
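For example, a pod started after the service exists can reach it through the docker-links-style environment variables Kubernetes injects; a hedged sketch, assuming the variable names are derived from the service name with dashes converted to underscores:

```sh
# from inside such a pod, the service's stable address is available
# via environment variables derived from the 'nginx-example' name
curl http://$NGINX_EXAMPLE_SERVICE_HOST:$NGINX_EXAMPLE_SERVICE_PORT/
```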
In all cases, if the Kubelet discovers a failure, the container is restarted.
The container health checks are configured in the "livenessProbe" section of your container config. There you can also specify an "initialDelaySeconds", a grace period between container startup and the first health check, giving your container time to perform any necessary initialization.
Here is an example config for a pod with an HTTP health check:
```yaml
apiVersion: v1beta3
kind: Pod
metadata:
  name: pod-with-healthcheck
spec:
  containers:
  - name: nginx
    image: nginx
    # defines the health checking
    livenessProbe:
      # an http probe
      httpGet:
        path: /_status/healthz
        port: 8080
      # length of time to wait for a pod to initialize
      # after pod startup, before applying health checking
      initialDelaySeconds: 30
      timeoutSeconds: 1
    ports:
    - containerPort: 80
```
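A quick way to see the probe in action (a sketch; `pod-with-healthcheck.yaml` is an illustrative filename): create the pod, then inspect it; probe failures and the resulting restarts show up in the pod's events:

```sh
# create the pod with the liveness probe defined above
kubectl create -f pod-with-healthcheck.yaml

# inspect the pod; probe failures and restarts appear in its events
kubectl describe pod pod-with-healthcheck
```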
### What's next?