Mirror of https://github.com/k3s-io/kubernetes.git (synced 2025-07-29 14:37:00 +00:00)
Replace ``` with ` when emphasizing something inline in docs/
This commit is contained in:
parent b80c0e5177
commit 68d6e3a8ae
@@ -79,7 +79,7 @@ support all the features you expect.

## How do I turn on an admission control plug-in?

The Kubernetes API server supports a flag, `admission_control` that takes a comma-delimited,
ordered list of admission control choices to invoke prior to modifying objects in the cluster.

## What does each plug-in do?

@@ -102,16 +102,16 @@ commands in those containers, we strongly encourage enabling this plug-in.
### ServiceAccount

This plug-in implements automation for [serviceAccounts](../user-guide/service-accounts.md).
We strongly recommend using this plug-in if you intend to make use of Kubernetes `ServiceAccount` objects.

### SecurityContextDeny

This plug-in will deny any pod with a [SecurityContext](../user-guide/security-context.md) that defines options that were not available on the `Container`.

### ResourceQuota

This plug-in will observe the incoming request and ensure that it does not violate any of the constraints
enumerated in the `ResourceQuota` object in a `Namespace`. If you are using `ResourceQuota`
objects in your Kubernetes deployment, you MUST use this plug-in to enforce quota constraints.

See the [resourceQuota design doc](../design/admission_control_resource_quota.md) and the [example of Resource Quota](../user-guide/resourcequota/) for more details.

@@ -122,35 +122,35 @@ so that quota is not prematurely incremented only for the request to be rejected
### LimitRanger

This plug-in will observe the incoming request and ensure that it does not violate any of the constraints
enumerated in the `LimitRange` object in a `Namespace`. If you are using `LimitRange` objects in
your Kubernetes deployment, you MUST use this plug-in to enforce those constraints. LimitRanger can also
be used to apply default resource requests to Pods that don't specify any; currently, the default LimitRanger
applies a 0.1 CPU requirement to all Pods in the `default` namespace.

See the [limitRange design doc](../design/admission_control_limit_range.md) and the [example of Limit Range](../user-guide/limitrange/) for more details.

### NamespaceExists

This plug-in will observe all incoming requests that attempt to create a resource in a Kubernetes `Namespace`
and reject the request if the `Namespace` was not previously created. We strongly recommend running
this plug-in to ensure integrity of your data.

### NamespaceAutoProvision (deprecated)

This plug-in will observe all incoming requests that attempt to create a resource in a Kubernetes `Namespace`
and create a new `Namespace` if one did not already exist previously.

We strongly recommend `NamespaceExists` over `NamespaceAutoProvision`.

### NamespaceLifecycle

This plug-in enforces that a `Namespace` that is undergoing termination cannot have new objects created in it.

A `Namespace` deletion kicks off a sequence of operations that remove all objects (pods, services, etc.) in that
namespace. In order to enforce integrity of that process, we strongly recommend running this plug-in.

Once `NamespaceAutoProvision` is deprecated, we anticipate `NamespaceLifecycle` and `NamespaceExists` will
be merged into a single plug-in that enforces the life-cycle of a `Namespace` in Kubernetes.

## Is there a recommended set of plug-ins to use?
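For context, a minimal sketch of turning several of the plug-ins described above on via the `admission_control` flag; the particular list and ordering here are illustrative assumptions, not a recommendation taken from this commit:

```sh
# Hypothetical example: enable a set of admission control plug-ins, in order.
kube-apiserver -admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
```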
@@ -135,15 +135,15 @@ network rules on the host and performing connection forwarding.

### docker

`docker` is of course used for actually running containers.

### rkt

`rkt` is supported experimentally as an alternative to docker.

### monit

`monit` is a lightweight process babysitting system for keeping kubelet and docker
running.
@@ -48,12 +48,12 @@ Run
kubectl get nodes
```

And verify that all of the nodes you expect to see are present and that they are all in the `Ready` state.

## Looking at logs

For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations
of the relevant log files. (note that on systemd-based systems, you may need to use `journalctl` instead)

### Master
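On systemd-based machines, the equivalent of tailing those log files is querying the journal; a small sketch, assuming the daemons run as units named `kubelet` and `docker`:

```sh
# Recent kubelet and docker logs on a systemd-based node.
journalctl -u kubelet --since "1 hour ago"
journalctl -u docker --since "1 hour ago"
```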
@@ -99,21 +99,21 @@ describe easy installation for single-master clusters on a variety of platforms.

On each master node, we are going to run a number of processes that implement the Kubernetes API. The first step in making these reliable is
to make sure that each automatically restarts when it fails. To achieve this, we need to install a process watcher. We choose to use
the `kubelet` that we run on each of the worker nodes. This is convenient, since we can use containers to distribute our binaries, we can
establish resource limits, and introspect the resource usage of each daemon. Of course, we also need something to monitor the kubelet
itself (insert who watches the watcher jokes here). For Debian systems, we choose monit, but there are a number of alternate
choices. For example, on systemd-based systems (e.g. RHEL, CentOS), you can run 'systemctl enable kubelet'.

If you are extending from a standard Kubernetes installation, the `kubelet` binary should already be present on your system. You can run
`which kubelet` to determine if the binary is in fact installed. If it is not installed,
you should install the [kubelet binary](https://storage.googleapis.com/kubernetes-release/release/v0.19.3/bin/linux/amd64/kubelet), the
[kubelet init file](../../cluster/saltbase/salt/kubelet/initd) and [high-availability/default-kubelet](high-availability/default-kubelet)
scripts.

If you are using monit, you should also install the monit daemon (`apt-get install monit`) and the [high-availability/monit-kubelet](high-availability/monit-kubelet) and
[high-availability/monit-docker](high-availability/monit-docker) configs.

On systemd systems you `systemctl enable kubelet` and `systemctl enable docker`.


## Establishing a redundant, reliable data storage layer
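Collecting the checks and commands mentioned above into one place, assuming a Debian-flavoured package name and systemd unit names of `kubelet` and `docker`:

```sh
# Verify the kubelet binary is installed.
which kubelet

# Debian-style systems: install monit to babysit kubelet and docker.
apt-get install monit

# systemd-based systems: let systemd restart the daemons instead.
systemctl enable kubelet
systemctl enable docker
```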
@@ -140,14 +140,14 @@ First, hit the etcd discovery service to create a new token:
curl https://discovery.etcd.io/new?size=3
```

On each node, copy the [etcd.yaml](high-availability/etcd.yaml) file into `/etc/kubernetes/manifests/etcd.yaml`

The kubelet on each node actively monitors the contents of that directory, and it will create an instance of the `etcd`
server from the definition of the pod specified in `etcd.yaml`.

Note that in `etcd.yaml` you should substitute the token URL you got above for `${DISCOVERY_TOKEN}` on all three machines,
and you should substitute a different name (e.g. `node-1`) for ${NODE_NAME} and the correct IP address
for `${NODE_IP}` on each machine.


#### Validating your cluster
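One way to fill in those placeholders is a quick `sed` pass over the copied manifest; a sketch only, where the token URL, node name, and IP are values you must supply yourself:

```sh
# Hypothetical values; substitute the real token URL, node name and node IP.
TOKEN_URL="https://discovery.etcd.io/<your-token>"
sed -i \
  -e "s|\${DISCOVERY_TOKEN}|${TOKEN_URL}|" \
  -e "s|\${NODE_NAME}|node-1|" \
  -e "s|\${NODE_IP}|10.0.0.1|" \
  /etc/kubernetes/manifests/etcd.yaml
```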
@@ -164,7 +164,7 @@ and
etcdctl cluster-health
```

You can also validate that this is working with `etcdctl set foo bar` on one node, and `etcd get foo`
on a different node.

### Even more reliable storage
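A concrete version of that check, run from two different members; shown here with `etcdctl` for both the write and the read, assuming that is the client the text intends:

```sh
# On the first master:
etcdctl set foo bar

# On a different master:
etcdctl get foo   # should print "bar"
```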
@@ -181,7 +181,7 @@ Alternatively, you can run a clustered file system like Gluster or Ceph. Finall

Regardless of how you choose to implement it, if you chose to use one of these options, you should make sure that your storage is mounted
to each machine. If your storage is shared between the three masters in your cluster, you should create a different directory on the storage
for each node. Throughout these instructions, we assume that this storage is mounted to your machine in `/var/etcd/data`


## Replicated API Servers

@@ -196,7 +196,7 @@ First you need to create the initial log file, so that Docker mounts a file inst
touch /var/log/kube-apiserver.log
```

Next, you need to create a `/srv/kubernetes/` directory on each node. This directory includes:
* basic_auth.csv - basic auth user and password
* ca.crt - Certificate Authority cert
* known_tokens.csv - tokens that entities (e.g. the kubelet) can use to talk to the apiserver

@@ -209,9 +209,9 @@ The easiest way to create this directory, may be to copy it from the master node

### Starting the API Server

Once these files exist, copy the [kube-apiserver.yaml](high-availability/kube-apiserver.yaml) into `/etc/kubernetes/manifests/` on each master node.

The kubelet monitors this directory, and will automatically create an instance of the `kube-apiserver` container using the pod definition specified
in the file.

### Load balancing

@@ -224,9 +224,9 @@ Platform can be found [here](https://cloud.google.com/compute/docs/load-balancin
Note, if you are using authentication, you may need to regenerate your certificate to include the IP address of the balancer,
in addition to the IP addresses of the individual nodes.

For pods that you deploy into the cluster, the `kubernetes` service/dns name should provide a load balanced endpoint for the master automatically.

For external users of the API (e.g. the `kubectl` command line interface, continuous build pipelines, or other clients) you will want to configure
them to talk to the external load balancer's IP address.

## Master elected components

@@ -234,7 +234,7 @@ them to talk to the external load balancer's IP address.
So far we have set up state storage, and we have set up the API server, but we haven't run anything that actually modifies
cluster state, such as the controller manager and scheduler. To achieve this reliably, we only want to have one actor modifying state at a time, but we want replicated
instances of these actors, in case a machine dies. To achieve this, we are going to use a lease-lock in etcd to perform
master election. On each of the three apiserver nodes, we run a small utility application named `podmaster`. It's job is to implement a master
election protocol using etcd "compare and swap". If the apiserver node wins the election, it starts the master component it is managing (e.g. the scheduler), if it
loses the election, it ensures that any master components running on the node (e.g. the scheduler) are stopped.

@@ -250,14 +250,14 @@ touch /var/log/kube-controller-manager.log
```

Next, set up the descriptions of the scheduler and controller manager pods on each node.
by copying [kube-scheduler.yaml](high-availability/kube-scheduler.yaml) and [kube-controller-manager.yaml](high-availability/kube-controller-manager.yaml) into the `/srv/kubernetes/`
directory.

### Running the podmaster

Now that the configuration files are in place, copy the [podmaster.yaml](high-availability/podmaster.yaml) config file into `/etc/kubernetes/manifests/`

As before, the kubelet on the node monitors this directory, and will start an instance of the podmaster using the pod specification provided in `podmaster.yaml`.

Now you will have one instance of the scheduler process running on a single master node, and likewise one
controller-manager process running on a single (possibly different) master node. If either of these processes fail,

@@ -272,7 +272,7 @@ If you have an existing cluster, this is as simple as reconfiguring your kubelet
restarting the kubelets on each node.

If you are turning up a fresh cluster, you will need to install the kubelet and kube-proxy on each worker node, and
set the `--apiserver` flag to your replicated endpoint.

## Vagrant up!

@@ -136,7 +136,7 @@ $ kube-apiserver -admission_control=LimitRanger

kubectl is modified to support the **LimitRange** resource.

`kubectl describe` provides a human-readable output of limits.

For example,

@@ -163,7 +163,7 @@ this being the resource most closely running at the prescribed quota limits.

kubectl is modified to support the **ResourceQuota** resource.

`kubectl describe` provides a human-readable output of quota.

For example,
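As a sketch of what those describe calls look like in practice; the object and namespace names here are invented for illustration:

```sh
# Hypothetical objects; substitute your own LimitRange / ResourceQuota names.
kubectl describe limits mylimits --namespace=myspace
kubectl describe quota myquota --namespace=myspace
```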
@@ -38,41 +38,41 @@ This document captures the design of event compression.

## Background

Kubernetes components can get into a state where they generate tons of events which are identical except for the timestamp. For example, when pulling a non-existing image, Kubelet will repeatedly generate `image_not_existing` and `container_is_waiting` events until upstream components correct the image. When this happens, the spam from the repeated events makes the entire event mechanism useless. It also appears to cause memory pressure in etcd (see [#3853](https://github.com/GoogleCloudPlatform/kubernetes/issues/3853)).

## Proposal

Each binary that generates events (for example, `kubelet`) should keep track of previously generated events so that it can collapse recurring events into a single event instead of creating a new instance for each new event.

Event compression should be best effort (not guaranteed). Meaning, in the worst case, `n` identical (minus timestamp) events may still result in `n` event entries.

## Design

Instead of a single Timestamp, each event object [contains](../../pkg/api/types.go#L1111) the following fields:
* `FirstTimestamp util.Time`
  * The date/time of the first occurrence of the event.
* `LastTimestamp util.Time`
  * The date/time of the most recent occurrence of the event.
  * On first occurrence, this is equal to the FirstTimestamp.
* `Count int`
  * The number of occurrences of this event between FirstTimestamp and LastTimestamp
  * On first occurrence, this is 1.

Each binary that generates events:
* Maintains a historical record of previously generated events:
  * Implemented with ["Least Recently Used Cache"](https://github.com/golang/groupcache/blob/master/lru/lru.go) in [`pkg/client/record/events_cache.go`](../../pkg/client/record/events_cache.go).
  * The key in the cache is generated from the event object minus timestamps/count/transient fields, specifically the following events fields are used to construct a unique key for an event:
    * `event.Source.Component`
    * `event.Source.Host`
    * `event.InvolvedObject.Kind`
    * `event.InvolvedObject.Namespace`
    * `event.InvolvedObject.Name`
    * `event.InvolvedObject.UID`
    * `event.InvolvedObject.APIVersion`
    * `event.Reason`
    * `event.Message`
  * The LRU cache is capped at 4096 events. That means if a component (e.g. kubelet) runs for a long period of time and generates tons of unique events, the previously generated events cache will not grow unchecked in memory. Instead, after 4096 unique events are generated, the oldest events are evicted from the cache.
* When an event is generated, the previously generated events cache is checked (see [`pkg/client/record/event.go`](../../pkg/client/record/event.go)).
  * If the key for the new event matches the key for a previously generated event (meaning all of the above fields match between the new event and some previously generated event), then the event is considered to be a duplicate and the existing event entry is updated in etcd:
    * The new PUT (update) event API is called to update the existing event entry in etcd with the new last seen timestamp and count.
    * The event is also updated in the previously generated events cache with an incremented count, updated last seen timestamp, name, and new resource version (all required to issue a future event update).
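The effect of this compression is visible from the command line; a small sketch, assuming a reachable cluster:

```sh
# Repeated occurrences show up as a single event whose count keeps increasing.
kubectl get events
# Describing the involved object lists the event once, with first/last seen timestamps and a count.
kubectl describe pod <pod-with-a-bad-image>
```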
@@ -65,7 +65,7 @@ Kubernetes makes no guarantees at runtime that the underlying storage exists or

#### Describe available storage

Cluster administrators use the API to manage *PersistentVolumes*. A custom store `NewPersistentVolumeOrderedIndex` will index volumes by access modes and sort by storage capacity. The `PersistentVolumeClaimBinder` watches for new claims for storage and binds them to an available volume by matching the volume's characteristics (AccessModes and storage size) to the user's request.

PVs are system objects and, thus, have no namespace.

@@ -297,7 +297,7 @@ storing it. Secrets contain multiple pieces of data that are presented as differ
the secret volume (example: SSH key pair).

In order to remove the burden from the end user in specifying every file that a secret consists of,
it should be possible to mount all files provided by a secret with a single `VolumeMount` entry
in the container specification.

### Secret API Resource

@@ -349,7 +349,7 @@ finer points of secrets and resource allocation are fleshed out.

### Secret Volume Source

A new `SecretSource` type of volume source will be added to the `VolumeSource` struct in the
API:

```go
@@ -33,15 +33,15 @@ Documentation for other releases can be found at

## Simple rolling update

This is a lightweight design document for simple [rolling update](../user-guide/kubectl/kubectl_rolling-update.md) in `kubectl`.

Complete execution flow can be found [here](#execution-details). See the [example of rolling update](../user-guide/update-demo/) for more information.

### Lightweight rollout

Assume that we have a current replication controller named `foo` and it is running image `image:v1`

`kubectl rolling-update foo [foo-v2] --image=myimage:v2`

If the user doesn't specify a name for the 'next' replication controller, then the 'next' replication controller is renamed to
the name of the original replication controller.

@@ -50,15 +50,15 @@ Obviously there is a race here, where if you kill the client between delete foo,
See [Recovery](#recovery) below

If the user does specify a name for the 'next' replication controller, then the 'next' replication controller is retained with its existing name,
and the old 'foo' replication controller is deleted. For the purposes of the rollout, we add a unique-ifying label `kubernetes.io/deployment` to both the `foo` and `foo-next` replication controllers.
The value of that label is the hash of the complete JSON representation of the`foo-next` or`foo` replication controller. The name of this label can be overridden by the user with the `--deployment-label-key` flag.

#### Recovery

If a rollout fails or is terminated in the middle, it is important that the user be able to resume the roll out.
To facilitate recovery in the case of a crash of the updating process itself, we add the following annotations to each replication controller in the `kubernetes.io/` annotation namespace:
* `desired-replicas` The desired number of replicas for this replication controller (either N or zero)
* `update-partner` A pointer to the replication controller resource that is the other half of this update (syntax `<name>` the namespace is assumed to be identical to the namespace of this replication controller.)

Recovery is achieved by issuing the same command again:

@@ -66,70 +66,70 @@ Recovery is achieved by issuing the same command again:
kubectl rolling-update foo [foo-v2] --image=myimage:v2
```

Whenever the rolling update command executes, the kubectl client looks for replication controllers called `foo` and `foo-next`, if they exist, an attempt is
made to roll `foo` to `foo-next`. If `foo-next` does not exist, then it is created, and the rollout is a new rollout. If `foo` doesn't exist, then
it is assumed that the rollout is nearly completed, and `foo-next` is renamed to `foo`. Details of the execution flow are given below.


### Aborting a rollout

Abort is assumed to want to reverse a rollout in progress.

`kubectl rolling-update foo [foo-v2] --rollback`

This is really just semantic sugar for:

`kubectl rolling-update foo-v2 foo`

With the added detail that it moves the `desired-replicas` annotation from `foo-v2` to `foo`


### Execution Details

For the purposes of this example, assume that we are rolling from `foo` to `foo-next` where the only change is an image update from `v1` to `v2`

If the user doesn't specify a `foo-next` name, then it is either discovered from the `update-partner` annotation on `foo`. If that annotation doesn't exist,
then `foo-next` is synthesized using the pattern `<controller-name>-<hash-of-next-controller-JSON>`

#### Initialization

* If `foo` and `foo-next` do not exist:
  * Exit, and indicate an error to the user, that the specified controller doesn't exist.
* If `foo` exists, but `foo-next` does not:
  * Create `foo-next` populate it with the `v2` image, set `desired-replicas` to `foo.Spec.Replicas`
  * Goto Rollout
* If `foo-next` exists, but `foo` does not:
  * Assume that we are in the rename phase.
  * Goto Rename
* If both `foo` and `foo-next` exist:
  * Assume that we are in a partial rollout
  * If `foo-next` is missing the `desired-replicas` annotation
    * Populate the `desired-replicas` annotation to `foo-next` using the current size of `foo`
  * Goto Rollout

#### Rollout

* While size of `foo-next` < `desired-replicas` annotation on `foo-next`
  * increase size of `foo-next`
  * if size of `foo` > 0
    decrease size of `foo`
* Goto Rename

#### Rename

* delete `foo`
* create `foo` that is identical to `foo-next`
* delete `foo-next`

#### Abort

* If `foo-next` doesn't exist
  * Exit and indicate to the user that they may want to simply do a new rollout with the old version
* If `foo` doesn't exist
  * Exit and indicate not found to the user
* Otherwise, `foo-next` and `foo` both exist
  * Set `desired-replicas` annotation on `foo` to match the annotation on `foo-next`
  * Goto Rollout with `foo` and `foo-next` trading places.


<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
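A quick sketch of how this looks from the operator's side, using the illustrative controller and image names from the text above:

```sh
# Start (or resume, after a crash) the same rollout; kubectl picks up the foo/foo-next state.
kubectl rolling-update foo foo-v2 --image=myimage:v2

# Inspect the bookkeeping the updater leaves behind.
kubectl get rc foo-v2 -o yaml | grep -A2 annotations          # desired-replicas, update-partner
kubectl get rc foo-v2 -o yaml | grep kubernetes.io/deployment

# Reverse a rollout in progress.
kubectl rolling-update foo foo-v2 --rollback
```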
@@ -498,7 +498,7 @@ The following HTTP status codes may be returned by the API.
* `429 StatusTooManyRequests`
  * Indicates that the either the client rate limit has been exceeded or the server has received more requests then it can process.
  * Suggested client recovery behavior:
    * Read the `Retry-After` HTTP header from the response, and wait at least that long before retrying.
* `500 StatusInternalServerError`
  * Indicates that the server can be reached and understood the request, but either an unexpected internal error occurred and the outcome of the call is unknown, or the server cannot complete the action in a reasonable time (this maybe due to temporary server load or a transient communication issue with another server).
  * Suggested client recovery behavior:

@@ -514,12 +514,12 @@ The following HTTP status codes may be returned by the API.

## Response Status Kind

Kubernetes will always return the `Status` kind from any API endpoint when an error occurs.
Clients SHOULD handle these types of objects when appropriate.

A `Status` kind will be returned by the API in two cases:
* When an operation is not successful (i.e. when the server would return a non 2xx HTTP status code).
* When a HTTP `DELETE` call is successful.

The status object is encoded as JSON and provided as the body of the response. The status object contains fields for humans and machine consumers of the API to get more detailed information for the cause of the failure. The information in the status object supplements, but does not override, the HTTP status code's meaning. When fields in the status object have the same meaning as generally defined HTTP headers and that header is returned with the response, the header should be considered as having higher priority.

@@ -555,17 +555,17 @@ $ curl -v -k -H "Authorization: Bearer WhCDvq4VPpYhrcfmF6ei7V9qlbqTubUc" https:/
}
```

`status` field contains one of two possible values:
* `Success`
* `Failure`

`message` may contain human-readable description of the error

`reason` may contain a machine-readable description of why this operation is in the `Failure` status. If this value is empty there is no information available. The `reason` clarifies an HTTP status code but does not override it.

`details` may contain extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type.

Possible values for the `reason` and `details` fields:
* `BadRequest`
  * Indicates that the request itself was invalid, because the request doesn't make any sense, for example deleting a read-only object.
  * This is different than `status reason` `Invalid` above which indicates that the API call could possibly succeed, but the data was invalid.
@@ -133,7 +133,7 @@ vagrant destroy

Once your Vagrant machines are up and provisioned, the first thing to do is to check that you can use the `kubectl.sh` script.

You may need to build the binaries first, you can do this with `make`

```sh
$ ./cluster/kubectl.sh get nodes

@@ -374,7 +374,7 @@ export KUBERNETES_MINION_MEMORY=2048

#### I ran vagrant suspend and nothing works!

`vagrant suspend` seems to mess up the network. It's not supported at this time.


<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@@ -106,10 +106,10 @@ Kubernetes uses [godep](https://github.com/tools/godep) to manage dependencies.

### Installing godep

There are many ways to build and host go binaries. Here is an easy way to get utilities like `godep` installed:

1) Ensure that [mercurial](http://mercurial.selenic.com/wiki/Download) is installed on your system. (some of godep's dependencies use the mercurial
source control system). Use `apt-get install mercurial` or `yum install mercurial` on Linux, or [brew.sh](http://brew.sh) on OS X, or download
directly from mercurial.

2) Create a new GOPATH for your tools and install godep:
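A sketch of that step; the GOPATH location is an arbitrary choice:

```sh
# Keep tooling in its own GOPATH so it doesn't mix with the Kubernetes tree.
export GOPATH=$HOME/go-tools
mkdir -p "$GOPATH"
go get github.com/tools/godep

# Make the godep binary reachable.
export PATH=$PATH:$GOPATH/bin
which godep
```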
@@ -174,7 +174,7 @@ go get -u path/to/dependency
godep update path/to/dependency
```

5) Before sending your PR, it's a good idea to sanity check that your Godeps.json file is ok by re-restoring: `godep restore`

It is sometimes expedient to manually fix the /Godeps/godeps.json file to minimize the changes.

@@ -41,7 +41,7 @@ Running a test 1000 times on your own machine can be tedious and time consuming.

_Note: these instructions are mildly hacky for now, as we get run once semantics and logging they will get better_

There is a testing image `brendanburns/flake` up on the docker hub. We will use this image to test our fix.

Create a replication controller with the following config:

@@ -74,7 +74,7 @@ kubectl create -f ./controller.yaml
```

This will spin up 24 instances of the test. They will run to completion, then exit, and the kubelet will restart them, accumulating more and more runs of the test.
You can examine the recent runs of the test by calling `docker ps -a` and looking for tasks that exited with non-zero exit codes. Unfortunately, docker ps -a only keeps around the exit status of the last 15-20 containers with the same image, so you have to check them frequently.
You can use this script to automate checking for failures, assuming your cluster is running on GCE and has four nodes:

```sh
@@ -93,7 +93,7 @@ Eventually you will have sufficient runs for your purposes. At that point you ca
kubectl stop replicationcontroller flakecontroller
```

If you do a final check for flakes with `docker ps -a`, ignore tasks that exited -1, since that's what happens when you stop the replication controller.

Happy flake hunting!

@@ -55,14 +55,14 @@ release. It is likely long and many PRs aren't worth mentioning. If any of the
PRs were cherrypicked into patches on the last minor release, you should exclude
them from the current release's notes.

Open up `candidate-notes.md` in your favorite editor.

Remove, regroup, organize to your hearts content.


### 4) Update CHANGELOG.md

With the final markdown all set, cut and paste it to the top of `CHANGELOG.md`

### 5) Update the Release page

@@ -71,7 +71,7 @@ to get 30 sec. CPU profile.

## Contention profiling

To enable contention profiling you need to add line `rt.SetBlockProfileRate(1)` in addition to `m.mux.HandleFunc(...)` added before (`rt` stands for `runtime` in `master.go`). This enables 'debug/pprof/block' subpage, which can be used as an input to `go tool pprof`.


<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
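A sketch of consuming that subpage; the host and port are assumptions for a locally reachable apiserver:

```sh
# Fetch and analyze the blocking profile exposed at debug/pprof/block.
go tool pprof http://localhost:8080/debug/pprof/block
```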
@@ -98,10 +98,10 @@ AWS CloudFormation or EC2 with user data (cloud-config).

### Command line administration tool: `kubectl`

The cluster startup script will leave you with a `kubernetes` directory on your workstation.
Alternately, you can download the latest Kubernetes release from [this page](https://github.com/GoogleCloudPlatform/kubernetes/releases).

Next, add the appropriate binary folder to your `PATH` to access kubectl:

```bash
# OS X

@@ -89,7 +89,7 @@ can tweak some of these parameters by editing `cluster/azure/config-default.sh`.
The [kubectl](../../docs/user-guide/kubectl/kubectl.md) tool controls the Kubernetes cluster manager. It lets you inspect your cluster resources, create, delete, and update components, and much more.
You will use it to look at your new cluster and bring up example apps.

Add the appropriate binary folder to your `PATH` to access kubectl:

# OS X
export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH

@@ -85,7 +85,7 @@ To setup CentOS PXELINUX environment there is a complete [guide here](http://doc

sudo yum install tftp-server dhcp syslinux

2. `vi /etc/xinetd.d/tftp` to enable tftp service and change disable to 'no'
disable = no

3. Copy over the syslinux images we will need.

@@ -108,7 +108,7 @@ To setup CentOS PXELINUX environment there is a complete [guide here](http://doc
mkdir /tftpboot/pxelinux.cfg
touch /tftpboot/pxelinux.cfg/default

5. Edit the menu `vi /tftpboot/pxelinux.cfg/default`

default menu.c32
prompt 0

@@ -129,7 +129,7 @@ Now you should have a working PXELINUX setup to image CoreOS nodes. You can veri
This section describes how to setup the CoreOS images to live alongside a pre-existing PXELINUX environment.

1. Find or create the TFTP root directory that everything will be based off of.
* For this document we will assume `/tftpboot/` is our root directory.
2. Once we know and have our tftp root directory we will create a new directory structure for our CoreOS images.
3. Download the CoreOS PXE files provided by the CoreOS team.

@@ -143,7 +143,7 @@ This section describes how to setup the CoreOS images to live alongside a pre-ex
gpg --verify coreos_production_pxe.vmlinuz.sig
gpg --verify coreos_production_pxe_image.cpio.gz.sig

4. Edit the menu `vi /tftpboot/pxelinux.cfg/default` again

default menu.c32
prompt 0

@@ -176,7 +176,7 @@ This configuration file will now boot from local drive but have the option to PX

This section covers configuring the DHCP server to hand out our new images. In this case we are assuming that there are other servers that will boot alongside other images.

1. Add the `filename` to the _host_ or _subnet_ sections.

filename "/tftpboot/pxelinux.0";
@@ -217,17 +217,17 @@ We will be specifying the node configuration later in the guide.

## Kubernetes

To deploy our configuration we need to create an `etcd` master. To do so we want to pxe CoreOS with a specific cloud-config.yml. There are two options we have here.
1. Is to template the cloud config file and programmatically create new static configs for different cluster setups.
2. Have a service discovery protocol running in our stack to do auto discovery.

This demo we just make a static single `etcd` server to host our Kubernetes and `etcd` master servers.

Since we are OFFLINE here most of the helping processes in CoreOS and Kubernetes are then limited. To do our setup we will then have to download and serve up our binaries for Kubernetes in our local environment.

An easy solution is to host a small web server on the DHCP/TFTP host for all our binaries to make them available to the local CoreOS PXE machines.

To get this up and running we are going to setup a simple `apache` server to serve our binaries needed to bootstrap Kubernetes.

This is on the PXE server from the previous section:

@@ -265,7 +265,7 @@ To make the setup work, you need to replace a few placeholders:

### master.yml

On the PXE server make and fill in the variables `vi /var/www/html/coreos/pxe-cloud-config-master.yml`.


#cloud-config

@@ -486,7 +486,7 @@ On the PXE server make and fill in the variables ```vi /var/www/html/coreos/pxe-

### node.yml

On the PXE server make and fill in the variables `vi /var/www/html/coreos/pxe-cloud-config-slave.yml`.

#cloud-config
---

@@ -621,7 +621,7 @@ On the PXE server make and fill in the variables ```vi /var/www/html/coreos/pxe-

## New pxelinux.cfg file

Create a pxelinux target file for a _slave_ node: `vi /tftpboot/pxelinux.cfg/coreos-node-slave`

default coreos
prompt 1

@@ -634,7 +634,7 @@ Create a pxelinux target file for a _slave_ node: ```vi /tftpboot/pxelinux.cfg/c
kernel images/coreos/coreos_production_pxe.vmlinuz
append initrd=images/coreos/coreos_production_pxe_image.cpio.gz cloud-config-url=http://<pxe-host-ip>/coreos/pxe-cloud-config-slave.yml console=tty0 console=ttyS0 coreos.autologin=tty1 coreos.autologin=ttyS0

And one for the _master_ node: `vi /tftpboot/pxelinux.cfg/coreos-node-master`

default coreos
prompt 1

@@ -48,7 +48,7 @@ Use the [master.yaml](cloud-configs/master.yaml) and [node.yaml](cloud-configs/n

### AWS

*Attention:* Replace `<ami_image_id>` below for a [suitable version of CoreOS image for AWS](https://coreos.com/docs/running-coreos/cloud-providers/ec2/).

#### Provision the Master

@@ -94,7 +94,7 @@ aws ec2 run-instances \

### Google Compute Engine (GCE)

*Attention:* Replace `<gce_image_id>` below for a [suitable version of CoreOS image for Google Compute Engine](https://coreos.com/docs/running-coreos/cloud-providers/google-compute-engine/).

#### Provision the Master
@@ -66,10 +66,10 @@ Here's a diagram of what the final result will look like:
### Bootstrap Docker

This guide also uses a pattern of running two instances of the Docker daemon
1) A _bootstrap_ Docker instance which is used to start system daemons like `flanneld` and `etcd`
2) A _main_ Docker instance which is used for the Kubernetes infrastructure and user's scheduled containers

This pattern is necessary because the `flannel` daemon is responsible for setting up and managing the network that interconnects
all of the Docker containers created by Kubernetes. To achieve this, it must run outside of the _main_ Docker daemon. However,
it is still useful to use containers for deployment and management, so we create a simpler _bootstrap_ daemon to achieve this.

@@ -33,10 +33,10 @@ Documentation for other releases can be found at

## Installing a Kubernetes Master Node via Docker

We'll begin by setting up the master node. For the purposes of illustration, we'll assume that the IP of this machine is `${MASTER_IP}`

There are two main phases to installing the master:
* [Setting up `flanneld` and `etcd`](#setting-up-flanneld-and-etcd)
* [Starting the Kubernetes master components](#starting-the-kubernetes-master)

@@ -48,9 +48,9 @@ Please install Docker 1.6.2 or wait for Docker 1.7.1.

### Setup Docker-Bootstrap

We're going to use `flannel` to set up networking between Docker daemons. Flannel itself (and etcd on which it relies) will run inside of
Docker containers themselves. To achieve this, we need a separate "bootstrap" instance of the Docker daemon. This daemon will be started with
`--iptables=false` so that it can only run containers with `--net=host`. That's sufficient to bootstrap our system.

Run:
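A sketch of what such a bootstrap daemon invocation can look like; the socket path matches the `docker -H unix:///var/run/docker-bootstrap.sock` commands used later in this guide, while the remaining flags beyond `--iptables=false` are assumptions rather than values taken from this commit:

```sh
# Second Docker daemon on its own socket, used only for flannel and etcd.
sudo sh -c 'docker -d \
  -H unix:///var/run/docker-bootstrap.sock \
  --iptables=false \
  --bridge=none \
  --graph=/var/lib/docker-bootstrap \
  2> /var/log/docker-bootstrap.log 1> /dev/null &'
```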
@ -122,7 +122,7 @@ sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from
|
||||
|
||||
You now need to edit the docker configuration to activate new flags. Again, this is system specific.
|
||||
|
||||
This may be in ```/etc/default/docker``` or ```/etc/systemd/service/docker.service``` or it may be elsewhere.
|
||||
This may be in `/etc/default/docker` or `/etc/systemd/service/docker.service` or it may be elsewhere.
|
||||
|
||||
Regardless, you need to add the following to the docker command line:
|
||||
|
||||
@ -132,14 +132,14 @@ Regardless, you need to add the following to the docker command line:
|
||||
|
||||
#### Remove the existing Docker bridge
|
||||
|
||||
Docker creates a bridge named ```docker0``` by default. You need to remove this:
|
||||
Docker creates a bridge named `docker0` by default. You need to remove this:
|
||||
|
||||
```sh
|
||||
sudo /sbin/ifconfig docker0 down
|
||||
sudo brctl delbr docker0
|
||||
```
|
||||
|
||||
You may need to install the ```bridge-utils``` package for the ```brctl``` binary.
|
||||
You may need to install the `bridge-utils` package for the `brctl` binary.

#### Restart Docker

@ -190,7 +190,7 @@ NAME LABELS STATUS
127.0.0.1 kubernetes.io/hostname=127.0.0.1 Ready
```

If the status of the node is ```NotReady``` or ```Unknown``` please check that all of the containers you created are successfully running.
If the status of the node is `NotReady` or `Unknown` please check that all of the containers you created are successfully running.
If all else fails, ask questions on IRC at [#google-containers](http://webchat.freenode.net/?channels=google-containers).
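
One quick check is to list the containers under both daemons and read the logs of anything that has exited. This is a sketch; the bootstrap socket path matches the one used earlier in this guide.

```sh
# Containers managed by the main Docker daemon (kubelet, apiserver, proxy, ...)
sudo docker ps -a
# Containers managed by the bootstrap daemon (etcd, flanneld)
sudo docker -H unix:///var/run/docker-bootstrap.sock ps -a
# Inspect a failed container's output
sudo docker logs <container-id>
```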

@ -47,8 +47,8 @@ NAME LABELS STATUS
127.0.0.1 kubernetes.io/hostname=127.0.0.1 Ready
```

If the status of any node is ```Unknown``` or ```NotReady``` your cluster is broken, double check that all containers are running properly, and if all else fails, contact us on IRC at
[```#google-containers```](http://webchat.freenode.net/?channels=google-containers) for advice.
If the status of any node is `Unknown` or `NotReady` your cluster is broken, double check that all containers are running properly, and if all else fails, contact us on IRC at
[`#google-containers`](http://webchat.freenode.net/?channels=google-containers) for advice.

### Run an application

@ -56,7 +56,7 @@ If the status of any node is ```Unknown``` or ```NotReady``` your cluster is bro
kubectl -s http://localhost:8080 run nginx --image=nginx --port=80
```

now run ```docker ps``` you should see nginx running. You may need to wait a few minutes for the image to get pulled.
now run `docker ps` you should see nginx running. You may need to wait a few minutes for the image to get pulled.

### Expose it as a service

@ -37,10 +37,10 @@ Documentation for other releases can be found at

These instructions are very similar to the master set-up above, but they are duplicated for clarity.
You need to repeat these instructions for each node you want to join the cluster.
We will assume that the IP address of this node is ```${NODE_IP}``` and you have the IP address of the master in ```${MASTER_IP}``` that you created in the [master instructions](master.md).
We will assume that the IP address of this node is `${NODE_IP}` and you have the IP address of the master in `${MASTER_IP}` that you created in the [master instructions](master.md).

For each worker node, there are three steps:
* [Set up ```flanneld``` on the worker node](#set-up-flanneld-on-the-worker-node)
* [Set up `flanneld` on the worker node](#set-up-flanneld-on-the-worker-node)
* [Start kubernetes on the worker node](#start-kubernetes-on-the-worker-node)
* [Add the worker to the cluster](#add-the-node-to-the-cluster)

@ -106,7 +106,7 @@ sudo docker -H unix:///var/run/docker-bootstrap.sock exec <really-long-hash-from

You now need to edit the docker configuration to activate new flags. Again, this is system specific.

This may be in ```/etc/default/docker``` or ```/etc/systemd/service/docker.service``` or it may be elsewhere.
This may be in `/etc/default/docker` or `/etc/systemd/service/docker.service` or it may be elsewhere.

Regardless, you need to add the following to the docker command line:

@ -116,14 +116,14 @@ Regardless, you need to add the following to the docker command line:

#### Remove the existing Docker bridge

Docker creates a bridge named ```docker0``` by default. You need to remove this:
Docker creates a bridge named `docker0` by default. You need to remove this:

```sh
sudo /sbin/ifconfig docker0 down
sudo brctl delbr docker0
```

You may need to install the ```bridge-utils``` package for the ```brctl``` binary.
You may need to install the `bridge-utils` package for the `brctl` binary.

#### Restart Docker

@ -143,7 +143,7 @@ systemctl start docker

#### Run the kubelet

Again this is similar to the above, but the ```--api_servers``` now points to the master we set up in the beginning.
Again this is similar to the above, but the `--api_servers` now points to the master we set up in the beginning.

```sh
sudo docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.21.2 /hyperkube kubelet --api_servers=http://${MASTER_IP}:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=$(hostname -i)
@ -151,7 +151,7 @@ sudo docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.

#### Run the service proxy

The service proxy provides load-balancing between groups of containers defined by Kubernetes ```Services```
The service proxy provides load-balancing between groups of containers defined by Kubernetes `Services`

```sh
sudo docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.21.2 /hyperkube proxy --master=http://${MASTER_IP}:8080 --v=2
@ -105,7 +105,7 @@ NAME LABELS STATUS
127.0.0.1 <none> Ready
```

If you are running different kubernetes clusters, you may need to specify ```-s http://localhost:8080``` to select the local cluster.
If you are running different kubernetes clusters, you may need to specify `-s http://localhost:8080` to select the local cluster.

### Run an application

@ -113,7 +113,7 @@ If you are running different kubernetes clusters, you may need to specify ```-s
kubectl -s http://localhost:8080 run-container nginx --image=nginx --port=80
```

now run ```docker ps``` you should see nginx running. You may need to wait a few minutes for the image to get pulled.
now run `docker ps` you should see nginx running. You may need to wait a few minutes for the image to get pulled.

### Expose it as a service

@ -138,10 +138,10 @@ Note that you will need run this curl command on your boot2docker VM if you are

### A note on turning down your cluster

Many of these containers run under the management of the ```kubelet``` binary, which attempts to keep containers running, even if they fail. So, in order to turn down
Many of these containers run under the management of the `kubelet` binary, which attempts to keep containers running, even if they fail. So, in order to turn down
the cluster, you need to first kill the kubelet container, and then any other containers.

You may use ```docker ps -a | awk '{print $1}' | xargs docker kill```, note this removes _all_ containers running under Docker, so use with caution.
You may use `docker ps -a | awk '{print $1}' | xargs docker kill`, note this removes _all_ containers running under Docker, so use with caution.
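
As a sketch of that ordering (kubelet first, then everything else), assuming the kubelet container can be spotted by its command line in the `docker ps` output:

```sh
# Stop the kubelet first so it does not restart the other containers...
sudo docker kill $(sudo docker ps | grep kubelet | awk '{print $1}')
# ...then kill the remaining containers (removes ALL containers under Docker).
sudo docker ps -a | awk '{print $1}' | xargs sudo docker kill
```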

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@ -111,13 +111,13 @@ The next few steps will show you:

### Installing the kubernetes command line tools on your workstation

The cluster startup script will leave you with a running cluster and a ```kubernetes``` directory on your workstation.
The cluster startup script will leave you with a running cluster and a `kubernetes` directory on your workstation.
The next step is to make sure the `kubectl` tool is in your path.

The [kubectl](../user-guide/kubectl/kubectl.md) tool controls the Kubernetes cluster manager. It lets you inspect your cluster resources, create, delete, and update components, and much more.
You will use it to look at your new cluster and bring up example apps.

Add the appropriate binary folder to your ```PATH``` to access kubectl:
Add the appropriate binary folder to your `PATH` to access kubectl:

```bash
# OS X
@ -127,7 +127,7 @@ export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
```

**Note**: gcloud also ships with ```kubectl```, which by default is added to your path.
**Note**: gcloud also ships with `kubectl`, which by default is added to your path.
However the gcloud bundled kubectl version may be older than the one downloaded by the
get.k8s.io install script. We recommend you use the downloaded binary to avoid
potential issues with client/server version skew.
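
A quick way to see which binary you are picking up, and whether the client and server versions diverge (a sketch; output format varies by release):

```sh
# Which kubectl is first on the PATH?
which kubectl
# Compare client and server versions to spot skew.
kubectl version
```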

@ -180,7 +180,7 @@ disown -a

#### Validate KM Services

Add the appropriate binary folder to your ```PATH``` to access kubectl:
Add the appropriate binary folder to your `PATH` to access kubectl:

```bash
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH
@ -61,7 +61,7 @@ Running kubernetes with Vagrant (and VirtualBox) is an easy way to run/test/deve
2. [VMWare Fusion](https://www.vmware.com/products/fusion/) version 5 or greater as well as the appropriate [Vagrant VMWare Fusion provider](https://www.vagrantup.com/vmware)
3. [VMWare Workstation](https://www.vmware.com/products/workstation/) version 9 or greater as well as the [Vagrant VMWare Workstation provider](https://www.vagrantup.com/vmware)
4. [Parallels Desktop](https://www.parallels.com/products/desktop/) version 9 or greater as well as the [Vagrant Parallels provider](https://parallels.github.io/vagrant-parallels/)
5. libvirt with KVM and enable support of hardware virtualisation. [Vagrant-libvirt](https://github.com/pradels/vagrant-libvirt). For fedora provided official rpm, and possible to use ```yum install vagrant-libvirt```
5. libvirt with KVM and enable support of hardware virtualisation. [Vagrant-libvirt](https://github.com/pradels/vagrant-libvirt). For fedora provided official rpm, and possible to use `yum install vagrant-libvirt`

### Setup

@ -170,7 +170,7 @@ vagrant destroy

Once your Vagrant machines are up and provisioned, the first thing to do is to check that you can use the `kubectl.sh` script.

You may need to build the binaries first, you can do this with ```make```
You may need to build the binaries first, you can do this with `make`

```console
$ ./cluster/kubectl.sh get nodes
@ -375,7 +375,7 @@ export KUBERNETES_MINION_MEMORY=2048

#### I ran vagrant suspend and nothing works!

```vagrant suspend``` seems to mess up the network. This is not supported at this time.
`vagrant suspend` seems to mess up the network. This is not supported at this time.

#### I want vagrant to sync folders via nfs!

@ -62,7 +62,7 @@ Before you file an issue, please search existing issues to see if your issue is

## IRC

The Kubernetes team hangs out on IRC at [```#google-containers```](https://botbot.me/freenode/google-containers/) on freenode. Feel free to come and ask any and all questions there.
The Kubernetes team hangs out on IRC at [`#google-containers`](https://botbot.me/freenode/google-containers/) on freenode. Feel free to come and ask any and all questions there.

## Mailing List

@ -75,32 +75,32 @@ The first step in debugging a Pod is taking a look at it. Check the current sta
$ kubectl describe pods ${POD_NAME}
```

Look at the state of the containers in the pod. Are they all ```Running```? Have there been recent restarts?
Look at the state of the containers in the pod. Are they all `Running`? Have there been recent restarts?

Continue debugging depending on the state of the pods.

#### My pod stays pending

If a Pod is stuck in ```Pending``` it means that it can not be scheduled onto a node. Generally this is because
If a Pod is stuck in `Pending` it means that it can not be scheduled onto a node. Generally this is because
there are insufficient resources of one type or another that prevent scheduling. Look at the output of the
```kubectl describe ...``` command above. There should be messages from the scheduler about why it can not schedule
`kubectl describe ...` command above. There should be messages from the scheduler about why it can not schedule
your pod. Reasons include:

* **You don't have enough resources**: You may have exhausted the supply of CPU or Memory in your cluster, in this case
you need to delete Pods, adjust resource requests, or add new nodes to your cluster. See [Compute Resources document](compute-resources.md#my-pods-are-pending-with-event-message-failedscheduling) for more information.

* **You are using ```hostPort```**: When you bind a Pod to a ```hostPort``` there are a limited number of places that pod can be
scheduled. In most cases, ```hostPort``` is unnecessary, try using a Service object to expose your Pod. If you do require
```hostPort``` then you can only schedule as many Pods as there are nodes in your Kubernetes cluster.
* **You are using `hostPort`**: When you bind a Pod to a `hostPort` there are a limited number of places that pod can be
scheduled. In most cases, `hostPort` is unnecessary, try using a Service object to expose your Pod. If you do require
`hostPort` then you can only schedule as many Pods as there are nodes in your Kubernetes cluster.
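
For the resource-exhaustion case above, a quick way to see how much capacity each node has and what is already scheduled on it (a sketch; the exact fields printed vary by release):

```sh
# Show each node's capacity and the pods currently scheduled on it
kubectl describe nodes
# List your pods to find candidates to delete or resize
kubectl get pods
```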

#### My pod stays waiting

If a Pod is stuck in the ```Waiting``` state, then it has been scheduled to a worker node, but it can't run on that machine.
Again, the information from ```kubectl describe ...``` should be informative. The most common cause of ```Waiting``` pods is a failure to pull the image. There are three things to check:
If a Pod is stuck in the `Waiting` state, then it has been scheduled to a worker node, but it can't run on that machine.
Again, the information from `kubectl describe ...` should be informative. The most common cause of `Waiting` pods is a failure to pull the image. There are three things to check:
* Make sure that you have the name of the image correct
* Have you pushed the image to the repository?
* Run a manual ```docker pull <image>``` on your machine to see if the image can be pulled.
* Run a manual `docker pull <image>` on your machine to see if the image can be pulled.
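
To see exactly which image name and tag the kubelet is trying to pull, you can read it back from the pod spec and then try the pull by hand (a sketch; the pod name is a placeholder):

```sh
# Print the image(s) the pod asks for
kubectl get pod ${POD_NAME} -o yaml | grep 'image:'
# Try pulling one of them manually on the node
docker pull <image>
```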

#### My pod is crashing or otherwise unhealthy

@ -117,13 +117,13 @@ If your container has previously crashed, you can access the previous container'
$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
```

Alternately, you can run commands inside that container with ```exec```:
Alternately, you can run commands inside that container with `exec`:

```console
$ kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}
```

Note that ```-c ${CONTAINER_NAME}``` is optional and can be omitted for Pods that only contain a single container.
Note that `-c ${CONTAINER_NAME}` is optional and can be omitted for Pods that only contain a single container.

As an example, to look at the logs from a running Cassandra pod, you might run

@ -141,7 +141,7 @@ feature request on GitHub describing your use case and why these tools are insuf
Replication controllers are fairly straightforward. They can either create Pods or they can't. If they can't
create pods, then please refer to the [instructions above](#debugging-pods) to debug your pods.

You can also use ```kubectl describe rc ${CONTROLLER_NAME}``` to introspect events related to the replication
You can also use `kubectl describe rc ${CONTROLLER_NAME}` to introspect events related to the replication
controller.

### Debugging Services
@ -183,10 +183,10 @@ $ kubectl get pods --selector=name=nginx,type=frontend
to list pods that match this selector. Verify that the list matches the Pods that you expect to provide your Service.

If the list of pods matches expectations, but your endpoints are still empty, it's possible that you don't
have the right ports exposed. If your service has a ```containerPort``` specified, but the Pods that are
have the right ports exposed. If your service has a `containerPort` specified, but the Pods that are
selected don't have that port listed, then they won't be added to the endpoints list.

Verify that the pod's ```containerPort``` matches up with the Service's ```containerPort```
Verify that the pod's `containerPort` matches up with the Service's `containerPort`

#### Network traffic is not forwarded

@ -197,7 +197,7 @@ There are three things to
check:
* Are your pods working correctly? Look for restart count, and [debug pods](#debugging-pods)
* Can you connect to your pods directly? Get the IP address for the Pod, and try to connect directly to that IP
* Is your application serving on the port that you configured? Kubernetes doesn't do port remapping, so if your application serves on 8080, the ```containerPort``` field needs to be 8080.
* Is your application serving on the port that you configured? Kubernetes doesn't do port remapping, so if your application serves on 8080, the `containerPort` field needs to be 8080.
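
For the direct-connection check above, one way to find a pod's IP and probe it (a sketch; the port, pod name, and path are placeholders):

```sh
# Find the pod's IP address
kubectl describe pod ${POD_NAME} | grep -i 'IP'
# Try the application directly, bypassing the Service
curl http://<pod-ip>:8080/
```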

#### More information

@ -113,9 +113,9 @@ This hook is called immediately before a container is terminated. This event h

A single parameter named reason is passed to the handler which contains the reason for termination. Currently the valid values for reason are:

* ```Delete``` - indicating an API call to delete the pod containing this container.
* ```Health``` - indicating that a health check of the container failed.
* ```Dependency``` - indicating that a dependency for the container or the pod is missing, and thus, the container needs to be restarted. Examples include, the pod infra container crashing, or persistent disk failing for a container that mounts PD.
* `Delete` - indicating an API call to delete the pod containing this container.
* `Health` - indicating that a health check of the container failed.
* `Dependency` - indicating that a dependency for the container or the pod is missing, and thus, the container needs to be restarted. Examples include, the pod infra container crashing, or persistent disk failing for a container that mounts PD.

Eventually, user specified reasons may be [added to the API](https://github.com/GoogleCloudPlatform/kubernetes/issues/137).

@ -131,7 +131,7 @@ For hooks which have parameters, these parameters are passed to the event handle
Hook delivery is "at least one", which means that a hook may be called multiple times for any given event (e.g. "start" or "stop") and it is up to the hook implementer to be able to handle this
correctly.

We expect double delivery to be rare, but in some cases if the ```kubelet``` restarts in the middle of sending a hook, the hook may be resent after the kubelet comes back up.
We expect double delivery to be rare, but in some cases if the `kubelet` restarts in the middle of sending a hook, the hook may be resent after the kubelet comes back up.

Likewise, we only make a single delivery attempt. If (for example) an http hook receiver is down, and unable to take traffic, we do not make any attempts to resend.

@ -39,7 +39,7 @@ namespace using the [downward API](../downward-api.md).
## Step Zero: Prerequisites

This example assumes you have a Kubernetes cluster installed and running, and that you have
installed the ```kubectl``` command line tool somewhere in your path. Please see the [getting
installed the `kubectl` command line tool somewhere in your path. Please see the [getting
started](../../../docs/getting-started-guides/) for installation instructions for your platform.

## Step One: Create the pod
@ -34,21 +34,21 @@ Documentation for other releases can be found at
# Kubernetes User Guide: Managing Applications: Application Introspection and Debugging

Once your application is running, you’ll inevitably need to debug problems with it.
Earlier we described how you can use ```kubectl get pods``` to retrieve simple status information about
Earlier we described how you can use `kubectl get pods` to retrieve simple status information about
your pods. But there are a number of ways to get even more information about your application.

**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->

- [Kubernetes User Guide: Managing Applications: Application Introspection and Debugging](#kubernetes-user-guide-managing-applications-application-introspection-and-debugging)
- [Using ```kubectl describe pod``` to fetch details about pods](#using-kubectl-describe-pod-to-fetch-details-about-pods)
- [Using `kubectl describe pod` to fetch details about pods](#using-kubectl-describe-pod-to-fetch-details-about-pods)
- [Example: debugging Pending Pods](#example-debugging-pending-pods)
- [Example: debugging a down/unreachable node](#example-debugging-a-downunreachable-node)
- [What's next?](#whats-next)

<!-- END MUNGE: GENERATED_TOC -->

## Using ```kubectl describe pod``` to fetch details about pods
## Using `kubectl describe pod` to fetch details about pods

For this example we’ll use a ReplicationController to create two pods, similar to the earlier example.

@ -87,7 +87,7 @@ my-nginx-gy1ij 1/1 Running 0 1m
my-nginx-yv5cn 1/1 Running 0 1m
```

We can retrieve a lot more information about each of these pods using ```kubectl describe pod```. For example:
We can retrieve a lot more information about each of these pods using `kubectl describe pod`. For example:

```console
$ kubectl describe pod my-nginx-gy1ij
@ -150,7 +150,7 @@ my-nginx-iichp 0/1 Running 0 8s
my-nginx-tc2j9 0/1 Running 0 8s
```

To find out why the my-nginx-9unp9 pod is not running, we can use ```kubectl describe pod``` on the pending Pod and look at its events:
To find out why the my-nginx-9unp9 pod is not running, we can use `kubectl describe pod` on the pending Pod and look at its events:

```console
$ kubectl describe pod my-nginx-9unp9
@ -177,11 +177,11 @@ Events:
Thu, 09 Jul 2015 23:56:21 -0700 Fri, 10 Jul 2015 00:01:30 -0700 21 {scheduler } failedScheduling Failed for reason PodFitsResources and possibly others
```

Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason ```PodFitsResources``` (and possibly others). ```PodFitsResources``` means there were not enough resources for the Pod on any of the nodes. Due to the way the event is generated, there may be other reasons as well, hence "and possibly others."
Here you can see the event generated by the scheduler saying that the Pod failed to schedule for reason `PodFitsResources` (and possibly others). `PodFitsResources` means there were not enough resources for the Pod on any of the nodes. Due to the way the event is generated, there may be other reasons as well, hence "and possibly others."

To correct this situation, you can use ```kubectl scale``` to update your Replication Controller to specify four or fewer replicas. (Or you could just leave the one Pod pending, which is harmless.)
To correct this situation, you can use `kubectl scale` to update your Replication Controller to specify four or fewer replicas. (Or you could just leave the one Pod pending, which is harmless.)

In addition to ```kubectl describe pod```, another way to get extra information about a pod (beyond what is provided by ```kubectl get pod```) is to pass the ```-o yaml``` output format flag to ```kubectl get pod```. This will give you, in YAML format, even more information than ```kubectl describe pod```--essentially all of the information the system has about the Pod. Here you will see things like annotations (which are key-value metadata without the label restrictions, that is used internally by Kubernetes system components), restart policy, ports, and volumes.
In addition to `kubectl describe pod`, another way to get extra information about a pod (beyond what is provided by `kubectl get pod`) is to pass the `-o yaml` output format flag to `kubectl get pod`. This will give you, in YAML format, even more information than `kubectl describe pod`--essentially all of the information the system has about the Pod. Here you will see things like annotations (which are key-value metadata without the label restrictions, that is used internally by Kubernetes system components), restart policy, ports, and volumes.

```yaml
$ kubectl get pod my-nginx-i595c -o yaml
@ -247,7 +247,7 @@ status:

## Example: debugging a down/unreachable node

Sometimes when debugging it can be useful to look at the status of a node -- for example, because you've noticed strange behavior of a Pod that’s running on the node, or to find out why a Pod won’t schedule onto the node. As with Pods, you can use ```kubectl describe node``` and ```kubectl get node -o yaml``` to retrieve detailed information about nodes. For example, here's what you'll see if a node is down (disconnected from the network, or kubelet dies and won't restart, etc.). Notice the events that show the node is NotReady, and also notice that the pods are no longer running (they are evicted after five minutes of NotReady status).
Sometimes when debugging it can be useful to look at the status of a node -- for example, because you've noticed strange behavior of a Pod that’s running on the node, or to find out why a Pod won’t schedule onto the node. As with Pods, you can use `kubectl describe node` and `kubectl get node -o yaml` to retrieve detailed information about nodes. For example, here's what you'll see if a node is down (disconnected from the network, or kubelet dies and won't restart, etc.). Notice the events that show the node is NotReady, and also notice that the pods are no longer running (they are evicted after five minutes of NotReady status).

```console
$ kubectl get nodes
@ -68,7 +68,7 @@ These are just examples; you are free to develop your own conventions.

## Syntax and character set

_Labels_ are key value pairs. Valid label keys have two segments: an optional prefix and name, separated by a slash (`/`). The name segment is required and must be 63 characters or less, beginning and ending with an alphanumeric character (`[a-z0-9A-Z]`) with dashes (`-`), underscores (`_`), dots (`.`), and alphanumerics between. The prefix is optional. If specified, the prefix must be a DNS subdomain: a series of DNS labels separated by dots (`.`), not longer than 253 characters in total, followed by a slash (`/`).
If the prefix is omitted, the label key is presumed to be private to the user. Automated system components (e.g. ```kube-scheduler```, ```kube-controller-manager```, ```kube-apiserver```, ```kubectl```, or other third-party automation) which add labels to end-user objects must specify a prefix. The `kubernetes.io/` prefix is reserved for kubernetes core components.
If the prefix is omitted, the label key is presumed to be private to the user. Automated system components (e.g. `kube-scheduler`, `kube-controller-manager`, `kube-apiserver`, `kubectl`, or other third-party automation) which add labels to end-user objects must specify a prefix. The `kubernetes.io/` prefix is reserved for kubernetes core components.

Valid label values must be 63 characters or less and must be empty or begin and end with an alphanumeric character (`[a-z0-9A-Z]`) with dashes (`-`), underscores (`_`), dots (`.`), and alphanumerics between.
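
As an illustration of the prefix rule (a sketch; the pod name and the `example.com/` prefix are placeholders, not reserved values):

```sh
# Unprefixed key: presumed private to the user
kubectl label pods my-pod environment=production
# Prefixed key: the form automated components are expected to use
kubectl label pods my-pod example.com/managed-by=my-controller
```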

@ -85,8 +85,8 @@ kube-system <none> Active
```

Kubernetes starts with two initial namespaces:
* ```default``` The default namespace for objects with no other namespace
* ```kube-system``` The namespace for objects created by the Kubernetes system
* `default` The default namespace for objects with no other namespace
* `kube-system` The namespace for objects created by the Kubernetes system

You can also get the summary of a specific namespace using:

@ -121,14 +121,14 @@ a *Namespace*.

See [Admission control: Limit Range](../design/admission_control_limit_range.md)

A namespace can be in one of two phases:
* ```Active``` the namespace is in use
* `Active` the namespace is in use
* `Terminating` the namespace is being deleted, and can not be used for new objects
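
To see which phase a namespace is in, you can read its status back (a sketch; `my-namespace` is a placeholder):

```sh
kubectl get namespaces my-namespace -o yaml | grep -A 1 'status:'
```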

See the [design doc](../design/namespaces.md#phases) for more details.

### Creating a new namespace

To create a new namespace, first create a new YAML file called ```my-namespace.yaml``` with the contents:
To create a new namespace, first create a new YAML file called `my-namespace.yaml` with the contents:

```yaml
apiVersion: v1
@ -139,7 +139,7 @@ metadata:

Note that the name of your namespace must be a DNS compatible label.

More information on the ```finalizers``` field can be found in the namespace [design doc](../design/namespaces.md#finalizers).
More information on the `finalizers` field can be found in the namespace [design doc](../design/namespaces.md#finalizers).

Then run:

@ -149,7 +149,7 @@ $ kubectl create -f ./my-namespace.yaml

### Setting the namespace for a request

To temporarily set the namespace for a request, use the ```--namespace``` flag.
To temporarily set the namespace for a request, use the `--namespace` flag.

For example:

@ -185,13 +185,13 @@ $ kubectl delete namespaces <insert-some-namespace-name>

**WARNING, this deletes _everything_ under the namespace!**

This delete is asynchronous, so for a time you will see the namespace in the ```Terminating``` state.
This delete is asynchronous, so for a time you will see the namespace in the `Terminating` state.

## Namespaces and DNS

When you create a [Service](services.md), it creates a corresponding [DNS entry](../admin/dns.md)1.
This entry is of the form ```<service-name>.<namespace-name>.cluster.local```, which means
that if a container just uses ```<service-name>``` it will resolve to the service which
This entry is of the form `<service-name>.<namespace-name>.cluster.local`, which means
that if a container just uses `<service-name>` it will resolve to the service which
is local to a namespace. This is useful for using the same configuration across
multiple namespaces such as Development, Staging and Production. If you want to reach
across namespaces, you need to use the fully qualified domain name (FQDN).
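
A quick way to see the difference from inside a pod (a sketch; `my-service` and `staging` are placeholder names, and the cluster domain is assumed to be the default `cluster.local`):

```sh
# Resolves to the service in the *current* pod's namespace
nslookup my-service
# Resolves to the service in the "staging" namespace, regardless of where you run it
nslookup my-service.staging.cluster.local
```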

@ -45,11 +45,11 @@ See [Persistent Storage design document](../../design/persistent-storage.md) for
A Persistent Volume (PV) in Kubernetes represents a real piece of underlying storage capacity in the infrastructure. Cluster administrators
must first create storage (create their Google Compute Engine (GCE) disks, export their NFS shares, etc.) in order for Kubernetes to mount it.

PVs are intended for "network volumes" like GCE Persistent Disks, NFS shares, and AWS ElasticBlockStore volumes. ```HostPath``` was included
for ease of development and testing. You'll create a local ```HostPath``` for this example.
PVs are intended for "network volumes" like GCE Persistent Disks, NFS shares, and AWS ElasticBlockStore volumes. `HostPath` was included
for ease of development and testing. You'll create a local `HostPath` for this example.

> IMPORTANT! For ```HostPath``` to work, you will need to run a single node cluster. Kubernetes does not
support local storage on the host at this time. There is no guarantee your pod ends up on the correct node where the ```HostPath``` resides.
> IMPORTANT! For `HostPath` to work, you will need to run a single node cluster. Kubernetes does not
support local storage on the host at this time. There is no guarantee your pod ends up on the correct node where the `HostPath` resides.
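
Before pointing a `HostPath` volume at a directory, the directory has to exist on the node with some test content in it. A minimal sketch, assuming `/tmp/data01` (a placeholder) as the path the PV will reference:

```sh
# Prepare a directory on the (single) node for the HostPath volume
sudo mkdir -p /tmp/data01
echo 'I love Kubernetes storage!' | sudo tee /tmp/data01/index.html
```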

@ -124,7 +124,7 @@ I love Kubernetes storage!
```

Hopefully this simple guide is enough to get you started with PersistentVolumes. If you have any questions, join
[```#google-containers```](https://botbot.me/freenode/google-containers/) on IRC and ask!
[`#google-containers`](https://botbot.me/freenode/google-containers/) on IRC and ask!

Enjoy!

@ -38,7 +38,7 @@ Following this example, you will create a [secret](../secrets.md) and a [pod](..

## Step Zero: Prerequisites

This example assumes you have a Kubernetes cluster installed and running, and that you have
installed the ```kubectl``` command line tool somewhere in your path. Please see the [getting
installed the `kubectl` command line tool somewhere in your path. Please see the [getting
started](../../../docs/getting-started-guides/) for installation instructions for your platform.

## Step One: Create the secret
@ -47,7 +47,7 @@ is *not* opened by default.

Google Compute Engine firewalls are documented [elsewhere](https://cloud.google.com/compute/docs/networking#firewalls_1).

You can add a firewall with the ```gcloud``` command line tool:
You can add a firewall with the `gcloud` command line tool:

```console
$ gcloud compute firewall-rules create my-rule --allow=tcp:<port>
@ -85,7 +85,7 @@ $ cd kubernetes
$ kubectl create -f ./replication.yaml
```

Where ```replication.yaml``` contains:
Where `replication.yaml` contains:

```yaml
apiVersion: v1
@ -174,7 +174,7 @@ Disk](http://cloud.google.com/compute/docs/disks) into your pod. Unlike
preserved and the volume is merely unmounted. This means that a PD can be
pre-populated with data, and that data can be "handed off" between pods.

__Important: You must create a PD using ```gcloud``` or the GCE API or UI
__Important: You must create a PD using `gcloud` or the GCE API or UI
before you can use it__

There are some restrictions when using a `gcePersistentDisk`:
@ -230,7 +230,7 @@ volume are preserved and the volume is merely unmounted. This means that an
EBS volume can be pre-populated with data, and that data can be "handed off"
between pods.

__Important: You must create an EBS volume using ```aws ec2 create-volume``` or
__Important: You must create an EBS volume using `aws ec2 create-volume` or
the AWS API before you can use it__

There are some restrictions when using an awsElasticBlockStore volume: