Cleanup docs.

- Fix typo in services-firewalls
- Fix incorrect comma usage
- s/minion/node/
- Link deadlock to wikipedia article
- Fix livenessProbe capitalization
- Change API reference quoting from " to `
Tim St. Clair 2015-07-20 15:52:14 -07:00
parent 220cc6a86c
commit ca32e651bd
3 changed files with 13 additions and 13 deletions

View File

@@ -33,7 +33,7 @@ Documentation for other releases can be found at
 # Services and Firewalls
-Many cloud providers (e.g. Google Compute Engine) define firewalls that help keep prevent inadvertent
+Many cloud providers (e.g. Google Compute Engine) define firewalls that help prevent inadvertent
 exposure to the internet. When exposing a service to the external world, you may need to open up
 one or more ports in these firewalls to serve traffic. This document describes this process, as
 well as any provider specific details that may be necessary.
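
For readers of this hunk: the port you open in the provider's firewall is the one the service actually exposes. A minimal sketch of a service exposed on a fixed NodePort, whose port you would then allow through the firewall (the name and port numbers are illustrative assumptions, not taken from the patched doc):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx            # illustrative name, not from the doc being patched
spec:
  type: NodePort
  selector:
    app: my-nginx
  ports:
    - port: 80              # port exposed inside the cluster
      targetPort: 80        # port the container listens on
      nodePort: 30080       # node port to open in the cloud provider's firewall
```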

View File

@@ -124,7 +124,7 @@ That's great for a simple static web server, but what about persistent storage?
 The container file system only lives as long as the container does. So if your app's state needs to survive relocation, reboots, and crashes, you'll need to configure some persistent storage.
-For this example, we'll be creating a Redis pod, with a named volume and volume mount that defines the path to mount the volume.
+For this example we'll be creating a Redis pod with a named volume and volume mount that defines the path to mount the volume.
 1. Define a volume:
@@ -134,7 +134,7 @@ For this example, we'll be creating a Redis pod, with a named volume
 emptyDir: {}
 ```
-1. Define a volume mount within a container definition:
+2. Define a volume mount within a container definition:
 ```yaml
 volumeMounts:
@@ -170,7 +170,7 @@ Notes:
 ##### Volume Types
 - **EmptyDir**: Creates a new directory that will persist across container failures and restarts.
-- **HostPath**: Mounts an existing directory on the minion's file system (e.g. `/var/logs`).
+- **HostPath**: Mounts an existing directory on the node's file system (e.g. `/var/logs`).
 See [volumes](../../../docs/user-guide/volumes.md) for more details.
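
The hunks above only show fragments of the Redis example. A self-contained sketch of how a named volume and a matching volumeMount fit together in one pod spec (the pod name, image, and mount path are assumptions for illustration, not the doc's exact example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
    - name: redis
      image: redis
      volumeMounts:
        - name: redis-storage      # must match a volume name declared below
          mountPath: /data/redis   # path where the volume appears in the container
  volumes:
    - name: redis-storage
      emptyDir: {}                 # new directory that persists across container restarts on the same node
```

An emptyDir volume lives as long as the pod stays on its node; switching the volume type to hostPath would instead mount an existing directory from the node's file system.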

View File

@@ -223,9 +223,9 @@ For more information, see [Services](../services.md).
 When I write code it never crashes, right? Sadly the [Kubernetes issues list](https://github.com/GoogleCloudPlatform/kubernetes/issues) indicates otherwise...
 Rather than trying to write bug-free code, a better approach is to use a management system to perform periodic health checking
-and repair of your application. That way, a system, outside of your application itself, is responsible for monitoring the
+and repair of your application. That way a system outside of your application itself is responsible for monitoring the
-application and taking action to fix it. It's important that the system be outside of the application, since of course, if
+application and taking action to fix it. It's important that the system be outside of the application, since if
-your application fails, and the health checking agent is part of your application, it may fail as well, and you'll never know.
+your application fails and the health checking agent is part of your application, it may fail as well and you'll never know.
 In Kubernetes, the health check monitor is the Kubelet agent.
 #### Process Health Checking
@@ -237,7 +237,7 @@ Kubernetes.
 #### Application Health Checking
-However, in many cases, this low-level health checking is insufficient. Consider for example, the following code:
+However, in many cases this low-level health checking is insufficient. Consider, for example, the following code:
 ```go
 lockOne := sync.Mutex{}
@@ -253,8 +253,8 @@ lockTwo.Lock();
 lockOne.Lock();
 ```
-This is a classic example of a problem in computer science known as "Deadlock". From Docker's perspective your application is
+This is a classic example of a problem in computer science known as ["Deadlock"](https://en.wikipedia.org/wiki/Deadlock). From Docker's perspective your application is
-still operating, the process is still running, but from your application's perspective, your code is locked up, and will never respond correctly.
+still operating and the process is still running, but from your application's perspective your code is locked up and will never respond correctly.
 To address this problem, Kubernetes supports user implemented application health-checks. These checks are performed by the
 Kubelet to ensure that your application is operating correctly for a definition of "correctly" that _you_ provide.
@@ -265,9 +265,9 @@ Currently, there are three types of application health checks that you can choos
 * Container Exec - The Kubelet will execute a command inside your container. If it exits with status 0 it will be considered a success. See health check examples [here](../liveness/).
 * TCP Socket - The Kubelet will attempt to open a socket to your container. If it can establish a connection, the container is considered healthy, if it can't it is considered a failure.
-In all cases, if the Kubelet discovers a failure, the container is restarted.
+In all cases, if the Kubelet discovers a failure the container is restarted.
-The container health checks are configured in the "LivenessProbe" section of your container config. There you can also specify an "initialDelaySeconds" that is a grace period from when the container is started to when health checks are performed, to enable your container to perform any necessary initialization.
+The container health checks are configured in the `livenessProbe` section of your container config. There you can also specify an `initialDelaySeconds` that is a grace period from when the container is started to when health checks are performed, to enable your container to perform any necessary initialization.
 Here is an example config for a pod with an HTTP health check ([pod-with-http-healthcheck.yaml](pod-with-http-healthcheck.yaml)):
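
The referenced pod-with-http-healthcheck.yaml is not included in this diff; a minimal sketch of a livenessProbe with an HTTP check and an initialDelaySeconds grace period, assuming an nginx container and a /_status/healthz path purely for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-healthcheck
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      livenessProbe:
        httpGet:
          path: /_status/healthz   # endpoint the Kubelet polls; illustrative
          port: 80
        initialDelaySeconds: 30    # grace period before the first probe runs
        timeoutSeconds: 1          # how long a probe may take before counting as a failure
```

If the probe fails, the Kubelet restarts the container, matching the behavior described above.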