Copy edits for typos

Ed Costello
2015-10-29 14:36:29 -04:00
parent d20ab89bd6
commit f968c593e3
21 changed files with 30 additions and 30 deletions

@@ -188,7 +188,7 @@ For each pending deployment, it will:
and the old RCs have been ramped down to 0.
6. Cleanup.
-DeploymentController is stateless so that it can recover incase it crashes during a deployment.
+DeploymentController is stateless so that it can recover in case it crashes during a deployment.
### MinReadySeconds

@@ -250,7 +250,7 @@ defined as:
> 3. It must be possible to round-trip your change (convert to different API versions and back) with
> no loss of information.
-Previous versions of this proposal attempted to deal with backward compatiblity by defining
+Previous versions of this proposal attempted to deal with backward compatibility by defining
the affect of setting the pod-level fields on the container-level fields. While trying to find
consensus on this design, it became apparent that this approach was going to be extremely complex
to implement, explain, and support. Instead, we will approach backward compatibility as follows:

@@ -119,7 +119,7 @@ Supporting other platforms:
Protecting containers and guarantees:
- **Control loops**: The OOM score assignment is not perfect for burstable containers, and system OOM kills are expensive. TODO: Add a control loop to reduce memory pressure, while ensuring guarantees for various containers.
-- **Kubelet, Kube-proxy, Docker daemon protection**: If a system is overcommitted with memory guaranteed containers, then all prcoesses will have an OOM_SCORE of 0. So Docker daemon could be killed instead of a container or pod being killed. TODO: Place all user-pods into a separate cgroup, and set a limit on the memory they can consume. Initially, the limits can be based on estimated memory usage of Kubelet, Kube-proxy, and CPU limits, eventually we can monitor the resources they consume.
+- **Kubelet, Kube-proxy, Docker daemon protection**: If a system is overcommitted with memory guaranteed containers, then all processes will have an OOM_SCORE of 0. So Docker daemon could be killed instead of a container or pod being killed. TODO: Place all user-pods into a separate cgroup, and set a limit on the memory they can consume. Initially, the limits can be based on estimated memory usage of Kubelet, Kube-proxy, and CPU limits, eventually we can monitor the resources they consume.
- **OOM Assignment Races**: We cannot set OOM_SCORE_ADJ of a process until it has launched. This could lead to races. For example, suppose that a memory burstable container is using 70% of the systems memory, and another burstable container is using 30% of the systems memory. A best-effort burstable container attempts to launch on the Kubelet. Initially the best-effort container is using 2% of memory, and has an OOM_SCORE_ADJ of 20. So its OOM_SCORE is lower than the burstable pod using 70% of system memory. The burstable pod will be evicted by the best-effort pod. Short-term TODO: Implement a restart policy where best-effort pods are immediately evicted if OOM killed, but burstable pods are given a few retries. Long-term TODO: push support for OOM scores in cgroups to the upstream Linux kernel.
- **Swap Memory**: The QoS proposal assumes that swap memory is disabled. If swap is enabled, then resource guarantees (for pods that specify resource requirements) will not hold. For example, suppose 2 guaranteed pods have reached their memory limit. They can start allocating memory on swap space. Eventually, if there isnt enough swap space, processes in the pods might get killed. TODO: ensure that swap space is disabled on our cluster setups scripts.
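
For context on the race above: OOM_SCORE_ADJ is applied by writing a value into /proc/&lt;pid&gt;/oom_score_adj after the container's process already exists, so there is a window in which the kernel still sees the default score. A minimal Go sketch of that write; the helper and the adjustment value are illustrative, not the actual Kubernetes policy:

```go
package main

import (
	"fmt"
	"os"
)

// setOOMScoreAdj writes an oom_score_adj value for an already-running process.
// It can only run once the process exists, which is the source of the race
// described above: between process start and this write, the kernel still
// scores the process with its default adjustment.
func setOOMScoreAdj(pid int, value int) error {
	path := fmt.Sprintf("/proc/%d/oom_score_adj", pid)
	return os.WriteFile(path, []byte(fmt.Sprintf("%d", value)), 0644)
}

func main() {
	// Illustrative value only: a high adjustment marks the process as an early
	// OOM-kill candidate, the way a best-effort container would be treated.
	if err := setOOMScoreAdj(os.Getpid(), 999); err != nil {
		fmt.Fprintln(os.Stderr, "failed to set oom_score_adj:", err)
	}
}
```
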
@@ -128,7 +128,7 @@ Killing and eviction mechanics:
- **Out of Resource Eviction**: If a container in a multi-container pod fails, we might want restart the entire pod instead of just restarting the container. In some cases (e.g. if a memory best-effort container is out of resource killed), we might change pods to "failed" phase and pods might need to be evicted. TODO: Draft a policy for out of resource eviction and implement it.
Maintaining CPU performance:
-- **CPU-sharing Issues** Suppose that a node is running 2 container: a container A requesting for 50% of CPU (but without a CPU limit), and a container B not requesting for resoruces. Suppose that both pods try to use as much CPU as possible. After the proposal is implemented, A will get 100% of the CPU, and B will get around 0% of the CPU. However, a fairer scheme would give the Burstable container 75% of the CPU and the Best-Effort container 25% of the CPU (since resources past the Burstable containers request are not guaranteed). TODO: think about whether this issue to be solved, implement a solution.
+- **CPU-sharing Issues** Suppose that a node is running 2 container: a container A requesting for 50% of CPU (but without a CPU limit), and a container B not requesting for resources. Suppose that both pods try to use as much CPU as possible. After the proposal is implemented, A will get 100% of the CPU, and B will get around 0% of the CPU. However, a fairer scheme would give the Burstable container 75% of the CPU and the Best-Effort container 25% of the CPU (since resources past the Burstable containers request are not guaranteed). TODO: think about whether this issue to be solved, implement a solution.
- **CPU kills**: System tasks or daemons like the Kubelet could consume more CPU, and we won't be able to guarantee containers the CPU amount they requested. If the situation persists, we might want to kill the container. TODO: Draft a policy for CPU usage killing and implement it.
- **CPU limits**: Enabling CPU limits can be problematic, because processes might be hard capped and might stall for a while. TODO: Enable CPU limits intelligently using CPU quota and core allocation.
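
The proportional split in the CPU-sharing item above comes from mapping each container's CPU request onto cgroup cpu.shares; a container with no request is left at the kernel minimum, which is why it can be starved down to roughly 0%. A rough sketch of that mapping, assuming the conventional cgroup constants (1024 shares per CPU, floor of 2), not a definitive implementation:

```go
package main

import "fmt"

const (
	sharesPerCPU = 1024 // conventional cpu.shares value for one full CPU
	minShares    = 2    // kernel-imposed floor for cpu.shares
)

// milliCPUToShares maps a CPU request in millicores onto a cpu.shares value.
// A container that requests nothing falls back to the minimum, so under
// contention it is entitled to almost no CPU relative to requesting containers.
func milliCPUToShares(milliCPU int64) int64 {
	if milliCPU == 0 {
		return minShares
	}
	shares := milliCPU * sharesPerCPU / 1000
	if shares < minShares {
		return minShares
	}
	return shares
}

func main() {
	fmt.Println(milliCPUToShares(500)) // container A: 500m request -> 512 shares
	fmt.Println(milliCPUToShares(0))   // container B: no request   -> 2 shares
}
```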

@@ -64,7 +64,7 @@ Goals of this design:
### Docker
Docker uses a base SELinux context and calculates a unique MCS label per container. The SELinux
-context of a container can be overriden with the `SecurityOpt` api that allows setting the different
+context of a container can be overridden with the `SecurityOpt` api that allows setting the different
parts of the SELinux context individually.
Docker has functionality to relabel bind-mounts with a usable SElinux and supports two different
@@ -73,7 +73,7 @@ use-cases:
1. The `:Z` bind-mount flag, which tells Docker to relabel a bind-mount with the container's
SELinux context
2. The `:z` bind-mount flag, which tells Docker to relabel a bind-mount with the container's
-SElinux context, but remove the MCS labels, making the volume shareable beween containers
+SElinux context, but remove the MCS labels, making the volume shareable between containers
We should avoid using the `:z` flag, because it relaxes the SELinux context so that any container
(from an SELinux standpoint) can use the volume.
@@ -200,7 +200,7 @@ From the above, we know that label management must be applied:
Volumes should be relabeled with the correct SELinux context. Docker has this capability today; it
is desireable for other container runtime implementations to provide similar functionality.
-Relabeling should be an optional aspect of a volume plugin to accomodate:
+Relabeling should be an optional aspect of a volume plugin to accommodate:
1. volume types for which generalized relabeling support is not sufficient
2. testing for each volume plugin individually
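
To make the `:Z`/`:z` distinction concrete, the sketch below assembles a Docker-style `host:container:option` bind-mount string; the helper and constant names are hypothetical and only illustrate which relabel option the design above recommends:

```go
package main

import "fmt"

// relabelMode selects how Docker should relabel a bind-mounted volume.
type relabelMode string

const (
	relabelPrivate relabelMode = "Z" // relabel with the container's full SELinux context (MCS kept)
	relabelShared  relabelMode = "z" // relabel without MCS labels; any container can use the volume
)

// bindSpec builds a host:container:option bind-mount string. The design above
// recommends avoiding the shared (":z") mode because it relaxes SELinux
// isolation between containers.
func bindSpec(hostPath, containerPath string, mode relabelMode) string {
	return fmt.Sprintf("%s:%s:%s", hostPath, containerPath, mode)
}

func main() {
	fmt.Println(bindSpec("/var/lib/data", "/data", relabelPrivate)) // /var/lib/data:/data:Z
}
```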

@@ -45,7 +45,7 @@ Goals of this design:
1. Enumerate the different use-cases for volume usage in pods
2. Define the desired goal state for ownership and permission management in Kubernetes
-3. Describe the changes necessary to acheive desired state
+3. Describe the changes necessary to achieve desired state
## Constraints and Assumptions
@@ -250,7 +250,7 @@ override the primary GID and should be safe to use in images that expect GID 0.
### Setting ownership and permissions on volumes
For `EmptyDir`-based volumes and unshared storage, `chown` and `chmod` on the node are sufficient to
-set ownershp and permissions. Shared storage is different because:
+set ownership and permissions. Shared storage is different because:
1. Shared storage may not live on the node a pod that uses it runs on
2. Shared storage may be externally managed
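
For the `EmptyDir` and unshared-storage case above, the whole operation reduces to a recursive chown/chmod on the node. A minimal sketch, assuming a placeholder group ID and volume path, and ignoring the shared-storage caveats listed above:

```go
package main

import (
	"io/fs"
	"os"
	"path/filepath"
)

// setVolumeOwnership recursively hands a local volume directory to the given
// group and sets the setgid bit on directories so new files inherit the group.
// This is only sufficient for node-local volumes such as EmptyDir; shared or
// externally managed storage needs different handling, as described above.
func setVolumeOwnership(root string, gid int) error {
	return filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		// Keep the owner (-1) and only change the group.
		if err := os.Chown(path, -1, gid); err != nil {
			return err
		}
		mode := os.FileMode(0660)
		if d.IsDir() {
			mode = 0770 | os.ModeSetgid
		}
		return os.Chmod(path, mode)
	})
}

func main() {
	// Placeholder path and group ID, for illustration only.
	if err := setVolumeOwnership("/var/lib/kubelet/pods/example/volumes/emptydir", 1234); err != nil {
		os.Exit(1)
	}
}
```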