Copy edits for typos
commit f968c593e3 (parent d20ab89bd6)
@@ -66,7 +66,7 @@ pod (UID, container name) pair is allowed to have, less than zero for no limit.
 `MaxContainers` is the max number of total dead containers, less than zero for no limit as well.
 
 kubelet sorts out containers which are unidentified or stay out of bounds set by previous
-mentioned three flags. Gernerally the oldest containers are removed first. Since we take both
+mentioned three flags. Generally the oldest containers are removed first. Since we take both
 `MaxPerPodContainer` and `MaxContainers` into consideration, it could happen when they
 have conflict -- retaining the max number of containers per pod goes out of range set by max
 number of global dead containers. In this case, we would sacrifice the `MaxPerPodContainer`
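The retention policy this hunk describes (a per-pod cap applied first, then a global cap that may override it, oldest dead containers evicted first) can be sketched in a few lines of Go. This is an illustrative reconstruction only; the `deadContainer` type and `containersToRemove` function are invented for the example and are not the kubelet's actual code.

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// deadContainer is a hypothetical record of an exited container.
type deadContainer struct {
	podUID     string
	name       string
	finishedAt time.Time
}

// containersToRemove returns the dead containers a policy like the one quoted
// above would evict: first trim each (pod UID, container name) pair down to
// maxPerPodContainer, then trim the remaining set down to maxContainers,
// removing the oldest containers first. Negative limits mean "no limit".
func containersToRemove(dead []deadContainer, maxPerPodContainer, maxContainers int) []deadContainer {
	byPod := map[string][]deadContainer{}
	for _, c := range dead {
		key := c.podUID + "/" + c.name
		byPod[key] = append(byPod[key], c)
	}

	var evict, kept []deadContainer
	for _, cs := range byPod {
		sort.Slice(cs, func(i, j int) bool { return cs[i].finishedAt.Before(cs[j].finishedAt) })
		if maxPerPodContainer >= 0 && len(cs) > maxPerPodContainer {
			evict = append(evict, cs[:len(cs)-maxPerPodContainer]...)
			cs = cs[len(cs)-maxPerPodContainer:]
		}
		kept = append(kept, cs...)
	}

	// Global cap: if the per-pod limit still leaves too many containers,
	// sacrifice the per-pod guarantee and keep only the newest maxContainers.
	sort.Slice(kept, func(i, j int) bool { return kept[i].finishedAt.Before(kept[j].finishedAt) })
	if maxContainers >= 0 && len(kept) > maxContainers {
		evict = append(evict, kept[:len(kept)-maxContainers]...)
	}
	return evict
}

func main() {
	now := time.Now()
	dead := []deadContainer{
		{"pod-a", "web", now.Add(-3 * time.Hour)},
		{"pod-a", "web", now.Add(-2 * time.Hour)},
		{"pod-b", "db", now.Add(-1 * time.Hour)},
	}
	fmt.Println(containersToRemove(dead, 1, 1))
}
```

With `maxPerPodContainer=1` and `maxContainers=1`, only the single newest dead container survives, which is exactly the "sacrifice the `MaxPerPodContainer`" case the quoted text mentions.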
@@ -25733,7 +25733,7 @@ span.icon > [class^="icon-"], span.icon > [class*=" icon-"] { cursor: default; }
 </div>
 <div id="footer">
 <div id="footer-text">
-Last updated 2015-12-15 06:44:31 UTC
+Last updated 2015-12-22 14:29:57 UTC
 </div>
 </div>
 </body>
@@ -81,7 +81,7 @@ in more detail in the [API Changes documentation](devel/api_changes.md#alpha-bet
 - Support for the overall feature will not be dropped, though details may change.
 - The schema and/or semantics of objects may change in incompatible ways in a subsequent beta or stable release. When this happens,
 we will provide instructions for migrating to the next version. This may require deleting, editing, and re-creating
-API objects. The editing process may require some thought. This may require downtime for appplications that rely on the feature.
+API objects. The editing process may require some thought. This may require downtime for applications that rely on the feature.
 - Recommended for only non-business-critical uses because of potential for incompatible changes in subsequent releases. If you have
 multiple clusters which can be upgraded independently, you may be able to relax this restriction.
 - **Please do try our beta features and give feedback on them! Once they exit beta, it may not be practical for us to make more changes.**
@@ -95,7 +95,7 @@ you with sufficient instance storage for your needs.
 
 Note: The master uses a persistent volume ([etcd](architecture.md#etcd)) to track
 its state. Similar to nodes, containers are mostly run against instance
-storage, except that we repoint some important data onto the peristent volume.
+storage, except that we repoint some important data onto the persistent volume.
 
 The default storage driver for Docker images is aufs. Specifying btrfs (by passing the environment
 variable `DOCKER_STORAGE=btrfs` to kube-up) is also a good choice for a filesystem. btrfs
@@ -176,7 +176,7 @@ a distribution file, and then are responsible for attaching and detaching EBS
 volumes from itself.
 
 The node policy is relatively minimal. The master policy is probably overly
-permissive. The security concious may want to lock-down the IAM policies
+permissive. The security conscious may want to lock-down the IAM policies
 further ([#11936](http://issues.k8s.io/11936)).
 
 We should make it easier to extend IAM permissions and also ensure that they
@@ -275,7 +275,7 @@ Salt, for example). These objects can currently be manually created:
 
 * Set the `AWS_S3_BUCKET` environment variable to use an existing S3 bucket.
 * Set the `VPC_ID` environment variable to reuse an existing VPC.
-* Set the `SUBNET_ID` environemnt variable to reuse an existing subnet.
+* Set the `SUBNET_ID` environment variable to reuse an existing subnet.
 * If your route table has a matching `KubernetesCluster` tag, it will
 be reused.
 * If your security groups are appropriately named, they will be reused.
@@ -65,7 +65,7 @@ The DaemonSet supports standard API features:
 - Using the pod’s nodeSelector field, DaemonSets can be restricted to operate over nodes that have a certain label. For example, suppose that in a cluster some nodes are labeled ‘app=database’. You can use a DaemonSet to launch a datastore pod on exactly those nodes labeled ‘app=database’.
 - Using the pod's nodeName field, DaemonSets can be restricted to operate on a specified node.
 - The PodTemplateSpec used by the DaemonSet is the same as the PodTemplateSpec used by the Replication Controller.
-- The initial implementation will not guarnatee that DaemonSet pods are created on nodes before other pods.
+- The initial implementation will not guarantee that DaemonSet pods are created on nodes before other pods.
 - The initial implementation of DaemonSet does not guarantee that DaemonSet pods show up on nodes (for example because of resource limitations of the node), but makes a best effort to launch DaemonSet pods (like Replication Controllers do with pods). Subsequent revisions might ensure that DaemonSet pods show up on nodes, preempting other pods if necessary.
 - The DaemonSet controller adds an annotation "kubernetes.io/created-by: \<json API object reference\>"
 - YAML example:
@@ -403,7 +403,7 @@ Using the `omitempty` tag causes swagger documentation to reflect that the field
 
 Using a pointer allows distinguishing unset from the zero value for that type.
 There are some cases where, in principle, a pointer is not needed for an optional field
-since the zero value is forbidden, and thus imples unset. There are examples of this in the
+since the zero value is forbidden, and thus implies unset. There are examples of this in the
 codebase. However:
 
 - it can be difficult for implementors to anticipate all cases where an empty value might need to be
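The distinction this hunk draws between `omitempty` alone and a pointer with `omitempty` is easy to see in plain Go. The `WidgetSpec` type below is a made-up example, not a Kubernetes API type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// WidgetSpec is an invented type illustrating the convention discussed above.
type WidgetSpec struct {
	// Replicas uses a pointer plus omitempty: nil means "unset", while a
	// pointer to 0 is an explicit request for zero replicas.
	Replicas *int32 `json:"replicas,omitempty"`

	// Timeout relies on the zero value meaning "unset"; serialization cannot
	// distinguish an explicit 0 from a missing field.
	Timeout int32 `json:"timeout,omitempty"`
}

func main() {
	zero := int32(0)
	explicit, _ := json.Marshal(WidgetSpec{Replicas: &zero, Timeout: 0})
	unset, _ := json.Marshal(WidgetSpec{})
	fmt.Println(string(explicit)) // {"replicas":0}
	fmt.Println(string(unset))    // {}
}
```

Only the pointer field can represent "explicitly zero" and "unset" as different wire forms, which is why optional fields whose zero value is meaningful generally use pointers.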
@@ -558,7 +558,7 @@ New feature development proceeds through a series of stages of increasing maturi
 
 - Development level
 - Object Versioning: no convention
-- Availability: not commited to main kubernetes repo, and thus not available in offical releases
+- Availability: not committed to main kubernetes repo, and thus not available in official releases
 - Audience: other developers closely collaborating on a feature or proof-of-concept
 - Upgradeability, Reliability, Completeness, and Support: no requirements or guarantees
 - Alpha level
@@ -590,7 +590,7 @@ New feature development proceeds through a series of stages of increasing maturi
 tests complete; the API has had a thorough API review and is thought to be complete, though use
 during beta may frequently turn up API issues not thought of during review
 - Upgradeability: the object schema and semantics may change in a later software release; when
-this happens, an upgrade path will be documentedr; in some cases, objects will be automatically
+this happens, an upgrade path will be documented; in some cases, objects will be automatically
 converted to the new version; in other cases, a manual upgrade may be necessary; a manual
 upgrade may require downtime for anything relying on the new feature, and may require
 manual conversion of objects to the new version; when manual conversion is necessary, the
@@ -35,7 +35,7 @@ Documentation for other releases can be found at
 
 ## Overview
 
-Kubernetes uses a variety of automated tools in an attempt to relieve developers of repeptitive, low
+Kubernetes uses a variety of automated tools in an attempt to relieve developers of repetitive, low
 brain power work. This document attempts to describe these processes.
 
 
@@ -179,7 +179,7 @@ exports.queue_storage_if_needed = function() {
 ]);
 process.env['AZURE_STORAGE_ACCOUNT'] = conf.resources['storage_account'];
 } else {
-// Preserve it for resizing, so we don't create a new one by accedent,
+// Preserve it for resizing, so we don't create a new one by accident,
 // when the environment variable is unset
 conf.resources['storage_account'] = process.env['AZURE_STORAGE_ACCOUNT'];
 }
@@ -131,7 +131,7 @@ For more complete applications, please look in the [examples directory](../../..
 
 ### Debugging
 
-Here are severals tips for you when you run into any issues.
+Here are several tips for you when you run into any issues.
 
 ##### Check logs
 
@@ -188,7 +188,7 @@ For each pending deployment, it will:
 and the old RCs have been ramped down to 0.
 6. Cleanup.
 
-DeploymentController is stateless so that it can recover incase it crashes during a deployment.
+DeploymentController is stateless so that it can recover in case it crashes during a deployment.
 
 ### MinReadySeconds
 
@@ -250,7 +250,7 @@ defined as:
 > 3. It must be possible to round-trip your change (convert to different API versions and back) with
 > no loss of information.
 
-Previous versions of this proposal attempted to deal with backward compatiblity by defining
+Previous versions of this proposal attempted to deal with backward compatibility by defining
 the affect of setting the pod-level fields on the container-level fields. While trying to find
 consensus on this design, it became apparent that this approach was going to be extremely complex
 to implement, explain, and support. Instead, we will approach backward compatibility as follows:
@@ -119,7 +119,7 @@ Supporting other platforms:
 
 Protecting containers and guarantees:
 - **Control loops**: The OOM score assignment is not perfect for burstable containers, and system OOM kills are expensive. TODO: Add a control loop to reduce memory pressure, while ensuring guarantees for various containers.
-- **Kubelet, Kube-proxy, Docker daemon protection**: If a system is overcommitted with memory guaranteed containers, then all prcoesses will have an OOM_SCORE of 0. So Docker daemon could be killed instead of a container or pod being killed. TODO: Place all user-pods into a separate cgroup, and set a limit on the memory they can consume. Initially, the limits can be based on estimated memory usage of Kubelet, Kube-proxy, and CPU limits, eventually we can monitor the resources they consume.
+- **Kubelet, Kube-proxy, Docker daemon protection**: If a system is overcommitted with memory guaranteed containers, then all processes will have an OOM_SCORE of 0. So Docker daemon could be killed instead of a container or pod being killed. TODO: Place all user-pods into a separate cgroup, and set a limit on the memory they can consume. Initially, the limits can be based on estimated memory usage of Kubelet, Kube-proxy, and CPU limits, eventually we can monitor the resources they consume.
 - **OOM Assignment Races**: We cannot set OOM_SCORE_ADJ of a process until it has launched. This could lead to races. For example, suppose that a memory burstable container is using 70% of the system’s memory, and another burstable container is using 30% of the system’s memory. A best-effort burstable container attempts to launch on the Kubelet. Initially the best-effort container is using 2% of memory, and has an OOM_SCORE_ADJ of 20. So its OOM_SCORE is lower than the burstable pod using 70% of system memory. The burstable pod will be evicted by the best-effort pod. Short-term TODO: Implement a restart policy where best-effort pods are immediately evicted if OOM killed, but burstable pods are given a few retries. Long-term TODO: push support for OOM scores in cgroups to the upstream Linux kernel.
 - **Swap Memory**: The QoS proposal assumes that swap memory is disabled. If swap is enabled, then resource guarantees (for pods that specify resource requirements) will not hold. For example, suppose 2 guaranteed pods have reached their memory limit. They can start allocating memory on swap space. Eventually, if there isn’t enough swap space, processes in the pods might get killed. TODO: ensure that swap space is disabled on our cluster setups scripts.
 
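For context on the **OOM Assignment Races** bullet: `oom_score_adj` lives under `/proc/<pid>/`, so it can only be written after the process exists. A minimal sketch of that ordering (hypothetical adjustment value, not kubelet code) shows where the race window sits:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strconv"
)

func main() {
	// Start the workload first; only then do we know its PID.
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// Race window: between Start() and this write, the process runs with the
	// OOM score adjustment it inherited, not the one we intend for its class.
	path := fmt.Sprintf("/proc/%d/oom_score_adj", cmd.Process.Pid)
	if err := os.WriteFile(path, []byte(strconv.Itoa(100)), 0644); err != nil {
		panic(err)
	}

	_ = cmd.Wait()
}
```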
@@ -128,7 +128,7 @@ Killing and eviction mechanics:
 - **Out of Resource Eviction**: If a container in a multi-container pod fails, we might want restart the entire pod instead of just restarting the container. In some cases (e.g. if a memory best-effort container is out of resource killed), we might change pods to "failed" phase and pods might need to be evicted. TODO: Draft a policy for out of resource eviction and implement it.
 
 Maintaining CPU performance:
-- **CPU-sharing Issues** Suppose that a node is running 2 container: a container A requesting for 50% of CPU (but without a CPU limit), and a container B not requesting for resoruces. Suppose that both pods try to use as much CPU as possible. After the proposal is implemented, A will get 100% of the CPU, and B will get around 0% of the CPU. However, a fairer scheme would give the Burstable container 75% of the CPU and the Best-Effort container 25% of the CPU (since resources past the Burstable container’s request are not guaranteed). TODO: think about whether this issue to be solved, implement a solution.
+- **CPU-sharing Issues** Suppose that a node is running 2 container: a container A requesting for 50% of CPU (but without a CPU limit), and a container B not requesting for resources. Suppose that both pods try to use as much CPU as possible. After the proposal is implemented, A will get 100% of the CPU, and B will get around 0% of the CPU. However, a fairer scheme would give the Burstable container 75% of the CPU and the Best-Effort container 25% of the CPU (since resources past the Burstable container’s request are not guaranteed). TODO: think about whether this issue to be solved, implement a solution.
 - **CPU kills**: System tasks or daemons like the Kubelet could consume more CPU, and we won't be able to guarantee containers the CPU amount they requested. If the situation persists, we might want to kill the container. TODO: Draft a policy for CPU usage killing and implement it.
 - **CPU limits**: Enabling CPU limits can be problematic, because processes might be hard capped and might stall for a while. TODO: Enable CPU limits intelligently using CPU quota and core allocation.
 
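The **CPU-sharing Issues** bullet follows from how CPU requests map onto cgroup `cpu.shares`: a container with no request gets the kernel minimum, so any requesting container dwarfs it under contention. A rough sketch of that mapping, assuming the common 1 CPU = 1024 shares convention (illustrative, not the kubelet's implementation):

```go
package main

import "fmt"

const (
	sharesPerCPU = 1024 // cgroup convention: one CPU core == 1024 cpu.shares
	minShares    = 2    // kernel-imposed floor for cpu.shares
)

// milliCPUToShares converts a CPU request in millicores into cpu.shares.
// A container that requests nothing gets the minimum, which is why it loses
// almost all CPU time when a requesting container wants the whole node.
func milliCPUToShares(milliCPU int64) int64 {
	if milliCPU == 0 {
		return minShares
	}
	shares := milliCPU * sharesPerCPU / 1000
	if shares < minShares {
		return minShares
	}
	return shares
}

func main() {
	a := milliCPUToShares(500) // container A: requests 50% of one CPU
	b := milliCPUToShares(0)   // container B: no request
	fmt.Printf("A gets %d shares, B gets %d shares (~%.1f%% of contended CPU for B)\n",
		a, b, 100*float64(b)/float64(a+b))
}
```

Running it shows A at 512 shares and B at 2, so B gets well under 1% of contended CPU, matching the "around 0%" behavior described above.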
@@ -64,7 +64,7 @@ Goals of this design:
 ### Docker
 
 Docker uses a base SELinux context and calculates a unique MCS label per container. The SELinux
-context of a container can be overriden with the `SecurityOpt` api that allows setting the different
+context of a container can be overridden with the `SecurityOpt` api that allows setting the different
 parts of the SELinux context individually.
 
 Docker has functionality to relabel bind-mounts with a usable SElinux and supports two different
@@ -73,7 +73,7 @@ use-cases:
 1. The `:Z` bind-mount flag, which tells Docker to relabel a bind-mount with the container's
 SELinux context
 2. The `:z` bind-mount flag, which tells Docker to relabel a bind-mount with the container's
-SElinux context, but remove the MCS labels, making the volume shareable beween containers
+SElinux context, but remove the MCS labels, making the volume shareable between containers
 
 We should avoid using the `:z` flag, because it relaxes the SELinux context so that any container
 (from an SELinux standpoint) can use the volume.
@@ -200,7 +200,7 @@ From the above, we know that label management must be applied:
 Volumes should be relabeled with the correct SELinux context. Docker has this capability today; it
 is desireable for other container runtime implementations to provide similar functionality.
 
-Relabeling should be an optional aspect of a volume plugin to accomodate:
+Relabeling should be an optional aspect of a volume plugin to accommodate:
 
 1. volume types for which generalized relabeling support is not sufficient
 2. testing for each volume plugin individually
@@ -45,7 +45,7 @@ Goals of this design:
 
 1. Enumerate the different use-cases for volume usage in pods
 2. Define the desired goal state for ownership and permission management in Kubernetes
-3. Describe the changes necessary to acheive desired state
+3. Describe the changes necessary to achieve desired state
 
 ## Constraints and Assumptions
 
@@ -250,7 +250,7 @@ override the primary GID and should be safe to use in images that expect GID 0.
 ### Setting ownership and permissions on volumes
 
 For `EmptyDir`-based volumes and unshared storage, `chown` and `chmod` on the node are sufficient to
-set ownershp and permissions. Shared storage is different because:
+set ownership and permissions. Shared storage is different because:
 
 1. Shared storage may not live on the node a pod that uses it runs on
 2. Shared storage may be externally managed
@@ -243,7 +243,7 @@ __Default Backends__: An Ingress with no rules, like the one shown in the previo
 
 ### Loadbalancing
 
-An Ingress controller is bootstrapped with some loadbalancing policy settings that it applies to all Ingress, such as the loadbalancing algorithm, backend weight scheme etc. More advanced loadbalancing concepts (eg: persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can still get these features through the [service loadbalancer](https://github.com/kubernetes/contrib/tree/master/service-loadbalancer). With time, we plan to distil loadbalancing patterns that are applicable cross platform into the Ingress resource.
+An Ingress controller is bootstrapped with some loadbalancing policy settings that it applies to all Ingress, such as the loadbalancing algorithm, backend weight scheme etc. More advanced loadbalancing concepts (eg: persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can still get these features through the [service loadbalancer](https://github.com/kubernetes/contrib/tree/master/service-loadbalancer). With time, we plan to distill loadbalancing patterns that are applicable cross platform into the Ingress resource.
 
 It's also worth noting that even though health checks are not exposed directly through the Ingress, there exist parallel concepts in Kubernetes such as [readiness probes](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/production-pods.md#liveness-and-readiness-probes-aka-health-checks) which allow you to achieve the same end result.
 
@@ -161,7 +161,7 @@ the same schema as a [pod](pods.md), except it is nested and does not have an `a
 `kind`.
 
 In addition to required fields for a Pod, a pod template in a job must specify appropriate
-lables (see [pod selector](#pod-selector) and an appropriate restart policy.
+labels (see [pod selector](#pod-selector) and an appropriate restart policy.
 
 Only a [`RestartPolicy`](pod-states.md) equal to `Never` or `OnFailure` are allowed.
 
@@ -171,7 +171,7 @@ The `.spec.selector` field is a label query over a set of pods.
 
 The `spec.selector` is an object consisting of two fields:
 * `matchLabels` - works the same as the `.spec.selector` of a [ReplicationController](replication-controller.md)
-* `matchExpressions` - allows to build more sophisticated selectors by specyfing key,
+* `matchExpressions` - allows to build more sophisticated selectors by specifying key,
 list of values and an operator that relates the key and values.
 
 When the two are specified the result is ANDed.
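The ANDing of `matchLabels` and `matchExpressions` described in this hunk can be illustrated with a small, self-contained sketch; the `requirement` type and `matches` helper are invented for the example and are not the real API types:

```go
package main

import "fmt"

// requirement mirrors the shape of a matchExpressions entry: a key, an
// operator, and a list of values. Only "In" and "NotIn" are sketched here.
type requirement struct {
	key      string
	operator string // "In" or "NotIn"
	values   []string
}

// matches evaluates matchLabels and matchExpressions against a pod's labels.
// Both halves are ANDed, as the selector section above describes.
func matches(matchLabels map[string]string, matchExpressions []requirement, podLabels map[string]string) bool {
	for k, v := range matchLabels {
		if podLabels[k] != v {
			return false
		}
	}
	for _, r := range matchExpressions {
		found := false
		for _, v := range r.values {
			if podLabels[r.key] == v {
				found = true
				break
			}
		}
		if (r.operator == "In" && !found) || (r.operator == "NotIn" && found) {
			return false
		}
	}
	return true
}

func main() {
	pod := map[string]string{"app": "sort", "tier": "batch"}
	ok := matches(
		map[string]string{"app": "sort"},
		[]requirement{{key: "tier", operator: "In", values: []string{"batch", "analytics"}}},
		pod,
	)
	fmt.Println(ok) // true: both the label match and the expression hold
}
```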
@@ -215,7 +215,7 @@ restarted locally, or else specify `.spec.template.containers[].restartPolicy =
 See [pods-states](pod-states.md) for more information on `restartPolicy`.
 
 An entire Pod can also fail, for a number of reasons, such as when the pod is kicked off the node
-(node is upgraded, rebooted, delelted, etc.), or if a container of the Pod fails and the
+(node is upgraded, rebooted, deleted, etc.), or if a container of the Pod fails and the
 `.spec.template.containers[].restartPolicy = "Never"`. When a Pod fails, then the Job controller
 starts a new Pod. Therefore, your program needs to handle the case when it is restarted in a new
 pod. In particular, it needs to handle temporary files, locks, incomplete output and the like
@@ -207,7 +207,7 @@ mister-red,mister-red,2
 
 Also, since we have other users who validate using **other** mechanisms, the api-server would have probably been launched with other authentication options (there are many such options, make sure you understand which ones YOU care about before crafting a kubeconfig file, as nobody needs to implement all the different permutations of possible authentication schemes).
 
-- Since the user for the current context is "green-user", any client of the api-server using this kubeconfig file would naturally be able to log in succesfully, because we are providigin the green-user's client credentials.
+- Since the user for the current context is "green-user", any client of the api-server using this kubeconfig file would naturally be able to log in successfully, because we are providing the green-user's client credentials.
 - Similarly, we can operate as the "blue-user" if we choose to change the value of current-context.
 
 In the above scenario, green-user would have to log in by providing certificates, whereas blue-user would just provide the token. All this information would be handled for us by the
@@ -146,7 +146,7 @@ LIST and WATCH operations may specify label selectors to filter the sets of obje
 * _equality-based_ requirements: `?labelSelector=environment%3Dproduction,tier%3Dfrontend`
 * _set-based_ requirements: `?labelSelector=environment+in+%28production%2Cqa%29%2Ctier+in+%28frontend%29`
 
-Both label selector styles can be used to list or watch resources via a REST client. For example targetting `apiserver` with `kubectl` and using _equality-based_ one may write:
+Both label selector styles can be used to list or watch resources via a REST client. For example targeting `apiserver` with `kubectl` and using _equality-based_ one may write:
 
 ```console
 $ kubectl get pods -l environment=production,tier=frontend
@@ -210,7 +210,7 @@ By default, all deletes are graceful within 30 seconds. The `kubectl delete` com
 
 ## Privileged mode for pod containers
 
-From kubernetes v1.1, any container in a pod can enable privileged mode, using the `privileged` flag on the `SecurityContext` of the container spec. This is useful for containers that want to use linux capabilities like manipulating the network stack and accessing devices. Processes within the container get almost the same privileges that are available to processes outside a container. With privileged mode, it should be easier to write network and volume plugins as seperate pods that don't need to be compiled into the kubelet.
+From kubernetes v1.1, any container in a pod can enable privileged mode, using the `privileged` flag on the `SecurityContext` of the container spec. This is useful for containers that want to use linux capabilities like manipulating the network stack and accessing devices. Processes within the container get almost the same privileges that are available to processes outside a container. With privileged mode, it should be easier to write network and volume plugins as separate pods that don't need to be compiled into the kubelet.
 
 If the master is running kubernetes v1.1 or higher, and the nodes are running a version lower than v1.1, then new privileged pods will be accepted by api-server, but will not be launched. They will be pending state.
 If user calls `kubectl describe pod FooPodName`, user can see the reason why the pod is in pending state. The events table in the describe command output will say:
@@ -138,7 +138,7 @@ volume is safe across container crashes.
 
 Some uses for an `emptyDir` are:
 
-* scratch space, such as for a disk-based mergesortcw
+* scratch space, such as for a disk-based merge sort
 * checkpointing a long computation for recovery from crashes
 * holding files that a content-manager container fetches while a webserver
 container serves the data