Fix misspellings in documentation
@@ -30,7 +30,7 @@ A request has 4 attributes that can be considered for authorization:
 - whether the request is readonly (GETs are readonly)
 - what resource is being accessed
   - applies only to the API endpoints, such as
-    `/api/v1beta3/namespaces/default/pods`. For miscelaneous endpoints, like `/version`, the
+    `/api/v1beta3/namespaces/default/pods`. For miscellaneous endpoints, like `/version`, the
     resource is the empty string.
 - the namespace of the object being access, or the empty string if the
   endpoint does not support namespaced objects.
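For context, the policy file referenced by this document is one JSON object per line, keyed by the attributes above. A minimal sketch of entries, patterned on the linked `example_policy_file.jsonl` (treat the exact values as illustrative):

```json
{"user": "alice"}
{"user": "kubelet", "resource": "pods", "readonly": true}
{"user": "bob", "resource": "pods", "namespace": "projectCaribou"}
```

The first line grants `alice` unrestricted access; the others scope a user to one resource, optionally read-only or confined to a namespace.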
@@ -82,7 +82,7 @@ To permit an action Policy with an unset namespace applies regardless of namespa

 [Complete file example](../pkg/auth/authorizer/abac/example_policy_file.jsonl)

-## Plugin Developement
+## Plugin Development

 Other implementations can be developed fairly easily.
 The APIserver calls the Authorizer interface:
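The interface named in that last context line lives in `pkg/auth/authorizer`. A simplified sketch of its shape, matching the four request attributes listed in the first hunk (method names approximate, not a verbatim copy of the source):

```go
// Attributes carries the attributes of a request that authorization considers.
type Attributes interface {
	GetUserName() string
	IsReadOnly() bool
	GetResource() string
	GetNamespace() string
}

// Authorizer decides whether a request with the given attributes is allowed;
// a nil error means the request is authorized.
type Authorizer interface {
	Authorize(a Attributes) error
}
```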
@@ -5,7 +5,7 @@
 This document describes the environment for Kubelet managed containers on a Kubernetes node (kNode). In contrast to the Kubernetes cluster API, which provides an API for creating and managing containers, the Kubernetes container environment provides the container access to information about what else is going on in the cluster.

 This cluster information makes it possible to build applications that are *cluster aware*.
-Additionally, the Kubernetes container environment defines a series of hooks that are surfaced to optional hook handlers defined as part of individual containers. Container hooks are somewhat analagous to operating system signals in a traditional process model. However these hooks are designed to make it easier to build reliable, scalable cloud applications in the Kubernetes cluster. Containers that participate in this cluster lifecycle become *cluster native*.
+Additionally, the Kubernetes container environment defines a series of hooks that are surfaced to optional hook handlers defined as part of individual containers. Container hooks are somewhat analogous to operating system signals in a traditional process model. However these hooks are designed to make it easier to build reliable, scalable cloud applications in the Kubernetes cluster. Containers that participate in this cluster lifecycle become *cluster native*.

 Another important part of the container environment is the file system that is available to the container. In Kubernetes, the filesystem is a combination of an [image](./images.md) and one or more [volumes](./volumes.md).

@@ -122,7 +122,7 @@ We should consider what the best way to allow this is; there are a few different

 3. Give secrets attributes that allow the user to express that the secret should be presented to
    the container as an environment variable. The container's environment would contain the
-   desired values and the software in the container could use them without accomodation the
+   desired values and the software in the container could use them without accommodation the
    command or setup script.

 For our initial work, we will treat all secrets as files to narrow the problem space. There will
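Option 3's point — existing software consumes the secret with no change to its command or setup script — is easiest to see from inside the container. A hypothetical Go sketch (the variable name `DB_PASSWORD` and the mount path are made up for illustration):

```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"strings"
)

func main() {
	// Option 3: the secret arrives as an environment variable, so unmodified
	// software reads it the way it always has.
	password := os.Getenv("DB_PASSWORD") // hypothetical variable name

	// File presentation (the initial approach): the secret is a file in a
	// volume, so something must know the path and read it.
	if password == "" {
		data, err := ioutil.ReadFile("/etc/secret-volume/db-password") // hypothetical path
		if err != nil {
			fmt.Fprintln(os.Stderr, "no secret available:", err)
			os.Exit(1)
		}
		password = strings.TrimSpace(string(data))
	}

	fmt.Println("credential loaded; length:", len(password))
}
```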
@@ -149,7 +149,7 @@ First, if it finds pods which have a `Pod.Spec.ServiceAccountUsername` but no `P
 then it copies in the referenced securityContext and secrets references for the corresponding `serviceAccount`.

 Second, if ServiceAccount definitions change, it may take some actions.
-**TODO**: decide what actions it takes when a serviceAccount defintion changes. Does it stop pods, or just
+**TODO**: decide what actions it takes when a serviceAccount definition changes. Does it stop pods, or just
 allow someone to list ones that out out of spec? In general, people may want to customize this?

 Third, if a new namespace is created, it may create a new serviceAccount for that namespace. This may include
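The first step above — copying account data into pods that name the account but carry none — amounts to a small defaulting pass. A sketch with hypothetical, simplified types (not the real Kubernetes API structs):

```go
// Simplified stand-ins for illustration only.
type ServiceAccount struct {
	Username string
	Secrets  []string // names of secret objects granted to this account
}

type PodSpec struct {
	ServiceAccountUsername string
	ServiceAccountSecrets  []string
}

// fillFromServiceAccount mirrors the first responsibility described above:
// a pod that names a service account but has no secret references gets the
// account's references copied in (the securityContext would be copied the same way).
func fillFromServiceAccount(pod *PodSpec, accounts map[string]ServiceAccount) {
	if pod.ServiceAccountUsername == "" || len(pod.ServiceAccountSecrets) > 0 {
		return
	}
	if sa, ok := accounts[pod.ServiceAccountUsername]; ok {
		pod.ServiceAccountSecrets = append([]string(nil), sa.Secrets...)
	}
}
```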
@@ -28,7 +28,7 @@ any other etcd clusters that might exist, including the kubernetes master.
 ## Issues

 The skydns service is reachable directly from kubernetes nodes (outside
-of any container) and DNS resolution works if the skydns service is targetted
+of any container) and DNS resolution works if the skydns service is targeted
 explicitly. However, nodes are not configured to use the cluster DNS service or
 to search the cluster's DNS domain by default. This may be resolved at a later
 time.
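"Targeted explicitly" here means pointing the resolver at the skydns service address rather than relying on the node's `/etc/resolv.conf` — e.g. `nslookup <some-service>.<cluster-domain> <skydns-service-ip>` run from a node (placeholders, since the actual names and addresses depend on the cluster configuration).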
@@ -151,7 +151,7 @@ redis-master master redis name=r
 redis-slave slave kubernetes/redis-slave:v2 name=redis-slave 4
 ```

-You now will have more instances of front-end Guestbook apps and Redis slaves; and, if you look up all pods labled `name=frontend`, you should see one running on each node.
+You now will have more instances of front-end Guestbook apps and Redis slaves; and, if you look up all pods labeled `name=frontend`, you should see one running on each node.

 ```
 core@kube-00 ~/guestbook-example $ kubectl get pods -l name=frontend
@@ -16,7 +16,7 @@ export LOGGING_DESTINATION=elasticsearch

 This will instantiate a [Fluentd](http://www.fluentd.org/) instance on each node which will
 collect all the Dcoker container log files. The collected logs will
-be targetted at an [Elasticsearch](http://www.elasticsearch.org/) instance assumed to be running on the
+be targeted at an [Elasticsearch](http://www.elasticsearch.org/) instance assumed to be running on the
 local node and accepting log information on port 9200. This can be accomplished
 by writing a pod specification and service specification to define an
 Elasticsearch service (more information to follow shortly in the contrib directory).
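(A quick way to confirm the assumed Elasticsearch instance is listening is `curl http://localhost:9200/` from the node, which returns cluster metadata if the service is up; this check is a suggestion, not part of the documented setup.)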
@@ -125,7 +125,7 @@ If the administrator wishes to create node objects manually, set kubelet flag
 The administrator can modify Node resources (regardless of the setting of `--register-node`).
 Modifications include setting labels on the Node, and marking it unschedulable.

-Labels on nodes can be used in conjuction with node selectors on pods to control scheduling.
+Labels on nodes can be used in conjunction with node selectors on pods to control scheduling.

 Making a node unscheduleable will prevent new pods from being scheduled to that
 node, but will not affect any existing pods on the node. This is useful as a
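As an illustration of the label/selector pairing: labeling a node with something like `kubectl label nodes <node-name> disktype=ssd` and giving a pod a matching `nodeSelector` (`disktype: ssd`) restricts that pod to nodes carrying the label; the label key and value here are made up for the example.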
@@ -4,7 +4,7 @@ This document serves as a proposal for high availability of the scheduler and co
 ## Design Options
 For complete reference see [this](https://www.ibm.com/developerworks/community/blogs/RohitShetty/entry/high_availability_cold_warm_hot?lang=en)

-1. Hot Standby: In this scenario, data and state are shared between the two components such that an immediate failure in one component causes the the standby deamon to take over exactly where the failed component had left off. This would be an ideal solution for kubernetes, however it poses a series of challenges in the case of controllers where component-state is cached locally and not persisted in a transactional way to a storage facility. This would also introduce additional load on the apiserver, which is not desireable. As a result, we are **NOT** planning on this approach at this time.
+1. Hot Standby: In this scenario, data and state are shared between the two components such that an immediate failure in one component causes the the standby deamon to take over exactly where the failed component had left off. This would be an ideal solution for kubernetes, however it poses a series of challenges in the case of controllers where component-state is cached locally and not persisted in a transactional way to a storage facility. This would also introduce additional load on the apiserver, which is not desirable. As a result, we are **NOT** planning on this approach at this time.

 2. **Warm Standby**: In this scenario there is only one active component acting as the master and additional components running but not providing service or responding to requests. Data and state are not shared between the active and standby components. When a failure occurs, the standby component that becomes the master must determine the current state of the system before resuming functionality. This is the apprach that this proposal will leverage.

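Warm standby as described is usually realized with a lease: all candidates run, but only the current lease holder does work, and a new leader rebuilds its view of the system before serving. A minimal sketch of that loop, assuming a `tryAcquireOrRenew` helper backed by shared storage such as etcd (the helper and timings are hypothetical, not this proposal's final API):

```go
package main

import "time"

// leaderLoop sketches warm standby: idle until the lease is held, rebuild
// state and serve while it renews, and drop back to standby when it is lost.
func leaderLoop(id string, tryAcquireOrRenew func(id string) bool, run func(stop chan struct{})) {
	for {
		if !tryAcquireOrRenew(id) {
			time.Sleep(5 * time.Second) // standby: stay warm, keep retrying
			continue
		}
		stop := make(chan struct{})
		go run(stop) // active: determine current system state, then serve
		for tryAcquireOrRenew(id) {
			time.Sleep(5 * time.Second) // renew the lease while leading
		}
		close(stop) // lease lost: stop work and return to standby
	}
}

func main() {
	// Toy usage: a single-process "lease" that is always won, to show the shape.
	held := func(string) bool { return true }
	work := func(stop chan struct{}) { <-stop }
	leaderLoop("scheduler-1", held, work)
}
```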