Mirror of https://github.com/k3s-io/kubernetes.git, synced 2025-07-22 11:21:47 +00:00

Merge pull request #9319 from skonzem/fix_doc_typos

Fix misspellings in documentation

This commit is contained in: commit d326de9b6c

@@ -27,7 +27,7 @@ Non kubernetes defaults in the environment files
 
 Notes
 -----
-It may seem reasonable to use --option=${OPTION} in the .service file instead of only putting the command line option in the environment file. However this results in the possiblity of daemons being called with --option= if the environment file does not define a value. Whereas including the --option string inside the environment file means that nothing will be passed to the daemon. So the daemon default will be used for things unset by the environment files.
+It may seem reasonable to use --option=${OPTION} in the .service file instead of only putting the command line option in the environment file. However this results in the possibility of daemons being called with --option= if the environment file does not define a value. Whereas including the --option string inside the environment file means that nothing will be passed to the daemon. So the daemon default will be used for things unset by the environment files.
 
 While some command line options to the daemons use the default when passed an empty option some cause the daemon to fail to launch. --allow_privileged= (without a value of true/false) will cause the kube-apiserver and kubelet to refuse to launch.

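For context on the hunk above, here is a rough sketch of the pattern it describes, under assumed names: the file path `/etc/kubernetes/apiserver` and the variable `KUBE_ALLOW_PRIV` are illustrative, not taken from this commit.

```shell
# Illustrative only: keep the whole flag, not just its value, in the
# environment file, so an unset variable passes nothing to the daemon.
echo 'KUBE_ALLOW_PRIV="--allow_privileged=true"' | sudo tee /etc/kubernetes/apiserver

# The .service file would then expand the variable as-is, for example:
#   EnvironmentFile=-/etc/kubernetes/apiserver
#   ExecStart=/usr/bin/kube-apiserver $KUBE_ALLOW_PRIV
# Leaving KUBE_ALLOW_PRIV empty passes nothing, so the daemon default applies,
# instead of a bare "--allow_privileged=" that makes kube-apiserver refuse to start.
```
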
@@ -2,7 +2,7 @@
 Solutions to interesting problems and unique implementations that showcase the extensibility of Kubernetes
 
 - [Automated APIServer load balancing using Hipache and Fleet](docs/apiserver_hipache_registration.md)
-- [Jenkins-triggered rolling updates on sucessful "builds"](docs/rolling_updates_from_jenkins.md)
+- [Jenkins-triggered rolling updates on successful "builds"](docs/rolling_updates_from_jenkins.md)
 
 
 []()

@@ -30,7 +30,7 @@ A request has 4 attributes that can be considered for authorization:
 - whether the request is readonly (GETs are readonly)
 - what resource is being accessed
 - applies only to the API endpoints, such as
-`/api/v1beta3/namespaces/default/pods`. For miscelaneous endpoints, like `/version`, the
+`/api/v1beta3/namespaces/default/pods`. For miscellaneous endpoints, like `/version`, the
 resource is the empty string.
 - the namespace of the object being access, or the empty string if the
 endpoint does not support namespaced objects.

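As a companion to the attribute list above, a minimal sketch of what per-line JSON policy entries over these attributes could look like; the users and values are invented, and the exact field set should be checked against the example policy file referenced in the next hunk.

```shell
# Hypothetical policy lines: one JSON object per line, matching on the
# attributes listed above (user, readonly, resource, namespace).
cat > /tmp/example-policy.jsonl <<'EOF'
{"user": "alice"}
{"user": "kubelet", "resource": "pods", "readonly": true}
{"user": "bob", "resource": "pods", "namespace": "projectCaribou"}
EOF
```
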
@@ -82,7 +82,7 @@ To permit an action Policy with an unset namespace applies regardless of namespa
 
 [Complete file example](../pkg/auth/authorizer/abac/example_policy_file.jsonl)
 
-## Plugin Developement
+## Plugin Development
 
 Other implementations can be developed fairly easily.
 The APIserver calls the Authorizer interface:

@@ -5,7 +5,7 @@
 This document describes the environment for Kubelet managed containers on a Kubernetes node (kNode). In contrast to the Kubernetes cluster API, which provides an API for creating and managing containers, the Kubernetes container environment provides the container access to information about what else is going on in the cluster.
 
 This cluster information makes it possible to build applications that are *cluster aware*.
-Additionally, the Kubernetes container environment defines a series of hooks that are surfaced to optional hook handlers defined as part of individual containers. Container hooks are somewhat analagous to operating system signals in a traditional process model. However these hooks are designed to make it easier to build reliable, scalable cloud applications in the Kubernetes cluster. Containers that participate in this cluster lifecycle become *cluster native*.
+Additionally, the Kubernetes container environment defines a series of hooks that are surfaced to optional hook handlers defined as part of individual containers. Container hooks are somewhat analogous to operating system signals in a traditional process model. However these hooks are designed to make it easier to build reliable, scalable cloud applications in the Kubernetes cluster. Containers that participate in this cluster lifecycle become *cluster native*.
 
 Another important part of the container environment is the file system that is available to the container. In Kubernetes, the filesystem is a combination of an [image](./images.md) and one or more [volumes](./volumes.md).

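As a side note to the hook discussion above, a minimal sketch of a container with hook handlers; the field names follow the current pod API rather than the v1beta3-era API this document discusses, and the image and commands are placeholders.

```shell
# Illustrative pod with postStart/preStop hook handlers.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hook-demo
spec:
  containers:
  - name: app
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo started >> /tmp/hooks.log"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "echo stopping >> /tmp/hooks.log"]
EOF
```
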
@@ -122,7 +122,7 @@ We should consider what the best way to allow this is; there are a few different
 
 3. Give secrets attributes that allow the user to express that the secret should be presented to
 the container as an environment variable. The container's environment would contain the
-desired values and the software in the container could use them without accomodation the
+desired values and the software in the container could use them without accommodation the
 command or setup script.
 
 For our initial work, we will treat all secrets as files to narrow the problem space. There will

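To make option 3 above concrete, a small sketch of the two presentation styles from inside a container; the mount path and variable name are hypothetical.

```shell
# File-based presentation (the approach chosen for the initial work):
cat /etc/secret-volume/db-password

# Environment-variable presentation (option 3); DB_PASSWORD is a hypothetical
# name that would have been injected into the container's environment.
echo "$DB_PASSWORD"
```
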
@@ -149,7 +149,7 @@ First, if it finds pods which have a `Pod.Spec.ServiceAccountUsername` but no `P
 then it copies in the referenced securityContext and secrets references for the corresponding `serviceAccount`.
 
 Second, if ServiceAccount definitions change, it may take some actions.
-**TODO**: decide what actions it takes when a serviceAccount defintion changes. Does it stop pods, or just
+**TODO**: decide what actions it takes when a serviceAccount definition changes. Does it stop pods, or just
 allow someone to list ones that out out of spec? In general, people may want to customize this?
 
 Third, if a new namespace is created, it may create a new serviceAccount for that namespace. This may include

@@ -28,7 +28,7 @@ any other etcd clusters that might exist, including the kubernetes master.
 ## Issues
 
 The skydns service is reachable directly from kubernetes nodes (outside
-of any container) and DNS resolution works if the skydns service is targetted
+of any container) and DNS resolution works if the skydns service is targeted
 explicitly. However, nodes are not configured to use the cluster DNS service or
 to search the cluster's DNS domain by default. This may be resolved at a later
 time.

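A quick sketch of the explicit check described in this hunk; the DNS service IP and the cluster.local domain are assumptions, not values from this repo.

```shell
# Query the skydns service directly from a node (outside any container).
DNS_SERVICE_IP=10.0.0.10   # assumed cluster DNS service IP
nslookup kubernetes.default.svc.cluster.local "$DNS_SERVICE_IP"
```
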
@@ -151,7 +151,7 @@ redis-master master redis name=r
 redis-slave slave kubernetes/redis-slave:v2 name=redis-slave 4
 ```
 
-You now will have more instances of front-end Guestbook apps and Redis slaves; and, if you look up all pods labled `name=frontend`, you should see one running on each node.
+You now will have more instances of front-end Guestbook apps and Redis slaves; and, if you look up all pods labeled `name=frontend`, you should see one running on each node.
 
 ```
 core@kube-00 ~/guestbook-example $ kubectl get pods -l name=frontend

@@ -16,7 +16,7 @@ export LOGGING_DESTINATION=elasticsearch
 
 This will instantiate a [Fluentd](http://www.fluentd.org/) instance on each node which will
 collect all the Dcoker container log files. The collected logs will
-be targetted at an [Elasticsearch](http://www.elasticsearch.org/) instance assumed to be running on the
+be targeted at an [Elasticsearch](http://www.elasticsearch.org/) instance assumed to be running on the
 local node and accepting log information on port 9200. This can be accomplished
 by writing a pod specification and service specification to define an
 Elasticsearch service (more information to follow shortly in the contrib directory).

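For reference, a sketch of the cluster bring-up settings this hunk describes; `LOGGING_DESTINATION` comes from the hunk header, while `ENABLE_NODE_LOGGING` is an assumed companion variable and may differ per provider.

```shell
# Assumed settings for fluentd -> Elasticsearch node logging at cluster bring-up.
export ENABLE_NODE_LOGGING=true
export LOGGING_DESTINATION=elasticsearch
cluster/kube-up.sh
```
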
@@ -125,7 +125,7 @@ If the administrator wishes to create node objects manually, set kubelet flag
 The administrator can modify Node resources (regardless of the setting of `--register-node`).
 Modifications include setting labels on the Node, and marking it unschedulable.
 
-Labels on nodes can be used in conjuction with node selectors on pods to control scheduling.
+Labels on nodes can be used in conjunction with node selectors on pods to control scheduling.
 
 Making a node unscheduleable will prevent new pods from being scheduled to that
 node, but will not affect any existing pods on the node. This is useful as a

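To illustrate the label/node-selector pairing mentioned above, a short sketch; the node name and label key/value are placeholders.

```shell
# Label a node, then list nodes carrying that label.
kubectl label nodes worker-1 disktype=ssd
kubectl get nodes -l disktype=ssd
# A pod would then request such a node via a nodeSelector, e.g.:
#   "nodeSelector": {"disktype": "ssd"}
```
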
@@ -4,7 +4,7 @@ This document serves as a proposal for high availability of the scheduler and co
 ## Design Options
 For complete reference see [this](https://www.ibm.com/developerworks/community/blogs/RohitShetty/entry/high_availability_cold_warm_hot?lang=en)
 
-1. Hot Standby: In this scenario, data and state are shared between the two components such that an immediate failure in one component causes the the standby deamon to take over exactly where the failed component had left off. This would be an ideal solution for kubernetes, however it poses a series of challenges in the case of controllers where component-state is cached locally and not persisted in a transactional way to a storage facility. This would also introduce additional load on the apiserver, which is not desireable. As a result, we are **NOT** planning on this approach at this time.
+1. Hot Standby: In this scenario, data and state are shared between the two components such that an immediate failure in one component causes the the standby deamon to take over exactly where the failed component had left off. This would be an ideal solution for kubernetes, however it poses a series of challenges in the case of controllers where component-state is cached locally and not persisted in a transactional way to a storage facility. This would also introduce additional load on the apiserver, which is not desirable. As a result, we are **NOT** planning on this approach at this time.
 
 2. **Warm Standby**: In this scenario there is only one active component acting as the master and additional components running but not providing service or responding to requests. Data and state are not shared between the active and standby components. When a failure occurs, the standby component that becomes the master must determine the current state of the system before resuming functionality. This is the apprach that this proposal will leverage.

@@ -146,7 +146,7 @@ ENV C_FORCE_ROOT 1
 CMD ["/bin/bash", "/usr/local/bin/run.sh"]
 ```
 
-The celery\_conf.py contains the defintion of a simple Celery task that adds two numbers. This last line starts the Celery worker.
+The celery\_conf.py contains the definition of a simple Celery task that adds two numbers. This last line starts the Celery worker.
 
 **NOTE:** `ENV C_FORCE_ROOT 1` forces Celery to be run as the root user, which is *not* recommended in production!

@@ -513,7 +513,7 @@ When you go to localhost:8000, you might not see the page at all. Testing it wi
 ```shell
 ==> default: curl: (56) Recv failure: Connection reset by peer
 ```
-This means the web frontend isn't up yet. Specifically, the "reset by peer" message is occuring because you are trying to access the *right port*, but *nothing is bound* to that port yet. Wait a while, possibly about 2 minutes or more, depending on your set up. Also, run a *watch* on docker ps, to see if containers are cycling on and off or not starting.
+This means the web frontend isn't up yet. Specifically, the "reset by peer" message is occurring because you are trying to access the *right port*, but *nothing is bound* to that port yet. Wait a while, possibly about 2 minutes or more, depending on your set up. Also, run a *watch* on docker ps, to see if containers are cycling on and off or not starting.
 
 ```watch
 $> watch -n 1 docker ps

@@ -14,7 +14,7 @@ If you are new to kubernetes, and you haven't run guestbook yet,
 
 you might want to stop here and go back and run guestbook app first.
 
-The guestbook tutorial will teach you alot about the basics of kubernetes, and we've tried not to be redundant here.
+The guestbook tutorial will teach you a lot about the basics of kubernetes, and we've tried not to be redundant here.
 
 ## Architecture of this SOA

@@ -69,9 +69,9 @@ your cluster. Edit [`meteor-controller.json`](meteor-controller.json) and make s
 points to the container you just pushed to the Docker Hub or GCR.
 
 As you may know, Meteor uses MongoDB, and we'll need to provide it a
-persistant Kuberetes volume to store its data. See the [volumes
+persistent Kuberetes volume to store its data. See the [volumes
 documentation](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/volumes.md)
-for options. We're going to use Google Compute Engine persistant
+for options. We're going to use Google Compute Engine persistent
 disks. Create the MongoDB disk by running:
 ```
 gcloud compute disks create --size=200GB mongo-disk

@@ -140,7 +140,7 @@ documentation](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/doc
 for more information.
 
 As mentioned above, the mongo container uses a volume which is mapped
-to a persistant disk by Kubernetes. In [`mongo-pod.json`](mongo-pod.json) the container
+to a persistent disk by Kubernetes. In [`mongo-pod.json`](mongo-pod.json) the container
 section specifies the volume:
 ```
 "volumeMounts": [

@@ -80,7 +80,7 @@ rethinkdb-rc-1.16.0-manu6
 Admin
 -----
 
-You need a separate pod (which labled as role:admin) to access Web Admin UI
+You need a separate pod (labeled as role:admin) to access Web Admin UI
 
 ```shell
 kubectl create -f admin-pod.yaml