Corrected some typos

George Kuan 2015-04-26 19:37:14 -07:00
parent e061043cf1
commit 9f64d009e0
17 changed files with 25 additions and 25 deletions

View File

@@ -243,7 +243,7 @@ The fuzzer can be found in `pkg/api/testing/fuzzer.go`.
## Update the semantic comparisons
VERY VERY rarely is this needed, but when it hits, it hurts. In some rare
-cases we end up with objects (e.g. resource quantites) that have morally
+cases we end up with objects (e.g. resource quantities) that have morally
equivalent values with different bitwise representations (e.g. value 10 with a
base-2 formatter is the same as value 0 with a base-10 formatter). The only way
Go knows how to do deep-equality is through field-by-field bitwise comparisons.
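The sketch below makes that concrete. The `quantity` type is a made-up stand-in (not the real Kubernetes `resource.Quantity`): two values that are morally the same can still differ field-by-field, so `reflect.DeepEqual` reports them as unequal and a hand-written semantic comparison is needed.

```go
package main

import (
	"fmt"
	"reflect"
)

// quantity is a hypothetical stand-in for an API type whose fields can
// encode the same value in more than one way (e.g. different formatters).
type quantity struct {
	Value  int64
	Format string // e.g. "base2" vs "base10"
}

// semanticallyEqual compares only the meaning of the two values, ignoring
// how they happen to be represented.
func semanticallyEqual(a, b quantity) bool {
	return a.Value == b.Value
}

func main() {
	a := quantity{Value: 10, Format: "base2"}
	b := quantity{Value: 10, Format: "base10"}

	fmt.Println(reflect.DeepEqual(a, b)) // false: field-by-field bitwise comparison
	fmt.Println(semanticallyEqual(a, b)) // true: morally equivalent values
}
```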
@@ -278,7 +278,7 @@ At last, your change is done, all unit tests pass, e2e passes, you're done,
right? Actually, no. You just changed the API. If you are touching an
existing facet of the API, you have to try *really* hard to make sure that
*all* the examples and docs are updated. There's no easy way to do this, due
-in part ot JSON and YAML silently dropping unknown fields. You're clever -
+in part to JSON and YAML silently dropping unknown fields. You're clever -
you'll figure it out. Put `grep` or `ack` to good use.
If you added functionality, you should consider documenting it and/or writing

View File

@@ -29,7 +29,7 @@ Maintainers will do merges of appropriately reviewed-and-approved changes during
There may be discussion an even approvals granted outside of the above hours, but merges will generally be deferred.
If a PR is considered complex or controversial, the merge of that PR should be delayed to give all interested parties in all timezones the opportunity to provide feedback. Concretely, this means that such PRs should be held for 24
-hours before merging. Of course "complex" and "controversial" are left to the judgement of the people involved, but we trust that part of being a committer is the judgement required to evaluate such things honestly, and not be
+hours before merging. Of course "complex" and "controversial" are left to the judgment of the people involved, but we trust that part of being a committer is the judgment required to evaluate such things honestly, and not be
motivated by your desire (or your cube-mate's desire) to get their code merged. Also see "Holds" below, any reviewer can issue a "hold" to indicate that the PR is in fact complicated or complex and deserves further review.
PRs that are incorrectly judged to be merge-able, may be reverted and subject to re-review, if subsequent reviewers believe that they in fact are controversial or complex.

View File

@@ -221,7 +221,7 @@ go run hack/e2e.go -build -pushup -test -down
# seeing the output of failed commands.
# -ctl can be used to quickly call kubectl against your e2e cluster. Useful for
-# cleaning up after a failed test or viewing logs. Use -v to avoid supressing
+# cleaning up after a failed test or viewing logs. Use -v to avoid suppressing
# kubectl output.
go run hack/e2e.go -v -ctl='get events'
go run hack/e2e.go -v -ctl='delete pod foobar'

View File

@@ -135,7 +135,7 @@ might need later - but don't implement them now.
We understand that it is hard to imagine, but sometimes we make mistakes. It's
OK to push back on changes requested during a review. If you have a good reason
-for doing something a certain way, you are absolutley allowed to debate the
+for doing something a certain way, you are absolutely allowed to debate the
merits of a requested change. You might be overruled, but you might also
prevail. We're mostly pretty reasonable people. Mostly.
@@ -151,7 +151,7 @@ things you can do that might help kick a stalled process along:
* Ping the assignee (@username) on the PR comment stream asking for an
estimate of when they can get to it.
-* Ping the assigneed by email (many of us have email addresses that are well
+* Ping the assignee by email (many of us have email addresses that are well
published or are the same as our GitHub handle @google.com or @redhat.com).
If you think you have fixed all the issues in a round of review, and you haven't
@@ -171,7 +171,7 @@ explanation.
## Final: Use common sense
Obviously, none of these points are hard rules. There is no document that can
-take the place of common sense and good taste. Use your best judgement, but put
+take the place of common sense and good taste. Use your best judgment, but put
a bit of thought into how your work can be made easier to review. If you do
these things your PRs will flow much more easily.

View File

@@ -1,7 +1,7 @@
Logging Conventions
===================
-The following conventions for the glog levels to use. [glog](http://godoc.org/github.com/golang/glog) is globally prefered to [log](http://golang.org/pkg/log/) for better runtime control.
+The following conventions for the glog levels to use. [glog](http://godoc.org/github.com/golang/glog) is globally preferred to [log](http://golang.org/pkg/log/) for better runtime control.
* glog.Errorf() - Always an error
* glog.Warningf() - Something unexpected, but probably not an error
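A minimal sketch of the two levels listed above in a plain `main` package; the failing operation and the messages are invented for illustration.

```go
package main

import (
	"flag"

	"github.com/golang/glog"
)

func main() {
	flag.Parse()       // glog registers its flags (e.g. -logtostderr, -v) in its init
	defer glog.Flush() // make sure buffered log lines are written out

	if err := doSomething(); err != nil {
		// Always an error: the operation genuinely failed.
		glog.Errorf("doSomething failed: %v", err)
	}

	// Something unexpected, but probably not an error.
	glog.Warningf("config value missing, falling back to default")
}

func doSomething() error { return nil }
```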

View File

@@ -4,7 +4,7 @@ This document explain how to plug in profiler and how to profile Kubernetes serv
## Profiling library
-Go comes with inbuilt 'net/http/pprof' profiling library and profiling web service. The way service works is binding debug/pprof/ subtree on a running webserver to the profiler. Reading from subpages of debug/pprof returns pprof-formated profiles of the running binary. The output can be processed offline by the tool of choice, or used as an input to handy 'go tool pprof', which can graphically represent the result.
+Go comes with inbuilt 'net/http/pprof' profiling library and profiling web service. The way service works is binding debug/pprof/ subtree on a running webserver to the profiler. Reading from subpages of debug/pprof returns pprof-formatted profiles of the running binary. The output can be processed offline by the tool of choice, or used as an input to handy 'go tool pprof', which can graphically represent the result.
## Adding profiling to services to APIserver.
@@ -31,4 +31,4 @@ to get 30 sec. CPU profile.
## Contention profiling
-To enable contetion profiling you need to add line ```rt.SetBlockProfileRate(1)``` in addition to ```m.mux.HandleFunc(...)``` added before (```rt``` stands for ```runtime``` in ```master.go```). This enables 'debug/pprof/block' subpage, which can be used as an input to ```go tool pprof```.
+To enable contention profiling you need to add line ```rt.SetBlockProfileRate(1)``` in addition to ```m.mux.HandleFunc(...)``` added before (```rt``` stands for ```runtime``` in ```master.go```). This enables 'debug/pprof/block' subpage, which can be used as an input to ```go tool pprof```.
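A standalone sketch of the same setup outside of `master.go`: bind the `debug/pprof/` handlers onto a mux you own and enable block profiling. The mux and listen address are illustrative, not Kubernetes code.

```go
package main

import (
	"log"
	"net/http"
	"net/http/pprof"
	"runtime"
)

func main() {
	// Counterpart of rt.SetBlockProfileRate(1): record every blocking event
	// so that debug/pprof/block has data.
	runtime.SetBlockProfileRate(1)

	mux := http.NewServeMux()
	// Counterpart of the m.mux.HandleFunc(...) calls described above.
	mux.HandleFunc("/debug/pprof/", pprof.Index)
	mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
	mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)

	log.Fatal(http.ListenAndServe("localhost:8080", mux))
}
```

The block profile is then served under `debug/pprof/block` on that address and can be fed to `go tool pprof` as described above.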

View File

@@ -92,7 +92,7 @@ get` while in fact they do not match `v0.5` (the one that was tagged) exactly.
To handle that case, creating a new release should involve creating two adjacent
commits where the first of them will set the version to `v0.5` and the second
will set it to `v0.5-dev`. In that case, even in the presence of merges, there
-will be a single comit where the exact `v0.5` version will be used and all
+will be a single commit where the exact `v0.5` version will be used and all
others around it will either have `v0.4-dev` or `v0.5-dev`.
The diagram below illustrates it.

View File

@@ -6,7 +6,7 @@ guide.
A Getting Started Guide is instructions on how to create a Kubernetes cluster on top of a particular
type(s) of infrastructure. Infrastructure includes: the IaaS provider for VMs;
the node OS; inter-node networking; and node Configuration Management system.
-A guide refers to scripts, Configuration Manangement files, and/or binary assets such as RPMs. We call
+A guide refers to scripts, Configuration Management files, and/or binary assets such as RPMs. We call
the combination of all these things needed to run on a particular type of infrastructure a
**distro**.
@@ -39,7 +39,7 @@ These guidelines say *what* to do. See the Rationale section for *why*.
that are updated to the new version.
- Versioned distros should typically not modify or add code in `cluster/`. That is just scripts for developer
distros.
-- If a versioned distro has not been updated for many binary releases, it may be dropped frome the Matrix.
+- If a versioned distro has not been updated for many binary releases, it may be dropped from the Matrix.
If you have a cluster partially working, but doing all the above steps seems like too much work,
we still want to hear from you. We suggest you write a blog post or a Gist, and we will link to it on our wiki page.
@@ -58,13 +58,13 @@ These guidelines say *what* to do. See the Rationale section for *why*.
- a development distro needs to have an organization which owns it. This organization needs to:
- Setting up and maintaining Continuous Integration that runs e2e frequently (multiple times per day) against the
Distro at head, and which notifies all devs of breakage.
-- being reasonably available for questions and assiting with
+- being reasonably available for questions and assisting with
refactoring and feature additions that affect code for their IaaS.
## Rationale
- We want want people to create Kubernetes clusters with whatever IaaS, Node OS,
configuration management tools, and so on, which they are familiar with. The
-guidelines for **versioned distros** are designed for flexiblity.
+guidelines for **versioned distros** are designed for flexibility.
- We want developers to be able to work without understanding all the permutations of
IaaS, NodeOS, and configuration management. The guidelines for **developer distros** are designed
for consistency.

View File

@@ -71,7 +71,7 @@ partition
The first example selects all resources with key equal to `environment` and value equal to `production` or `qa`.
The second example selects all resources with key equal to `tier` and value other than `frontend` and `backend`.
The third example selects all resources including a label with key `partition`; no values are checked.
-Similary the comma separator acts as an _AND_ operator for example filtering resource with a `partition` key (not matter the value) and with `environment` different than `qa`. For example: `partition,environment notin (qa)`.
+Similarly the comma separator acts as an _AND_ operator for example filtering resource with a `partition` key (not matter the value) and with `environment` different than `qa`. For example: `partition,environment notin (qa)`.
The _set-based_ label selector is a general form of equality since `environment=production` is equivalent to `environment in (production)`; similarly for `!=` and `notin`.
_Set-based_ requirements can be mixed with _equality-based_ requirements. For example: `partition in (customerA, customerB),environment!=qa`.
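The sketch below spells out those semantics in plain Go, following the wording above; the `requirement` type and helpers are invented for illustration and are not the Kubernetes labels package. Each requirement is one clause, and a selector is the _AND_ of its requirements, mirroring the comma separator.

```go
package main

import "fmt"

// requirement is one clause of a selector, e.g. `environment in (production, qa)`.
type requirement struct {
	key    string
	op     string   // "in", "notin", or "exists"
	values []string // unused for "exists"
}

func (r requirement) matches(labels map[string]string) bool {
	v, ok := labels[r.key]
	switch r.op {
	case "exists": // key is present; no values are checked
		return ok
	case "in": // key present and value in the listed set
		if !ok {
			return false
		}
		for _, allowed := range r.values {
			if v == allowed {
				return true
			}
		}
		return false
	case "notin": // key present and value not in the listed set
		if !ok {
			return false
		}
		for _, excluded := range r.values {
			if v == excluded {
				return false
			}
		}
		return true
	}
	return false
}

// selectorMatches ANDs all requirements, like `partition,environment notin (qa)`.
func selectorMatches(reqs []requirement, labels map[string]string) bool {
	for _, r := range reqs {
		if !r.matches(labels) {
			return false
		}
	}
	return true
}

func main() {
	pod := map[string]string{"partition": "customerA", "environment": "production"}
	selector := []requirement{
		{key: "partition", op: "exists"},
		{key: "environment", op: "notin", values: []string{"qa"}},
	}
	fmt.Println(selectorMatches(selector, pod)) // true
}
```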

View File

@@ -23,7 +23,7 @@ Kibana logging dashboard will be available at https://130.211.152.93/api/v1beta1
Visiting the Kibana dashboard URL in a browser should give a display like this:
![Kibana](kibana.png)
-To learn how to query, fitler etc. using Kibana you might like to look at this [tutorial](http://www.elasticsearch.org/guide/en/kibana/current/working-with-queries-and-filters.html).
+To learn how to query, filter etc. using Kibana you might like to look at this [tutorial](http://www.elasticsearch.org/guide/en/kibana/current/working-with-queries-and-filters.html).
You can check to see if any logs are being ingested into Elasticsearch by curling against its URL. You will need to provide the username and password that was generated when your cluster was created. This can be found in the `kubernetes_auth` file for your cluster.
```
@@ -38,7 +38,7 @@ Cluster logging can be turned on or off using the environment variable `ENABLE_N
to `false` to disable cluster logging.
The type of logging is used is specified by the environment variable `LOGGING_DESTINATION` which for the
-GCE provider has the default value `elasticsearch`. If this is set to `gcp` for the GCE provder then
+GCE provider has the default value `elasticsearch`. If this is set to `gcp` for the GCE provider then
logs will be sent to the Google Cloud Logging system instead.
When using Elasticsearch the number of Elasticsearch instances can be controlled by setting the

View File

@@ -104,7 +104,7 @@ any binary; therefore, to
join Kubernetes cluster, you as an admin need to make sure proper services are
running in the node. In the future, we plan to automatically provision some node
services. In case of no cloud provider, Node Controller simply registers all
-machines from `--machines` flag, any futher interactions need to be done manually
+machines from `--machines` flag, any further interactions need to be done manually
by using `kubectl`. If you are paranoid, leave `--machines` empty and create all
machines from `kubectl` one by one - the two approaches are equivalent.
Optionally you can skip cluster-wide node synchronization with

View File

@@ -4,7 +4,7 @@ Kubernetes is an open-source system for managing containerized applications acro
Today, Kubernetes supports just [Docker](http://www.docker.io) containers, but other container image formats and container runtimes will be supported in the future (e.g., [Rocket](https://coreos.com/blog/rocket/) support is in progress). Similarly, while Kubernetes currently focuses on continuously-running stateless (e.g. web server or in-memory object cache) and "cloud native" stateful applications (e.g. NoSQL datastores), in the near future it will support all the other workload types commonly found in production cluster environments, such as batch, stream processing, and traditional databases.
-In Kubernetes, all containers run inside [pods](pods.md). A pod can host a single container, or multiple cooperating containers; in the latter case, the containers in the pod are guaranteed to be co-located on the same machine and can share resources. A pod can also contain zero or more [volumes](volumes.md), which are directories that are private to a container or shared across containers in a pod. For each pod the user creates, the system finds a machine that is healthy and that has sufficient availabile capacity, and starts up the corresponding container(s) there. If a container fails it can be automatically restarted by Kubernetes' node agent, called the Kubelet. But if the pod or its machine fails, it is not automatically moved or restarted unless the user also defines a [replication controller](replication-controller.md), which we discuss next.
+In Kubernetes, all containers run inside [pods](pods.md). A pod can host a single container, or multiple cooperating containers; in the latter case, the containers in the pod are guaranteed to be co-located on the same machine and can share resources. A pod can also contain zero or more [volumes](volumes.md), which are directories that are private to a container or shared across containers in a pod. For each pod the user creates, the system finds a machine that is healthy and that has sufficient available capacity, and starts up the corresponding container(s) there. If a container fails it can be automatically restarted by Kubernetes' node agent, called the Kubelet. But if the pod or its machine fails, it is not automatically moved or restarted unless the user also defines a [replication controller](replication-controller.md), which we discuss next.
Users can create and manage pods themselves, but Kubernetes drastically simplifies system management by allowing users to delegate two common pod-related activities: deploying multiple pod replicas based on the same pod configuration, and creating replacement pods when a pod or its machine fails. The Kubernetes API object that manages these behaviors is called a [replication controller](replication-controller.md). It defines a pod in terms of a template, that the system then instantiates as some number of pods (specified by the user). The replicated set of pods might constitute an entire application, a micro-service, or one layer in a multi-tier application. Once the pods are created, the system continually monitors their health and that of the machines they are running on; if a pod fails due to a software problem or machine failure, the replication controller automatically creates a new pod on a healthy machine, to maintain the set of pods at the desired replication level. Multiple pods from the same or different applications can share the same machine. Note that a replication controller is needed even in the case of a single non-replicated pod if the user wants it to be re-created when it or its machine fails.

View File

@@ -50,7 +50,7 @@ _Why not just run multiple programs in a single (Docker) container?_
1. Transparency. Making the containers within the pod visible to the infrastructure enables the infrastructure to provide services to those containers, such as process management and resource monitoring. This facilitates a number of conveniences for users.
2. Decoupling software dependencies. The individual containers may be rebuilt and redeployed independently. Kubernetes may even support live updates of individual containers someday.
3. Ease of use. Users don't need to run their own process managers, worry about signal and exit-code propagation, etc.
-4. Efficiency. Because the infrastructure takes on more responsibility, containers can be lighterweight.
+4. Efficiency. Because the infrastructure takes on more responsibility, containers can be lighter weight.
_Why not support affinity-based co-scheduling of containers?_

View File

@@ -79,7 +79,7 @@ files. Currently, if a program expects a secret to be stored in an environment
variable, then the user needs to modify the image to populate the environment
variable from the file as an step before running the main program. Future
versions of Kubernetes are expected to provide more automation for populating
-enviroment variables from files.
+environment variables from files.
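A minimal, hypothetical wrapper illustrating that manual step: read the secret from its file, export it as an environment variable, then start the main program. The file path, variable name, and program path are all made up.

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Hypothetical path where the secret volume is mounted.
	data, err := os.ReadFile("/etc/secret-volume/api-token")
	if err != nil {
		log.Fatalf("reading secret file: %v", err)
	}
	os.Setenv("API_TOKEN", strings.TrimSpace(string(data)))

	// Hypothetical main program that expects API_TOKEN in its environment.
	cmd := exec.Command("/app/main-program")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	cmd.Env = os.Environ() // includes API_TOKEN set above
	if err := cmd.Run(); err != nil {
		log.Fatalf("main program failed: %v", err)
	}
}
```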
## Changes to Secrets

View File

@@ -16,7 +16,7 @@ Enter `Services`.
A Kubernetes `Service` is an abstraction which defines a logical set of `Pods`
and a policy by which to access them - sometimes called a micro-service. The
-set of `Pods` targetted by a `Service` is determined by a [`Label
+set of `Pods` targeted by a `Service` is determined by a [`Label
Selector`](labels.md).
As an example, consider an image-processing backend which is running with 3

View File

@@ -72,7 +72,7 @@ omitted if you first run
```bash
export KUBECONFIG=/path/to/standalone/.kube/config
```
-* The ca_file, key_file, and cert_file referrenced above are generated on the
+* The ca_file, key_file, and cert_file referenced above are generated on the
kube master at cluster turnup. They can be found on the master under
`/srv/kubernetes`. Bearer token/basic auth are also generated on the kube master.

View File

@@ -21,7 +21,7 @@ When Kubernetes is deployed, the server deploys the UI, you can visit ```/static
The Kubernetes user interface is a query-based visualization of the Kubernetes API. The user interface is defined by two functional primitives:
#### GroupBy
-_GroupBy_ takes a label ```key``` as a parameter, places all objects with the same value for that key within a single group. For example ```/groups/host/selector``` groups pods by host. ```/groups/name/selector``` groups pods by name. Groups are hiearchical, for example ```/groups/name/host/selector``` first groups by pod name, and then by host.
+_GroupBy_ takes a label ```key``` as a parameter, places all objects with the same value for that key within a single group. For example ```/groups/host/selector``` groups pods by host. ```/groups/name/selector``` groups pods by name. Groups are hierarchical, for example ```/groups/name/host/selector``` first groups by pod name, and then by host.
#### Select
Select takes a [label selector](./labels.md) and uses it to filter, so only resources which match that label selector are displayed. For example, ```/groups/host/selector/name=frontend```, shows pods, grouped by host, which have a label with the name `frontend`.