diff --git a/docs/api-conventions.md b/docs/api-conventions.md
index 07f540cd976..fd8b5ea4420 100644
--- a/docs/api-conventions.md
+++ b/docs/api-conventions.md
@@ -420,7 +420,7 @@ The following HTTP status codes may be returned by the API.
   * Suggested client recovery behavior
     * Do not retry. Fix the request.
 * `405 StatusMethodNotAllowed`
-  * Indicates that that the action the client attempted to perform on the resource was not supported by the code.
+  * Indicates that the action the client attempted to perform on the resource was not supported by the code.
   * Suggested client recovery behavior
     * Do not retry. Fix the request.
 * `409 StatusConflict`
@@ -570,7 +570,7 @@ Possible values for the ```reason``` and ```details``` fields:
   * The server should set the `Retry-After` HTTP header and return `retryAfterSeconds` in the details field of the object. A value of `0` is the default.
   * Http status code: `504 StatusServerTimeout`
 * `MethodNotAllowed`
-  * Indicates that that the action the client attempted to perform on the resource was not supported by the code.
+  * Indicates that the action the client attempted to perform on the resource was not supported by the code.
   * For instance, attempting to delete a resource that can only be created.
   * API calls that return MethodNotAllowed can never succeed.
   * Http status code: `405 StatusMethodNotAllowed`
diff --git a/docs/availability.md b/docs/availability.md
index 035201e4f42..f1ed8570e5b 100644
--- a/docs/availability.md
+++ b/docs/availability.md
@@ -116,7 +116,7 @@ Second, decide how many clusters should be able to be unavailable at the same ti
 the number that can be unavailable `U`. If you are not sure, then 1 is a fine choice.
 
 If it is allowable for load-balancing to direct traffic to any region in the event of a cluster failure, then
-then you need `R + U` clusters. If it is not (e.g you want to ensure low latency for all users in the event of a
+you need `R + U` clusters. If it is not (e.g you want to ensure low latency for all users in the event of a
 cluster failure), then you need to have `R * U` clusters (`U` in each of `R` regions). In any case, try to put each cluster in a different zone.
 
 Finally, if any of your clusters would need more than the maximum recommended number of nodes for a Kubernetes cluster, then
diff --git a/docs/design/service_accounts.md b/docs/design/service_accounts.md
index bd10336fcb2..c6ceb6b212f 100644
--- a/docs/design/service_accounts.md
+++ b/docs/design/service_accounts.md
@@ -90,7 +90,7 @@ The distinction is useful for a number of reasons:
 Pod Object.
 
 The `secrets` field is a list of references to /secret objects that an process started as that service account should
-have access to to be able to assert that role.
+have access to be able to assert that role.
 
 The secrets are not inline with the serviceAccount object. This way, most or all users can have permission to `GET /serviceAccounts` so they can remind themselves
 what serviceAccounts are available for use.
@@ -150,7 +150,7 @@ then it copies in the referenced securityContext and secrets references for the
 
 Second, if ServiceAccount definitions change, it may take some actions.
 **TODO**: decide what actions it takes when a serviceAccount definition changes. Does it stop pods, or just
-allow someone to list ones that out out of spec? In general, people may want to customize this?
+allow someone to list ones that are out of spec? In general, people may want to customize this?
 
 Third, if a new namespace is created, it may create a new serviceAccount for that namespace. This may include a new
 username (e.g. `NAMESPACE-default-service-account@serviceaccounts.$CLUSTERID.kubernetes.io`), a new
diff --git a/docs/devel/api_changes.md b/docs/devel/api_changes.md
index 17278c6ef5e..de073677dc5 100644
--- a/docs/devel/api_changes.md
+++ b/docs/devel/api_changes.md
@@ -177,7 +177,7 @@ need to add cases to `pkg/api/<version>/defaults.go`.
 Of course, since you have added code, you have to add a test: `pkg/api/<version>/defaults_test.go`.
 
 Do use pointers to scalars when you need to distinguish between an unset value
-and an an automatic zero value. For example,
+and an automatic zero value. For example,
 `PodSpec.TerminationGracePeriodSeconds` is defined as `*int64` the go type
 definition. A zero value means 0 seconds, and a nil value asks the system to
 pick a default.
diff --git a/docs/devel/scheduler_algorithm.md b/docs/devel/scheduler_algorithm.md
index f353a4ed0d3..2d239f2bd77 100644
--- a/docs/devel/scheduler_algorithm.md
+++ b/docs/devel/scheduler_algorithm.md
@@ -16,7 +16,7 @@ The details of the above predicates can be found in [plugin/pkg/scheduler/algori
 
 ## Ranking the nodes
 
-The filtered nodes are considered suitable to host the Pod, and it is often that there are more than one nodes remaining. Kubernetes prioritizes the remaining nodes to to find the "best" one for the Pod. The prioritization is performed by a set of priority functions. For each remaining node, a priority function gives a score which scales from 0-10 with 10 representing for "most preferred" and 0 for "least preferred". Each priority function is weighted by a positive number and the final score of each node is calculated by adding up all the weighted scores. For example, suppose there are two priority functions, `priorityFunc1` and `priorityFunc2` with weighting factors `weight1` and `weight2` respectively, the final score of some NodeA is:
+The filtered nodes are considered suitable to host the Pod, and it is often that there are more than one nodes remaining. Kubernetes prioritizes the remaining nodes to find the "best" one for the Pod. The prioritization is performed by a set of priority functions. For each remaining node, a priority function gives a score which scales from 0-10 with 10 representing for "most preferred" and 0 for "least preferred". Each priority function is weighted by a positive number and the final score of each node is calculated by adding up all the weighted scores. For example, suppose there are two priority functions, `priorityFunc1` and `priorityFunc2` with weighting factors `weight1` and `weight2` respectively, the final score of some NodeA is:
 
     finalScoreNodeA = (weight1 * priorityFunc1) + (weight2 * priorityFunc2)
 
diff --git a/docs/devel/writing-a-getting-started-guide.md b/docs/devel/writing-a-getting-started-guide.md
index d7452c0985e..4085236180b 100644
--- a/docs/devel/writing-a-getting-started-guide.md
+++ b/docs/devel/writing-a-getting-started-guide.md
@@ -62,7 +62,7 @@ These guidelines say *what* to do. See the Rationale section for *why*.
    refactoring and feature additions that affect code for their IaaS.
 
 ## Rationale
- - We want want people to create Kubernetes clusters with whatever IaaS, Node OS,
+ - We want people to create Kubernetes clusters with whatever IaaS, Node OS,
    configuration management tools, and so on, which they are familiar with.
    The guidelines for **versioned distros** are designed for flexibility.
 - We want developers to be able to work without understanding all the permutations of
@@ -81,7 +81,7 @@ These guidelines say *what* to do. See the Rationale section for *why*.
    gate commits on passing CI for all distros, and since end-to-end tests are typically somewhat
    flaky, it would be highly likely for there to be false positives and CI backlogs with many CI pipelines.
 - We do not require versioned distros to do **CI** for several reasons. It is a steep
-   learning curve to understand our our automated testing scripts. And it is considerable effort
+   learning curve to understand our automated testing scripts. And it is considerable effort
    to fully automate setup and teardown of a cluster, which is needed for CI. And, not everyone
    has the time and money to run CI. We do not want to discourage people from writing and
    sharing guides because of this.
diff --git a/docs/getting-started-guides/gce.md b/docs/getting-started-guides/gce.md
index 87e57b6ac0a..d3eda30f8f5 100644
--- a/docs/getting-started-guides/gce.md
+++ b/docs/getting-started-guides/gce.md
@@ -130,7 +130,7 @@ $ kubectl get --all-namespaces pods
 ```
 command.
 
-You'll see see a list of pods that looks something like this (the name specifics will be different):
+You'll see a list of pods that looks something like this (the name specifics will be different):
 
 ```shell
 NAMESPACE     NAME     READY     STATUS    RESTARTS   AGE
diff --git a/docs/getting-started-guides/logging-elasticsearch.md b/docs/getting-started-guides/logging-elasticsearch.md
index a11e148812c..5de222c7c89 100644
--- a/docs/getting-started-guides/logging-elasticsearch.md
+++ b/docs/getting-started-guides/logging-elasticsearch.md
@@ -152,7 +152,7 @@ users:
     token: JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp
 ```
 
-Now you can can issue requests to Elasticsearch:
+Now you can issue requests to Elasticsearch:
 ```
 $ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insecure https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/
 {
diff --git a/docs/persistent-volumes.md b/docs/persistent-volumes.md
index e21f615e4b0..0b57d6d7512 100644
--- a/docs/persistent-volumes.md
+++ b/docs/persistent-volumes.md
@@ -29,7 +29,7 @@ Claims will remain unbound indefinitely if a matching volume does not exist. Cl
 
 Pods use claims as volumes. The cluster inspects the claim to find the bound volume and mounts that volume for a pod. For those volumes that support multiple access modes, the user specifies which mode desired when using their claim as a volume in a pod.
 
-Once a user has a claim and that claim is bound, the bound PV belongs to the user for as long as she needs it. Users schedule Pods and access their their claimed PVs by including a persistentVolumeClaim in their Pod's volumes block. [See below for syntax details](#claims-as-volumes).
+Once a user has a claim and that claim is bound, the bound PV belongs to the user for as long as she needs it. Users schedule Pods and access their claimed PVs by including a persistentVolumeClaim in their Pod's volumes block. [See below for syntax details](#claims-as-volumes).
 
 ### Releasing
 
@@ -113,7 +113,7 @@ Currently, NFS and HostPath support recycling.
 
 A volume will be in one of the following phases:
 
-* Available -- a free resource resource that is not yet bound to a claim
+* Available -- a free resource that is not yet bound to a claim
 * Bound -- the volume is bound to a claim
 * Released -- the claim has been deleted, but the resource is not yet reclaimed by the cluster
 * Failed -- the volume has failed its automatic reclamation
diff --git a/docs/replication-controller.md b/docs/replication-controller.md
index 89a6c58b3fc..0de71face4e 100644
--- a/docs/replication-controller.md
+++ b/docs/replication-controller.md
@@ -20,7 +20,7 @@ Pods created by a replication controller are intended to be fungible and semanti
 
 ### Labels
 
-The population of pods that a replication controller is monitoring is defined with a [label selector](labels.md#label-selectors), which creates a loosely coupled relationship between the controller and the pods controlled, in contrast to pods, which are more tightly coupled to their definition. We deliberately chose not to represent the set of pods controlled using a fixed-length array of pod specifications, because our experience is that that approach increases complexity of management operations, for both clients and the system.
+The population of pods that a replication controller is monitoring is defined with a [label selector](labels.md#label-selectors), which creates a loosely coupled relationship between the controller and the pods controlled, in contrast to pods, which are more tightly coupled to their definition. We deliberately chose not to represent the set of pods controlled using a fixed-length array of pod specifications, because our experience is that approach increases complexity of management operations, for both clients and the system.
 
 The replication controller should verify that the pods created from the specified template have labels that match its label selector. Though it isn't verified yet, you should also ensure that only one replication controller controls any given pod, by ensuring that the label selectors of replication controllers do not target overlapping sets.
 
diff --git a/docs/services.md b/docs/services.md
index b7988251a57..1a63d9eb1a2 100644
--- a/docs/services.md
+++ b/docs/services.md
@@ -436,7 +436,7 @@ must exist in the registry for services to get IPs, otherwise creations will
 fail with a message indicating an IP could not be allocated. A background
 controller is responsible for creating that map (to migrate from older versions
 of Kubernetes that used in memory locking) as well as checking for invalid
-assignments due to administrator intervention and cleaning up any any IPs
+assignments due to administrator intervention and cleaning up any IPs
 that were allocated but which no service currently uses.
 
 ### IPs and VIPs
diff --git a/docs/user-guide/connecting-applications.md b/docs/user-guide/connecting-applications.md
index 8bed0872a94..598a30fe2b8 100644
--- a/docs/user-guide/connecting-applications.md
+++ b/docs/user-guide/connecting-applications.md
@@ -113,7 +113,7 @@ NAME            LABELS             SELECTOR   IP(S)       PORT(S)
 kube-dns        k8s-app=kube-dns              10.0.0.10   53/UDP 53/TCP
 ```
 
-If it isn’t running, you can [enable it](../../cluster/addons/dns/README.md#how-do-i-configure-it). The rest of this section will assume you have a Service with a long lived ip (nginxsvc), and a dns server that has assigned a name to that ip (the kube-dns cluster addon), so you can talk to the Service from any pod in your cluster using using standard methods (e.g. gethostbyname). Let’s create another pod to test this:
+If it isn’t running, you can [enable it](../../cluster/addons/dns/README.md#how-do-i-configure-it). The rest of this section will assume you have a Service with a long lived ip (nginxsvc), and a dns server that has assigned a name to that ip (the kube-dns cluster addon), so you can talk to the Service from any pod in your cluster using standard methods (e.g. gethostbyname). Let’s create another pod to test this:
 
 ```yaml
 apiVersion: v1
diff --git a/docs/volumes.md b/docs/volumes.md
index c1ed213b012..f160f8bcb07 100644
--- a/docs/volumes.md
+++ b/docs/volumes.md
@@ -2,8 +2,8 @@
 
 On-disk files in a container are ephemeral, which presents some problems for
 non-trivial applications when running in containers. First, when a container
-crashes kubelet will restart it, but the files will be lost lost - the
-container starts with a clean slate. second, when running containers together
+crashes kubelet will restart it, but the files will be lost - the
+container starts with a clean slate. Second, when running containers together
 in a `Pod` it is often necessary to share files between those containers. The
 Kubernetes `Volume` abstraction solves both of these problems.
 
@@ -130,7 +130,7 @@ and then serve it in parallel from as many pods as you need.
 
 Unfortunately, PDs can only be mounted by a single consumer in read-write mode - no
 simultaneous readers allowed.
-Using a PD on a pod controlled by a ReplicationController will will fail unless
+Using a PD on a pod controlled by a ReplicationController will fail unless
 the PD is read-only or the replica count is 0 or 1.
 
 #### Creating a PD