diff --git a/cluster/juju/bundles/README.md b/cluster/juju/bundles/README.md
index 25edd866d80..3205cf582c0 100644
--- a/cluster/juju/bundles/README.md
+++ b/cluster/juju/bundles/README.md
@@ -33,7 +33,7 @@ juju-quickstart`) or Ubuntu (`apt-get install juju-quickstart`).
 
 Use the 'juju quickstart' command to deploy a Kubernetes cluster to any cloud
 supported by Juju.
 
-The charm store version of the Kubernetes bundle can be deployed as folllows:
+The charm store version of the Kubernetes bundle can be deployed as follows:
 
     juju quickstart u/kubernetes/kubernetes-cluster
 
diff --git a/docs/admin/authentication.md b/docs/admin/authentication.md
index a09db1d9303..df5f1e0a72d 100644
--- a/docs/admin/authentication.md
+++ b/docs/admin/authentication.md
@@ -62,7 +62,7 @@ to the OpenID provider.
 will be used, which should be unique and immutable under the issuer's domain.
 Cluster administrator can choose other claims such as `email` to use as the user name,
 but the uniqueness and immutability is not guaranteed.
-Please note that this flag is still experimental until we settle more on how to handle the mapping of the OpenID user to the Kubernetes user. Thus futher changes are possible.
+Please note that this flag is still experimental until we settle more on how to handle the mapping of the OpenID user to the Kubernetes user. Thus further changes are possible.
 
 Currently, the ID token will be obtained by some third-party app. This means the app and apiserver
 MUST share the `--oidc-client-id`.
diff --git a/docs/admin/cluster-management.md b/docs/admin/cluster-management.md
index 1835e838ca4..efd04abc3a4 100644
--- a/docs/admin/cluster-management.md
+++ b/docs/admin/cluster-management.md
@@ -126,7 +126,7 @@ The autoscaler will try to maintain the average CPU and memory utilization of no
 The target value can be configured by ```KUBE_TARGET_NODE_UTILIZATION``` environment variable (default: 0.7)
 for ``kube-up.sh`` when creating the cluster.
 The node utilization is the total node's CPU/memory usage (OS + k8s + user load) divided by the node's capacity.
 If the desired numbers of nodes in the cluster resulting from CPU utilization and memory utilization are different,
-the autosclaer will choose the bigger number.
+the autoscaler will choose the bigger number.
 The number of nodes in the cluster set by the autoscaler will be limited from ```KUBE_AUTOSCALER_MIN_NODES```
 (default: 1) to ```KUBE_AUTOSCALER_MAX_NODES``` (default: the initial number of nodes in the cluster).
diff --git a/docs/design/extending-api.md b/docs/design/extending-api.md
index cca257bd195..bbd02a54e05 100644
--- a/docs/design/extending-api.md
+++ b/docs/design/extending-api.md
@@ -71,7 +71,7 @@ Kubernetes API server to provide the following features:
    * Watch for resource changes.
 
 The `Kind` for an instance of a third-party object (e.g. CronTab) below is expected to be
-programnatically convertible to the name of the resource using
+programmatically convertible to the name of the resource using
 the following conversion. Kinds are expected to be of the form ``, the
 `APIVersion` for the object is expected to be `//`.
@@ -178,7 +178,7 @@ and get back:
 }
 ```
 
-Because all objects are expected to contain standard Kubernetes metdata fileds, these
+Because all objects are expected to contain standard Kubernetes metadata fields, these
 list operations can also use `Label` queries to filter requests down to specific subsets.
 
 Likewise, clients can use watch endpoints to watch for changes to stored objects.
diff --git a/docs/getting-started-guides/coreos/coreos_multinode_cluster.md b/docs/getting-started-guides/coreos/coreos_multinode_cluster.md
index cb71683d1f2..468d476a599 100644
--- a/docs/getting-started-guides/coreos/coreos_multinode_cluster.md
+++ b/docs/getting-started-guides/coreos/coreos_multinode_cluster.md
@@ -215,7 +215,7 @@ where `````` is the IP address that was available from the ```nova f
 
 #### Provision Worker Nodes
 
-Edit ```node.yaml``` and replace all instances of `````` with the private IP address of the master node. You can get this by runnning ```nova show kube-master``` assuming you named your instance kube master. This is not the floating IP address you just assigned it.
+Edit ```node.yaml``` and replace all instances of `````` with the private IP address of the master node. You can get this by running ```nova show kube-master``` assuming you named your instance kube master. This is not the floating IP address you just assigned it.
 
 ```sh
 nova boot \
diff --git a/docs/getting-started-guides/docker-multinode/deployDNS.md b/docs/getting-started-guides/docker-multinode/deployDNS.md
index 7d6bb84a09c..5a9a43214a3 100644
--- a/docs/getting-started-guides/docker-multinode/deployDNS.md
+++ b/docs/getting-started-guides/docker-multinode/deployDNS.md
@@ -55,7 +55,7 @@ $ export DNS_SERVER_IP=10.0.0.10 # specify in startup parameter `--cluster-dns`
 $ export KUBE_SERVER=10.10.103.250 # your master server ip, you may change it
 ```
 
-### Replace the correponding value in the template.
+### Replace the corresponding value in the template.
 
 ```
 $ sed -e "s/{{ pillar\['dns_replicas'\] }}/${DNS_REPLICAS}/g;s/{{ pillar\['dns_domain'\] }}/${DNS_DOMAIN}/g;s/{kube_server_url}/${KUBE_SERVER}/g;" skydns-rc.yaml.in > ./skydns-rc.yaml
diff --git a/docs/getting-started-guides/scratch.md b/docs/getting-started-guides/scratch.md
index 935a0284874..0be0d273720 100644
--- a/docs/getting-started-guides/scratch.md
+++ b/docs/getting-started-guides/scratch.md
@@ -235,7 +235,7 @@ You have several choices for Kubernetes images:
     - The release contains files such as `./kubernetes/server/bin/kube-apiserver.tar` which
      can be converted into docker images using a command like `docker load -i kube-apiserver.tar`
-    - You can verify if the image is loaded successfully with the right reposity and tag using
+    - You can verify if the image is loaded successfully with the right repository and tag using
      command like `docker images`
 
 For etcd, you can:
diff --git a/docs/getting-started-guides/ubuntu-calico.md b/docs/getting-started-guides/ubuntu-calico.md
index 2bc4c6768c7..80b96341776 100644
--- a/docs/getting-started-guides/ubuntu-calico.md
+++ b/docs/getting-started-guides/ubuntu-calico.md
@@ -163,7 +163,7 @@ cp calico-kubernetes-ubuntu-demo-master/node/network-environment-template networ
 
 3.) Edit `network-environment` to represent your current host's settings.
 
-4.) Move `netework-environment` into `/etc`
+4.) Move `network-environment` into `/etc`
 
 ```
 sudo mv -f network-environment /etc
diff --git a/docs/proposals/apiserver-watch.md b/docs/proposals/apiserver-watch.md
index 6bc2d33fc29..b80690301ef 100644
--- a/docs/proposals/apiserver-watch.md
+++ b/docs/proposals/apiserver-watch.md
@@ -60,7 +60,7 @@ the objects (of a given type) without any filtering.
 The changes delivered from etcd will then be stored in a cache in apiserver.
 This cache is in fact a "rolling history window" that will support clients having
 some amount of latency between their list and watch calls.
 Thus it will have a limited capacity and
-whenever a new change comes from etcd when a cache is full, othe oldest change
+whenever a new change comes from etcd when a cache is full, the oldest change
 will be remove to make place for the new one.
 
 When a client sends a watch request to apiserver, instead of redirecting it to
@@ -159,7 +159,7 @@ necessary.
   In such case, to avoid LIST requests coming from all watchers at the same time,
   we can introduce an additional etcd event type:
   [EtcdResync](../../pkg/storage/etcd/etcd_watcher.go#L36)
-  Whenever reslisting will be done to refresh the internal watch to etcd,
+  Whenever relisting will be done to refresh the internal watch to etcd,
   EtcdResync event will be send to all the watchers. It will contain the full
   list of all the objects the watcher is interested in (appropriately filtered)
   as the parameter of this watch event.
diff --git a/docs/proposals/federation.md b/docs/proposals/federation.md
index 7b642bb3836..34df0aee55b 100644
--- a/docs/proposals/federation.md
+++ b/docs/proposals/federation.md
@@ -518,7 +518,7 @@ thus far:
    approach.
 1. A more monolithic architecture, where a single instance of the Kubernetes
    control plane itself manages a single logical cluster
-   composed of nodes in multiple availablity zones and cloud
+   composed of nodes in multiple availability zones and cloud
    providers.
 
 A very brief, non-exhaustive list of pro's and con's of the two
@@ -563,12 +563,12 @@ prefers the Decoupled Hierarchical model for the reasons stated below).
    largely independently (different sets of developers, different release
    schedules etc).
 1. **Administration complexity:** Again, I think that this could be argued
-   both ways. Superficially it woud seem that administration of a
+   both ways. Superficially it would seem that administration of a
    single Monolithic multi-zone cluster might be simpler by virtue of
    being only "one thing to manage", however in practise each of the
    underlying availability zones (and possibly cloud providers) has
    it's own capacity, pricing, hardware platforms, and possibly
-   bureaucratic boudaries (e.g. "our EMEA IT department manages those
+   bureaucratic boundaries (e.g. "our EMEA IT department manages those
    European clusters"). So explicitly allowing for (but not mandating)
    completely independent administration of each underlying Kubernetes
    cluster, and the Federation system itself,
diff --git a/docs/proposals/horizontal-pod-autoscaler.md b/docs/proposals/horizontal-pod-autoscaler.md
index 47a69b2d177..c10f54f7881 100644
--- a/docs/proposals/horizontal-pod-autoscaler.md
+++ b/docs/proposals/horizontal-pod-autoscaler.md
@@ -56,7 +56,7 @@ We are going to introduce Scale subresource and implement horizontal autoscaling
 Scale subresource will be supported for replication controllers and deployments.
 Scale subresource will be a Virtual Resource (will not be stored in etcd as a separate object).
 It will be only present in API as an interface to accessing replication controller or deployment,
-and the values of Scale fields will be inferred from the corresponing replication controller/deployment object.
+and the values of Scale fields will be inferred from the corresponding replication controller/deployment object.
 HorizontalPodAutoscaler object will be bound with exactly one Scale subresource and will be autoscaling
 associated replication controller/deployment through it.
 The main advantage of such approach is that whenever we introduce another type we want to auto-scale,
@@ -132,7 +132,7 @@ type HorizontalPodAutoscaler struct {
 // HorizontalPodAutoscalerSpec is the specification of a horizontal pod autoscaler.
 type HorizontalPodAutoscalerSpec struct {
 	// ScaleRef is a reference to Scale subresource. HorizontalPodAutoscaler will learn the current
-	// resource consumption from its status, and will set the desired number of pods by modyfying its spec.
+	// resource consumption from its status, and will set the desired number of pods by modifying its spec.
 	ScaleRef *SubresourceReference
 	// MinCount is the lower limit for the number of pods that can be set by the autoscaler.
 	MinCount int
@@ -151,7 +151,7 @@ type HorizontalPodAutoscalerStatus struct {
 	CurrentReplicas int
 
 	// DesiredReplicas is the desired number of replicas of pods managed by this autoscaler.
-	// The number may be different because pod downscaling is someteimes delayed to keep the number
+	// The number may be different because pod downscaling is sometimes delayed to keep the number
 	// of pods stable.
 	DesiredReplicas int
 
@@ -161,7 +161,7 @@ type HorizontalPodAutoscalerStatus struct {
 	CurrentConsumption ResourceConsumption
 
 	// LastScaleTimestamp is the last time the HorizontalPodAutoscaler scaled the number of pods.
-	// This is used by the autoscaler to controll how often the number of pods is changed.
+	// This is used by the autoscaler to control how often the number of pods is changed.
 	LastScaleTimestamp *util.Time
 }
diff --git a/docs/proposals/rescheduler.md b/docs/proposals/rescheduler.md
index b27b9bfe4bb..88747d08062 100644
--- a/docs/proposals/rescheduler.md
+++ b/docs/proposals/rescheduler.md
@@ -96,7 +96,7 @@ case, the nodes we move the Pods onto might have been in the system for a long t
 have been added by the cluster auto-scaler specifically to allow the rescheduler
 to rebalance utilization.
 
-A second spreading use case is to separate antagnosits.
+A second spreading use case is to separate antagonists.
 Sometimes the processes running in two different Pods on the same node
 may have unexpected antagonistic behavior towards one another.
 A system component might monitor for such
diff --git a/docs/user-guide/compute-resources.md b/docs/user-guide/compute-resources.md
index eea880b1b63..8e961d35e6e 100644
--- a/docs/user-guide/compute-resources.md
+++ b/docs/user-guide/compute-resources.md
@@ -196,7 +196,7 @@ TotalResourceLimits:
 [ ... lines removed for clarity ...]
 ```
 
-Here you can see from the `Allocated resorces` section that that a pod which ask for more than
+Here you can see from the `Allocated resources` section that a pod which asks for more than
 90 millicpus or more than 1341MiB of memory will not be able to fit on this node.
 
 Looking at the `Pods` section, you can see which pods are taking up space on the node.
diff --git a/docs/user-guide/deploying-applications.md b/docs/user-guide/deploying-applications.md
index 667f21d2d7b..a8fa89357ad 100644
--- a/docs/user-guide/deploying-applications.md
+++ b/docs/user-guide/deploying-applications.md
@@ -53,7 +53,7 @@ Kubernetes creates and manages sets of replicated containers (actually, replicat
 A replication controller simply ensures that a specified number of pod "replicas" are running
 at any one time. If there are too many, it will kill some. If there are too few, it will start more.
 It’s analogous to Google Compute Engine’s [Instance Group Manager](https://cloud.google.com/compute/docs/instance-groups/manager/) or AWS’s [Auto-scaling Group](http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingGroup.html) (with no scaling policies).
 
-The replication controller created to run nginx by `kubctl run` in the [Quick start](quick-start.md) could be specified using YAML as follows:
+The replication controller created to run nginx by `kubectl run` in the [Quick start](quick-start.md) could be specified using YAML as follows:
 
 ```yaml
 apiVersion: v1
diff --git a/docs/user-guide/environment-guide/README.md b/docs/user-guide/environment-guide/README.md
index b382f335348..da6aa5ae1ba 100644
--- a/docs/user-guide/environment-guide/README.md
+++ b/docs/user-guide/environment-guide/README.md
@@ -81,7 +81,7 @@ Pod Name: show-rc-xxu6i
 Pod Namespace: default
 USER_VAR: important information
 
-Kubenertes environment variables
+Kubernetes environment variables
 BACKEND_SRV_SERVICE_HOST = 10.147.252.185
 BACKEND_SRV_SERVICE_PORT = 5000
 KUBERNETES_RO_SERVICE_HOST = 10.147.240.1
@@ -99,7 +99,7 @@ Backend Namespace: default
 ```
 
 First the frontend pod's information is printed. The pod name and
-[namespace](../../../docs/design/namespaces.md) are retreived from the
+[namespace](../../../docs/design/namespaces.md) are retrieved from the
 [Downward API](../../../docs/user-guide/downward-api.md). Next, `USER_VAR` is
 the name of an environment variable set in the
 [pod definition](show-rc.yaml). Then, the dynamic Kubernetes environment
diff --git a/docs/user-guide/environment-guide/containers/show/show.go b/docs/user-guide/environment-guide/containers/show/show.go
index 56bd988b400..9a2cfc639db 100644
--- a/docs/user-guide/environment-guide/containers/show/show.go
+++ b/docs/user-guide/environment-guide/containers/show/show.go
@@ -70,7 +70,7 @@ func printInfo(resp http.ResponseWriter, req *http.Request) {
 	envvar := os.Getenv("USER_VAR")
 	fmt.Fprintf(resp, "USER_VAR: %v \n", envvar)
 
-	fmt.Fprintf(resp, "\nKubenertes environment variables\n")
+	fmt.Fprintf(resp, "\nKubernetes environment variables\n")
 	var keys []string
 	for key := range kubeVars {
 		keys = append(keys, key)
diff --git a/docs/user-guide/secrets.md b/docs/user-guide/secrets.md
index 90e4a7b9512..658b8e6ce84 100644
--- a/docs/user-guide/secrets.md
+++ b/docs/user-guide/secrets.md
@@ -73,7 +73,7 @@ more control over how it is used, and reduces the risk of accidental exposure.
 Users can create secrets, and the system also creates some secrets.
 
 To use a secret, a pod needs to reference the secret.
-A secret can be used with a pod in two ways: eithe as files in a [volume](volumes.md) mounted on one or more of
+A secret can be used with a pod in two ways: either as files in a [volume](volumes.md) mounted on one or more of
 its containers, or used by kubelet when pulling images for the pod.
 
 ### Service Accounts Automatically Create and Attach Secrets with API Credentials
diff --git a/docs/user-guide/services.md b/docs/user-guide/services.md
index 44e52422eb2..99242d6a476 100644
--- a/docs/user-guide/services.md
+++ b/docs/user-guide/services.md
@@ -368,7 +368,7 @@ address, other services should be visible only from inside of the cluster.
 Kubernetes `ServiceTypes` allow you to specify what kind of service you want.
 The default and base type is `ClusterIP`, which exposes a service to connection
 from inside the cluster. `NodePort` and `LoadBalancer` are two types that expose
-services to external trafic.
+services to external traffic.
 
 Valid values for the `ServiceType` field are: