diff --git a/docs/admin/cluster-troubleshooting.md b/docs/admin/cluster-troubleshooting.md
index b2df6af5969..66c875d619e 100644
--- a/docs/admin/cluster-troubleshooting.md
+++ b/docs/admin/cluster-troubleshooting.md
@@ -31,8 +31,10 @@ Documentation for other releases can be found at
 # Cluster Troubleshooting

-Most of the time, if you encounter problems, it is your application that is the root cause. For application
-problems please see the [application troubleshooting guide](../user-guide/application-troubleshooting.md). You may also visit [troubleshooting document](../troubleshooting.md) for more information.
+This doc is about cluster troubleshooting; we assume you have already ruled out your application as the root cause of the
+problem you are experiencing. See
+the [application troubleshooting guide](../user-guide/application-troubleshooting.md) for tips on application debugging.
+You may also visit [troubleshooting document](../troubleshooting.md) for more information.

 ## Listing your cluster

 The first thing to debug in your cluster is if your nodes are all registered correctly.
diff --git a/docs/admin/node.md b/docs/admin/node.md
index ab0ded9e893..fda00842d4f 100644
--- a/docs/admin/node.md
+++ b/docs/admin/node.md
@@ -112,7 +112,7 @@ the following conditions mean the node is in sane state:
 ```

 If the Status of the Ready condition
-is Unknown or False for more than five minutes, then all of the Pods on the node are terminated.
+is Unknown or False for more than five minutes, then all of the Pods on the node are terminated by the Node Controller.

 ### Node Capacity

@@ -128,8 +128,8 @@ The information is gathered by Kubelet from the node.
 ## Node Management

 Unlike [Pods](../user-guide/pods.md) and [Services](../user-guide/services.md), a Node is not inherently
-created by Kubernetes: it is either created from cloud providers like Google Compute Engine,
-or from your physical or virtual machines. What this means is that when
+created by Kubernetes: it is either taken from cloud providers like Google Compute Engine,
+or from your pool of physical or virtual machines. What this means is that when
 Kubernetes creates a node, it is really just creating an object that represents the node in its internal state.
 After creation, Kubernetes will check whether the node is valid or not.
 For example, if you try to create a node from the following content:
@@ -154,8 +154,7 @@ ignored for any cluster activity, until it becomes valid. Note that Kubernetes
 will keep the object for the invalid node unless it is explicitly deleted by the client,
 and it will keep checking to see if it becomes valid.

-Currently, there are two agents that interacts with the Kubernetes node interface:
-Node Controller and Kube Admin.
+Currently, there are three components that interact with the Kubernetes node interface: Node Controller, Kubelet, and kubectl.

 ### Node Controller

diff --git a/docs/admin/salt.md b/docs/admin/salt.md
index 3c2151221ea..83f8e9479d4 100644
--- a/docs/admin/salt.md
+++ b/docs/admin/salt.md
@@ -105,7 +105,7 @@ Key | Value

 These keys may be leveraged by the Salt sls files to branch behavior.

-In addition, a cluster may be running a Debian based operating system or Red Hat based operating system (Centos, Fedora, RHEL, etc.). As a result, its important to sometimes distinguish behavior based on operating system using if branches like the following.
+In addition, a cluster may be running a Debian based operating system or Red Hat based operating system (Centos, Fedora, RHEL, etc.). As a result, it's important to sometimes distinguish behavior based on operating system using if branches like the following.

 ```
 {% if grains['os_family'] == 'RedHat' %}
 // something specific to a RedHat environment
 {% else %}
 // something specific to a Debian environment (apt-get, initd)
 {% endif %}
 ```
@@ -121,7 +121,7 @@ In addition, a cluster may be running a Debian based operating system or Red Hat

 ## Future enhancements (Networking)

-Per pod IP configuration is provider-specific, so when making networking changes, its important to sandbox these as all providers may not use the same mechanisms (iptables, openvswitch, etc.)
+Per pod IP configuration is provider-specific, so when making networking changes, it's important to sandbox these as all providers may not use the same mechanisms (iptables, openvswitch, etc.)

 We should define a grains.conf key that captures more specifically what network configuration environment is being used to avoid future confusion across providers.
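The checks described by the cluster-troubleshooting.md and node.md hunks above are easy to try from the command line. The sketch below is illustrative only: the node name `10.240.79.157` is a made-up placeholder, and the exact output columns depend on the cluster version.

```
# List the nodes the master knows about and confirm they are all
# registered and reporting STATUS Ready.
kubectl get nodes

# Inspect the conditions reported for a single node. If its Ready
# condition stays Unknown or False for more than five minutes, the
# Node Controller is expected to terminate the pods scheduled on it.
kubectl describe node 10.240.79.157
```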
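The @@ -128 hunk in node.md stops just before the example manifest it refers to ("create a node from the following content:"), which the diff does not include. Purely as a sketch of what registering such a node object by hand could look like (the kind/metadata layout is the standard Node API shape, but the name, label, and apiVersion values here are invented placeholders, not taken from the doc):

```
# Feed a minimal Node manifest to the API server; Kubernetes then creates
# the internal object and begins checking whether the node is valid.
cat <<EOF | kubectl create -f -
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "10.240.79.157",
    "labels": {
      "name": "my-first-k8s-node"
    }
  }
}
EOF
```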