Various minor edits/clarifications to docs/admin/ docs.

Deleted docs/admin/namespaces.md as it was content-free and the topic is
already covered well in docs/user-guide/namespaces.md
David Oppenheimer
2015-07-17 10:12:08 -07:00
parent e81645b973
commit 2a26b7487e
14 changed files with 83 additions and 130 deletions


@@ -54,10 +54,10 @@ Documentation for other releases can be found at
`Node` is a worker machine in Kubernetes, previously known as `Minion`. Node
may be a VM or physical machine, depending on the cluster. Each node has
the services necessary to run [Pods](../user-guide/pods.md) and is managed by the master
components. The services on a node include docker, kubelet and network proxy. See
[The Kubernetes Node](../design/architecture.md#the-kubernetes-node) section in the
architecture design doc for more details.
## Node Status
@@ -66,22 +66,25 @@ pieces of information:
### Node Addresses
<!--- TODO: this section is outdated. There is no HostIP field in the API,
but there are addresses of type InternalIP and ExternalIP -->
The usage of these fields varies depending on your cloud provider or bare metal configuration.

* HostName: Generally not used
* ExternalIP: Generally the IP address of the node that is externally routable (available from outside the cluster)
* InternalIP: Generally the IP address of the node that is routable only within the cluster

Because these addresses can change, they are reported as part of the node's status rather than its spec.
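As an illustration (the IP values here are made up), these addresses are reported in the node's status as a list of typed entries:

```json
"addresses": [
  { "type": "ExternalIP", "address": "104.197.10.1" },
  { "type": "InternalIP", "address": "10.240.0.3" }
]
```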
### Node Phase
Node Phase is the current lifecycle phase of node, one of `Pending`,
`Running` and `Terminated`.

* Pending: New nodes are created in this state. A node stays in this state until it is configured.
* Running: Node has been configured and the Kubernetes components are running.
* Terminated: Node has been removed from the cluster. It will not receive any scheduling requests,
and any running pods will be removed from the node.
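A minimal sketch of how the phase appears in a node's status:

```json
"status": {
  "phase": "Running"
}
```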
A node in the `Running` phase is a necessary but not sufficient requirement for
@@ -90,11 +93,13 @@ must have appropriate conditions, see below.
### Node Condition
Node Condition describes the conditions of `Running` nodes. Currently the only
node condition is Ready. The Status of this condition can be True, False, or
Unknown. True means the Kubelet is healthy and ready to accept pods.
False means the Kubelet is not healthy and is not accepting pods. Unknown
means the Node Controller, which manages node lifecycle and is responsible for
setting the Status of the condition, has not heard from the
node recently (currently 40 seconds).
Node condition is represented as a json object. For example,
the following conditions mean the node is in sane state:
@@ -106,23 +111,26 @@ the following conditions mean the node is in sane state:
```json
"conditions": [
  {
    "type": "Ready",
    "status": "True"
  }
]
```
If the Status of the Ready condition
is Unknown or False for more than five minutes, then all of the Pods on the node are terminated.
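For comparison, here is a sketch of a Ready condition that has gone `Unknown` (the timestamp is made up, and other fields such as the reason and message are omitted):

```json
"conditions": [
  {
    "type": "Ready",
    "status": "Unknown",
    "lastTransitionTime": "2015-07-17T17:12:08Z"
  }
]
```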
### Node Capacity
Describes the resources available on the node: CPUs, memory and the maximum
number of pods that can be scheduled onto the node.
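As a sketch (the numbers are illustrative), capacity is reported in the node's status:

```json
"capacity": {
  "cpu": "2",
  "memory": "4Gi",
  "pods": "40"
}
```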
### Node Info
General information about the node, for instance kernel version, kubernetes version
(kubelet version, kube-proxy version), docker version (if used), OS name.
The information is gathered by the Kubelet from the node.
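For example, a subset of this information as it appears in the node's status (the versions shown are illustrative):

```json
"nodeInfo": {
  "kernelVersion": "3.16.0-4-amd64",
  "osImage": "Debian GNU/Linux 7 (wheezy)",
  "containerRuntimeVersion": "docker://1.6.2",
  "kubeletVersion": "v1.0.0",
  "kubeProxyVersion": "v1.0.0"
}
```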
## Node Management
Unlike [Pods](../user-guide/pods.md) and [Services](../user-guide/services.md), a Node is not inherently
created by Kubernetes: it is either created from cloud providers like Google Compute Engine,
or from your physical or virtual machines. What this means is that when
Kubernetes creates a node, it is really just creating an object that represents the node in its internal state.
After creation, Kubernetes will check whether the node is valid or not.
For example, if you try to create a node from the following content:
```json
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "10.240.79.157"
  }
}
```
@@ -143,10 +151,10 @@ validate the node by health checking based on the `metadata.name` field: we
assume `metadata.name` can be resolved. If the node is valid, i.e. all necessary
services are running, it is eligible to run a Pod; otherwise, it will be
ignored for any cluster activity, until it becomes valid. Note that Kubernetes
will keep the object for the invalid node unless it is explicitly deleted by the client, and it will keep
checking to see if it becomes valid.
Currently, there are two agents that interact with the Kubernetes node interface:
Node Controller and Kube Admin.
### Node Controller
@@ -156,19 +164,19 @@ objects. It performs two major functions: cluster-wide node synchronization
and single node life-cycle management.
Node controller has a sync loop that creates/deletes Nodes from Kubernetes
based on all matching VM instances listed from the cloud provider. The sync period
can be controlled via flag `--node-sync-period`. If a new VM instance
gets created, Node Controller creates a representation for it. If an existing
instance gets deleted, Node Controller deletes the representation. Note however,
that Node Controller is unable to provision the node for you, i.e. it won't install
any binary; therefore, to
join a node to a Kubernetes cluster, you as an admin need to make sure proper services are
running on the node. In the future, we plan to automatically provision some node
services.
### Self-Registration of nodes
When kubelet flag `--register-node` is true (the default), the kubelet will attempt to
register itself with the API server. This is the preferred pattern, used by most distros.
For self-registration, the kubelet is started with the following options:
@@ -190,7 +198,8 @@ If the administrator wishes to create node objects manually, set kubelet flag
The administrator can modify Node resources (regardless of the setting of `--register-node`).
Modifications include setting labels on the Node, and marking it unschedulable.
Labels on nodes can be used in conjunction with node selectors on pods to control scheduling,
e.g. to constrain a Pod to only be eligible to run on a subset of the nodes.
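As a sketch (the label key/value and names are made up), a pod that should only run on nodes the admin has labeled `disktype=ssd` would carry a matching node selector in its spec:

```json
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "nginx"
  },
  "spec": {
    "nodeSelector": {
      "disktype": "ssd"
    },
    "containers": [
      {
        "name": "nginx",
        "image": "nginx"
      }
    ]
  }
}
```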
Making a node unschedulable will prevent new pods from being scheduled to that
node, but will not affect any existing pods on the node. This is useful as a
@@ -208,7 +217,7 @@ you are doing [manual node administration](#manual-node-administration), then yo
capacity when adding a node.
The kubernetes scheduler ensures that there are enough resources for all the pods on a node. It
checks that the sum of the limits of containers on the node is no greater than the node capacity. It
includes all containers started by kubelet, but not containers started directly by docker, nor
processes not in containers.
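For example (the values are illustrative), the limits that count against node capacity are declared per container in each pod spec:

```json
"containers": [
  {
    "name": "app",
    "image": "nginx",
    "resources": {
      "limits": {
        "cpu": "500m",
        "memory": "128Mi"
      }
    }
  }
]
```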