Merge pull request #11438 from davidopp/doc3

Various minor edits/clarifications to docs/admin/ docs.
David Oppenheimer 2015-07-17 13:10:51 -07:00
commit 341f3a8826
14 changed files with 89 additions and 135 deletions

View File

@ -87,9 +87,6 @@ project.](salt.md).
## Multi-tenant support
* **Namespaces** ([namespaces.md](namespaces.md)): Namespaces help different
projects, teams, or customers to share a kubernetes cluster.
* **Resource Quota** ([resource-quota.md](resource-quota.md))
## Security

View File

@ -59,7 +59,7 @@ By default the Kubernetes APIserver serves HTTP on 2 ports:
- uses token-file or client-certificate based [authentication](authentication.md).
- uses policy-based [authorization](authorization.md).
3. Removed: ReadOnly Port
- For security reasons, this had to be removed. Use the [service account](../user-guide/service-accounts.md) feature instead.
## Proxies and Firewall rules
@ -80,26 +80,22 @@ variety of uses cases:
1. Clients outside of a Kubernetes cluster, such as a human running `kubectl`
on a desktop machine. Currently, these access the Localhost Port via a proxy (nginx)
running on the `kubernetes-master` machine. The proxy uses bearer token authentication.
2. Processes running in Containers on Kubernetes that need to read from
the apiserver. Currently, these can use a [service account](../user-guide/service-accounts.md).
3. Scheduler and Controller-manager processes, which need to do read-write
API operations. Currently, these have to run on the same host as the
apiserver and use the Localhost Port. In the future, these will be
switched to using service accounts to avoid the need to be co-located.
4. Kubelets, which need to do read-write API operations and are necessarily
on different machines than the apiserver. Kubelets use the Secure Port
to get their pods, to find the services that a pod can see, and to
write events. Credentials are distributed to kubelets at cluster
setup time. Kubelets use cert-based auth, while kube-proxy uses token-based auth.
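As an illustration of bearer-token access to the Secure Port, here is a minimal sketch (the CA file path, token value, master hostname, and the assumption of the default secure port 6443 are all placeholders for your cluster's actual values):

```sh
# Hypothetical example: list pods in the default namespace over the Secure Port.
curl --cacert /path/to/ca.crt \
     -H "Authorization: Bearer SOMETOKEN" \
     https://kubernetes-master:6443/api/v1/namespaces/default/pods
```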
## Expected changes
- Policy will limit the actions kubelets can do via the authed port.
- Kubelets will change from token-based authentication to cert-based-auth.
- Scheduler and Controller-manager will use the Secure Port too. They
will then be able to run on different machines than the apiserver.
- A general mechanism will be provided for [giving credentials to
pods](https://github.com/GoogleCloudPlatform/kubernetes/issues/1907).
- Clients, like kubectl, will all support token-based auth, and the
Localhost Port will no longer be needed, and will not be the default.
However, the localhost port may continue to be an option for

View File

@ -112,7 +112,7 @@ This plug-in will observe the incoming request and ensure that it does not viola
enumerated in the ```ResourceQuota``` object in a ```Namespace```. If you are using ```ResourceQuota```
objects in your Kubernetes deployment, you MUST use this plug-in to enforce quota constraints.
See the [resourceQuota design doc](../design/admission_control_resource_quota.md) and the [example of Resource Quota](../user-guide/resourcequota/) for more details.
It is strongly encouraged that this plug-in is configured last in the sequence of admission control plug-ins. This is
so that quota is not prematurely incremented only for the request to be rejected later in admission control.
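For illustration, a hypothetical apiserver invocation that enables a set of plug-ins with ```ResourceQuota``` last might look like the following (the plug-in list is an example, not a recommendation; other flags omitted):

```sh
kube-apiserver <other flags...> \
  --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ServiceAccount,ResourceQuota
```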
@ -121,9 +121,11 @@ so that quota is not prematurely incremented only for the request to be rejected
This plug-in will observe the incoming request and ensure that it does not violate any of the constraints
enumerated in the ```LimitRange``` object in a ```Namespace```. If you are using ```LimitRange``` objects in
your Kubernetes deployment, you MUST use this plug-in to enforce those constraints. LimitRanger can also
be used to apply default resource requests to Pods that don't specify any; currently, the default LimitRanger
applies a 0.1 CPU requirement to all Pods in the ```default``` namespace.
See the [limitRange design doc](../design/admission_control_limit_range.md) and the [example of Limit Range](../user-guide/limitrange/) for more details.
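As a sketch of the default-request behavior described above, a ```LimitRange``` object that applies a default CPU limit of 0.1 (100m) to containers in the ```default``` namespace might look roughly like this (field names per the v1 API; the object name is hypothetical):

```json
{
  "apiVersion": "v1",
  "kind": "LimitRange",
  "metadata": {
    "name": "limits",
    "namespace": "default"
  },
  "spec": {
    "limits": [
      {
        "type": "Container",
        "default": {
          "cpu": "100m"
        }
      }
    ]
  }
}
```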
### NamespaceExists
@ -140,9 +142,9 @@ We strongly recommend ```NamespaceExists``` over ```NamespaceAutoProvision```.
### NamespaceLifecycle
This plug-in enforces that a ```Namespace``` that is undergoing termination cannot have new objects created in it.
A ```Namespace``` deletion kicks off a sequence of operations that remove all objects (pods, services, etc.) in that
namespace. In order to enforce integrity of that process, we strongly recommend running this plug-in.
Once ```NamespaceAutoProvision``` is deprecated, we anticipate ```NamespaceLifecycle``` and ```NamespaceExists``` will

View File

@ -34,13 +34,13 @@ Documentation for other releases can be found at
Kubernetes uses client certificates, tokens, or http basic auth to authenticate users for API calls.
**Client certificate authentication** is enabled by passing the `--client_ca_file=SOMEFILE`
option to apiserver. The referenced file must contain one or more certificate authorities
to use to validate client certificates presented to the apiserver. If a client certificate
is presented and verified, the common name of the subject is used as the user name for the
request.
**Token authentication** is enabled by passing the `--token_auth_file=SOMEFILE` option
to apiserver. Currently, tokens last indefinitely, and the token list cannot
be changed without restarting apiserver. We plan in the future for tokens to
be short-lived, and to be generated as needed rather than stored in a file.
@ -51,7 +51,7 @@ and is a csv file with 3 columns: token, user name, user uid.
When using token authentication from an http client, the apiserver expects an `Authorization`
header with a value of `Bearer SOMETOKEN`.
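A hypothetical token file entry and the matching request header (all values are placeholders):

```
# tokens.csv (columns: token,user name,user uid) -- illustrative values only
31ada4fd-adec-460c-809a-9e56ceb75269,alice,1

# corresponding request header
Authorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269
```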
**Basic authentication** is enabled by passing the `--basic_auth_file=SOMEFILE`
option to apiserver. Currently, the basic auth credentials last indefinitely,
and the password cannot be changed without restarting apiserver. Note that basic
authentication is currently supported for convenience while we finish making the
@ -60,7 +60,7 @@ more secure modes described above easier to use.
The basic auth file format is implemented in `plugin/pkg/auth/authenticator/password/passwordfile/...`
and is a csv file with 3 columns: password, user name, user id.
When using basic authentication from an http client, the apiserver expects an `Authorization` header
with a value of `Basic BASE64ENCODEDUSER:PASSWORD`.
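For example, the base64-encoded value can be generated as follows (credentials are hypothetical):

```sh
echo -n 'alice:secretpassword' | base64
# => YWxpY2U6c2VjcmV0cGFzc3dvcmQ=
# sent as:  Authorization: Basic YWxpY2U6c2VjcmV0cGFzc3dvcmQ=
```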
## Plugin Development

View File

@ -37,9 +37,7 @@ In Kubernetes, authorization happens as a separate step from authentication.
See the [authentication documentation](authentication.md) for an
overview of authentication.
Authorization applies to all HTTP accesses on the main (secure) apiserver port.
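As a rough illustration only (assuming the file-based ABAC policy mode, with hypothetical user and resource names; see the rest of this document for the authoritative attribute set), a single policy entry is one JSON object per line:

```json
{"user": "alice", "resource": "pods", "namespace": "projectCaribou", "readonly": true}
```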
The authorization check for any request compares attributes of the context of
the request (such as user, resource, and namespace) with access

View File

@ -33,7 +33,7 @@ Documentation for other releases can be found at
# Kubernetes Large Cluster
## Support
At v1.0, Kubernetes supports clusters up to 100 nodes with 30 pods per node and 1-2 containers per pod (as defined in the [1.0 roadmap](../../docs/roadmap.md#reliability-and-performance)).
## Setup
@ -41,7 +41,7 @@ Normally the number of nodes in a cluster is controlled by the the value `NUM_MI
Simply changing that value to something very large, however, may cause the setup script to fail for many cloud providers. A GCE deployment, for example, will run into quota issues and fail to bring the cluster up.
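For example, with a cloud-provider setup script the cluster size might be requested like this (a sketch; the exact variable and script depend on your provider, and quotas may still need to be raised first):

```sh
export NUM_MINIONS=100
cluster/kube-up.sh
```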
When setting up a large Kubernetes cluster, the following issues must be considered.
### Quota Issues
@ -56,7 +56,7 @@ To avoid running into cloud provider quota issues, when creating a cluster with
* Forwarding rules
* Routes
* Target pools
* Gating the setup script so that it brings up new node VMs in smaller batches with waits in between, because some cloud providers rate limit the creation of VMs.
### Addon Resources
To prevent memory leaks or other resource issues in [cluster addons](../../cluster/addons/) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and Memory resources they can consume (See PR [#10653](https://github.com/GoogleCloudPlatform/kubernetes/pull/10653/files) and [#10778](https://github.com/GoogleCloudPlatform/kubernetes/pull/10778/files)).

View File

@ -31,8 +31,10 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Cluster Troubleshooting
This doc is about cluster troubleshooting; we assume you have already ruled out your application as the root cause of the
problem you are experiencing. See
the [application troubleshooting guide](../user-guide/application-troubleshooting.md) for tips on application debugging.
You may also visit the [troubleshooting document](../troubleshooting.md) for more information.
## Listing your cluster
The first thing to debug in your cluster is whether your nodes are all registered correctly.
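One way to check is with kubectl (a minimal sketch; output columns vary by version):

```sh
kubectl get nodes
```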
@ -46,7 +48,7 @@ And verify that all of the nodes you expect to see are present and that they are
## Looking at logs
For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations
of the relevant log files. (note that on systemd-based systems, you may need to use ```journalctl``` instead)
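For example, on a systemd-based master something like the following might be used (the unit name is an assumption and varies by distribution):

```sh
journalctl -u kube-apiserver
```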
### Master
* /var/log/kube-apiserver.log - API Server, responsible for serving the API
@ -59,7 +61,7 @@ of the relevant log files. (note that on systemd based systems, you may need to
## A general overview of cluster failure modes
This is an incomplete list of things that could go wrong, and how to adjust your cluster setup to mitigate the problems.
Root causes:
- VM(s) shutdown
@ -102,18 +104,18 @@ Specific scenarios:
- etc.
Mitigations:
- Action: Use IaaS provider's automatic VM restarting feature for IaaS VMs
- Mitigates: Apiserver VM shutdown or apiserver crashing
- Mitigates: Supporting services VM shutdown or crashes
- Action: Use IaaS provider's reliable storage (e.g. GCE PD or AWS EBS volume) for VMs with apiserver+etcd
- Mitigates: Apiserver backing storage lost
- Action: Use (experimental) [high-availability](high-availability.md) configuration
- Mitigates: Master VM shutdown or master components (scheduler, API server, controller-manager) crashing
- Will tolerate one or more simultaneous node or component failures
- Mitigates: Apiserver backing storage (i.e., etcd's data directory) lost
- Each apiserver has independent storage. Etcd will recover from loss of one member. Risk of total data loss greatly reduced.
- Assuming you used clustered etcd.
- Action: Snapshot apiserver PDs/EBS-volumes periodically
- Mitigates: Apiserver backing storage lost

View File

@ -34,7 +34,7 @@ Documentation for other releases can be found at
As of kubernetes 0.8, DNS is offered as a [cluster add-on](../../cluster/addons/README.md).
If enabled, a DNS Pod and Service will be scheduled on the cluster, and the kubelets will be
configured to tell individual containers to use the DNS Service's IP to resolve DNS names.
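For example, a container can then resolve a Service by its short name; a hypothetical check from inside a Pod (the service name is a placeholder):

```sh
# The search list configured by the kubelet lets the short name resolve
# for Services in the Pod's own namespace.
nslookup my-service
```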
Every Service defined in the cluster (including the DNS server itself) will be
assigned a DNS name. By default, a client Pod's DNS search list will
@ -51,7 +51,7 @@ supports forward lookups (A records) and service lookups (SRV records).
## How it Works
The running DNS pod holds 3 containers - skydns, etcd (a private instance which skydns uses),
and a kubernetes-to-skydns bridge called kube2sky. The kube2sky process
watches the kubernetes master for changes in Services, and then writes the
information to etcd, which skydns reads. This etcd instance is not linked to

View File

@ -33,7 +33,7 @@ Documentation for other releases can be found at
# Considerations for running multiple Kubernetes clusters
You may want to set up multiple kubernetes clusters, both to
have clusters in different regions to be nearer to your users, and to tolerate failures and/or invasive maintenance.
This document describes some of the issues to consider when making a decision about doing so.
Note that at present,
@ -54,8 +54,8 @@ We suggest that all the VMs in a Kubernetes cluster should be in the same availa
It is okay to have multiple clusters per availability zone, though on balance we think fewer is better.
Reasons to prefer fewer clusters are:
- improved bin packing of Pods in some cases with more nodes in one cluster (less resource fragmentation)
- reduced operational overhead (though the advantage is diminished as ops tooling and processes mature)
- reduced costs for per-cluster fixed resource costs, e.g. apiserver VMs (but small as a percentage
of overall cluster cost for medium to large clusters).
@ -82,13 +82,13 @@ you need `R + U` clusters. If it is not (e.g you want to ensure low latency for
cluster failure), then you need to have `R * U` clusters (`U` in each of `R` regions). In any case, try to put each cluster in a different zone.
Finally, if any of your clusters would need more than the maximum recommended number of nodes for a Kubernetes cluster, then
you may need even more clusters. Kubernetes v1.0 currently supports clusters up to 100 nodes in size, but we are targeting
1000-node clusters by early 2016.
## Working with multiple clusters
When you have multiple clusters, you would typically create services with the same config in each cluster and put each of those
service instances behind a load balancer (AWS Elastic Load Balancer, GCE Forwarding Rule or HTTP Load Balancer) spanning all of them, so that
failures of a single cluster are not visible to end users.

View File

@ -1,49 +0,0 @@
# Namespaces
Namespaces help different projects, teams, or customers to share a kubernetes cluster. First, they provide a scope for [Names](../user-guide/identifiers.md). Second, as our access control code develops, it is expected that it will be convenient to attach authorization and other policy to namespaces.
Use of multiple namespaces is optional. For small teams, they may not be needed.
This is a placeholder document about namespace administration.
TODO: document namespace creation, ownership assignment, visibility rules,
policy creation, interaction with network.
Namespaces are still under development. For now, the best documentation is the [Namespaces Design Document](../design/namespaces.md). The user documentation can be found at [Namespaces](../../docs/user-guide/namespaces.md)

View File

@ -54,10 +54,10 @@ Documentation for other releases can be found at
`Node` is a worker machine in Kubernetes, previously known as `Minion`. Node
may be a VM or physical machine, depending on the cluster. Each node has
the services necessary to run [Pods](../user-guide/pods.md) and is managed by the master
components. The services on a node include docker, kubelet and network proxy. See
[The Kubernetes Node](../design/architecture.md#the-kubernetes-node) section in the
architecture design doc for more details.
## Node Status
@ -66,22 +66,25 @@ pieces of information:
### Node Addresses
<!--- TODO: this section is outdated. There is no HostIP field in the API,
but there are addresses of type InternalIP and ExternalIP -->
The usage of these fields varies depending on your cloud provider or bare metal configuration.
* HostName: Generally not used
* ExternalIP: Generally the IP address of the node that is externally routable (available from outside the cluster)
* InternalIP: Generally the IP address of the node that is routable only within the cluster
### Node Phase
Node Phase is the current lifecycle phase of node, one of `Pending`,
`Running` and `Terminated`.
* Pending: New nodes are created in this state. A node stays in this state until it is configured.
* Running: Node has been configured and the Kubernetes components are running
* Terminated: Node has been removed from the cluster. It will not receive any scheduling requests,
and any running pods will be removed from the node.
A node in the `Running` phase is a necessary but not sufficient requirement for
@ -90,11 +93,13 @@ must have appropriate conditions, see below.
### Node Condition
Node Condition describes the conditions of `Running` nodes. Currently the only
node condition is Ready. The Status of this condition can be True, False, or
Unknown. True means the Kubelet is healthy and ready to accept pods.
False means the Kubelet is not healthy and is not accepting pods. Unknown
means the Node Controller, which manages node lifecycle and is responsible for
setting the Status of the condition, has not heard from the
node recently (currently 40 seconds).
Node condition is represented as a json object. For example,
the following conditions mean the node is in sane state:
```json
@ -106,23 +111,26 @@ the following conditions mean the node is in sane state:
]
```
If the Status of the Ready condition
is Unknown or False for more than five minutes, then all of the Pods on the node are terminated by the Node Controller.
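This five-minute window corresponds to the controller manager's pod eviction timeout; a sketch of tuning it (the flag name is assumed from the controller-manager options -- verify with `kube-controller-manager --help` for your version):

```sh
kube-controller-manager <other flags...> --pod-eviction-timeout=5m0s
```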
### Node Capacity
Describes the resources available on the node: CPUs, memory and the maximum
number of pods that can be scheduled onto the node.
### Node Info
General information about the node, for instance kernel version, kubernetes version
(kubelet version, kube-proxy version), docker version (if used), OS name.
The information is gathered by Kubelet from the node.
## Node Management
Unlike [Pods](../user-guide/pods.md) and [Services](../user-guide/services.md), a Node is not inherently
created by Kubernetes: it is either taken from cloud providers like Google Compute Engine,
or from your pool of physical or virtual machines. What this means is that when
Kubernetes creates a node, it is really just creating an object that represents the node in its internal state.
After creation, Kubernetes will check whether the node is valid or not.
For example, if you try to create a node from the following content:
```json
@ -143,11 +151,10 @@ validate the node by health checking based on the `metadata.name` field: we
assume `metadata.name` can be resolved. If the node is valid, i.e. all necessary
services are running, it is eligible to run a Pod; otherwise, it will be
ignored for any cluster activity, until it becomes valid. Note that Kubernetes
will keep the object for the invalid node unless it is explicitly deleted by the client, and it will keep
checking to see if it becomes valid.
Currently, there are three components that interact with the Kubernetes node interface: Node Controller, Kubelet, and kubectl.
### Node Controller
@ -156,19 +163,19 @@ objects. It performs two major functions: cluster-wide node synchronization
and single node life-cycle management.
Node controller has a sync loop that creates/deletes Nodes from Kubernetes
based on all matching VM instances listed from the cloud provider. The sync period
can be controlled via flag `--node-sync-period`. If a new VM instance
gets created, Node Controller creates a representation for it. If an existing
instance gets deleted, Node Controller deletes the representation. Note, however,
that Node Controller is unable to provision the node for you, i.e. it won't install
any binary; therefore, to
join a node to a Kubernetes cluster, you as an admin need to make sure proper services are
running in the node. In the future, we plan to automatically provision some node
services.
### Self-Registration of nodes
When kubelet flag `--register-node` is true (the default), the kubelet will attempt to
register itself with the API server. This is the preferred pattern, used by most distros.
For self-registration, the kubelet is started with the following options:
@ -190,7 +197,8 @@ If the administrator wishes to create node objects manually, set kubelet flag
The administrator can modify Node resources (regardless of the setting of `--register-node`).
Modifications include setting labels on the Node, and marking it unschedulable.
Labels on nodes can be used in conjunction with node selectors on pods to control scheduling,
e.g. to constrain a Pod to only be eligible to run on a subset of the nodes.
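A sketch of that pattern, assuming a node has already been labeled (for example) `disktype=ssd`, and using hypothetical Pod and image names:

```json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "nginx",
    "labels": {
      "env": "test"
    }
  },
  "spec": {
    "containers": [
      {
        "name": "nginx",
        "image": "nginx"
      }
    ],
    "nodeSelector": {
      "disktype": "ssd"
    }
  }
}
```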
Making a node unschedulable will prevent new pods from being scheduled to that
node, but will not affect any existing pods on the node. This is useful as a
@ -208,7 +216,7 @@ you are doing [manual node administration](#manual-node-administration), then yo
capacity when adding a node.
The kubernetes scheduler ensures that there are enough resources for all the pods on a node. It
checks that the sum of the limits of containers on the node is no greater than the node capacity. It
includes all containers started by kubelet, but not containers started directly by docker, nor
processes not in containers.
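A small illustrative calculation (all numbers are made up):

```
node capacity:                                      2 CPU
sum of limits of containers already on the node:    1.5 CPU
limits of the new pod's containers:                 1 CPU
1.5 + 1 = 2.5 > 2, so the scheduler will not place the new pod on this node.
```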

View File

@ -105,7 +105,7 @@ Key | Value
These keys may be leveraged by the Salt sls files to branch behavior.
In addition, a cluster may be running a Debian based operating system or Red Hat based operating system (Centos, Fedora, RHEL, etc.). As a result, it's important to sometimes distinguish behavior based on operating system using if branches like the following.
```
{% if grains['os_family'] == 'RedHat' %}
@ -117,11 +117,11 @@ In addition, a cluster may be running a Debian based operating system or Red Hat
## Best Practices
1. When configuring default arguments for processes, it's best to avoid the use of EnvironmentFiles (Systemd in Red Hat environments) or init.d files (Debian distributions) to hold default values that should be common across operating system environments. This helps keep our Salt template files easy to understand for editors who may not be familiar with the particulars of each distribution.
## Future enhancements (Networking)
Per pod IP configuration is provider-specific, so when making networking changes, it's important to sandbox these as all providers may not use the same mechanisms (iptables, openvswitch, etc.)
We should define a grains.conf key that captures more specifically what network configuration environment is being used to avoid future confusion across providers.

View File

@ -47,7 +47,7 @@ for a number of reasons:
- User accounts are intended to be global. Names must be unique across all
namespaces of a cluster (future user resources will not be namespaced).
Service accounts are namespaced.
- Typically, a cluster's User accounts might be synced from a corporate
database, where new user account creation requires special privileges and
is tied to complex business processes. Service account creation is intended
to be more lightweight, allowing cluster users to create service accounts for
@ -82,7 +82,7 @@ TokenController runs as part of controller-manager. It acts asynchronously. It:
- observes serviceAccount creation and creates a corresponding Secret to allow API access.
- observes serviceAccount deletion and deletes all corresponding ServiceAccountToken Secrets
- observes secret addition, and ensures the referenced ServiceAccount exists, and adds a token to the secret if needed
- observes secret deletion and removes a reference from the corresponding ServiceAccount if needed
#### To create additional API tokens

View File

@ -138,7 +138,7 @@ These fields are required for proper decoding of the object. They may be populat
Every object kind MUST have the following metadata in a nested object field called "metadata":
* namespace: a namespace is a DNS compatible subdomain that objects are subdivided into. The default namespace is 'default'. See [docs/user-guide/namespaces.md](../user-guide/namespaces.md) for more.
* name: a string that uniquely identifies this object within the current namespace (see [docs/user-guide/identifiers.md](../user-guide/identifiers.md)). This value is used in the path when retrieving an individual object.
* uid: a unique in time and space value (typically an RFC 4122 generated identifier, see [docs/user-guide/identifiers.md](../user-guide/identifiers.md)) used to distinguish between objects with the same name that have been deleted and recreated
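A hypothetical sketch of how these fields appear on an object (all values are placeholders):

```json
{
  "metadata": {
    "namespace": "default",
    "name": "my-first-object",
    "uid": "7e1f43a2-9c4b-4d7a-9f31-5b2e8c0d6a11"
  }
}
```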