Merge pull request #11013 from lavalamp/docs
Move admin related docs into docs/admin
@@ -17,7 +17,7 @@ certainly want the docs that go with that version.</h1>
 * The [User's guide](user-guide.md) is for anyone who wants to run programs and
 services on an existing Kubernetes cluster.

-* The [Cluster Admin's guide](cluster-admin-guide.md) is for anyone setting up
+* The [Cluster Admin's guide](admin/README.md) is for anyone setting up
 a Kubernetes cluster or administering it.

 * The [Developer guide](developer-guide.md) is for anyone wanting to write
@@ -110,7 +110,7 @@ certificate.

 On some clusters, the apiserver does not require authentication; it may serve
 on localhost, or be protected by a firewall. There is not a standard
-for this. [Configuring Access to the API](accessing_the_api.md)
+for this. [Configuring Access to the API](admin/accessing-the-api.md)
 describes how a cluster admin can configure this. Such approaches may conflict
 with future high-availability support.
@@ -134,7 +134,7 @@ the `kubernetes` DNS name, which resolves to a Service IP which in turn
 will be routed to an apiserver.

 The recommended way to authenticate to the apiserver is with a
-[service account](service_accounts.md) credential. By default, a pod
+[service account](service-accounts.md) credential. By default, a pod
 is associated with a service account, and a credential (token) for that
 service account is placed into the filesystem tree of each container in that pod,
 at `/var/run/secrets/kubernetes.io/serviceaccount/token`.
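As a quick illustration of the workflow described in the hunk above (not part of this patch): reading the mounted token from inside a container and calling the apiserver through the `kubernetes` DNS name. The `ca.crt` filename next to the token is an assumption; adjust to your cluster.

```sh
# Read the service account token mounted into the container (path as above).
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CA=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt   # assumed CA bundle path

# The `kubernetes` DNS name resolves to a Service IP routed to an apiserver.
curl --cacert "$CA" -H "Authorization: Bearer $TOKEN" https://kubernetes/api
```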
@@ -153,7 +153,7 @@ In each case, the credentials of the pod are used to communicate securely with t
 ## Accessing services running on the cluster
 The previous section was about connecting the Kubernetes API server. This section is about
 connecting to other services running on Kubernetes cluster. In kubernetes, the
-[nodes](node.md), [pods](pods.md) and [services](services.md) all have
+[nodes](admin/node.md), [pods](pods.md) and [services](services.md) all have
 their own IPs. In many cases, the node IPs, pod IPs, and some service IPs on a cluster will not be
 routable, so they will not be reachable from a machine outside the cluster,
 such as your desktop machine.
@@ -15,12 +15,12 @@ certainly want the docs that go with that version.</h1>
 # Kubernetes Cluster Admin Guide

 The cluster admin guide is for anyone creating or administering a Kubernetes cluster.
-It assumes some familiarity with concepts in the [User Guide](user-guide.md).
+It assumes some familiarity with concepts in the [User Guide](../user-guide.md).

 ## Planning a cluster

 There are many different examples of how to setup a kubernetes cluster. Many of them are listed in this
-[matrix](getting-started-guides/README.md). We call each of the combinations in this matrix a *distro*.
+[matrix](../getting-started-guides/README.md). We call each of the combinations in this matrix a *distro*.

 Before choosing a particular guide, here are some things to consider:
 - Are you just looking to try out Kubernetes on your laptop, or build a high-availability many-node cluster? Both
@@ -41,7 +41,7 @@ Before choosing a particular guide, here are some things to consider:

 ## Setting up a cluster

-Pick one of the Getting Started Guides from the [matrix](getting-started-guides/README.md) and follow it.
+Pick one of the Getting Started Guides from the [matrix](../getting-started-guides/README.md) and follow it.
 If none of the Getting Started Guides fits, you may want to pull ideas from several of the guides.

 One option for custom networking is *OpenVSwitch GRE/VxLAN networking* ([ovs-networking.md](ovs-networking.md)), which
@@ -52,7 +52,7 @@ If you are modifying an existing guide which uses Salt, this document explains [
 project.](salt.md).

 ## Upgrading a cluster
-[Upgrading a cluster](cluster_management.md).
+[Upgrading a cluster](cluster-management.md).

 ## Managing nodes

@@ -63,30 +63,30 @@ project.](salt.md).
 * **DNS Integration with SkyDNS** ([dns.md](dns.md)):
 Resolving a DNS name directly to a Kubernetes service.

-* **Logging** with [Kibana](logging.md)
+* **Logging** with [Kibana](../logging.md)

 ## Multi-tenant support

 * **Namespaces** ([namespaces.md](namespaces.md)): Namespaces help different
 projects, teams, or customers to share a kubernetes cluster.

-* **Resource Quota** ([resource_quota_admin.md](resource_quota_admin.md))
+* **Resource Quota** ([resource-quota.md](resource-quota.md))

 ## Security

-* **Kubernetes Container Environment** ([container-environment.md](container-environment.md)):
+* **Kubernetes Container Environment** ([docs/container-environment.md](../container-environment.md)):
 Describes the environment for Kubelet managed containers on a Kubernetes
 node.

-* **Securing access to the API Server** [accessing the api](accessing_the_api.md)
+* **Securing access to the API Server** [accessing the api](accessing-the-api.md)

 * **Authentication** [authentication](authentication.md)

 * **Authorization** [authorization](authorization.md)

-* **Admission Controllers** [admission_controllers](admission_controllers.md)
+* **Admission Controllers** [admission_controllers](admission-controllers.md)
@@ -20,7 +20,7 @@ cluster administrators who want to customize their cluster
 or understand the details.

 Most questions about accessing the cluster are covered
-in [Accessing the cluster](accessing-the-cluster.md).
+in [Accessing the cluster](../accessing-the-cluster.md).


 ## Ports and IPs Served On
@@ -81,12 +81,12 @@ commands in those containers, we strongly encourage enabling this plug-in.

 ### ServiceAccount

-This plug-in implements automation for [serviceAccounts](service_accounts.md).
+This plug-in implements automation for [serviceAccounts](../service-accounts.md).
 We strongly recommend using this plug-in if you intend to make use of Kubernetes ```ServiceAccount``` objects.

 ### SecurityContextDeny

-This plug-in will deny any pod with a [SecurityContext](security_context.md) that defines options that were not available on the ```Container```.
+This plug-in will deny any pod with a [SecurityContext](../security-context.md) that defines options that were not available on the ```Container```.

 ### ResourceQuota
@@ -94,7 +94,7 @@ This plug-in will observe the incoming request and ensure that it does not viola
 enumerated in the ```ResourceQuota``` object in a ```Namespace```. If you are using ```ResourceQuota```
 objects in your Kubernetes deployment, you MUST use this plug-in to enforce quota constraints.

-See the [resourceQuota design doc](design/admission_control_resource_quota.md).
+See the [resourceQuota design doc](../design/admission_control_resource_quota.md).

 It is strongly encouraged that this plug-in is configured last in the sequence of admission control plug-ins. This is
 so that quota is not prematurely incremented only for the request to be rejected later in admission control.
@@ -105,7 +105,7 @@ This plug-in will observe the incoming request and ensure that it does not viola
 enumerated in the ```LimitRange``` object in a ```Namespace```. If you are using ```LimitRange``` objects in
 your Kubernetes deployment, you MUST use this plug-in to enforce those constraints.

-See the [limitRange design doc](design/admission_control_limit_range.md).
+See the [limitRange design doc](../design/admission_control_limit_range.md).

 ### NamespaceExists
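For orientation (not part of this diff): these plug-ins are enabled through an ordered, comma-separated apiserver flag. The list below is only illustrative; the authoritative recommended set and order for a given release is the one in admission-controllers.md, with ResourceQuota placed last as the text above advises.

```sh
# Illustrative only -- consult admission-controllers.md for the recommended set.
kube-apiserver --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
```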
@@ -94,7 +94,7 @@ To permit an action Policy with an unset namespace applies regardless of namespa
 3. Kubelet can read and write events: `{"user":"kubelet", "resource": "events"}`
 4. Bob can just read pods in namespace "projectCaribou": `{"user":"bob", "resource": "pods", "readonly": true, "ns": "projectCaribou"}`

-[Complete file example](../pkg/auth/authorizer/abac/example_policy_file.jsonl)
+[Complete file example](../../pkg/auth/authorizer/abac/example_policy_file.jsonl)

 ## Plugin Development
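To make the policy examples above concrete (not part of this diff), here is a hedged sketch of how such lines are used: one JSON object per line in a policy file handed to the apiserver. The flag names are assumptions; see the authorization doc and the complete example file linked above for the authoritative form.

```sh
# One policy object per line (JSONL), as in the examples above.
cat > /srv/kubernetes/abac-policy.jsonl <<'EOF'
{"user":"kubelet", "resource": "events"}
{"user":"bob", "resource": "pods", "readonly": true, "ns": "projectCaribou"}
EOF

# Assumed flag names -- verify against the authorization documentation.
kube-apiserver --authorization-mode=ABAC --authorization-policy-file=/srv/kubernetes/abac-policy.jsonl
```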
@@ -134,7 +134,7 @@ you need `R + U` clusters. If it is not (e.g you want to ensure low latency for
 cluster failure), then you need to have `R * U` clusters (`U` in each of `R` regions). In any case, try to put each cluster in a different zone.

 Finally, if any of your clusters would need more than the maximum recommended number of nodes for a Kubernetes cluster, then
-you may need even more clusters. Our [roadmap](roadmap.md)
+you may need even more clusters. Our [roadmap](../roadmap.md)
 calls for maximum 100 node clusters at v1.0 and maximum 1000 node clusters in the middle of 2015.

 ## Working with multiple clusters
@@ -14,7 +14,7 @@ certainly want the docs that go with that version.</h1>
 <!-- END MUNGE: UNVERSIONED_WARNING -->
 # DNS Integration with Kubernetes

-As of kubernetes 0.8, DNS is offered as a [cluster add-on](../cluster/addons/README.md).
+As of kubernetes 0.8, DNS is offered as a [cluster add-on](../../cluster/addons/README.md).
 If enabled, a DNS Pod and Service will be scheduled on the cluster, and the kubelets will be
 configured to tell individual containers to use the DNS Service's IP.
@@ -49,9 +49,9 @@ time.

 ## For more information

-See [the docs for the DNS cluster addon](../cluster/addons/dns/README.md).
+See [the docs for the DNS cluster addon](../../cluster/addons/dns/README.md).
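A quick sanity check of the add-on from inside a pod (not part of this diff), assuming the common `cluster.local` cluster domain; your distro may configure a different one.

```sh
# The kubelet points containers at the DNS Service's IP via resolv.conf.
cat /etc/resolv.conf

# The apiserver's own Service should resolve; `cluster.local` is an assumption.
nslookup kubernetes.default.svc.cluster.local
```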
docs/admin/namespaces.md (new file, 31 lines)
@@ -0,0 +1,31 @@
+<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
+
+<!-- BEGIN STRIP_FOR_RELEASE -->
+
+<h1>*** PLEASE NOTE: This document applies to the HEAD of the source
+tree only. If you are using a released version of Kubernetes, you almost
+certainly want the docs that go with that version.</h1>
+
+<strong>Documentation for specific releases can be found at
+[releases.k8s.io](http://releases.k8s.io).</strong>
+
+<!-- END STRIP_FOR_RELEASE -->
+
+<!-- END MUNGE: UNVERSIONED_WARNING -->
+# Namespaces
+
+Namespaces help different projects, teams, or customers to share a kubernetes cluster. First, they provide a scope for [Names](../identifiers.md). Second, as our access control code develops, it is expected that it will be convenient to attach authorization and other policy to namespaces.
+
+Use of multiple namespaces is optional. For small teams, they may not be needed.
+
+This is a placeholder document about namespace administration.
+
+TODO: document namespace creation, ownership assignment, visibility rules,
+policy creation, interaction with network.
+
+Namespaces are still under development. For now, the best documentation is the [Namespaces Design Document](../design/namespaces.md). The user documentation can be found at [Namespaces](../../docs/namespaces.md)
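Since the new file leaves creation as a TODO, here is a minimal sketch (not part of this patch) of creating a namespace with the v1 API; the name is made up.

```sh
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Namespace
metadata:
  name: project-caribou   # hypothetical namespace name
EOF

kubectl get namespaces
```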
@@ -34,10 +34,10 @@ certainly want the docs that go with that version.</h1>
 Kubernetes approaches networking somewhat differently than Docker does by
 default. There are 4 distinct networking problems to solve:
 1. Highly-coupled container-to-container communications: this is solved by
-[pods](pods.md) and `localhost` communications.
+[pods](../pods.md) and `localhost` communications.
 2. Pod-to-Pod communications: this is the primary focus of this document.
-3. Pod-to-Service communications: this is covered by [services](services.md).
-4. External-to-Service communications: this is covered by [services](services.md).
+3. Pod-to-Service communications: this is covered by [services](../services.md).
+4. External-to-Service communications: this is covered by [services](../services.md).

 ## Summary
@@ -204,9 +204,9 @@ IPs.

 The early design of the networking model and its rationale, and some future
 plans are described in more detail in the [networking design
-document](design/networking.md).
+document](../design/networking.md).
@@ -36,9 +36,9 @@ certainly want the docs that go with that version.</h1>

 `Node` is a worker machine in Kubernetes, previously known as `Minion`. Node
 may be a VM or physical machine, depending on the cluster. Each node has
-the services necessary to run [Pods](pods.md) and be managed from the master
+the services necessary to run [Pods](../pods.md) and be managed from the master
 systems. The services include docker, kubelet and network proxy. See
-[The Kubernetes Node](design/architecture.md#the-kubernetes-node) section in design
+[The Kubernetes Node](../design/architecture.md#the-kubernetes-node) section in design
 doc for more details.

 ## Node Status
@@ -101,7 +101,7 @@ The information is gathered by Kubernetes from the node.

 ## Node Management

-Unlike [Pods](pods.md) and [Services](services.md), a Node is not inherently
+Unlike [Pods](../pods.md) and [Services](../services.md), a Node is not inherently
 created by Kubernetes: it is either created from cloud providers like Google Compute Engine,
 or from your physical or virtual machines. What this means is that when
 Kubernetes creates a node, it only creates a representation for the node.
@@ -109,9 +109,9 @@ We should define a grains.conf key that captures more specifically what network

 ## Further reading

-The [cluster/saltbase](../cluster/saltbase/) tree has more details on the current SaltStack configuration.
+The [cluster/saltbase](../../cluster/saltbase/) tree has more details on the current SaltStack configuration.
@@ -15,7 +15,7 @@ certainly want the docs that go with that version.</h1>
 # Cluster Admin Guide to Service Accounts

 *This is a Cluster Administrator guide to service accounts. It assumes knowledge of
-the [User Guide to Service Accounts](service_accounts.md).*
+the [User Guide to Service Accounts](../service-accounts.md).*

 *Support for authorization and user accounts is planned but incomplete. Sometimes
 incomplete features are referred to in order to better describe service accounts.*
@@ -49,7 +49,7 @@ Three separate components cooperate to implement the automation around service a
 ### Service Account Admission Controller

 The modification of pods is implemented via a plugin
-called an [Admission Controller](admission_controllers.md). It is part of the apiserver.
+called an [Admission Controller](admission-controllers.md). It is part of the apiserver.
 It acts synchronously to modify pods as they are created or updated. When this plugin is active
 (and it is by default on most distributions), then it does the following when a pod is created or modified:
 1. If the pod does not have a `ServiceAccount` set, it sets the `ServiceAccount` to `default`.
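The effect of step 1 is easy to observe on any pod created without an explicit service account (not part of this diff; the pod name below is hypothetical).

```sh
# The admission controller fills in the default service account.
kubectl get pods/mypod -o yaml | grep serviceAccount
```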
@@ -20,7 +20,7 @@ Overall API conventions are described in the [API conventions doc](api-conventio

 Complete API details are documented via [Swagger](http://swagger.io/). The Kubernetes apiserver (aka "master") exports an API that can be used to retrieve the [Swagger spec](https://github.com/swagger-api/swagger-spec/tree/master/schemas/v1.2) for the Kubernetes API, by default at `/swaggerapi`, and a UI you can use to browse the API documentation at `/swagger-ui`. We also periodically update a [statically generated UI](http://kubernetes.io/third_party/swagger-ui/).

-Remote access to the API is discussed in the [access doc](accessing_the_api.md).
+Remote access to the API is discussed in the [access doc](admin/accessing-the-api.md).

 The Kubernetes API also serves as the foundation for the declarative configuration schema for the system. The [Kubectl](user-guide/kubectl/kubectl.md) command-line tool can be used to create, update, delete, and get API objects.
@@ -48,7 +48,7 @@ As of June 4, 2015, the Kubernetes v1 API has been enabled by default. The v1bet

 ### v1 conversion tips (from v1beta3)

-We're working to convert all documentation and examples to v1. A simple [API conversion tool](cluster_management.md#switching-your-config-files-to-a-new-api-version) has been written to simplify the translation process. Use `kubectl create --validate` in order to validate your json or yaml against our Swagger spec.
+We're working to convert all documentation and examples to v1. A simple [API conversion tool](admin/cluster-management.md#switching-your-config-files-to-a-new-api-version) has been written to simplify the translation process. Use `kubectl create --validate` in order to validate your json or yaml against our Swagger spec.

 Changes to services are the most significant difference between v1beta3 and v1.
@@ -58,7 +58,7 @@ Changes to services are the most significant difference between v1beta3 and v1.

 Some other difference between v1beta3 and v1:

-* The `pod.spec.containers[*].privileged` and `pod.spec.containers[*].capabilities` properties are now nested under the `pod.spec.containers[*].securityContext` property. See [Security Contexts](security_context.md).
+* The `pod.spec.containers[*].privileged` and `pod.spec.containers[*].capabilities` properties are now nested under the `pod.spec.containers[*].securityContext` property. See [Security Contexts](security-context.md).
 * The `pod.spec.host` property is renamed to `pod.spec.nodeName`.
 * The `endpoints.subsets[*].addresses.IP` property is renamed to `endpoints.subsets[*].addresses.ip`.
 * The `pod.status.containerStatuses[*].state.termination` and `pod.status.containerStatuses[*].lastState.termination` properties are renamed to `pod.status.containerStatuses[*].state.terminated` and `pod.status.containerStatuses[*].lastState.terminated` respectively.
@@ -79,7 +79,7 @@ Some important differences between v1beta1/2 and v1beta3:
 * The `labels` query parameter has been renamed to `labelSelector`.
 * The `fields` query parameter has been renamed to `fieldSelector`.
 * The container `entrypoint` has been renamed to `command`, and `command` has been renamed to `args`.
-* Container, volume, and node resources are expressed as nested maps (e.g., `resources{cpu:1}`) rather than as individual fields, and resource values support [scaling suffixes](compute_resources.md#specifying-resource-quantities) rather than fixed scales (e.g., milli-cores).
+* Container, volume, and node resources are expressed as nested maps (e.g., `resources{cpu:1}`) rather than as individual fields, and resource values support [scaling suffixes](compute-resources.md#specifying-resource-quantities) rather than fixed scales (e.g., milli-cores).
 * Restart policy is represented simply as a string (e.g., `"Always"`) rather than as a nested map (`always{}`).
 * Pull policies changed from `PullAlways`, `PullNever`, and `PullIfNotPresent` to `Always`, `Never`, and `IfNotPresent`.
 * The volume `source` is inlined into `volume` rather than nested.
@@ -68,7 +68,7 @@ To avoid running into cluster addon resource issues, when creating a cluster wit
 * [FluentD with ElasticSearch Plugin](../cluster/saltbase/salt/fluentd-es/fluentd-es.yaml)
 * [FluentD with GCP Plugin](../cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml)

-For directions on how to detect if addon containers are hitting resource limits, see the [Troubleshooting section of Compute Resources](compute_resources.md#troubleshooting).
+For directions on how to detect if addon containers are hitting resource limits, see the [Troubleshooting section of Compute Resources](compute-resources.md#troubleshooting).
@@ -34,7 +34,7 @@ in units of cores. Memory is specified in units of bytes.

 CPU and RAM are collectively referred to as *compute resources*, or just *resources*. Compute
 resources are measureable quantities which can be requested, allocated, and consumed. They are
-distinct from [API resources](working_with_resources.md). API resources, such as pods and
+distinct from [API resources](working-with-resources.md). API resources, such as pods and
 [services](services.md) are objects that can be written to and retrieved from the Kubernetes API
 server.
@@ -147,7 +147,7 @@ Here are some example command lines that extract just the necessary information:
 - `kubectl get nodes -o yaml | grep '\sname\|cpu\|memory'`
 - `kubectl get nodes -o json | jq '.items[] | {name: .metadata.name, cap: .status.capacity}'`

-The [resource quota](resource_quota_admin.md) feature can be configured
+The [resource quota](admin/resource-quota.md) feature can be configured
 to limit the total amount of resources that can be consumed. If used in conjunction
 with namespaces, it can prevent one team from hogging all the resources.
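To make the compute-resources material concrete (not part of this diff), here is a hedged sketch of a v1 pod that declares CPU and memory limits using the scaling suffixes mentioned elsewhere in this patch; names and values are illustrative.

```sh
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo        # hypothetical
spec:
  containers:
  - name: app
    image: nginx
    resources:
      limits:
        cpu: 100m            # 0.1 core
        memory: 50Mi
EOF
```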
@@ -24,7 +24,7 @@ Kubernetes enables users to ask a cluster to run a set of containers. The system

 Kubernetes is intended to run on a number of cloud providers, as well as on physical hosts.

-A single Kubernetes cluster is not intended to span multiple availability zones. Instead, we recommend building a higher-level layer to replicate complete deployments of highly available applications across multiple zones (see [the availability doc](../availability.md) and [cluster federation proposal](../proposals/federation.md) for more details).
+A single Kubernetes cluster is not intended to span multiple availability zones. Instead, we recommend building a higher-level layer to replicate complete deployments of highly available applications across multiple zones (see [the availability doc](../admin/availability.md) and [cluster federation proposal](../proposals/federation.md) for more details).

 Finally, Kubernetes aspires to be an extensible, pluggable, building-block OSS platform and toolkit. Therefore, architecturally, we want Kubernetes to be built as a collection of pluggable components and layers, with the ability to use alternative schedulers, controllers, storage systems, and distribution mechanisms, and we're evolving its current code in that direction. Furthermore, we want others to be able to extend Kubernetes functionality, such as with higher-level PaaS functionality or multi-cluster layers, without modification of core Kubernetes source. Therefore, its API isn't just (or even necessarily mainly) targeted at end users, but at tool and extension developers. Its APIs are intended to serve as the foundation for an open ecosystem of tools, automation systems, and higher-level API layers. Consequently, there are no "internal" inter-component APIs. All APIs are visible and available, including the APIs used by the scheduler, the node controller, the replication-controller manager, Kubelet's API, etc. There's no glass to break -- in order to handle more complex use cases, one can just access the lower-level APIs in a fully transparent, composable manner.
@@ -33,7 +33,7 @@ The **Kubelet** manages [pods](../pods.md) and their containers, their images, t

 Each node also runs a simple network proxy and load balancer (see the [services FAQ](https://github.com/GoogleCloudPlatform/kubernetes/wiki/Services-FAQ) for more details). This reflects `services` (see [the services doc](../services.md) for more details) as defined in the Kubernetes API on each node and can do simple TCP and UDP stream forwarding (round robin) across a set of backends.

-Service endpoints are currently found via [DNS](../dns.md) or through environment variables (both [Docker-links-compatible](https://docs.docker.com/userguide/dockerlinks/) and Kubernetes {FOO}_SERVICE_HOST and {FOO}_SERVICE_PORT variables are supported). These variables resolve to ports managed by the service proxy.
+Service endpoints are currently found via [DNS](../admin/dns.md) or through environment variables (both [Docker-links-compatible](https://docs.docker.com/userguide/dockerlinks/) and Kubernetes {FOO}_SERVICE_HOST and {FOO}_SERVICE_PORT variables are supported). These variables resolve to ports managed by the service proxy.

 ## The Kubernetes Control Plane
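From a shell inside any container, those variables are plain environment variables (not part of this diff; which names exist depends on the services defined when the pod started).

```sh
env | grep -E '_SERVICE_(HOST|PORT)='
```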
@@ -86,7 +86,7 @@ distinguish distinct entities, and reference particular entities across operatio

 A *Namespace* provides an authorization scope for accessing content associated with the *Namespace*.

-See [Authorization plugins](../authorization.md)
+See [Authorization plugins](../admin/authorization.md)

 ### Limit Resource Consumption
@@ -129,7 +129,7 @@ a pod tries to egress beyond GCE's project the packets must be SNAT'ed

 With the primary aim of providing IP-per-pod-model, other implementations exist
 to serve the purpose outside of GCE.
-- [OpenVSwitch with GRE/VxLAN](../ovs-networking.md)
+- [OpenVSwitch with GRE/VxLAN](../admin/ovs-networking.md)
 - [Flannel](https://github.com/coreos/flannel#flannel)
 - [L2 networks](http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/)
 ("With Linux Bridge devices" section)
@@ -13,7 +13,7 @@ certainly want the docs that go with that version.</h1>

 <!-- END MUNGE: UNVERSIONED_WARNING -->
 **Note: this is a design doc, which describes features that have not been completely implemented.
-User documentation of the current state is [here](../compute_resources.md). The tracking issue for
+User documentation of the current state is [here](../compute-resources.md). The tracking issue for
 implementation of this model is
 [#168](https://github.com/GoogleCloudPlatform/kubernetes/issues/168). Currently, only memory and
 cpu limits on containers (not pods) are supported. "memory" is in bytes and "cpu" is in
@@ -34,7 +34,7 @@ They also may interact with services other than the Kubernetes API, such as:
 ## Design Overview
 A service account binds together several things:
 - a *name*, understood by users, and perhaps by peripheral systems, for an identity
-- a *principal* that can be authenticated and [authorized](../authorization.md)
+- a *principal* that can be authenticated and [authorized](../admin/authorization.md)
 - a [security context](security_context.md), which defines the Linux Capabilities, User IDs, Groups IDs, and other
 capabilities and controls on interaction with the file system and OS.
 - a set of [secrets](secrets.md), which a container may use to
@@ -17,7 +17,7 @@ certainly want the docs that go with that version.</h1>
 The developer guide is for anyone wanting to either write code which directly accesses the
 kubernetes API, or to contribute directly to the kubernetes project.
 It assumes some familiarity with concepts in the [User Guide](user-guide.md) and the [Cluster Admin
-Guide](cluster-admin-guide.md).
+Guide](admin/README.md).


 ## Developing against the Kubernetes API
@@ -35,10 +35,10 @@ Guide](cluster-admin-guide.md).

 ## Writing Plugins

-* **Authentication Plugins** ([authentication.md](authentication.md)):
+* **Authentication Plugins** ([admin/authentication.md](admin/authentication.md)):
 The current and planned states of authentication tokens.

-* **Authorization Plugins** ([authorization.md](authorization.md)):
+* **Authorization Plugins** ([admin/authorization.md](admin/authorization.md)):
 Authorization applies to all HTTP requests on the main apiserver port.
 This doc explains the available authorization implementations.
@@ -62,7 +62,7 @@ Definition of columns:
 - **OS** is the base operating system of the nodes.
 - **Config. Mgmt** is the configuration management system that helps install and maintain kubernetes software on the
 nodes.
-- **Networking** is what implements the [networking model](../../docs/networking.md). Those with networking type
+- **Networking** is what implements the [networking model](../../docs/admin/networking.md). Those with networking type
 _none_ may not support more than one node, or may support multiple VM nodes only in the same physical node.
 - **Conformance** indicates whether a cluster created with this configuration has passed the project's conformance
 tests for supporting the API and base features of Kubernetes v1.0.0.
@@ -25,7 +25,7 @@ You need two machines with CentOS installed on them.
 ## Starting a cluster
 This is a getting started guide for CentOS. It is a manual configuration so you understand all the underlying packages / services / ports, etc...

-This guide will only get ONE node working. Multiple nodes requires a functional [networking configuration](../../networking.md) done outside of kubernetes. Although the additional kubernetes configuration requirements should be obvious.
+This guide will only get ONE node working. Multiple nodes requires a functional [networking configuration](../../admin/networking.md) done outside of kubernetes. Although the additional kubernetes configuration requirements should be obvious.

 The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, centos-master, will be the kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_. The remaining host, centos-minion will be the node and run kubelet, proxy, cadvisor and docker.
@@ -27,7 +27,7 @@ Getting started on [Fedora](http://fedoraproject.org)

 This is a getting started guide for Fedora. It is a manual configuration so you understand all the underlying packages / services / ports, etc...

-This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](../../networking.md) done outside of kubernetes. Although the additional kubernetes configuration requirements should be obvious.
+This guide will only get ONE node (previously minion) working. Multiple nodes require a functional [networking configuration](../../admin/networking.md) done outside of kubernetes. Although the additional kubernetes configuration requirements should be obvious.

 The kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd and the configuration resides in a central location: /etc/kubernetes. We will break the services up between the hosts. The first host, fed-master, will be the kubernetes master. This host will run the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run _etcd_ (not needed if _etcd_ runs on a different host but this guide assumes that _etcd_ and kubernetes master run on the same host). The remaining host, fed-node will be the node and run kubelet, proxy and docker.
@@ -35,7 +35,7 @@ Here is the same information in a picture which shows how the pods might be plac
 

 This diagram shows four nodes created on a Google Compute Engine cluster with the name of each VM node on a purple background. The internal and public IPs of each node are shown on gray boxes and the pods running in each node are shown in green boxes. Each pod box shows the name of the pod and the namespace it runs in, the IP address of the pod and the images which are run as part of the pod’s execution. Here we see that every node is running a fluentd-cloud-logging pod which is collecting the log output of the containers running on the same node and sending them to Google Cloud Logging. A pod which provides the
-[cluster DNS service](../../docs/dns.md) runs on one of the nodes and a pod which provides monitoring support runs on another node.
+[cluster DNS service](../admin/dns.md) runs on one of the nodes and a pod which provides monitoring support runs on another node.

 To help explain how cluster level logging works let’s start off with a synthetic log generator pod specification [counter-pod.yaml](../../examples/blog-logging/counter-pod.yaml):
 ```
@@ -82,7 +82,7 @@ on how flags are set on various components.
 have identical configurations.

 ### Network
-Kubernetes has a distinctive [networking model](../networking.md).
+Kubernetes has a distinctive [networking model](../admin/networking.md).

 Kubernetes allocates an IP address to each pod. When creating a cluster, you
 need to allocate a block of IPs for Kubernetes to use as Pod IPs. The simplest
@@ -252,7 +252,7 @@ The admin user (and any users) need:

 Your tokens and passwords need to be stored in a file for the apiserver
 to read. This guide uses `/var/lib/kube-apiserver/known_tokens.csv`.
-The format for this file is described in the [authentication documentation](../authentication.md).
+The format for this file is described in the [authentication documentation](../admin/authentication.md).

 For distributing credentials to clients, the convention in Kubernetes is to put the credentials
 into a [kubeconfig file](../kubeconfig-file.md).
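A hedged sketch of such a token file (not part of this diff): the `token,user,uid` column layout is my reading of the authentication doc, so verify there before relying on it, and use real random tokens.

```sh
cat > /var/lib/kube-apiserver/known_tokens.csv <<'EOF'
0123456789abcdef,admin,admin
fedcba9876543210,kubelet,kubelet
EOF
```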
@@ -378,7 +378,7 @@ Arguments to consider:
 - `--docker-root=`
 - `--root-dir=`
 - `--configure-cbr0=` (described above)
-- `--register-node` (described in [Node](../node.md) documentation.
+- `--register-node` (described in [Node](../admin/node.md) documentation.

 ### kube-proxy
@@ -398,7 +398,7 @@ Each node needs to be allocated its own CIDR range for pod networking.
 Call this `NODE_X_POD_CIDR`.

 A bridge called `cbr0` needs to be created on each node. The bridge is explained
-further in the [networking documentation](../networking.md). The bridge itself
+further in the [networking documentation](../admin/networking.md). The bridge itself
 needs an address from `$NODE_X_POD_CIDR` - by convention the first IP. Call
 this `NODE_X_BRIDGE_ADDR`. For example, if `NODE_X_POD_CIDR` is `10.0.0.0/16`,
 then `NODE_X_BRIDGE_ADDR` is `10.0.0.1/16`. NOTE: this retains the `/16` suffix
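One way to create the bridge by hand for the example values above (not part of this diff; assumes the bridge-utils and iproute2 tools are installed — most distros script this step instead).

```sh
# NODE_X_POD_CIDR=10.0.0.0/16 -> NODE_X_BRIDGE_ADDR=10.0.0.1/16
brctl addbr cbr0
ip addr add 10.0.0.1/16 dev cbr0
ip link set dev cbr0 up
```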
@@ -444,7 +444,7 @@ traffic to the internet, but have no problem with them inside your GCE Project.
 ### Using Configuration Management
 The previous steps all involved "conventional" system administration techniques for setting up
 machines. You may want to use a Configuration Management system to automate the node configuration
-process. There are examples of [Saltstack](../salt.md), Ansible, Juju, and CoreOS Cloud Config in the
+process. There are examples of [Saltstack](../admin/salt.md), Ansible, Juju, and CoreOS Cloud Config in the
 various Getting Started Guides.

 ## Bootstrapping the Cluster
@@ -463,7 +463,7 @@ You will need to run one or more instances of etcd.
 - Alternative: run 3 or 5 etcd instances.
 - Log can be written to non-durable storage because storage is replicated.
 - run a single apiserver which connects to one of the etc nodes.
-See [Availability](../availability.md) for more discussion on factors affecting cluster
+See [Availability](../admin/availability.md) for more discussion on factors affecting cluster
 availability.

 To run an etcd instance:
@@ -489,7 +489,7 @@ Here are some apiserver flags you may need to set:
 - `--tls-cert-file=/srv/kubernetes/server.cert` -%}
 - `--tls-private-key-file=/srv/kubernetes/server.key` -%}
 - `--admission-control=$RECOMMENDED_LIST`
-  - See [admission controllers](../admission_controllers.md) for recommended arguments.
+  - See [admission controllers](../admin/admission-controllers.md) for recommended arguments.
 - `--allow-privileged=true`, only if you trust your cluster user to run pods as root.

 If you are following the firewall-only security approach, then use these arguments:
@@ -663,7 +663,7 @@ Flags to consider using with controller manager.
 - `--allocate-node-cidrs=`
 - *TODO*: explain when you want controller to do this and when you wanna do it another way.
 - `--cloud-provider=` and `--cloud-config` as described in apiserver section.
-- `--service-account-private-key-file=/srv/kubernetes/server.key`, used by [service account](../service_accounts.md) feature.
+- `--service-account-private-key-file=/srv/kubernetes/server.key`, used by [service account](../service-accounts.md) feature.
 - `--master=127.0.0.1:8080`

 Template for controller manager pod:
@@ -47,7 +47,7 @@ can be created/destroyed together. See [pods](pods.md).
 for easy scaling of replicated systems, and handles restarting of a Pod when the machine it is on reboots or otherwise fails.

 **Resource**
-: CPU, memory, and other things that a pod can request. See [compute resources](compute_resources.md).
+: CPU, memory, and other things that a pod can request. See [compute resources](compute-resources.md).

 **Secret**
 : An object containing sensitive information, such as authentication tokens, which can be made available to containers upon request. See [secrets](secrets.md).
@@ -215,7 +215,7 @@ spec:
 ```
 This needs to be done for each pod that is using a private registry.
 However, setting of this field can be automated by setting the imagePullSecrets
-in a [serviceAccount](service_accounts.md) resource.
+in a [serviceAccount](service-accounts.md) resource.

 Currently, all pods will potentially have read access to any images which were
 pulled using imagePullSecrets. That is, imagePullSecrets does *NOT* protect your
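A hedged sketch of the automation mentioned above (not part of this diff): a service account that carries an imagePullSecrets entry, so pods using it get the field without editing each pod spec. The account and secret names are made up, and the secret must already exist.

```sh
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: image-puller        # hypothetical
imagePullSecrets:
- name: myregistrykey       # hypothetical pre-created secret
EOF
```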
@@ -24,9 +24,9 @@ Users can create and manage pods themselves, but Kubernetes drastically simplifi

 Frequently it is useful to refer to a set of pods, for example to limit the set of pods on which a mutating operation should be performed, or that should be queried for status. As a general mechanism, users can attach to most Kubernetes API objects arbitrary key-value pairs called [labels](labels.md), and then use a set of label selectors (key-value queries over labels) to constrain the target of API operations. Each resource also has a map of string keys and values that can be used by external tooling to store and retrieve arbitrary metadata about this object, called [annotations](annotations.md).

-Kubernetes supports a unique [networking model](networking.md). Kubernetes encourages a flat address space and does not dynamically allocate ports, instead allowing users to select whichever ports are convenient for them. To achieve this, it allocates an IP address for each pod.
+Kubernetes supports a unique [networking model](admin/networking.md). Kubernetes encourages a flat address space and does not dynamically allocate ports, instead allowing users to select whichever ports are convenient for them. To achieve this, it allocates an IP address for each pod.

-Modern Internet applications are commonly built by layering micro-services, for example a set of web front-ends talking to a distributed in-memory key-value store talking to a replicated storage service. To facilitate this architecture, Kubernetes offers the [service](services.md) abstraction, which provides a stable IP address and [DNS name](dns.md) that corresponds to a dynamic set of pods such as the set of pods constituting a micro-service. The set is defined using a label selector and thus can refer to any set of pods. When a container running in a Kubernetes pod connects to this address, the connection is forwarded by a local agent (called the kube proxy) running on the source machine, to one of the corresponding back-end containers. The exact back-end is chosen using a round-robin policy to balance load. The kube proxy takes care of tracking the dynamic set of back-ends as pods are replaced by new pods on new hosts, so that the service IP address (and DNS name) never changes.
+Modern Internet applications are commonly built by layering micro-services, for example a set of web front-ends talking to a distributed in-memory key-value store talking to a replicated storage service. To facilitate this architecture, Kubernetes offers the [service](services.md) abstraction, which provides a stable IP address and [DNS name](admin/dns.md) that corresponds to a dynamic set of pods such as the set of pods constituting a micro-service. The set is defined using a label selector and thus can refer to any set of pods. When a container running in a Kubernetes pod connects to this address, the connection is forwarded by a local agent (called the kube proxy) running on the source machine, to one of the corresponding back-end containers. The exact back-end is chosen using a round-robin policy to balance load. The kube proxy takes care of tracking the dynamic set of back-ends as pods are replaced by new pods on new hosts, so that the service IP address (and DNS name) never changes.

 Every resource in Kubernetes, such as a pod, is identified by a URI and has a UID. Important components of the URI are the kind of object (e.g. pod), the object’s name, and the object’s [namespace](namespaces.md). For a certain object kind, every name is unique within its namespace. In contexts where an object name is provided without a namespace, it is assumed to be in the default namespace. UID is unique across time and space.
@@ -39,7 +39,7 @@ Like individual application containers, pods are considered to be relatively eph

 Pods facilitate data sharing and communication among their constituents.

-The applications in the pod all use the same network namespace/IP and port space, and can find and communicate with each other using localhost. Each pod has an IP address in a flat shared networking namespace that has full communication with other physical computers and containers across the network. The hostname is set to the pod's Name for the application containers within the pod. [More details on networking](networking.md).
+The applications in the pod all use the same network namespace/IP and port space, and can find and communicate with each other using localhost. Each pod has an IP address in a flat shared networking namespace that has full communication with other physical computers and containers across the network. The hostname is set to the pod's Name for the application containers within the pod. [More details on networking](admin/networking.md).

 In addition to defining the application containers that run in the pod, the pod specifies a set of shared storage volumes. Volumes enable data to survive container restarts and to be shared among the applications within the pod.
@@ -66,7 +66,7 @@ The automatic creation and use of API credentials can be disabled or overridden
 if desired. However, if all you need to do is securely access the apiserver,
 this is the recommended workflow.

-See the [Service Account](service_accounts.md) documentation for more
+See the [Service Account](service-accounts.md) documentation for more
 information on how Service Accounts work.

 ### Creating a Secret Manually
@@ -92,7 +92,7 @@ are `value-1` and `value-2`, respectively, with carriage return and newline char
 Create the secret using [`kubectl create`](user-guide/kubectl/kubectl_create.md).

 Once the secret is created, you can:
-- create pods that automatically use it via a [Service Account](service_accounts.md).
+- create pods that automatically use it via a [Service Account](service-accounts.md).
 - modify your pod specification to use the secret

 ### Manually specifying a Secret to be Mounted on a Pod
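A minimal sketch of the manual creation described above (not part of this diff); the keys are illustrative and the values are the doc's `value-1`/`value-2` strings, base64-encoded as the v1 API expects.

```sh
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
data:
  data-1: dmFsdWUtMQ0K   # base64 of "value-1\r\n"
  data-2: dmFsdWUtMg0K   # base64 of "value-2\r\n"
EOF
```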
@@ -141,7 +141,7 @@ Use of imagePullSecrets is desribed in the [images documentation](images.md#spec
 *This feature is planned but not implemented. See [issue
 9902](https://github.com/GoogleCloudPlatform/kubernetes/issues/9902).*

-You can reference manually created secrets from a [service account](service_accounts.md).
+You can reference manually created secrets from a [service account](service-accounts.md).
 Then, pods which use that service account will have
 `volumeMounts` and/or `imagePullSecrets` added to them.
 The secrets will be mounted at **TBD**.
@@ -17,7 +17,7 @@ certainly want the docs that go with that version.</h1>
 A service account provides an identity for processes that run in a Pod.

 *This is a user introduction to Service Accounts. See also the
-[Cluster Admin Guide to Service Accounts](service_accounts_admin.md).*
+[Cluster Admin Guide to Service Accounts](admin/service-accounts-admin.md).*

 *Note: This document describes how service accounts behave in a cluster set up
 as recommended by the Kubernetes project. Your cluster administrator may have
@@ -37,7 +37,7 @@ When you create a pod, you do not need to specify a service account. It is
 automatically assigned the `default` service account of the same namespace. If
 you get the raw json or yaml for a pod you have created (e.g. `kubectl get
 pods/podname -o yaml`), you can see the `spec.serviceAccount` field has been
-[automatically set](working_with_resources.md#resources-are-automatically-modified).
+[automatically set](working-with-resources.md#resources-are-automatically-modified).

 You can access the API using a proxy or with a client library, as described in
 [Accessing the Cluster](accessing-the-cluster.md#accessing-the-api-from-a-pod).
@@ -193,7 +193,7 @@ The net result is that any traffic bound for the `Service` is proxied to an
 appropriate backend without the clients knowing anything about Kubernetes or
 `Services` or `Pods`.

-
+

 By default, the choice of backend is random. Client-IP based session affinity
 can be selected by setting `service.spec.sessionAffinity` to `"ClientIP"` (the
@@ -504,7 +504,7 @@ This means that `Service` owners can choose any port they want without risk of
 collision. Clients can simply connect to an IP and port, without being aware
 of which `Pods` they are actually accessing.

-
+
@@ -16,7 +16,7 @@ certainly want the docs that go with that version.</h1>

 The user guide is intended for anyone who wants to run programs and services
 on an existing Kubernetes cluster. Setup and administration of a
-Kubernetes cluster is described in the [Cluster Admin Guide](cluster-admin-guide.md).
+Kubernetes cluster is described in the [Cluster Admin Guide](admin/README.md).
 The [Developer Guide](developer-guide.md) is for anyone wanting to either write code which directly accesses the
 kubernetes API, or to contribute directly to the kubernetes project.
@@ -25,7 +25,7 @@ kubernetes API, or to contribute directly to the kubernetes project.
 * **Overview** ([overview.md](overview.md)): A brief overview
 of Kubernetes concepts.

-* **Nodes** ([node.md](node.md)): A node is a worker machine in Kubernetes.
+* **Nodes** ([admin/node.md](admin/node.md)): A node is a worker machine in Kubernetes.

 * **Pods** ([pods.md](pods.md)): A pod is a tightly-coupled group of containers
 with shared volumes.
@@ -81,15 +81,15 @@ for i in *.md; do grep -r $i . | grep -v "^\./$i" > /dev/null; rv=$?; if [[ $rv
 * **Annotations** ([annotations.md](annotations.md)): Attaching
 arbitrary non-identifying metadata.

-* **Downward API** ([downward_api.md](downward_api.md)): Accessing system
+* **Downward API** ([downward-api.md](downward-api.md)): Accessing system
 configuration from a pod without accessing Kubernetes API (see also
 [container-environment.md](container-environment.md)).

 * **Kubernetes Container Environment** ([container-environment.md](container-environment.md)):
 Describes the environment for Kubelet managed containers on a Kubernetes
-node (see also [downward_api.md](downward_api.md)).
+node (see also [downward-api.md](downward-api.md)).

-* **DNS Integration with SkyDNS** ([dns.md](dns.md)):
+* **DNS Integration with SkyDNS** ([admin/dns.md](admin/dns.md)):
 Resolving a DNS name directly to a Kubernetes service.

 * **Identifiers** ([identifiers.md](identifiers.md)): Names and UIDs
@@ -103,12 +103,12 @@ for i in *.md; do grep -r $i . | grep -v "^\./$i" > /dev/null; rv=$?; if [[ $rv
 * **Namespaces** ([namespaces.md](namespaces.md)): Namespaces help different
 projects, teams, or customers to share a kubernetes cluster.

-* **Networking** ([networking.md](networking.md)): Pod networking overview.
+* **Networking** ([admin/networking.md](admin/networking.md)): Pod networking overview.

 * **Services and firewalls** ([services-firewalls.md](services-firewalls.md)): How
 to use firewalls.

-* **Compute Resources** ([compute_resources.md](compute_resources.md)):
+* **Compute Resources** ([compute-resources.md](compute-resources.md)):
 Provides resource information such as size, type, and quantity to assist in
 assigning Kubernetes resources appropriately.
@@ -72,7 +72,7 @@ $ kubectl get pods -l app=nginx -o json | grep podIP
 ```
 You should be able to ssh into any node in your cluster and curl both ips. Note that the containers are *not* using port 80 on the node, nor are there any special NAT rules to route traffic to the pod. This means you can run multiple nginx pods on the same node all using the same containerPort and access them from any other pod or node in your cluster using ip. Like Docker, ports can still be published to the host node's interface(s), but the need for this is radically diminished because of the networking model.

-You can read more about [how we achieve this](../networking.md#how-to-achieve-this) if you’re curious.
+You can read more about [how we achieve this](../admin/networking.md#how-to-achieve-this) if you’re curious.

 ## Creating a Service for the pods
@@ -370,7 +370,7 @@ medium of the filesystem holding the kubelet root dir (typically
 pods.

 In the future, we expect that `emptyDir` and `hostPath` volumes will be able to
-request a certain amount of space using a [resource](compute_resources.md)
+request a certain amount of space using a [resource](compute-resources.md)
 specification, and to select the type of media to use, for clusters that have
 several media types.
@@ -82,7 +82,7 @@ Backend Namespace: default

 First the frontend pod's information is printed. The pod name and
 [namespace](../../docs/design/namespaces.md) are retreived from the
-[Downward API](../../docs/downward_api.md). Next, `USER_VAR` is the name of
+[Downward API](../../docs/downward-api.md). Next, `USER_VAR` is the name of
 an environment variable set in the [pod
 definition](show-rc.yaml). Then, the dynamic kubernetes environment
 variables are scanned and printed. These are used to find the backend