Merge pull request #9082 from tnguyen-rh/hmm
docs: markdown and simple fixes
Commit a58847e45d
@ -13,6 +13,7 @@ Like labels, annotations are key-value maps.
```

Possible information that could be recorded in annotations:

* fields managed by a declarative configuration layer, to distinguish them from client- and/or server-set default values and other auto-generated fields, fields set by auto-sizing/auto-scaling systems, etc., in order to facilitate merging
* build/release/image information (timestamps, release ids, git branch, PR numbers, image hashes, registry address, etc.)
* pointers to logging/monitoring/analytics/audit repos
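
For illustration only, here is a minimal sketch of how annotations of this kind might be attached to an object's metadata; the keys, values, and object name below are hypothetical, not a convention from this document:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                          # hypothetical object
  annotations:
    # non-identifying metadata; unlike labels, never used for selection
    example.com/git-branch: "release-1.0"
    example.com/image-hash: "sha256:0123abcd"
    example.com/log-repo: "https://logs.example.com/example-pod"
spec:
  containers:
  - name: app
    image: example/app:1.0                   # hypothetical image
```
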
@ -57,9 +57,10 @@ This hook is called before the PostStart handler, when a container has been rest

This hook is called immediately before a container is terminated. This event handler is blocking, and must complete before the call to delete the container is sent to the Docker daemon. The SIGTERM notification sent by Docker is also still sent.

A single parameter named reason is passed to the handler which contains the reason for termination. Currently the valid values for reason are:

* ● ```Delete``` - indicating an API call to delete the pod containing this container.
* ● ```Health``` - indicating that a health check of the container failed.
* ● ```Dependency``` - indicating that a dependency for the container or the pod is missing, and thus, the container needs to be restarted. Examples include, the pod infra container crashing, or persistent disk failing for a container that mounts PD.

* ```Delete``` - indicating an API call to delete the pod containing this container.
* ```Health``` - indicating that a health check of the container failed.
* ```Dependency``` - indicating that a dependency for the container or the pod is missing, and thus, the container needs to be restarted. Examples include, the pod infra container crashing, or persistent disk failing for a container that mounts PD.

Eventually, user specified reasons may be [added to the API](https://github.com/GoogleCloudPlatform/kubernetes/issues/137).

@ -67,7 +68,7 @@ Eventually, user specified reasons may be [added to the API](https://github.com/

### Hook Handler Execution

When a management hook occurs, the management system calls into any registered hook handlers in the container for that hook. These hook handler calls are synchronous in the context of the pod containing the container. Note: this means that hook handler execution blocks any further management of the pod. If your hook handler blocks, no other management (including health checks) will occur until the hook handler completes. Blocking hook handlers do *not* affect management of other Pods. Typically we expect that users will make their hook handlers as lightweight as possible, but there are cases where long running commands make sense (e.g. saving state prior to container stop).

For hooks which have parameters, these parameters are passed to the event handler as a set of key/value pairs. The details of this parameter passing is handler implementation dependent (see below)
For hooks which have parameters, these parameters are passed to the event handler as a set of key/value pairs. The details of this parameter passing is handler implementation dependent (see below).

### Hook delivery guarantees

Hook delivery is "at least once", which means that a hook may be called multiple times for any given event (e.g. "start" or "stop") and it is up to the hook implementer to be able to handle this.
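
As a hedged sketch of how such handlers are wired up in a pod manifest today: the current API attaches them under the container's `lifecycle` field (the design-era `reason` parameter described above is not part of this sketch, and the name, image, and script path are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hook-demo                   # hypothetical name
spec:
  containers:
  - name: app
    image: example/app:1.0          # hypothetical image
    lifecycle:
      postStart:
        exec:
          # runs in the container right after it starts
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
      preStop:
        exec:
          # runs, and must finish, before the container is deleted
          command: ["/bin/sh", "-c", "/app/save-state.sh"]
```
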
@ -11,7 +11,7 @@ Each object can have a set of key/value labels defined. Each Key must be unique

}
```

We'll eventually index and reverse-index labels for efficient queries and watches, use them to sort and group in UIs and CLIs, etc. We don't want to pollute labels with non-identifying, especially large and/or structured, data. Non-identifying information should be recorded using [annotations](/docs/annotations.md).
We'll eventually index and reverse-index labels for efficient queries and watches, use them to sort and group in UIs and CLIs, etc. We don't want to pollute labels with non-identifying, especially large and/or structured, data. Non-identifying information should be recorded using [annotations](annotations.md).

## Motivation
@ -21,11 +21,12 @@ Labels enable users to map their own organizational structures onto system objec

Service deployments and batch processing pipelines are often multi-dimensional entities (e.g., multiple partitions or deployments, multiple release tracks, multiple tiers, multiple micro-services per tier). Management often requires cross-cutting operations, which breaks encapsulation of strictly hierarchical representations, especially rigid hierarchies determined by the infrastructure rather than by users.

Example labels:

- `"release" : "stable"`, `"release" : "canary"`, ...
- `"environment" : "dev"`, `"environment" : "qa"`, `"environment" : "production"`
- `"tier" : "frontend"`, `"tier" : "backend"`, `"tier" : "middleware"`
- `"partition" : "customerA"`, `"partition" : "customerB"`, ...
- `"track" : "daily"`, `"track" : "weekly"`

* `"release" : "stable"`, `"release" : "canary"`, ...
* `"environment" : "dev"`, `"environment" : "qa"`, `"environment" : "production"`
* `"tier" : "frontend"`, `"tier" : "backend"`, `"tier" : "middleware"`
* `"partition" : "customerA"`, `"partition" : "customerB"`, ...
* `"track" : "daily"`, `"track" : "weekly"`

These are just examples; you are free to develop your own conventions.
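
As a concrete, purely illustrative sketch, labels along these lines end up in an object's `metadata.labels` map (the pod name and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend-canary-1            # hypothetical name
  labels:
    release: canary
    environment: production
    tier: frontend
    track: daily
spec:
  containers:
  - name: frontend
    image: example/frontend:2.0      # hypothetical image
```
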
@ -80,12 +81,14 @@ _Set-based_ requirements can be mixed with _equality-based_ requirements. For ex

## API

LIST and WATCH operations may specify label selectors to filter the sets of objects returned using a query parameter. Both requirements are permitted:
- _equality-based_ requirements: `?label-selector=key1%3Dvalue1,key2%3Dvalue2`
- _set-based_ requirements: `?label-selector=key+in+%28value1%2Cvalue2%29%2Ckey2+notin+%28value3`

* _equality-based_ requirements: `?label-selector=key1%3Dvalue1,key2%3Dvalue2`
* _set-based_ requirements: `?label-selector=key+in+%28value1%2Cvalue2%29%2Ckey2+notin+%28value3`

Kubernetes also currently supports two objects that use label selectors to keep track of their members, `service`s and `replicationcontroller`s:
- `service`: A [service](/docs/services.md) is a configuration unit for the proxies that run on every worker node. It is named and points to one or more pods.
- `replicationcontroller`: A [replication controller](/docs/replication-controller.md) ensures that a specified number of pod "replicas" are running at any one time.

* `service`: A [service](services.md) is a configuration unit for the proxies that run on every worker node. It is named and points to one or more pods.
* `replicationcontroller`: A [replication controller](replication-controller.md) ensures that a specified number of pod "replicas" are running at any one time.

The set of pods that a `service` targets is defined with a label selector. Similarly, the population of pods that a `replicationcontroller` is monitoring is also defined with a label selector. For management convenience and consistency, `services` and `replicationcontrollers` may themselves have labels and would generally carry the labels their corresponding pods have in common.
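
As a sketch of how a `service` picks its member pods with a label selector (the YAML below uses current v1-style fields; the names and ports are illustrative, not taken from this document):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: frontend                     # hypothetical service name
spec:
  # every pod carrying both of these labels becomes a member of the service
  selector:
    tier: frontend
    environment: production
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
```
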
docs/pods.md (21 changed lines)
@ -7,10 +7,11 @@ In Kubernetes, rather than individual application containers, _pods_ are the sma

A _pod_ (as in a pod of whales or pea pod) corresponds to a colocated group of applications running with a shared context. Within that context, the applications may also have individual cgroup isolations applied. A pod models an application-specific "logical host" in a containerized environment. It may contain one or more applications which are relatively tightly coupled -- in a pre-container world, they would have executed on the same physical or virtual host.

The context of the pod can be defined as the conjunction of several Linux namespaces:
- PID namespace (applications within the pod can see each other's processes)
- network namespace (applications within the pod have access to the same IP and port space)
- IPC namespace (applications within the pod can use SystemV IPC or POSIX message queues to communicate)
- UTS namespace (applications within the pod share a hostname)

* PID namespace (applications within the pod can see each other's processes)
* network namespace (applications within the pod have access to the same IP and port space)
* IPC namespace (applications within the pod can use SystemV IPC or POSIX message queues to communicate)
* UTS namespace (applications within the pod share a hostname)

Applications within a pod also have access to shared volumes, which are defined at the pod level and made available in each application's filesystem. Additionally, a pod may define top-level cgroup isolations which form an outer bound to any individual isolation applied to constituent applications.
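
A hedged sketch of that shared context: two tightly coupled containers in one pod sharing a pod-level volume, which is also the shape of the helper-program pattern discussed in the next hunk (all names and images below are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-helper          # hypothetical name
spec:
  volumes:
  - name: logs
    emptyDir: {}                     # pod-level volume shared by both containers
  containers:
  - name: app
    image: example/app:1.0           # hypothetical: writes its logs to /var/log/app
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-tailer
    image: example/log-tailer:1.0    # hypothetical helper: tails and ships the same files
    volumeMounts:
    - name: logs
      mountPath: /logs
      readOnly: true
```
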
@ -35,11 +36,12 @@ Pods also simplify application deployment and management by providing a higher-l

## Uses of pods

Pods can be used to host vertically integrated application stacks, but their primary motivation is to support co-located, co-managed helper programs, such as:
- content management systems, file and data loaders, local cache managers, etc.
- log and checkpoint backup, compression, rotation, snapshotting, etc.
- data change watchers, log tailers, logging and monitoring adapters, event publishers, etc.
- proxies, bridges, and adapters
- controllers, managers, configurators, and updaters

* content management systems, file and data loaders, local cache managers, etc.
* log and checkpoint backup, compression, rotation, snapshotting, etc.
* data change watchers, log tailers, logging and monitoring adapters, event publishers, etc.
* proxies, bridges, and adapters
* controllers, managers, configurators, and updaters

Individual pods are not intended to run multiple instances of the same application, in general.
@ -65,6 +67,7 @@ In general, users shouldn't need to create pods directly. They should almost alw

The use of collective APIs as the primary user-facing primitive is relatively common among cluster scheduling systems, including [Borg](https://research.google.com/pubs/pub43438.html), [Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html), [Aurora](http://aurora.apache.org/documentation/latest/configuration-reference/#job-schema), and [Tupperware](http://www.slideshare.net/Docker/aravindnarayanan-facebook140613153626phpapp02-37588997).

Pod is exposed as a primitive in order to facilitate:

* scheduler and controller pluggability
* support for pod-level operations without the need to "proxy" them via controller APIs
* decoupling of pod lifetime from controller lifetime, such as for bootstrapping
@ -2,9 +2,9 @@

## What is a _replication controller_?

A _replication controller_ ensures that a specified number of pod "replicas" are running at any one time. If there are too many, it will kill some. If there are too few, it will start more. Unlike in the case where a user directly created pods, a replication controller replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a replication controller even if your application requires only a single pod. Think of it similarly to a process supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. Replication controller delegates local container restarts to some agent on the node (e.g., Kubelet or Docker).
A _replication controller_ ensures that a specified number of pod "replicas" are running at any one time. If there are too many, it will kill some. If there are too few, it will start more. Unlike in the case where a user directly created pods, a replication controller replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a replication controller even if your application requires only a single pod. Think of it similarly to a process supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. A replication controller delegates local container restarts to some agent on the node (e.g., Kubelet or Docker).

As discussed in [life of a pod](pod-states.md), `replicationController` is *only* appropriate for pods with `RestartPolicy = Always` (Note: If `RestartPolicy` is not set, the default value is `Always`.). `ReplicationController` should refuse to instantiate any pod that has a different restart policy. As discussed in [issue #503](https://github.com/GoogleCloudPlatform/kubernetes/issues/503#issuecomment-50169443), we expect other types of controllers to be added to Kubernetes to handle other types of workloads, such as build/test and batch workloads, in the future.
As discussed in [life of a pod](pod-states.md), `ReplicationController` is *only* appropriate for pods with `RestartPolicy = Always` (Note: If `RestartPolicy` is not set, the default value is `Always`.). `ReplicationController` should refuse to instantiate any pod that has a different restart policy. As discussed in [issue #503](https://github.com/GoogleCloudPlatform/kubernetes/issues/503#issuecomment-50169443), we expect other types of controllers to be added to Kubernetes to handle other types of workloads, such as build/test and batch workloads, in the future.

A replication controller will never terminate on its own, but it isn't expected to be as long-lived as services. Services may be composed of pods controlled by multiple replication controllers, and it is expected that many replication controllers may be created and destroyed over the lifetime of a service. Both services themselves and their clients should remain oblivious to the replication controllers that maintain the pods of the services.
@ -12,7 +12,7 @@ A replication controller will never terminate on its own, but it isn't expected

### Pod template

A replication controller creates new pods from a template, which is currently inline in the `replicationcontroller` object, but which we plan to extract into its own resource [#170](https://github.com/GoogleCloudPlatform/kubernetes/issues/170).
A replication controller creates new pods from a template, which is currently inline in the `ReplicationController` object, but which we plan to extract into its own resource [#170](https://github.com/GoogleCloudPlatform/kubernetes/issues/170).

Rather than specifying the current desired state of all replicas, pod templates are like cookie cutters. Once a cookie has been cut, the cookie has no relationship to the cutter. There is no quantum entanglement. Subsequent changes to the template or even switching to a new template has no direct effect on the pods already created. Similarly, pods created by a replication controller may subsequently be updated directly. This is in deliberate contrast to pods, which do specify the current desired state of all containers belonging to the pod. This approach radically simplifies system semantics and increases the flexibility of the primitive, as demonstrated by the use cases explained below.
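
A minimal sketch of a `ReplicationController` with its inline pod template, written in current v1 YAML (the name, image, and replica count are made up for illustration):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend                     # hypothetical name
spec:
  replicas: 3                        # desired number of pod "replicas"
  selector:
    tier: frontend                   # label selector defining which pods are controlled
  template:                          # the inline pod template ("cookie cutter")
    metadata:
      labels:
        tier: frontend               # must match the selector above
    spec:
      containers:
      - name: frontend
        image: example/frontend:1.0  # hypothetical image
        ports:
        - containerPort: 8080
```
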
@ -20,21 +20,21 @@ Pods created by a replication controller are intended to be fungible and semanti

### Labels

The population of pods that a `replicationcontroller` is monitoring is defined with a [label selector](labels.md), which creates a loosely coupled relationship between the controller and the pods controlled, in contrast to pods, which are more tightly coupled. We deliberately chose not to represent the set of pods controlled using a fixed-length array of pod specifications, because our experience is that that approach increases complexity of management operations, for both clients and the system.
The population of pods that a `ReplicationController` is monitoring is defined with a [label selector](labels.md), which creates a loosely coupled relationship between the controller and the pods controlled, in contrast to pods, which are more tightly coupled. We deliberately chose not to represent the set of pods controlled using a fixed-length array of pod specifications, because our experience is that that approach increases complexity of management operations, for both clients and the system.

The replication controller should verify that the pods created from the specified template have labels that match its label selector. Though it isn't verified yet, you should also ensure that only one replication controller controls any given pod, by ensuring that the label selectors of replication controllers do not target overlapping sets.

Note that `replicationcontrollers` may themselves have labels and would generally carry the labels their corresponding pods have in common, but these labels do not affect the behavior of the replication controllers.
Note that `ReplicationController`s may themselves have labels and would generally carry the labels their corresponding pods have in common, but these labels do not affect the behavior of the replication controllers.

Pods may be removed from a replication controller's target set by changing their labels. This technique may be used to remove pods from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed).

Similarly, deleting a replication controller does not affect the pods it created. It's `replicas` field must first be set to 0 in order to delete the pods controlled. In the future, we may provide a feature to do this and the deletion in a single client operation.
Similarly, deleting a replication controller does not affect the pods it created. Its `replicas` field must first be set to 0 in order to delete the pods controlled. In the future, we may provide a feature to do this and the deletion in a single client operation.
## Responsibilities of the replication controller

The replication controller simply ensures that the desired number of pods matching its label selector are running and operational. Currently, only terminated pods are excluded from its count. In the future, [readiness](https://github.com/GoogleCloudPlatform/kubernetes/issues/620) and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.

The replication controller is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](https://github.com/GoogleCloudPlatform/kubernetes/issues/492)), which would change its `replicas` field. We will not add scheduling policies (e.g., [spreading](https://github.com/GoogleCloudPlatform/kubernetes/issues/367#issuecomment-48428019)) to replication controller. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](https://github.com/GoogleCloudPlatform/kubernetes/issues/170)).
The replication controller is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](https://github.com/GoogleCloudPlatform/kubernetes/issues/492)), which would change its `replicas` field. We will not add scheduling policies (e.g., [spreading](https://github.com/GoogleCloudPlatform/kubernetes/issues/367#issuecomment-48428019)) to the replication controller. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](https://github.com/GoogleCloudPlatform/kubernetes/issues/170)).

The replication controller is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, stop, scale, rolling-update) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing replication controllers, auto-scalers, services, scheduling policies, canaries, etc.
@ -42,15 +42,15 @@ The replication controller is intended to be a composable building-block primiti

### Rescheduling

As mentioned above, whether you have 1 pod you want to keep running, or 1000, replication controller will ensure that the specified number of pods exists, even in the event of node failure or pod termination (e.g., due to an action by another control agent).
As mentioned above, whether you have 1 pod you want to keep running, or 1000, a replication controller will ensure that the specified number of pods exists, even in the event of node failure or pod termination (e.g., due to an action by another control agent).

### Scaling

Replication controller makes it easy to scale the number of replicas up or down, either manually or by an auto-scaling control agent, by simply updating the `replicas` field.
The replication controller makes it easy to scale the number of replicas up or down, either manually or by an auto-scaling control agent, by simply updating the `replicas` field.

### Rolling updates

Replication controller is designed to facilitate rolling updates to a service by replacing pods one-by-one.
The replication controller is designed to facilitate rolling updates to a service by replacing pods one-by-one.

As explained in [#1353](https://github.com/GoogleCloudPlatform/kubernetes/issues/1353), the recommended approach is to create a new replication controller with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures.
@ -62,7 +62,7 @@ The two replication controllers would need to create pods with at least one diff

In addition to running multiple releases of an application while a rolling update is in progress, it's common to run multiple releases for an extended period of time, or even continuously, using multiple release tracks. The tracks would be differentiated by labels.

For instance, a service might target all pods with `tier in (frontend), environment in (prod)`. Now say you have 10 replicated pods that make up this tier. But you want to be able to 'canary' a new version of this component. You could set up a `replicationcontroller` with `replicas` set to 9 for the bulk of the replicas, with labels `tier=frontend, environment=prod, track=stable`, and another `replicationcontroller` with `replicas` set to 1 for the canary, with labels `tier=frontend, environment=prod, track=canary`. Now the service is covering both the canary and non-canary pods. But you can mess with the `replicationcontrollers` separately to test things out, monitor the results, etc.
For instance, a service might target all pods with `tier in (frontend), environment in (prod)`. Now say you have 10 replicated pods that make up this tier. But you want to be able to 'canary' a new version of this component. You could set up a `ReplicationController` with `replicas` set to 9 for the bulk of the replicas, with labels `tier=frontend, environment=prod, track=stable`, and another `ReplicationController` with `replicas` set to 1 for the canary, with labels `tier=frontend, environment=prod, track=canary`. Now the service is covering both the canary and non-canary pods. But you can mess with the `ReplicationController`s separately to test things out, monitor the results, etc.
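
An abbreviated, illustrative sketch of that canary setup; the controller names, images, and pod templates are hypothetical, and the covering service would select only `tier: frontend, environment: prod` (no `track` label):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-stable              # hypothetical name
spec:
  replicas: 9                        # bulk of the replicas
  selector:
    tier: frontend
    environment: prod
    track: stable
  template:
    metadata:
      labels:
        tier: frontend
        environment: prod
        track: stable
    spec:
      containers:
      - name: frontend
        image: example/frontend:1.0
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-canary              # hypothetical name
spec:
  replicas: 1                        # the canary
  selector:
    tier: frontend
    environment: prod
    track: canary
  template:
    metadata:
      labels:
        tier: frontend
        environment: prod
        track: canary
    spec:
      containers:
      - name: frontend
        image: example/frontend:2.0-rc1
```
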
@ -6,7 +6,7 @@ Kubernetes [`Pods`](pods.md) are mortal. They are born and they die, and they

are not resurrected. [`ReplicationControllers`](replication-controller.md) in
particular create and destroy `Pods` dynamically (e.g. when scaling up or down
or when doing rolling updates). While each `Pod` gets its own IP address, even
those IP addresses can not be relied upon to be stable over time. This leads to
those IP addresses cannot be relied upon to be stable over time. This leads to
a problem: if some set of `Pods` (let's call them backends) provides
functionality to other `Pods` (let's call them frontends) inside the Kubernetes
cluster, how do those frontends find out and keep track of which backends are
@ -83,12 +83,13 @@ is `TCP`.

Services generally abstract access to Kubernetes `Pods`, but they can also
abstract other kinds of backends. For example:
- you want to have an external database cluster in production, but in test
  you use your own databases
- you want to point your service to a service in another
  [`Namespace`](namespaces.md) or on another cluster
- you are migrating your workload to Kubernetes and some of your backends run
  outside of Kubernetes

* You want to have an external database cluster in production, but in test
  you use your own databases.
* You want to point your service to a service in another
  [`Namespace`](namespaces.md) or on another cluster.
* You are migrating your workload to Kubernetes and some of your backends run
  outside of Kubernetes.

In any of these scenarios you can define a service without a selector:
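
The document's own example continues beyond this hunk; purely as an illustration, a selector-less `Service` can be paired with a manually created `Endpoints` object along these lines (the names, ports, and address are hypothetical):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: my-external-db               # hypothetical name
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
---
# With no selector, endpoints are not created automatically;
# they can instead be pointed at an external backend by hand.
kind: Endpoints
apiVersion: v1
metadata:
  name: my-external-db               # must match the Service name
subsets:
- addresses:
  - ip: 10.1.2.3                     # hypothetical external address
  ports:
  - port: 9376
```
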
@ -302,10 +303,11 @@ address. Kubernetes supports two ways of doing this: `NodePort`s and

Every `Service` has a `Type` field which defines how the `Service` can be
accessed. Valid values for this field are:
- ClusterIP: use a cluster-internal IP (portal) only - this is the default
- NodePort: use a cluster IP, but also expose the service on a port on each

* `ClusterIP`: use a cluster-internal IP (portal) only - this is the default
* `NodePort`: use a cluster IP, but also expose the service on a port on each
  node of the cluster (the same port on each)
- LoadBalancer: use a ClusterIP and a NodePort, but also ask the cloud
* `LoadBalancer`: use a ClusterIP and a NodePort, but also ask the cloud
  provider for a load balancer which forwards to the `Service`

Note that while `NodePort`s can be TCP or UDP, `LoadBalancer`s only support TCP
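
A hedged sketch of a `Service` exposed on a node port, using current v1 fields (the name, selector, and port numbers are arbitrary illustrations):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: frontend                     # hypothetical name
spec:
  type: NodePort                     # cluster IP plus a port opened on every node
  selector:
    tier: frontend
  ports:
  - protocol: TCP
    port: 80                         # cluster-internal port
    targetPort: 8080                 # port on the backend pods
    nodePort: 30080                  # same port exposed on each node (optional; auto-assigned if omitted)
```
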
@ -389,7 +391,7 @@ but the current API requires it.

## Future work

In the future we envision that the proxy policy can become more nuanced than
simple round robin balancing, for example master elected or sharded. We also
simple round robin balancing, for example master-elected or sharded. We also
envision that some `Services` will have "real" load balancers, in which case the
VIP will simply transport the packets there.
@ -3,7 +3,7 @@

The user guide is intended for anyone who wants to run programs and services
on an existing Kubernetes cluster. Setup and administration of a
Kubernetes cluster is described in the [Cluster Admin Guide](cluster-admin-guide.md).
The developer guide describes is for anyone wanting to either write code which directly accesses the
The developer guide is for anyone wanting to either write code which directly accesses the
kubernetes API, or to contribute directly to the kubernetes project.

## Primary concepts