automated link fixes
@@ -27,7 +27,7 @@ This example also has a few code and configuration files needed. To avoid typin
 This is a somewhat long tutorial. If you want to jump straight to the "do it now" commands, please see the [tl; dr](#tl-dr) at the end.

 ### Simple Single Pod Cassandra Node

-In Kubernetes, the atomic unit of an application is a [_Pod_](../../docs/pods.md). A Pod is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes.
+In Kubernetes, the atomic unit of an application is a [_Pod_](../../docs/user-guide/pods.md). A Pod is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes.

 In this simple case, we define a single container running Cassandra for our pod:

 ```yaml
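The pod manifest itself falls outside this hunk. As a minimal sketch, a single-container Cassandra pod of this kind might look roughly like the following; the image name, ports, and heap values here are assumptions for illustration, not the example's actual manifest (only `MAX_HEAP_SIZE` is attested by the hunk header above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cassandra
  labels:
    name: cassandra
spec:
  containers:
    - name: cassandra
      # Image name and tag are assumptions for illustration only.
      image: gcr.io/google_containers/cassandra:v5
      env:
        # JVM sizing parameters passed to Cassandra; values are placeholders.
        - name: MAX_HEAP_SIZE
          value: "512M"
        - name: HEAP_NEWSIZE
          value: "100M"
      ports:
        - name: cql
          containerPort: 9042
        - name: thrift
          containerPort: 9160
```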
@@ -75,7 +75,7 @@ You may also note that we are setting some Cassandra parameters (```MAX_HEAP_SIZ
 In theory we could create a single Cassandra pod right now, but since `KubernetesSeedProvider` needs to learn what nodes are in the Cassandra deployment, we need to create a service first.

 ### Cassandra Service

-In Kubernetes a _[Service](../../docs/services.md)_ describes a set of Pods that perform the same task. For example, the set of Pods in a Cassandra cluster can be a Kubernetes Service, or even just the single Pod we created above. An important use for a Service is to create a load balancer which distributes traffic across members of the set of Pods. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods (or the single Pod we've already created) available via the Kubernetes API. This is the way that we initially use Services with Cassandra.
+In Kubernetes a _[Service](../../docs/user-guide/services.md)_ describes a set of Pods that perform the same task. For example, the set of Pods in a Cassandra cluster can be a Kubernetes Service, or even just the single Pod we created above. An important use for a Service is to create a load balancer which distributes traffic across members of the set of Pods. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods (or the single Pod we've already created) available via the Kubernetes API. This is the way that we initially use Services with Cassandra.

 Here is the service description:

 ```yaml
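The service description that follows is elided from the hunk. In essence it is a label selector over the Cassandra pods; a minimal sketch, with the port and label values assumed rather than taken from the example file:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cassandra
  labels:
    name: cassandra
spec:
  ports:
    # Port values are assumptions for illustration (9042 is the CQL port).
    - port: 9042
      targetPort: 9042
  # Matches every pod carrying the label name=cassandra.
  selector:
    name: cassandra
```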
@@ -145,7 +145,7 @@ subsets:
 ### Adding replicated nodes

 Of course, a single node cluster isn't particularly interesting. The real power of Kubernetes and Cassandra lies in easily building a replicated, scalable Cassandra cluster.

-In Kubernetes a _[Replication Controller](../../docs/replication-controller.md)_ is responsible for replicating sets of identical pods. Like a _Service_ it has a selector query which identifies the members of its set. Unlike a _Service_ it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with its desired state.
+In Kubernetes a _[Replication Controller](../../docs/user-guide/replication-controller.md)_ is responsible for replicating sets of identical pods. Like a _Service_ it has a selector query which identifies the members of its set. Unlike a _Service_ it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with its desired state.

 Replication controllers will "adopt" existing pods that match their selector query, so let's create a replication controller with a single replica to adopt our existing Cassandra pod.
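A minimal sketch of such a replication controller, with one replica so it adopts the existing Cassandra pod rather than creating more; the image and label names are assumptions carried over from the pod sketch earlier, not the example's actual config:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: cassandra
spec:
  # One replica: the controller adopts the existing pod instead of adding pods.
  replicas: 1
  selector:
    name: cassandra
  template:
    metadata:
      labels:
        name: cassandra
    spec:
      containers:
        - name: cassandra
          # Image is an assumption for illustration only.
          image: gcr.io/google_containers/cassandra:v5
          ports:
            - containerPort: 9042
```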
@@ -27,7 +27,7 @@ $ hack/dev-build-and-up.sh

 ### Step One: Create two namespaces

-We'll see how cluster DNS works across multiple [namespaces](../../docs/namespaces.md). First, we need to create two namespaces:
+We'll see how cluster DNS works across multiple [namespaces](../../docs/user-guide/namespaces.md). First, we need to create two namespaces:

 ```shell
 $ kubectl create -f examples/cluster-dns/namespace-dev.yaml
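The `namespace-dev.yaml` file itself is not shown in this hunk. A namespace manifest is only a few lines; a sketch along these lines, with the name inferred from the file name and the `development` DNS name used later in this example (the actual file may differ):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  # Name assumed from the file name and the development.cluster.local DNS name.
  name: development
  labels:
    name: development
```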
@@ -55,7 +55,7 @@ You can view your cluster name and user name in kubernetes config at ~/.kube/con

 ### Step Two: Create backend replication controller in each namespace

-Use the file [`examples/cluster-dns/dns-backend-rc.yaml`](dns-backend-rc.yaml) to create a backend server [replication controller](../../docs/replication-controller.md) in each namespace.
+Use the file [`examples/cluster-dns/dns-backend-rc.yaml`](dns-backend-rc.yaml) to create a backend server [replication controller](../../docs/user-guide/replication-controller.md) in each namespace.

 ```shell
 $ kubectl config use-context dev
@@ -83,7 +83,7 @@ dns-backend dns-backend ddysher/dns-backend name=dns-backend 1

 ### Step Three: Create backend service

 Use the file [`examples/cluster-dns/dns-backend-service.yaml`](dns-backend-service.yaml) to create
-a [service](../../docs/services.md) for the backend server.
+a [service](../../docs/user-guide/services.md) for the backend server.

 ```shell
 $ kubectl config use-context dev
@@ -110,7 +110,7 @@ dns-backend <none> name=dns-backend 10.0.35.246 8000/TCP

 ### Step Four: Create client pod in one namespace

-Use the file [`examples/cluster-dns/dns-frontend-pod.yaml`](dns-frontend-pod.yaml) to create a client [pod](../../docs/pods.md) in the dev namespace. The client pod will make a connection to the backend and exit. Specifically, it tries to connect to address `http://dns-backend.development.cluster.local:8000`.
+Use the file [`examples/cluster-dns/dns-frontend-pod.yaml`](dns-frontend-pod.yaml) to create a client [pod](../../docs/user-guide/pods.md) in the dev namespace. The client pod will make a connection to the backend and exit. Specifically, it tries to connect to address `http://dns-backend.development.cluster.local:8000`.

 ```shell
 $ kubectl config use-context dev
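The frontend pod definition is elided from this hunk; the essential point is that the client addresses the backend service by its cross-namespace DNS name, of the form `<service>.<namespace>.<cluster-domain>`. A minimal sketch of a throwaway client pod exercising that name, where the busybox image and `wget` command are stand-ins assumed here rather than the example's real client:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-frontend
  labels:
    name: dns-frontend
spec:
  containers:
    - name: dns-frontend
      # busybox wget stands in for the example's actual client; assumption only.
      image: busybox
      command:
        - wget
        - -O
        - "-"
        - http://dns-backend.development.cluster.local:8000
  # The client connects once and exits, so do not restart it.
  restartPolicy: Never
```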
@@ -17,14 +17,14 @@ certainly want the docs that go with that version.</h1>
 This directory contains the source for a Docker image that creates an instance
 of [Elasticsearch](https://www.elastic.co/products/elasticsearch) 1.5.2 which can
 be used to automatically form clusters when used
-with [replication controllers](../../docs/replication-controller.md). This will not work with the library Elasticsearch image
+with [replication controllers](../../docs/user-guide/replication-controller.md). This will not work with the library Elasticsearch image
 because multicast discovery will not find the other pod IPs needed to form a cluster. This
-image detects other Elasticsearch [pods](../../docs/pods.md) running in a specified [namespace](../../docs/namespaces.md) with a given
+image detects other Elasticsearch [pods](../../docs/user-guide/pods.md) running in a specified [namespace](../../docs/user-guide/namespaces.md) with a given
 label selector. The detected instances are used to form a list of peer hosts which
 are used as part of the unicast discovery mechanism for Elasticsearch. The detection
 of the peer nodes is done by a program which communicates with the Kubernetes API
 server to get a list of matching Elasticsearch pods. To enable authenticated
-communication this image needs a [secret](../../docs/secrets.md) to be mounted at `/etc/apiserver-secret`
+communication this image needs a [secret](../../docs/user-guide/secrets.md) to be mounted at `/etc/apiserver-secret`
 with the basic authentication username and password.

 Here is an example replication controller specification that creates 4 instances of Elasticsearch which is in the file
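As a minimal sketch of the kind of secret the image expects at `/etc/apiserver-secret`, where the key names and the base64-encoded placeholder credentials are assumptions (the example's actual secret may use different keys):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: apiserver-secret
data:
  # base64("admin") and base64("changeme"); placeholder credentials only.
  username: YWRtaW4=
  password: Y2hhbmdlbWU=
```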
@@ -127,7 +127,7 @@ $ kubectl create -f music-rc.yaml --namespace=mytunes
 replicationcontrollers/music-db

 ```
-It's also useful to have a [service](../../docs/services.md) with a load balancer for accessing the Elasticsearch
+It's also useful to have a [service](../../docs/user-guide/services.md) with a load balancer for accessing the Elasticsearch
 cluster, which can be found in the file [music-service.yaml](music-service.yaml).
 ```
 apiVersion: v1
@@ -37,7 +37,7 @@ This example assumes that you have a working cluster. See the [Getting Started G

 ### Step One: Create the Redis master pod<a id="step-one"></a>

-Use the `examples/guestbook-go/redis-master-controller.json` file to create a [replication controller](../../docs/replication-controller.md) and Redis master [pod](../../docs/pods.md). The pod runs a Redis key-value server in a container. Using a replication controller is the preferred way to launch long-running pods, even for 1 replica, so that the pod benefits from the self-healing mechanism in Kubernetes (keeps the pods alive).
+Use the `examples/guestbook-go/redis-master-controller.json` file to create a [replication controller](../../docs/user-guide/replication-controller.md) and Redis master [pod](../../docs/user-guide/pods.md). The pod runs a Redis key-value server in a container. Using a replication controller is the preferred way to launch long-running pods, even for 1 replica, so that the pod benefits from the self-healing mechanism in Kubernetes (keeps the pods alive).

 1. Use the [redis-master-controller.json](redis-master-controller.json) file to create the Redis master replication controller in your Kubernetes cluster by running the `kubectl create -f` *`filename`* command:
 ```shell
@@ -74,7 +74,7 @@ Use the `examples/guestbook-go/redis-master-controller.json` file to create a [r
 Note: The initial `docker pull` can take a few minutes, depending on network conditions.

 ### Step Two: Create the Redis master service <a id="step-two"></a>
-A Kubernetes '[service](../../docs/services.md)' is a named load balancer that proxies traffic to one or more containers. The services in a Kubernetes cluster are discoverable inside other containers via environment variables or DNS.
+A Kubernetes '[service](../../docs/user-guide/services.md)' is a named load balancer that proxies traffic to one or more containers. The services in a Kubernetes cluster are discoverable inside other containers via environment variables or DNS.

 Services find the containers to load balance based on pod labels. The pod that you created in Step One has the labels `app=redis` and `role=master`. The selector field of the service determines which pods will receive the traffic sent to the service.
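A minimal sketch of a service whose selector targets those two labels; the port values are assumptions (6379 is the conventional Redis port), and the example's actual `redis-master-service.json` may differ:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  ports:
    - port: 6379
      targetPort: 6379
  # Only pods labeled both app=redis and role=master receive traffic.
  selector:
    app: redis
    role: master
```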
@@ -53,9 +53,9 @@ This example requires a running Kubernetes cluster. See the [Getting Started gu

 **Note**: The redis master in this example is *not* highly available. Making it highly available would be an interesting, but intricate, exercise: redis doesn't actually support multi-master deployments at this point in time, so high availability would be a somewhat tricky thing to implement, and might involve periodic serialization to disk, and so on.

-To start the redis master, use the file `examples/guestbook/redis-master-controller.yaml`, which describes a single [pod](../../docs/pods.md) running a redis key-value server in a container.
+To start the redis master, use the file `examples/guestbook/redis-master-controller.yaml`, which describes a single [pod](../../docs/user-guide/pods.md) running a redis key-value server in a container.

-Although we have a single instance of our redis master, we are using a [replication controller](../../docs/replication-controller.md) to enforce that exactly one pod keeps running. E.g., if the node were to go down, the replication controller would ensure that the redis master gets restarted on a healthy node. (In our simplified example, this could result in data loss.)
+Although we have a single instance of our redis master, we are using a [replication controller](../../docs/user-guide/replication-controller.md) to enforce that exactly one pod keeps running. E.g., if the node were to go down, the replication controller would ensure that the redis master gets restarted on a healthy node. (In our simplified example, this could result in data loss.)

 Here is `redis-master-controller.yaml`:
@@ -173,7 +173,7 @@ $ docker logs <container_id>

 ### Step Two: Fire up the redis master service

-A Kubernetes [service](../../docs/services.md) is a named load balancer that proxies traffic to one or more containers. This is done using the [labels](../../docs/labels.md) metadata that we defined in the `redis-master` pod above. As mentioned, we have only one redis master, but we nevertheless want to create a service for it. Why? Because it gives us a deterministic way to route to the single master using an elastic IP.
+A Kubernetes [service](../../docs/user-guide/services.md) is a named load balancer that proxies traffic to one or more containers. This is done using the [labels](../../docs/user-guide/labels.md) metadata that we defined in the `redis-master` pod above. As mentioned, we have only one redis master, but we nevertheless want to create a service for it. Why? Because it gives us a deterministic way to route to the single master using an elastic IP.

 Services find the pods to load balance based on the pods' labels.
 The pod that you created in [Step One](#step-one-start-up-the-redis-master) has the label `name=redis-master`.
@@ -34,13 +34,13 @@ Source is freely available at:
 * Docker Trusted Build - https://quay.io/repository/pires/hazelcast-kubernetes

 ### Simple Single Pod Hazelcast Node
-In Kubernetes, the atomic unit of an application is a [_Pod_](../../docs/pods.md). A Pod is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes.
+In Kubernetes, the atomic unit of an application is a [_Pod_](../../docs/user-guide/pods.md). A Pod is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes.

 In this case, we shall not run a single Hazelcast pod, because the discovery mechanism now relies on a service definition.

 ### Adding a Hazelcast Service
-In Kubernetes a _[Service](../../docs/services.md)_ describes a set of Pods that perform the same task. For example, the set of nodes in a Hazelcast cluster. An important use for a Service is to create a load balancer which distributes traffic across members of the set. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods available via the Kubernetes API. This is actually how our discovery mechanism works, by relying on the service to discover other Hazelcast pods.
+In Kubernetes a _[Service](../../docs/user-guide/services.md)_ describes a set of Pods that perform the same task. For example, the set of nodes in a Hazelcast cluster. An important use for a Service is to create a load balancer which distributes traffic across members of the set. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods available via the Kubernetes API. This is actually how our discovery mechanism works, by relying on the service to discover other Hazelcast pods.

 Here is the service description:
 ```yaml
@@ -67,7 +67,7 @@ $ kubectl create -f hazelcast-service.yaml
 ### Adding replicated nodes
 The real power of Kubernetes and Hazelcast lies in easily building a replicated, resizable Hazelcast cluster.

-In Kubernetes a _[Replication Controller](../../docs/replication-controller.md)_ is responsible for replicating sets of identical pods. Like a _Service_ it has a selector query which identifies the members of its set. Unlike a _Service_ it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with its desired state.
+In Kubernetes a _[Replication Controller](../../docs/user-guide/replication-controller.md)_ is responsible for replicating sets of identical pods. Like a _Service_ it has a selector query which identifies the members of its set. Unlike a _Service_ it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with its desired state.

 Replication Controllers will "adopt" existing pods that match their selector query, so let's create a Replication Controller with a single replica to adopt our existing Hazelcast Pod.
@@ -14,11 +14,11 @@ certainly want the docs that go with that version.</h1>
 <!-- END MUNGE: UNVERSIONED_WARNING -->
 ## Kubernetes Namespaces

-Kubernetes _[namespaces](../../docs/namespaces.md)_ help different projects, teams, or customers to share a Kubernetes cluster.
+Kubernetes _[namespaces](../../docs/user-guide/namespaces.md)_ help different projects, teams, or customers to share a Kubernetes cluster.

 They do this by providing the following:

-1. A scope for [Names](../../docs/identifiers.md).
+1. A scope for [Names](../../docs/user-guide/identifiers.md).
 2. A mechanism to attach authorization and policy to a subsection of the cluster.

 Use of multiple namespaces is optional.
@@ -30,7 +30,7 @@ This example demonstrates how to use Kubernetes namespaces to subdivide your clu
 This example assumes the following:

 1. You have an [existing Kubernetes cluster](../../docs/getting-started-guides/).
-2. You have a basic understanding of Kubernetes _[pods](../../docs/pods.md)_, _[services](../../docs/services.md)_, and _[replication controllers](../../docs/replication-controller.md)_.
+2. You have a basic understanding of Kubernetes _[pods](../../docs/user-guide/pods.md)_, _[services](../../docs/user-guide/services.md)_, and _[replication controllers](../../docs/user-guide/replication-controller.md)_.

 ### Step One: Understand the default namespace
@@ -25,7 +25,7 @@ Meteor uses MongoDB, and we will use the `GCEPersistentDisk` type of
 volume for persistent storage. Therefore, this example is only
 applicable to [Google Compute
 Engine](https://cloud.google.com/compute/). Take a look at the
-[volumes documentation](../../docs/volumes.md) for other options.
+[volumes documentation](../../docs/user-guide/volumes.md) for other options.

 First, if you have not already done so:
@@ -118,7 +118,7 @@ and make sure the `image:` points to the container you just pushed to
 the Docker Hub or GCR.

 We will need to provide MongoDB a persistent Kubernetes volume to
-store its data. See the [volumes documentation](../../docs/volumes.md) for
+store its data. See the [volumes documentation](../../docs/user-guide/volumes.md) for
 options. We're going to use Google Compute Engine persistent
 disks. Create the MongoDB disk by running:
 ```
@@ -169,7 +169,7 @@ Here we can see the MongoDB host and port information being passed
 into the Meteor app. The `MONGO_SERVICE...` environment variables are
 set by Kubernetes, and point to the service named `mongo` specified in
 [`mongo-service.json`](mongo-service.json). See the [environment
-documentation](../../docs/container-environment.md) for more details.
+documentation](../../docs/user-guide/container-environment.md) for more details.

 As you may know, Meteor uses long-lasting connections, and requires
 _sticky sessions_. With Kubernetes you can scale out your app easily
@@ -177,7 +177,7 @@ with session affinity. The
 [`meteor-service.json`](meteor-service.json) file contains
 `"sessionAffinity": "ClientIP"`, which provides this for us. See the
 [service
-documentation](../../docs/services.md#virtual-ips-and-service-proxies) for
+documentation](../../docs/user-guide/services.md#virtual-ips-and-service-proxies) for
 more information.

 As mentioned above, the mongo container uses a volume which is mapped
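For reference, a minimal sketch of where `sessionAffinity` sits in a service spec, shown as YAML rather than the example's JSON; the names and ports are assumptions, only the `sessionAffinity: ClientIP` field is taken from the text above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: meteor
spec:
  # Route each client IP to the same backend pod, giving sticky sessions.
  sessionAffinity: ClientIP
  ports:
    - port: 80
      targetPort: 8080
  selector:
    name: meteor
```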
@@ -14,17 +14,17 @@ certainly want the docs that go with that version.</h1>
 <!-- END MUNGE: UNVERSIONED_WARNING -->
 # Persistent Installation of MySQL and WordPress on Kubernetes

-This example describes how to run a persistent installation of [Wordpress](https://wordpress.org/) using the [volumes](../../docs/volumes.md) feature of Kubernetes, and [Google Compute Engine](https://cloud.google.com/compute/docs/disks) [persistent disks](../../docs/volumes.md#gcepersistentdisk).
+This example describes how to run a persistent installation of [Wordpress](https://wordpress.org/) using the [volumes](../../docs/user-guide/volumes.md) feature of Kubernetes, and [Google Compute Engine](https://cloud.google.com/compute/docs/disks) [persistent disks](../../docs/user-guide/volumes.md#gcepersistentdisk).

 We'll use the [mysql](https://registry.hub.docker.com/_/mysql/) and [wordpress](https://registry.hub.docker.com/_/wordpress/) official [Docker](https://www.docker.com/) images for this installation. (The wordpress image includes an Apache server.)

-We'll create two Kubernetes [pods](../../docs/pods.md) to run mysql and wordpress, both with associated persistent disks, then set up a Kubernetes [service](../../docs/services.md) to front each pod.
+We'll create two Kubernetes [pods](../../docs/user-guide/pods.md) to run mysql and wordpress, both with associated persistent disks, then set up a Kubernetes [service](../../docs/user-guide/services.md) to front each pod.

 This example demonstrates several useful things, including: how to set up and use persistent disks with Kubernetes pods; how to define Kubernetes services to leverage docker-links-compatible service environment variables; and use of an external load balancer to expose the wordpress service externally and make it transparent to the user if the wordpress pod moves to a different cluster node.

 ## Get started on Google Compute Engine (GCE)

-Because we're using the `GCEPersistentDisk` type of volume for persistent storage, this example is only applicable to [Google Compute Engine](https://cloud.google.com/compute/). Take a look at the [volumes documentation](../../docs/volumes.md) for other options.
+Because we're using the `GCEPersistentDisk` type of volume for persistent storage, this example is only applicable to [Google Compute Engine](https://cloud.google.com/compute/). Take a look at the [volumes documentation](../../docs/user-guide/volumes.md) for other options.

 First, if you have not already done so:
@@ -48,7 +48,7 @@ Please see the [GCE getting started guide](../../docs/getting-started-guides/gce

 ## Create two persistent disks

-For this WordPress installation, we're going to configure our Kubernetes [pods](../../docs/pods.md) to use [persistent disks](https://cloud.google.com/compute/docs/disks). This means that we can preserve installation state across pod shutdown and re-startup.
+For this WordPress installation, we're going to configure our Kubernetes [pods](../../docs/user-guide/pods.md) to use [persistent disks](https://cloud.google.com/compute/docs/disks). This means that we can preserve installation state across pod shutdown and re-startup.

 You will need to create the disks in the same [GCE zone](https://cloud.google.com/compute/docs/zones) as the Kubernetes cluster. The default setup script will create the cluster in the `us-central1-b` zone, as seen in the [config-default.sh](../../cluster/gce/config-default.sh) file. Replace `$ZONE` below with the appropriate zone.
@@ -137,8 +137,8 @@ If you want to do deeper troubleshooting, e.g. if it seems a container is not st

 ### Start the Mysql service

-We'll define and start a [service](../../docs/services.md) that lets other pods access the mysql database on a known port and host.
-We will specifically name the service `mysql`. This will let us leverage the support for [Docker-links-compatible](../../docs/services.md#how-do-they-work) service environment variables when we set up the wordpress pod. The wordpress Docker image expects to be linked to a mysql container named `mysql`, as you can see in the "How to use this image" section on the wordpress docker hub [page](https://registry.hub.docker.com/_/wordpress/).
+We'll define and start a [service](../../docs/user-guide/services.md) that lets other pods access the mysql database on a known port and host.
+We will specifically name the service `mysql`. This will let us leverage the support for [Docker-links-compatible](../../docs/user-guide/services.md#how-do-they-work) service environment variables when we set up the wordpress pod. The wordpress Docker image expects to be linked to a mysql container named `mysql`, as you can see in the "How to use this image" section on the wordpress docker hub [page](https://registry.hub.docker.com/_/wordpress/).

 So if we label our Kubernetes mysql service `mysql`, the wordpress pod will be able to use the Docker-links-compatible environment variables, defined by Kubernetes, to connect to the database.
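A minimal sketch of such a service follows; the selector label is an assumption, but the key point is the `mysql` name, from which Kubernetes derives Docker-links-compatible variables such as `MYSQL_SERVICE_HOST` and `MYSQL_SERVICE_PORT` in other pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  # The name matters: it becomes the MYSQL_* env var prefix in other pods.
  name: mysql
spec:
  ports:
    - port: 3306
  # Label value is an assumption; it must match the mysql pod's labels.
  selector:
    name: mysql
```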
@@ -20,7 +20,7 @@ The example combines a web frontend and an external service that provides MySQL

 ### Step Zero: Prerequisites

-This example assumes that you have a basic understanding of Kubernetes [services](../../docs/services.md) and that you have forked the repository and [turned up a Kubernetes cluster](../../docs/getting-started-guides/):
+This example assumes that you have a basic understanding of Kubernetes [services](../../docs/user-guide/services.md) and that you have forked the repository and [turned up a Kubernetes cluster](../../docs/getting-started-guides/):

 ```shell
 $ cd kubernetes
@@ -35,7 +35,7 @@ In the remaining part of this example we will assume that your instance is named

 ### Step Two: Turn up the phabricator

-To start the Phabricator server, use the file [`examples/phabricator/phabricator-controller.json`](phabricator-controller.json), which describes a [replication controller](../../docs/replication-controller.md) with a single [pod](../../docs/pods.md) running an Apache server with the Phabricator PHP source:
+To start the Phabricator server, use the file [`examples/phabricator/phabricator-controller.json`](phabricator-controller.json), which describes a [replication controller](../../docs/user-guide/replication-controller.md) with a single [pod](../../docs/user-guide/pods.md) running an Apache server with the Phabricator PHP source:

 ```js
 {
@@ -172,7 +172,7 @@ $ kubectl create -f examples/phabricator/authenticator-controller.json

 ### Step Four: Turn up the phabricator service

-A Kubernetes 'service' is a named load balancer that proxies traffic to one or more containers. The services in a Kubernetes cluster are discoverable inside other containers via *environment variables*. Services find the containers to load balance based on pod labels. These environment variables are typically referenced in application code, shell scripts, or other places where one node needs to talk to another in a distributed system. You should catch up on [kubernetes services](../../docs/services.md) before proceeding.
+A Kubernetes 'service' is a named load balancer that proxies traffic to one or more containers. The services in a Kubernetes cluster are discoverable inside other containers via *environment variables*. Services find the containers to load balance based on pod labels. These environment variables are typically referenced in application code, shell scripts, or other places where one node needs to talk to another in a distributed system. You should catch up on [kubernetes services](../../docs/user-guide/services.md) before proceeding.

 The pod that you created in Step One has the label `name=phabricator`. The selector field of the service determines which pods will receive the traffic sent to the service. Since we are setting up a service for an external application, we also need to request an external static IP address (otherwise it will be assigned dynamically):
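As a minimal sketch of an externally reachable service selecting those pods; the `type: LoadBalancer` field and port values are assumptions for illustration and not necessarily how the example's JSON requests its external IP:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: phabricator
spec:
  # Ask the cloud provider for an external load balancer; assumption only.
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
  # Matches the pod labeled name=phabricator from Step One.
  selector:
    name: phabricator
```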
@@ -23,7 +23,7 @@ This example assumes that you have a Kubernetes cluster installed and running, a
 This is a somewhat long tutorial. If you want to jump straight to the "do it now" commands, please see the [tl; dr](#tl-dr) at the end.

 ### Turning up an initial master/sentinel pod

-A [_Pod_](../../docs/pods.md) is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes.
+A [_Pod_](../../docs/user-guide/pods.md) is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes.

 We will use the shared network namespace to bootstrap our Redis cluster. In particular, the very first sentinel needs to know how to find the master (subsequent sentinels just ask the first sentinel). Because all containers in a Pod share a network namespace, the sentinel can simply look at ```$(hostname -i):6379```.
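A minimal sketch of such a bootstrap pod, with the master and a sentinel sharing one network namespace; the image names and the `SENTINEL` env flag are assumptions, and the example's actual `redis-master.yaml` will differ:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-master
  labels:
    name: redis-master
    redis-sentinel: "true"
spec:
  containers:
    # Both containers share the pod's network namespace, so the
    # sentinel can reach the master at $(hostname -i):6379.
    - name: master
      image: redis  # image is an assumption for illustration
      ports:
        - containerPort: 6379
    - name: sentinel
      image: redis  # assumption; the example uses its own image
      env:
        # How the image is told to run as a sentinel is assumed here.
        - name: SENTINEL
          value: "true"
      ports:
        - containerPort: 26379
```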
@@ -36,7 +36,7 @@ kubectl create -f examples/redis/redis-master.yaml
 ```

 ### Turning up a sentinel service
-In Kubernetes a [_Service_](../../docs/services.md) describes a set of Pods that perform the same task. For example, the set of nodes in a Cassandra cluster, or even the single node we created above. An important use for a Service is to create a load balancer which distributes traffic across members of the set. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods (or the single Pod we've already created) available via the Kubernetes API.
+In Kubernetes a [_Service_](../../docs/user-guide/services.md) describes a set of Pods that perform the same task. For example, the set of nodes in a Cassandra cluster, or even the single node we created above. An important use for a Service is to create a load balancer which distributes traffic across members of the set. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods (or the single Pod we've already created) available via the Kubernetes API.

 In Redis, we will use a Kubernetes Service to provide discoverable endpoints for the Redis sentinels in the cluster. From the sentinels, Redis clients can find the master, and then the slaves and other relevant info for the cluster. This enables new members to join the cluster when failures occur.
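A minimal sketch of the sentinel service; the label and port are assumptions, though 26379 is the conventional Redis sentinel port:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-sentinel
  labels:
    name: sentinel
spec:
  ports:
    # 26379 is the conventional Redis sentinel port.
    - port: 26379
      targetPort: 26379
  # Selects every pod carrying the sentinel label, however it was created.
  selector:
    redis-sentinel: "true"
```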
@@ -50,7 +50,7 @@ kubectl create -f examples/redis/redis-sentinel-service.yaml
 ### Turning up replicated redis servers
 So far, what we have done is pretty manual, and not very fault-tolerant. If the ```redis-master``` pod that we previously created is destroyed for some reason (e.g. a machine dying) our Redis service goes away with it.

-In Kubernetes a [_Replication Controller_](../../docs/replication-controller.md) is responsible for replicating sets of identical pods. Like a _Service_ it has a selector query which identifies the members of its set. Unlike a _Service_ it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with its desired state.
+In Kubernetes a [_Replication Controller_](../../docs/user-guide/replication-controller.md) is responsible for replicating sets of identical pods. Like a _Service_ it has a selector query which identifies the members of its set. Unlike a _Service_ it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with its desired state.

 Replication Controllers will "adopt" existing pods that match their selector query, so let's create a Replication Controller with a single replica to adopt our existing Redis server. Here is the replication controller config: [redis-controller.yaml](redis-controller.yaml)
@@ -134,7 +134,7 @@ since the ui is not stateless when playing with Web Admin UI will cause `Connect
 **BTW**

 * `gen_pod.sh` is used to generate pod templates for my local cluster;
-the generated pods use `nodeSelector` to force k8s to schedule containers onto my designated nodes, since I need to access persistent data on my host dirs. Note that one needs to label the node before `nodeSelector` can work; see this [tutorial](../node-selection/)
+the generated pods use `nodeSelector` to force k8s to schedule containers onto my designated nodes, since I need to access persistent data on my host dirs. Note that one needs to label the node before `nodeSelector` can work; see this [tutorial](../../docs/user-guide/node-selection/)

 * see [antmanler/rethinkdb-k8s](https://github.com/antmanler/rethinkdb-k8s) for details
@@ -38,10 +38,10 @@ instructions for your platform.

 ## Step One: Start your Master service

-The Master [service](../../docs/services.md) is the master (or head) service for a Spark
+The Master [service](../../docs/user-guide/services.md) is the master (or head) service for a Spark
 cluster.

-Use the [`examples/spark/spark-master.json`](spark-master.json) file to create a [pod](../../docs/pods.md) running
+Use the [`examples/spark/spark-master.json`](spark-master.json) file to create a [pod](../../docs/user-guide/pods.md) running
 the Master service.

 ```shell
@@ -101,7 +101,7 @@ program.
 The Spark workers need the Master service to be running.

 Use the [`examples/spark/spark-worker-controller.json`](spark-worker-controller.json) file to create a
-[replication controller](../../docs/replication-controller.md) that manages the worker pods.
+[replication controller](../../docs/user-guide/replication-controller.md) that manages the worker pods.

 ```shell
 $ kubectl create -f examples/spark/spark-worker-controller.json
@@ -41,10 +41,10 @@ instructions for your platform.

 ## Step One: Start your ZooKeeper service

-ZooKeeper is a distributed coordination [service](../../docs/services.md) that Storm uses as a
+ZooKeeper is a distributed coordination [service](../../docs/user-guide/services.md) that Storm uses as a
 bootstrap and for state storage.

-Use the [`examples/storm/zookeeper.json`](zookeeper.json) file to create a [pod](../../docs/pods.md) running
+Use the [`examples/storm/zookeeper.json`](zookeeper.json) file to create a [pod](../../docs/user-guide/pods.md) running
 the ZooKeeper service.

 ```shell
@@ -128,7 +128,7 @@ The Storm workers need both the ZooKeeper and Nimbus services to be
 running.

 Use the [`examples/storm/storm-worker-controller.json`](storm-worker-controller.json) file to create a
-[replication controller](../../docs/replication-controller.md) that manages the worker pods.
+[replication controller](../../docs/user-guide/replication-controller.md) that manages the worker pods.

 ```shell
 $ kubectl create -f examples/storm/storm-worker-controller.json