Update docs/ URLs to point to proper locations

This commit is contained in:
Christoph Blecker
2017-06-02 00:04:49 -07:00
parent a552ee61a0
commit 1bdc7a29ae
215 changed files with 3099 additions and 3099 deletions

View File

@@ -27,18 +27,18 @@ new Cassandra nodes as they join the cluster.
This example also uses some of the core components of Kubernetes:
- [_Pods_](../../../docs/user-guide/pods.md)
- [_Services_](../../../docs/user-guide/services.md)
- [_Replication Controllers_](../../../docs/user-guide/replication-controller.md)
- [_Pods_](https://kubernetes.io/docs/user-guide/pods.md)
- [_Services_](https://kubernetes.io/docs/user-guide/services.md)
- [_Replication Controllers_](https://kubernetes.io/docs/user-guide/replication-controller.md)
- [_Stateful Sets_](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/)
- [_Daemon Sets_](../../../docs/admin/daemons.md)
- [_Daemon Sets_](https://kubernetes.io/docs/admin/daemons.md)
## Prerequisites
This example assumes that you have a Kubernetes version >=1.2 cluster installed and running,
and that you have installed the [`kubectl`](../../../docs/user-guide/kubectl/kubectl.md)
and that you have installed the [`kubectl`](https://kubernetes.io/docs/user-guide/kubectl/kubectl.md)
command line tool somewhere in your path. Please see the
[getting started guides](../../../docs/getting-started-guides/)
[getting started guides](https://kubernetes.io/docs/getting-started-guides/)
for installation instructions for your platform.
This example also has a few code and configuration files needed. To avoid
@@ -113,8 +113,8 @@ kubectl delete daemonset cassandra
## Step 1: Create a Cassandra Headless Service
A Kubernetes _[Service](../../../docs/user-guide/services.md)_ describes a set of
[_Pods_](../../../docs/user-guide/pods.md) that perform the same task. In
A Kubernetes _[Service](https://kubernetes.io/docs/user-guide/services.md)_ describes a set of
[_Pods_](https://kubernetes.io/docs/user-guide/pods.md) that perform the same task. In
Kubernetes, the atomic unit of an application is a Pod: one or more containers
that _must_ be scheduled onto the same host.
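A headless service for this purpose is short. As a rough illustration (not the example's actual service file, which appears later in the original document; the `app: cassandra` label is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  clusterIP: None      # headless: no virtual IP; DNS resolves to the pod IPs directly
  ports:
    - port: 9042       # Cassandra's CQL port
  selector:
    app: cassandra     # standing query that selects the Cassandra pods
```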
@@ -353,7 +353,7 @@ system_traces system_schema system_auth system system_distributed
```
In order to increase or decrease the size of the Cassandra StatefulSet, you must use
`kubectl edit`. You can find more information about the edit command in the [documentation](../../../docs/user-guide/kubectl/kubectl_edit.md).
`kubectl edit`. You can find more information about the edit command in the [documentation](https://kubernetes.io/docs/user-guide/kubectl/kubectl_edit.md).
Use the following command to edit the StatefulSet.
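The exact command belongs to the original document and is elided by this hunk. Conceptually, `kubectl edit` opens the live object in your editor, and scaling comes down to changing one field; a hedged sketch of the relevant manifest (the invocation, image, and replica count are illustrative):

```yaml
# Opened in an editor via something like: kubectl edit statefulset cassandra
apiVersion: apps/v1beta1     # StatefulSet API group in clusters of this era
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra     # the headless service from Step 1
  replicas: 4                # edit this number to grow or shrink the ring
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
        - name: cassandra
          image: cassandra   # illustrative; the example ships its own image
```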
@@ -426,7 +426,7 @@ $ grace=$(kubectl get po cassandra-0 --template '{{.spec.terminationGracePeriodS
## Step 5: Use a Replication Controller to create Cassandra node pods
A Kubernetes
_[Replication Controller](../../../docs/user-guide/replication-controller.md)_
_[Replication Controller](https://kubernetes.io/docs/user-guide/replication-controller.md)_
is responsible for replicating sets of identical pods. Like a
Service, it has a selector query which identifies the members of its set.
Unlike a Service, it also has a desired number of replicas, and it will create
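As a hedged sketch of that shape (not the example's actual controller file; labels, image, and replica count are illustrative):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: cassandra
spec:
  replicas: 2              # desired number of identical pods
  selector:
    app: cassandra         # the selector query that identifies members of the set
  template:                # pod template used when new replicas are needed
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
        - name: cassandra
          image: cassandra # illustrative; the example ships its own image
```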
@@ -654,7 +654,7 @@ $ kubectl delete rc cassandra
## Step 8: Use a DaemonSet instead of a Replication Controller
In Kubernetes, a [_Daemon Set_](../../../docs/admin/daemons.md) can distribute pods
In Kubernetes, a [_Daemon Set_](https://kubernetes.io/docs/admin/daemons.md) can distribute pods
onto Kubernetes nodes, one-to-one. Like a _ReplicationController_, it has a
selector query which identifies the members of its set. Unlike a
_ReplicationController_, it has a node selector to limit which nodes are
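A hedged sketch of a DaemonSet of that shape (API group matches clusters of this era; the node label used under `nodeSelector` is purely illustrative):

```yaml
apiVersion: extensions/v1beta1   # DaemonSet API group at the time
kind: DaemonSet
metadata:
  name: cassandra
spec:
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      nodeSelector:
        app: cassandra-node      # illustrative: only nodes carrying this label run a pod
      containers:
        - name: cassandra
          image: cassandra       # illustrative; the example ships its own image
```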
@@ -843,7 +843,7 @@ how the container docker image was built and what it contains.
You may also note that we are setting some Cassandra parameters (`MAX_HEAP_SIZE`
and `HEAP_NEWSIZE`), and adding information about the
[namespace](../../../docs/user-guide/namespaces.md).
[namespace](https://kubernetes.io/docs/user-guide/namespaces.md).
We also tell Kubernetes that the container exposes
both the `CQL` and `Thrift` API ports. Finally, we tell the cluster
manager that we need 0.1 cpu (0.1 core).
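Pulled together, the container fragment of the pod template looks roughly like the sketch below; the env var names and the cpu request come from the paragraph above, while the values, the downward-API variable name, and the image are assumptions:

```yaml
containers:
  - name: cassandra
    image: cassandra                      # illustrative; the example builds its own image
    resources:
      requests:
        cpu: "0.1"                        # 0.1 core, as described above
    env:
      - name: MAX_HEAP_SIZE
        value: 512M                       # illustrative value
      - name: HEAP_NEWSIZE
        value: 100M                       # illustrative value
      - name: POD_NAMESPACE               # assumed name; namespace via the downward API
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
    ports:
      - name: cql
        containerPort: 9042               # CQL API port
      - name: thrift
        containerPort: 9160               # Thrift API port
```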

View File

@@ -8,7 +8,7 @@ This document also attempts to describe the core components of Kubernetes: _Pods
### Prerequisites
This example assumes that you have a Kubernetes cluster installed and running, and that you have installed the `kubectl` command line tool somewhere in your path. Please see the [getting started](../../../docs/getting-started-guides/) for installation instructions for your platform.
This example assumes that you have a Kubernetes cluster installed and running, and that you have installed the `kubectl` command line tool somewhere in your path. Please see the [getting started](https://kubernetes.io/docs/getting-started-guides/) for installation instructions for your platform.
### A note for the impatient
@@ -23,14 +23,14 @@ Source is freely available at:
### Simple Single Pod Hazelcast Node
In Kubernetes, the atomic unit of an application is a [_Pod_](../../../docs/user-guide/pods.md). A Pod is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes.
In Kubernetes, the atomic unit of an application is a [_Pod_](https://kubernetes.io/docs/user-guide/pods.md). A Pod is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes.
In this case, we shall not run a single Hazelcast pod, because the discovery mechanism now relies on a service definition.
### Adding a Hazelcast Service
In Kubernetes a _[Service](../../../docs/user-guide/services.md)_ describes a set of Pods that perform the same task. For example, the set of nodes in a Hazelcast cluster. An important use for a Service is to create a load balancer which distributes traffic across members of the set. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods available via the Kubernetes API. This is actually how our discovery mechanism works, by relying on the service to discover other Hazelcast pods.
In Kubernetes a _[Service](https://kubernetes.io/docs/user-guide/services.md)_ describes a set of Pods that perform the same task. For example, the set of nodes in a Hazelcast cluster. An important use for a Service is to create a load balancer which distributes traffic across members of the set. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods available via the Kubernetes API. This is actually how our discovery mechanism works, by relying on the service to discover other Hazelcast pods.
Here is the service description:
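The service file itself is not reproduced in this hunk; a minimal sketch of a service along those lines (the label key/value is an assumption; 5701 is Hazelcast's usual member port):

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: hazelcast
  name: hazelcast
spec:
  ports:
    - port: 5701        # Hazelcast member-to-member port
  selector:
    name: hazelcast     # any pod with this label becomes part of the set
```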
@@ -65,7 +65,7 @@ $ kubectl create -f examples/storage/hazelcast/hazelcast-service.yaml
The real power of Kubernetes and Hazelcast lies in easily building a replicated, resizable Hazelcast cluster.
In Kubernetes a [_Deployment_](../../../docs/user-guide/deployments.md) is responsible for replicating sets of identical pods. Like a _Service_, it has a selector query which identifies the members of its set. Unlike a _Service_, it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with its desired state.
In Kubernetes a [_Deployment_](https://kubernetes.io/docs/user-guide/deployments.md) is responsible for replicating sets of identical pods. Like a _Service_, it has a selector query which identifies the members of its set. Unlike a _Service_, it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with its desired state.
Deployments will "adopt" existing pods that match their selector query, so let's create a Deployment with a single replica to adopt our existing Hazelcast Pod.
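A hedged sketch of such a single-replica Deployment (API group matches clusters of this era; the image reference and labels are illustrative):

```yaml
apiVersion: extensions/v1beta1   # Deployment API group at the time
kind: Deployment
metadata:
  name: hazelcast
spec:
  replicas: 1                    # one replica, so it adopts the existing Hazelcast pod
  template:
    metadata:
      labels:
        name: hazelcast          # must match the Service selector above
    spec:
      containers:
        - name: hazelcast
          image: hazelcast       # illustrative image reference
          ports:
            - containerPort: 5701
```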

View File

@@ -4,7 +4,7 @@ This document explains a simple demonstration example of running MySQL synchrono
### Prerequisites
This example assumes that you have a Kubernetes cluster installed and running, and that you have installed the ```kubectl``` command line tool somewhere in your path. Please see the [getting started](../../../docs/getting-started-guides/) for installation instructions for your platform.
This example assumes that you have a Kubernetes cluster installed and running, and that you have installed the ```kubectl``` command line tool somewhere in your path. Please see the [getting started](https://kubernetes.io/docs/getting-started-guides/) for installation instructions for your platform.
Also, this example requires the image found in the ```image``` directory. For your convenience, it is built and available on Docker's public image repository as ```capttofu/percona_xtradb_cluster_5_6```. It can also be built locally, which merely requires updating the image referenced in the pod or replication controller files.

View File

@@ -4,7 +4,7 @@ The following document describes the deployment of a reliable, multi-node Redis
### Prerequisites
This example assumes that you have a Kubernetes cluster installed and running, and that you have installed the ```kubectl``` command line tool somewhere in your path. Please see the [getting started](../../../docs/getting-started-guides/) for installation instructions for your platform.
This example assumes that you have a Kubernetes cluster installed and running, and that you have installed the ```kubectl``` command line tool somewhere in your path. Please see the [getting started](https://kubernetes.io/docs/getting-started-guides/) for installation instructions for your platform.
### A note for the impatient
@@ -12,7 +12,7 @@ This is a somewhat long tutorial. If you want to jump straight to the "do it no
### Turning up an initial master/sentinel pod.
A [_Pod_](../../../docs/user-guide/pods.md) is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes.
A [_Pod_](https://kubernetes.io/docs/user-guide/pods.md) is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes.
We will use the shared network namespace to bootstrap our Redis cluster. In particular, the very first sentinel needs to know how to find the master (subsequent sentinels just ask the first sentinel). Because all containers in a Pod share a network namespace, the sentinel can simply look at ```$(hostname -i):6379```.
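A hedged sketch of a bootstrap pod of that shape, with a master container and a sentinel container sharing one network namespace (images, labels, and the `SENTINEL` switch are assumptions, not the example's actual redis-master.yaml):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-master
  labels:
    name: redis
    redis-sentinel: "true"
spec:
  containers:
    - name: master
      image: redis                 # illustrative image
      ports:
        - containerPort: 6379      # reachable by the sentinel at $(hostname -i):6379
    - name: sentinel
      image: redis                 # illustrative; the example uses a sentinel-capable image
      env:
        - name: SENTINEL           # assumed switch telling the image to run as a sentinel
          value: "true"
      ports:
        - containerPort: 26379
```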
@@ -27,7 +27,7 @@ kubectl create -f examples/storage/redis/redis-master.yaml
### Turning up a sentinel service
In Kubernetes a [_Service_](../../../docs/user-guide/services.md) describes a set of Pods that perform the same task. For example, the set of nodes in a Cassandra cluster, or even the single node we created above. An important use for a Service is to create a load balancer which distributes traffic across members of the set. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods (or the single Pod we've already created) available via the Kubernetes API.
In Kubernetes a [_Service_](https://kubernetes.io/docs/user-guide/services.md) describes a set of Pods that perform the same task. For example, the set of nodes in a Cassandra cluster, or even the single node we created above. An important use for a Service is to create a load balancer which distributes traffic across members of the set. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods (or the single Pod we've already created) available via the Kubernetes API.
In Redis, we will use a Kubernetes Service to provide discoverable endpoints for the Redis sentinels in the cluster. From the sentinels, Redis clients can find the master, and then the slaves and other relevant info for the cluster. This enables new members to join the cluster when failures occur.
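Not shown in this hunk, such a sentinel service reduces to a selector plus the sentinel port; a minimal sketch (the label key is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: sentinel
  name: redis-sentinel
spec:
  ports:
    - port: 26379              # default Redis sentinel port
  selector:
    redis-sentinel: "true"     # assumed label carried by every sentinel pod
```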
@@ -43,7 +43,7 @@ kubectl create -f examples/storage/redis/redis-sentinel-service.yaml
So far, what we have done is pretty manual, and not very fault-tolerant. If the ```redis-master``` pod that we previously created is destroyed for some reason (e.g. a machine dying) our Redis service goes away with it.
In Kubernetes a [_Replication Controller_](../../../docs/user-guide/replication-controller.md) is responsible for replicating sets of identical pods. Like a _Service_, it has a selector query which identifies the members of its set. Unlike a _Service_, it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with its desired state.
In Kubernetes a [_Replication Controller_](https://kubernetes.io/docs/user-guide/replication-controller.md) is responsible for replicating sets of identical pods. Like a _Service_, it has a selector query which identifies the members of its set. Unlike a _Service_, it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with its desired state.
Replication Controllers will "adopt" existing pods that match their selector query, so let's create a Replication Controller with a single replica to adopt our existing Redis server. Here is the replication controller config: [redis-controller.yaml](redis-controller.yaml)
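The linked file is not reproduced in this hunk; structurally it is the usual selector, replica count, and pod template, roughly (labels and image are illustrative):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis
spec:
  replicas: 1                  # start at one so the controller adopts the existing pod
  selector:
    name: redis                # assumed label matching the redis-master pod
  template:
    metadata:
      labels:
        name: redis
    spec:
      containers:
        - name: redis
          image: redis         # illustrative image
          ports:
            - containerPort: 6379
```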

View File

@@ -120,7 +120,7 @@ since the ui is not stateless when playing with Web Admin UI will cause `Connect
**BTW**
* `gen_pod.sh` is used to generate pod templates for my local cluster;
the generated pods use `nodeSelector` to force k8s to schedule containers onto my designated nodes, because I need to access persistent data in host dirs. Note that one needs to label the node before `nodeSelector` can work; see this [tutorial](../../../docs/user-guide/node-selection/)
the generated pods use `nodeSelector` to force k8s to schedule containers onto my designated nodes, because I need to access persistent data in host dirs. Note that one needs to label the node before `nodeSelector` can work; see this [tutorial](https://kubernetes.io/docs/user-guide/node-selection/)
* see [antmanler/rethinkdb-k8s](https://github.com/antmanler/rethinkdb-k8s) for details
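For reference, the pairing looks roughly like the sketch below: the node is labeled first (for example with `kubectl label nodes <node-name> <key>=<value>`), and the pod spec names the same key/value under `nodeSelector`; the key, value, image, and host path here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rethinkdb-data
spec:
  nodeSelector:
    storage: ssd               # illustrative: only nodes labeled storage=ssd are eligible
  containers:
    - name: rethinkdb
      image: rethinkdb         # illustrative image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      hostPath:
        path: /var/lib/rethinkdb   # illustrative host dir holding the persistent data
```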

View File

@@ -20,17 +20,17 @@ You'll need to install [Go 1.4+](https://golang.org/doc/install) to build
`vtctlclient`, the command-line admin tool for Vitess.
We also assume you have a running Kubernetes cluster with `kubectl` pointing to
it by default. See the [Getting Started guides](../../../docs/getting-started-guides/)
it by default. See the [Getting Started guides](https://kubernetes.io/docs/getting-started-guides/)
for how to get to that point. Note that your Kubernetes cluster needs to have
enough resources (CPU+RAM) to schedule all the pods. By default, this example
requires a cluster-wide total of at least 6 virtual CPUs and 10GiB RAM. You can
tune these requirements in the
[resource limits](../../../docs/user-guide/compute-resources.md)
[resource limits](https://kubernetes.io/docs/user-guide/compute-resources.md)
section of each YAML file.
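Those sections follow the standard Kubernetes resource syntax; a hedged sketch of the fragment you would tune (the numbers are illustrative, not the example's defaults):

```yaml
resources:
  limits:
    cpu: 500m        # half a core; lower this on a small cluster
    memory: 1Gi      # illustrative memory ceiling
```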
Lastly, you need to open ports 30000-30001 (for the Vitess admin daemon) and 80 (for
the guestbook app) in your firewall. See the
[Services and Firewalls](../../../docs/user-guide/services-firewalls.md)
[Services and Firewalls](https://kubernetes.io/docs/user-guide/services-firewalls.md)
guide for examples of how to do that.
### Configure site-local settings