# Cloud Native Deployments of Cassandra using Kubernetes

## Table of Contents

- [Prerequisites](#prerequisites)
- [Cassandra Docker](#cassandra-docker)
- [Quickstart](#quickstart)
- [Step 1: Create a Cassandra Headless Service](#step-1-create-a-cassandra-headless-service)
- [Step 2: Use a StatefulSet to create Cassandra Ring](#step-2-use-a-statefulset-to-create-cassandra-ring)
- [Step 3: Validate and Modify The Cassandra StatefulSet](#step-3-validate-and-modify-the-cassandra-statefulset)
- [Step 4: Delete Cassandra StatefulSet](#step-4-delete-cassandra-statefulset)
- [Step 5: Use a Replication Controller to create Cassandra node pods](#step-5-use-a-replication-controller-to-create-cassandra-node-pods)
- [Step 6: Scale up the Cassandra cluster](#step-6-scale-up-the-cassandra-cluster)
- [Step 7: Delete the Replication Controller](#step-7-delete-the-replication-controller)
- [Step 8: Use a DaemonSet instead of a Replication Controller](#step-8-use-a-daemonset-instead-of-a-replication-controller)
- [Step 9: Resource Cleanup](#step-9-resource-cleanup)
- [Seed Provider Source](#seed-provider-source)

The following document describes the development of a _cloud native_
[Cassandra](http://cassandra.apache.org/) deployment on Kubernetes. When we say
_cloud native_, we mean an application which understands that it is running
within a cluster manager, and uses this cluster management infrastructure to
help implement the application. In particular, in this instance, a custom
Cassandra `SeedProvider` is used to enable Cassandra to dynamically discover
new Cassandra nodes as they join the cluster.

This example also uses some of the core components of Kubernetes:

- [_Pods_](https://kubernetes.io/docs/user-guide/pods.md)
- [_Services_](https://kubernetes.io/docs/user-guide/services.md)
- [_Replication Controllers_](https://kubernetes.io/docs/user-guide/replication-controller.md)
- [_Stateful Sets_](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/)
- [_Daemon Sets_](https://kubernetes.io/docs/admin/daemons.md)

## Prerequisites

This example assumes that you have a Kubernetes version >=1.2 cluster installed and running,
and that you have installed the [`kubectl`](https://kubernetes.io/docs/user-guide/kubectl/kubectl.md)
command line tool somewhere in your path. Please see the
[getting started guides](https://kubernetes.io/docs/getting-started-guides/)
for installation instructions for your platform.

This example also requires a few code and configuration files. To avoid
typing these out, you can `git clone` the Kubernetes repository to your local
computer.

## Cassandra Docker

The pods use the [```gcr.io/google-samples/cassandra:v12```](image/Dockerfile)
image from Google's [container registry](https://cloud.google.com/container-registry/docs/).
The image is based on `debian:jessie` and includes OpenJDK 8. It also contains
a standard Cassandra installation from the Apache Debian repo. Through the use of
environment variables you are able to change values that are inserted into `cassandra.yaml`.

| ENV VAR                 | DEFAULT VALUE  |
| ----------------------- |:--------------:|
| CASSANDRA_CLUSTER_NAME  | 'Test Cluster' |
| CASSANDRA_NUM_TOKENS    | 32             |
| CASSANDRA_RPC_ADDRESS   | 0.0.0.0        |

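If you want to experiment with these variables before deploying to Kubernetes, you can pass them straight to Docker when running the image locally. This is only an illustrative sketch, not part of the example manifests; the container name and the overridden values are hypothetical.

```sh
# Run the sample image locally and override two of the documented variables.
# The container name and values here are for illustration only.
docker run --name cassandra-test \
  -e CASSANDRA_CLUSTER_NAME="My Test Cluster" \
  -e CASSANDRA_NUM_TOKENS=64 \
  -d gcr.io/google-samples/cassandra:v12
```
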
## Quickstart

If you want to jump straight to the commands we will run,
here are the steps:

```sh
#
# StatefulSet
#

# create a service to track all cassandra statefulset nodes
kubectl create -f examples/storage/cassandra/cassandra-service.yaml

# create a statefulset
kubectl create -f examples/storage/cassandra/cassandra-statefulset.yaml

# validate the Cassandra cluster. Substitute the name of one of your pods.
kubectl exec -ti cassandra-0 -- nodetool status

# cleanup
grace=$(kubectl get po cassandra-0 --template '{{.spec.terminationGracePeriodSeconds}}') \
  && kubectl delete statefulset,po -l app=cassandra \
  && echo "Sleeping $grace" \
  && sleep $grace \
  && kubectl delete pvc -l app=cassandra

#
# Replication Controller Example
#

# create a replication controller to replicate cassandra nodes
kubectl create -f examples/storage/cassandra/cassandra-controller.yaml

# validate the Cassandra cluster. Substitute the name of one of your pods.
kubectl exec -ti cassandra-xxxxx -- nodetool status

# scale up the Cassandra cluster
kubectl scale rc cassandra --replicas=4

# delete the replication controller
kubectl delete rc cassandra

#
# Create a DaemonSet to place a cassandra node on each kubernetes node
#

kubectl create -f examples/storage/cassandra/cassandra-daemonset.yaml --validate=false

# resource cleanup
kubectl delete service -l app=cassandra
kubectl delete daemonset cassandra
```

## Step 1: Create a Cassandra Headless Service

A Kubernetes _[Service](https://kubernetes.io/docs/user-guide/services.md)_ describes a set of
[_Pods_](https://kubernetes.io/docs/user-guide/pods.md) that perform the same task. In
Kubernetes, the atomic unit of an application is a Pod: one or more containers
that _must_ be scheduled onto the same host.

The Service is used for DNS lookups between Cassandra Pods, and Cassandra clients
within the Kubernetes Cluster.

Here is the service description:

<!-- BEGIN MUNGE: EXAMPLE cassandra-service.yaml -->

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  clusterIP: None
  ports:
    - port: 9042
  selector:
    app: cassandra
```

[Download example](cassandra-service.yaml?raw=true)
<!-- END MUNGE: EXAMPLE cassandra-service.yaml -->

Create the service for the StatefulSet:

```console
$ kubectl create -f examples/storage/cassandra/cassandra-service.yaml
```

The following command shows whether the service has been created:

```console
$ kubectl get svc cassandra
```

The response should look like this:

```console
NAME        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
cassandra   None         <none>        9042/TCP   45s
```

If an error is returned, the service creation failed.

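Because this is a headless service (`clusterIP: None`), it gets no load-balanced virtual IP; DNS for the service name instead resolves to the addresses of the ready Cassandra pods. A quick way to see which pods the service currently tracks is to query its endpoints (the list stays empty until the StatefulSet in the next step creates pods):

```sh
# List the pod IPs currently selected by the cassandra service.
# Before any pods with the app=cassandra label exist, ENDPOINTS will be empty.
kubectl get endpoints cassandra
```
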
## Step 2: Use a StatefulSet to create Cassandra Ring

StatefulSets (previously PetSets) are a feature that was upgraded to a <i>Beta</i> component in
Kubernetes 1.5. Deploying stateful distributed applications, like Cassandra, within a clustered
environment can be challenging. We implemented StatefulSet to greatly simplify this
process. Multiple StatefulSet features are used within this example, but they are out of
scope of this documentation. [Please refer to the Stateful Set documentation.](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/)

The StatefulSet manifest included below creates a Cassandra ring that consists
of three pods.

This example uses a GCE Storage Class; please update it appropriately for
the cloud you are working with.

<!-- BEGIN MUNGE: EXAMPLE cassandra-statefulset.yaml -->

```yaml
apiVersion: "apps/v1beta1"
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
      - name: cassandra
        image: gcr.io/google-samples/cassandra:v12
        imagePullPolicy: Always
        ports:
        - containerPort: 7000
          name: intra-node
        - containerPort: 7001
          name: tls-intra-node
        - containerPort: 7199
          name: jmx
        - containerPort: 9042
          name: cql
        resources:
          limits:
            cpu: "500m"
            memory: 1Gi
          requests:
            cpu: "500m"
            memory: 1Gi
        securityContext:
          capabilities:
            add:
              - IPC_LOCK
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh", "-c", "PID=$(pidof java) && kill $PID && while ps -p $PID > /dev/null; do sleep 1; done"]
        env:
        - name: MAX_HEAP_SIZE
          value: 512M
        - name: HEAP_NEWSIZE
          value: 100M
        - name: CASSANDRA_SEEDS
          value: "cassandra-0.cassandra.default.svc.cluster.local"
        - name: CASSANDRA_CLUSTER_NAME
          value: "K8Demo"
        - name: CASSANDRA_DC
          value: "DC1-K8Demo"
        - name: CASSANDRA_RACK
          value: "Rack1-K8Demo"
        - name: CASSANDRA_AUTO_BOOTSTRAP
          value: "false"
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        readinessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - /ready-probe.sh
          initialDelaySeconds: 15
          timeoutSeconds: 5
        # These volume mounts are persistent. They are like inline claims,
        # but not exactly because the names need to match exactly one of
        # the stateful pod volumes.
        volumeMounts:
        - name: cassandra-data
          mountPath: /cassandra_data
  # These are converted to volume claims by the controller
  # and mounted at the paths mentioned above.
  # do not use these in production until ssd GCEPersistentDisk or other ssd pd
  volumeClaimTemplates:
  - metadata:
      name: cassandra-data
      annotations:
        volume.beta.kubernetes.io/storage-class: fast
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
```

[Download example](cassandra-statefulset.yaml?raw=true)
<!-- END MUNGE: EXAMPLE cassandra-statefulset.yaml -->

Create the Cassandra StatefulSet as follows:

```console
$ kubectl create -f examples/storage/cassandra/cassandra-statefulset.yaml
```

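The pods come up one at a time. You can watch them being created in order with the following command (press Ctrl+C to stop watching):

```sh
# Watch the StatefulSet bring up cassandra-0, cassandra-1, and cassandra-2 in order.
kubectl get pods -l app=cassandra -w
```
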
## Step 3: Validate and Modify The Cassandra StatefulSet

Deploying this StatefulSet shows off two of the new features that StatefulSets provide.

1. The pod names are known
2. The pods deploy in incremental order

First validate that the StatefulSet has deployed by running the `kubectl` command below.

```console
$ kubectl get statefulset cassandra
```

The command should respond like:

```console
NAME        DESIRED   CURRENT   AGE
cassandra   3         3         13s
```

Next watch the Cassandra pods deploy, one after another. The StatefulSet resource
deploys pods in sequential order: cassandra-0, cassandra-1, cassandra-2, and so on. If you execute the following
command before the pods deploy you are able to see the ordered creation.

```console
$ kubectl get pods -l="app=cassandra"
NAME          READY     STATUS              RESTARTS   AGE
cassandra-0   1/1       Running             0          1m
cassandra-1   0/1       ContainerCreating   0          8s
```

The above example shows two of the three pods in the Cassandra StatefulSet deployed.
Once all of the pods are deployed the same command will respond with the full
StatefulSet.

```console
$ kubectl get pods -l="app=cassandra"
NAME          READY     STATUS    RESTARTS   AGE
cassandra-0   1/1       Running   0          10m
cassandra-1   1/1       Running   0          9m
cassandra-2   1/1       Running   0          8m
```

Running the Cassandra utility `nodetool` will display the status of the ring.

```console
$ kubectl exec cassandra-0 -- nodetool status
Datacenter: DC1-K8Demo
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address   Load        Tokens  Owns (effective)  Host ID                               Rack
UN  10.4.2.4  65.26 KiB   32      63.7%             a9d27f81-6783-461d-8583-87de2589133e  Rack1-K8Demo
UN  10.4.0.4  102.04 KiB  32      66.7%             5559a58c-8b03-47ad-bc32-c621708dc2e4  Rack1-K8Demo
UN  10.4.1.4  83.06 KiB   32      69.6%             9dce943c-581d-4c0e-9543-f519969cc805  Rack1-K8Demo
```

You can also run `cqlsh` to describe the keyspaces in the cluster.

```console
$ kubectl exec cassandra-0 -- cqlsh -e 'desc keyspaces'

system_traces  system_schema  system_auth  system  system_distributed
```

In order to increase or decrease the size of the Cassandra StatefulSet, you must use
`kubectl edit`. You can find more information about the edit command in the [documentation](https://kubernetes.io/docs/user-guide/kubectl/kubectl_edit.md).

Use the following command to edit the StatefulSet.

```console
$ kubectl edit statefulset cassandra
```

This will open an editor in your terminal. The line you are looking to change is
`replicas`. The example below does not contain the entire contents of the terminal window;
its last line is the replicas line that you want to change.

```console
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  creationTimestamp: 2016-08-13T18:40:58Z
  generation: 1
  labels:
    app: cassandra
  name: cassandra
  namespace: default
  resourceVersion: "323"
  selfLink: /apis/apps/v1beta1/namespaces/default/statefulsets/cassandra
  uid: 7a219483-6185-11e6-a910-42010a8a0fc0
spec:
  replicas: 3
```

Modify the manifest to the following, and save it.

```console
spec:
  replicas: 4
```

The StatefulSet will now contain four pods.

```console
$ kubectl get statefulset cassandra
```

The command should respond like:

```console
NAME        DESIRED   CURRENT   AGE
cassandra   4         4         36m
```

As of the Kubernetes 1.5 release, the beta StatefulSet resource does not have `kubectl scale`
functionality, like a Deployment, ReplicaSet, Replication Controller, or Job.

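Depending on your client version, an alternative to opening an editor is `kubectl patch`, which applies the same `replicas` change shown above in one command. This is a hedged sketch of that approach, not part of the original example:

```sh
# Patch the StatefulSet to the desired replica count instead of using kubectl edit.
kubectl patch statefulset cassandra -p '{"spec":{"replicas":4}}'
```
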
## Step 4: Delete Cassandra StatefulSet

Deleting and/or scaling a StatefulSet down will not delete the volumes associated with the StatefulSet. This is done to ensure safety first: your data is more valuable than an automatic purge of all related StatefulSet resources. Deleting the Persistent Volume Claims may result in a deletion of the associated volumes, depending on the storage class and reclaim policy. You should never assume that you can still access a volume after its claim has been deleted.

Use the following commands to delete the StatefulSet.

```console
$ grace=$(kubectl get po cassandra-0 --template '{{.spec.terminationGracePeriodSeconds}}') \
  && kubectl delete statefulset -l app=cassandra \
  && echo "Sleeping $grace" \
  && sleep $grace \
  && kubectl delete pvc -l app=cassandra
```

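To confirm that the pods and their claims are really gone before moving on, a quick check like the following should return no resources:

```sh
# Both commands should report that no resources were found once cleanup finishes.
kubectl get pods -l app=cassandra
kubectl get pvc -l app=cassandra
```
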
## Step 5: Use a Replication Controller to create Cassandra node pods

A Kubernetes
_[Replication Controller](https://kubernetes.io/docs/user-guide/replication-controller.md)_
is responsible for replicating sets of identical pods. Like a
Service, it has a selector query which identifies the members of its set.
Unlike a Service, it also has a desired number of replicas, and it will create
or delete Pods to ensure that the number of Pods matches up with its
desired state.

The Replication Controller, in conjunction with the Service we just defined,
will let us easily build a replicated, scalable Cassandra cluster.

Let's create a replication controller with two initial replicas.

<!-- BEGIN MUNGE: EXAMPLE cassandra-controller.yaml -->

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: cassandra
  # The labels will be applied automatically
  # from the labels in the pod template, if not set
  # labels:
  #   app: cassandra
spec:
  replicas: 2
  # The selector will be applied automatically
  # from the labels in the pod template, if not set.
  # selector:
  #   app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
        - command:
            - /run.sh
          resources:
            limits:
              cpu: 0.5
          env:
            - name: MAX_HEAP_SIZE
              value: 512M
            - name: HEAP_NEWSIZE
              value: 100M
            - name: CASSANDRA_SEED_PROVIDER
              value: "io.k8s.cassandra.KubernetesSeedProvider"
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          image: gcr.io/google-samples/cassandra:v12
          name: cassandra
          ports:
            - containerPort: 7000
              name: intra-node
            - containerPort: 7001
              name: tls-intra-node
            - containerPort: 7199
              name: jmx
            - containerPort: 9042
              name: cql
          volumeMounts:
            - mountPath: /cassandra_data
              name: data
      volumes:
        - name: data
          emptyDir: {}
```

[Download example](cassandra-controller.yaml?raw=true)
<!-- END MUNGE: EXAMPLE cassandra-controller.yaml -->

There are a few things to note in this description.

The `selector` attribute contains the controller's selector query. It can be
explicitly specified, or applied automatically from the labels in the pod
template if not set, as is done here.

The pod template's label, `app: cassandra`, matches the Service selector
from Step 1. This is how pods created by this replication controller are picked up
by the Service.

The `replicas` attribute specifies the desired number of replicas, in this
case 2 initially. We'll scale up to more shortly.

Create the Replication Controller:

```console
$ kubectl create -f examples/storage/cassandra/cassandra-controller.yaml
```

You can list the new controller:

```console
$ kubectl get rc -o wide
NAME        DESIRED   CURRENT   AGE       CONTAINER(S)   IMAGE(S)                              SELECTOR
cassandra   2         2         11s       cassandra      gcr.io/google-samples/cassandra:v12   app=cassandra
```

Now if you list the pods in your cluster, and filter to the label
`app=cassandra`, you should see two Cassandra pods. (The `wide` argument lets
you see which Kubernetes nodes the pods were scheduled onto.)

```console
$ kubectl get pods -l="app=cassandra" -o wide
NAME              READY     STATUS    RESTARTS   AGE       NODE
cassandra-21qyy   1/1       Running   0          1m        kubernetes-minion-b286
cassandra-q6sz7   1/1       Running   0          1m        kubernetes-minion-9ye5
```

Because these pods have the label `app=cassandra`, they map to the service we
defined in Step 1.

You can check that the Pods are visible to the Service using the following service endpoints query:

```console
$ kubectl get endpoints cassandra -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  creationTimestamp: 2015-06-21T22:34:12Z
  labels:
    app: cassandra
  name: cassandra
  namespace: default
  resourceVersion: "944373"
  selfLink: /api/v1/namespaces/default/endpoints/cassandra
  uid: a3d6c25f-1865-11e5-a34e-42010af01bcc
subsets:
- addresses:
  - ip: 10.244.3.15
    targetRef:
      kind: Pod
      name: cassandra
      namespace: default
      resourceVersion: "944372"
      uid: 9ef9895d-1865-11e5-a34e-42010af01bcc
  ports:
  - port: 9042
    protocol: TCP
```

To show that the `SeedProvider` logic is working as intended, you can use the
`nodetool` command to examine the status of the Cassandra cluster. To do this,
use the `kubectl exec` command, which lets you run `nodetool` in one of your
Cassandra pods. Again, substitute `cassandra-xxxxx` with the actual name of one
of your pods.

```console
$ kubectl exec -ti cassandra-xxxxx -- nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load      Tokens  Owns (effective)  Host ID                               Rack
UN  10.244.0.5  74.09 KB  256     100.0%            86feda0f-f070-4a5b-bda1-2eeb0ad08b77  rack1
UN  10.244.3.3  51.28 KB  256     100.0%            dafe3154-1d67-42e1-ac1d-78e7e80dce2b  rack1
```

## Step 6: Scale up the Cassandra cluster

Now let's scale our Cassandra cluster to 4 pods. We do this by telling the
Replication Controller that we now want 4 replicas.

```sh
$ kubectl scale rc cassandra --replicas=4
```

You can see the new pods listed:

```console
$ kubectl get pods -l="app=cassandra" -o wide
NAME              READY     STATUS    RESTARTS   AGE       NODE
cassandra-21qyy   1/1       Running   0          6m        kubernetes-minion-b286
cassandra-81m2l   1/1       Running   0          47s       kubernetes-minion-b286
cassandra-8qoyp   1/1       Running   0          47s       kubernetes-minion-9ye5
cassandra-q6sz7   1/1       Running   0          6m        kubernetes-minion-9ye5
```

In a few moments, you can examine the Cassandra cluster status again, and see
that the new pods have been detected by the custom `SeedProvider`:

```console
$ kubectl exec -ti cassandra-xxxxx -- nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load      Tokens  Owns (effective)  Host ID                               Rack
UN  10.244.0.6  51.67 KB  256     48.9%             d07b23a5-56a1-4b0b-952d-68ab95869163  rack1
UN  10.244.1.5  84.71 KB  256     50.7%             e060df1f-faa2-470c-923d-ca049b0f3f38  rack1
UN  10.244.1.6  84.71 KB  256     47.0%             83ca1580-4f3c-4ec5-9b38-75036b7a297f  rack1
UN  10.244.0.5  68.2 KB   256     53.4%             72ca27e2-c72c-402a-9313-1e4b61c2f839  rack1
```

## Step 7: Delete the Replication Controller

Before you start Step 8, __delete the replication controller__ you created above:

```sh
$ kubectl delete rc cassandra
```

## Step 8: Use a DaemonSet instead of a Replication Controller

In Kubernetes, a [_Daemon Set_](https://kubernetes.io/docs/admin/daemons.md) can distribute pods
onto Kubernetes nodes, one-to-one. Like a _ReplicationController_, it has a
selector query which identifies the members of its set. Unlike a
_ReplicationController_, it has a node selector to limit which nodes are
scheduled with the templated pods, and it does not replicate based on a set target
number of pods, but rather assigns a single pod to each targeted node.

An example use case: when deploying to the cloud, the expectation is that
instances are ephemeral and might die at any time. Cassandra is built to
replicate data across the cluster to facilitate data redundancy, so that in the
case that an instance dies, the data stored on the instance does not, and the
cluster can react by re-replicating the data to other running nodes.

`DaemonSet` is designed to place a single pod on each node in the Kubernetes
cluster. That will give us data redundancy. Let's create a
DaemonSet to start our storage cluster:

<!-- BEGIN MUNGE: EXAMPLE cassandra-daemonset.yaml -->

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    name: cassandra
  name: cassandra
spec:
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      # Filter to specific nodes:
      # nodeSelector:
      #  app: cassandra
      containers:
        - command:
            - /run.sh
          env:
            - name: MAX_HEAP_SIZE
              value: 512M
            - name: HEAP_NEWSIZE
              value: 100M
            - name: CASSANDRA_SEED_PROVIDER
              value: "io.k8s.cassandra.KubernetesSeedProvider"
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          image: gcr.io/google-samples/cassandra:v12
          name: cassandra
          ports:
            - containerPort: 7000
              name: intra-node
            - containerPort: 7001
              name: tls-intra-node
            - containerPort: 7199
              name: jmx
            - containerPort: 9042
              name: cql
            # If you need it, it is going away in C* 4.0
            #- containerPort: 9160
            #  name: thrift
          resources:
            requests:
              cpu: 0.5
          volumeMounts:
            - mountPath: /cassandra_data
              name: data
      volumes:
        - name: data
          emptyDir: {}
```

[Download example](cassandra-daemonset.yaml?raw=true)
<!-- END MUNGE: EXAMPLE cassandra-daemonset.yaml -->

Most of this DaemonSet definition is identical to the ReplicationController
definition above; it simply gives the daemon set a recipe to use when it creates
new Cassandra pods, and targets all Cassandra nodes in the cluster.

Differentiating aspects are the `nodeSelector` attribute, which allows the
DaemonSet to target a specific subset of nodes (you can label nodes just like
other resources), and the lack of a `replicas` attribute due to the 1-to-1 node-
pod relationship.

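If you uncomment the `nodeSelector` block above, the DaemonSet will only schedule pods onto nodes carrying the matching label. A sketch of labeling a node for that selector follows; `<node-name>` is a placeholder for one of your nodes:

```sh
# Label a node so the commented-out nodeSelector (app: cassandra) would match it.
kubectl label node <node-name> app=cassandra

# Inspect node labels to confirm.
kubectl get nodes --show-labels
```
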
Create this DaemonSet:

```console
$ kubectl create -f examples/storage/cassandra/cassandra-daemonset.yaml
```

You may need to disable config file validation, like so:

```console
$ kubectl create -f examples/storage/cassandra/cassandra-daemonset.yaml --validate=false
```

You can see the DaemonSet running:

```console
$ kubectl get daemonset
NAME        DESIRED   CURRENT   NODE-SELECTOR
cassandra   3         3         <none>
```

Now, if you list the pods in your cluster, and filter to the label
`app=cassandra`, you should see one (and only one) new cassandra pod for each
node in your network.

```console
$ kubectl get pods -l="app=cassandra" -o wide
NAME              READY     STATUS    RESTARTS   AGE       NODE
cassandra-ico4r   1/1       Running   0          4s        kubernetes-minion-rpo1
cassandra-kitfh   1/1       Running   0          1s        kubernetes-minion-9ye5
cassandra-tzw89   1/1       Running   0          2s        kubernetes-minion-b286
```

To prove that this all worked as intended, you can again use the `nodetool`
command to examine the status of the cluster. To do this, use the `kubectl
exec` command to run `nodetool` in one of your newly-launched cassandra pods.

```console
$ kubectl exec -ti cassandra-xxxxx -- nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load      Tokens  Owns (effective)  Host ID                               Rack
UN  10.244.0.5  74.09 KB  256     100.0%            86feda0f-f070-4a5b-bda1-2eeb0ad08b77  rack1
UN  10.244.4.2  32.45 KB  256     100.0%            0b1be71a-6ffb-4895-ac3e-b9791299c141  rack1
UN  10.244.3.3  51.28 KB  256     100.0%            dafe3154-1d67-42e1-ac1d-78e7e80dce2b  rack1
```

**Note**: This example had you delete the cassandra Replication Controller before
you created the DaemonSet. This is because – to keep this example simple – the
RC and the DaemonSet are using the same `app=cassandra` label (so that their pods map to the
service we created, and so that the SeedProvider can identify them).

If we didn't delete the RC first, the two resources would conflict with
respect to how many pods they wanted to have running. If we wanted, we could support running
both together by using additional labels and selectors.

## Step 9: Resource Cleanup

When you are ready to take down your resources, do the following:

```console
$ kubectl delete service -l app=cassandra
$ kubectl delete daemonset cassandra
```

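After the cleanup, a final check that nothing from the example is left behind:

```sh
# The daemonset lookup should return a NotFound error once it has been deleted,
# and the label queries should report no resources found.
kubectl get daemonset cassandra
kubectl get pods -l app=cassandra
kubectl get svc -l app=cassandra
```
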
### Custom Seed Provider

A custom [`SeedProvider`](https://svn.apache.org/repos/asf/cassandra/trunk/src/java/org/apache/cassandra/locator/SeedProvider.java)
is included for running Cassandra on top of Kubernetes. You only need the custom seed
provider when you deploy Cassandra via a replication controller or a daemonset.
In Cassandra, a `SeedProvider` bootstraps the gossip protocol that Cassandra uses to find other
Cassandra nodes. Seed addresses are hosts deemed as contact points. Cassandra
instances use the seed list to find each other and learn the topology of the
ring. The [`KubernetesSeedProvider`](java/src/main/java/io/k8s/cassandra/KubernetesSeedProvider.java)
discovers the IP addresses of Cassandra seeds via the Kubernetes API; those Cassandra
instances are defined within the Cassandra Service.

Refer to the custom seed provider [README](java/README.md) for further
`KubernetesSeedProvider` configurations. For this example you should not need
to customize the Seed Provider configurations.

See the [image](image/) directory of this example for specifics on
how the container docker image was built and what it contains.

You may also note that we are setting some Cassandra parameters (`MAX_HEAP_SIZE`
and `HEAP_NEWSIZE`), and adding information about the
[namespace](https://kubernetes.io/docs/user-guide/namespaces.md).
We also tell Kubernetes that the container exposes
both the `CQL` and `Thrift` API ports. Finally, we tell the cluster
manager that we need 0.5 cpu (0.5 core).

This file has moved to [https://github.com/kubernetes/examples/blob/master/cassandra/README.md](https://github.com/kubernetes/examples/blob/master/cassandra/README.md)

# Cassandra on Kubernetes Custom Seed Provider: releases.k8s.io/HEAD

Within any deployment of Cassandra a Seed Provider is used for node discovery and communication. When a Cassandra node first starts, it must contact one or more seed nodes to learn about the other Cassandra nodes in the ring / rack / datacenter.

This Java project provides a custom Seed Provider which communicates with the Kubernetes API to discover the required information. This provider is bundled with the Docker image provided in this example.

# Configuring the Seed Provider

The following environment variables may be used to override the default configurations:

| ENV VAR                       | DEFAULT VALUE                                        | NOTES                           |
| ----------------------------- |:----------------------------------------------------:|:-------------------------------:|
| KUBERNETES_PORT_443_TCP_ADDR  | kubernetes.default.svc.cluster.local                 | The hostname of the API server  |
| KUBERNETES_PORT_443_TCP_PORT  | 443                                                  | API port number                 |
| CASSANDRA_SERVICE             | cassandra                                            | Default service name for lookup |
| POD_NAMESPACE                 | default                                              | Default pod service namespace   |
| K8S_ACCOUNT_TOKEN             | /var/run/secrets/kubernetes.io/serviceaccount/token  | Default path to service token   |

# Using

If no endpoints are discovered from the API, the seeds configured in the cassandra.yaml file are used.

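To sanity-check the defaults in the table above from inside a running pod, you can look at the mounted service account credentials the provider reads and at the service it looks up. This is only a hedged verification step; substitute one of your pod names for `cassandra-xxxxx`:

```sh
# Confirm the service account token the seed provider uses by default is mounted.
kubectl exec cassandra-xxxxx -- ls /var/run/secrets/kubernetes.io/serviceaccount/

# Confirm the Cassandra service the provider looks up by default exists.
kubectl get svc cassandra
```
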
# Provider limitations

This Cassandra Provider implements `SeedProvider` and utilizes `SimpleSnitch`. This limits a Cassandra Ring to a single Cassandra Datacenter and ignores Rack setup. Datastax provides more documentation on the use of [_SNITCHES_](https://docs.datastax.com/en/cassandra/3.x/cassandra/architecture/archSnitchesAbout.html). Further development is planned to expand this capability.

This in effect makes every node a seed provider, which is not a recommended best practice. This increases maintenance and reduces gossip performance.

This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/storage/cassandra/java/README.md](https://github.com/kubernetes/examples/blob/master/staging/storage/cassandra/java/README.md)

## Cloud Native Deployments of Hazelcast using Kubernetes

The following document describes the development of a _cloud native_ [Hazelcast](http://hazelcast.org/) deployment on Kubernetes. When we say _cloud native_ we mean an application which understands that it is running within a cluster manager, and uses this cluster management infrastructure to help implement the application. In particular, in this instance, a custom Hazelcast ```bootstrapper``` is used to enable Hazelcast to dynamically discover Hazelcast nodes that have already joined the cluster.

Any topology changes are communicated and handled by Hazelcast nodes themselves.

This document also attempts to describe the core components of Kubernetes: _Pods_, _Services_, and _Deployments_.

### Prerequisites

This example assumes that you have a Kubernetes cluster installed and running, and that you have installed the `kubectl` command line tool somewhere in your path. Please see the [getting started guides](https://kubernetes.io/docs/getting-started-guides/) for installation instructions for your platform.

### A note for the impatient

This is a somewhat long tutorial. If you want to jump straight to the "do it now" commands, please see the [tl; dr](#tl-dr) at the end.

### Sources

Source is freely available at:

* Hazelcast Discovery - https://github.com/pires/hazelcast-kubernetes-bootstrapper
* Dockerfile - https://github.com/pires/hazelcast-kubernetes
* Docker Trusted Build - https://quay.io/repository/pires/hazelcast-kubernetes

### Simple Single Pod Hazelcast Node

In Kubernetes, the atomic unit of an application is a [_Pod_](https://kubernetes.io/docs/user-guide/pods.md). A Pod is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes.

In this case, we shall not run a single Hazelcast pod, because the discovery mechanism now relies on a service definition.

### Adding a Hazelcast Service

In Kubernetes a _[Service](https://kubernetes.io/docs/user-guide/services.md)_ describes a set of Pods that perform the same task. For example, the set of nodes in a Hazelcast cluster. An important use for a Service is to create a load balancer which distributes traffic across members of the set. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods available via the Kubernetes API. This is actually how our discovery mechanism works, by relying on the service to discover other Hazelcast pods.

Here is the service description:

<!-- BEGIN MUNGE: EXAMPLE hazelcast-service.yaml -->

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: hazelcast
  name: hazelcast
spec:
  ports:
    - port: 5701
  selector:
    name: hazelcast
```

[Download example](hazelcast-service.yaml?raw=true)
<!-- END MUNGE: EXAMPLE hazelcast-service.yaml -->

The important thing to note here is the `selector`. It is a query over labels that identifies the set of _Pods_ contained by the _Service_. In this case the selector is `name: hazelcast`. If you look at the Deployment specification below, you'll see that the pod has the corresponding label, so it will be selected for membership in this Service.

Create this service as follows:

```sh
$ kubectl create -f examples/storage/hazelcast/hazelcast-service.yaml
```

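As with the Cassandra example, you can quickly confirm the service exists before creating any pods:

```sh
# The hazelcast service has no pods behind it yet, so its endpoints list is empty.
kubectl get svc hazelcast
kubectl get endpoints hazelcast
```
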
### Adding replicated nodes

The real power of Kubernetes and Hazelcast lies in easily building a replicated, resizable Hazelcast cluster.

In Kubernetes a [_Deployment_](https://kubernetes.io/docs/user-guide/deployments.md) is responsible for replicating sets of identical pods. Like a _Service_ it has a selector query which identifies the members of its set. Unlike a _Service_ it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with its desired state.

Deployments will "adopt" existing pods that match their selector query, so let's create a Deployment with a single replica to adopt our existing Hazelcast Pod.

<!-- BEGIN MUNGE: EXAMPLE hazelcast-controller.yaml -->

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hazelcast
  labels:
    name: hazelcast
spec:
  template:
    metadata:
      labels:
        name: hazelcast
    spec:
      containers:
      - name: hazelcast
        image: quay.io/pires/hazelcast-kubernetes:0.8.0
        imagePullPolicy: Always
        env:
        - name: "DNS_DOMAIN"
          value: "cluster.local"
        ports:
        - name: hazelcast
          containerPort: 5701
```

[Download example](hazelcast-deployment.yaml?raw=true)
<!-- END MUNGE: EXAMPLE hazelcast-controller.yaml -->

You may note that we tell Kubernetes that the container exposes the `hazelcast` port.

The bulk of the Deployment config is the pod template, which gives the controller a recipe to use when creating new pods. The other parts are the `selector`, which contains the controller's selector query, and the `replicas` parameter, which specifies the desired number of replicas, in this case 1.

Last but not least, we set the `DNS_DOMAIN` environment variable according to your Kubernetes cluster's DNS configuration.

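If you are not sure what your cluster's DNS domain is, one hedged way to check is to look at the DNS search path inside any running pod; `cluster.local` is the common default. The pod name below is a placeholder:

```sh
# The "search" entries in resolv.conf reveal the cluster DNS domain, e.g. cluster.local.
kubectl exec <any-running-pod> -- cat /etc/resolv.conf
```
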
Create this deployment:

```sh
$ kubectl create -f examples/storage/hazelcast/hazelcast-deployment.yaml
```

After the deployment successfully provisions the pod, you can query the service endpoints:

```sh
$ kubectl get endpoints hazelcast -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  creationTimestamp: 2017-03-15T09:40:11Z
  labels:
    name: hazelcast
  name: hazelcast
  namespace: default
  resourceVersion: "65060"
  selfLink: /api/v1/namespaces/default/endpoints/hazelcast
  uid: 62645b71-0963-11e7-b39c-080027985ce6
subsets:
- addresses:
  - ip: 172.17.0.2
    nodeName: minikube
    targetRef:
      kind: Pod
      name: hazelcast-4195412960-mgqtk
      namespace: default
      resourceVersion: "65058"
      uid: 7043708f-0963-11e7-b39c-080027985ce6
  ports:
  - port: 5701
    protocol: TCP
```

You can see that the _Service_ has found the pod created by the deployment.

Now it gets even more interesting. Let's scale our cluster to 2 pods:

```sh
$ kubectl scale deployment hazelcast --replicas 2
```

Now if you list the pods in your cluster, you should see two hazelcast pods:

```sh
$ kubectl get deployment,pods
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/hazelcast   2         2         2            2           2m

NAME                            READY     STATUS    RESTARTS   AGE
po/hazelcast-4195412960-0tl3w   1/1       Running   0          7s
po/hazelcast-4195412960-mgqtk   1/1       Running   0          2m
```

To prove that this all works, you can use the `logs` command to examine the logs of one pod, for example:

```sh
kubectl logs -f hazelcast-4195412960-0tl3w
2017-03-15 09:42:45.046  INFO 7 --- [           main] com.github.pires.hazelcast.Application   : Starting Application on hazelcast-4195412960-0tl3w with PID 7 (/bootstrapper.jar started by root in /)
2017-03-15 09:42:45.060  INFO 7 --- [           main] com.github.pires.hazelcast.Application   : No active profile set, falling back to default profiles: default
2017-03-15 09:42:45.128  INFO 7 --- [           main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@14514713: startup date [Wed Mar 15 09:42:45 GMT 2017]; root of context hierarchy
2017-03-15 09:42:45.989  INFO 7 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Registering beans for JMX exposure on startup
2017-03-15 09:42:46.001  INFO 7 --- [           main] c.g.p.h.HazelcastDiscoveryController     : Asking k8s registry at https://kubernetes.default.svc.cluster.local..
2017-03-15 09:42:46.376  INFO 7 --- [           main] c.g.p.h.HazelcastDiscoveryController     : Found 2 pods running Hazelcast.
2017-03-15 09:42:46.458  INFO 7 --- [           main] c.h.instance.DefaultAddressPicker        : [LOCAL] [someGroup] [3.8] Interfaces is disabled, trying to pick one address from TCP-IP config addresses: [172.17.0.6, 172.17.0.2]
2017-03-15 09:42:46.458  INFO 7 --- [           main] c.h.instance.DefaultAddressPicker        : [LOCAL] [someGroup] [3.8] Prefer IPv4 stack is true.
2017-03-15 09:42:46.464  INFO 7 --- [           main] c.h.instance.DefaultAddressPicker        : [LOCAL] [someGroup] [3.8] Picked [172.17.0.6]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true
2017-03-15 09:42:46.484  INFO 7 --- [           main] com.hazelcast.system                     : [172.17.0.6]:5701 [someGroup] [3.8] Hazelcast 3.8 (20170217 - d7998b4) starting at [172.17.0.6]:5701
2017-03-15 09:42:46.484  INFO 7 --- [           main] com.hazelcast.system                     : [172.17.0.6]:5701 [someGroup] [3.8] Copyright (c) 2008-2017, Hazelcast, Inc. All Rights Reserved.
2017-03-15 09:42:46.485  INFO 7 --- [           main] com.hazelcast.system                     : [172.17.0.6]:5701 [someGroup] [3.8] Configured Hazelcast Serialization version : 1
2017-03-15 09:42:46.679  INFO 7 --- [           main] c.h.s.i.o.impl.BackpressureRegulator     : [172.17.0.6]:5701 [someGroup] [3.8] Backpressure is disabled
2017-03-15 09:42:47.069  INFO 7 --- [           main] com.hazelcast.instance.Node              : [172.17.0.6]:5701 [someGroup] [3.8] Creating TcpIpJoiner
2017-03-15 09:42:47.182  INFO 7 --- [           main] c.h.s.i.o.impl.OperationExecutorImpl     : [172.17.0.6]:5701 [someGroup] [3.8] Starting 2 partition threads
2017-03-15 09:42:47.189  INFO 7 --- [           main] c.h.s.i.o.impl.OperationExecutorImpl     : [172.17.0.6]:5701 [someGroup] [3.8] Starting 3 generic threads (1 dedicated for priority tasks)
2017-03-15 09:42:47.197  INFO 7 --- [           main] com.hazelcast.core.LifecycleService      : [172.17.0.6]:5701 [someGroup] [3.8] [172.17.0.6]:5701 is STARTING
2017-03-15 09:42:47.253  INFO 7 --- [cached.thread-3] c.hazelcast.nio.tcp.InitConnectionTask   : [172.17.0.6]:5701 [someGroup] [3.8] Connecting to /172.17.0.2:5701, timeout: 0, bind-any: true
2017-03-15 09:42:47.262  INFO 7 --- [cached.thread-3] c.h.nio.tcp.TcpIpConnectionManager       : [172.17.0.6]:5701 [someGroup] [3.8] Established socket connection between /172.17.0.6:58073 and /172.17.0.2:5701
2017-03-15 09:42:54.260  INFO 7 --- [ration.thread-0] com.hazelcast.system                     : [172.17.0.6]:5701 [someGroup] [3.8] Cluster version set to 3.8
2017-03-15 09:42:54.262  INFO 7 --- [ration.thread-0] c.h.internal.cluster.ClusterService      : [172.17.0.6]:5701 [someGroup] [3.8]

Members [2] {
  Member [172.17.0.2]:5701 - 170f6924-7888-442a-9875-ad4d25659a8a
  Member [172.17.0.6]:5701 - b1b82bfa-86c2-4931-af57-325c10c03b3b this
}

2017-03-15 09:42:56.285  INFO 7 --- [           main] com.hazelcast.core.LifecycleService      : [172.17.0.6]:5701 [someGroup] [3.8] [172.17.0.6]:5701 is STARTED
2017-03-15 09:42:56.287  INFO 7 --- [           main] com.github.pires.hazelcast.Application   : Started Application in 11.831 seconds (JVM running for 12.219)
```

Now let's scale our cluster to 4 nodes:

```sh
$ kubectl scale deployment hazelcast --replicas 4
```

Examine the status again by checking a node's logs and you should see the 4 members connected. Something like:

```
(...)

Members [4] {
  Member [172.17.0.2]:5701 - 170f6924-7888-442a-9875-ad4d25659a8a
  Member [172.17.0.6]:5701 - b1b82bfa-86c2-4931-af57-325c10c03b3b this
  Member [172.17.0.9]:5701 - 0c7530d3-1b5a-4f40-bd59-7187e43c1110
  Member [172.17.0.10]:5701 - ad5c3000-7fd0-4ce7-8194-e9b1c2ed6dda
}
```

### tl; dr;

For those of you who are impatient, here is the summary of the commands we ran in this tutorial.

```sh
kubectl create -f examples/storage/hazelcast/hazelcast-service.yaml
kubectl create -f examples/storage/hazelcast/hazelcast-deployment.yaml
kubectl scale deployment hazelcast --replicas 2
kubectl scale deployment hazelcast --replicas 4
```

### Hazelcast Discovery Source

See [here](https://github.com/pires/hazelcast-kubernetes-bootstrapper/blob/master/src/main/java/com/github/pires/hazelcast/HazelcastDiscoveryController.java)

This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/storage/hazelcast/README.md](https://github.com/kubernetes/examples/blob/master/staging/storage/hazelcast/README.md)

# Cloud Native Deployment of Minio using Kubernetes

## Table of Contents

- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Minio Standalone Server Deployment](#minio-standalone-server-deployment)
  - [Standalone Quickstart](#standalone-quickstart)
  - [Step 1: Create Persistent Volume Claim](#step-1-create-persistent-volume-claim)
  - [Step 2: Create Deployment](#step-2-create-minio-deployment)
  - [Step 3: Create LoadBalancer Service](#step-3-create-minio-service)
  - [Step 4: Resource cleanup](#step-4-resource-cleanup)
- [Minio Distributed Server Deployment](#minio-distributed-server-deployment)
  - [Distributed Quickstart](#distributed-quickstart)
  - [Step 1: Create Minio Headless Service](#step-1-create-minio-headless-service)
  - [Step 2: Create Minio Statefulset](#step-2-create-minio-statefulset)
  - [Step 3: Create LoadBalancer Service](#step-3-create-minio-service)
  - [Step 4: Resource cleanup](#step-4-resource-cleanup)

## Introduction

Minio is an AWS S3 compatible, object storage server built for cloud applications and devops. Minio is _cloud native_, meaning Minio understands that it is running within a cluster manager, and uses the cluster management infrastructure for allocation of compute and storage resources.

## Prerequisites

This example assumes that you have a Kubernetes version >=1.4 cluster installed and running, and that you have installed the [`kubectl`](https://kubernetes.io/docs/tasks/kubectl/install/) command line tool in your path. Please see the
[getting started guides](https://kubernetes.io/docs/getting-started-guides/) for installation instructions for your platform.

## Minio Standalone Server Deployment

The following section describes the process to deploy a standalone [Minio](https://minio.io/) server on Kubernetes. The deployment uses the [official Minio Docker image](https://hub.docker.com/r/minio/minio/~/dockerfile/) from Docker Hub.

This section uses the following core components of Kubernetes:

- [_Pods_](https://kubernetes.io/docs/user-guide/pods/)
- [_Services_](https://kubernetes.io/docs/user-guide/services/)
- [_Deployments_](https://kubernetes.io/docs/user-guide/deployments/)
- [_Persistent Volume Claims_](https://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims)

### Standalone Quickstart

Run the commands below to get started quickly:

```sh
kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-standalone-pvc.yaml?raw=true
kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-standalone-deployment.yaml?raw=true
kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-standalone-service.yaml?raw=true
```

### Step 1: Create Persistent Volume Claim

Minio needs persistent storage to store objects. If there is no
persistent storage, the data stored in the Minio instance will be stored in the container file system and will be wiped off as soon as the container restarts.

Create a persistent volume claim (PVC) to request storage for the Minio instance. Kubernetes looks out for PVs matching the PVC request in the cluster and binds one to the PVC automatically.

This is the PVC description.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # This name uniquely identifies the PVC. Will be used in deployment below.
  name: minio-pv-claim
  annotations:
    volume.alpha.kubernetes.io/storage-class: anything
  labels:
    app: minio-storage-claim
spec:
  # Read more about access modes here: http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes
  accessModes:
    - ReadWriteOnce
  resources:
    # This is the request for storage. Should be available in the cluster.
    requests:
      storage: 10Gi
```

Create the PersistentVolumeClaim:

```sh
kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-standalone-pvc.yaml?raw=true
persistentvolumeclaim "minio-pv-claim" created
```

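Before moving on, you can confirm that the claim was bound to a volume; `STATUS` should show `Bound` once a matching PV has been provisioned:

```sh
# The claim should eventually report STATUS Bound with the requested 10Gi capacity.
kubectl get pvc minio-pv-claim
```
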
### Step 2: Create Minio Deployment

A Deployment encapsulates replica sets and pods, so if a pod goes down, the replica set makes sure another pod comes up automatically. This way you won't need to bother about pod failures and will have a stable Minio service available.

This is the deployment description.

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio-deployment
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        # Label is used as selector in the service.
        app: minio
    spec:
      # Refer to the PVC created earlier
      volumes:
      - name: storage
        persistentVolumeClaim:
          # Name of the PVC created earlier
          claimName: minio-pv-claim
      containers:
      - name: minio
        # Pulls the default Minio image from Docker Hub
        image: minio/minio:latest
        args:
        - server
        - /storage
        env:
        # Minio access key and secret key
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9000
          hostPort: 9000
        # Mount the volume into the pod
        volumeMounts:
        - name: storage # must match the volume name, above
          mountPath: "/storage"
```

Create the Deployment:

```sh
kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-standalone-deployment.yaml?raw=true
deployment "minio-deployment" created
```

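A quick check that the Deployment is up and its single pod is running:

```sh
# One replica should become AVAILABLE, and the pod should reach the Running state.
kubectl get deployment minio-deployment
kubectl get pods -l app=minio
```
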
|
||||
### Step 3: Create Minio Service
|
||||
|
||||
Now that you have a Minio deployment running, you may either want to access it internally (within the cluster) or expose it as a Service onto an external (outside of your cluster, maybe public internet) IP address, depending on your use case. You can achieve this using Services. There are 3 major service types — default type is ClusterIP, which exposes a service to connection from inside the cluster. NodePort and LoadBalancer are two types that expose services to external traffic.
|
||||
|
||||
In this example, we expose the Minio Deployment by creating a LoadBalancer service. This is the service description.
|
||||
|
||||
```sh
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: minio-service
|
||||
spec:
|
||||
type: LoadBalancer
|
||||
ports:
|
||||
- port: 9000
|
||||
targetPort: 9000
|
||||
protocol: TCP
|
||||
selector:
|
||||
app: minio
|
||||
```
|
||||
Create the Minio service
|
||||
|
||||
```sh
|
||||
kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-standalone-service.yaml?raw=true
|
||||
service "minio-service" created
|
||||
```
|
||||
|
||||
The `LoadBalancer` service takes couple of minutes to launch. To check if the service was created successfully, run the command
|
||||
|
||||
```sh
|
||||
kubectl get svc minio-service
|
||||
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
minio-service 10.55.248.23 104.199.249.165 9000:31852/TCP 1m
|
||||
```
|
||||
|
||||
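If your cluster has no load balancer integration (for example, a local test cluster), a possible alternative for trying the server out is to port-forward to the Minio pod and point a browser or S3 client at `http://localhost:9000`, using the access and secret keys from the deployment. The pod name below is a placeholder:

```sh
# Forward local port 9000 to the Minio pod for as long as the command runs.
kubectl port-forward <minio-pod-name> 9000:9000
```
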
### Step 4: Resource cleanup
|
||||
|
||||
Once you are done, cleanup the cluster using
|
||||
```sh
|
||||
kubectl delete deployment minio-deployment \
|
||||
&& kubectl delete pvc minio-pv-claim \
|
||||
&& kubectl delete svc minio-service
|
||||
```
|
||||
|
||||
## Minio Distributed Server Deployment
|
||||
|
||||
The following document describes the process to deploy [distributed Minio](https://docs.minio.io/docs/distributed-minio-quickstart-guide) server on Kubernetes. This example uses the [official Minio Docker image](https://hub.docker.com/r/minio/minio/~/dockerfile/) from Docker Hub.
|
||||
|
||||
This example uses following core components of Kubernetes:
|
||||
|
||||
- [_Pods_](https://kubernetes.io/docs/concepts/workloads/pods/pod/)
|
||||
- [_Services_](https://kubernetes.io/docs/concepts/services-networking/service/)
|
||||
- [_Statefulsets_](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/)

### Distributed Quickstart

Run the commands below to get started quickly

```sh
kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-distributed-headless-service.yaml?raw=true
kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-distributed-statefulset.yaml?raw=true
kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-distributed-service.yaml?raw=true
```

### Step 1: Create Minio Headless Service

The Headless Service controls the domain within which StatefulSets are created. The domain managed by this Service takes the form `$(service name).$(namespace).svc.cluster.local` (where “cluster.local” is the cluster domain), and the pods in this domain take the form `$(pod-name-{i}).$(service name).$(namespace).svc.cluster.local`. This is required to get a DNS-resolvable URL for each of the pods created within the StatefulSet.
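
For example, assuming the `minio` Service and StatefulSet defined below are created in the `default` namespace with 4 replicas, the pods become resolvable from inside the cluster under names such as `minio-0.minio.default.svc.cluster.local`. A quick way to verify resolution is shown below; the throwaway pod name and busybox image are just for illustration.

```sh
# Launch a throwaway busybox pod and resolve one of the StatefulSet pods' DNS names.
kubectl run -i --tty dns-test --image=busybox --restart=Never -- nslookup minio-0.minio.default.svc.cluster.local
```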

This is the Headless service description.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: minio
  labels:
    app: minio
spec:
  clusterIP: None
  ports:
    - port: 9000
      name: minio
  selector:
    app: minio
```

Create the Headless Service

```sh
$ kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-distributed-headless-service.yaml?raw=true
service "minio" created
```

### Step 2: Create Minio Statefulset

A StatefulSet provides a deterministic name and a unique identity to each pod, making it easy to deploy stateful distributed applications. To launch distributed Minio you need to pass drive locations as parameters to the minio server command. Then, you’ll need to run the same command on all the participating pods. StatefulSets offer a perfect way to handle this requirement.

This is the StatefulSet description.

```yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: minio
spec:
  serviceName: minio
  replicas: 4
  template:
    metadata:
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
      labels:
        app: minio
    spec:
      containers:
      - name: minio
        env:
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        image: minio/minio:latest
        args:
        - server
        - http://minio-0.minio.default.svc.cluster.local/data
        - http://minio-1.minio.default.svc.cluster.local/data
        - http://minio-2.minio.default.svc.cluster.local/data
        - http://minio-3.minio.default.svc.cluster.local/data
        ports:
        - containerPort: 9000
          hostPort: 9000
        # These volume mounts are persistent. Each pod in the StatefulSet
        # gets a volume mounted based on this field.
        volumeMounts:
        - name: data
          mountPath: /data
  # These are converted to volume claims by the controller
  # and mounted at the paths mentioned above.
  volumeClaimTemplates:
  - metadata:
      name: data
      annotations:
        volume.alpha.kubernetes.io/storage-class: anything
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
```

Create the StatefulSet

```sh
$ kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-distributed-statefulset.yaml?raw=true
statefulset "minio" created
```
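
The pods are created in order (`minio-0` through `minio-3`). As an optional check, list them by the `app: minio` label from the template above; the exact output will vary, but all four replicas should eventually show a Running status.

```sh
# List the pods created by the StatefulSet.
kubectl get pods -l app=minio
```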

### Step 3: Create Minio Service

Now that you have a Minio StatefulSet running, you may want to access it internally (within the cluster) or expose it as a Service on an external IP address (outside of your cluster, perhaps the public internet), depending on your use case. You can achieve this using Services. There are three major Service types: the default, `ClusterIP`, exposes a Service for connections from inside the cluster, while `NodePort` and `LoadBalancer` expose Services to external traffic.

In this example, we expose the Minio StatefulSet by creating a LoadBalancer service. This is the service description.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  type: LoadBalancer
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app: minio
```

Create the Minio service

```sh
$ kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-distributed-service.yaml?raw=true
service "minio-service" created
```

The `LoadBalancer` service takes a couple of minutes to launch. To check whether the service was created successfully, run the command

```sh
$ kubectl get svc minio-service
NAME            CLUSTER-IP     EXTERNAL-IP       PORT(S)          AGE
minio-service   10.55.248.23   104.199.249.165   9000:31852/TCP   1m
```

### Step 4: Resource cleanup

You can clean up the cluster using

```sh
kubectl delete statefulset minio \
&&  kubectl delete svc minio \
&& kubectl delete svc minio-service
```

This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/storage/minio/README.md](https://github.com/kubernetes/examples/blob/master/staging/storage/minio/README.md)

@@ -1,137 +1 @@

## Galera Replication for MySQL on Kubernetes

This document explains a simple demonstration example of running MySQL synchronous replication using Galera, specifically, Percona XtraDB Cluster. The example is simplistic and uses a fixed number (3) of nodes, but the idea can be built upon and made more dynamic as Kubernetes matures.

### Prerequisites

This example assumes that you have a Kubernetes cluster installed and running, and that you have installed the ```kubectl``` command line tool somewhere in your path. Please see the [getting started guides](https://kubernetes.io/docs/getting-started-guides/) for installation instructions for your platform.

Also, this example requires the image found in the ```image``` directory. For your convenience, it is built and available on Docker's public image repository as ```capttofu/percona_xtradb_cluster_5_6```. It can also be built locally, which merely requires updating the image name in the pod and replication controller files.

This example was tested on OS X with a Galera cluster running on VMWare, using the fine repo developed by Paulo Pires ([pires/kubernetes-vagrant-coreos-cluster](https://github.com/pires/kubernetes-vagrant-coreos-cluster)) and client programs built for OS X.

### Basic concept

The basic idea is this: three replication controllers with a single pod each, corresponding services, and a single overall service to connect to all three nodes. One of the important design goals of MySQL replication and/or clustering is to avoid a single point of failure, hence the need to distribute each node or slave across hosts or even geographical locations. Kubernetes is well-suited for facilitating this design pattern using the service and replication controller configuration files in this example.

By default, there are only three pods (and hence replication controllers) for this cluster. This number can be increased using the variable NUM_NODES, specified in the replication controller configuration file. It's important to know that the number of nodes must always be odd.

When the replication controller is created, the corresponding container starts and runs an entrypoint script that installs the MySQL system tables, sets up users, and builds up a list of servers that is used with the Galera parameter ```wsrep_cluster_address```. This is the list of running nodes that Galera uses to elect a node to obtain SST (State Snapshot Transfer) from.
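
As a purely hypothetical illustration (the real value is assembled at startup by the entrypoint script, so the pod name and exact host list below may differ), the resulting setting can be inspected on a running node like this:

```sh
# Pod name is hypothetical; find yours with "kubectl get pods" first.
kubectl exec pxc-node1-h6fqr -i -t -- mysql -u root -p -e "SHOW VARIABLES LIKE 'wsrep_cluster_address'"
# Expected shape of the value, a gcomm:// list of the participating nodes:
#   wsrep_cluster_address | gcomm://pxc-node1,pxc-node2,pxc-node3
```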

Note: Kubernetes best practice is to pre-create the services for each controller, and the configuration files which contain both the service and the replication controller for each node will, when created, result in both a service and a replication controller running for the given node. It's important that initially pxc-node1.yaml be processed first, and that no other pxc-nodeN services exist that don't have corresponding replication controllers. The reason is that if there is a node in ```wsrep_cluster_address``` without a backing Galera node, there will be nothing to obtain SST from, which will cause the node to shut itself down and the container in question to exit (and another to be relaunched soon after, repeatedly).

First, create the overall cluster service that will be used to connect to the cluster:

```kubectl create -f examples/storage/mysql-galera/pxc-cluster-service.yaml```

Create the service and replication controller for the first node:

```kubectl create -f examples/storage/mysql-galera/pxc-node1.yaml```

### Create services and controllers for the remaining nodes

Repeat the previous steps for ```pxc-node2``` and ```pxc-node3```.

When complete, you should be able to connect with a MySQL client to the IP address of the ```pxc-cluster``` service and find a working cluster.

### An example of creating a cluster

Shown below is an example of using ```kubectl``` from within the ```./examples/storage/mysql-galera``` directory; the status of the launched replication controllers and services can then be confirmed.

```
$ kubectl create -f examples/storage/mysql-galera/pxc-cluster-service.yaml
services/pxc-cluster

$ kubectl create -f examples/storage/mysql-galera/pxc-node1.yaml
services/pxc-node1
replicationcontrollers/pxc-node1

$ kubectl create -f examples/storage/mysql-galera/pxc-node2.yaml
services/pxc-node2
replicationcontrollers/pxc-node2

$ kubectl create -f examples/storage/mysql-galera/pxc-node3.yaml
services/pxc-node3
replicationcontrollers/pxc-node3
```

### Confirm a running cluster

Verify everything is running:

```
$ kubectl get rc,pods,services
CONTROLLER   CONTAINER(S)   IMAGE(S)                                   SELECTOR         REPLICAS
pxc-node1    pxc-node1      capttofu/percona_xtradb_cluster_5_6:beta   name=pxc-node1   1
pxc-node2    pxc-node2      capttofu/percona_xtradb_cluster_5_6:beta   name=pxc-node2   1
pxc-node3    pxc-node3      capttofu/percona_xtradb_cluster_5_6:beta   name=pxc-node3   1
NAME              READY     STATUS    RESTARTS   AGE
pxc-node1-h6fqr   1/1       Running   0          41m
pxc-node2-sfqm6   1/1       Running   0          41m
pxc-node3-017b3   1/1       Running   0          40m
NAME          LABELS    SELECTOR           IP(S)            PORT(S)
pxc-cluster   <none>    unit=pxc-cluster   10.100.179.58    3306/TCP
pxc-node1     <none>    name=pxc-node1     10.100.217.202   3306/TCP
                                                            4444/TCP
                                                            4567/TCP
                                                            4568/TCP
pxc-node2     <none>    name=pxc-node2     10.100.47.212    3306/TCP
                                                            4444/TCP
                                                            4567/TCP
                                                            4568/TCP
pxc-node3     <none>    name=pxc-node3     10.100.200.14    3306/TCP
                                                            4444/TCP
                                                            4567/TCP
                                                            4568/TCP
```

The cluster should be ready for use!

### Connecting to the cluster

Using the ```pxc-cluster``` service name and ```kubectl exec```, it is possible to connect to any of the pods using the mysql client on the pod's container and verify the cluster size, which should be ```3```. In the example below, the pxc-node3 replication controller is chosen, and to find its pod name, ```kubectl get pods``` and ```awk``` are employed:

```
$ kubectl get pods|grep pxc-node3|awk '{ print $1 }'
pxc-node3-0b5mc

$ kubectl exec pxc-node3-0b5mc -i -t -- mysql -u root -p -h pxc-cluster

Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.6.24-72.2-56-log Percona XtraDB Cluster (GPL), Release rel72.2, Revision 43abf03, WSREP version 25.11, wsrep_25.11

Copyright (c) 2009-2015 Percona LLC and/or its affiliates
Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
1 row in set (0.06 sec)
```

At this point, there is a working cluster that can be used via the pxc-cluster service IP address!

### TODO

This setup can certainly become more fluid and dynamic. One idea is to use an etcd container to store information about node state. Originally, there was a read-only Kubernetes API available to each container, but that has since been removed. Also, Kelsey Hightower is working on moving the functionality of confd to Kubernetes. This could replace the shell duct tape that builds the cluster configuration file for the image.

This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/storage/mysql-galera/README.md](https://github.com/kubernetes/examples/blob/master/staging/storage/mysql-galera/README.md)

@@ -1,133 +1 @@

## Reliable, Scalable Redis on Kubernetes

The following document describes the deployment of a reliable, multi-node Redis on Kubernetes. It deploys a master with replicated slaves, as well as replicated redis sentinels which are used for health checking and failover.

### Prerequisites

This example assumes that you have a Kubernetes cluster installed and running, and that you have installed the ```kubectl``` command line tool somewhere in your path. Please see the [getting started guides](https://kubernetes.io/docs/getting-started-guides/) for installation instructions for your platform.

### A note for the impatient

This is a somewhat long tutorial. If you want to jump straight to the "do it now" commands, please see the [tl; dr](#tl-dr) at the end.

### Turning up an initial master/sentinel pod.

A [_Pod_](https://kubernetes.io/docs/user-guide/pods.md) is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes.

We will use the shared network namespace to bootstrap our Redis cluster. In particular, the very first sentinel needs to know how to find the master (subsequent sentinels just ask the first sentinel). Because all containers in a Pod share a network namespace, the sentinel can simply look at ```$(hostname -i):6379```.
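
As a quick, optional sanity check once the pod is up (the pod and container names come from redis-master.yaml and may differ), you can ask the sentinel which master it is monitoring; getting an answer at all shows that the two containers see each other over the shared network namespace.

```sh
# Query the sentinel running alongside the master in the same pod.
kubectl exec redis-master -c sentinel -- redis-cli -p 26379 sentinel masters
```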

Here is the config for the initial master and sentinel pod: [redis-master.yaml](redis-master.yaml)

Create this master as follows:

```sh
kubectl create -f examples/storage/redis/redis-master.yaml
```

### Turning up a sentinel service

In Kubernetes a [_Service_](https://kubernetes.io/docs/user-guide/services.md) describes a set of Pods that perform the same task. For example, the set of nodes in a Cassandra cluster, or even the single node we created above. An important use for a Service is to create a load balancer which distributes traffic across members of the set. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods (or the single Pod we've already created) available via the Kubernetes API.

For Redis, we will use a Kubernetes Service to provide a discoverable endpoint for the Redis sentinels in the cluster. From the sentinels, Redis clients can find the master, the slaves, and other relevant info about the cluster. This enables new members to join the cluster when failures occur.

Here is the definition of the sentinel service: [redis-sentinel-service.yaml](redis-sentinel-service.yaml)

Create this service:

```sh
kubectl create -f examples/storage/redis/redis-sentinel-service.yaml
```

### Turning up replicated redis servers

So far, what we have done is pretty manual, and not very fault-tolerant. If the ```redis-master``` pod that we previously created is destroyed for some reason (e.g. a machine dying), our Redis service goes away with it.

In Kubernetes a [_Replication Controller_](https://kubernetes.io/docs/user-guide/replication-controller.md) is responsible for replicating sets of identical pods. Like a _Service_ it has a selector query which identifies the members of its set. Unlike a _Service_ it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with its desired state.

Replication Controllers will "adopt" existing pods that match their selector query, so let's create a Replication Controller with a single replica to adopt our existing Redis server. Here is the replication controller config: [redis-controller.yaml](redis-controller.yaml)

The bulk of this controller config is actually identical to the redis-master pod definition above. It forms the template or "cookie cutter" that defines what it means to be a member of this set.

Create this controller:

```sh
kubectl create -f examples/storage/redis/redis-controller.yaml
```

We'll do the same thing for the sentinel. Here is the controller config: [redis-sentinel-controller.yaml](redis-sentinel-controller.yaml)

We create it as follows:

```sh
kubectl create -f examples/storage/redis/redis-sentinel-controller.yaml
```

### Scale our replicated pods

Creating those controllers didn't actually change anything at first: since we only asked for one sentinel and one redis server, and they already existed, nothing changed. Now we will add more replicas:

```sh
kubectl scale rc redis --replicas=3
```

```sh
kubectl scale rc redis-sentinel --replicas=3
```

This will create two additional replicas of the redis server and two additional replicas of the redis sentinel.

Unlike our original redis-master pod, these pods exist independently, and they use the ```redis-sentinel-service``` that we defined above to discover and join the cluster.

### Delete our manual pod

The final step in the cluster turn-up is to delete the original redis-master pod that we created manually. While it was useful for bootstrapping discovery in the cluster, we really don't want the lifespan of our sentinel to be tied to the lifespan of one of our redis servers, and now that we have a successful, replicated redis sentinel service up and running, the binding is unnecessary.

Delete the master as follows:

```sh
kubectl delete pods redis-master
```

Now let's take a close look at what happens after this pod is deleted. There are three things that happen (a quick verification is shown after the list):

1. The redis replication controller notices that its desired state is 3 replicas, but there are currently only 2 replicas, and so it creates a new redis server to bring the replica count back up to 3.
2. The redis-sentinel replication controller likewise notices the missing sentinel, and also creates a new sentinel.
3. The redis sentinels themselves realize that the master has disappeared from the cluster, and begin the election procedure for selecting a new master. They perform this election and selection, and choose one of the existing redis server replicas to be the new master.
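
The label selectors below are illustrative and depend on how the controllers label their pods; the point is simply that both controllers should be back at three running pods shortly after the delete.

```sh
# Both sets of pods should return to 3 replicas within a few seconds.
kubectl get pods -l name=redis
kubectl get pods -l name=redis-sentinel
```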

### Conclusion

At this point we now have a reliable, scalable Redis installation. By scaling the replication controller for redis servers, we can increase or decrease the number of read-slaves in our cluster. Likewise, if failures occur, the redis-sentinels will perform master election and select a new master.

**NOTE:** Since Redis 3.2 some security measures (binding to 127.0.0.1 and `--protected-mode`) are enabled by default. Please read about this at http://antirez.com/news/96.

### tl; dr

For those of you who are impatient, here is the summary of commands we ran in this tutorial:

```
# Create a bootstrap master
kubectl create -f examples/storage/redis/redis-master.yaml

# Create a service to track the sentinels
kubectl create -f examples/storage/redis/redis-sentinel-service.yaml

# Create a replication controller for redis servers
kubectl create -f examples/storage/redis/redis-controller.yaml

# Create a replication controller for redis sentinels
kubectl create -f examples/storage/redis/redis-sentinel-controller.yaml

# Scale both replication controllers
kubectl scale rc redis --replicas=3
kubectl scale rc redis-sentinel --replicas=3

# Delete the original master pod
kubectl delete pods redis-master
```

This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/storage/redis/README.md](https://github.com/kubernetes/examples/blob/master/staging/storage/redis/README.md)

@@ -1,130 +1 @@

RethinkDB Cluster on Kubernetes
==============================

Setting up a [RethinkDB](http://rethinkdb.com/) cluster on [Kubernetes](http://kubernetes.io)

**Features**

 * Automatic cluster configuration by querying node info from the Kubernetes API
 * Simple

Quick start
-----------

**Step 1**

RethinkDB will discover its peers using the endpoints provided by a Kubernetes service, so first create the service so that the pods created next can query its endpoints:

```sh
$ kubectl create -f examples/storage/rethinkdb/driver-service.yaml
```

Check it out:

```sh
$ kubectl get services
NAME               CLUSTER_IP    EXTERNAL_IP   PORT(S)     SELECTOR       AGE
rethinkdb-driver   10.0.27.114   <none>        28015/TCP   db=rethinkdb   10m
[...]
```

**Step 2**

Start the first server in the cluster:

```sh
$ kubectl create -f examples/storage/rethinkdb/rc.yaml
```

Actually, you can start as many servers as you want at one time; just modify `replicas` in `rc.yaml`.

Check it out again:

```sh
$ kubectl get pods
NAME                 READY     REASON    RESTARTS   AGE
[...]
rethinkdb-rc-r4tb0   1/1       Running   0          1m
```
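
As an optional check, peer discovery works by querying the endpoints of the driver service created in Step 1, so the running pod(s) should now be listed there as well (output omitted here and will vary):

```sh
# Each running RethinkDB pod should appear as an endpoint of the driver service.
kubectl get endpoints rethinkdb-driver
```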

**Done!**

---

Scale
-----

You can scale up your cluster using `kubectl scale`. The new pods will join the existing cluster automatically, for example:

```sh
$ kubectl scale rc rethinkdb-rc --replicas=3
scaled

$ kubectl get pods
NAME                 READY     REASON    RESTARTS   AGE
[...]
rethinkdb-rc-f32c5   1/1       Running   0          1m
rethinkdb-rc-m4d50   1/1       Running   0          1m
rethinkdb-rc-r4tb0   1/1       Running   0          3m
```

Admin
-----

You need a separate pod (labeled as role:admin) to access the web admin UI:

```sh
kubectl create -f examples/storage/rethinkdb/admin-pod.yaml
kubectl create -f examples/storage/rethinkdb/admin-service.yaml
```

Find the service:

```console
$ kubectl get services
NAME               CLUSTER_IP    EXTERNAL_IP      PORT(S)     SELECTOR                  AGE
[...]
rethinkdb-admin    10.0.131.19   104.197.19.120   8080/TCP    db=rethinkdb,role=admin   10m
rethinkdb-driver   10.0.27.114   <none>           28015/TCP   db=rethinkdb              20m
```

We request an external load balancer in the [admin-service.yaml](admin-service.yaml) file:

```
type: LoadBalancer
```

The external load balancer allows us to access the service from outside the firewall via an external IP, 104.197.19.120 in this case.

Note that you may need to create a firewall rule to allow the traffic, assuming you are using Google Compute Engine:

```console
$ gcloud compute firewall-rules create rethinkdb --allow=tcp:8080
```

Now you can open a web browser and go to *http://104.197.19.120:8080* to manage your cluster.

**Why not just use pods in replicas?**

Because kube-proxy acts as a load balancer and sends your traffic to a different server on each request, and the web admin UI is not stateless, using replicated pods directly for the admin UI would cause `Connection not open on server` errors.

- - -

**BTW**

* `gen_pod.sh` is used to generate pod templates for my local cluster. The generated pods use `nodeSelector` to force Kubernetes to schedule containers onto my designated nodes, because I need to access persistent data in my host directories. Note that you need to label a node before `nodeSelector` can work; see this [tutorial](https://kubernetes.io/docs/user-guide/node-selection/).

* See [antmanler/rethinkdb-k8s](https://github.com/antmanler/rethinkdb-k8s) for details.

This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/storage/rethinkdb/README.md](https://github.com/kubernetes/examples/blob/master/staging/storage/rethinkdb/README.md)

@@ -1,113 +1 @@

## Vitess Example

This example shows how to run a [Vitess](http://vitess.io) cluster in Kubernetes.
Vitess is a MySQL clustering system developed at YouTube that makes sharding
transparent to the application layer. It also makes scaling MySQL within
Kubernetes as simple as launching more pods.

The example brings up a database with 2 shards, and then runs a pool of
[sharded guestbook](https://github.com/youtube/vitess/tree/master/examples/kubernetes/guestbook)
pods. The guestbook app was ported from the original
[guestbook](../../../examples/guestbook-go/)
example found elsewhere in this tree, modified to use Vitess as the backend.

For a more detailed, step-by-step explanation of this example setup, see the
[Vitess on Kubernetes](http://vitess.io/getting-started/) guide.

### Prerequisites

You'll need to install [Go 1.4+](https://golang.org/doc/install) to build
`vtctlclient`, the command-line admin tool for Vitess.

We also assume you have a running Kubernetes cluster with `kubectl` pointing to
it by default. See the [Getting Started guides](https://kubernetes.io/docs/getting-started-guides/)
for how to get to that point. Note that your Kubernetes cluster needs to have
enough resources (CPU+RAM) to schedule all the pods. By default, this example
requires a cluster-wide total of at least 6 virtual CPUs and 10GiB RAM. You can
tune these requirements in the
[resource limits](https://kubernetes.io/docs/user-guide/compute-resources.md)
section of each YAML file.

Lastly, you need to open ports 30000-30001 (for the Vitess admin daemon) and 80 (for
the guestbook app) in your firewall. See the
[Services and Firewalls](https://kubernetes.io/docs/user-guide/services-firewalls.md)
guide for examples of how to do that.
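
For example, on Google Compute Engine the required ports could be opened with a single firewall rule; the rule name below is arbitrary and the command is only a sketch, so adapt it to your own provider and network setup.

```sh
# Open the Vitess admin ports and HTTP for the guestbook (GCE example; adjust as needed).
gcloud compute firewall-rules create vitess-example --allow=tcp:30000-30001,tcp:80
```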

### Configure site-local settings

Run the `configure.sh` script to generate a `config.sh` file, which will be used
to customize your cluster settings.

``` console
./configure.sh
```

Currently, we have out-of-the-box support for storing
[backups](http://vitess.io/user-guide/backup-and-restore.html) in
[Google Cloud Storage](https://cloud.google.com/storage/).
If you're using GCS, fill in the fields requested by the configure script.
Note that your Kubernetes cluster must be running on instances with the
`storage-rw` scope for this to work. With Container Engine, you can do this by
passing `--scopes storage-rw` to the `gcloud container clusters create` command.

For other platforms, you'll need to choose the `file` backup storage plugin,
and mount a read-write network volume into the `vttablet` and `vtctld` pods.
For example, you can mount any storage service accessible through NFS into a
Kubernetes volume. Then provide the mount path to the configure script here.

If you prefer to skip setting up a backup volume for the purpose of this example,
you can choose `file` mode and set the path to `/tmp`.

### Start Vitess

``` console
./vitess-up.sh
```

This will run through the steps to bring up Vitess. At the end, you should see
something like this:

``` console
****************************
* Complete!
* Use the following line to make an alias to kvtctl:
* alias kvtctl='$GOPATH/bin/vtctlclient -server 104.197.47.173:30001'
* See the vtctld UI at: http://104.197.47.173:30000
****************************
```
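
As an optional sanity check, you can list the tablets that were brought up. This assumes the `kvtctl` alias from the output above, and that the example cell is named `test`; both are assumptions that may differ in your setup.

```sh
# List all tablets in the example cell; a healthy setup shows the vttablet pods here.
kvtctl ListAllTablets test
```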

### Start the Guestbook app

``` console
./guestbook-up.sh
```

The guestbook service is configured with `type: LoadBalancer` to tell Kubernetes
to expose it on an external IP. It may take a minute to set up, but you should
soon see the external IP show up under the internal one like this:

``` console
$ kubectl get service guestbook
NAME        LABELS    SELECTOR         IP(S)             PORT(S)
guestbook   <none>    name=guestbook   10.67.253.173     80/TCP
                                       104.197.151.132
```

Visit the external IP in your browser to view the guestbook. Note that in this
modified guestbook, there are multiple pages to demonstrate range-based sharding
in Vitess. Each page number is assigned to one of the shards using a
[consistent hashing](https://en.wikipedia.org/wiki/Consistent_hashing) scheme.

### Tear down

``` console
./guestbook-down.sh
./vitess-down.sh
```

You may also want to remove any firewall rules you created.

This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/storage/vitess/README.md](https://github.com/kubernetes/examples/blob/master/staging/storage/vitess/README.md)