Improve CockroachDB example

* Use an init container to eliminate a potential edge case where losing
  the first pet's data could cause it to start a second logical cluster
* Exec the cockroach binary so that it runs as PID 1 in the container
* Make some small improvements to the README
Author: Alex Robinson
Date:   2016-10-31 15:42:11 -04:00
parent e6b2517feb
commit 6b98de39a5
3 changed files with 108 additions and 39 deletions
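A note on the second commit-message bullet: when the container's entrypoint runs the database through a shell wrapper, the shell stays alive as PID 1 and termination signals may never reach the database. A minimal sketch of the difference (illustrative only; `--insecure` stands in for whatever start flags the PetSet actually uses):

```shell
# Before: the wrapper shell remains PID 1, so the SIGTERM sent on pod
# deletion goes to the shell rather than to cockroach.
/cockroach/cockroach start --insecure

# After: exec replaces the shell with the cockroach process, so it runs
# as PID 1 and receives SIGTERM directly for a clean shutdown.
exec /cockroach/cockroach start --insecure
```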

README.md

@@ -12,10 +12,11 @@ a PetSet. CockroachDB is a distributed, scalable NewSQL database. Please see
 Standard PetSet limitations apply: There is currently no possibility to use
 node-local storage (outside of single-node tests), and so there is likely
 a performance hit associated with running CockroachDB on some external storage.
-Note that CockroachDB already does replication and thus should not be deployed on
-a persistent volume which already replicates internally.
-High-performance use cases on a private Kubernetes cluster should consider
-a DaemonSet deployment.
+Note that CockroachDB already does replication and thus it is unnecessary to
+deploy it onto persistent volumes which already replicate internally.
+For this reason, high-performance use cases on a private Kubernetes cluster
+may want to consider a DaemonSet deployment until PetSets support node-local
+storage (see #7562).
 
 ### Recovery after persistent storage failure
@@ -27,17 +28,25 @@ first node is special in that the administrator must manually prepopulate the
 parameter. If this is not done, the first node will bootstrap a new cluster,
 which will lead to a lot of trouble.
 
-### Dynamic provisioning
+### Dynamic volume provisioning
 
-The deployment is written for a use case in which dynamic provisioning is
+The deployment is written for a use case in which dynamic volume provisioning is
 available. When that is not the case, the persistent volume claims need
 to be created manually. See [minikube.sh](minikube.sh) for the necessary
-steps.
+steps. If you're on GCE or AWS, where dynamic provisioning is supported, no
+manual work is needed to create the persistent volumes.
 
 ## Testing locally on minikube
 
 Follow the steps in [minikube.sh](minikube.sh) (or simply run that file).
 
+## Testing in the cloud on GCE or AWS
+
+Once you have a Kubernetes cluster running, just run
+`kubectl create -f cockroachdb-petset.yaml` to create your cockroachdb cluster.
+This works because GCE and AWS support dynamic volume provisioning by default,
+so persistent volumes will be created for the CockroachDB pods as needed.
+
 ## Accessing the database
 
 Along with our PetSet configuration, we expose a standard Kubernetes service
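For the manual-provisioning path mentioned above, a claim must exist for each pet before its pod can be scheduled. A minimal sketch of one such claim (the `datadir` template name and `1Gi` size are assumptions, and matching PersistentVolumes must also exist, which is what [minikube.sh](minikube.sh) sets up):

```shell
# PetSet claims are named <claim-template>-<petset-name>-<ordinal>.
kubectl create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-cockroachdb-0
  labels:
    app: cockroachdb
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF
```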
@@ -48,8 +57,7 @@ Start up a client pod and open up an interactive, (mostly) Postgres-flavor
 SQL shell using:
 
 ```console
-$ kubectl run -it cockroach-client --image=cockroachdb/cockroach --restart=Never --command -- bash
-root@cockroach-client # ./cockroach sql --host cockroachdb-public
+$ kubectl run -it --rm cockroach-client --image=cockroachdb/cockroach --restart=Never --command -- ./cockroach sql --host cockroachdb-public
 ```
 
 You can see example SQL statements for inserting and querying data in the
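As a usage note on the new one-liner: the same pattern should also work non-interactively via `cockroach sql`'s `--execute` flag. The `bank` schema below is illustrative, mirroring the kind of basic examples in the linked docs:

```shell
# Run a few Postgres-style statements against the public service and exit.
kubectl run -it --rm cockroach-client --image=cockroachdb/cockroach \
  --restart=Never --command -- \
  ./cockroach sql --host cockroachdb-public --execute \
  "CREATE DATABASE bank;
   CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL);
   INSERT INTO bank.accounts VALUES (1, 1000.50);
   SELECT * FROM bank.accounts;"
```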
@@ -57,6 +65,19 @@ included [demo script](demo.sh), but can use almost any Postgres-style SQL
 commands. Some more basic examples can be found within
 [CockroachDB's documentation](https://www.cockroachlabs.com/docs/learn-cockroachdb-sql.html).
 
+## Accessing the admin UI
+
+If you want to see information about how the cluster is doing, you can try
+pulling up the CockroachDB admin UI by port-forwarding from your local machine
+to one of the pods:
+
+```shell
+kubectl port-forward cockroachdb-0 8080
+```
+
+Once you've done that, you should be able to access the admin UI by visiting
+http://localhost:8080/ in your web browser.
+
 ## Simulating failures
 
 When all (or enough) nodes are up, simulate a failure like this:
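The failure command itself falls outside this hunk's context lines. One illustrative way to simulate a failure with a PetSet (not necessarily the exact command the README uses) is to delete a pet and watch it recover:

```shell
# Delete one pet; the PetSet controller recreates it, and its persistent
# volume means it rejoins the cluster with its previous data intact.
kubectl delete pod cockroachdb-1

# Watch the replacement come back up (assumes the app=cockroachdb label).
kubectl get pods -l app=cockroachdb -w
```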
@@ -77,10 +98,17 @@ database and ensuring the other replicas have all data that was written.
 ## Scaling up or down
 
-Simply edit the PetSet (but note that you may need to create a new persistent
-volume claim first). If you ran `minikube.sh`, there's a spare volume so you
-can immediately scale up by one. Convince yourself that the new node
-immediately serves reads and writes.
+Simply patch the PetSet by running
+
+```shell
+kubectl patch petset cockroachdb -p '{"spec":{"replicas":4}}'
+```
+
+Note that you may need to create a new persistent volume claim first. If you
+ran `minikube.sh`, there's a spare volume so you can immediately scale up by
+one. If you're running on GCE or AWS, you can scale up by as many as you want
+because new volumes will automatically be created for you. Convince yourself
+that the new node immediately serves reads and writes.
 
 ## Cleaning up when you're done
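The cleanup steps also fall outside the diff. A sketch of what teardown typically looks like for this example, assuming its resources carry the `app=cockroachdb` label (dynamically provisioned volumes may need to be deleted separately):

```shell
# Remove the PetSet, its pods, their volume claims, and the services.
kubectl delete petsets,pods,persistentvolumeclaims,services -l app=cockroachdb
```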