Mirror of https://github.com/k3s-io/kubernetes.git, synced 2025-09-20 09:33:52 +00:00

Merge pull request #35776 from jimmycuadra/petset-rename-docs-examples

Automatic merge from submit-queue

Rename PetSet to StatefulSet in docs and examples.

**What this PR does / why we need it**: Addresses some of the pre-code-freeze changes for implementing the PetSet --> StatefulSet rename. (#35534)

**Special notes for your reviewer**: This PR only changes docs and examples, as #35731 hasn't been merged yet and I don't want to create merge conflicts. I'll open another PR for any remaining code changes needed after that PR is merged.

/cc @erictune @janetkuo @chrislovecnm

@@ -1,21 +1,21 @@
-# CockroachDB on Kubernetes as a PetSet
+# CockroachDB on Kubernetes as a StatefulSet

 This example deploys [CockroachDB](https://cockroachlabs.com) on Kubernetes as
-a PetSet. CockroachDB is a distributed, scalable NewSQL database. Please see
+a StatefulSet. CockroachDB is a distributed, scalable NewSQL database. Please see
 [the homepage](https://cockroachlabs.com) and the
 [documentation](https://www.cockroachlabs.com/docs/) for details.

 ## Limitations

-### PetSet limitations
+### StatefulSet limitations

-Standard PetSet limitations apply: There is currently no possibility to use
+Standard StatefulSet limitations apply: There is currently no possibility to use
 node-local storage (outside of single-node tests), and so there is likely
 a performance hit associated with running CockroachDB on some external storage.
 Note that CockroachDB already does replication and thus it is unnecessary to
 deploy it onto persistent volumes which already replicate internally.
 For this reason, high-performance use cases on a private Kubernetes cluster
-may want to consider a DaemonSet deployment until PetSets support node-local
+may want to consider a DaemonSet deployment until Stateful Sets support node-local
 storage (see #7562).

 ### Recovery after persistent storage failure

@@ -43,13 +43,13 @@ Follow the steps in [minikube.sh](minikube.sh) (or simply run that file).
 ## Testing in the cloud on GCE or AWS

 Once you have a Kubernetes cluster running, just run
-`kubectl create -f cockroachdb-petset.yaml` to create your cockroachdb cluster.
+`kubectl create -f cockroachdb-statefulset.yaml` to create your cockroachdb cluster.
 This works because GCE and AWS support dynamic volume provisioning by default,
 so persistent volumes will be created for the CockroachDB pods as needed.

 ## Accessing the database

-Along with our PetSet configuration, we expose a standard Kubernetes service
+Along with our StatefulSet configuration, we expose a standard Kubernetes service
 that offers a load-balanced virtual IP for clients to access the database
 with. In our example, we've called this service `cockroachdb-public`.

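For context, a minimal sketch of reaching the database through that `cockroachdb-public` service (not part of this diff; it assumes an insecure test cluster and the public `cockroachdb/cockroach` image):

```shell
# Start a throwaway client pod and open a SQL shell against the service.
kubectl run cockroachdb-client -it --rm --restart=Never \
  --image=cockroachdb/cockroach -- sql --insecure --host=cockroachdb-public
```
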
@@ -98,10 +98,10 @@ database and ensuring the other replicas have all data that was written.

 ## Scaling up or down

-Simply patch the PetSet by running
+Simply patch the Stateful Set by running

 ```shell
-kubectl patch petset cockroachdb -p '{"spec":{"replicas":4}}'
+kubectl patch statefulset cockroachdb -p '{"spec":{"replicas":4}}'
 ```

 Note that you may need to create a new persistent volume claim first. If you

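The patch above can be checked with ordinary `kubectl` queries; a quick sketch (not part of this diff):

```shell
# Bump the replica count, then confirm the StatefulSet and its pods.
kubectl patch statefulset cockroachdb -p '{"spec":{"replicas":4}}'
kubectl get statefulset cockroachdb
kubectl get pods -l app=cockroachdb
```
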
@@ -116,7 +116,7 @@ Because all of the resources in this example have been tagged with the label `app=cockroachdb`,
 we can clean up everything that we created in one quick command using a selector on that label:

 ```shell
-kubectl delete petsets,pods,persistentvolumes,persistentvolumeclaims,services -l app=cockroachdb
+kubectl delete statefulsets,pods,persistentvolumes,persistentvolumeclaims,services -l app=cockroachdb
 ```

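To confirm that the selector-based cleanup removed everything, a sketch (not part of this diff):

```shell
# Expect "No resources found" once the deletion has finished.
kubectl get statefulsets,pods,persistentvolumes,persistentvolumeclaims,services -l app=cockroachdb
```
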
@@ -23,10 +23,10 @@ spec:
 apiVersion: v1
 kind: Service
 metadata:
-# This service only exists to create DNS entries for each pet in the petset
-# such that they can resolve each other's IP addresses. It does not create a
-# load-balanced ClusterIP and should not be used directly by clients in most
-# circumstances.
+# This service only exists to create DNS entries for each pod in the stateful
+# set such that they can resolve each other's IP addresses. It does not
+# create a load-balanced ClusterIP and should not be used directly by clients
+# in most circumstances.
 name: cockroachdb
 labels:
 app: cockroachdb

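The comment above describes per-pod DNS entries created by this headless service; a sketch of resolving one from inside the cluster (the pod name and `default` namespace are illustrative, and this is not part of the diff):

```shell
# StatefulSet pods get stable DNS names of the form
# <pod>.<service>.<namespace>.svc.cluster.local via the headless service.
kubectl run -it --rm dns-test --image=busybox --restart=Never -- \
  nslookup cockroachdb-0.cockroachdb.default.svc.cluster.local
```
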
@@ -55,7 +55,7 @@ spec:
 app: cockroachdb
 ---
 apiVersion: apps/v1beta1
-kind: PetSet
+kind: StatefulSet
 metadata:
 name: cockroachdb
 spec:

@@ -71,8 +71,8 @@ spec:
 # it's started up for the first time. It has to exit successfully
 # before the pod's main containers are allowed to start.
 # This particular init container does a DNS lookup for other pods in
-# the petset to help determine whether or not a cluster already exists.
-# If any other pets exist, it creates a file in the cockroach-data
+# the set to help determine whether or not a cluster already exists.
+# If any other pods exist, it creates a file in the cockroach-data
 # directory to pass that information along to the primary container that
 # has to decide what command-line flags to use when starting CockroachDB.
 # This only matters when a pod's persistent volume is empty - if it has

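A rough sketch of the kind of peer check such an init container performs (the lookup target and marker file name are illustrative, not the example's actual init container):

```shell
# If the headless service already resolves to peer addresses, assume a
# cluster exists and leave a marker file for the main container to read.
if nslookup cockroachdb.default.svc.cluster.local >/dev/null 2>&1; then
  touch /cockroach-data/cluster_exists
fi
```
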
@@ -14,7 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-# Run the CockroachDB PetSet example on a minikube instance.
+# Run the CockroachDB StatefulSet example on a minikube instance.
 #
 # For a fresh start, run the following first:
 # minikube delete

@@ -29,7 +29,7 @@
 set -exuo pipefail

 # Clean up anything from a prior run:
-kubectl delete petsets,pods,persistentvolumes,persistentvolumeclaims,services -l app=cockroachdb
+kubectl delete statefulsets,pods,persistentvolumes,persistentvolumeclaims,services -l app=cockroachdb

 # Make persistent volumes and (correctly named) claims. We must create the
 # claims here manually even though that sounds counter-intuitive. For details

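A minimal sketch of the kind of volume/claim pair the script creates by hand (not part of this diff; the hostPath location and object names are illustrative, and the claim name must match what the StatefulSet's volume claim template expects):

```shell
# Create one hostPath PersistentVolume plus a claim a StatefulSet pod can bind.
kubectl create -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0
  labels:
    app: cockroachdb
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/pv0
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-cockroachdb-0
  labels:
    app: cockroachdb
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
```
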
@@ -69,4 +69,4 @@ spec:
 EOF
 done;

-kubectl create -f cockroachdb-petset.yaml
+kubectl create -f cockroachdb-statefulset.yaml

@@ -218,10 +218,10 @@ func TestExampleObjectSchemas(t *testing.T) {
 "rbd-with-secret": &api.Pod{},
 },
 "../examples/storage/cassandra": {
-"cassandra-daemonset": &extensions.DaemonSet{},
-"cassandra-controller": &api.ReplicationController{},
-"cassandra-service": &api.Service{},
-"cassandra-petset": &apps.StatefulSet{},
+"cassandra-daemonset": &extensions.DaemonSet{},
+"cassandra-controller": &api.ReplicationController{},
+"cassandra-service": &api.Service{},
+"cassandra-statefulset": &apps.StatefulSet{},
 },
 "../examples/cluster-dns": {
 "dns-backend-rc": &api.ReplicationController{},

@@ -7,9 +7,9 @@
 - [Cassandra Docker](#cassandra-docker)
 - [Quickstart](#quickstart)
 - [Step 1: Create a Cassandra Headless Service](#step-1-create-a-cassandra-headless-service)
-- [Step 2: Use a Pet Set to create Cassandra Ring](#step-2-create-a-cassandra-petset)
-- [Step 3: Validate and Modify The Cassandra Pet Set](#step-3-validate-and-modify-the-cassandra-pet-set)
-- [Step 4: Delete Cassandra Pet Set](#step-4-delete-cassandra-pet-set)
+- [Step 2: Use a StatefulSet to create Cassandra Ring](#step-2-use-a-statefulset-to-create-cassandra-ring)
+- [Step 3: Validate and Modify The Cassandra StatefulSet](#step-3-validate-and-modify-the-cassandra-statefulset)
+- [Step 4: Delete Cassandra StatefulSet](#step-4-delete-cassandra-statefulset)
 - [Step 5: Use a Replication Controller to create Cassandra node pods](#step-5-use-a-replication-controller-to-create-cassandra-node-pods)
 - [Step 6: Scale up the Cassandra cluster](#step-6-scale-up-the-cassandra-cluster)
 - [Step 7: Delete the Replication Controller](#step-7-delete-the-replication-controller)

@@ -30,7 +30,7 @@ This example also uses some of the core components of Kubernetes:
 - [_Pods_](../../../docs/user-guide/pods.md)
 - [ _Services_](../../../docs/user-guide/services.md)
 - [_Replication Controllers_](../../../docs/user-guide/replication-controller.md)
-- [_Pet Sets_](http://kubernetes.io/docs/user-guide/petset/)
+- [_Stateful Sets_](http://kubernetes.io/docs/user-guide/petset/)
 - [_Daemon Sets_](../../../docs/admin/daemons.md)

 ## Prerequisites

@@ -65,21 +65,21 @@ here are the steps:

 ```sh
 #
-# Pet Set
+# StatefulSet
 #

-# create a service to track all cassandra petset nodes
+# create a service to track all cassandra statefulset nodes
 kubectl create -f examples/storage/cassandra/cassandra-service.yaml

-# create a petset
-kubectl create -f examples/storage/cassandra/cassandra-petset.yaml
+# create a statefulset
+kubectl create -f examples/storage/cassandra/cassandra-statefulset.yaml

 # validate the Cassandra cluster. Substitute the name of one of your pods.
 kubectl exec -ti cassandra-0 -- nodetool status

 # cleanup
 grace=$(kubectl get po cassandra-0 --template '{{.spec.terminationGracePeriodSeconds}}') \
-&& kubectl delete petset,po -l app=cassandra \
+&& kubectl delete statefulset,po -l app=cassandra \
 && echo "Sleeping $grace" \
 && sleep $grace \
 && kubectl delete pvc -l app=cassandra

@@ -143,7 +143,7 @@ spec:
 [Download example](cassandra-service.yaml?raw=true)
 <!-- END MUNGE: EXAMPLE cassandra-service.yaml -->

-Create the service for the Pet Set:
+Create the service for the StatefulSet:

 ```console

@@ -165,18 +165,18 @@ cassandra None <none> 9042/TCP 45s

 If an error is returned the service create failed.

-## Step 2: Use a Pet Set to create Cassandra Ring
+## Step 2: Use a StatefulSet to create Cassandra Ring

-Pet Sets are a new feature that was added as an <i>Alpha</i> component in Kubernetes
-1.3. Deploying stateful distributed applications, like Cassandra, within a clustered
-environment can be challenging. We implemented Pet Set to greatly simplify this
-process. Multiple Pet Set features are used within this example, but is out of
-scope of this documentation. [Please refer to the Pet Set documentation.](http://kubernetes.io/docs/user-guide/petset/)
+StatefulSets (previously PetSets) are a new feature that was added as an <i>Alpha</i> component in
+Kubernetes 1.3. Deploying stateful distributed applications, like Cassandra, within a clustered
+environment can be challenging. We implemented StatefulSet to greatly simplify this
+process. Multiple StatefulSet features are used within this example, but is out of
+scope of this documentation. [Please refer to the PetSet documentation.](http://kubernetes.io/docs/user-guide/petset/)

-The Pet Set manifest that is included below, creates a Cassandra ring that consists
+The StatefulSet manifest that is included below, creates a Cassandra ring that consists
 of three pods.

-<!-- BEGIN MUNGE: EXAMPLE cassandra-petset.yaml -->
+<!-- BEGIN MUNGE: EXAMPLE cassandra-statefulset.yaml -->

 ```yaml
 apiVersion: "apps/v1beta1"

@@ -246,7 +246,7 @@ spec:
 timeoutSeconds: 5
 # These volume mounts are persistent. They are like inline claims,
 # but not exactly because the names need to match exactly one of
-# the pet volumes.
+# the stateful pod volumes.
 volumeMounts:
 - name: cassandra-data
 mountPath: /cassandra_data

@@ -265,26 +265,26 @@ spec:
 storage: 1Gi
 ```

-[Download example](cassandra-petset.yaml?raw=true)
-<!-- END MUNGE: EXAMPLE cassandra-petset.yaml -->
+[Download example](cassandra-statefulset.yaml?raw=true)
+<!-- END MUNGE: EXAMPLE cassandra-statefulset.yaml -->

-Create the Cassandra Pet Set as follows:
+Create the Cassandra StatefulSet as follows:

 ```console
-$ kubectl create -f examples/storage/cassandra/cassandra-petset.yaml
+$ kubectl create -f examples/storage/cassandra/cassandra-statefulset.yaml
 ```

-## Step 3: Validate and Modify The Cassandra Pet Set
+## Step 3: Validate and Modify The Cassandra StatefulSet

-Deploying this Pet Set shows off two of the new features that Pet Sets provides.
+Deploying this StatefulSet shows off two of the new features that StatefulSets provides.

 1. The pod names are known
 2. The pods deploy in incremental order

-First validate that the Pet Set has deployed, by running `kubectl` command below.
+First validate that the StatefulSet has deployed, by running `kubectl` command below.

 ```console
-$ kubectl get petset cassandra
+$ kubectl get statefulset cassandra
 ```

 The command should respond like:

@@ -294,7 +294,7 @@ NAME DESIRED CURRENT AGE
 cassandra 3 3 13s
 ```

-Next watch the Cassandra pods deploy, one after another. The Pet Set resource
+Next watch the Cassandra pods deploy, one after another. The StatefulSet resource
 deploys pods in a number fashion: 1, 2, 3, etc. If you execute the following
 command before the pods deploy you are able to see the ordered creation.

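One way to watch that ordered creation as it happens (a sketch, not part of this diff):

```shell
# -w streams status changes; cassandra-0 reaches Running before cassandra-1
# is created, and so on through the ordinals.
kubectl get pods -l app=cassandra -w
```
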
@@ -305,9 +305,9 @@ cassandra-0 1/1 Running 0 1m
 cassandra-1 0/1 ContainerCreating 0 8s
 ```

-The above example shows two of the three pods in the Cassandra Pet Set deployed.
+The above example shows two of the three pods in the Cassandra StatefulSet deployed.
 Once all of the pods are deployed the same command will respond with the full
-Pet Set.
+StatefulSet.

 ```console
 $ kubectl get pods -l="app=cassandra"

@@ -339,13 +339,13 @@ $ kubectl exec cassandra-0 -- cqlsh -e 'desc keyspaces'
 system_traces system_schema system_auth system system_distributed
 ```

-In order to increase or decrease the size of the Cassandra Pet Set, you must use
+In order to increase or decrease the size of the Cassandra StatefulSet, you must use
 `kubectl edit`. You can find more information about the edit command in the [documentation](../../../docs/user-guide/kubectl/kubectl_edit.md).

-Use the following command to edit the Pet Set.
+Use the following command to edit the StatefulSet.

 ```console
-$ kubectl edit petset cassandra
+$ kubectl edit statefulset cassandra
 ```

 This will create an editor in your terminal. The line you are looking to change is

@@ -357,8 +357,8 @@ the last line of the example below is the replicas line that you want to change.
 # and an empty file will abort the edit. If an error occurs while saving this file will be
 # reopened with the relevant failures.
 #
-apiVersion: apps/v1alpha1
-kind: PetSet
+apiVersion: apps/v1beta1
+kind: StatefulSet
 metadata:
 creationTimestamp: 2016-08-13T18:40:58Z
 generation: 1

@@ -367,7 +367,7 @@ metadata:
 name: cassandra
 namespace: default
 resourceVersion: "323"
-selfLink: /apis/apps/v1alpha1/namespaces/default/petsets/cassandra
+selfLink: /apis/apps/v1beta1/namespaces/default/statefulsets/cassandra
 uid: 7a219483-6185-11e6-a910-42010a8a0fc0
 spec:
 replicas: 3

@@ -380,10 +380,10 @@ spec:
 replicas: 4
 ```

-The Pet Set will now contain four Pets.
+The StatefulSet will now contain four pods.

 ```console
-$ kubectl get petset cassandra
+$ kubectl get statefulset cassandra
 ```

 The command should respond like:

@@ -393,21 +393,21 @@ NAME DESIRED CURRENT AGE
 cassandra 4 4 36m
 ```

-For the Alpha release of Kubernetes 1.3 the Pet Set resource does not have `kubectl scale`
+For the Kubernetes 1.5 release, the beta StatefulSet resource does not have `kubectl scale`
 functionality, like a Deployment, ReplicaSet, Replication Controller, or Job.

-## Step 4: Delete Cassandra Pet Set
+## Step 4: Delete Cassandra StatefulSet

-There are some limitations with the Alpha release of Pet Set in 1.3. From the [documentation](http://kubernetes.io/docs/user-guide/petset/):
+There are some limitations with the Alpha release of PetSet in 1.3. From the [documentation](http://kubernetes.io/docs/user-guide/petset/):

-"Deleting the Pet Set will not delete any pets. You will either have to manually scale it down to 0 pets first, or delete the pets yourself.
-Deleting and/or scaling a Pet Set down will not delete the volumes associated with the Pet Set. This is done to ensure safety first, your data is more valuable than an auto purge of all related Pet Set resources. Deleting the Persistent Volume Claims will result in a deletion of the associated volumes."
+"Deleting the StatefulSet will not delete any pods. You will either have to manually scale it down to 0 pods first, or delete the pods yourself.
+Deleting and/or scaling a StatefulSet down will not delete the volumes associated with the StatefulSet. This is done to ensure safety first, your data is more valuable than an auto purge of all related StatefulSet resources. Deleting the Persistent Volume Claims will result in a deletion of the associated volumes."

-Use the following commands to delete the Pet Set.
+Use the following commands to delete the StatefulSet.

 ```console
 $ grace=$(kubectl get po cassandra-0 --template '{{.spec.terminationGracePeriodSeconds}}') \
-&& kubectl delete petset,po -l app=cassandra \
+&& kubectl delete statefulset,po -l app=cassandra \
 && echo "Sleeping $grace" \
 && sleep $grace \
 && kubectl delete pvc -l app=cassandra

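As the hunk above notes, `kubectl scale` is not available for this resource and the walkthrough edits the manifest interactively; a non-interactive alternative, in the spirit of the CockroachDB example earlier in this diff, is a merge patch (a sketch, not part of this change):

```shell
# Set the replica count directly instead of opening an editor.
kubectl patch statefulset cassandra -p '{"spec":{"replicas":4}}'
```
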
@@ -65,7 +65,7 @@ spec:
 timeoutSeconds: 5
 # These volume mounts are persistent. They are like inline claims,
 # but not exactly because the names need to match exactly one of
-# the pet volumes.
+# the stateful pod volumes.
 volumeMounts:
 - name: cassandra-data
 mountPath: /cassandra_data

@@ -18,7 +18,7 @@ set -e
 CONF_DIR=/etc/cassandra
 CFG=$CONF_DIR/cassandra.yaml

-# we are doing PetSet or just setting our seeds
+# we are doing StatefulSet or just setting our seeds
 if [ -z "$CASSANDRA_SEEDS" ]; then
 HOSTNAME=$(hostname -f)
 fi

@@ -78,7 +78,7 @@ echo "auto_bootstrap: ${CASSANDRA_AUTO_BOOTSTRAP}" >> $CFG
 # it will be able to get seeds from the seed provider
 if [[ $CASSANDRA_SEEDS == 'false' ]]; then
 sed -ri 's/- seeds:.*/- seeds: "'"$POD_IP"'"/' $CFG
-else # if we have seeds set them. Probably PetSet
+else # if we have seeds set them. Probably StatefulSet
 sed -ri 's/- seeds:.*/- seeds: "'"$CASSANDRA_SEEDS"'"/' $CFG
 fi