Merge pull request #35776 from jimmycuadra/petset-rename-docs-examples

Automatic merge from submit-queue

Rename PetSet to StatefulSet in docs and examples.

**What this PR does / why we need it**: Addresses some of the pre-code-freeze changes for implementing the PetSet --> StatefulSet rename. (#35534)

**Special notes for your reviewer**: This PR only changes docs and examples, as #35731 hasn't been merged yet and I don't want to create merge conflicts. I'll open another PR for any remaining code changes needed after that PR is merged. /cc @erictune @janetkuo @chrislovecnm
This commit is contained in:
Kubernetes Submit Queue 2016-11-06 13:30:21 -08:00 committed by GitHub
commit b75c3a45a1
15 changed files with 97 additions and 98 deletions

View File

@ -481,7 +481,7 @@ The multiple substitution approach:
for very large jobs, the work-queue style or another type of controller, such as
map-reduce or spark, may be a better fit.)
- Drawback: is a form of server-side templating, which we want in Kubernetes but
have not fully designed (see the [PetSets proposal](https://github.com/kubernetes/kubernetes/pull/18016/files?short_path=61f4179#diff-61f41798f4bced6e42e45731c1494cee)).
have not fully designed (see the [StatefulSets proposal](https://github.com/kubernetes/kubernetes/pull/18016/files?short_path=61f4179#diff-61f41798f4bced6e42e45731c1494cee)).
The index-only approach:
@ -874,24 +874,24 @@ admission time; it will need to understand indexes.
previous container failures.
- modify the job template, affecting all indexes.
#### Comparison to PetSets
#### Comparison to StatefulSets (previously named PetSets)
The *Index substitution-only* option corresponds roughly to PetSet Proposal 1b.
The `perCompletionArgs` approach is similar to PetSet Proposal 1e, but more
The *Index substitution-only* option corresponds roughly to StatefulSet Proposal 1b.
The `perCompletionArgs` approach is similar to StatefulSet Proposal 1e, but more
restrictive and thus less verbose.
It would be easier for users if Indexed Job and PetSet are similar where
possible. However, PetSet differs in several key respects:
It would be easier for users if Indexed Job and StatefulSet are similar where
possible. However, StatefulSet differs in several key respects:
- PetSet is for ones to tens of instances. Indexed job should work with tens of
- StatefulSet is for ones to tens of instances. Indexed job should work with tens of
thousands of instances.
- When you have few instances, you may want to given them pet names. When you
have many instances, you that many instances, integer indexes make more sense.
- When you have few instances, you may want to give them names. When you have many instances,
integer indexes make more sense.
- When you have thousands of instances, storing the work-list in the JobSpec
is verbose. For PetSet, this is less of a problem.
- PetSets (apparently) need to differ in more fields than indexed Jobs.
is verbose. For StatefulSet, this is less of a problem.
- StatefulSets (apparently) need to differ in more fields than indexed Jobs.
This differs from PetSet in that PetSet uses names and not indexes. PetSet is
This differs from StatefulSet in that StatefulSet uses names and not indexes. StatefulSet is
intended to support ones to tens of things.
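For illustration only (this sketch belongs to neither proposal): although StatefulSet identifies pods by stable name rather than by index, an ordinal index is easy to recover from that name, whereas an indexed Job would hand the index to the pod directly. The `cassandra-2` hostname below is a hypothetical example.

```shell
# Hypothetical sketch: recover an integer index from a StatefulSet pod's
# stable hostname (e.g. "cassandra-2" -> 2). An indexed Job would instead
# receive its index directly, e.g. substituted into the pod spec.
HOSTNAME=$(hostname -s)    # e.g. cassandra-2
INDEX=${HOSTNAME##*-}      # strip everything through the last "-"
echo "running as replica ${INDEX}"
```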

View File

@ -11,7 +11,7 @@ Anyone making user facing changes to kubernetes. This is especially important f
### When making Api changes
*e.g. adding Deployments*
* Always make sure docs for downstream effects are updated *(PetSet -> PVC, Deployment -> ReplicationController)*
* Always make sure docs for downstream effects are updated *(StatefulSet -> PVC, Deployment -> ReplicationController)*
* Add or update the corresponding *[Glossary](http://kubernetes.io/docs/reference/)* item
* Verify the guides / walkthroughs do not require any changes:
* **If your change will be recommended over the approaches shown in these guides, then they must be updated to reflect your change**

View File

@ -165,7 +165,7 @@ due to a CVE that just came out (fictional scenario). In this scenario:
up and not scale down the old one.
- an existing replicaSet will be unable to create Pods that replace ones which are terminated. If this is due to
slow loss of nodes, then there should be time to react before significant loss of capacity.
- For non-replicated things (size 1 ReplicaSet, PetSet), a single node failure may disable it.
- For non-replicated things (size 1 ReplicaSet, StatefulSet), a single node failure may disable it.
- a node rolling update will eventually check for liveness of replacements, and would be throttled if
in the case when the image was no longer allowed and so replacements could not be started.
- rapid node restarts will cause existing pod objects to be restarted by kubelet.

View File

@ -158,7 +158,7 @@ Finalizer breaks an assumption that many Kubernetes components have: a deletion
**Replication controller manager**, **Job controller**, and **ReplicaSet controller** ignore pods in terminated phase, so pods with pending finalizers will not block these controllers.
**PetSet controller** will be blocked by a pod with pending finalizers, so synchronous GC might slow down its progress.
**StatefulSet controller** will be blocked by a pod with pending finalizers, so synchronous GC might slow down its progress.
**kubectl**: synchronous GC can simplify the **kubectl delete** reapers. Let's take the `deployment reaper` as an example, since it's the most complicated one. Currently, the reaper finds all `RS` with matching labels, scales them down, polls until `RS.Status.Replica` reaches 0, deletes the `RS`es, and finally deletes the `deployment`. If using synchronous GC, `kubectl delete deployment` is as easy as sending a synchronous GC delete request for the deployment, and polls until the deployment is deleted from the key-value store.
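As a sketch of the simplification: a synchronous delete maps onto a foreground cascading delete against the API server, which keeps the deployment object around until its dependent `RS`es and pods are gone, so the client only has to poll for the object to disappear. The resource path and the `my-deployment` name below are placeholders, and the `propagationPolicy` field is the garbage-collection API as it exists today, shown here only to illustrate the idea.

```shell
# Sketch only: express a "synchronous GC" delete as a foreground cascading
# delete. The deployment is not fully removed until its ReplicaSets and pods
# are gone, so the caller can simply poll until the object disappears.
kubectl proxy --port=8080 &
curl -X DELETE \
  -H "Content-Type: application/json" \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
  http://127.0.0.1:8080/apis/apps/v1/namespaces/default/deployments/my-deployment
```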

View File

@ -11,7 +11,7 @@ There are two main motivators for Template functionality in Kubernetes: Control
Today the replication controller defines a PodTemplate which allows it to instantiate multiple pods with identical characteristics.
This is useful but limited. Stateful applications have a need to instantiate multiple instances of a more sophisticated topology
than just a single pod (e.g. they also need Volume definitions). A Template concept would allow a Controller to stamp out multiple
instances of a given Template definition. This capability would be immediately useful to the [PetSet](https://github.com/kubernetes/kubernetes/pull/18016) proposal.
instances of a given Template definition. This capability would be immediately useful to the [StatefulSet](https://github.com/kubernetes/kubernetes/pull/18016) proposal.
Similarly the [Service Catalog proposal](https://github.com/kubernetes/kubernetes/pull/17543) could leverage template instantiation as a mechanism for claiming service instances.
@ -47,7 +47,7 @@ values are appropriate for a deployer to tune or what the parameters control.
* Providing a library of predefined application definitions that users can select from
* Enabling the creation of user interfaces that can guide an application deployer through the deployment process with descriptive help about the configuration value decisions they are making, and useful default values where appropriate
* Exporting a set of objects in a namespace as a template so the topology can be inspected/visualized or recreated in another environment
* Controllers that need to instantiate multiple instances of identical objects (e.g. PetSets).
* Controllers that need to instantiate multiple instances of identical objects (e.g. StatefulSets).
### Use cases for parameters within templates
@ -65,7 +65,7 @@ values are appropriate for a deployer to tune or what the parameters control.
a pod as a TLS cert).
* Provide guidance to users for parameters such as default values, descriptions, and whether or not a particular parameter value
is required or can be left blank.
* Parameterize the replica count of a deployment or [PetSet](https://github.com/kubernetes/kubernetes/pull/18016)
* Parameterize the replica count of a deployment or [StatefulSet](https://github.com/kubernetes/kubernetes/pull/18016) *(see the sketch after this list)*
* Parameterize part of the labels and selector for a DaemonSet
* Parameterize quota/limit values for a pod
* Parameterize a secret value so a user can provide a custom password or other secret at deployment time
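To make the replica-count use case above concrete, here is a purely hypothetical shape for such a template. No `Template`/`parameters` API is defined by this proposal; the field names, the `$(REPLICAS)` substitution syntax, and the file name are illustrative assumptions, loosely modeled on prior art such as OpenShift templates.

```shell
# Hypothetical sketch only -- not an API defined by this proposal.
# A deployer would supply REPLICAS when the template is instantiated.
cat <<'EOF' > statefulset-template.yaml
kind: Template                # illustrative kind
parameters:
- name: REPLICAS              # value supplied at instantiation time
  description: Number of pods to run
  value: "3"
objects:
- apiVersion: apps/v1beta1
  kind: StatefulSet
  metadata:
    name: example
  spec:
    replicas: $(REPLICAS)     # substituted before the object is created
EOF
```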

View File

@ -1,21 +1,21 @@
# CockroachDB on Kubernetes as a PetSet
# CockroachDB on Kubernetes as a StatefulSet
This example deploys [CockroachDB](https://cockroachlabs.com) on Kubernetes as
a PetSet. CockroachDB is a distributed, scalable NewSQL database. Please see
a StatefulSet. CockroachDB is a distributed, scalable NewSQL database. Please see
[the homepage](https://cockroachlabs.com) and the
[documentation](https://www.cockroachlabs.com/docs/) for details.
## Limitations
### PetSet limitations
### StatefulSet limitations
Standard PetSet limitations apply: There is currently no possibility to use
Standard StatefulSet limitations apply: There is currently no possibility to use
node-local storage (outside of single-node tests), and so there is likely
a performance hit associated with running CockroachDB on some external storage.
Note that CockroachDB already does replication and thus it is unnecessary to
deploy it onto persistent volumes which already replicate internally.
For this reason, high-performance use cases on a private Kubernetes cluster
may want to consider a DaemonSet deployment until PetSets support node-local
may want to consider a DaemonSet deployment until StatefulSets support node-local
storage (see #7562).
### Recovery after persistent storage failure
@ -43,13 +43,13 @@ Follow the steps in [minikube.sh](minikube.sh) (or simply run that file).
## Testing in the cloud on GCE or AWS
Once you have a Kubernetes cluster running, just run
`kubectl create -f cockroachdb-petset.yaml` to create your cockroachdb cluster.
`kubectl create -f cockroachdb-statefulset.yaml` to create your cockroachdb cluster.
This works because GCE and AWS support dynamic volume provisioning by default,
so persistent volumes will be created for the CockroachDB pods as needed.
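To watch things come up, you can list the pods and their dynamically provisioned volume claims using the `app=cockroachdb` label that this example attaches to all of its resources:

```shell
# Watch the CockroachDB pods and their dynamically provisioned volume claims
# appear; every resource in this example carries the app=cockroachdb label.
kubectl get pods -l app=cockroachdb
kubectl get persistentvolumeclaims -l app=cockroachdb
```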
## Accessing the database
Along with our PetSet configuration, we expose a standard Kubernetes service
Along with our StatefulSet configuration, we expose a standard Kubernetes service
that offers a load-balanced virtual IP for clients to access the database
with. In our example, we've called this service `cockroachdb-public`.
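One way to reach the database from inside the cluster is to run a throwaway SQL client pod against that service. The sketch below assumes an insecure test deployment and relies on the `sql` shell built into the `cockroachdb/cockroach` image:

```shell
# Sketch: open a SQL shell against the load-balanced cockroachdb-public service.
# Assumes an insecure test cluster; the client pod is removed when you exit.
kubectl run cockroach-client -it --rm --restart=Never \
  --image=cockroachdb/cockroach -- sql --insecure --host=cockroachdb-public
```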
@ -98,10 +98,10 @@ database and ensuring the other replicas have all data that was written.
## Scaling up or down
Simply patch the PetSet by running
Simply patch the StatefulSet by running
```shell
kubectl patch petset cockroachdb -p '{"spec":{"replicas":4}}'
kubectl patch statefulset cockroachdb -p '{"spec":{"replicas":4}}'
```
Note that you may need to create a new persistent volume claim first. If you
@ -116,7 +116,7 @@ Because all of the resources in this example have been tagged with the label `ap
we can clean up everything that we created in one quick command using a selector on that label:
```shell
kubectl delete petsets,pods,persistentvolumes,persistentvolumeclaims,services -l app=cockroachdb
kubectl delete statefulsets,pods,persistentvolumes,persistentvolumeclaims,services -l app=cockroachdb
```

View File

@ -23,10 +23,10 @@ spec:
apiVersion: v1
kind: Service
metadata:
# This service only exists to create DNS entries for each pet in the petset
# such that they can resolve each other's IP addresses. It does not create a
# load-balanced ClusterIP and should not be used directly by clients in most
# circumstances.
# This service only exists to create DNS entries for each pod in the stateful
# set such that they can resolve each other's IP addresses. It does not
# create a load-balanced ClusterIP and should not be used directly by clients
# in most circumstances.
name: cockroachdb
labels:
app: cockroachdb
@ -55,7 +55,7 @@ spec:
app: cockroachdb
---
apiVersion: apps/v1beta1
kind: PetSet
kind: StatefulSet
metadata:
name: cockroachdb
spec:
@ -71,8 +71,8 @@ spec:
# it's started up for the first time. It has to exit successfully
# before the pod's main containers are allowed to start.
# This particular init container does a DNS lookup for other pods in
# the petset to help determine whether or not a cluster already exists.
# If any other pets exist, it creates a file in the cockroach-data
# the set to help determine whether or not a cluster already exists.
# If any other pods exist, it creates a file in the cockroach-data
# directory to pass that information along to the primary container that
# has to decide what command-line flags to use when starting CockroachDB.
# This only matters when a pod's persistent volume is empty - if it has

View File

@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# Run the CockroachDB PetSet example on a minikube instance.
# Run the CockroachDB StatefulSet example on a minikube instance.
#
# For a fresh start, run the following first:
# minikube delete
@ -29,7 +29,7 @@
set -exuo pipefail
# Clean up anything from a prior run:
kubectl delete petsets,pods,persistentvolumes,persistentvolumeclaims,services -l app=cockroachdb
kubectl delete statefulsets,pods,persistentvolumes,persistentvolumeclaims,services -l app=cockroachdb
# Make persistent volumes and (correctly named) claims. We must create the
# claims here manually even though that sounds counter-intuitive. For details
@ -69,4 +69,4 @@ spec:
EOF
done;
kubectl create -f cockroachdb-petset.yaml
kubectl create -f cockroachdb-statefulset.yaml

View File

@ -218,10 +218,10 @@ func TestExampleObjectSchemas(t *testing.T) {
"rbd-with-secret": &api.Pod{},
},
"../examples/storage/cassandra": {
"cassandra-daemonset": &extensions.DaemonSet{},
"cassandra-controller": &api.ReplicationController{},
"cassandra-service": &api.Service{},
"cassandra-petset": &apps.StatefulSet{},
"cassandra-daemonset": &extensions.DaemonSet{},
"cassandra-controller": &api.ReplicationController{},
"cassandra-service": &api.Service{},
"cassandra-statefulset": &apps.StatefulSet{},
},
"../examples/cluster-dns": {
"dns-backend-rc": &api.ReplicationController{},

View File

@ -7,9 +7,9 @@
- [Cassandra Docker](#cassandra-docker)
- [Quickstart](#quickstart)
- [Step 1: Create a Cassandra Headless Service](#step-1-create-a-cassandra-headless-service)
- [Step 2: Use a Pet Set to create Cassandra Ring](#step-2-create-a-cassandra-petset)
- [Step 3: Validate and Modify The Cassandra Pet Set](#step-3-validate-and-modify-the-cassandra-pet-set)
- [Step 4: Delete Cassandra Pet Set](#step-4-delete-cassandra-pet-set)
- [Step 2: Use a StatefulSet to create Cassandra Ring](#step-2-use-a-statefulset-to-create-cassandra-ring)
- [Step 3: Validate and Modify The Cassandra StatefulSet](#step-3-validate-and-modify-the-cassandra-statefulset)
- [Step 4: Delete Cassandra StatefulSet](#step-4-delete-cassandra-statefulset)
- [Step 5: Use a Replication Controller to create Cassandra node pods](#step-5-use-a-replication-controller-to-create-cassandra-node-pods)
- [Step 6: Scale up the Cassandra cluster](#step-6-scale-up-the-cassandra-cluster)
- [Step 7: Delete the Replication Controller](#step-7-delete-the-replication-controller)
@ -30,7 +30,7 @@ This example also uses some of the core components of Kubernetes:
- [_Pods_](../../../docs/user-guide/pods.md)
- [ _Services_](../../../docs/user-guide/services.md)
- [_Replication Controllers_](../../../docs/user-guide/replication-controller.md)
- [_Pet Sets_](http://kubernetes.io/docs/user-guide/petset/)
- [_Stateful Sets_](http://kubernetes.io/docs/user-guide/petset/)
- [_Daemon Sets_](../../../docs/admin/daemons.md)
## Prerequisites
@ -65,21 +65,21 @@ here are the steps:
```sh
#
# Pet Set
# StatefulSet
#
# create a service to track all cassandra petset nodes
# create a service to track all cassandra statefulset nodes
kubectl create -f examples/storage/cassandra/cassandra-service.yaml
# create a petset
kubectl create -f examples/storage/cassandra/cassandra-petset.yaml
# create a statefulset
kubectl create -f examples/storage/cassandra/cassandra-statefulset.yaml
# validate the Cassandra cluster. Substitute the name of one of your pods.
kubectl exec -ti cassandra-0 -- nodetool status
# cleanup
grace=$(kubectl get po cassandra-0 --template '{{.spec.terminationGracePeriodSeconds}}') \
&& kubectl delete petset,po -l app=cassandra \
&& kubectl delete statefulset,po -l app=cassandra \
&& echo "Sleeping $grace" \
&& sleep $grace \
&& kubectl delete pvc -l app=cassandra
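# Note: deleting the StatefulSet does not delete its volumes. The sleep above
# waits out the pods' termination grace period before the PersistentVolumeClaims
# (and with them the underlying data) are removed.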
@ -143,7 +143,7 @@ spec:
[Download example](cassandra-service.yaml?raw=true)
<!-- END MUNGE: EXAMPLE cassandra-service.yaml -->
Create the service for the Pet Set:
Create the service for the StatefulSet:
```console
@ -165,18 +165,18 @@ cassandra None <none> 9042/TCP 45s
If an error is returned the service create failed.
## Step 2: Use a Pet Set to create Cassandra Ring
## Step 2: Use a StatefulSet to create Cassandra Ring
Pet Sets are a new feature that was added as an <i>Alpha</i> component in Kubernetes
1.3. Deploying stateful distributed applications, like Cassandra, within a clustered
environment can be challenging. We implemented Pet Set to greatly simplify this
process. Multiple Pet Set features are used within this example, but is out of
scope of this documentation. [Please refer to the Pet Set documentation.](http://kubernetes.io/docs/user-guide/petset/)
StatefulSets (previously PetSets) are a new feature that was added as an <i>Alpha</i> component in
Kubernetes 1.3. Deploying stateful distributed applications, like Cassandra, within a clustered
environment can be challenging. We implemented StatefulSet to greatly simplify this
process. Multiple StatefulSet features are used within this example, but they are outside the
scope of this documentation. [Please refer to the StatefulSet documentation.](http://kubernetes.io/docs/user-guide/petset/)
The Pet Set manifest that is included below, creates a Cassandra ring that consists
The StatefulSet manifest included below creates a Cassandra ring that consists
of three pods.
<!-- BEGIN MUNGE: EXAMPLE cassandra-petset.yaml -->
<!-- BEGIN MUNGE: EXAMPLE cassandra-statefulset.yaml -->
```yaml
apiVersion: "apps/v1beta1"
@ -246,7 +246,7 @@ spec:
timeoutSeconds: 5
# These volume mounts are persistent. They are like inline claims,
# but not exactly because the names need to match exactly one of
# the pet volumes.
# the stateful pod volumes.
volumeMounts:
- name: cassandra-data
mountPath: /cassandra_data
@ -265,26 +265,26 @@ spec:
storage: 1Gi
```
[Download example](cassandra-petset.yaml?raw=true)
<!-- END MUNGE: EXAMPLE cassandra-petset.yaml -->
[Download example](cassandra-statefulset.yaml?raw=true)
<!-- END MUNGE: EXAMPLE cassandra-statefulset.yaml -->
Create the Cassandra Pet Set as follows:
Create the Cassandra StatefulSet as follows:
```console
$ kubectl create -f examples/storage/cassandra/cassandra-petset.yaml
$ kubectl create -f examples/storage/cassandra/cassandra-statefulset.yaml
```
## Step 3: Validate and Modify The Cassandra Pet Set
## Step 3: Validate and Modify The Cassandra StatefulSet
Deploying this Pet Set shows off two of the new features that Pet Sets provides.
Deploying this StatefulSet shows off two of the new features that StatefulSets provide.
1. The pod names are known
2. The pods deploy in incremental order
First validate that the Pet Set has deployed, by running `kubectl` command below.
First, validate that the StatefulSet has deployed by running the `kubectl` command below.
```console
$ kubectl get petset cassandra
$ kubectl get statefulset cassandra
```
The command should respond like:
@ -294,7 +294,7 @@ NAME DESIRED CURRENT AGE
cassandra 3 3 13s
```
Next watch the Cassandra pods deploy, one after another. The Pet Set resource
Next watch the Cassandra pods deploy, one after another. The StatefulSet resource
deploys pods in numeric order (cassandra-0, then cassandra-1, and so on). If you execute the following
command before the pods deploy, you are able to see the ordered creation.
@ -305,9 +305,9 @@ cassandra-0 1/1 Running 0 1m
cassandra-1 0/1 ContainerCreating 0 8s
```
The above example shows two of the three pods in the Cassandra Pet Set deployed.
The above example shows two of the three pods in the Cassandra StatefulSet deployed.
Once all of the pods are deployed the same command will respond with the full
Pet Set.
StatefulSet.
```console
$ kubectl get pods -l="app=cassandra"
@ -339,13 +339,13 @@ $ kubectl exec cassandra-0 -- cqlsh -e 'desc keyspaces'
system_traces system_schema system_auth system system_distributed
```
In order to increase or decrease the size of the Cassandra Pet Set, you must use
In order to increase or decrease the size of the Cassandra StatefulSet, you must use
`kubectl edit`. You can find more information about the edit command in the [documentation](../../../docs/user-guide/kubectl/kubectl_edit.md).
Use the following command to edit the Pet Set.
Use the following command to edit the StatefulSet.
```console
$ kubectl edit petset cassandra
$ kubectl edit statefulset cassandra
```
This will create an editor in your terminal. The line you are looking to change is
@ -357,8 +357,8 @@ the last line of the example below is the replicas line that you want to change.
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apps/v1alpha1
kind: PetSet
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
creationTimestamp: 2016-08-13T18:40:58Z
generation: 1
@ -367,7 +367,7 @@ metadata:
name: cassandra
namespace: default
resourceVersion: "323"
selfLink: /apis/apps/v1alpha1/namespaces/default/petsets/cassandra
selfLink: /apis/apps/v1beta1/namespaces/default/statefulsets/cassandra
uid: 7a219483-6185-11e6-a910-42010a8a0fc0
spec:
replicas: 3
@ -380,10 +380,10 @@ spec:
replicas: 4
```
The Pet Set will now contain four Pets.
The StatefulSet will now contain four pods.
```console
$ kubectl get petset cassandra
$ kubectl get statefulset cassandra
```
The command should respond like:
@ -393,21 +393,21 @@ NAME DESIRED CURRENT AGE
cassandra 4 4 36m
```
For the Alpha release of Kubernetes 1.3 the Pet Set resource does not have `kubectl scale`
For the Kubernetes 1.5 release, the beta StatefulSet resource does not have `kubectl scale`
functionality, like a Deployment, ReplicaSet, Replication Controller, or Job.
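Until then, the replica count can be changed with `kubectl edit` as shown above, or patched directly, which is the same approach the CockroachDB example uses:

```console
$ kubectl patch statefulset cassandra -p '{"spec":{"replicas":4}}'
```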
## Step 4: Delete Cassandra Pet Set
## Step 4: Delete Cassandra StatefulSet
There are some limitations with the Alpha release of Pet Set in 1.3. From the [documentation](http://kubernetes.io/docs/user-guide/petset/):
There are some limitations with the Alpha release of PetSet in 1.3. From the [documentation](http://kubernetes.io/docs/user-guide/petset/):
"Deleting the Pet Set will not delete any pets. You will either have to manually scale it down to 0 pets first, or delete the pets yourself.
Deleting and/or scaling a Pet Set down will not delete the volumes associated with the Pet Set. This is done to ensure safety first, your data is more valuable than an auto purge of all related Pet Set resources. Deleting the Persistent Volume Claims will result in a deletion of the associated volumes."
"Deleting the StatefulSet will not delete any pods. You will either have to manually scale it down to 0 pods first, or delete the pods yourself.
Deleting and/or scaling a StatefulSet down will not delete the volumes associated with the StatefulSet. This is done to ensure safety first, your data is more valuable than an auto purge of all related StatefulSet resources. Deleting the Persistent Volume Claims will result in a deletion of the associated volumes."
Use the following commands to delete the Pet Set.
Use the following commands to delete the StatefulSet.
```console
$ grace=$(kubectl get po cassandra-0 --template '{{.spec.terminationGracePeriodSeconds}}') \
&& kubectl delete petset,po -l app=cassandra \
&& kubectl delete statefulset,po -l app=cassandra \
&& echo "Sleeping $grace" \
&& sleep $grace \
&& kubectl delete pvc -l app=cassandra
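$ kubectl get pods,persistentvolumeclaims -l app=cassandra   # confirm the pods and claims are gone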

View File

@ -65,7 +65,7 @@ spec:
timeoutSeconds: 5
# These volume mounts are persistent. They are like inline claims,
# but not exactly because the names need to match exactly one of
# the pet volumes.
# the stateful pod volumes.
volumeMounts:
- name: cassandra-data
mountPath: /cassandra_data

View File

@ -18,7 +18,7 @@ set -e
CONF_DIR=/etc/cassandra
CFG=$CONF_DIR/cassandra.yaml
# we are doing PetSet or just setting our seeds
# either we are running as a StatefulSet or we are just setting our seeds
if [ -z "$CASSANDRA_SEEDS" ]; then
HOSTNAME=$(hostname -f)
fi
@ -78,7 +78,7 @@ echo "auto_bootstrap: ${CASSANDRA_AUTO_BOOTSTRAP}" >> $CFG
# it will be able to get seeds from the seed provider
if [[ $CASSANDRA_SEEDS == 'false' ]]; then
sed -ri 's/- seeds:.*/- seeds: "'"$POD_IP"'"/' $CFG
else # if we have seeds set them. Probably PetSet
else # if we have seeds, set them (probably running as a StatefulSet)
sed -ri 's/- seeds:.*/- seeds: "'"$CASSANDRA_SEEDS"'"/' $CFG
fi
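# Illustrative note: in the StatefulSet case CASSANDRA_SEEDS is typically
# injected through the pod spec and points at the stable per-pod DNS names
# created by the headless "cassandra" service, for example:
#   CASSANDRA_SEEDS=cassandra-0.cassandra.default.svc.cluster.local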

View File

@ -2464,7 +2464,7 @@ __EOF__
# Pre-condition: no statefulset exists
kube::test::get_object_assert statefulset "{{range.items}}{{$id_field}}:{{end}}" ''
# Command: create statefulset
kubectl create -f hack/testdata/nginx-petset.yaml "${kube_flags[@]}"
kubectl create -f hack/testdata/nginx-statefulset.yaml "${kube_flags[@]}"
### Scale statefulset test with current-replicas and replicas
# Pre-condition: 0 replicas
@ -2476,12 +2476,12 @@ __EOF__
# Typically we'd wait and confirm that N>1 replicas are up, but this framework
# doesn't start the scheduler, so pet-0 will block all others.
# TODO: test robust scaling in an e2e.
wait-for-pods-with-label "app=nginx-petset" "nginx-0"
wait-for-pods-with-label "app=nginx-statefulset" "nginx-0"
### Clean up
kubectl delete -f hack/testdata/nginx-petset.yaml "${kube_flags[@]}"
kubectl delete -f hack/testdata/nginx-statefulset.yaml "${kube_flags[@]}"
# Post-condition: no pods from statefulset controller
wait-for-pods-with-label "app=nginx-petset" ""
wait-for-pods-with-label "app=nginx-statefulset" ""
######################

View File

@ -6,7 +6,7 @@ metadata:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
name: nginx
labels:
app: nginx-petset
app: nginx-statefulset
spec:
ports:
- port: 80
@ -14,7 +14,7 @@ spec:
# *.nginx.default.svc.cluster.local
clusterIP: None
selector:
app: nginx-petset
app: nginx-statefulset
---
apiVersion: apps/v1beta1
kind: StatefulSet
@ -26,7 +26,7 @@ spec:
template:
metadata:
labels:
app: nginx-petset
app: nginx-statefulset
annotations:
pod.alpha.kubernetes.io/initialized: "true"
spec:
@ -41,4 +41,3 @@ spec:
- sh
- -c
- 'while true; do sleep 1; done'