Edited README

markturansky 2015-04-17 10:42:25 -04:00
parent e1b885c9ad
commit 49883e7d01
2 changed files with 58 additions and 36 deletions

View File

@@ -1,69 +1,91 @@
# How To Use Persistent Volumes

The purpose of this guide is to help you become familiar with Kubernetes Persistent Volumes. By the end of the guide, we'll have
nginx serving content from your persistent volume.

This guide assumes knowledge of Kubernetes fundamentals and that a user has a cluster up and running.

## Provisioning

A PersistentVolume in Kubernetes represents a real piece of underlying storage capacity in the infrastructure. Cluster administrators
must first create storage (create their GCE disks, export their NFS shares, etc.) in order for Kubernetes to mount it.

PVs are intended for "network volumes" like GCE Persistent Disks, NFS shares, and AWS ElasticBlockStore volumes. `HostPath` was included
for ease of development and testing. You'll create a local `HostPath` for this example.

```
# this will be nginx's webroot
mkdir /tmp/data01
echo 'I love Kubernetes storage!' > /tmp/data01/index.html
```
PVs are created by posting them to the API server.
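
The guide doesn't reproduce `local-01.yaml`, but a minimal sketch of what such a `HostPath` PV definition might look like follows; the exact `apiVersion` and field spellings depend on your cluster version, so treat this as an assumption rather than a copy of the repo file:

```
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0001
spec:
  capacity:
    storage: 10Gi          # advertised capacity, matched against claims
  accessModes:
    - ReadWriteOnce        # RWO: mountable read-write by a single node
  hostPath:
    path: /tmp/data01      # the directory created above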
```

```
cluster/kubectl.sh create -f examples/persistent-volumes/volumes/local-01.yaml

cluster/kubectl.sh get pv

NAME      LABELS    CAPACITY      ACCESSMODES   STATUS      CLAIM
pv0001    map[]     10737418240   RWO           Available
```

## Requesting storage

Users of Kubernetes request persistent storage for their pods. They don't know how the underlying cluster is provisioned.
They just know they can rely on their claim to storage and they can manage its lifecycle independently from the many pods that may use it.

You must be in a namespace to create claims.
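
`claim-01.yaml` isn't reproduced here either; a sketch of what a claim like `myclaim-1` might contain, with the requested size being an assumption:

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim-1
spec:
  accessModes:
    - ReadWriteOnce        # must be satisfiable by an available PV
  resources:
    requests:
      storage: 3Gi         # any PV with at least this much capacity can match
```

Any available PV with a compatible access mode and at least the requested capacity can satisfy the claim.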

```
cluster/kubectl.sh create -f examples/persistent-volumes/claims/claim-01.yaml

cluster/kubectl.sh get pvc

NAME        LABELS    STATUS    VOLUME
myclaim-1   map[]

# A background process will attempt to match this claim to a volume.
# The eventual state of your claim will look something like this:

cluster/kubectl.sh get pvc

NAME        LABELS    STATUS    VOLUME
myclaim-1   map[]     Bound     f5c3a89a-e50a-11e4-972f-80e6500a981e

cluster/kubectl.sh get pv

NAME      LABELS    CAPACITY      ACCESSMODES   STATUS    CLAIM
pv0001    map[]     10737418240   RWO           Bound     myclaim-1 / 6bef4c40-e50b-11e4-972f-80e6500a981e
```

## Using your claim as a volume
Claims are used as volumes in pods. Kubernetes uses the claim to look up its bound PV. The PV is then exposed to the pod.
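
As a sketch of what `pod.yaml` might look like (the mount path, volume name, and container name here are assumptions, not copies of the repo file):

```
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: /usr/share/nginx/html   # nginx webroot, backed by the PV
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim-1                 # the claim bound above
```

The pod references only the claim by name; the binding to `pv0001` and the underlying `HostPath` stay invisible to the pod author.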
```
cluster/kubectl.sh create -f examples/persistent-volumes/simpletest/pod.yaml

cluster/kubectl.sh get pods

POD       IP           CONTAINER(S)   IMAGE(S)   HOST                  LABELS    STATUS    CREATED
mypod     172.17.0.2   myfrontend     nginx      127.0.0.1/127.0.0.1   <none>    Running   12 minutes

cluster/kubectl.sh create -f examples/persistent-volumes/simpletest/service.json

cluster/kubectl.sh get services

NAME            LABELS                                    SELECTOR   IP           PORT(S)
kubernetes      component=apiserver,provider=kubernetes   <none>     10.0.0.2     443/TCP
kubernetes-ro   component=apiserver,provider=kubernetes   <none>     10.0.0.1     80/TCP

# the service created from service.json is also listed, with its own cluster IP
# (10.0.0.168 in this example)

curl 10.0.0.168:3000
I love Kubernetes storage!
```
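
`service.json` is also not shown in the guide. A YAML-equivalent sketch of a service that could expose the pod on port 3000 follows; the service name and selector are assumptions, and the pod would need a matching label for the selector to route traffic to it:

```
kind: Service
apiVersion: v1
metadata:
  name: frontendservice
spec:
  selector:
    name: frontendhttp     # assumed label; must match a label on the pod
  ports:
    - port: 3000           # cluster IP port used in the curl above
      targetPort: 80       # nginx's container port
```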

View File

@@ -139,7 +139,7 @@ func matchStorageCapacity(pvA, pvB *api.PersistentVolume) bool {
 // filterBoundVolumes is a matchPredicate that filters bound volumes before comparing storage capacity
 func filterBoundVolumes(compareThis, toThis *api.PersistentVolume) bool {
-	if toThis.Spec.ClaimRef != nil || compareThis.Spec.ClaimRef != nil {
+	if compareThis.Spec.ClaimRef != nil || toThis.Spec.ClaimRef != nil {
 		return false
 	}
 	return matchStorageCapacity(compareThis, toThis)