Consolidated examples into storage/ and volume/ folders

Search and replace for references to moved examples

Reverted find and replace paths on auto gen docs

Reverting changes to changelog

Fix bugs in test-cmd.sh

Fixed path in examples README

Ran update-all successfully

Updated verify-flags exceptions to include renamed files
This commit is contained in:
Cindy Wang
2016-06-21 15:43:29 -07:00
parent 04602bb9e5
commit fedc513658
137 changed files with 324 additions and 409 deletions

View File

@@ -0,0 +1,65 @@
This is a simple web server pod which serves HTML from an AWS EBS
volume.
If you did not use the kube-up script, make sure that your minions have the following IAM permissions ([Amazon IAM Roles](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#create-iam-role-console)):
```shell
ec2:AttachVolume
ec2:DetachVolume
ec2:DescribeInstances
ec2:DescribeVolumes
```
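In IAM policy form, that corresponds to roughly the following (a sketch; in production you would scope `Resource` more tightly than `*`):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume",
        "ec2:DetachVolume",
        "ec2:DescribeInstances",
        "ec2:DescribeVolumes"
      ],
      "Resource": "*"
    }
  ]
}
```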
Create a volume in the same region as your node.
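For example, with the AWS CLI (the availability zone and size here are illustrative):

```shell
aws ec2 create-volume --availability-zone us-east-1a --size 10 --volume-type gp2
```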
Add your volume information in the pod description file aws-ebs-web.yaml then create the pod:
```shell
$ kubectl create -f examples/volumes/aws_ebs/aws-ebs-web.yaml
```
Add some data to the volume if it is empty:
```sh
$ echo "Hello World" >& /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/{Region}/{Volume ID}/index.html
```
You should now be able to query your web server:
```sh
$ curl <Pod IP address>
Hello World
```

View File

@@ -0,0 +1,21 @@
apiVersion: v1
kind: Pod
metadata:
  name: aws-web
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - name: web
      containerPort: 80
      protocol: TCP
    volumeMounts:
    - name: html-volume
      mountPath: "/usr/share/nginx/html"
  volumes:
  - name: html-volume
    awsElasticBlockStore:
      # Enter the volume ID below
      volumeID: volume_ID
      fsType: ext4

View File

@@ -0,0 +1,64 @@
# How to Use it?
Install *cifs-utils* on the Kubernetes host. For example, on Fedora based Linux:

```console
# yum -y install cifs-utils
```
Note, as explained in [Azure File Storage for Linux](https://azure.microsoft.com/en-us/documentation/articles/storage-how-to-use-files-linux/), the Linux hosts and the file share must be in the same Azure region.
Obtain a Microsoft Azure storage account and create a [secret](secret/azure-secret.yaml) that contains the base64-encoded Azure Storage account name and key. In the secret file, base64-encode the Azure Storage account name and pair it with the name *azurestorageaccountname*, and base64-encode the Azure Storage access key and pair it with the name *azurestorageaccountkey*.
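For example, the account name `k8stest` used by the sample secret encodes as:

```console
$ echo -n k8stest | base64
azhzdGVzdA==
```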
Then create a Pod using the volume spec based on [azure](azure.yaml).
In the pod, you need to provide the following information:
- *secretName*: the name of the secret that contains both Azure storage account name and key.
- *shareName*: The share name to be used.
- *readOnly*: Whether the filesystem is used as readOnly.
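For reference, the *azureFile* volume definition in [azure.yaml](azure.yaml) supplies these fields:

```yaml
volumes:
- name: azure
  azureFile:
    secretName: azure-secret
    shareName: k8stest
    readOnly: false
```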
Create the secret:
```console
# kubectl create -f examples/volumes/azure_file/secret/azure-secret.yaml
```
You should see the account name and key from `kubectl get secret`
Then create the Pod:
```console
# kubectl create -f examples/volumes/azure_file/azure.yaml
```

View File

@@ -0,0 +1,17 @@
apiVersion: v1
kind: Pod
metadata:
  name: azure
spec:
  containers:
  - image: kubernetes/pause
    name: azure
    volumeMounts:
    - name: azure
      mountPath: /mnt/azure
  volumes:
  - name: azure
    azureFile:
      secretName: azure-secret
      shareName: k8stest
      readOnly: false

View File

@@ -0,0 +1,8 @@
apiVersion: v1
kind: Secret
metadata:
  name: azure-secret
type: Opaque
data:
  azurestorageaccountname: azhzdGVzdA==
  azurestorageaccountkey: eElGMXpKYm5ub2pGTE1Ta0JwNTBteDAyckhzTUsyc2pVN21GdDRMMTNob0I3ZHJBYUo4akQ2K0E0NDNqSm9nVjd5MkZVT2hRQ1dQbU02WWFOSHk3cWc9PQ==

View File

@@ -0,0 +1,67 @@
# How to Use it?
Install Ceph on the Kubernetes host. For example, on Fedora 21:

```console
# yum -y install ceph
```

If you don't have a Ceph cluster, you can set up a [containerized Ceph cluster](https://github.com/rootfs/ceph_docker).
Then get the keyring from the Ceph cluster and copy it to */etc/ceph/keyring*.
Once you have installed Ceph and a Kubernetes cluster, you can create a pod based on my examples [cephfs.yaml](cephfs.yaml) and [cephfs-with-secret.yaml](cephfs-with-secret.yaml). In the pod yaml, you need to provide the following information.
- *monitors*: Array of Ceph monitors.
- *path*: Used as the mounted root, rather than the full Ceph tree. If not provided, default */* is used.
- *user*: The RADOS user name. If not provided, default *admin* is used.
- *secretFile*: The path to the keyring file. If not provided, default */etc/ceph/user.secret* is used.
- *secretRef*: Reference to Ceph authentication secrets. If provided, *secretRef* overrides *secretFile*.
- *readOnly*: Whether the filesystem is used as readOnly.
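For reference, the *cephfs* volume definition in [cephfs.yaml](cephfs.yaml) supplies these fields:

```yaml
cephfs:
  monitors:
  - 10.16.154.78:6789
  - 10.16.154.82:6789
  - 10.16.154.83:6789
  user: admin
  secretFile: "/etc/ceph/admin.secret"
  readOnly: true
```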
Here are the commands:
```console
# kubectl create -f examples/volumes/cephfs/cephfs.yaml
# create a secret if you want to use Ceph secret instead of secret file
# kubectl create -f examples/volumes/cephfs/secret/ceph-secret.yaml
# kubectl create -f examples/volumes/cephfs/cephfs-with-secret.yaml
# kubectl get pods
```
If you ssh to that machine, you can run `docker ps` to see the actual pod and `docker inspect` to see the volumes used by the container.

View File

@@ -0,0 +1,22 @@
apiVersion: v1
kind: Pod
metadata:
  name: cephfs2
spec:
  containers:
  - name: cephfs-rw
    image: kubernetes/pause
    volumeMounts:
    - mountPath: "/mnt/cephfs"
      name: cephfs
  volumes:
  - name: cephfs
    cephfs:
      monitors:
      - 10.16.154.78:6789
      - 10.16.154.82:6789
      - 10.16.154.83:6789
      user: admin
      secretRef:
        name: ceph-secret
      readOnly: true

View File

@@ -0,0 +1,23 @@
apiVersion: v1
kind: Pod
metadata:
  name: cephfs
spec:
  containers:
  - name: cephfs-rw
    image: kubernetes/pause
    volumeMounts:
    - mountPath: "/mnt/cephfs"
      name: cephfs
  volumes:
  - name: cephfs
    cephfs:
      monitors:
      - 10.16.154.78:6789
      - 10.16.154.82:6789
      - 10.16.154.83:6789
      # by default the path is /, but you can override and mount a specific path of the filesystem by using the path attribute
      # path: /some/path/in/side/cephfs
      user: admin
      secretFile: "/etc/ceph/admin.secret"
      readOnly: true

View File

@@ -0,0 +1,6 @@
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFCMTZWMVZvRjVtRXhBQTVrQ1FzN2JCajhWVUxSdzI2Qzg0SEE9PQ==

View File

@@ -0,0 +1,73 @@
## Step 1. Setting up Fibre Channel Target
On your FC SAN Zone manager, allocate and mask LUNs so Kubernetes hosts can access them.
## Step 2. Creating the Pod with Fibre Channel persistent storage
Once you have installed a Fibre Channel initiator and Kubernetes, you can create a pod based on my example [fc.yaml](fc.yaml). In the pod YAML, you need to provide *targetWWNs* (an array of the Fibre Channel target's World Wide Names), the *lun*, the *fsType* of the filesystem that has been created on the LUN, and the *readOnly* boolean.
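For reference, the *fc* volume definition in [fc.yaml](fc.yaml) supplies these fields:

```yaml
volumes:
- name: fc-vol
  fc:
    targetWWNs: ['500a0982991b8dc5', '500a0982891b8dc5']
    lun: 2
    fsType: ext4
    readOnly: true
```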
Once your pod spec is ready, create the pod on the Kubernetes master:
```console
kubectl create -f ./your_new_pod.json
```
Here is my command and output:
```console
# kubectl create -f examples/volumes/fibre_channel/fc.yaml
# kubectl get pods
NAME READY STATUS RESTARTS AGE
fcpd 2/2 Running 0 10m
```
On the Kubernetes host, I got these in the mount output:
```console
# mount | grep /var/lib/kubelet/plugins/kubernetes.io
/dev/mapper/360a98000324669436c2b45666c567946 on /var/lib/kubelet/plugins/kubernetes.io/fc/500a0982991b8dc5-lun-2 type ext4 (ro,relatime,seclabel,stripe=16,data=ordered)
/dev/mapper/360a98000324669436c2b45666c567944 on /var/lib/kubelet/plugins/kubernetes.io/fc/500a0982991b8dc5-lun-1 type ext4 (rw,relatime,seclabel,stripe=16,data=ordered)
```
If you ssh to that machine, you can run `docker ps` to see the actual pod.
```console
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
090ac457ddc2 kubernetes/pause "/pause" 12 minutes ago Up 12 minutes k8s_fcpd-rw.aae720ec_fcpd_default_4024318f-4121-11e5-a294-e839352ddd54_99eb5415
5e2629cf3e7b kubernetes/pause "/pause" 12 minutes ago Up 12 minutes k8s_fcpd-ro.857720dc_fcpd_default_4024318f-4121-11e5-a294-e839352ddd54_c0175742
2948683253f7 gcr.io/google_containers/pause:0.8.0 "/pause" 12 minutes ago Up 12 minutes k8s_POD.7be6d81d_fcpd_default_4024318f-4121-11e5-a294-e839352ddd54_8d9dd7bf
```

View File

@@ -0,0 +1,18 @@
apiVersion: v1
kind: Pod
metadata:
  name: fc
spec:
  containers:
  - image: kubernetes/pause
    name: fc
    volumeMounts:
    - name: fc-vol
      mountPath: /mnt/fc
  volumes:
  - name: fc-vol
    fc:
      targetWWNs: ['500a0982991b8dc5', '500a0982891b8dc5']
      lun: 2
      fsType: ext4
      readOnly: true

View File

@@ -0,0 +1,113 @@
# Flexvolume
Flexvolume enables users to mount vendor volumes into Kubernetes. It expects vendor drivers to be installed in the volume plugin path on every kubelet node.
It allows vendors to develop their own drivers to mount volumes on nodes.
*Note: Flexvolume is an alpha feature and is likely to change in the future.*
## Prerequisites
Install the vendor driver on all nodes in the kubelet plugin path. Path for installing the plugin: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/\<vendor~driver\>/\<driver\>
For example, to add a 'cifs' driver from vendor 'foo', install the driver at: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/foo~cifs/cifs
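For instance, to install the example lvm driver from this directory under the `kubernetes.io/lvm` driver name used in [nginx.yaml](nginx.yaml), you might do:

```
mkdir -p /usr/libexec/kubernetes/kubelet-plugins/volume/exec/kubernetes.io~lvm
cp lvm /usr/libexec/kubernetes/kubelet-plugins/volume/exec/kubernetes.io~lvm/lvm
```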
## Plugin details
The driver will be invoked with 'init' to initialize it, with 'attach' to attach a volume, and with 'detach' to detach a volume from the kubelet node. It also supports custom mounts via 'mount' and 'unmount' callouts to the driver.
### Driver invocation model:
Init:
```
<driver executable> init
```
Attach:
```
<driver executable> attach <json options>
```
Detach:
```
<driver executable> detach <mount device>
```
Mount:
```
<driver executable> mount <target mount dir> <mount device> <json options>
```
Unmount:
```
<driver executable> unmount <mount dir>
```
See [lvm](lvm) for a quick example of how to write a simple flexvolume driver.
### Driver output:
Flexvolume expects the driver to reply with the status of the operation in the
following format.
```
{
"status": "<Success/Failure>",
"message": "<Reason for success/failure>",
"device": "<Path to the device attached. This field is valid only for attach calls>"
}
```
### Default JSON options
In addition to the flags specified by the user in the Options field of the FlexVolumeSource, the following flags are also passed to the executable.
```
"kubernetes.io/fsType":"<FS type>",
"kubernetes.io/readwrite":"<rw>",
"kubernetes.io/secret/key1":"<secret1>"
...
"kubernetes.io/secret/keyN":"<secretN>"
```
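As a sketch, the lvm driver in this example would then receive attach options along these lines (combining the `options` from [nginx.yaml](nginx.yaml) with the default flags; the exact set of keys can vary):

```
{
  "volumeID": "vol1",
  "size": "1000m",
  "volumegroup": "kube_vg",
  "kubernetes.io/fsType": "ext4",
  "kubernetes.io/readwrite": "rw"
}
```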
### Example of Flexvolume
See [nginx.yaml](nginx.yaml) for a quick example on how to use Flexvolume in a pod.

examples/volumes/flexvolume/lvm Executable file
View File

@@ -0,0 +1,152 @@
#!/bin/bash

# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Notes:
#  - Please install "jq" package before using this driver.

usage() {
	err "Invalid usage. Usage: "
	err "\t$0 init"
	err "\t$0 attach <json params>"
	err "\t$0 detach <mount device>"
	err "\t$0 mount <mount dir> <mount device> <json params>"
	err "\t$0 unmount <mount dir>"
	exit 1
}

err() {
	echo -ne $* 1>&2
}

log() {
	echo -ne $* >&1
}

ismounted() {
	MOUNT=`findmnt -n ${MNTPATH} 2>/dev/null | cut -d' ' -f1`
	if [ "${MOUNT}" == "${MNTPATH}" ]; then
		echo "1"
	else
		echo "0"
	fi
}

attach() {
	VOLUMEID=$(echo $1 | jq -r '.volumeID')
	SIZE=$(echo $1 | jq -r '.size')
	VG=$(echo $1 | jq -r '.volumegroup')

	# LVM substitutes - with --
	VOLUMEID=`echo $VOLUMEID | sed s/-/--/g`
	VG=`echo $VG | sed s/-/--/g`

	DMDEV="/dev/mapper/${VG}-${VOLUMEID}"
	if [ ! -b "${DMDEV}" ]; then
		err "{\"status\": \"Failure\", \"message\": \"Volume ${VOLUMEID} does not exist\"}"
		exit 1
	fi
	log "{\"status\": \"Success\", \"device\":\"${DMDEV}\"}"
	exit 0
}

detach() {
	log "{\"status\": \"Success\"}"
	exit 0
}

domount() {
	MNTPATH=$1
	DMDEV=$2
	FSTYPE=$(echo $3 | jq -r '.["kubernetes.io/fsType"]')

	if [ ! -b "${DMDEV}" ]; then
		err "{\"status\": \"Failure\", \"message\": \"${DMDEV} does not exist\"}"
		exit 1
	fi

	if [ $(ismounted) -eq 1 ] ; then
		log "{\"status\": \"Success\"}"
		exit 0
	fi

	VOLFSTYPE=`blkid -o udev ${DMDEV} 2>/dev/null | grep "ID_FS_TYPE" | cut -d"=" -f2`
	if [ "${VOLFSTYPE}" == "" ]; then
		mkfs -t ${FSTYPE} ${DMDEV} >/dev/null 2>&1
		if [ $? -ne 0 ]; then
			err "{ \"status\": \"Failure\", \"message\": \"Failed to create fs ${FSTYPE} on device ${DMDEV}\"}"
			exit 1
		fi
	fi

	mkdir -p ${MNTPATH} &> /dev/null
	mount ${DMDEV} ${MNTPATH} &> /dev/null
	if [ $? -ne 0 ]; then
		err "{ \"status\": \"Failure\", \"message\": \"Failed to mount device ${DMDEV} at ${MNTPATH}\"}"
		exit 1
	fi
	log "{\"status\": \"Success\"}"
	exit 0
}

unmount() {
	MNTPATH=$1
	if [ $(ismounted) -eq 0 ] ; then
		log "{\"status\": \"Success\"}"
		exit 0
	fi

	umount ${MNTPATH} &> /dev/null
	if [ $? -ne 0 ]; then
		err "{ \"status\": \"Failed\", \"message\": \"Failed to unmount volume at ${MNTPATH}\"}"
		exit 1
	fi
	rmdir ${MNTPATH} &> /dev/null

	log "{\"status\": \"Success\"}"
	exit 0
}

op=$1

if [ "$op" = "init" ]; then
	log "{\"status\": \"Success\"}"
	exit 0
fi

if [ $# -lt 2 ]; then
	usage
fi

shift

case "$op" in
attach)
	attach $*
	;;
detach)
	detach $*
	;;
mount)
	domount $*
	;;
unmount)
	unmount $*
	;;
*)
	usage
esac

exit 1

View File

@@ -0,0 +1,23 @@
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: test
      mountPath: /data
    ports:
    - containerPort: 80
  volumes:
  - name: test
    flexVolume:
      driver: "kubernetes.io/lvm"
      fsType: "ext4"
      options:
        volumeID: "vol1"
        size: "1000m"
        volumegroup: "kube_vg"

View File

@@ -0,0 +1,144 @@
## Using Flocker volumes
[Flocker](https://clusterhq.com/flocker) is an open-source clustered container data volume manager. It provides management
and orchestration of data volumes backed by a variety of storage backends.
This example provides information about how to set up a Flocker installation and configure it in Kubernetes, as well as how to use the plugin to mount Flocker datasets as volumes in Kubernetes.
### Prerequisites
A Flocker cluster is required to use Flocker with Kubernetes. A Flocker cluster comprises:
- *Flocker Control Service*: provides a REST over HTTP API to modify the desired configuration of the cluster;
- *Flocker Dataset Agent(s)*: a convergence agent that modifies the cluster state to match the desired configuration;
- *Flocker Container Agent(s)*: a convergence agent that modifies the cluster state to match the desired configuration (unused in this configuration but still required in the cluster).
The Flocker cluster can be installed on the same nodes you are using for Kubernetes. For instance, you can install the Flocker Control Service on the same node as Kubernetes Master and Flocker Dataset/Container Agents on every Kubernetes Slave node.
It is recommended to follow [Installing Flocker](https://docs.clusterhq.com/en/latest/install/index.html) and the instructions below to set up the Flocker cluster to be used with Kubernetes.
#### Flocker Control Service
The Flocker Control Service should be installed manually on a host. In the future, this may be deployed in pod(s) and exposed as a Kubernetes service.
#### Flocker Agent(s)
The Flocker Agents should be manually installed on *all* Kubernetes nodes. These agents are responsible for (de)attachment and (un)mounting and are therefore services that should be run with appropriate privileges on these hosts.
In order for the plugin to connect to Flocker (via REST API), several environment variables must be specified on *all* Kubernetes nodes. These may be specified in an init script for the node's Kubelet service; for example, you could store the environment variables below in a file called `/etc/flocker/env` and place `EnvironmentFile=/etc/flocker/env` into `/etc/systemd/system/kubelet.service` or wherever the `kubelet.service` file lives.
The environment variables that need to be set are:
- `FLOCKER_CONTROL_SERVICE_HOST` should refer to the hostname of the Control Service
- `FLOCKER_CONTROL_SERVICE_PORT` should refer to the port of the Control Service (the API service defaults to 4523 but this must still be specified)
The following environment variables should refer to keys and certificates on the host that are specific to that host.
- `FLOCKER_CONTROL_SERVICE_CA_FILE` should refer to the full path to the cluster certificate file
- `FLOCKER_CONTROL_SERVICE_CLIENT_KEY_FILE` should refer to the full path to the [api key](https://docs.clusterhq.com/en/latest/config/generate-api-plugin.html) file for the API user
- `FLOCKER_CONTROL_SERVICE_CLIENT_CERT_FILE` should refer to the full path to the [api certificate](https://docs.clusterhq.com/en/latest/config/generate-api-plugin.html) file for the API user
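A minimal `/etc/flocker/env` might therefore look like this (the hostname and file paths are illustrative):

```sh
FLOCKER_CONTROL_SERVICE_HOST=flocker-control.example.com
FLOCKER_CONTROL_SERVICE_PORT=4523
FLOCKER_CONTROL_SERVICE_CA_FILE=/etc/flocker/cluster.crt
FLOCKER_CONTROL_SERVICE_CLIENT_KEY_FILE=/etc/flocker/api.key
FLOCKER_CONTROL_SERVICE_CLIENT_CERT_FILE=/etc/flocker/api.crt
```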
More details regarding cluster authentication can be found at the documentation: [Flocker Cluster Security & Authentication](https://docs.clusterhq.com/en/latest/concepts/security.html) and [Configuring Cluster Authentication](https://docs.clusterhq.com/en/latest/config/configuring-authentication.html).
### Create a pod with a Flocker volume
**Note**: A new dataset must first be provisioned using the Flocker tools or Docker CLI *(To use the Docker CLI, you need the [Flocker plugin for Docker](https://clusterhq.com/docker-plugin/) installed along with Docker 1.9+)*. For example, using the [Volumes CLI](https://docs.clusterhq.com/en/latest/labs/volumes-cli.html), create a new dataset called 'my-flocker-vol' of size 10GB:
```sh
flocker-volumes create -m name=my-flocker-vol -s 10G -n <node-uuid>
# -n or --node= Is the initial primary node for dataset (any unique
# prefix of node uuid, see flocker-volumes list-nodes)
```
The following *volume* spec from the [example pod](flocker-pod.yml) illustrates how to use this Flocker dataset as a volume.
> Note, the [example pod](flocker-pod.yml) used here does not include a replication controller, so the pod will not be rescheduled upon failure. If you're looking for an example that does include a replication controller and service spec, see [this example pod with a replication controller](flocker-pod-with-rc.yml).
```yaml
volumes:
- name: www-root
flocker:
datasetName: my-flocker-vol
```
- **datasetName** is the unique name for the Flocker dataset and should match the *name* in the metadata.
Use `kubectl` to create the pod.
```sh
$ kubectl create -f examples/volumes/flocker/flocker-pod.yml
```
You should now verify that the pod is running and determine its IP address:
```sh
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
flocker 1/1 Running 0 3m
$ kubectl get pods flocker -t '{{.status.hostIP}}{{"\n"}}'
172.31.25.62
```
An `ls` of the `/flocker` directory on the host (identified by the IP as above) will show the mount point for the volume.
```sh
$ ls /flocker
0cf8789f-00da-4da0-976a-b6b1dc831159
```
You can also see the mountpoint by inspecting the docker container on that host.
```sh
$ docker inspect -f "{{.Mounts}}" <container-id> | grep flocker
...{ /flocker/0cf8789f-00da-4da0-976a-b6b1dc831159 /usr/share/nginx/html true}
```
Add an index.html inside this directory and use `curl` to see this HTML file served up by nginx.
```sh
$ echo "<h1>Hello, World</h1>" | tee /flocker/0cf8789f-00da-4da0-976a-b6b1dc831159/index.html
$ curl <ip>
```
### More Info
Read more about the [Flocker Cluster Architecture](https://docs.clusterhq.com/en/latest/concepts/architecture.html) and learn more about Flocker by visiting the [Flocker Documentation](https://docs.clusterhq.com/).
#### Video Demo
To see a demo example of using Kubernetes and Flocker, visit [Flocker's blog post on High Availability with Kubernetes and Flocker](https://clusterhq.com/2015/12/22/ha-demo-kubernetes-flocker/)

View File

@@ -0,0 +1,47 @@
apiVersion: v1
kind: Service
metadata:
  name: flocker-ghost
  labels:
    app: flocker-ghost
spec:
  ports:
    # the port that this service should serve on
    - port: 80
      targetPort: 80
  selector:
    app: flocker-ghost
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: flocker-ghost
  # these labels can be applied automatically
  # from the labels in the pod template if not set
  labels:
    purpose: demo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: flocker-ghost
    spec:
      containers:
      - name: flocker-ghost
        image: ghost:0.7.1
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 2368
          hostPort: 80
          protocol: TCP
        volumeMounts:
        # name must match the volume name below
        - name: ghost-data
          mountPath: "/var/lib/ghost"
      volumes:
      - name: ghost-data
        flocker:
          datasetName: my-flocker-vol

View File

@@ -0,0 +1,19 @@
apiVersion: v1
kind: Pod
metadata:
  name: flocker-web
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - name: web
      containerPort: 80
    volumeMounts:
    # name must match the volume name below
    - name: www-root
      mountPath: "/usr/share/nginx/html"
  volumes:
  - name: www-root
    flocker:
      datasetName: my-flocker-vol

View File

@@ -0,0 +1,133 @@
## Glusterfs
[Glusterfs](http://www.gluster.org) is an open source scale-out filesystem. These examples provide information about how to allow containers to use Glusterfs volumes.
The example assumes that you have already set up a Glusterfs server cluster and the Glusterfs client package is installed on all Kubernetes nodes.
### Prerequisites
Set up Glusterfs server cluster; install Glusterfs client package on the Kubernetes nodes. ([Guide](https://www.howtoforge.com/high-availability-storage-with-glusterfs-3.2.x-on-debian-wheezy-automatic-file-replication-mirror-across-two-storage-servers))
### Create endpoints
Here is a snippet of [glusterfs-endpoints.json](glusterfs-endpoints.json),
```
"addresses": [
{
"IP": "10.240.106.152"
}
],
"ports": [
{
"port": 1
}
]
```
The "IP" field should be filled with the address of a node in the Glusterfs server cluster. In this example, it is fine to give any valid value (from 1 to 65535) to the "port" field.
Create the endpoints,
```sh
$ kubectl create -f examples/volumes/glusterfs/glusterfs-endpoints.json
```
You can verify that the endpoints are successfully created by running
```sh
$ kubectl get endpoints
NAME ENDPOINTS
glusterfs-cluster 10.240.106.152:1,10.240.79.157:1
```
We also need to create a service for these endpoints, so that they will persist. We will add this service without a selector to tell Kubernetes we want to add its endpoints manually. You can see [glusterfs-service.json](glusterfs-service.json) for details.
Use this command to create the service:
```sh
$ kubectl create -f examples/volumes/glusterfs/glusterfs-service.json
```
### Create a POD
The following *volume* spec in [glusterfs-pod.json](glusterfs-pod.json) illustrates a sample configuration.
```json
{
"name": "glusterfsvol",
"glusterfs": {
"endpoints": "glusterfs-cluster",
"path": "kube_vol",
"readOnly": true
}
}
```
The parameters are explained as follows.
- **endpoints** is the name of the Endpoints object that represents a Gluster cluster configuration. *kubelet* is optimized to avoid mount storms; it will randomly pick one host from the endpoints to mount. If that host is unresponsive, the next Gluster host in the endpoints is automatically selected.
- **path** is the Glusterfs volume name.
- **readOnly** is the boolean that sets the mountpoint readOnly or readWrite.
Create a pod that has a container using Glusterfs volume,
```sh
$ kubectl create -f examples/volumes/glusterfs/glusterfs-pod.json
```
You can verify that the pod is running:
```sh
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
glusterfs 1/1 Running 0 3m
$ kubectl get pods glusterfs -t '{{.status.hostIP}}{{"\n"}}'
10.240.169.172
```
You may ssh to the host (the hostIP) and run 'mount' to see if the Glusterfs volume is mounted,
```sh
$ mount | grep kube_vol
10.240.106.152:kube_vol on /var/lib/kubelet/pods/f164a571-fa68-11e4-ad5c-42010af019b7/volumes/kubernetes.io~glusterfs/glusterfsvol type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
```
You may also run `docker ps` on the host to see the actual container.

View File

@@ -0,0 +1,33 @@
{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "subsets": [
    {
      "addresses": [
        {
          "ip": "10.240.106.152"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    },
    {
      "addresses": [
        {
          "ip": "10.240.79.157"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    }
  ]
}

View File

@@ -0,0 +1,32 @@
{
  "apiVersion": "v1",
  "id": "glusterfs",
  "kind": "Pod",
  "metadata": {
    "name": "glusterfs"
  },
  "spec": {
    "containers": [
      {
        "name": "glusterfs",
        "image": "kubernetes/pause",
        "volumeMounts": [
          {
            "mountPath": "/mnt/glusterfs",
            "name": "glusterfsvol"
          }
        ]
      }
    ],
    "volumes": [
      {
        "name": "glusterfsvol",
        "glusterfs": {
          "endpoints": "glusterfs-cluster",
          "path": "kube_vol",
          "readOnly": true
        }
      }
    ]
  }
}

View File

@@ -0,0 +1,12 @@
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "spec": {
    "ports": [
      {"port": 1}
    ]
  }
}

View File

@@ -0,0 +1,119 @@
## Introduction
The Kubernetes iSCSI implementation can connect to iSCSI devices via open-iscsi and multipathd on Linux.
Currently supported features are
* Connecting to one portal
* Mounting a device directly or via multipathd
* Formatting and partitioning any new device connected
## Prerequisites
This example expects there to be a working iSCSI target to connect to.
If there isn't one in place, it is possible to set up a software target on Linux by following these guides:
* [Setup a iSCSI target on Fedora](http://www.server-world.info/en/note?os=Fedora_21&p=iscsi)
* [Install the iSCSI initiator on Fedora](http://www.server-world.info/en/note?os=Fedora_21&p=iscsi&f=2)
* [Install multipathd for mpio support if required](http://www.linuxstories.eu/2014/07/how-to-setup-dm-multipath-on-rhel.html)
## Creating the pod with iSCSI persistent storage
Once you have configured the iSCSI initiator, you can create a pod based on the example *iscsi.yaml*. In the pod YAML, you need to provide *targetPortal* (the iSCSI target's **IP** address and *port* if not the default port 3260), the target's *iqn*, the *lun*, the *fsType* of the filesystem that has been created on the LUN, and the *readOnly* boolean. No initiator information is required.
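For reference, one of the *iscsi* volume definitions in [iscsi.yaml](iscsi.yaml) supplies these fields:

```yaml
iscsi:
  targetPortal: 10.0.2.15:3260
  iqn: iqn.2001-04.com.example:storage.kube.sys1.xyz
  lun: 0
  fsType: ext4
  readOnly: true
```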
If you want to use an iSCSI offload card or other open-iscsi transports besides tcp, set up an iSCSI interface and provide *iscsiInterface* in the pod YAML. The default name for an iscsi iface (open-iscsi parameter iface.iscsi\_ifacename) is in the format transport\_name.hwaddress when generated by iscsiadm. See [open-iscsi](http://www.open-iscsi.org/docs/README) or [openstack](http://docs.openstack.org/kilo/config-reference/content/iscsi-iface-config.html) for detailed configuration information.
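For example, a dedicated iface for an offload transport could be created with iscsiadm along these lines (the transport and hardware address here are illustrative):

```console
# iscsiadm -m iface -I bnx2i.00:05:b5:d2:e0:cb --op=new
```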
**Note:** If you have followed the instructions in the links above, you
may have partitioned the device. The iSCSI volume plugin does not
currently support partitions, so format the device as a single partition or leave the device raw and Kubernetes will partition and format it on first mount.
Once the pod config is created, run it on the Kubernetes master:
```console
kubectl create -f ./your_new_pod.yaml
```
Here is the example pod created and expected output:
```console
# kubectl create -f examples/volumes/iscsi/iscsi.yaml
# kubectl get pods
NAME READY STATUS RESTARTS AGE
iscsipd 2/2 RUNNING 0 2m
```
On the Kubernetes node, verify the mount output.
For a non-mpio device, the output should look like the following:
```console
# mount |grep kub
/dev/sdb on /var/lib/kubelet/plugins/kubernetes.io/iscsi/10.0.2.15:3260-iqn.2001-04.com.example:storage.kube.sys1.xyz-lun-0 type ext4 (ro,relatime,data=ordered)
/dev/sdb on /var/lib/kubelet/pods/f527ca5b-6d87-11e5-aa7e-080027ff6387/volumes/kubernetes.io~iscsi/iscsipd-ro type ext4 (ro,relatime,data=ordered)
/dev/sdc on /var/lib/kubelet/plugins/kubernetes.io/iscsi/10.0.2.15:3260-iqn.2001-04.com.example:storage.kube.sys1.xyz-lun-1 type ext4 (rw,relatime,data=ordered)
/dev/sdc on /var/lib/kubelet/pods/f527ca5b-6d87-11e5-aa7e-080027ff6387/volumes/kubernetes.io~iscsi/iscsipd-rw type ext4 (rw,relatime,data=ordered)
```
And for a node with mpio enabled, the expected output would be similar to the following:
```console
# mount |grep kub
/dev/mapper/mpatha on /var/lib/kubelet/plugins/kubernetes.io/iscsi/10.0.2.15:3260-iqn.2001-04.com.example:storage.kube.sys1.xyz-lun-0 type ext4 (ro,relatime,data=ordered)
/dev/mapper/mpatha on /var/lib/kubelet/pods/f527ca5b-6d87-11e5-aa7e-080027ff6387/volumes/kubernetes.io~iscsi/iscsipd-ro type ext4 (ro,relatime,data=ordered)
/dev/mapper/mpathb on /var/lib/kubelet/plugins/kubernetes.io/iscsi/10.0.2.15:3260-iqn.2001-04.com.example:storage.kube.sys1.xyz-lun-1 type ext4 (rw,relatime,data=ordered)
/dev/mapper/mpathb on /var/lib/kubelet/pods/f527ca5b-6d87-11e5-aa7e-080027ff6387/volumes/kubernetes.io~iscsi/iscsipd-rw type ext4 (rw,relatime,data=ordered)
```
If you ssh to that machine, you can run `docker ps` to see the actual pod.
```console
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f855336407f4 kubernetes/pause "/pause" 6 minutes ago Up 6 minutes k8s_iscsipd-ro.d130ec3e_iscsipd_default_f527ca5b-6d87-11e5-aa7e-080027ff6387_5409a4cb
3b8a772515d2 kubernetes/pause "/pause" 6 minutes ago Up 6 minutes k8s_iscsipd-rw.ed58ec4e_iscsipd_default_f527ca5b-6d87-11e5-aa7e-080027ff6387_d25592c5
```
Run *docker inspect* and verify the containers mounted the host directory into their */mnt/iscsipd* directory.
```console
# docker inspect --format '{{ range .Mounts }}{{ if eq .Destination "/mnt/iscsipd" }}{{ .Source }}{{ end }}{{ end }}' f855336407f4
/var/lib/kubelet/pods/f527ca5b-6d87-11e5-aa7e-080027ff6387/volumes/kubernetes.io~iscsi/iscsipd-ro
# docker inspect --format '{{ range .Mounts }}{{ if eq .Destination "/mnt/iscsipd" }}{{ .Source }}{{ end }}{{ end }}' 3b8a772515d2
/var/lib/kubelet/pods/f527ca5b-6d87-11e5-aa7e-080027ff6387/volumes/kubernetes.io~iscsi/iscsipd-rw
```

View File

@@ -0,0 +1,32 @@
---
apiVersion: v1
kind: Pod
metadata:
  name: iscsipd
spec:
  containers:
  - name: iscsipd-ro
    image: kubernetes/pause
    volumeMounts:
    - mountPath: "/mnt/iscsipd"
      name: iscsipd-ro
  - name: iscsipd-rw
    image: kubernetes/pause
    volumeMounts:
    - mountPath: "/mnt/iscsipd"
      name: iscsipd-rw
  volumes:
  - name: iscsipd-ro
    iscsi:
      targetPortal: 10.0.2.15:3260
      iqn: iqn.2001-04.com.example:storage.kube.sys1.xyz
      lun: 0
      fsType: ext4
      readOnly: true
  - name: iscsipd-rw
    iscsi:
      targetPortal: 10.0.2.15:3260
      iqn: iqn.2001-04.com.example:storage.kube.sys1.xyz
      lun: 1
      fsType: ext4
      readOnly: false

View File

@@ -0,0 +1,204 @@
# Outline
This example describes how to create a Web frontend server, an auto-provisioned persistent volume on GCE, and an NFS-backed persistent claim.
Demonstrated Kubernetes Concepts:
* [Persistent Volumes](http://kubernetes.io/docs/user-guide/persistent-volumes/) to
define persistent disks (disk lifecycle not tied to the Pods).
* [Services](http://kubernetes.io/docs/user-guide/services/) to enable Pods to
locate one another.
![alt text][nfs pv example]
As illustrated above, two persistent volumes are used in this example:
- Web frontend Pod uses a persistent volume based on NFS server, and
- NFS server uses an auto provisioned [persistent volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) from GCE PD or AWS EBS.
Note, this example uses an NFS container that doesn't support NFSv4.
[nfs pv example]: nfs-pv.png
## tl;dr Quickstart
```console
$ kubectl create -f examples/volumes/nfs/provisioner/nfs-server-gce-pv.yaml
$ kubectl create -f examples/volumes/nfs/nfs-server-rc.yaml
$ kubectl create -f examples/volumes/nfs/nfs-server-service.yaml
# get the cluster IP of the server using the following command
$ kubectl describe services nfs-server
# use the NFS server IP to update nfs-pv.yaml and execute the following
$ kubectl create -f examples/volumes/nfs/nfs-pv.yaml
$ kubectl create -f examples/volumes/nfs/nfs-pvc.yaml
# run a fake backend
$ kubectl create -f examples/volumes/nfs/nfs-busybox-rc.yaml
# get pod name from this command
$ kubectl get pod -l name=nfs-busybox
# use the pod name to check the test file
$ kubectl exec nfs-busybox-jdhf3 -- cat /mnt/index.html
```
## Example of NFS based persistent volume
See [NFS Service and Replication Controller](nfs-web-rc.yaml) for a quick example of how to use an NFS
volume claim in a replication controller. It relies on the
[NFS persistent volume](nfs-pv.yaml) and
[NFS persistent volume claim](nfs-pvc.yaml) in this example as well.
## Complete setup
The example below shows how to export an NFS share from a single-pod replication
controller and import it into two replication controllers.
### NFS server part
Define [the NFS Service and Replication Controller](nfs-server-rc.yaml) and
[NFS service](nfs-server-service.yaml):
The NFS server exports an auto-provisioned persistent volume backed by GCE PD:
```console
$ kubectl create -f examples/volumes/nfs/provisioner/nfs-server-gce-pv.yaml
```
```console
$ kubectl create -f examples/volumes/nfs/nfs-server-rc.yaml
$ kubectl create -f examples/volumes/nfs/nfs-server-service.yaml
```
The directory contains a dummy `index.html`. Wait until the pod is running
by checking `kubectl get pods -l role=nfs-server`.
### Create the NFS based persistent volume claim
The [NFS busybox controller](nfs-busybox-rc.yaml) uses a simple script to
generate data written to the NFS server we just started. First, you'll need to
find the cluster IP of the server:
```console
$ kubectl describe services nfs-server
```
Replace the invalid IP in the [nfs PV](nfs-pv.yaml). (In the future,
we'll be able to tie these together using the service names, but for
now, you have to hardcode the IP.)
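One way to make that substitution (assuming `kubectl describe services nfs-server` reported, say, 10.0.171.239):

```console
$ sed -i 's/10.244.1.4/10.0.171.239/' examples/volumes/nfs/nfs-pv.yaml
```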
Create the [persistent volume](../../../docs/user-guide/persistent-volumes.md)
and the persistent volume claim for your NFS server. The persistent volume and
claim give us an indirection that allows multiple pods to refer to the NFS
server using a symbolic name rather than the hardcoded server address.
```console
$ kubectl create -f examples/volumes/nfs/nfs-pv.yaml
$ kubectl create -f examples/volumes/nfs/nfs-pvc.yaml
```
## Setup the fake backend
The [NFS busybox controller](nfs-busybox-rc.yaml) updates `index.html` on the
NFS server every 10 seconds. Let's start that now:
```console
$ kubectl create -f examples/volumes/nfs/nfs-busybox-rc.yaml
```
Conveniently, it's also a `busybox` pod, so we can get an early check
that our mounts are working now. Find a busybox pod and exec:
```console
$ kubectl get pod -l name=nfs-busybox
NAME READY STATUS RESTARTS AGE
nfs-busybox-jdhf3 1/1 Running 0 25m
nfs-busybox-w3s4t 1/1 Running 0 25m
$ kubectl exec nfs-busybox-jdhf3 -- cat /mnt/index.html
Thu Oct 22 19:20:18 UTC 2015
nfs-busybox-w3s4t
```
You should see output similar to the above if everything is working well. If
it's not, make sure you changed the invalid IP in the [NFS PV](nfs-pv.yaml) file
and make sure the `describe services` command above had endpoints listed
(indicating the service was associated with a running pod).
### Setup the web server
The [web server controller](nfs-web-rc.yaml) is another simple replication
controller that demonstrates reading from the NFS share exported above as an NFS
volume, running a simple web server on it.
Define the pod:
```console
$ kubectl create -f examples/volumes/nfs/nfs-web-rc.yaml
```
This creates two pods, each of which serve the `index.html` from above. We can
then use a simple service to front it:
```console
kubectl create -f examples/volumes/nfs/nfs-web-service.yaml
```
We can then use the busybox container we launched before to check that `nginx`
is serving the data appropriately:
```console
$ kubectl get pod -l name=nfs-busybox
NAME READY STATUS RESTARTS AGE
nfs-busybox-jdhf3 1/1 Running 0 1h
nfs-busybox-w3s4t 1/1 Running 0 1h
$ kubectl get services nfs-web
NAME LABELS SELECTOR IP(S) PORT(S)
nfs-web <none> role=web-frontend 10.0.68.37 80/TCP
$ kubectl exec nfs-busybox-jdhf3 -- wget -qO- http://10.0.68.37
Thu Oct 22 19:28:55 UTC 2015
nfs-busybox-w3s4t
```

View File

@@ -0,0 +1,32 @@
# This mounts the nfs volume claim into /mnt and continuously
# overwrites /mnt/index.html with the time and hostname of the pod.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-busybox
spec:
  replicas: 2
  selector:
    name: nfs-busybox
  template:
    metadata:
      labels:
        name: nfs-busybox
    spec:
      containers:
      - image: busybox
        command:
          - sh
          - -c
          - 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done'
        imagePullPolicy: IfNotPresent
        name: busybox
        volumeMounts:
          # name must match the volume name below
          - name: nfs
            mountPath: "/mnt"
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs

View File

@@ -0,0 +1,26 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM centos
MAINTAINER Jan Safranek, jsafrane@redhat.com; Huamin Chen, hchen@redhat.com
RUN yum -y install /usr/bin/ps nfs-utils && yum clean all
RUN mkdir -p /exports
ADD run_nfs.sh /usr/local/bin/
ADD index.html /tmp/index.html
RUN chmod 644 /tmp/index.html
# expose mountd 20048/tcp and nfsd 2049/tcp and rpcbind 111/tcp
EXPOSE 2049/tcp 20048/tcp 111/tcp 111/udp
ENTRYPOINT ["/usr/local/bin/run_nfs.sh", "/exports"]

View File

@@ -0,0 +1,42 @@
# NFS-exporter container with a file
This container exports /exports with index.html in it via NFS. Based on
../exports. Since some Linux kernels have issues running NFSv4 daemons in containers,
only NFSv3 is opened in this container.
Available as `gcr.io/google-samples/nfs-server`
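For a quick smoke test outside Kubernetes, you could run the image directly; `--privileged` is needed because the server mounts nfsd and manages rpcbind (the container name here is illustrative):

```console
$ docker run -d --privileged --name nfs-test gcr.io/google-samples/nfs-server:1.1
```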

View File

@@ -0,0 +1 @@
Hello world!

View File

@@ -0,0 +1,72 @@
#!/bin/bash
# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
function start()
{
	# prepare /etc/exports
	for i in "$@"; do
		# fsid=0: needed for NFSv4
		echo "$i *(rw,fsid=0,insecure,no_root_squash)" >> /etc/exports

		# move index.html to here
		/bin/cp /tmp/index.html $i/
		chmod 644 $i/index.html
		echo "Serving $i"
	done

	# start rpcbind if it is not started yet
	/usr/sbin/rpcinfo 127.0.0.1 > /dev/null; s=$?
	if [ $s -ne 0 ]; then
		echo "Starting rpcbind"
		/usr/sbin/rpcbind -w
	fi

	mount -t nfsd nfds /proc/fs/nfsd

	# -N 4.x: disable NFSv4
	# -V 3: enable NFSv3
	/usr/sbin/rpc.mountd -N 2 -V 3 -N 4 -N 4.1
	/usr/sbin/exportfs -r
	# -G 10 to reduce grace time to 10 seconds (the lowest allowed)
	/usr/sbin/rpc.nfsd -G 10 -N 2 -V 3 -N 4 -N 4.1 2
	/usr/sbin/rpc.statd --no-notify
	echo "NFS started"
}

function stop()
{
	echo "Stopping NFS"

	/usr/sbin/rpc.nfsd 0
	/usr/sbin/exportfs -au
	/usr/sbin/exportfs -f

	kill $( pidof rpc.mountd )
	umount /proc/fs/nfsd
	echo > /etc/exports
	exit 0
}

trap stop TERM

start "$@"

# Ugly hack to do nothing and wait for SIGTERM
while true; do
	sleep 5
done

Binary file not shown (nfs-pv.png, 9.2 KiB).

View File

@@ -0,0 +1,13 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: 10.244.1.4
    path: "/exports"

View File

@@ -0,0 +1,10 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

View File

@@ -0,0 +1,32 @@
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: gcr.io/google-samples/nfs-server:1.1
        ports:
          - name: nfs
            containerPort: 2049
          - name: mountd
            containerPort: 20048
          - name: rpcbind
            containerPort: 111
        securityContext:
          privileged: true
        volumeMounts:
          - mountPath: /exports
            name: mypvc
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: nfs-pv-provisioning-demo

View File

@@ -0,0 +1,14 @@
kind: Service
apiVersion: v1
metadata:
  name: nfs-server
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server

View File

@@ -0,0 +1,30 @@
# This pod mounts the nfs volume claim into /usr/share/nginx/html and
# serves a simple web page.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web
spec:
  replicas: 2
  selector:
    role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
      - name: web
        image: nginx
        ports:
          - name: web
            containerPort: 80
        volumeMounts:
          # name must match the volume name below
          - name: nfs
            mountPath: "/usr/share/nginx/html"
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs

View File

@@ -0,0 +1,9 @@
kind: Service
apiVersion: v1
metadata:
  name: nfs-web
spec:
  ports:
    - port: 80
  selector:
    role: web-frontend

View File

@@ -0,0 +1,13 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pv-provisioning-demo
  labels:
    demo: nfs-pv-provisioning
  annotations:
    volume.alpha.kubernetes.io/storage-class: any
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 200Gi

View File

@@ -0,0 +1,88 @@
# How to Use it?
Install Ceph on the Kubernetes host. For example, on Fedora 21:

```console
# yum -y install ceph-common
```

If you don't have a Ceph cluster, you can set up a [containerized Ceph cluster](https://github.com/ceph/ceph-docker).
Then get the keyring from the Ceph cluster and copy it to */etc/ceph/keyring*.
Once you have installed Ceph and Kubernetes, you can create a pod based on my examples [rbd.json](rbd.json) and [rbd-with-secret.json](rbd-with-secret.json). In the pod JSON, you need to provide the following information.
- *monitors*: Ceph monitors.
- *pool*: The name of the RADOS pool, if not provided, default *rbd* pool is used.
- *image*: The image name that rbd has created.
- *user*: The RADOS user name. If not provided, default *admin* is used.
- *keyring*: The path to the keyring file. If not provided, default */etc/ceph/keyring* is used.
- *secretName*: The name of the authentication secrets. If provided, *secretName* overrides *keyring*. Note, see below about how to create a secret.
- *fsType*: The filesystem type (ext4, xfs, etc) that is formatted on the device.
- *readOnly*: Whether the filesystem is used as readOnly.
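For reference, the *rbd* volume definition in [rbd.json](rbd.json) supplies these fields:

```json
"rbd": {
  "monitors": [
    "10.16.154.78:6789",
    "10.16.154.82:6789",
    "10.16.154.83:6789"
  ],
  "pool": "kube",
  "image": "foo",
  "user": "admin",
  "keyring": "/etc/ceph/keyring",
  "fsType": "ext4",
  "readOnly": true
}
```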
# Use Ceph Authentication Secret
If a Ceph authentication secret is provided, the secret should first be *base64 encoded*, then the encoded string placed in a secret YAML. For example, you can get Ceph user `kube`'s base64-encoded secret with the following command:
```console
# grep key /etc/ceph/ceph.client.kube.keyring |awk '{printf "%s", $NF}'|base64
QVFBTWdYaFZ3QkNlRGhBQTlubFBhRnlmVVNhdEdENGRyRldEdlE9PQ==
```
An example YAML is provided [here](secret/ceph-secret.yaml). Then create the secret with `kubectl`:
```console
# kubectl create -f examples/volumes/rbd/secret/ceph-secret.yaml
```
# Get started
Here are my commands:
```console
# kubectl create -f examples/volumes/rbd/rbd.json
# kubectl get pods
```
On the Kubernetes host, I got these in the mount output:
```console
# mount | grep kub
/dev/rbd0 on /var/lib/kubelet/plugins/kubernetes.io/rbd/rbd/kube-image-foo type ext4 (ro,relatime,stripe=4096,data=ordered)
/dev/rbd0 on /var/lib/kubelet/pods/ec2166b4-de07-11e4-aaf5-d4bed9b39058/volumes/kubernetes.io~rbd/rbdpd type ext4 (ro,relatime,stripe=4096,data=ordered)
```
If you ssh to that machine, you can run `docker ps` to see the actual pod and `docker inspect` to see the volumes used by the container.

View File

@@ -0,0 +1,42 @@
{
  "apiVersion": "v1",
  "id": "rbdpd2",
  "kind": "Pod",
  "metadata": {
    "name": "rbd2"
  },
  "spec": {
    "containers": [
      {
        "name": "rbd-rw",
        "image": "kubernetes/pause",
        "volumeMounts": [
          {
            "mountPath": "/mnt/rbd",
            "name": "rbdpd"
          }
        ]
      }
    ],
    "volumes": [
      {
        "name": "rbdpd",
        "rbd": {
          "monitors": [
            "10.16.154.78:6789",
            "10.16.154.82:6789",
            "10.16.154.83:6789"
          ],
          "pool": "kube",
          "image": "foo",
          "user": "admin",
          "secretRef": {
            "name": "ceph-secret"
          },
          "fsType": "ext4",
          "readOnly": true
        }
      }
    ]
  }
}

View File

@@ -0,0 +1,40 @@
{
  "apiVersion": "v1",
  "id": "rbdpd",
  "kind": "Pod",
  "metadata": {
    "name": "rbd"
  },
  "spec": {
    "containers": [
      {
        "name": "rbd-rw",
        "image": "kubernetes/pause",
        "volumeMounts": [
          {
            "mountPath": "/mnt/rbd",
            "name": "rbdpd"
          }
        ]
      }
    ],
    "volumes": [
      {
        "name": "rbdpd",
        "rbd": {
          "monitors": [
            "10.16.154.78:6789",
            "10.16.154.82:6789",
            "10.16.154.83:6789"
          ],
          "pool": "kube",
          "image": "foo",
          "user": "admin",
          "keyring": "/etc/ceph/keyring",
          "fsType": "ext4",
          "readOnly": true
        }
      }
    ]
  }
}

View File

@@ -0,0 +1,6 @@
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFCMTZWMVZvRjVtRXhBQTVrQ1FzN2JCajhWVUxSdzI2Qzg0SEE9PQ==