Merge pull request #16069 from zmerlynn/nfs-1.1

Auto commit by PR queue bot
k8s-merge-robot 2015-10-28 22:39:09 -07:00
commit 63af7c227e
17 changed files with 217 additions and 219 deletions


@@ -10,6 +10,7 @@ pkg-core:
  - apt-transport-https
  - python-apt
  - glusterfs-client
  - nfs-common
  - socat
{% endif %}
# Ubuntu installs netcat-openbsd by default, but on GCE/Debian netcat-traditional is installed.


@@ -291,15 +291,6 @@ before you can use it__
See the [NFS example](../../examples/nfs/) for more details.
### iscsi
An `iscsi` volume allows an existing iSCSI (SCSI over IP) volume to be mounted


@@ -308,9 +308,13 @@ func TestExampleObjectSchemas(t *testing.T) {
			"wordpress": &api.Pod{},
		},
		"../examples/nfs": {
			"nfs-busybox-rc":     &api.ReplicationController{},
			"nfs-server-rc":      &api.ReplicationController{},
			"nfs-server-service": &api.Service{},
			"nfs-pv":             &api.PersistentVolume{},
			"nfs-pvc":            &api.PersistentVolumeClaim{},
			"nfs-web-rc":         &api.ReplicationController{},
			"nfs-web-service":    &api.Service{},
		},
		"../docs/user-guide/node-selection": {
			"pod": &api.Pod{},


@@ -33,57 +33,115 @@ Documentation for other releases can be found at
# Example of NFS volume
See [nfs-web-rc.yaml](nfs-web-rc.yaml) for a quick example of how to use an NFS
volume claim in a replication controller. It also relies on the
[NFS persistent volume](nfs-pv.yaml) and
[NFS persistent volume claim](nfs-pvc.yaml) in this directory.
## Complete setup
The example below shows how to export an NFS share from a single-pod replication
controller and import it into two replication controllers.
### NFS server part
Define [NFS server controller](nfs-server-rc.yaml) and
[NFS service](nfs-server-service.yaml):
```console
$ kubectl create -f examples/nfs/nfs-server-rc.yaml
$ kubectl create -f examples/nfs/nfs-server-service.yaml
```
The server exports the `/mnt/data` directory as `/` (fsid=0). The
directory contains a dummy `index.html`. Wait until the pod is running
by checking `kubectl get pods -lrole=nfs-server`.
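For example (the pod name suffix and age below are illustrative):

```console
$ kubectl get pods -lrole=nfs-server
NAME               READY     STATUS    RESTARTS   AGE
nfs-server-h6ytg   1/1       Running   0          1m
```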
### Create the NFS claim
The [NFS busybox controller](nfs-busybox-rc.yaml) uses a simple script to
generate data written to the NFS server we just started. First, you'll need to
find the cluster IP of the server:
```console
$ kubectl describe services nfs-server
```
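The line you need is the service's `IP`; the output will look something like
this (abridged, with illustrative addresses):

```console
Name:              nfs-server
Selector:          role=nfs-server
IP:                10.0.123.45
Port:              <unset> 2049/TCP
Endpoints:         10.244.1.7:2049
```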
Replace the invalid IP in the [nfs PV](nfs-pv.yaml). (In the future,
we'll be able to tie these together using the service names, but for
now, you have to hardcode the IP.)
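Only the `server` field of the PV's `nfs` section needs to change; with the
illustrative IP from above, the edited snippet in [nfs-pv.yaml](nfs-pv.yaml)
would read:

```yaml
  nfs:
    # cluster IP of the nfs-server service, from `kubectl describe` above
    server: 10.0.123.45
    path: "/"
```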
Create the [persistent volume](../../docs/user-guide/persistent-volumes.md)
and the persistent volume claim for your NFS server. The persistent volume and
claim give us an indirection that allows multiple pods to refer to the NFS
server using a symbolic name rather than a hardcoded server address.
```console
$ kubectl create -f examples/nfs/nfs-pv.yaml
$ kubectl create -f examples/nfs/nfs-pvc.yaml
```
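You can check that the claim is bound to the volume (output abridged; exact
columns vary by client version, and the values here are illustrative):

```console
$ kubectl get pv nfs
NAME      LABELS    CAPACITY   ACCESSMODES   STATUS    CLAIM         REASON
nfs       <none>    1Mi        RWX           Bound     default/nfs
```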
### Setup the fake backend
The [NFS busybox controller](nfs-busybox-rc.yaml) updates `index.html` on the
NFS server every few seconds. Let's start it now:
```console
$ kubectl create -f examples/nfs/nfs-busybox-rc.yaml
```
Conveniently, its pods are `busybox` pods, so we can get an early check
that our mounts are working. Find a busybox pod and exec into it:
```console
$ kubectl get pod -lname=nfs-busybox
NAME READY STATUS RESTARTS AGE
nfs-busybox-jdhf3 1/1 Running 0 25m
nfs-busybox-w3s4t 1/1 Running 0 25m
$ kubectl exec nfs-busybox-jdhf3 -- cat /mnt/index.html
Thu Oct 22 19:20:18 UTC 2015
nfs-busybox-w3s4t
```
You should see output similar to the above if everything is working well. If
it's not, make sure you changed the invalid IP in the [NFS PV](nfs-pv.yaml) file
and make sure the `describe services` command above had endpoints listed
(indicating the service was associated with a running pod).
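A quick way to confirm the endpoints (the address below is illustrative):

```console
$ kubectl get endpoints nfs-server
NAME         ENDPOINTS
nfs-server   10.244.1.7:2049
```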
### Setup the web server
The [web server controller](nfs-web-rc.yaml) is another simple replication
controller that demonstrates reading from the NFS share exported above as an
NFS volume, and runs a simple web server on it.
```console
$ kubectl create -f examples/nfs/nfs-web-rc.yaml
```
This creates two pods, each of which serves the `index.html` from above. We
can then use a simple service to front it:
```console
$ kubectl create -f examples/nfs/nfs-web-service.yaml
```
We can now use the busybox container we launched earlier to check that `nginx`
is serving the data appropriately:
```console
$ kubectl get pod -lname=nfs-busybox
NAME READY STATUS RESTARTS AGE
nfs-busybox-jdhf3 1/1 Running 0 1h
nfs-busybox-w3s4t 1/1 Running 0 1h
$ kubectl get services nfs-web
NAME LABELS SELECTOR IP(S) PORT(S)
nfs-web <none> role=web-frontend 10.0.68.37 80/TCP
$ kubectl exec nfs-busybox-jdhf3 -- wget -qO- http://10.0.68.37
Thu Oct 22 19:28:55 UTC 2015
nfs-busybox-w3s4t
```


@ -1,11 +0,0 @@
FROM fedora:21
MAINTAINER Jan Safranek <jsafrane@redhat.com>
EXPOSE 2049/tcp
RUN yum -y install nfs-utils && yum clean all
ADD run_nfs /usr/local/bin/
RUN chmod +x /usr/local/bin/run_nfs
ENTRYPOINT ["/usr/local/bin/run_nfs"]


@ -1,48 +0,0 @@
# NFS-exporter container
Inspired by https://github.com/cpuguy83/docker-nfs-server. Rewritten for
Fedora.
Serves NFSv4 exports defined on the command line. At least one export must be
defined!

Usage:

```console
$ docker run -d --name nfs --privileged jsafrane/nfsexporter /path/to/share /path/to/share2 ...
```


@ -1,72 +0,0 @@
#!/bin/bash
# Copyright 2015 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
function start()
{
    # prepare /etc/exports
    seq=0
    for i in "$@"; do
        echo "$i *(rw,sync,no_root_squash,insecure,fsid=$seq)" >> /etc/exports
        seq=$(($seq + 1))
        echo "Serving $i"
    done

    # from /lib/systemd/system/proc-fs-nfsd.mount
    mount -t nfsd nfds /proc/fs/nfsd

    # from /lib/systemd/system/nfs-config.service
    /usr/lib/systemd/scripts/nfs-utils_env.sh

    # from /lib/systemd/system/nfs-mountd.service
    . /run/sysconfig/nfs-utils
    /usr/sbin/rpc.mountd $RPCMOUNTDARGS

    # from /lib/systemd/system/nfs-server.service
    . /run/sysconfig/nfs-utils
    /usr/sbin/exportfs -r
    /usr/sbin/rpc.nfsd -N 2 -N 3 -V 4 -V 4.1 $RPCNFSDARGS

    echo "NFS started"
}

function stop()
{
    echo "Stopping NFS"

    # from /lib/systemd/system/nfs-server.service
    /usr/sbin/rpc.nfsd 0
    /usr/sbin/exportfs -au
    /usr/sbin/exportfs -f

    # from /lib/systemd/system/nfs-mountd.service
    kill $( pidof rpc.mountd )

    # from /lib/systemd/system/proc-fs-nfsd.mount
    umount /proc/fs/nfsd

    echo > /etc/exports
    exit 0
}

trap stop TERM

start "$@"

# Ugly hack to do nothing and wait for SIGTERM
while true; do
    read
done


@@ -0,0 +1,32 @@
# This mounts the nfs volume claim into /mnt and continuously
# overwrites /mnt/index.html with the time and hostname of the pod.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-busybox
spec:
  replicas: 2
  selector:
    name: nfs-busybox
  template:
    metadata:
      labels:
        name: nfs-busybox
    spec:
      containers:
      - image: busybox
        command:
          - sh
          - -c
          - 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done'
        imagePullPolicy: IfNotPresent
        name: busybox
        volumeMounts:
          # name must match the volume name below
          - name: nfs
            mountPath: "/mnt"
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs

examples/nfs/nfs-pv.yaml

@@ -0,0 +1,13 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: 10.999.999.999
    path: "/"

examples/nfs/nfs-pvc.yaml

@@ -0,0 +1,10 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi


@ -1,15 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: nfs-server
labels:
role: nfs-server
spec:
containers:
- name: nfs-server
image: jsafrane/nfs-data
ports:
- name: nfs
containerPort: 2049
securityContext:
privileged: true


@@ -0,0 +1,21 @@
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: gcr.io/google_containers/volume-nfs
        ports:
          - name: nfs
            containerPort: 2049
        securityContext:
          privileged: true


@ -1,25 +0,0 @@
#
# This pod imports nfs-server.default.kube.local:/ into /usr/share/nginx/html
#
apiVersion: v1
kind: Pod
metadata:
name: nfs-web
spec:
containers:
- name: web
image: nginx
ports:
- name: web
containerPort: 80
volumeMounts:
# name must match the volume name below
- name: nfs
mountPath: "/usr/share/nginx/html"
volumes:
- name: nfs
nfs:
# FIXME: use the right hostname
server: nfs-server.default.kube.local
path: "/"


@@ -0,0 +1,30 @@
# This pod mounts the nfs volume claim into /usr/share/nginx/html and
# serves a simple web page.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-web
spec:
  replicas: 2
  selector:
    role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
      - name: web
        image: nginx
        ports:
          - name: web
            containerPort: 80
        volumeMounts:
          # name must match the volume name below
          - name: nfs
            mountPath: "/usr/share/nginx/html"
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs


@@ -0,0 +1,9 @@
kind: Service
apiVersion: v1
metadata:
  name: nfs-web
spec:
  ports:
    - port: 80
  selector:
    role: web-frontend


@@ -1,6 +1,6 @@
all: push

TAG = 0.4

container:
	docker build -t gcr.io/google_containers/volume-nfs . # Build new image and automatically tag it as latest


@@ -20,7 +20,7 @@ function start()
    # prepare /etc/exports
    for i in "$@"; do
        # fsid=0: needed for NFSv4
        echo "$i *(rw,fsid=0,insecure,no_root_squash)" >> /etc/exports
        echo "Serving $i"
    done