Initial publication of ZFSSA CSI driver

JEONGTAE.KIM@ORACLE.COM
2021-08-24 16:30:55 -06:00
parent 202c8dd706
commit d0182b4eb3
112 changed files with 11929 additions and 1 deletion


@@ -0,0 +1,4 @@
apiVersion: v1
name: zfssa-csi-nfs-vs-example
version: 0.0.1
description: Deploys an end to end NFS volume snapshot example for Oracle ZFS Storage Appliance CSI driver.

examples/nfs-vsc/README.md Normal file

@@ -0,0 +1,193 @@
# Introduction
This is an end-to-end example of taking a snapshot of an NFS filesystem
volume on a target Oracle ZFS Storage Appliance and using it in another
pod by creating (restoring) a volume from the snapshot.
Prior to running this example, the NFS environment must be set up properly
on both the Kubernetes worker nodes and the Oracle ZFS Storage Appliance.
Refer to the [INSTALLATION](../../INSTALLATION.md) instructions for details.
## Configuration
Set up a local values file. It must contain the values that customize the
deployment for the target appliance, but it can contain others. The minimum
set of values to customize are (see the sketch after this list):
* appliance:
    * pool: the pool to create shares in
    * project: the project to create shares in
    * nfsServer: the NFS data path IP address
* volSize: the size of the filesystem share to create
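
A minimal sketch of such a file, with the keys taken from the chart's values.yaml (the values shown here are placeholders, not real appliance settings):
```text
# local-values/local-values.yaml -- placeholder values, substitute your own
appliance:
  pool: mypool            # placeholder pool name
  project: myproject      # placeholder project name
  nfsServer: 10.0.0.10    # placeholder NFS data path IP address
volSize: 10Gi
```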
## Enabling Volume Snapshot Feature (Only for Kubernetes v1.17 - v1.19)
The Kubernetes Volume Snapshot feature became GA in Kubernetes v1.20. To use
this feature in Kubernetes pre-v1.20, it MUST be enabled prior to deploying the ZFSSA CSI driver.
To enable the feature on Kubernetes pre-v1.20, follow the instructions in
[INSTALLATION](../../INSTALLATION.md).
## Deployment
This step deploys a pod with an NFS volume attached, using a regular
storage class and a persistent volume claim. It also deploys the volume snapshot class
required to take snapshots of the persistent volume.
Assuming the local values file resides in the local-values directory, deploy using Helm 3.
If you plan to create a volume from a snapshot with the given yaml files as they are,
define the names in local-values.yaml as follows (you can modify them to your preference):
```text
scNfsName: zfssa-nfs-vs-example-sc
vscNfsName: zfssa-nfs-vs-example-vsc
pvcNfsName: zfssa-nfs-vs-example-pvc
podNfsName: zfssa-nfs-vs-example-pod
```
```text
helm install -f local-values/local-values.yaml zfssa-nfs-vsc ./
```
Once deployed, verify each of the created entities using kubectl:
1. Display the storage class (SC)
The command `kubectl get sc` should now return something similar to this:
```text
NAME                      PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
zfssa-nfs-vs-example-sc   zfssa-csi-driver   Delete          Immediate           false                  86s
```
2. Display the volume claim
The command `kubectl get pvc` should now return something similar to this:
```text
NAME                       STATUS   VOLUME                                      CAPACITY   ACCESS MODES   STORAGECLASS              AGE
zfssa-nfs-vs-example-pvc   Bound    pvc-0c1e5351-dc1b-45a4-8f54-b28741d1003e   10Gi       RWX            zfssa-nfs-vs-example-sc   86s
```
3. Display the volume snapshot class
The command `kubectl get volumesnapshotclass` should now return something similar to this:
```text
NAME                       DRIVER             DELETIONPOLICY   AGE
zfssa-nfs-vs-example-vsc   zfssa-csi-driver   Delete           86s
```
4. Display the pod mounting the volume
The command `kubectl get pod` should now return something similar to this:
```text
NAME                         READY   STATUS    RESTARTS   AGE
snapshot-controller-0        1/1     Running   0          6d6h
zfssa-csi-nodeplugin-dx2s4   2/2     Running   0          24m
zfssa-csi-nodeplugin-q9h9w   2/2     Running   0          24m
zfssa-csi-provisioner-0      4/4     Running   0          24m
zfssa-nfs-vs-example-pod     1/1     Running   0          86s
```
## Writing data
Once the pod is deployed, verify that the volume is mounted and writable.
```text
kubectl exec -it zfssa-nfs-vs-example-pod -- /bin/sh
/ # cd /mnt
/mnt #
/mnt # date > timestamp.txt
/mnt # cat timestamp.txt
Tue Jan 19 23:13:10 UTC 2021
```
## Creating snapshot
Use the configuration files in the examples/nfs-snapshot directory, with appropriate
modifications, for the remaining steps of this example.
Create a snapshot of the volume by running the command below:
```text
kubectl apply -f ../nfs-snapshot/nfs-snapshot.yaml
```
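For reference, the applied manifest is along these lines (a hedged sketch, not the exact file contents; the snapshot name `zfssa-nfs-vs-snapshot` is illustrative, and the class and claim names assume the local-values.yaml shown earlier):
```text
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: zfssa-nfs-vs-snapshot       # illustrative name
spec:
  volumeSnapshotClassName: zfssa-nfs-vs-example-vsc
  source:
    persistentVolumeClaimName: zfssa-nfs-vs-example-pvc
```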
Verify the volume snapshot is created and available by running the following command:
```text
kubectl get volumesnapshot
```
Wait until the READYTOUSE column of the snapshot shows true before moving on to the next steps.
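For illustration, the output might look similar to this (the snapshot name follows the sketch above; columns abbreviated):
```text
NAME                    READYTOUSE   SOURCEPVC                  RESTORESIZE   SNAPSHOTCLASS              AGE
zfssa-nfs-vs-snapshot   true         zfssa-nfs-vs-example-pvc   10Gi          zfssa-nfs-vs-example-vsc   2m
```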
It is important to use the RESTORESIZE value of the volume snapshot just created when specifying
the storage capacity of the persistent volume claim that provisions a persistent volume from this
snapshot, for example, the storage capacity in ../nfs-snapshot/nfs-pvc-from-snapshot.yaml.
Optionally, verify the volume snapshot exists on the ZFS Storage Appliance. The snapshot name
on the appliance should have the volume snapshot UID as its suffix.
## Creating persistent volume claim
Create a persistent volume claim to provision a volume from the snapshot by running
the command below. Be aware that the persistent volume provisioned by this claim is not
expandable because the example storage class does not allow volume expansion. To provision
an expandable volume, create a new storage class with allowVolumeExpansion: true and use it
in the persistent volume claim.
```text
kubectl apply -f ../nfs-snapshot/nfs-pvc-from-snapshot.yaml
```
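The applied manifest might look like this hedged sketch (names taken from the outputs below; the snapshot name is the illustrative one used earlier, and the `dataSource` block is the standard Kubernetes way to restore a volume from a snapshot):
```text
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zfssa-nfs-vs-restore-pvc
spec:
  storageClassName: zfssa-nfs-vs-example-sc
  dataSource:
    name: zfssa-nfs-vs-snapshot     # illustrative snapshot name
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                 # use the RESTORESIZE of the snapshot
```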
Verify the persistent volume claim is created and a volume is provisioned by running the following command:
```text
kubectl get pv,pvc
```
It should return something similar to this:
```text
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                              STORAGECLASS              REASON   AGE
persistentvolume/pvc-0c1e5351-dc1b-45a4-8f54-b28741d1003e   10Gi       RWX            Delete           Bound    default/zfssa-nfs-vs-example-pvc   zfssa-nfs-vs-example-sc            34m
persistentvolume/pvc-59d8d447-302d-4438-a751-7271fbbe8238   10Gi       RWO            Delete           Bound    default/zfssa-nfs-vs-restore-pvc   zfssa-nfs-vs-example-sc            112s

NAME                                             STATUS   VOLUME                                      CAPACITY   ACCESS MODES   STORAGECLASS              AGE
persistentvolumeclaim/zfssa-nfs-vs-example-pvc   Bound    pvc-0c1e5351-dc1b-45a4-8f54-b28741d1003e   10Gi       RWX            zfssa-nfs-vs-example-sc   34m
persistentvolumeclaim/zfssa-nfs-vs-restore-pvc   Bound    pvc-59d8d447-302d-4438-a751-7271fbbe8238   10Gi       RWO            zfssa-nfs-vs-example-sc   116s
```
Optionally, verify the new volume exists on the ZFS Storage Appliance. Notice that the new
volume is a clone of the snapshot taken from the original volume.
## Creating pod using restored volume
Create a pod with the persistent volume claim created in the previous step by running the command below:
```text
kubectl apply -f ../nfs-snapshot/nfs-pod-restored-volume.yaml
```
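Its manifest presumably mirrors the example pod template, pointing at the restored claim (a sketch under that assumption, not the exact file contents):
```text
apiVersion: v1
kind: Pod
metadata:
  name: zfssa-nfs-vs-restore-pod
spec:
  restartPolicy: Always
  containers:
    - name: ol7slim
      image: container-registry.oracle.com/os/oraclelinux:7-slim
      command: ["/bin/sh", "-c"]
      args: [ "tail -f /dev/null" ]
      volumeMounts:
        - name: vol
          mountPath: /mnt     # the restored data appears here
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: zfssa-nfs-vs-restore-pvc
```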
The command `kubectl get pod` should now return something similar to this:
```text
NAME                         READY   STATUS    RESTARTS   AGE
snapshot-controller-0        1/1     Running   0          6d7h
zfssa-csi-nodeplugin-dx2s4   2/2     Running   0          68m
zfssa-csi-nodeplugin-q9h9w   2/2     Running   0          68m
zfssa-csi-provisioner-0      4/4     Running   0          68m
zfssa-nfs-vs-example-pod     1/1     Running   0          46m
zfssa-nfs-vs-restore-pod     1/1     Running   0          37s
```
Verify that the new volume has the contents of the original volume as of the point in time
when the snapshot was taken.
```text
kubectl exec -it zfssa-nfs-vs-restore-pod -- /bin/sh
/ # cd /mnt
/mnt #
/mnt # cat timestamp.txt
Tue Jan 19 23:13:10 UTC 2021
```
## Deleting pod, persistent volume claim and volume snapshot
To delete the pod, persistent volume claim, and volume snapshot created in the steps above,
run the commands below. Wait until the resources being deleted disappear from the list
displayed by `kubectl get ...` before running the next command.
```text
kubectl delete -f ../nfs-snapshot/nfs-pod-restored-volume.yaml
kubectl delete -f ../nfs-snapshot/nfs-pvc-from-snapshot.yaml
kubectl delete -f ../nfs-snapshot/nfs-snapshot.yaml
```
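For example, between deletions you might poll the remaining resources with a command like this:
```text
kubectl get pod,pvc,volumesnapshot
```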


@@ -0,0 +1,20 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: {{ .Values.scNfsName }}
provisioner: zfssa-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  volumeType: {{ .Values.appliance.volumeType }}
  targetGroup: {{ .Values.appliance.targetGroup }}
  blockSize: "8192"
  pool: {{ .Values.appliance.pool }}
  project: {{ .Values.appliance.project }}
  targetPortal: {{ .Values.appliance.targetPortal }}
  nfsServer: {{ .Values.appliance.nfsServer }}
  rootUser: {{ .Values.appliance.rootUser }}
  rootGroup: {{ .Values.appliance.rootGroup }}
  rootPermissions: "777"
  shareNFS: {{ .Values.appliance.shareNFS }}
  restrictChown: "false"


@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Values.pvcNfsName }}
spec:
accessModes:
- ReadWriteMany
volumeMode: Filesystem
resources:
requests:
storage: {{ .Values.volSize }}
storageClassName: {{ .Values.scNfsName }}


@@ -0,0 +1,6 @@
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: {{ .Values.vscNfsName }}
driver: zfssa-csi-driver
deletionPolicy: Delete


@@ -0,0 +1,21 @@
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Values.podNfsName }}
  labels:
    name: ol7slim-test
spec:
  restartPolicy: Always
  containers:
    - image: container-registry.oracle.com/os/oraclelinux:7-slim
      command: ["/bin/sh", "-c"]
      args: [ "tail -f /dev/null" ]
      name: ol7slim
      volumeMounts:
        - name: vol
          mountPath: /mnt
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: {{ .Values.pvcNfsName }}
        readOnly: false


@@ -0,0 +1,20 @@
# Various names used throughout the example
scNfsName: zfssa-nfs-vs-example-sc
vscNfsName: zfssa-nfs-vs-example-vsc
pvcNfsName: zfssa-nfs-vs-example-pvc
podNfsName: zfssa-nfs-vs-example-pod

# Settings for the target appliance
appliance:
  volumeType: thin
  targetGroup: OVERRIDE
  pool: OVERRIDE
  project: OVERRIDE
  targetPortal: OVERRIDE
  nfsServer: OVERRIDE
  rootUser: root
  rootGroup: other
  shareNFS: "on"

# Settings for the volume
volSize: OVERRIDE