Initial publication of ZFSSA CSI driver

JEONGTAE.KIM@ORACLE.COM
2021-08-24 16:30:55 -06:00
parent 202c8dd706
commit d0182b4eb3
112 changed files with 11929 additions and 1 deletion


@@ -0,0 +1,4 @@
apiVersion: v1
name: zfssa-existing-fs-example
version: 0.0.1
description: Deploys an end to end filesystem example for an existing Oracle ZFS Storage Appliance CSI filesystem.

examples/nfs-pv/README.md

@@ -0,0 +1,91 @@
# Introduction
This is an end-to-end example of using an existing filesystem share on a target
Oracle ZFS Storage Appliance.
Prior to running this example, the NFS environment must be set up properly
on both the Kubernetes worker nodes and the Oracle ZFS Storage Appliance.
Refer to the [INSTALLATION](../../INSTALLATION.md) instructions for details.
The flow to use an existing volume is:
* create a persistent volume (PV) object
* allocate it to a persistent volume claim (PVC)
* use the PVC from a pod
The following must be set up:
* the volume handle must be a fully formed volume id
* there must be volume attributes defined as part of the persistent volume
In this example, the volume handle is constructed from values in the Helm
chart. The only new value required is the name of the existing share on the
target appliance. The rest is assembled from information already in the
local-values.yaml file (appliance name, pool, project, etc.).
The resulting volumeHandle appears similar to the following, with the values
in `<>` filled in from the Helm variables:
```
volumeHandle: /nfs/<appliance name>/<volume name>/<pool name>/local/<project name>/<volume name>
```
From the above, note that the volumeHandle is in the form of an ID with the components:
* 'nfs' - denoting an exported NFS share
* 'appliance name' - this is the management path of the ZFSSA target appliance
* 'volume name' - the name of the share on the appliance
* 'pool name' - the pool on the target appliance
* 'local' - denotes that the pool is owned by the head
* 'project' - the project that the share is in
In the volume attributes, nfsServer must be defined.
Once created, a persistent volume claim can be made for this share and used in a pod.
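For illustration, with hypothetical values (an appliance whose management path is `myappliance`, pool `mypool`, project `myproject`, and an existing share named `myfilesystem`), the handle would be:
```
volumeHandle: /nfs/myappliance/myfilesystem/mypool/local/myproject/myfilesystem
```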
## Configuration
Set up a local values file. It must contain the values that customize the
deployment for the target appliance, but can contain others. The minimum set
of values to customize is:
* appliance:
  * targetGroup: the target group that contains data path interfaces on the target appliance
  * pool: the pool to create shares in
  * project: the project to create shares in
  * targetPortal: the target iSCSI portal on the appliance
  * nfsServer: the NFS data path IP address
* applianceName: the existing appliance name (this is the management path)
* pvExistingFilesystemName: the name of the filesystem share on the target appliance
* volMountPoint: the mount point on the target appliance of the filesystem share
* volSize: the size of the filesystem share
On the target appliance, ensure that the filesystem share is exported via NFS.
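A minimal sketch of such a local-values.yaml is shown below; every value is a placeholder (the addresses, pool, project, and share names are hypothetical) and must be replaced with the settings of your own appliance and share:
```
# Hypothetical local-values.yaml for the existing-filesystem example
applianceName: myappliance              # management path of the target appliance
pvExistingFilesystemName: myfilesystem  # name of the existing share
volMountPoint: /export/myfilesystem     # mount point of the share on the appliance
volSize: 10Gi                           # size of the existing share
appliance:
  targetGroup: default                  # target group with data path interfaces
  pool: mypool
  project: myproject
  targetPortal: 192.0.2.10:3260         # iSCSI portal (listed in the chart values)
  nfsServer: 192.0.2.10                 # NFS data path IP address
```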
## Deployment
Assuming there is a set of values in the local-values directory, deploy using Helm 3:
```
helm install -f ../local-values/local-values.yaml zfssa-nfs-existing ./
```
Once deployed, verify each of the created entities using kubectl:
```
kubectl get sc
kubectl get pvc
kubectl get pod
```
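To further confirm that the pre-created persistent volume bound to the claim, the following checks can also help (the PVC name here assumes the default from values.yaml):
```
kubectl get pv
kubectl describe pvc zfssa-fs-existing-pvc
```
The claim should report a status of Bound and reference the pre-created volume.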
## Writing data
Once the pod is deployed, start analytics in a worksheet on the Oracle ZFS
Storage Appliance that is hosting the target filesystem so that the write
activity can be observed.
Exec into the pod and write some data to the mounted filesystem:
```
kubectl exec -it zfssa-fs-existing-pod -- /bin/sh
/ # cd /mnt
/mnt # ls
/mnt # echo "hello world" > demo.txt
/mnt #
```
The analytics on the appliance should show spikes as the data is written.
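To confirm the write reached the share, the file can be read back from the pod:
```
kubectl exec -it zfssa-fs-existing-pod -- cat /mnt/demo.txt
```
This should print `hello world`.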


@@ -0,0 +1,20 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: {{ .Values.scExistingFilesystemName }}
provisioner: zfssa-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  volumeType: {{ .Values.appliance.volumeType }}
  targetGroup: {{ .Values.appliance.targetGroup }}
  blockSize: "8192"
  pool: {{ .Values.appliance.pool }}
  project: {{ .Values.appliance.project }}
  targetPortal: {{ .Values.appliance.targetPortal }}
  nfsServer: {{ .Values.appliance.nfsServer }}
  rootUser: {{ .Values.appliance.rootUser }}
  rootGroup: {{ .Values.appliance.rootGroup }}
  rootPermissions: "777"
  shareNFS: "on"
  restrictChown: "false"


@@ -0,0 +1,28 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .Values.pvExistingFilesystemName }}
  annotations:
    pv.kubernetes.io/provisioned-by: zfssa-csi-driver
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: {{ .Values.volSize }}
  csi:
    driver: zfssa-csi-driver
    volumeHandle: /nfs/{{ .Values.applianceName }}/{{ .Values.pvExistingFilesystemName }}/{{ .Values.appliance.pool }}/local/{{ .Values.appliance.project }}/{{ .Values.pvExistingFilesystemName }}
    readOnly: false
    volumeAttributes:
      nfsServer: {{ .Values.appliance.nfsServer }}
      share: {{ .Values.volMountPoint }}
      rootGroup: {{ .Values.appliance.rootGroup }}
      rootPermissions: "777"
      rootUser: {{ .Values.appliance.rootUser }}
  claimRef:
    namespace: default
    name: {{ .Values.pvcExistingFilesystemName }}


@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Values.pvcExistingFilesystemName }}
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: {{ .Values.volSize }}
  storageClassName: {{ .Values.scExistingFilesystemName }}


@@ -0,0 +1,20 @@
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Values.podExistingFilesystemName }}
  labels:
    name: ol7slim-test
spec:
  restartPolicy: Always
  containers:
    - image: container-registry.oracle.com/os/oraclelinux:7-slim
      command: ["/bin/sh", "-c"]
      args: [ "tail -f /dev/null" ]
      name: ol7slim
      volumeMounts:
        - name: vol
          mountPath: /mnt
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: {{ .Values.pvcExistingFilesystemName }}


@@ -0,0 +1,21 @@
# Various names used throughout the example
scExistingFilesystemName: zfssa-fs-existing-sc
pvExistingFilesystemName: OVERRIDE
pvcExistingFilesystemName: zfssa-fs-existing-pvc
podExistingFilesystemName: zfssa-fs-existing-pod
applianceName: OVERRIDE
# Settings for target appliance
appliance:
  volumeType: thin
  targetGroup: OVERRIDE
  pool: OVERRIDE
  project: OVERRIDE
  targetPortal: OVERRIDE
  nfsServer: OVERRIDE
  rootUser: root
  rootGroup: other
# Settings for volume
volMountPoint: OVERRIDE
volSize: OVERRIDE