Initial publication of ZFSSA CSI driver

Author: JEONGTAE.KIM@ORACLE.COM
Date: 2021-08-24 16:30:55 -06:00
parent 202c8dd706
commit d0182b4eb3
112 changed files with 11929 additions and 1 deletion


@@ -0,0 +1,4 @@
apiVersion: v1
name: zfssa-csi-block-example
version: 0.0.1
description: Deploys an end-to-end iSCSI volume example for the Oracle ZFS Storage Appliance CSI driver.

examples/block/README.md

@@ -0,0 +1,63 @@
# Introduction
This is an end-to-end example of using iSCSI block devices on a target
Oracle ZFS Storage Appliance.
Prior to running this example, the iSCSI environment must be set up properly
on both the Kubernetes worker nodes and the Oracle ZFS Storage Appliance.
Refer to the [INSTALLATION](../../INSTALLATION.md) instructions for details.
## Configuration
Set up a local values file. It must contain the values that customize the deployment
for the target appliance, but it can contain others as well. The minimum set of values to
customize are listed below; a sample file follows the list:
* appliance:
  * targetGroup: the iSCSI target group, containing the data path interfaces, to use on the target appliance
  * pool: the pool to create shares in
  * project: the project to create shares in
  * targetPortal: the target iSCSI portal on the appliance
  * nfsServer: the NFS data path IP address
* volSize: the size of the block volume (iSCSI LUN) to create
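For reference, a minimal `local-values.yaml` could look like the following; every value
below is a placeholder to be replaced with names and addresses from your environment:
```yaml
# Sample local-values.yaml (all values are placeholders)
appliance:
  targetGroup: csi-tg            # iSCSI target group on the appliance
  pool: pool-0                   # pool to create shares in
  project: csi-block             # project to create shares in
  targetPortal: 10.0.0.10:3260   # iSCSI portal (IP:port)
  nfsServer: 10.0.0.10           # NFS data path IP address
volSize: 10Gi
```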
## Deployment
Assuming there is a set of values in the local-values directory, deploy using Helm 3:
```
helm install -f ../local-values/local-values.yaml zfssa-block ./
```
Once deployed, verify each of the created entities using kubectl:
```
kubectl get sc
kubectl get pvc
kubectl get pod
```
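If provisioning succeeded, the storage class should be listed, the persistent volume
claim should reach the `Bound` phase, and the pod should reach `Running`. The output
should resemble the following (names, volume IDs, capacities, and ages are illustrative):
```
NAME                      STATUS   VOLUME             CAPACITY   ACCESS MODES   STORAGECLASS             AGE
zfssa-block-example-pvc   Bound    pvc-2e7021a8-...   10Gi       RWO            zfssa-block-example-sc   1m

NAME                      READY   STATUS    RESTARTS   AGE
zfssa-block-example-pod   1/1     Running   0          1m
```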
## Writing data
Once the pod is deployed, for demonstration purposes, start the following analytics in a
worksheet on the Oracle ZFS Storage Appliance that hosts the target LUNs:
* Protocol -> iSCSI bytes broken down by initiator
* Protocol -> iSCSI bytes broken down by target
* Protocol -> iSCSI bytes broken down by LUN

Exec into the pod and write some data to the block volume:
```
kubectl exec -it zfssa-block-example-pod -- /bin/sh
/ # cd /dev
/dev # ls
block fd mqueue ptmx random stderr stdout tty zero
core full null pts shm stdin termination-log urandom
/dev # dd if=/dev/zero of=/dev/block count=1024 bs=1024
1024+0 records in
1024+0 records out
/dev #
```
The analytics on the appliance should show spikes as the data is written.
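To generate read traffic as well, the volume can be read back with `dd`; depending on
client-side caching, the read spike in the analytics may be smaller than the write spike:
```
/dev # dd if=/dev/block of=/dev/null count=1024 bs=1024
1024+0 records in
1024+0 records out
```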


@@ -0,0 +1,20 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: {{ .Values.scBlockName }}
provisioner: zfssa-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  volumeType: {{ .Values.appliance.volumeType }}
  targetGroup: {{ .Values.appliance.targetGroup }}
  blockSize: "8192"
  pool: {{ .Values.appliance.pool }}
  project: {{ .Values.appliance.project }}
  targetPortal: {{ .Values.appliance.targetPortal }}
  nfsServer: {{ .Values.appliance.nfsServer }}
  rootUser: {{ .Values.appliance.rootUser }}
  rootGroup: {{ .Values.appliance.rootGroup }}
  rootPermissions: "777"
  shareNFS: "on"
  restrictChown: "false"
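To check how this template (and the others in the chart) renders with a given set of
values before installing, Helm 3 can generate the manifests locally:
```
helm template -f ../local-values/local-values.yaml zfssa-block ./
```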


@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Values.pvcBlockName }}
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: {{ .Values.volSize }}
  storageClassName: {{ .Values.scBlockName }}
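Note that `volumeMode: Block` exposes the claim to a consuming pod as a raw device via
`volumeDevices`, as the pod template below does. For contrast, a filesystem-based variant
(not part of this example) would use `volumeMode: Filesystem` on the claim and a mount in
the pod, sketched as:
```yaml
# Hypothetical filesystem variant: the volume is formatted and mounted
volumeMounts:
  - name: vol
    mountPath: /mnt/block
```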


@@ -0,0 +1,20 @@
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Values.podBlockName }}
  labels:
    name: ol7slim-test
spec:
  restartPolicy: Always
  containers:
    - image: container-registry.oracle.com/os/oraclelinux:7-slim
      command: ["/bin/sh", "-c"]
      args: [ "tail -f /dev/null" ]
      name: ol7slim
      volumeDevices:
        - name: vol
          devicePath: /dev/block
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: {{ .Values.pvcBlockName }}
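Once the pod is running, the claim should appear inside the container as a raw block
device at the configured `devicePath`; a quick check (output illustrative, the leading
`b` marks a block device node):
```
$ kubectl exec zfssa-block-example-pod -- ls -l /dev/block
brw-rw----    1 root     disk        8,  16 Aug 24 22:31 /dev/block
```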


@@ -0,0 +1,18 @@
# Various names used throughout the example
scBlockName: zfssa-block-example-sc
pvcBlockName: zfssa-block-example-pvc
podBlockName: zfssa-block-example-pod
# Settings for the target appliance
appliance:
  volumeType: thin
  targetGroup: OVERRIDE
  pool: OVERRIDE
  project: OVERRIDE
  targetPortal: OVERRIDE
  nfsServer: OVERRIDE
  rootUser: root
  rootGroup: other
# Settings for the volume
volSize: OVERRIDE
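The `OVERRIDE` placeholders must be supplied at install time, either through a local
values file as shown in the README or directly on the command line; for example (all
values are placeholders):
```
helm install zfssa-block ./ \
  --set appliance.targetGroup=csi-tg \
  --set appliance.pool=pool-0 \
  --set appliance.project=csi-block \
  --set appliance.targetPortal=10.0.0.10:3260 \
  --set appliance.nfsServer=10.0.0.10 \
  --set volSize=10Gi
```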