Mirror of https://github.com/oracle/zfssa-csi-driver.git (synced 2025-09-13 13:24:52 +00:00)

Initial publication of ZFSSA CSI driver

examples/block-pv/Chart.yaml (Normal file, 4 lines)
@@ -0,0 +1,4 @@
apiVersion: v1
name: zfssa-existing-block-example
version: 0.0.1
description: Deploys an end to end iSCSI volume example for Oracle ZFS Storage Appliance CSI driver.

examples/block-pv/README.md (Normal file, 110 lines)
@@ -0,0 +1,110 @@
# Introduction

This is an end-to-end example of using an existing iSCSI block device on a target
Oracle ZFS Storage Appliance.

Prior to running this example, the iSCSI environment must be set up properly
on both the Kubernetes worker nodes and the Oracle ZFS Storage Appliance.
Refer to the [INSTALLATION](../../INSTALLATION.md) instructions for details.

The flow to use an existing volume is:
* create a persistent volume (PV) object
* allocate it to a persistent volume claim (PVC)
* use the PVC from a pod

The following must be set up:
* the volume handle must be a fully formed volume ID
* there must be volume attributes defined as part of the persistent volume
* the initiator group for the block volume *must* be set to ```com.sun.ms.vss.hg.maskAll```

In this example, the volume handle is constructed from values in the helm
chart. The only new attribute necessary is the name of the volume on the
target appliance. The remainder is assembled from the information that is already
in the local-values.yaml file (appliance name, pool, project, etc.).

The resulting volumeHandle appears similar to the following, with the values
in ```<>``` filled in from the helm variables:

```
volumeHandle: /iscsi/<appliance name>/<volume name>/<pool name>/local/<project name>/<volume name>
```

From the above, note that the volumeHandle is in the form of an ID with the components:
* 'iscsi' - denotes a block volume
* 'appliance name' - the management path of the ZFSSA target appliance
* 'volume name' - the name of the share on the appliance
* 'pool name' - the pool on the target appliance
* 'local' - denotes that the pool is owned by the head
* 'project' - the project that the share is in
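
For instance, with purely illustrative values (appliance management path `zfssa.example.com`, LUN `lun0`, pool `p0`, project `default`; substitute your own), the assembled handle would be:

```
volumeHandle: /iscsi/zfssa.example.com/lun0/p0/local/default/lun0
```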

In the volume attributes, the targetGroup and targetPortal must be defined. These should be similar
to those in the storage class.

Once created, a persistent volume claim can be made for this share and used in a pod.

## Configuration

Set up a local values file. It must contain the values specific to the
target appliance, but can contain others. The minimum set of values to
customize are listed below (a sample local-values.yaml follows the list):

* appliance:
    * targetGroup: the target iSCSI group to use on the appliance (the group that contains the data path interfaces)
    * pool: the pool to create shares in
    * project: the project to create shares in
    * targetPortal: the target iSCSI portal on the appliance
    * nfsServer: the NFS data path IP address
* applianceName: name of the appliance
* pvExistingName: the name of the iSCSI LUN share on the target appliance
* volSize: the size of the iSCSI LUN share specified by pvExistingName
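
For illustration only (every value below is a placeholder to be replaced for your environment; the keys follow this chart's values.yaml), such a file could look like:

```yaml
# Sample local-values.yaml for the block-pv example; all values are placeholders
applianceName: zfssa.example.com
pvExistingName: lun0
volSize: 10Gi
appliance:
  targetGroup: csi-data-path-target
  pool: p0
  project: default
  targetPortal: "192.0.2.10:3260"
  nfsServer: "192.0.2.10"
```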

On the target appliance, locate the share via the CLI:

```
appliance> shares
appliance> select <pool name>
appliance> select <project name>
appliance> select <share name>
appliance> set initiatorgroup=com.sun.ms.vss.hg.maskAll
appliance> commit
```

## Deployment

Assuming there is a set of values in the local-values directory, deploy using Helm 3:

```
helm install -f ../local-values/local-values.yaml zfssa-block-existing ./
```

Once deployed, verify each of the created entities using kubectl:

```
kubectl get sc
kubectl get pvc
kubectl get pod
```

## Writing data

Once the pod is deployed, for a demo, start the following analytics in a worksheet on
the Oracle ZFS Storage Appliance that is hosting the target LUNs:

* Protocol -> iSCSI bytes broken down by initiator
* Protocol -> iSCSI bytes broken down by target
* Protocol -> iSCSI bytes broken down by LUN

Exec into the pod and write some data to the block volume:

```text
kubectl exec -it zfssa-block-existing-pod -- /bin/sh
/ # cd /dev
/dev # ls
block    fd    mqueue    ptmx    random    stderr    stdout    tty    zero
core    full    null    pts    shm    stdin    termination-log    urandom
/dev # dd if=/dev/zero of=/dev/block count=1024 bs=1024
1024+0 records in
1024+0 records out
/dev #
```

The analytics on the appliance should show spikes as the data is written.

examples/block-pv/templates/00-storage-class.yaml (Normal file, 20 lines)
@@ -0,0 +1,20 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: {{ .Values.scBlockName }}
provisioner: zfssa-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  volumeType: {{ .Values.appliance.volumeType }}
  targetGroup: {{ .Values.appliance.targetGroup }}
  blockSize: "8192"
  pool: {{ .Values.appliance.pool }}
  project: {{ .Values.appliance.project }}
  targetPortal: {{ .Values.appliance.targetPortal }}
  nfsServer: {{ .Values.appliance.nfsServer }}
  rootUser: {{ .Values.appliance.rootUser }}
  rootGroup: {{ .Values.appliance.rootGroup }}
  rootPermissions: "777"
  shareNFS: "on"
  restrictChown: "false"

examples/block-pv/templates/01-pv.yaml (Normal file, 24 lines)
@@ -0,0 +1,24 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .Values.pvExistingName }}
  annotations:
    pv.kubernetes.io/provisioned-by: zfssa-csi-driver
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: {{ .Values.volSize }}
  csi:
    driver: zfssa-csi-driver
    volumeHandle: /iscsi/{{ .Values.applianceName }}/{{ .Values.pvExistingName }}/{{ .Values.appliance.pool }}/local/{{ .Values.appliance.project }}/{{ .Values.pvExistingName }}
    readOnly: false
    volumeAttributes:
      targetGroup: {{ .Values.appliance.targetGroup }}
      targetPortal: {{ .Values.appliance.targetPortal }}
  claimRef:
    namespace: default
    name: {{ .Values.pvcExistingName }}

examples/block-pv/templates/02-pvc.yaml (Normal file, 12 lines)
@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Values.pvcExistingName }}
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: {{ .Values.volSize }}
  storageClassName: {{ .Values.scBlockName }}

examples/block-pv/templates/03-pod.yaml (Normal file, 20 lines)
@@ -0,0 +1,20 @@
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Values.podBlockName }}
  labels:
    name: ol7slim-test
spec:
  restartPolicy: Always
  containers:
    - image: container-registry.oracle.com/os/oraclelinux:7-slim
      command: ["/bin/sh", "-c"]
      args: [ "tail -f /dev/null" ]
      name: ol7slim
      volumeDevices:
        - name: vol
          devicePath: /dev/block
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: {{ .Values.pvcExistingName }}

examples/block-pv/values.yaml (Normal file, 20 lines)
@@ -0,0 +1,20 @@
# Various names used through example
scBlockName: zfssa-block-existing-sc
pvExistingName: OVERRIDE
pvcExistingName: zfssa-block-existing-pvc
podBlockName: zfssa-block-existing-pod
applianceName: OVERRIDE

# Settings for target appliance
appliance:
  volumeType: thin
  targetGroup: OVERRIDE
  pool: OVERRIDE
  project: OVERRIDE
  targetPortal: OVERRIDE
  nfsServer: OVERRIDE
  rootUser: root
  rootGroup: other

# Settings for volume
volSize: OVERRIDE

examples/block-snapshot/block-pod-restored-volume.yaml (Normal file, 21 lines)
@@ -0,0 +1,21 @@
apiVersion: v1
kind: Pod
metadata:
  name: zfssa-block-vs-restore-pod
  labels:
    name: ol7slim-test
spec:
  restartPolicy: Always
  containers:
    - image: container-registry.oracle.com/os/oraclelinux:7-slim
      command: ["/bin/sh", "-c"]
      args: [ "tail -f /dev/null" ]
      name: ol7slim
      volumeDevices:
        - name: vol
          devicePath: /dev/block
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: zfssa-block-vs-restore-pvc
        readOnly: false

examples/block-snapshot/block-pvc-from-snapshot.yaml (Normal file, 16 lines)
@@ -0,0 +1,16 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zfssa-block-vs-restore-pvc
spec:
  storageClassName: zfssa-block-vs-example-sc
  dataSource:
    name: zfssa-block-vs-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 68796

examples/block-snapshot/block-snapshot.yaml (Normal file, 8 lines)
@@ -0,0 +1,8 @@
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: zfssa-block-vs-snapshot
spec:
  volumeSnapshotClassName: zfssa-block-vs-example-vsc
  source:
    persistentVolumeClaimName: zfssa-block-vs-example-pvc

examples/block-vsc/Chart.yaml (Normal file, 4 lines)
@@ -0,0 +1,4 @@
apiVersion: v1
name: zfssa-csi-block-example
version: 0.0.1
description: Deploys an end to end iSCSI volume example for Oracle ZFS Storage Appliance CSI driver.

examples/block-vsc/README.md (Normal file, 193 lines)
@@ -0,0 +1,193 @@
# Introduction

This is an end-to-end example of taking a snapshot of a block volume (iSCSI LUN)
on a target Oracle ZFS Storage Appliance and making use of it
from another pod by creating (restoring) a volume from the snapshot.

Prior to running this example, the iSCSI environment must be set up properly
on both the Kubernetes worker nodes and the Oracle ZFS Storage Appliance.
Refer to the [INSTALLATION](../../INSTALLATION.md) instructions for details.

## Configuration

Set up a local values file. It must contain the values specific to the
target appliance, but can contain others. The minimum set of values to
customize are listed below (a sample local-values.yaml follows the list):

* appliance:
    * pool: the pool to create shares in
    * project: the project to create shares in
    * targetPortal: the target iSCSI portal on the appliance
    * targetGroup: the target iSCSI group to use on the appliance
* volSize: the size of the iSCSI LUN share to create
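
For illustration only (every value below is a placeholder; nfsServer is included because it is also marked OVERRIDE in this chart's values.yaml), such a file could look like:

```yaml
# Sample local-values.yaml for the block-vsc example; all values are placeholders
volSize: 10Gi
appliance:
  targetGroup: csi-data-path-target
  pool: p0
  project: default
  targetPortal: "192.0.2.10:3260"
  nfsServer: "192.0.2.10"
```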

## Enabling Volume Snapshot Feature (Only for Kubernetes v1.17 - v1.19)

The Kubernetes Volume Snapshot feature became GA in Kubernetes v1.20. In order to use
this feature in Kubernetes pre-v1.20, it MUST be enabled prior to deploying the ZS CSI Driver.
To enable the feature on Kubernetes pre-v1.20, follow the instructions in
[INSTALLATION](../../INSTALLATION.md).

## Deployment

This step deploys a pod with a block volume attached, using a regular
storage class and a persistent volume claim. It also deploys the volume snapshot class
required to take snapshots of the persistent volume.

Assuming there is a set of values in the local-values directory, deploy using Helm 3:

```text
helm install -f ../local-values/local-values.yaml zfssa-block-vsc ./
```

Once deployed, verify each of the created entities using kubectl:

1. Display the storage class (SC)

   The command `kubectl get sc` should now return something similar to this:

   ```text
   NAME                        PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
   zfssa-block-vs-example-sc   zfssa-csi-driver   Delete          Immediate           false                  86s
   ```

2. Display the volume claim

   The command `kubectl get pvc` should now return something similar to this:

   ```text
   NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                AGE
   zfssa-block-vs-example-pvc   Bound    pvc-477804b4-e592-4039-a77c-a1c99a1e537b   10Gi       RWO            zfssa-block-vs-example-sc   62s
   ```

3. Display the volume snapshot class

   The command `kubectl get volumesnapshotclass` should now return something similar to this:

   ```text
   NAME                         DRIVER             DELETIONPOLICY   AGE
   zfssa-block-vs-example-vsc   zfssa-csi-driver   Delete           100s
   ```

4. Display the pod mounting the volume

   The command `kubectl get pod` should now return something similar to this:

   ```text
   NAME                         READY   STATUS    RESTARTS   AGE
   snapshot-controller-0        1/1     Running   0          14d
   zfssa-block-vs-example-pod   1/1     Running   0          2m11s
   zfssa-csi-nodeplugin-7kj5m   2/2     Running   0          3m11s
   zfssa-csi-nodeplugin-rgfzf   2/2     Running   0          3m11s
   zfssa-csi-provisioner-0      4/4     Running   0          3m11s
   ```

## Writing data

Once the pod is deployed, verify the block volume is mounted and can be written.

```text
kubectl exec -it zfssa-block-vs-example-pod -- /bin/sh

/ # cd /dev
/dev #
/dev # date > block
/dev # dd if=block bs=64 count=1
Wed Jan 27 22:06:36 UTC 2021
1+0 records in
1+0 records out
/dev #
```

Alternatively, `cat /dev/block` followed by `CTRL-C` can be used to view the timestamp written to the /dev/block device file.

## Creating snapshot

Use the configuration files in the examples/block-snapshot directory, with proper modifications,
for the rest of the example steps.

Create a snapshot of the volume by running the command below:

```text
kubectl apply -f ../block-snapshot/block-snapshot.yaml
```

Verify the volume snapshot is created and available by running the following command:

```text
kubectl get volumesnapshot
```

Wait until the READYTOUSE of the snapshot becomes true before moving on to the next steps.
It is important to use the RESTORESIZE value of the volume snapshot just created when specifying
the storage capacity of a persistent volume claim to provision a persistent volume using this
snapshot. For example, the storage capacity in ../block-snapshot/block-pvc-from-snapshot.yaml
(68796) was set to match the RESTORESIZE reported for the snapshot taken here.
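
For illustration only, the `kubectl get volumesnapshot` output might resemble the following (the exact columns vary with the snapshot CRD version in use); READYTOUSE and RESTORESIZE are the columns referenced above:

```text
NAME                      READYTOUSE   SOURCEPVC                    RESTORESIZE   SNAPSHOTCLASS                AGE
zfssa-block-vs-snapshot   true         zfssa-block-vs-example-pvc   68796         zfssa-block-vs-example-vsc   2m
```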

Optionally, verify the volume snapshot exists on the ZFS Storage Appliance. The snapshot name
on the ZFS Storage Appliance should have the volume snapshot UID as the suffix.

## Creating persistent volume claim

Create a persistent volume claim to provision a volume from the snapshot by running
the command below. Be aware that the persistent volume provisioned by this persistent volume claim
is not expandable; to get an expandable volume, create a new storage class with allowVolumeExpansion: true
and use it when specifying the persistent volume claim.

```text
kubectl apply -f ../block-snapshot/block-pvc-from-snapshot.yaml
```

Verify the persistent volume claim is created and a volume is provisioned by running the following command:

```text
kubectl get pv,pvc
```

The command `kubectl get pv,pvc` should return something similar to this:

```text
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                STORAGECLASS                REASON   AGE
persistentvolume/pvc-477804b4-e592-4039-a77c-a1c99a1e537b   10Gi       RWO            Delete           Bound    default/zfssa-block-vs-example-pvc   zfssa-block-vs-example-sc            13m
persistentvolume/pvc-91f949f6-5d77-4183-bab5-adfdb1452a90   10Gi       RWO            Delete           Bound    default/zfssa-block-vs-restore-pvc   zfssa-block-vs-example-sc            11s

NAME                                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                AGE
persistentvolumeclaim/zfssa-block-vs-example-pvc   Bound    pvc-477804b4-e592-4039-a77c-a1c99a1e537b   10Gi       RWO            zfssa-block-vs-example-sc   13m
persistentvolumeclaim/zfssa-block-vs-restore-pvc   Bound    pvc-91f949f6-5d77-4183-bab5-adfdb1452a90   10Gi       RWO            zfssa-block-vs-example-sc   16s
```

Optionally, verify the new volume exists on the ZFS Storage Appliance. Notice that the new
volume is a clone of the snapshot taken from the original volume.

## Creating pod using restored volume

Create a pod with the persistent volume claim created in the above step by running the command below:

```text
kubectl apply -f ../block-snapshot/block-pod-restored-volume.yaml
```

The command `kubectl get pod` should now return something similar to this:

```text
NAME                         READY   STATUS    RESTARTS   AGE
snapshot-controller-0        1/1     Running   0          14d
zfssa-block-vs-example-pod   1/1     Running   0          15m
zfssa-block-vs-restore-pod   1/1     Running   0          21s
zfssa-csi-nodeplugin-7kj5m   2/2     Running   0          16m
zfssa-csi-nodeplugin-rgfzf   2/2     Running   0          16m
zfssa-csi-provisioner-0      4/4     Running   0          16m
```

Verify the new volume has the contents of the original volume at the point in time
when the snapshot was taken.

```text
kubectl exec -it zfssa-block-vs-restore-pod -- /bin/sh

/ # cd /dev
/dev # dd if=block bs=64 count=1
Wed Jan 27 22:06:36 UTC 2021
1+0 records in
1+0 records out
/dev #
```

## Deleting pod, persistent volume claim and volume snapshot

To delete the pod, persistent volume claim and volume snapshot created in the above steps,
run the commands below. Wait until the resources being deleted disappear from
the list that the `kubectl get ...` command displays before running the next command.

```text
kubectl delete -f ../block-snapshot/block-pod-restored-volume.yaml
kubectl delete -f ../block-snapshot/block-pvc-from-snapshot.yaml
kubectl delete -f ../block-snapshot/block-snapshot.yaml
```

examples/block-vsc/templates/00-storage-class.yaml (Normal file, 20 lines)
@@ -0,0 +1,20 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: {{ .Values.scBlockName }}
provisioner: zfssa-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  volumeType: {{ .Values.appliance.volumeType }}
  targetGroup: {{ .Values.appliance.targetGroup }}
  blockSize: "8192"
  pool: {{ .Values.appliance.pool }}
  project: {{ .Values.appliance.project }}
  targetPortal: {{ .Values.appliance.targetPortal }}
  nfsServer: {{ .Values.appliance.nfsServer }}
  rootUser: {{ .Values.appliance.rootUser }}
  rootGroup: {{ .Values.appliance.rootGroup }}
  rootPermissions: "777"
  shareNFS: "on"
  restrictChown: "false"

examples/block-vsc/templates/01-pvc.yaml (Normal file, 12 lines)
@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Values.pvcBlockName }}
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: {{ .Values.volSize }}
  storageClassName: {{ .Values.scBlockName }}

@@ -0,0 +1,6 @@
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: {{ .Values.vscBlockName }}
driver: zfssa-csi-driver
deletionPolicy: Delete

examples/block-vsc/templates/03-pod.yaml (Normal file, 20 lines)
@@ -0,0 +1,20 @@
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Values.podBlockName }}
  labels:
    name: ol7slim-test
spec:
  restartPolicy: Always
  containers:
    - image: container-registry.oracle.com/os/oraclelinux:7-slim
      command: ["/bin/sh", "-c"]
      args: [ "tail -f /dev/null" ]
      name: ol7slim
      volumeDevices:
        - name: vol
          devicePath: /dev/block
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: {{ .Values.pvcBlockName }}

examples/block-vsc/values.yaml (Normal file, 19 lines)
@@ -0,0 +1,19 @@
# Various names used through example
scBlockName: zfssa-block-example-sc
vscBlockName: zfssa-block-example-vsc
pvcBlockName: zfssa-block-example-pvc
podBlockName: zfssa-block-example-pod

# Settings for target appliance
appliance:
  volumeType: thin
  targetGroup: OVERRIDE
  pool: OVERRIDE
  project: OVERRIDE
  targetPortal: OVERRIDE
  nfsServer: OVERRIDE
  rootUser: root
  rootGroup: other

# Settings for volume
volSize: OVERRIDE

examples/block/Chart.yaml (Normal file, 4 lines)
@@ -0,0 +1,4 @@
apiVersion: v1
name: zfssa-csi-block-example
version: 0.0.1
description: Deploys an end to end iSCSI volume example for Oracle ZFS Storage Appliance CSI driver.

examples/block/README.md (Normal file, 63 lines)
@@ -0,0 +1,63 @@
# Introduction

This is an end-to-end example of using iSCSI block devices on a target
Oracle ZFS Storage Appliance.

Prior to running this example, the iSCSI environment must be set up properly
on both the Kubernetes worker nodes and the Oracle ZFS Storage Appliance.
Refer to the [INSTALLATION](../../INSTALLATION.md) instructions for details.

## Configuration

Set up a local values file. It must contain the values specific to the
target appliance, but can contain others. The minimum set of values to
customize are listed below (a sample local-values.yaml follows the list):

* appliance:
    * targetGroup: the target iSCSI group to use on the appliance (the group that contains the data path interfaces)
    * pool: the pool to create shares in
    * project: the project to create shares in
    * targetPortal: the target iSCSI portal on the appliance
    * nfsServer: the NFS data path IP address
* volSize: the size of the block volume (iSCSI LUN) to create
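
For illustration only (all values below are placeholders; the keys follow this chart's values.yaml), such a file could look like:

```yaml
# Sample local-values.yaml for the block example; all values are placeholders
volSize: 10Gi
appliance:
  targetGroup: csi-data-path-target
  pool: p0
  project: default
  targetPortal: "192.0.2.10:3260"
  nfsServer: "192.0.2.10"
```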

## Deployment

Assuming there is a set of values in the local-values directory, deploy using Helm 3:

```
helm install -f ../local-values/local-values.yaml zfssa-block ./
```

Once deployed, verify each of the created entities using kubectl:

```
kubectl get sc
kubectl get pvc
kubectl get pod
```

## Writing data

Once the pod is deployed, for a demo, start the following analytics in a worksheet on
the Oracle ZFS Storage Appliance that is hosting the target LUNs:

* Protocol -> iSCSI bytes broken down by initiator
* Protocol -> iSCSI bytes broken down by target
* Protocol -> iSCSI bytes broken down by LUN

Exec into the pod and write some data to the block volume:

```text
kubectl exec -it zfssa-block-example-pod -- /bin/sh
/ # cd /dev
/dev # ls
block    fd    mqueue    ptmx    random    stderr    stdout    tty    zero
core    full    null    pts    shm    stdin    termination-log    urandom
/dev # dd if=/dev/zero of=/dev/block count=1024 bs=1024
1024+0 records in
1024+0 records out
/dev #
```

The analytics on the appliance should show spikes as the data is written.

examples/block/templates/00-storage-class.yaml (Normal file, 20 lines)
@@ -0,0 +1,20 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: {{ .Values.scBlockName }}
provisioner: zfssa-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  volumeType: {{ .Values.appliance.volumeType }}
  targetGroup: {{ .Values.appliance.targetGroup }}
  blockSize: "8192"
  pool: {{ .Values.appliance.pool }}
  project: {{ .Values.appliance.project }}
  targetPortal: {{ .Values.appliance.targetPortal }}
  nfsServer: {{ .Values.appliance.nfsServer }}
  rootUser: {{ .Values.appliance.rootUser }}
  rootGroup: {{ .Values.appliance.rootGroup }}
  rootPermissions: "777"
  shareNFS: "on"
  restrictChown: "false"

examples/block/templates/01-pvc.yaml (Normal file, 12 lines)
@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Values.pvcBlockName }}
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: {{ .Values.volSize }}
  storageClassName: {{ .Values.scBlockName }}

examples/block/templates/02-pod.yaml (Normal file, 20 lines)
@@ -0,0 +1,20 @@
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Values.podBlockName }}
  labels:
    name: ol7slim-test
spec:
  restartPolicy: Always
  containers:
    - image: container-registry.oracle.com/os/oraclelinux:7-slim
      command: ["/bin/sh", "-c"]
      args: [ "tail -f /dev/null" ]
      name: ol7slim
      volumeDevices:
        - name: vol
          devicePath: /dev/block
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: {{ .Values.pvcBlockName }}

examples/block/values.yaml (Normal file, 18 lines)
@@ -0,0 +1,18 @@
# Various names used through example
scBlockName: zfssa-block-example-sc
pvcBlockName: zfssa-block-example-pvc
podBlockName: zfssa-block-example-pod

# Settings for target appliance
appliance:
  volumeType: thin
  targetGroup: OVERRIDE
  pool: OVERRIDE
  project: OVERRIDE
  targetPortal: OVERRIDE
  nfsServer: OVERRIDE
  rootUser: root
  rootGroup: other

# Settings for volume
volSize: OVERRIDE

examples/helm/sauron-storage/Chart.yaml (Normal file, 4 lines)
@@ -0,0 +1,4 @@
apiVersion: v1
description: Creates Storageclass and Persistent Volume Claim used by Sauron.
name: sauron-storage
version: 3.0.1

examples/helm/sauron-storage/templates/00-sauron-sc.yaml (Normal file, 20 lines)
@@ -0,0 +1,20 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: {{ .Values.storageClass.name }}
provisioner: zfssa-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  volumeType: {{ .Values.storageClass.volumeType }}
  targetGroup: {{ .Values.storageClass.targetGroup }}
  blockSize: {{ .Values.storageClass.blockSize }}
  pool: {{ .Values.storageClass.pool }}
  project: {{ .Values.storageClass.project }}
  targetPortal: {{ .Values.storageClass.targetPortal }}
  nfsServer: {{ .Values.storageClass.nfsServer }}
  rootUser: {{ .Values.storageClass.rootUser }}
  rootGroup: {{ .Values.storageClass.rootGroup }}
  rootPermissions: {{ .Values.storageClass.rootPermissions }}
  shareNFS: {{ .Values.storageClass.shareNFS }}
  restrictChown: {{ .Values.storageClass.restrictChown }}

examples/helm/sauron-storage/templates/01-sauron-pvc.yaml (Normal file, 75 lines)
@@ -0,0 +1,75 @@
{{- if .Values.persistentVolumeClaim.enabled -}}
kind: Namespace
apiVersion: v1
metadata:
  name: {{ .Values.persistentVolumeClaim.namespace }}

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ssec0
  namespace: {{ .Values.persistentVolumeClaim.namespace }}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ .Values.persistentVolumeClaim.size }}
  storageClassName: {{ .Values.storageClass.name }}

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ssec1
  namespace: {{ .Values.persistentVolumeClaim.namespace }}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ .Values.persistentVolumeClaim.size }}
  storageClassName: {{ .Values.storageClass.name }}

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ssec2
  namespace: {{ .Values.persistentVolumeClaim.namespace }}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ .Values.persistentVolumeClaim.size }}
  storageClassName: {{ .Values.storageClass.name }}

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ssg
  namespace: {{ .Values.persistentVolumeClaim.namespace }}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ .Values.persistentVolumeClaim.size }}
  storageClassName: {{ .Values.storageClass.name }}

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ssp-many
  namespace: {{ .Values.persistentVolumeClaim.namespace }}
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: {{ .Values.persistentVolumeClaim.size }}
  storageClassName: {{ .Values.storageClass.name }}
{{- end }}

examples/helm/sauron-storage/values.yaml (Normal file, 21 lines)
@@ -0,0 +1,21 @@
# Define Storage Class Parameters
storageClass:
  name: "sauron-sc"
  blockSize: '"8192"'
  pool: h1-pool1
  project: pmonday
  targetPortal: '"10.80.44.65:3260"'
  nfsServer: '"10.80.44.65"'
  rootUser: nobody
  rootGroup: other
  rootPermissions: '"777"'
  shareNFS: '"on"'
  restrictChown: '"false"'
  volumeType: '"thin"'
  targetGroup: '"csi-data-path-target"'

# Define Persistent Volume Claim Parameters.
persistentVolumeClaim:
  enabled: true
  namespace: sauron
  size: 100Gi
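
A side note, not part of the original chart: several of the values above are double quoted (for example '"8192"'). Because the storage class template substitutes them verbatim, they render as quoted YAML strings in the generated manifest, roughly like this illustrative fragment:

```yaml
# Illustrative fragment of the rendered 00-sauron-sc.yaml after substitution
parameters:
  blockSize: "8192"
  shareNFS: "on"
  restrictChown: "false"
```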

examples/local-values/README.md (Normal file, 6 lines)
@@ -0,0 +1,6 @@
# Introduction

This directory can contain local values files to be used with the helm charts.

Files in this directory should not be checked in to a source code control
system as they may contain passwords.

examples/nfs-exp/Chart.yaml (Normal file, 4 lines)
@@ -0,0 +1,4 @@
apiVersion: v1
name: zfssa-csi-nfs-exp-example
version: 0.0.1
description: Deploys an end to end NFS volume example for Oracle ZFS Storage Appliance CSI driver.

examples/nfs-exp/README.md (Normal file, 174 lines)
@@ -0,0 +1,174 @@
# Introduction

This is an end-to-end example of using NFS filesystems and expanding the volume size
on a target Oracle ZFS Storage Appliance.

Prior to running this example, the NFS environment must be set up properly
on both the Kubernetes worker nodes and the Oracle ZFS Storage Appliance.
Refer to the [INSTALLATION](../../INSTALLATION.md) instructions for details.

## Configuration

Set up a local values file. It must contain the values specific to the
target appliance, but can contain others. The minimum set of values to
customize are listed below (a sample local-values.yaml follows the list):

* appliance:
    * targetGroup: the target group that contains data path interfaces on the target appliance
    * pool: the pool to create shares in
    * project: the project to create shares in
    * targetPortal: the target iSCSI portal on the appliance
    * nfsServer: the NFS data path IP address
* volSize: the size of the filesystem share to create
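
For illustration only (all values below are placeholders; the keys follow this chart's values.yaml), such a file could look like:

```yaml
# Sample local-values.yaml for the nfs-exp example; all values are placeholders
volSize: 10Gi
appliance:
  targetGroup: csi-data-path-target
  pool: p0
  project: default
  targetPortal: "192.0.2.10:3260"
  nfsServer: "192.0.2.10"
```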

## Deployment

Assuming there is a set of values in the local-values directory, deploy using Helm 3:

```
helm install -f ../local-values/local-values.yaml zfssa-nfs-exp ./
```

Once deployed, verify each of the created entities using kubectl:

1. Display the storage class (SC)

   The command `kubectl get sc` should now return something similar to this:

   ```text
   NAME                       PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
   zfssa-nfs-exp-example-sc   zfssa-csi-driver   Delete          Immediate           true                   15s
   ```

2. Display the volume claim

   The command `kubectl get pvc` should now return something similar to this:

   ```text
   NAME                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS               AGE
   zfssa-nfs-exp-example-pvc   Bound    pvc-8325aaa0-bbe3-495b-abb0-0c43cc309624   10Gi       RWX            zfssa-nfs-exp-example-sc   108s
   ```

3. Display the pod mounting the volume

   The command `kubectl get pod` should now return something similar to this:

   ```text
   NAME                         READY   STATUS    RESTARTS   AGE
   zfssa-csi-nodeplugin-xmv96   2/2     Running   0          43m
   zfssa-csi-nodeplugin-z5tmm   2/2     Running   0          43m
   zfssa-csi-provisioner-0      4/4     Running   0          43m
   zfssa-nfs-exp-example-pod    1/1     Running   0          3m23s
   ```

## Writing data

Once the pod is deployed, for a demo, you can start analytics in a worksheet on
the Oracle ZFS Storage Appliance that is hosting the target filesystems.

Exec into the pod and write some data to the NFS volume:

```text
kubectl exec -it zfssa-nfs-exp-example-pod -- /bin/sh

/ # cd /mnt
/mnt # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  38.4G     15.0G     23.4G  39% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                    14.6G         0     14.6G   0% /sys/fs/cgroup
shm                      64.0M         0     64.0M   0% /dev/shm
tmpfs                    14.6G      1.4G     13.2G   9% /tmp/resolv.conf
tmpfs                    14.6G      1.4G     13.2G   9% /etc/hostname
<ZFSSA_IP_ADDR>:/export/pvc-8325aaa0-bbe3-495b-abb0-0c43cc309624
                         10.0G         0     10.0G   0% /mnt
...
/mnt # dd if=/dev/zero of=/mnt/data count=1024 bs=1024
1024+0 records in
1024+0 records out
/mnt # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  38.4G     15.0G     23.4G  39% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                    14.6G         0     14.6G   0% /sys/fs/cgroup
shm                      64.0M         0     64.0M   0% /dev/shm
tmpfs                    14.6G      1.4G     13.2G   9% /tmp/resolv.conf
tmpfs                    14.6G      1.4G     13.2G   9% /etc/hostname
<ZFSSA_IP_ADDR>:/export/pvc-8325aaa0-bbe3-495b-abb0-0c43cc309624
                         10.0G      1.0M     10.0G   0% /mnt

/mnt #
```

The analytics on the appliance should show spikes as the data is written.

## Expanding volume capacity

After verifying the initially requested capacity of the NFS volume is provisioned and usable,
exercise expanding the volume capacity by editing the deployed persistent volume claim.

Copy ./templates/01-pvc.yaml to /tmp/nfs-exp-pvc.yaml and modify this yaml file for volume expansion, for example:

```text
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zfssa-nfs-exp-example-pvc
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: "20Gi"
  storageClassName: zfssa-nfs-exp-example-sc
```

Then, apply the updated PVC configuration by running the 'kubectl apply -f /tmp/nfs-exp-pvc.yaml' command. Note that the command will return a warning message similar to the following:

```text
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
```

Alternatively, you can perform volume expansion on the fly using the 'kubectl edit' command.

```text
kubectl edit pvc/zfssa-nfs-exp-example-pvc

...
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: zfssa-nfs-exp-example-sc
  volumeMode: Filesystem
  volumeName: pvc-27281fde-be45-436d-99a3-b45cddbc74d1
status:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 10Gi
  phase: Bound
...
```

Modify the capacity from 10Gi to 20Gi in both the spec and status sections, then save and exit the edit mode.
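
As a further option, not part of the original example, the same spec change can be applied non-interactively with a patch; the PVC name below is the one used throughout this example:

```text
kubectl patch pvc zfssa-nfs-exp-example-pvc -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```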

The command `kubectl get pv,pvc` should now return something similar to this:

```text
kubectl get pv,pvc,sc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                               STORAGECLASS               REASON   AGE
persistentvolume/pvc-8325aaa0-bbe3-495b-abb0-0c43cc309624   20Gi       RWX            Delete           Bound    default/zfssa-nfs-exp-example-pvc   zfssa-nfs-exp-example-sc            129s

NAME                                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS               AGE
persistentvolumeclaim/zfssa-nfs-exp-example-pvc   Bound    pvc-8325aaa0-bbe3-495b-abb0-0c43cc309624   20Gi       RWX            zfssa-nfs-exp-example-sc   132s
```

Exec into the pod and verify the size of the mounted NFS volume is expanded:

```text
kubectl exec -it zfssa-nfs-exp-example-pod -- /bin/sh

/ # cd /mnt
/mnt # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  38.4G     15.0G     23.4G  39% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                    14.6G         0     14.6G   0% /sys/fs/cgroup
shm                      64.0M         0     64.0M   0% /dev/shm
tmpfs                    14.6G      1.4G     13.2G   9% /tmp/resolv.conf
tmpfs                    14.6G      1.4G     13.2G   9% /etc/hostname
<ZFSSA_IP_ADDR>:/export/pvc-8325aaa0-bbe3-495b-abb0-0c43cc309624
                         20.0G      1.0M     20.0G   0% /mnt
...
```

examples/nfs-exp/templates/00-storage-class.yaml (Normal file, 21 lines)
@@ -0,0 +1,21 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: {{ .Values.scNfsName }}
provisioner: zfssa-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  volumeType: {{ .Values.appliance.volumeType }}
  targetGroup: {{ .Values.appliance.targetGroup }}
  blockSize: "8192"
  pool: {{ .Values.appliance.pool }}
  project: {{ .Values.appliance.project }}
  targetPortal: {{ .Values.appliance.targetPortal }}
  nfsServer: {{ .Values.appliance.nfsServer }}
  rootUser: {{ .Values.appliance.rootUser }}
  rootGroup: {{ .Values.appliance.rootGroup }}
  rootPermissions: "777"
  shareNFS: {{ .Values.appliance.shareNFS }}
  restrictChown: "false"

examples/nfs-exp/templates/01-pvc.yaml (Normal file, 12 lines)
@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Values.pvcNfsName }}
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: {{ .Values.volSize }}
  storageClassName: {{ .Values.scNfsName }}

examples/nfs-exp/templates/02-pod.yaml (Normal file, 21 lines)
@@ -0,0 +1,21 @@
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Values.podNfsName }}
  labels:
    name: ol7slim-test
spec:
  restartPolicy: Always
  containers:
    - image: container-registry.oracle.com/os/oraclelinux:7-slim
      command: ["/bin/sh", "-c"]
      args: [ "tail -f /dev/null" ]
      name: ol7slim
      volumeMounts:
        - name: vol
          mountPath: /mnt
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: {{ .Values.pvcNfsName }}
        readOnly: false

examples/nfs-exp/values.yaml (Normal file, 19 lines)
@@ -0,0 +1,19 @@
# Various names used through example
scNfsName: zfssa-nfs-exp-example-sc
pvcNfsName: zfssa-nfs-exp-example-pvc
podNfsName: zfssa-nfs-exp-example-pod

# Settings for target appliance
appliance:
  volumeType: thin
  targetGroup: OVERRIDE
  pool: OVERRIDE
  project: OVERRIDE
  targetPortal: OVERRIDE
  nfsServer: OVERRIDE
  rootUser: root
  rootGroup: other
  shareNFS: "on"

# Settings for volume
volSize: OVERRIDE

examples/nfs-multi/Chart.yaml (Normal file, 4 lines)
@@ -0,0 +1,4 @@
apiVersion: v1
name: zfssa-nfs-multi-example
version: 0.0.1
description: Deploys an end to end NFS volume example for Oracle ZFS Storage Appliance CSI driver.

examples/nfs-multi/README.md (Normal file, 46 lines)
@@ -0,0 +1,46 @@
# Introduction

This is an end-to-end example of using NFS filesystems on a target
Oracle ZFS Storage Appliance. It creates several PVCs and optionally
creates a pod to consume them.

This example also illustrates the use of namespaces with PVCs and pods.
Be aware that the PVCs and pods will be created in the user-defined namespace,
not in the default namespace as in the other examples.

Prior to running this example, the NFS environment must be set up properly
on both the Kubernetes worker nodes and the Oracle ZFS Storage Appliance.
Refer to the [INSTALLATION](../../INSTALLATION.md) instructions for details.

## Configuration

Set up a local values file. It must contain the values specific to the
target appliance, but can contain others. The minimum set of values to
customize are listed below (a sample local-values.yaml follows the list):

* appliance:
    * targetGroup: the target group that contains data path interfaces on the target appliance
    * pool: the pool to create shares in
    * project: the project to create shares in
    * targetPortal: the target iSCSI portal on the appliance
    * nfsServer: the NFS data path IP address
* volSize: the size of the filesystem share to create
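
For illustration only (all values below are placeholders; the keys follow this chart's values.yaml), such a file could look like:

```yaml
# Sample local-values.yaml for the nfs-multi example; all values are placeholders
volSize: 10Gi
appliance:
  targetGroup: csi-data-path-target
  pool: p0
  project: default
  targetPortal: "192.0.2.10:3260"
  nfsServer: "192.0.2.10"
```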

## Deployment

Assuming there is a set of values in the local-values directory, deploy using Helm 3:

```
helm install -f ../local-values/local-values.yaml zfssa-nfs-multi ./
```

## Check pod mounts

If you enabled the use of the test pod, exec into it and check the NFS volumes:

```
kubectl exec -n zfssa-nfs-multi -it zfssa-nfs-multi-example-pod -- /bin/sh
/ # cd /mnt
/mnt # ls
ssec0     ssec1     ssec2     ssg       ssp-many
```

examples/nfs-multi/templates/00-storage-class.yaml (Normal file, 20 lines)
@@ -0,0 +1,20 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: {{ .Values.scNfsMultiName }}
provisioner: zfssa-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  volumeType: {{ .Values.appliance.volumeType }}
  targetGroup: {{ .Values.appliance.targetGroup }}
  blockSize: "8192"
  pool: {{ .Values.appliance.pool }}
  project: {{ .Values.appliance.project }}
  targetPortal: {{ .Values.appliance.targetPortal }}
  nfsServer: {{ .Values.appliance.nfsServer }}
  rootUser: {{ .Values.appliance.rootUser }}
  rootGroup: {{ .Values.appliance.rootGroup }}
  rootPermissions: "777"
  shareNFS: {{ .Values.appliance.shareNFS }}
  restrictChown: "false"

examples/nfs-multi/templates/01-pvc.yaml (Normal file, 74 lines)
@@ -0,0 +1,74 @@
---
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.namespace }}

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Values.pvc0 }}
  namespace: {{ .Values.namespace }}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ .Values.volSize }}
  storageClassName: {{ .Values.scNfsMultiName }}

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Values.pvc1 }}
  namespace: {{ .Values.namespace }}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ .Values.volSize }}
  storageClassName: {{ .Values.scNfsMultiName }}

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Values.pvc2 }}
  namespace: {{ .Values.namespace }}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ .Values.volSize }}
  storageClassName: {{ .Values.scNfsMultiName }}

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Values.pvc3 }}
  namespace: {{ .Values.namespace }}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ .Values.volSize }}
  storageClassName: {{ .Values.scNfsMultiName }}

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Values.pvc4 }}
  namespace: {{ .Values.namespace }}
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: {{ .Values.volSize }}
  storageClassName: {{ .Values.scNfsMultiName }}

examples/nfs-multi/templates/02-pod.yaml (Normal file, 49 lines)
@@ -0,0 +1,49 @@
{{- if .Values.deployPod -}}
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Values.podNfsMultiName }}
  namespace: {{ .Values.namespace }}
  labels:
    name: ol7slim-test

spec:
  restartPolicy: Always
  containers:
    - image: container-registry.oracle.com/os/oraclelinux:7-slim
      command: ["/bin/sh", "-c"]
      args: [ "tail -f /dev/null" ]
      name: ol7slim
      volumeMounts:
        - name: vol0
          mountPath: /mnt/{{ .Values.pvc0 }}
        - name: vol1
          mountPath: /mnt/{{ .Values.pvc1 }}
        - name: vol2
          mountPath: /mnt/{{ .Values.pvc2 }}
        - name: vol3
          mountPath: /mnt/{{ .Values.pvc3 }}
        - name: vol4
          mountPath: /mnt/{{ .Values.pvc4 }}
  volumes:
    - name: vol0
      persistentVolumeClaim:
        claimName: {{ .Values.pvc0 }}
        readOnly: false
    - name: vol1
      persistentVolumeClaim:
        claimName: {{ .Values.pvc1 }}
        readOnly: false
    - name: vol2
      persistentVolumeClaim:
        claimName: {{ .Values.pvc2 }}
        readOnly: false
    - name: vol3
      persistentVolumeClaim:
        claimName: {{ .Values.pvc3 }}
        readOnly: false
    - name: vol4
      persistentVolumeClaim:
        claimName: {{ .Values.pvc4 }}
        readOnly: false
{{- end }}

examples/nfs-multi/values.yaml (Normal file, 27 lines)
@@ -0,0 +1,27 @@
# Various names used through example
scNfsMultiName: zfssa-nfs-multi-example-sc
pvc0: ssec0
pvc1: ssec1
pvc2: ssec2
pvc3: ssg
pvc4: ssp-many
podNfsMultiName: zfssa-nfs-multi-example-pod
namespace: zfssa-nfs-multi

# Settings for target appliance
appliance:
  volumeType: thin
  targetGroup: OVERRIDE
  pool: OVERRIDE
  project: OVERRIDE
  targetPortal: OVERRIDE
  nfsServer: OVERRIDE
  rootUser: root
  rootGroup: other
  shareNFS: "on"

# Settings for volume
volSize: OVERRIDE

# Deploy a pod to consume the volumes
deployPod: true

examples/nfs-pv/Chart.yaml (Normal file, 4 lines)
@@ -0,0 +1,4 @@
apiVersion: v1
name: zfssa-existing-fs-example
version: 0.0.1
description: Deploys an end to end filesystem example for an existing Oracle ZFS Storage Appliance CSI filesystem.

examples/nfs-pv/README.md (Normal file, 91 lines)
@@ -0,0 +1,91 @@
# Introduction

This is an end-to-end example of using an existing filesystem share on a target
Oracle ZFS Storage Appliance.

Prior to running this example, the NFS environment must be set up properly
on both the Kubernetes worker nodes and the Oracle ZFS Storage Appliance.
Refer to the [INSTALLATION](../../INSTALLATION.md) instructions for details.

The flow to use an existing volume is:
* create a persistent volume (PV) object
* allocate it to a persistent volume claim (PVC)
* use the PVC from a pod

The following must be set up:
* the volume handle must be a fully formed volume ID
* there must be volume attributes defined as part of the persistent volume

In this example, the volume handle is constructed from values in the helm
chart. The only new attribute necessary is the name of the volume on the
target appliance. The remainder is assembled from the information that is already
in the local-values.yaml file (appliance name, pool, project, etc.).

The resulting volumeHandle appears similar to the following, with the values
in ```<>``` filled in from the helm variables:

```
volumeHandle: /nfs/<appliance name>/<volume name>/<pool name>/local/<project name>/<volume name>
```

From the above, note that the volumeHandle is in the form of an ID with the components:
* 'nfs' - denotes an exported NFS share
* 'appliance name' - the management path of the ZFSSA target appliance
* 'volume name' - the name of the share on the appliance
* 'pool name' - the pool on the target appliance
* 'local' - denotes that the pool is owned by the head
* 'project' - the project that the share is in
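
For instance, with purely illustrative values (appliance management path `zfssa.example.com`, share `fs0`, pool `p0`, project `default`; substitute your own), the assembled handle would be:

```
volumeHandle: /nfs/zfssa.example.com/fs0/p0/local/default/fs0
```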

In the volume attributes, nfsServer must be defined.

Once created, a persistent volume claim can be made for this share and used in a pod.

## Configuration

Set up a local values file. It must contain the values specific to the
target appliance, but can contain others. The minimum set of values to
customize are listed below (a sample local-values.yaml follows the list):

* appliance:
    * targetGroup: the target group that contains data path interfaces on the target appliance
    * pool: the pool to create shares in
    * project: the project to create shares in
    * targetPortal: the target iSCSI portal on the appliance
    * nfsServer: the NFS data path IP address
* applianceName: the existing appliance name (this is the management path)
* pvExistingFilesystemName: the name of the filesystem share on the target appliance
* volMountPoint: the mount point on the target appliance of the filesystem share
* volSize: the size of the filesystem share
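
For illustration only (all values below are placeholders; the keys follow this chart's values.yaml), such a file could look like:

```yaml
# Sample local-values.yaml for the nfs-pv example; all values are placeholders
applianceName: zfssa.example.com
pvExistingFilesystemName: fs0
volMountPoint: /export/fs0
volSize: 10Gi
appliance:
  targetGroup: csi-data-path-target
  pool: p0
  project: default
  targetPortal: "192.0.2.10:3260"
  nfsServer: "192.0.2.10"
```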

On the target appliance, ensure that the filesystem share is exported via NFS.

## Deployment

Assuming there is a set of values in the local-values directory, deploy using Helm 3:

```
helm install -f ../local-values/local-values.yaml zfssa-nfs-existing ./
```

Once deployed, verify each of the created entities using kubectl:

```
kubectl get sc
kubectl get pvc
kubectl get pod
```

## Writing data

Once the pod is deployed, for a demo, you can start analytics in a worksheet on
the Oracle ZFS Storage Appliance that is hosting the target filesystems.

Exec into the pod and write some data to the NFS volume:

```text
kubectl exec -it zfssa-fs-existing-pod -- /bin/sh
/ # cd /mnt
/mnt # ls
/mnt # echo "hello world" > demo.txt
/mnt #
```

The analytics on the appliance should show spikes as the data is written.

examples/nfs-pv/templates/00-storage-class.yaml (Normal file, 20 lines)
@@ -0,0 +1,20 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: {{ .Values.scExistingFilesystemName }}
provisioner: zfssa-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  volumeType: {{ .Values.appliance.volumeType }}
  targetGroup: {{ .Values.appliance.targetGroup }}
  blockSize: "8192"
  pool: {{ .Values.appliance.pool }}
  project: {{ .Values.appliance.project }}
  targetPortal: {{ .Values.appliance.targetPortal }}
  nfsServer: {{ .Values.appliance.nfsServer }}
  rootUser: {{ .Values.appliance.rootUser }}
  rootGroup: {{ .Values.appliance.rootGroup }}
  rootPermissions: "777"
  shareNFS: "on"
  restrictChown: "false"

examples/nfs-pv/templates/01-pv.yaml (Normal file, 28 lines)
@@ -0,0 +1,28 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .Values.pvExistingFilesystemName }}
  annotations:
    pv.kubernetes.io/provisioned-by: zfssa-csi-driver
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: {{ .Values.volSize }}
  csi:
    driver: zfssa-csi-driver
    volumeHandle: /nfs/{{ .Values.applianceName }}/{{ .Values.pvExistingFilesystemName }}/{{ .Values.appliance.pool }}/local/{{ .Values.appliance.project }}/{{ .Values.pvExistingFilesystemName }}
    readOnly: false
    volumeAttributes:
      nfsServer: {{ .Values.appliance.nfsServer }}
      share: {{ .Values.volMountPoint }}
      rootGroup: {{ .Values.appliance.rootGroup }}
      rootPermissions: "777"
      rootUser: {{ .Values.appliance.rootUser }}

  claimRef:
    namespace: default
    name: {{ .Values.pvcExistingFilesystemName }}

examples/nfs-pv/templates/02-pvc.yaml (Normal file, 12 lines)
@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Values.pvcExistingFilesystemName }}
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: {{ .Values.volSize }}
  storageClassName: {{ .Values.scExistingFilesystemName }}
20
examples/nfs-pv/templates/03-pod.yaml
Normal file
@@ -0,0 +1,20 @@
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Values.podExistingFilesystemName }}
  labels:
    name: ol7slim-test
spec:
  restartPolicy: Always
  containers:
    - image: container-registry.oracle.com/os/oraclelinux:7-slim
      command: ["/bin/sh", "-c"]
      args: [ "tail -f /dev/null" ]
      name: ol7slim
      volumeMounts:
        - name: vol
          mountPath: /mnt
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: {{ .Values.pvcExistingFilesystemName }}
21
examples/nfs-pv/values.yaml
Normal file
@@ -0,0 +1,21 @@
# Various names used through example
scExistingFilesystemName: zfssa-fs-existing-sc
pvExistingFilesystemName: OVERRIDE
pvcExistingFilesystemName: zfssa-fs-existing-pvc
podExistingFilesystemName: zfssa-fs-existing-pod
applianceName: OVERRIDE

# Settings for target appliance
appliance:
  volumeType: thin
  targetGroup: OVERRIDE
  pool: OVERRIDE
  project: OVERRIDE
  targetPortal: OVERRIDE
  nfsServer: OVERRIDE
  rootUser: root
  rootGroup: other

# Settings for volume
volMountPoint: OVERRIDE
volSize: OVERRIDE
21
examples/nfs-snapshot/nfs-pod-restored-volume.yaml
Normal file
@@ -0,0 +1,21 @@
apiVersion: v1
kind: Pod
metadata:
  name: zfssa-nfs-vs-restore-pod
  labels:
    name: ol7slim-test
spec:
  restartPolicy: Always
  containers:
    - image: container-registry.oracle.com/os/oraclelinux:7-slim
      command: ["/bin/sh", "-c"]
      args: [ "tail -f /dev/null" ]
      name: ol7slim
      volumeMounts:
        - name: vol
          mountPath: /mnt
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: zfssa-nfs-vs-restore-pvc
        readOnly: false
16
examples/nfs-snapshot/nfs-pvc-from-snapshot.yaml
Normal file
@@ -0,0 +1,16 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zfssa-nfs-vs-restore-pvc
spec:
  storageClassName: zfssa-nfs-vs-example-sc
  dataSource:
    name: zfssa-nfs-vs-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 68796
8
examples/nfs-snapshot/nfs-snapshot.yaml
Normal file
@@ -0,0 +1,8 @@
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: zfssa-nfs-vs-snapshot
spec:
  volumeSnapshotClassName: zfssa-nfs-vs-example-vsc
  source:
    persistentVolumeClaimName: zfssa-nfs-vs-example-pvc
4
examples/nfs-vsc/Chart.yaml
Normal file
@@ -0,0 +1,4 @@
apiVersion: v1
name: zfssa-csi-nfs-vs-example
version: 0.0.1
description: Deploys an end to end NFS volume snapshot example for Oracle ZFS Storage Appliance CSI driver.
193
examples/nfs-vsc/README.md
Normal file
@@ -0,0 +1,193 @@
# Introduction

This is an end-to-end example of taking a snapshot of an NFS filesystem
volume on a target Oracle ZFS Storage Appliance and making use of it
on another pod by creating (restoring) a volume from the snapshot.

Prior to running this example, the NFS environment must be set up properly
on both the Kubernetes worker nodes and the Oracle ZFS Storage Appliance.
Refer to the [INSTALLATION](../../INSTALLATION.md) instructions for details.

## Configuration

Set up a local values file. It must contain the values that customize the example for the
target appliance, but can contain others. The minimum set of values to
customize are:

* appliance:
    * pool: the pool to create shares in
    * project: the project to create shares in
    * nfsServer: the NFS data path IP address
* volSize: the size of the filesystem share to create
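
For reference, the appliance section of local-values.yaml might look like the following sketch. All values are placeholders; the example resource names used throughout this README are set in the Deployment section below:

```yaml
# Hypothetical settings -- replace with values for your appliance
appliance:
  targetGroup: csi-tg
  pool: pool1
  project: k8s-project
  targetPortal: 192.0.2.10:3260
  nfsServer: 192.0.2.11

volSize: 10Gi
```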

## Enabling Volume Snapshot Feature (Only for Kubernetes v1.17 - v1.19)

The Kubernetes Volume Snapshot feature became GA in Kubernetes v1.20. In order to use
this feature in Kubernetes pre-v1.20, it MUST be enabled prior to deploying ZS CSI Driver.
To enable the feature on Kubernetes pre-v1.20, follow the instructions on
[INSTALLATION](../../INSTALLATION.md).

## Deployment

This step includes deploying a pod with an NFS volume attached using a regular
storage class and a persistent volume claim. It also deploys a volume snapshot class
required to take snapshots of the persistent volume.

Assuming there is a set of values in the local-values directory, deploy using Helm 3.
If you plan to exercise creating a volume from a snapshot with the given yaml files as they are,
define the names in local-values.yaml as follows (you can modify them to your preference):

```text
scNfsName: zfssa-nfs-vs-example-sc
vscNfsName: zfssa-nfs-vs-example-vsc
pvcNfsName: zfssa-nfs-vs-example-pvc
podNfsName: zfssa-nfs-vs-example-pod
```

```text
helm install -f ../local-values/local-values.yaml zfssa-nfs-vsc ./
```

Once deployed, verify each of the created entities using kubectl:

1. Display the storage class (SC)

   The command `kubectl get sc` should now return something similar to this:

   ```text
   NAME                      PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
   zfssa-nfs-vs-example-sc   zfssa-csi-driver   Delete          Immediate           false                  86s
   ```

2. Display the volume claim

   The command `kubectl get pvc` should now return something similar to this:

   ```text
   NAME                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS              AGE
   zfssa-nfs-vs-example-pvc   Bound    pvc-0c1e5351-dc1b-45a4-8f54-b28741d1003e   10Gi       RWX            zfssa-nfs-vs-example-sc   86s
   ```

3. Display the volume snapshot class

   The command `kubectl get volumesnapshotclass` should now return something similar to this:

   ```text
   NAME                       DRIVER             DELETIONPOLICY   AGE
   zfssa-nfs-vs-example-vsc   zfssa-csi-driver   Delete           86s
   ```

4. Display the pod mounting the volume

   The command `kubectl get pod` should now return something similar to this:

   ```text
   NAME                         READY   STATUS    RESTARTS   AGE
   snapshot-controller-0        1/1     Running   0          6d6h
   zfssa-csi-nodeplugin-dx2s4   2/2     Running   0          24m
   zfssa-csi-nodeplugin-q9h9w   2/2     Running   0          24m
   zfssa-csi-provisioner-0      4/4     Running   0          24m
   zfssa-nfs-vs-example-pod     1/1     Running   0          86s
   ```

## Writing data

Once the pod is deployed, verify the volume is mounted and can be written.

```text
kubectl exec -it zfssa-nfs-vs-example-pod -- /bin/sh

/ # cd /mnt
/mnt #
/mnt # date > timestamp.txt
/mnt # cat timestamp.txt
Tue Jan 19 23:13:10 UTC 2021
```

## Creating snapshot

For the rest of the example steps, use the configuration files in the examples/nfs-snapshot
directory, modified as needed.

Create a snapshot of the volume by running the command below:

```text
kubectl apply -f ../nfs-snapshot/nfs-snapshot.yaml
```

Verify the volume snapshot is created and available by running the following command:

```text
kubectl get volumesnapshot
```

Wait until the READYTOUSE of the snapshot becomes true before moving on to the next steps.
It is important to use the RESTORESIZE value of the volume snapshot just created when specifying
the storage capacity of a persistent volume claim to provision a persistent volume using this
snapshot. For example, set the storage capacity in ../nfs-snapshot/nfs-pvc-from-snapshot.yaml
to the RESTORESIZE value that `kubectl get volumesnapshot` reports for the snapshot.
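
One way to check both fields at once is sketched below, using standard kubectl output options; the field paths follow the snapshot.storage.k8s.io status fields:

```text
kubectl get volumesnapshot zfssa-nfs-vs-snapshot \
  -o custom-columns=READYTOUSE:.status.readyToUse,RESTORESIZE:.status.restoreSize
```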

Optionally, verify the volume snapshot exists on ZFS Storage Appliance. The snapshot name
on ZFS Storage Appliance should have the volume snapshot UID as the suffix.

## Creating persistent volume claim

Create a persistent volume claim to provision a volume from the snapshot by running
the command below. Be aware that the persistent volume provisioned by this persistent volume claim
is not expandable. To make it expandable, create a new storage class with allowVolumeExpansion: true
(see the sketch after the command below) and use that class when specifying the persistent volume claim.

```text
kubectl apply -f ../nfs-snapshot/nfs-pvc-from-snapshot.yaml
```
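
If an expandable restored volume is needed, a storage class along the following lines could be used instead. This is only a sketch: the class name is hypothetical and the parameter values are placeholders that should mirror the appliance settings used elsewhere in this example.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfssa-nfs-vs-expandable-sc      # hypothetical name
provisioner: zfssa-csi-driver
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  # placeholder values -- use the same appliance settings as the example storage class
  volumeType: thin
  targetGroup: csi-tg
  blockSize: "8192"
  pool: pool1
  project: k8s-project
  targetPortal: 192.0.2.10:3260
  nfsServer: 192.0.2.11
  rootUser: root
  rootGroup: other
  rootPermissions: "777"
  shareNFS: "on"
  restrictChown: "false"
```

The restored claim in ../nfs-snapshot/nfs-pvc-from-snapshot.yaml would then reference this class through its storageClassName field.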

Verify the persistent volume claim is created and a volume is provisioned by running the following command:

```text
kubectl get pv,pvc
```

The command `kubectl get pv,pvc` should return something similar to this:

```text
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                              STORAGECLASS              REASON   AGE
persistentvolume/pvc-0c1e5351-dc1b-45a4-8f54-b28741d1003e   10Gi       RWX            Delete           Bound    default/zfssa-nfs-vs-example-pvc   zfssa-nfs-vs-example-sc            34m
persistentvolume/pvc-59d8d447-302d-4438-a751-7271fbbe8238   10Gi       RWO            Delete           Bound    default/zfssa-nfs-vs-restore-pvc   zfssa-nfs-vs-example-sc            112s

NAME                                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS              AGE
persistentvolumeclaim/zfssa-nfs-vs-example-pvc   Bound    pvc-0c1e5351-dc1b-45a4-8f54-b28741d1003e   10Gi       RWX            zfssa-nfs-vs-example-sc   34m
persistentvolumeclaim/zfssa-nfs-vs-restore-pvc   Bound    pvc-59d8d447-302d-4438-a751-7271fbbe8238   10Gi       RWO            zfssa-nfs-vs-example-sc   116s
```

Optionally, verify the new volume exists on ZFS Storage Appliance. Notice that the new
volume is a clone of the snapshot taken from the original volume.

## Creating pod using restored volume

Create a pod with the persistent volume claim created from the above step by running the command below:

```text
kubectl apply -f ../nfs-snapshot/nfs-pod-restored-volume.yaml
```

The command `kubectl get pod` should now return something similar to this:

```text
NAME                         READY   STATUS    RESTARTS   AGE
snapshot-controller-0        1/1     Running   0          6d7h
zfssa-csi-nodeplugin-dx2s4   2/2     Running   0          68m
zfssa-csi-nodeplugin-q9h9w   2/2     Running   0          68m
zfssa-csi-provisioner-0      4/4     Running   0          68m
zfssa-nfs-vs-example-pod     1/1     Running   0          46m
zfssa-nfs-vs-restore-pod     1/1     Running   0          37s
```

Verify the new volume has the contents of the original volume at the point in time
when the snapshot was taken.

```text
kubectl exec -it zfssa-nfs-vs-restore-pod -- /bin/sh

/ # cd /mnt
/mnt #
/mnt # cat timestamp.txt
Tue Jan 19 23:13:10 UTC 2021
```

## Deleting pod, persistent volume claim and volume snapshot

To delete the pod, persistent volume claim and volume snapshot created in the above steps,
run the commands below. Wait until the resources being deleted disappear from the list
displayed by `kubectl get ...` before running the next command.

```text
kubectl delete -f ../nfs-snapshot/nfs-pod-restored-volume.yaml
kubectl delete -f ../nfs-snapshot/nfs-pvc-from-snapshot.yaml
kubectl delete -f ../nfs-snapshot/nfs-snapshot.yaml
```
20
examples/nfs-vsc/templates/00-storage-class.yaml
Normal file
@@ -0,0 +1,20 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: {{ .Values.scNfsName }}
provisioner: zfssa-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  volumeType: {{ .Values.appliance.volumeType }}
  targetGroup: {{ .Values.appliance.targetGroup }}
  blockSize: "8192"
  pool: {{ .Values.appliance.pool }}
  project: {{ .Values.appliance.project }}
  targetPortal: {{ .Values.appliance.targetPortal }}
  nfsServer: {{ .Values.appliance.nfsServer }}
  rootUser: {{ .Values.appliance.rootUser }}
  rootGroup: {{ .Values.appliance.rootGroup }}
  rootPermissions: "777"
  shareNFS: {{ .Values.appliance.shareNFS }}
  restrictChown: "false"
12
examples/nfs-vsc/templates/01-pvc.yaml
Normal file
@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Values.pvcNfsName }}
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: {{ .Values.volSize }}
  storageClassName: {{ .Values.scNfsName }}
6
examples/nfs-vsc/templates/02-volume-snapshot-class.yaml
Normal file
@@ -0,0 +1,6 @@
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: {{ .Values.vscNfsName }}
driver: zfssa-csi-driver
deletionPolicy: Delete
21
examples/nfs-vsc/templates/03-pod.yaml
Normal file
@@ -0,0 +1,21 @@
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Values.podNfsName }}
  labels:
    name: ol7slim-test
spec:
  restartPolicy: Always
  containers:
    - image: container-registry.oracle.com/os/oraclelinux:7-slim
      command: ["/bin/sh", "-c"]
      args: [ "tail -f /dev/null" ]
      name: ol7slim
      volumeMounts:
        - name: vol
          mountPath: /mnt
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: {{ .Values.pvcNfsName }}
        readOnly: false
20
examples/nfs-vsc/values.yaml
Normal file
@@ -0,0 +1,20 @@
# Various names used through example
scNfsName: zfssa-nfs-vs-example-sc
vscNfsName: zfssa-nfs-vs-example-vsc
pvcNfsName: zfssa-nfs-vs-example-pvc
podNfsName: zfssa-nfs-vs-example-pod

# Settings for target appliance
appliance:
  volumeType: thin
  targetGroup: OVERRIDE
  pool: OVERRIDE
  project: OVERRIDE
  targetPortal: OVERRIDE
  nfsServer: OVERRIDE
  rootUser: root
  rootGroup: other
  shareNFS: "on"

# Settings for volume
volSize: OVERRIDE
4
examples/nfs/Chart.yaml
Normal file
@@ -0,0 +1,4 @@
apiVersion: v1
name: zfssa-csi-nfs-example
version: 0.0.1
description: Deploys an end to end NFS volume example for Oracle ZFS Storage Appliance CSI driver.
83
examples/nfs/README.md
Normal file
@@ -0,0 +1,83 @@
# Introduction

This is an end-to-end example of using NFS filesystems on a target
Oracle ZFS Storage Appliance.

Prior to running this example, the NFS environment must be set up properly
on both the Kubernetes worker nodes and the Oracle ZFS Storage Appliance.
Refer to the [INSTALLATION](../../INSTALLATION.md) instructions for details.

## Configuration

Set up a local values file. It must contain the values that customize the example for the
target appliance, but can contain others. The minimum set of values to
customize are:

* appliance:
    * targetGroup: the target group that contains data path interfaces on the target appliance
    * pool: the pool to create shares in
    * project: the project to create shares in
    * targetPortal: the target iSCSI portal on the appliance
    * nfsServer: the NFS data path IP address
* volSize: the size of the filesystem share to create

Check out the parameters section of the storage class configuration file (storage-class.yaml)
to see all supported properties. Refer to the NFS Protocol page of the Oracle ZFS Storage Appliance
Administration Guide for how to define the values properly.
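
For reference, a minimal local-values.yaml might look like the following sketch. All values are placeholders; substitute the settings from your own appliance:

```yaml
# Hypothetical settings -- replace with values for your appliance
appliance:
  targetGroup: csi-tg
  pool: pool1
  project: k8s-project
  targetPortal: 192.0.2.10:3260
  nfsServer: 192.0.2.11

volSize: 10Gi
```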

## Deployment

Assuming there is a set of values in the local-values directory, deploy using Helm 3:

```
helm install -f local-values/local-values.yaml zfssa-nfs ./nfs
```

Once deployed, verify each of the created entities using kubectl:

1. Display the storage class (SC)

   The command `kubectl get sc` should now return something similar to this:

   ```text
   NAME               PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
   zfssa-csi-nfs-sc   zfssa-csi-driver   Delete          Immediate           false                  2m9s
   ```

2. Display the volume

   The command `kubectl get pvc` should now return something similar to this:

   ```text
   NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
   zfssa-csi-nfs-pvc   Bound    pvc-808d9bd7-cbb0-47a7-b400-b144248f1818   10Gi       RWX            zfssa-csi-nfs-sc   8s
   ```

3. Display the pod mounting the volume

   The command `kubectl get all` should now return something similar to this:

   ```text
   NAME                             READY   STATUS    RESTARTS   AGE
   pod/zfssa-csi-nodeplugin-lpts9   2/2     Running   0          25m
   pod/zfssa-csi-nodeplugin-vdb44   2/2     Running   0          25m
   pod/zfssa-csi-provisioner-0      2/2     Running   0          23m
   pod/zfssa-nfs-example-pod        1/1     Running   0          12s

   NAME                                  DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
   daemonset.apps/zfssa-csi-nodeplugin   2         2         2       2            2           <none>          25m

   NAME                                     READY   AGE
   statefulset.apps/zfssa-csi-provisioner   1/1     23m
   ```

## Writing data

Once the pod is deployed, you can optionally start analytics in a worksheet on
the Oracle ZFS Storage Appliance that is hosting the target filesystem to watch the I/O activity.

Exec into the pod and write some data to the mounted volume:
```text
kubectl exec -it zfssa-nfs-example-pod -- /bin/sh
/ # cd /mnt
/mnt # ls
/mnt # echo "hello world" > demo.txt
/mnt #
```

If analytics are running on the appliance, they should show spikes as the data is written.
20
examples/nfs/templates/00-storage-class.yaml
Normal file
@@ -0,0 +1,20 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: {{ .Values.scNfsName }}
provisioner: zfssa-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  volumeType: {{ .Values.appliance.volumeType }}
  targetGroup: {{ .Values.appliance.targetGroup }}
  blockSize: "8192"
  pool: {{ .Values.appliance.pool }}
  project: {{ .Values.appliance.project }}
  targetPortal: {{ .Values.appliance.targetPortal }}
  nfsServer: {{ .Values.appliance.nfsServer }}
  rootUser: {{ .Values.appliance.rootUser }}
  rootGroup: {{ .Values.appliance.rootGroup }}
  rootPermissions: "777"
  shareNFS: {{ .Values.appliance.shareNFS }}
  restrictChown: "false"
12
examples/nfs/templates/01-pvc.yaml
Normal file
@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Values.pvcNfsName }}
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: {{ .Values.volSize }}
  storageClassName: {{ .Values.scNfsName }}
21
examples/nfs/templates/02-pod.yaml
Normal file
@@ -0,0 +1,21 @@
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Values.podNfsName }}
  labels:
    name: ol7slim-test
spec:
  restartPolicy: Always
  containers:
    - image: container-registry.oracle.com/os/oraclelinux:7-slim
      command: ["/bin/sh", "-c"]
      args: [ "tail -f /dev/null" ]
      name: ol7slim
      volumeMounts:
        - name: vol
          mountPath: /mnt
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: {{ .Values.pvcNfsName }}
        readOnly: false
19
examples/nfs/values.yaml
Normal file
@@ -0,0 +1,19 @@
# Various names used through example
scNfsName: zfssa-nfs-example-sc
pvcNfsName: zfssa-nfs-example-pvc
podNfsName: zfssa-nfs-example-pod

# Settings for target appliance
appliance:
  volumeType: thin
  targetGroup: OVERRIDE
  pool: OVERRIDE
  project: OVERRIDE
  targetPortal: OVERRIDE
  nfsServer: OVERRIDE
  rootUser: root
  rootGroup: other
  shareNFS: "on"

# Settings for volume
volSize: OVERRIDE