Merge pull request #1 from oracle/init

ZFSSA CSI driver v1.0.0 release
This commit is contained in:
Jeongtae Kim 2021-08-25 17:30:14 -06:00 committed by GitHub
commit 0fc7c51c13
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
113 changed files with 11966 additions and 1 deletion

38
CONTRIBUTING.md Normal file

@ -0,0 +1,38 @@
# Contributing
Oracle welcomes contributions to this repository from anyone. If you want to submit a pull request to fix a bug or enhance code, please first open an issue and link to that issue when you submit your pull request. If you have any questions about a possible contribution, feel free to open an issue too.
## Contributing to the zfssa-csi-driver repository
Pull requests can be made under
[The Oracle Contributor Agreement](https://www.oracle.com/technetwork/community/oca-486395.html) (OCA).
For pull requests to be accepted, the bottom of your commit message must have
the following line using your name and e-mail address as it appears in the
OCA Signatories list.
```
Signed-off-by: Your Name <you@example.org>
```
This can be automatically added to pull requests by committing with:
```
git commit --signoff
```
Only pull requests from committers that can be verified as having
signed the OCA can be accepted.
### Pull request process
1. Fork this repository
1. Create a branch in your fork to implement the changes. We recommend using
the issue number as part of your branch name, e.g. `1234-fixes`
1. Ensure that any documentation is updated with the changes that are required
by your fix.
1. Ensure that any samples are updated if the base image has been changed.
1. Submit the pull request. *Do not leave the pull request blank*. Explain exactly
what your changes are meant to do and provide simple steps on how to validate
your changes. Ensure that you reference the issue you created as well.
We will assign the pull request to 2-3 people for review before it is merged.

23
Dockerfile Normal file

@ -0,0 +1,23 @@
# Copyright (c) 2021 Oracle and/or its affiliates.
#
# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
#
# This is the Dockerfile for Oracle ZFS Storage Appliance CSI Driver
#
FROM container-registry.oracle.com/os/oraclelinux:7-slim
LABEL maintainers="Oracle"
LABEL description="Oracle ZFS Storage Appliance CSI Driver for Kubernetes"
ARG var_proxy
ENV http_proxy=$var_proxy
ENV https_proxy=$var_proxy
# Install the iSCSI initiator, NFS, and filesystem utilities required by the driver.
RUN yum -y install iscsi-initiator-utils nfs-utils e2fsprogs xfsprogs && yum clean all
ENV http_proxy=""
ENV https_proxy=""
COPY ./bin/zfssa-csi-driver /zfssa-csi-driver
ENTRYPOINT ["/zfssa-csi-driver"]
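The Dockerfile above accepts a `var_proxy` build argument for proxied environments. A minimal build sketch, assuming `make build` has already produced `./bin/zfssa-csi-driver`; the proxy URL and image tag are placeholders:
```bash
# Build from the repository root; proxy URL and image tag are placeholders.
docker build --build-arg var_proxy=http://proxy.example.com:80 -t zfssa-csi-driver:local .
```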

319
INSTALLATION.md Normal file

@ -0,0 +1,319 @@
# Installation of Oracle ZFS Storage Appliance CSI Plugin
This document reviews how to install and make use of the driver.
## Requirements
Ensure the following information and requirements can be met prior to installation.
* The following ZFS Storage Appliance information (see your ZFSSA device administrator):
* The name or the IP address of the appliance. If it's a name, it must be DNS resolvable.
When the appliance is a clustered system, the connection for management operations is tied
to one head at driver deployment time. Driver behavior in takeover/failback scenarios with the
target storage appliance depends on the management interface settings and on whether the
interface remains locked to the failed node or not.
* Login access to your ZFSSA in the form of a user name and associated password.
It is desirable to create a normal (non-root) login user with the required authorizations.
* The appliance certificate for the REST endpoint is available.
* The name of the appliance storage pool from which volumes will be provisioned.
* The name of the project in the pool.
* In secure mode, the driver supports only TLSv1.2 for HTTPS connections to the ZFSSA. Make sure that
TLSv1.2 is enabled for the HTTPS service on the ZFSSA.
The user on the appliance must have a minimum of the following authorizations (where pool and project are those
that will be used in the storage class). Do not use root for provisioning.
* Object: nas.<pool>.<project>.*
* Permissions:
* changeAccessProps
* changeGeneralProps
* changeProtocolProps
* changeSpaceProps
* changeUserQuota
* clearLocks
* clone
* createShare
* destroy
* rollback
* scheduleSnap
* takeSnap
* destroySnap
* renameSnap
The file system being exported must have 'Share Mode' set to 'Read/Write' in the 'NFS' section of the 'Protocol' tab
of the file system (under 'Shares').
More than one pool/project combination is possible if there are storage classes that identify different
pools and projects.
* The Kubernetes cluster namespace you must use (see your cluster administrator)
* Sidecar images
Make sure you have access, from the worker nodes, to the registry or registries containing these images. The image pull
policy (`imagePullPolicy`) is set to `IfNotPresent` in the deployment files. During the first deployment the
Container Runtime will likely try to pull them. If your Container Runtime cannot access the images, you will have to
pull them manually before deployment. The required images are:
* node-driver-registrar v2.0.0+.
* external-attacher v3.0.2+.
* external-provisioner v2.0.5+.
* external-resizer v1.1.0+.
* external-snapshotter v3.0.3+.
The commonly used container images for these sidecars are:
* k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.0
* k8s.gcr.io/sig-storage/csi-attacher:v3.0.2
* k8s.gcr.io/sig-storage/csi-provisioner:v2.0.5
* k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
* k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3
* Plugin image
You can pull the plugin image from a registry that you know hosts it, or you can build it and store it in one of
your registries. In either case, as with the sidecar images, the Container Runtime must have access to that registry.
If not, you will have to pull the image manually before deployment. If you choose to build the plugin yourself, use
version 1.13.8 or above of the Go compiler.
## Setup
This volume driver supports both NFS (filesystem) and iSCSI (block) volumes. iSCSI, at this time, requires
some additional setup; see the information below.
### iSCSI Environment
Install iSCSI client utilities on the Kubernetes worker nodes:
```bash
$ yum install iscsi-initiator-utils -y
```
Verify that the `iscsid` and `iscsi` services are running after installation (`systemctl status iscsid iscsi`).
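For example, a minimal check on a systemd-based worker node such as Oracle Linux 7 (enabling the services here is an assumption about your node configuration):
```bash
# Confirm the iSCSI services are enabled and active on each worker node.
systemctl enable --now iscsid iscsi
systemctl status iscsid iscsi

# The node's initiator IQN, needed for the initiator group below, is stored here.
cat /etc/iscsi/initiatorname.iscsi
```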
* Create an initiator group on the Oracle ZFS Storage Appliance *per worker node name*. For example, if
your worker node name is `pmonday-olcne-worker-0`, then there should be an initiator group named `pmonday-olcne-worker-0`
on the target appliance with the IQN of the worker node. The initiator IQN can be determined by looking at
`/etc/iscsi/initiatorname.iscsi`.
* Create one or more targets and target groups on the interface that you intend to use for iSCSI traffic.
* CHAP is not supported at this time.
* Cloud instances often have duplicate IQNs; these MUST be regenerated so they are unique, or connection storms
happen ([Instructions](https://www.thegeekdiary.com/how-to-modify-the-iscsi-initiator-id-in-linux/); see also the sketch after this list).
* There are cases where fresh instances do not start the iscsi service properly and report the following;
temporarily modify iscsi.service to remove the `ConditionDirectoryNotEmpty` condition:
```
Condition: start condition failed at Wed 2020-10-28 18:37:35 GMT; 1 day 4h ago
ConditionDirectoryNotEmpty=/var/lib/iscsi/nodes was not met
```
* iSCSI may get timeouts in particular networking conditions. Review the following web pages for possible
solutions. The first involves
[modifying sysctl](https://www.thegeekdiary.com/iscsiadm-discovery-timeout-with-two-or-more-network-interfaces-in-centos-rhel/),
the second involves changing the
[replacement timeout](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/storage_administration_guide/iscsi-replacement-timeout)
for iSCSI.
* There is a condition where a 'uefi' target creates noise in iSCSI discovery; this is noticeable in the
iscsid output (`systemctl status iscsid`). This issue appears in Oracle Linux 7 in a virtualized
environment:
```
● iscsid.service - Open-iSCSI
Loaded: loaded (/usr/lib/systemd/system/iscsid.service; disabled; vendor preset: disabled)
Active: active (running) since Wed 2020-10-28 17:30:17 GMT; 1 day 23h ago
Docs: man:iscsid(8)
man:iscsiuio(8)
man:iscsiadm(8)
Main PID: 1632 (iscsid)
Status: "Ready to process requests"
Tasks: 1
Memory: 6.4M
CGroup: /system.slice/iscsid.service
└─1632 /sbin/iscsid -f -d2
Oct 30 16:23:02 pbm-kube-0-w1 iscsid[1632]: iscsid: disconnecting conn 0x56483ca0f050, fd 7
Oct 30 16:23:02 pbm-kube-0-w1 iscsid[1632]: iscsid: connecting to 169.254.0.2:3260
Oct 30 16:23:02 pbm-kube-0-w1 iscsid[1632]: iscsid: connect to 169.254.0.2:3260 failed (Connection refused)
Oct 30 16:23:02 pbm-kube-0-w1 iscsid[1632]: iscsid: deleting a scheduled/waiting thread!
Oct 30 16:23:03 pbm-kube-0-w1 iscsid[1632]: iscsid: Poll was woken by an alarm
Oct 30 16:23:03 pbm-kube-0-w1 iscsid[1632]: iscsid: re-opening session -1 (reopen_cnt 55046)
Oct 30 16:23:03 pbm-kube-0-w1 iscsid[1632]: iscsid: disconnecting conn 0x56483cb55e60, fd 9
Oct 30 16:23:03 pbm-kube-0-w1 iscsid[1632]: iscsid: connecting to 169.254.0.2:3260
Oct 30 16:23:03 pbm-kube-0-w1 iscsid[1632]: iscsid: connect to 169.254.0.2:3260 failed (Connection refused)
Oct 30 16:23:03 pbm-kube-0-w1 iscsid[1632]: iscsid: deleting a scheduled/waiting thread!
```
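One common way to regenerate a duplicate initiator IQN (referenced in the list above) is sketched below; confirm the procedure against your platform documentation before applying it:
```bash
# Generate a fresh, unique IQN and replace the existing initiator name.
echo "InitiatorName=$(iscsi-iname)" > /etc/iscsi/initiatorname.iscsi

# Restart the iSCSI daemon so the new IQN takes effect, then update the
# corresponding initiator group on the appliance.
systemctl restart iscsid
```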
### NFS Environment
Ensure that:
* All worker nodes have the NFS packages installed for their Operating System:
```bash
$ yum install nfs-utils -y
```
* All worker nodes are running the daemon `rpc.statd`
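A quick way to confirm `rpc.statd` is available on a node (command names assume a systemd-based node with nfs-utils installed):
```bash
# The statd service may be started on demand by NFS mounts.
systemctl status rpc-statd

# Alternatively, confirm the status service is registered with rpcbind.
rpcinfo -p | grep status
```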
### Enabling Kubernetes Volume Snapshot Feature (Only for Kubernetes v1.17 - v1.19)
The Kubernetes Volume Snapshot feature became GA in Kubernetes v1.20. In order to use
this feature in Kubernetes pre-v1.20, it MUST be enabled prior to deploying the ZFSSA CSI driver.
To enable the feature on Kubernetes pre-v1.20, deploy the API extensions, associated configurations,
and a snapshot controller by running the following command in the deploy directory:
```text
kubectl apply -R -f k8s-1.17/snapshot-controller
```
This command reports the creation of resources and configurations as follows:
```text
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
statefulset.apps/snapshot-controller created
```
The details can be viewed using the `kubectl get <resource-type>` command. Note that the command
above deploys the snapshot-controller in the default namespace. The command
`kubectl get all` should present something similar to this:
```text
NAME READY STATUS RESTARTS AGE
pod/snapshot-controller-0 1/1 Running 0 5h22m
...
NAME READY AGE
statefulset.apps/snapshot-controller 1/1 5h22m
```
### CSI Volume Plugin Deployment from Helm
A sample Helm chart is available in the deploy/helm directory; this method provides a
simpler deployment than the YAML-based procedure in the next section.
Create a local-values.yaml file that, at a minimum, sets the values for the
zfssaInformation section. Depending on your environment, the image block may
also need updates if the identified repositories cannot be reached.
The secrets must be Base64 encoded. There are many ways to Base64-encode strings and files;
the following encodes a user name of 'demo' for use in the values file on
any system with the base64 tool installed:
```bash
echo -n 'demo' | base64
```
The following example shows how to get the server certificate of ZFSSA and encode it:
```bash
openssl s_client -connect <zfssa>:215 2>/dev/null </dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | base64
```
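A minimal local-values.yaml sketch is shown below. The keys follow the chart's values.yaml; all values are placeholders, and the encoded values must be generated as shown above:
```yaml
# local-values/local-values.yaml -- placeholder values only
image:
  sidecarBase: k8s.gcr.io/sig-storage/          # change only if this registry is not reachable
  zfssaBase: <registry-hosting-the-plugin-image>/
deployment:
  namespace: default
zfssaInformation:
  username: <encoded-user-name>
  password: <encoded-password>
  target: <appliance-name-or-ip>
  cert: <base64-encoded-certificate-chain>
```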
Deploy the driver using Helm 3:
```bash
helm install -f local-values/local-values.yaml zfssa-csi ./k8s-1.17
```
When all pods are running, move to verification.
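For example, to check the pods (the namespace is whatever `deployment.namespace` was set to, `default` in the sketch above):
```text
kubectl get pods -n default
```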
### CSI Volume Plugin Deployment from YAML
To deploy the plugin using YAML files, follow the steps listed below.
They assume you are installing at least version 0.4.0 of the plugin
on a cluster running version 1.17 of Kubernetes, and you are using Kubernetes secrets to provide the appliance login
access information and certificate. They also use the generic information described below. When following these steps,
change the values to your own.
| Information | Value |
| --- | --- |
| Appliance name or IP address | _myappliance_ |
| Appliance login user | _mylogin_ |
| Appliance password | _mypassword_ |
| Appliance file certificate | _mycertfile_ |
| Appliance storage pool | _mypool_ |
| Appliance storage project | _myproject_ |
| Cluster namespace | _myspace_ |
1. The driver requires a file (zfssa.yaml) mounted as a volume at /mnt/zfssa. The volume should be an
in-memory volume, and the file should
be provided by a secure secret service that shares the secret via a sidecar, such as a HashiCorp Vault
agent that interacts with Vault via role-based access controls.
```yaml
username: <text>
password: <text>
```
For development only, other mechanisms can be used to create and share the secret with the container.
*Warning* Do not store your credentials in source code control (such as this project). For production
environments, use a secure secret store that encrypts at rest and can provide credentials through role-based
access controls (refer to the Kubernetes documentation). Do not use the root user in production environments;
a purpose-built user provides better audit controls and isolation for the driver.
2. Create the Kubernetes secret containing the certificate chain for the appliance and make it available to the
driver in a mounted volume (/mnt/certs) with the file name zfssa.crt. While a certificate chain is a public
document, it is typically also provided by a volume mounted from a secret provider to protect the chain
of trust and bind it to the instance.
To create a Kubernetes-secret from the certificate chain:
```text
kubectl create secret generic oracle.zfssa.csi.node.myappliance.certs -n myspace --from-file=./mycertfile
```
For development only, it is possible to run without the appliance chain of trust; see the options for
the driver.
3. Update the deployment files.
* __zfssa-csi-plugin.yaml__
In the `DaemonSet` section make the following modifications:
* in the container _node-driver-registrar_ subsection
* set `image` to the appropriate container image.
* in the container _zfssabs_ subsection
* set `image` for the container _zfssabs_ to the appropriate container image.
* in the `env` subsection
* under `ZFSSA_TARGET` set `valueFrom.secretKeyRef.name` to _oracle.zfssa.csi.node.myappliance_
* under `ZFSSA_INSECURE` set `value` to _False_ (if you choose to **SECURE** the communication
with the appliance; otherwise leave it set to _True_)
* in the volume subsection (skip if you set `ZFSSA_INSECURE` to _True_)
* under `cert`, **if you want communication with the appliance to be secure**
* set `secret.secretName` to _oracle.zfssa.csi.node.myappliance.certs_
* set `secret.items.key` to _mycertfile_
* set `secret.items.path` to _mycertfile_
* __zfssa-csi-provisioner.yaml__
In the `StatefulSet` section make the following modifications:
* set `image` for the container _zfssa-csi-provisioner_ to the appropriate image.
* set `image` for the container _zfssa-csi-attacher_ to the appropriate container image.
4. Deploy the plugin running the following commands:
```text
kubectl apply -n myspace -f ./zfssa-csi-rbac.yaml
kubectl apply -n myspace -f ./zfssa-csi-plugin.yaml
kubectl apply -n myspace -f ./zfssa-csi-provisioner.yaml
```
At this point the command `kubectl get all -n myspace` should return something similar to this:
```text
NAME READY STATUS RESTARTS AGE
pod/zfssa-csi-nodeplugin-lpts9 2/2 Running 0 3m22s
pod/zfssa-csi-nodeplugin-vdb44 2/2 Running 0 3m22s
pod/zfssa-csi-provisioner-0 2/2 Running 0 72s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/zfssa-csi-nodeplugin 2 2 2 2 2 <none> 3m16s
NAME READY AGE
statefulset.apps/zfssa-csi-provisioner 1/1 72s
```
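With the driver deployed, volumes are provisioned through a storage class that names the driver and the target pool and project. A hypothetical NFS storage class sketch follows; the provisioner name comes from the plugin deployment (`zfssa-csi-driver.oracle.com`) and the parameter names mirror those used in the examples and test parameters, so confirm them against the example manifests before use:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zfssa-nfs-example            # hypothetical name
provisioner: zfssa-csi-driver.oracle.com
parameters:
  volumeType: thin
  pool: mypool
  project: myproject
  nfsServer: <appliance-data-path-address>
```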
### Deployment Example Using an NFS Share
Refer to the [NFS EXAMPLE README](./examples/nfs/README.md) file for details.
### Deployment Example Using a Block Volume
Refer to the [BLOCK EXAMPLE README](./examples/block/README.md) file for details.

35
LICENSE.txt Normal file

@ -0,0 +1,35 @@
Copyright (c) 2021 Oracle.
The Universal Permissive License (UPL), Version 1.0
Subject to the condition set forth below, permission is hereby granted to any
person obtaining a copy of this software, associated documentation and/or data
(collectively the "Software"), free of charge and under any and all copyright
rights in the Software, and any and all patent rights owned or freely
licensable by each licensor hereunder covering either (i) the unmodified
Software as contributed to or provided by such licensor, or (ii) the Larger
Works (as defined below), to deal in both
(a) the Software, and
(b) any piece of software and/or hardware listed in the lrgrwrks.txt file if
one is included with the Software (each a "Larger Work" to which the Software
is contributed by such licensors),
without restriction, including without limitation the rights to copy, create
derivative works of, display, perform, and distribute the Software and make,
use, sell, offer for sale, import, export, have made, and have sold the
Software and the Larger Work(s), and to sublicense the foregoing rights on
either these or other terms.
This license is subject to the following condition:
The above copyright notice and either this complete permission notice or at
a minimum a reference to the UPL must be included in all copies or
substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

8
Makefile Normal file

@ -0,0 +1,8 @@
# Copyright (c) 2021 Oracle and/or its affiliates.
#
# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
#
CMDS=zfssa-csi-driver
all: build
include release-tools/build.make


@ -1 +1,93 @@
# zfssa-csi-driver
# About zfssa-csi-driver
This plugin supports Oracle ZFS Storage Appliance
as a backend for block storage (iSCSI volumes) and file storage (NFS).
| CSI Plugin Version | Supported CSI Versions | Supported Kubernetes Versions | Persistence | Supported Access Modes | Dynamic Provisioning | Raw Block Support |
| --- | --- | --- | --- | --- | --- | --- |
| v0.5.x | v1.0+ | v1.17.X+ | Persistent | Read/Write Once (for Block) | Yes | Yes |
## Requirements
* Kubernetes v1.17 or above.
* A Container runtime implementing the Kubernetes Container Runtime Interface. This plugin was tested with CRI-O v1.17.
* An Oracle ZFS Storage Appliance running Appliance Kit Version 8.8 or above. This plugin may work with previous
versions but it is not tested with them. It is possible to use this
driver with the [Oracle ZFS Storage Simulator](https://www.oracle.com/downloads/server-storage/sun-unified-storage-sun-simulator-downloads.html)
* Access to both a management path and a data path for the target Oracle
ZFS Storage Appliance (or simulator). The management and data paths
can be the same address.
* A suitable container image build environment. The Makefile currently uses docker,
but with minor updates to release-tools/build.make, podman should also be usable.
## Unsupported Functionality
ZFS Storage Appliance CSI driver does not support the following functionality:
* Volume Cloning
## Building
Use and enhance the Makefile in the root directory and release-tools/build.make.
Build the driver:
```
make build
```
Depending on your golang installation, the build may identify missing dependencies; install
these and retry the build.
The parent image for the container is container-registry.oracle.com/os/oraclelinux:7-slim; refer
to [container-registry.oracle.com](https://container-registry.oracle.com/) for more information.
The parent image can also be obtained from ghcr.io/oracle/oraclelinux and docker.io/library/oraclelinux.
To build the container:
```
make container
```
Tag and push the resulting container image to a container registry available to the
Kubernetes cluster where it will be deployed.
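For example, assuming a hypothetical registry `myregistry.example.com`; the local image name and tag produced by `make container` can be read from the `docker images` output:
```
docker tag <local-image>:<tag> myregistry.example.com/zfssa-csi-driver:v1.0.0
docker push myregistry.example.com/zfssa-csi-driver:v1.0.0
```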
## Installation
See [INSTALLATION](./INSTALLATION.md) for details.
## Testing
For information about testing the driver, see [TEST](./TEST.md).
## Examples
Example usage of this driver can be found in the ./examples
directory.
The examples below use the image _container-registry.oracle.com/os/oraclelinux:7-slim_
when they create a container to which one or more persistent volumes are attached and mounted.
This set uses dynamic volume creation.
* [NFS](./examples/nfs/README.md) - illustrates NFS volume usage
from a simple container.
* [Block](./examples/block/README.md) - illustrates block volume
usage from a simple container.
* [NFS multi deployment](./examples/nfs-multi) - illustrates the use
of Helm to build several volumes and optionally build a pod to consume them.
This next set uses existing shares on the target appliance:
* [Existing NFS](./examples/nfs-pv/README.md) - illustrates usage of an existing NFS
filesystem share from a simple container.
* [Existing Block](./examples/block-pv/README.md) - illustrates usage of an existing
iSCSI LUN from a simple container.
This set exercises dynamic volume creation followed by expanding the volume capacity.
* [NFS Volume Expansion](./examples/nfs-exp/README.md) - illustrates an expansion of an NFS volume.
This set exercises dynamic volume creation (including restoring from a volume snapshot) followed by creating a snapshot of the volume.
* [NFS Volume Snapshot](./examples/nfs-vsc/README.md) - illustrates a snapshot creation of an NFS volume.
* [Block Volume Snapshot](./examples/block-vsc/README.md) - illustrates a snapshot creation of a block volume.
## Help
Refer to the documentation links and examples for more information on
this driver.
## Contributing
See [CONTRIBUTING](./CONTRIBUTING.md) for details.

17
SECURITY.md Normal file

@ -0,0 +1,17 @@
# Reporting Security Vulnerabilities
Oracle values the independent security research community and believes that responsible disclosure of security vulnerabilities helps us ensure the security and privacy of all our users.
Please do NOT raise a GitHub Issue to report a security vulnerability. If you believe you have found a security vulnerability, please submit a report to secalert_us@oracle.com preferably with a proof of concept. We provide additional information on [how to report security vulnerabilities to Oracle](https://www.oracle.com/corporate/security-practices/assurance/vulnerability/reporting.html) which includes public encryption keys for secure email.
We ask that you do not use other channels or contact project contributors directly.
Non-vulnerability related security issues, such as great new ideas for security features, are welcome on GitHub Issues.
## Security Updates, Alerts and Bulletins
Security updates will be released on a regular cadence. Many of our projects will typically release security fixes in conjunction with the [Oracle Critical Patch Update](https://www.oracle.com/security-alerts/) program. Security updates are released on the Tuesday closest to the 17th day of January, April, July and October. A pre-release announcement will be published on the Thursday preceding each release. Additional information, including past advisories, is available on our [Security Alerts](https://www.oracle.com/security-alerts/) page.
## Security-Related Information
We will provide security related information such as a threat model, considerations for secure use, or any known security issues in our documentation. Please note that labs and sample code are intended to demonstrate a concept and may not be sufficiently hardened for production use.

139
TEST.md Normal file

@ -0,0 +1,139 @@
# Oracle ZFS Storage Appliance CSI Plugin Testing
There are two distinct paths in the plugin
* Filesystem (Mount)
* Block (iSCSI)
This write-up discusses various techniques for testing the plugin.
Note that some of the paths require the plug-in to be deployed (unit tests) while others are more for quick iterative development without
a cluster being available.
Refer to the README.md file for information about deploying the plug-in.
## Test Driver Locally
Create a local build
```
make build
```
You will need the following environment variables configured (these emulate the secrets setup). Note that
the driver *must* be run as root (or with root-equivalence for the iscsiadm utility). The csc tool can be run
from any process space as it only interacts with the socket that the driver is listening on.
```
ZFSSA_TARGET=(target zfssa head)
ZFSSA_USER=(user for target)
ZFSSA_PASSWORD=(password for target)
ZFSSA_POOL=(pool to create resources in)
ZFSSA_PROJECT=(project to create resources in)
HOST_IP=(IP address of host where running)
POD_IP=(IP of pod)
NODE_NAME=(name of node)
CSI_ENDPOINT=tcp://127.0.0.1:10000
```
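The `csc` client used below is the command-line CSI client from the GoCSI project (github.com/rexray/gocsi). One way to obtain it, assuming a module-aware Go toolchain; if this does not resolve cleanly, build `csc` from a clone of that repository instead:
```
go install github.com/rexray/gocsi/csc@latest
```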
Now run the driver as root:
```
sudo su -
export CSI_ENDPOINT=tcp://127.0.0.1:10000;<other exports>;./bin/zfssa-csi-driver --endpoint tcp://127.0.0.1:10000 --nodeid MyCSINode
Building Provider
ERROR: logging before flag.Parse: I1108 15:34:55.472389 6622 service.go:63] Driver: zfssa-csi-driver version: 0.0.0
Using stored configuration { }
ERROR: logging before flag.Parse: I1108 15:34:55.472558 6622 controller.go:42] NewControllerServer Implementation
ERROR: logging before flag.Parse: I1108 15:34:55.472570 6622 identity.go:15] NewIdentityServer Implementation
ERROR: logging before flag.Parse: I1108 15:34:55.472581 6622 node.go:24] NewNodeServer Implementation
Running gRPC
INFO[0000] identity service registered
INFO[0000] controller service registered
INFO[0000] node service registered
INFO[0000] serving endpoint="tcp://127.0.0.1:10000"
```
Test the block driver. Note that subsequent commands use the full volume ID returned by each preceding command.
```
./csc controller create-volume --cap MULTI_NODE_MULTI_WRITER,block --req-bytes 1073741824 --params pool=dedup1,project=pmonday,initiatorGroup=paulGroup,targetGroup=paulTargetGroup,blockSize=8192 --endpoint tcp://127.0.0.1:10000 pmonday5
./csc controller delete-volume --endpoint tcp://127.0.0.1:10000 /iscsi/aie-7330a-h1/pmonday3/dedup1/local/pmonday/pmonday3
```
Test the driver by publishing a LUN. Note that the LUN number is the "assignednumber" of the
LUN on the appliance. Also note that the flow is create -> controller publish -> node publish -> node unpublish -> controller unpublish.
```bash
# First publish to the controller
./csc controller publish --cap MULTI_NODE_MULTI_WRITER,block --vol-context targetPortal=10.80.44.165:3260,discoveryCHAPAuth=false,sessionCHAPAuth=false,portals=[],iscsiInterface=default --node-id worknode /iscsi/aie-7330a-h1/pmonday5/dedup1/local/pmonday/pmonday5
"/iscsi/aie-7330a-h1/pmonday5/dedup1/local/pmonday/pmonday5" "devicePath"="/dev/disk/by-path/ip-10.80.44.165:3260-iscsi-iqn.1986-03.com.sun:02:ab7b55fa-53ee-e5ab-98e1-fad3cc29ae57-lun-13"
# Publish to the node
./csc node publish -l debug --endpoint tcp://127.0.0.1:10000 --target-path /mnt/iscsi --pub-context "devicePath"="/dev/disk/by-path/ip-10.80.44.165:3260-iscsi-iqn.1986-03.com.sun:02:ab7b55fa-53ee-e5ab-98e1-fad3cc29ae57-lun-13" --cap MULTI_NODE_MULTI_WRITER,block --vol-context targetPortal=10.80.44.165:3260,discoveryCHAPAuth=false,sessionCHAPAuth=false,portals=[],iscsiInterface=default /iscsi/aie-7330a-h1/pmonday3/dedup1/local/pmonday/pmonday5
DEBU[0000] assigned the root context
DEBU[0000] mounting volume request="{/iscsi/aie-7330a-h1/pmonday3/dedup1/local/pmonday/pmonday5 map[devicePath:/dev/disk/by-path/ip-10.80.44.165:3260-iscsi-iqn.1986-03.com.sun:02:ab7b55fa-53ee-e5ab-98e1-fad3cc29ae57-lun-13] /mnt/iscsi block:<> access_mode:<mode:MULTI_NODE_MULTI_WRITER > false map[] map[discoveryCHAPAuth:false iscsiInterface:default portals:[] sessionCHAPAuth:false targetPortal:10.80.44.165:3260] {} [] 0}"
DEBU[0000] parsed endpoint info addr="127.0.0.1:10000" proto=tcp timeout=1m0s
/iscsi/aie-7330a-h1/pmonday3/dedup1/local/pmonday/pmonday5
# Now unpublish from the node
./csc node unpublish -l debug --endpoint tcp://127.0.0.1:10000 --target-path /mnt/iscsi /iscsi/aie-7330a-h1/pmonday3/dedup1/local/pmonday/pmonday5
DEBU[0000] assigned the root context
DEBU[0000] mounting volume request="{/iscsi/aie-7330a-h1/pmonday3/dedup1/local/pmonday/pmonday5 /mnt/iscsi {} [] 0}"
DEBU[0000] parsed endpoint info addr="127.0.0.1:10000" proto=tcp timeout=1m0s
/iscsi/aie-7330a-h1/pmonday3/dedup1/local/pmonday/pmonday5
# Now unpublish from the controller (this is not working yet)
./csc controller unpublish -l debug --endpoint tcp://127.0.0.1:10000 /iscsi/aie-7330a-h1/pmonday3/dedup1/local/pmonday/pmonday5
DEBU[0000] assigned the root context
DEBU[0000] unpublishing volume request="{/iscsi/aie-7330a-h1/pmonday3/dedup1/local/pmonday/pmonday5 map[] {} [] 0}"
DEBU[0000] parsed endpoint info addr="127.0.0.1:10000" proto=tcp timeout=1m0s
# Need to just detach the device here
```
If everything looks OK, push the driver to a container registry
```
make push
```
Now deploy the driver in a Kubernetes Cluster
```bash
Working on instructions
```
Test the driver by creating a file system
```
./csc controller create --cap MULTI_NODE_MULTI_WRITER,mount,nfs,uid=500,gid=500 --req-bytes 107374182400 --params node=zs32-01,pool=p0,project=default coucou
./csc controller delete /nfs/10.80.222.176/coucou/p0/local/default/coucou
```
## Unit Test the Driver
To run the unit tests, you must have compiled csi-sanity from the
[csi-test project](https://github.com/kubernetes-csi/csi-test). Once
compiled, scp the binary to the node that is functioning as a controller
in a Kubernetes cluster with the ZFSSA CSI driver deployed and running.
There is documentation on the test process available at the Kubernetes
[testing of CSI drivers page](https://kubernetes.io/blog/2020/01/08/testing-of-csi-drivers/).
At least do a quick sanity test (create a PVC and remove it) prior to running.
This will shake out simple problems like authentication.
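A minimal sanity-check PVC sketch is shown below; the storage class name is hypothetical and must match one you have created for this driver. Apply it with `kubectl apply -f`, confirm it binds, then delete it.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zfssa-sanity-pvc                  # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: zfssa-nfs-example     # must reference an existing storage class for this driver
```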
Create a test parameters file that makes sense in your environment, like this:
```yaml
volumeType: thin
targetGroup: csi-data-path-target
blockSize: "8192"
pool: h1-pool1
project: pmonday
targetPortal: "10.80.44.65:3260"
nfsServer: "10.80.44.65"
```
Then run the tests
```
./csi-sanity --csi.endpoint=/var/lib/kubelet/plugins/com.oracle.zfssabs/csi.sock -csi.testvolumeparameters=./test-volume-parameters.yaml
```
There should be only one known failure, due to volume name length limitations
on the Oracle ZFS Storage Appliance; it looks like this:
```
[Fail] Controller Service [Controller Server] CreateVolume [It] should not fail when creating volume with maximum-length name
```

2579
THIRD_PARTY_LICENSES.txt Normal file

File diff suppressed because it is too large

1
VERSION Normal file

@ -0,0 +1 @@
1.0.0


@ -0,0 +1,30 @@
/*
* Copyright (c) 2021, Oracle and/or its affiliates.
* Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl/
*/
package main
import (
"github.com/oracle/zfssa-csi-driver/pkg/service"
"flag"
"fmt"
"os"
)
var (
driverName = flag.String("drivername", "zfssa-csi-driver", "name of the driver")
// Provided by the build process
version = "0.0.0"
)
func main() {
zd, err := service.NewZFSSADriver(*driverName, version)
if err != nil {
fmt.Print(err)
} else {
zd.Run()
}
os.Exit(1)
}


@ -0,0 +1,4 @@
apiVersion: v1
name: zfssa-csi
version: 1.0.0
description: Deploys Oracle ZFS Storage Appliance CSI Plugin.


@ -0,0 +1,9 @@
apiVersion: v1
stringData:
zfssa.yaml: |
username: {{ .Values.zfssaInformation.username }}
password: {{ .Values.zfssaInformation.password }}
kind: Secret
metadata:
name: oracle.zfssa.csi.node
namespace: {{ .Values.deployment.namespace }}


@ -0,0 +1,8 @@
apiVersion: v1
data:
zfssa.crt: {{ .Values.zfssaInformation.cert }}
kind: Secret
metadata:
name: oracle.zfssa.csi.node.certs
type: Opaque


@ -0,0 +1,84 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: zfssa-csi
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: zfssa-csi-role
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "update", "create", "delete", "patch"]
- apiGroups: ["storage.k8s.io"]
resources: ["csinodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["get", "list", "watch", "update", "patch"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments/status"]
verbs: ["patch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims/status"]
verbs: ["patch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch", "delete", "get"]
- apiGroups: ["csi.storage.k8s.io"]
resources: ["csinodeinfos"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotcontents"]
verbs: ["create", "get", "list", "watch", "update", "delete"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshots"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: [ "snapshot.storage.k8s.io" ]
resources: [ "volumesnapshotcontents/status" ]
verbs: [ "update" ]
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: ["create", "list", "watch", "delete"]
- apiGroups: ["csi.storage.k8s.io"]
resources: ["csidrivers"]
verbs: ["create", "delete"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["get", "create", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: zfssa-csi-role-binding
subjects:
- kind: ServiceAccount
name: zfssa-csi
namespace: {{ .Values.deployment.namespace }}
roleRef:
kind: ClusterRole
name: zfssa-csi-role
apiGroup: rbac.authorization.k8s.io


@ -0,0 +1,139 @@
# Service defined here, plus serviceName below in StatefulSet,
# are needed only because of condition explained in
# https://github.com/kubernetes/kubernetes/issues/69608
---
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
name: zfssa-csi-driver
namespace: {{ .Values.deployment.namespace }}
spec:
attachRequired: true
podInfoOnMount: true
volumeLifecycleModes:
- Persistent
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: zfssa-csi-nodeplugin
namespace: {{ .Values.deployment.namespace }}
spec:
selector:
matchLabels:
app: zfssa-csi-nodeplugin
template:
metadata:
labels:
app: zfssa-csi-nodeplugin
spec:
serviceAccount: zfssa-csi
hostNetwork: true
containers:
- name: node-driver-registrar
image: {{ .Values.image.sidecarBase }}{{ .Values.images.csiNodeDriverRegistrar.name }}:{{ .Values.images.csiNodeDriverRegistrar.tag }}
args:
- --v=5
- --csi-address=/plugin/csi.sock
- --kubelet-registration-path=/var/lib/kubelet/plugins/com.oracle.zfssabs/csi.sock
imagePullPolicy: {{ .Values.image.pullPolicy }}
securityContext:
# This is necessary only for systems with SELinux, where
# non-privileged sidecar containers cannot access unix domain socket
# created by privileged CSI driver container.
privileged: true
env:
- name: KUBE_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
volumeMounts:
- name: socket-dir
mountPath: {{ .Values.paths.pluginDir.mountPath }}
- name: registration-dir
mountPath: /registration
- name: zfssabs
image: {{ .Values.image.zfssaBase }}{{ .Values.images.zfssaCsiDriver.name }}:{{ .Values.images.zfssaCsiDriver.tag }}
args:
- "--drivername=zfssa-csi-driver.oracle.com"
- "--v=5"
- "--endpoint=$(CSI_ENDPOINT)"
- "--nodeid=$(NODE_NAME)"
env:
- name: CSI_ENDPOINT
value: unix://plugin/csi.sock
- name: LOG_LEVEL
value: "5"
- name: ZFSSA_TARGET
value: {{ .Values.zfssaInformation.target }}
- name: ZFSSA_INSECURE
value: "False"
- name: NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: HOST_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
imagePullPolicy: {{ .Values.image.pullPolicy }}
securityContext:
privileged: true
volumeMounts:
- name: socket-dir
mountPath: {{ .Values.paths.pluginDir.mountPath }}
- name: mountpoint-dir
mountPath: /var/lib/kubelet/pods
mountPropagation: Bidirectional
- name: plugins-dir
mountPath: /var/lib/kubelet/plugins
mountPropagation: Bidirectional
- name: dev-dir
mountPath: /dev
- name: zfssa-credentials
mountPath: "/mnt/zfssa"
readOnly: true
- name: certs
mountPath: "/mnt/certs"
readOnly: true
volumes:
- name: socket-dir
hostPath:
path: {{ .Values.paths.pluginDir.hostPath }}
type: DirectoryOrCreate
- name: mountpoint-dir
hostPath:
path: /var/lib/kubelet/pods
type: DirectoryOrCreate
- name: registration-dir
hostPath:
path: /var/lib/kubelet/plugins_registry
type: Directory
- name: plugins-dir
hostPath:
path: /var/lib/kubelet/plugins
type: Directory
- name: dev-dir
hostPath:
path: /dev
type: Directory
- name: zfssa-credentials
secret:
secretName: oracle.zfssa.csi.node
items:
- key: zfssa.yaml
path: zfssa.yaml
- name: certs
secret:
secretName: oracle.zfssa.csi.node.certs
items:
- key: zfssa.crt
path: zfssa.crt


@ -0,0 +1,90 @@
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
name: zfssa-csi-provisioner
namespace: {{ .Values.deployment.namespace }}
spec:
serviceName: "zfssa-csi-provisioner"
replicas: 1
selector:
matchLabels:
app: zfssa-csi-provisioner
template:
metadata:
labels:
app: zfssa-csi-provisioner
spec:
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- zfssa-csi-nodeplugin
topologyKey: kubernetes.io/hostname
serviceAccountName: zfssa-csi
containers:
- name: zfssa-csi-snapshotter
image: {{ .Values.image.sidecarBase }}{{ .Values.images.csiSnapshotter.name }}:{{ .Values.images.csiSnapshotter.tag }}
args:
- "--v=5"
- "--csi-address=$(ADDRESS)"
- "--leader-election=false"
env:
- name: ADDRESS
value: /plugin/csi.sock
imagePullPolicy: {{ .Values.image.pullPolicy }}
volumeMounts:
- name: socket-dir
mountPath: /plugin
- name: zfssa-csi-resizer
image: {{ .Values.image.sidecarBase }}{{ .Values.images.csiResizer.name }}:{{ .Values.images.csiResizer.tag }}
args:
- "--v=5"
- "--csi-address=$(ADDRESS)"
- "--leader-election"
env:
- name: ADDRESS
value: /plugin/csi.sock
imagePullPolicy: {{ .Values.image.pullPolicy }}
volumeMounts:
- name: socket-dir
mountPath: /plugin
- name: zfssa-csi-provisioner
image: {{ .Values.image.sidecarBase }}{{ .Values.images.csiProvisioner.name }}:{{ .Values.images.csiProvisioner.tag }}
args:
- -v=5
- --csi-address=/plugin/csi.sock
- --timeout=30s
- --feature-gates=Topology=true
imagePullPolicy: {{ .Values.image.pullPolicy }}
securityContext:
# This is necessary only for systems with SELinux, where
# non-privileged sidecar containers cannot access unix domain socket
# created by privileged CSI driver container.
privileged: true
volumeMounts:
- name: socket-dir
mountPath: /plugin
- name: zfssa-csi-attacher
image: {{ .Values.image.sidecarBase }}{{ .Values.images.csiAttacher.name }}:{{ .Values.images.csiAttacher.tag }}
args:
- --v=5
- --csi-address=/plugin/csi.sock
# securityContext:
# This is necessary only for systems with SELinux, where
# non-privileged sidecar containers cannot access unix domain socket
# created by privileged CSI driver container.
# privileged: true
imagePullPolicy: {{ .Values.image.pullPolicy }}
volumeMounts:
- name: socket-dir
mountPath: {{ .Values.paths.pluginDir.mountPath }}
volumes:
- name: socket-dir
hostPath:
path: {{ .Values.paths.pluginDir.hostPath }}
type: DirectoryOrCreate


@ -0,0 +1,42 @@
# Global docker image setting
image:
sidecarBase: k8s.gcr.io/sig-storage/
zfssaBase: iad.ocir.io/zs/store/csi/
pullPolicy: Always
# Define all the images that will be used during helm chart deployment
images:
csiNodeDriverRegistrar:
name: csi-node-driver-registrar
tag: "v2.0.0"
zfssaCsiDriver:
name: zfssa-csi-driver
tag: "v1.0.0"
csiProvisioner:
name: csi-provisioner
tag: "v2.0.5"
csiAttacher:
name: csi-attacher
tag: "v3.0.2"
csiResizer:
name: csi-resizer
tag: "v1.1.0"
csiSnapshotter:
name: csi-snapshotter
tag: "v3.0.3"
paths:
pluginDir:
hostPath: "/var/lib/kubelet/plugins/com.oracle.zfssabs"
mountPath: "/plugin"
deployment:
namespace: default
# ZFSSA-specific information
# It is desirable to provision a normal login user with required authorizations.
zfssaInformation:
username: text-string
password: text-string
target: text-string
cert: cert-base64-encoded


@ -0,0 +1,5 @@
# Introduction
This directory can contain local values files to be used with the helm charts.
Files in this directory should not be checked in to source code control.


@ -0,0 +1,85 @@
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.2.5
api-approved.kubernetes.io: "https://github.com/kubernetes-csi/external-snapshotter/pull/260"
creationTimestamp: null
name: volumesnapshotclasses.snapshot.storage.k8s.io
spec:
additionalPrinterColumns:
- JSONPath: .driver
name: Driver
type: string
- JSONPath: .deletionPolicy
description: Determines whether a VolumeSnapshotContent created through the VolumeSnapshotClass
should be deleted when its bound VolumeSnapshot is deleted.
name: DeletionPolicy
type: string
- JSONPath: .metadata.creationTimestamp
name: Age
type: date
group: snapshot.storage.k8s.io
names:
kind: VolumeSnapshotClass
listKind: VolumeSnapshotClassList
plural: volumesnapshotclasses
singular: volumesnapshotclass
preserveUnknownFields: false
scope: Cluster
subresources: {}
validation:
openAPIV3Schema:
description: VolumeSnapshotClass specifies parameters that a underlying storage
system uses when creating a volume snapshot. A specific VolumeSnapshotClass
is used by specifying its name in a VolumeSnapshot object. VolumeSnapshotClasses
are non-namespaced
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
deletionPolicy:
description: deletionPolicy determines whether a VolumeSnapshotContent created
through the VolumeSnapshotClass should be deleted when its bound VolumeSnapshot
is deleted. Supported values are "Retain" and "Delete". "Retain" means
that the VolumeSnapshotContent and its physical snapshot on underlying
storage system are kept. "Delete" means that the VolumeSnapshotContent
and its physical snapshot on underlying storage system are deleted. Required.
enum:
- Delete
- Retain
type: string
driver:
description: driver is the name of the storage driver that handles this
VolumeSnapshotClass. Required.
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
parameters:
additionalProperties:
type: string
description: parameters is a key-value map with storage driver specific
parameters for creating snapshots. These values are opaque to Kubernetes.
type: object
required:
- deletionPolicy
- driver
type: object
version: v1beta1
versions:
- name: v1beta1
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []


@ -0,0 +1,233 @@
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.2.5
api-approved.kubernetes.io: "https://github.com/kubernetes-csi/external-snapshotter/pull/260"
creationTimestamp: null
name: volumesnapshotcontents.snapshot.storage.k8s.io
spec:
additionalPrinterColumns:
- JSONPath: .status.readyToUse
description: Indicates if a snapshot is ready to be used to restore a volume.
name: ReadyToUse
type: boolean
- JSONPath: .status.restoreSize
description: Represents the complete size of the snapshot in bytes
name: RestoreSize
type: integer
- JSONPath: .spec.deletionPolicy
description: Determines whether this VolumeSnapshotContent and its physical snapshot
on the underlying storage system should be deleted when its bound VolumeSnapshot
is deleted.
name: DeletionPolicy
type: string
- JSONPath: .spec.driver
description: Name of the CSI driver used to create the physical snapshot on the
underlying storage system.
name: Driver
type: string
- JSONPath: .spec.volumeSnapshotClassName
description: Name of the VolumeSnapshotClass to which this snapshot belongs.
name: VolumeSnapshotClass
type: string
- JSONPath: .spec.volumeSnapshotRef.name
description: Name of the VolumeSnapshot object to which this VolumeSnapshotContent
object is bound.
name: VolumeSnapshot
type: string
- JSONPath: .metadata.creationTimestamp
name: Age
type: date
group: snapshot.storage.k8s.io
names:
kind: VolumeSnapshotContent
listKind: VolumeSnapshotContentList
plural: volumesnapshotcontents
singular: volumesnapshotcontent
preserveUnknownFields: false
scope: Cluster
subresources:
status: {}
validation:
openAPIV3Schema:
description: VolumeSnapshotContent represents the actual "on-disk" snapshot
object in the underlying storage system
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
spec:
description: spec defines properties of a VolumeSnapshotContent created
by the underlying storage system. Required.
properties:
deletionPolicy:
description: deletionPolicy determines whether this VolumeSnapshotContent
and its physical snapshot on the underlying storage system should
be deleted when its bound VolumeSnapshot is deleted. Supported values
are "Retain" and "Delete". "Retain" means that the VolumeSnapshotContent
and its physical snapshot on underlying storage system are kept. "Delete"
means that the VolumeSnapshotContent and its physical snapshot on
underlying storage system are deleted. In dynamic snapshot creation
case, this field will be filled in with the "DeletionPolicy" field
defined in the VolumeSnapshotClass the VolumeSnapshot refers to. For
pre-existing snapshots, users MUST specify this field when creating
the VolumeSnapshotContent object. Required.
enum:
- Delete
- Retain
type: string
driver:
description: driver is the name of the CSI driver used to create the
physical snapshot on the underlying storage system. This MUST be the
same as the name returned by the CSI GetPluginName() call for that
driver. Required.
type: string
source:
description: source specifies from where a snapshot will be created.
This field is immutable after creation. Required.
properties:
snapshotHandle:
description: snapshotHandle specifies the CSI "snapshot_id" of a
pre-existing snapshot on the underlying storage system. This field
is immutable.
type: string
volumeHandle:
description: volumeHandle specifies the CSI "volume_id" of the volume
from which a snapshot should be dynamically taken from. This field
is immutable.
type: string
type: object
volumeSnapshotClassName:
description: name of the VolumeSnapshotClass to which this snapshot
belongs.
type: string
volumeSnapshotRef:
description: volumeSnapshotRef specifies the VolumeSnapshot object to
which this VolumeSnapshotContent object is bound. VolumeSnapshot.Spec.VolumeSnapshotContentName
field must reference to this VolumeSnapshotContent's name for the
bidirectional binding to be valid. For a pre-existing VolumeSnapshotContent
object, name and namespace of the VolumeSnapshot object MUST be provided
for binding to happen. This field is immutable after creation. Required.
properties:
apiVersion:
description: API version of the referent.
type: string
fieldPath:
description: 'If referring to a piece of an object instead of an
entire object, this string should contain a valid JSON/Go field
access statement, such as desiredState.manifest.containers[2].
For example, if the object reference is to a container within
a pod, this would take on a value like: "spec.containers{name}"
(where "name" refers to the name of the container that triggered
the event) or if no container name is specified "spec.containers[2]"
(container with index 2 in this pod). This syntax is chosen only
to have some well-defined way of referencing a part of an object.
TODO: this design is not final and this field is subject to change
in the future.'
type: string
kind:
description: 'Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names'
type: string
namespace:
description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/'
type: string
resourceVersion:
description: 'Specific resourceVersion to which this reference is
made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency'
type: string
uid:
description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids'
type: string
type: object
required:
- deletionPolicy
- driver
- source
- volumeSnapshotRef
type: object
status:
description: status represents the current information of a snapshot.
properties:
creationTime:
description: creationTime is the timestamp when the point-in-time snapshot
is taken by the underlying storage system. In dynamic snapshot creation
case, this field will be filled in with the "creation_time" value
returned from CSI "CreateSnapshotRequest" gRPC call. For a pre-existing
snapshot, this field will be filled with the "creation_time" value
returned from the CSI "ListSnapshots" gRPC call if the driver supports
it. If not specified, it indicates the creation time is unknown. The
format of this field is a Unix nanoseconds time encoded as an int64.
On Unix, the command `date +%s%N` returns the current time in nanoseconds
since 1970-01-01 00:00:00 UTC.
format: int64
type: integer
error:
description: error is the latest observed error during snapshot creation,
if any.
properties:
message:
description: 'message is a string detailing the encountered error
during snapshot creation if specified. NOTE: message may be logged,
and it should not contain sensitive information.'
type: string
time:
description: time is the timestamp when the error was encountered.
format: date-time
type: string
type: object
readyToUse:
description: readyToUse indicates if a snapshot is ready to be used
to restore a volume. In dynamic snapshot creation case, this field
will be filled in with the "ready_to_use" value returned from CSI
"CreateSnapshotRequest" gRPC call. For a pre-existing snapshot, this
field will be filled with the "ready_to_use" value returned from the
CSI "ListSnapshots" gRPC call if the driver supports it, otherwise,
this field will be set to "True". If not specified, it means the readiness
of a snapshot is unknown.
type: boolean
restoreSize:
description: restoreSize represents the complete size of the snapshot
in bytes. In dynamic snapshot creation case, this field will be filled
in with the "size_bytes" value returned from CSI "CreateSnapshotRequest"
gRPC call. For a pre-existing snapshot, this field will be filled
with the "size_bytes" value returned from the CSI "ListSnapshots"
gRPC call if the driver supports it. When restoring a volume from
this snapshot, the size of the volume MUST NOT be smaller than the
restoreSize if it is specified, otherwise the restoration will fail.
If not specified, it indicates that the size is unknown.
format: int64
minimum: 0
type: integer
snapshotHandle:
description: snapshotHandle is the CSI "snapshot_id" of a snapshot on
the underlying storage system. If not specified, it indicates that
dynamic snapshot creation has either failed or it is still in progress.
type: string
type: object
required:
- spec
type: object
version: v1beta1
versions:
- name: v1beta1
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []


@ -0,0 +1,188 @@
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.2.5
api-approved.kubernetes.io: "https://github.com/kubernetes-csi/external-snapshotter/pull/260"
creationTimestamp: null
name: volumesnapshots.snapshot.storage.k8s.io
spec:
additionalPrinterColumns:
- JSONPath: .status.readyToUse
description: Indicates if a snapshot is ready to be used to restore a volume.
name: ReadyToUse
type: boolean
- JSONPath: .spec.source.persistentVolumeClaimName
description: Name of the source PVC from where a dynamically taken snapshot will
be created.
name: SourcePVC
type: string
- JSONPath: .spec.source.volumeSnapshotContentName
description: Name of the VolumeSnapshotContent which represents a pre-provisioned
snapshot.
name: SourceSnapshotContent
type: string
- JSONPath: .status.restoreSize
description: Represents the complete size of the snapshot.
name: RestoreSize
type: string
- JSONPath: .spec.volumeSnapshotClassName
description: The name of the VolumeSnapshotClass requested by the VolumeSnapshot.
name: SnapshotClass
type: string
- JSONPath: .status.boundVolumeSnapshotContentName
description: The name of the VolumeSnapshotContent to which this VolumeSnapshot
is bound.
name: SnapshotContent
type: string
- JSONPath: .status.creationTime
description: Timestamp when the point-in-time snapshot is taken by the underlying
storage system.
name: CreationTime
type: date
- JSONPath: .metadata.creationTimestamp
name: Age
type: date
group: snapshot.storage.k8s.io
names:
kind: VolumeSnapshot
listKind: VolumeSnapshotList
plural: volumesnapshots
singular: volumesnapshot
preserveUnknownFields: false
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
description: VolumeSnapshot is a user's request for either creating a point-in-time
snapshot of a persistent volume, or binding to a pre-existing snapshot.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
spec:
description: 'spec defines the desired characteristics of a snapshot requested
by a user. More info: https://kubernetes.io/docs/concepts/storage/volume-snapshots#volumesnapshots
Required.'
properties:
source:
description: source specifies where a snapshot will be created from.
This field is immutable after creation. Required.
properties:
persistentVolumeClaimName:
description: persistentVolumeClaimName specifies the name of the
PersistentVolumeClaim object in the same namespace as the VolumeSnapshot
object where the snapshot should be dynamically taken from. This
field is immutable.
type: string
volumeSnapshotContentName:
description: volumeSnapshotContentName specifies the name of a pre-existing
VolumeSnapshotContent object. This field is immutable.
type: string
type: object
volumeSnapshotClassName:
description: 'volumeSnapshotClassName is the name of the VolumeSnapshotClass
requested by the VolumeSnapshot. If not specified, the default snapshot
class will be used if one exists. If not specified, and there is no
default snapshot class, dynamic snapshot creation will fail. Empty
string is not allowed for this field. TODO(xiangqian): a webhook validation
on empty string. More info: https://kubernetes.io/docs/concepts/storage/volume-snapshot-classes'
type: string
required:
- source
type: object
status:
description: 'status represents the current information of a snapshot. NOTE:
status can be modified by sources other than system controllers, and must
not be depended upon for accuracy. Controllers should only use information
from the VolumeSnapshotContent object after verifying that the binding
is accurate and complete.'
properties:
boundVolumeSnapshotContentName:
description: 'boundVolumeSnapshotContentName represents the name of
the VolumeSnapshotContent object to which the VolumeSnapshot object
is bound. If not specified, it indicates that the VolumeSnapshot object
has not been successfully bound to a VolumeSnapshotContent object
yet. NOTE: Specified boundVolumeSnapshotContentName alone does not
mean binding is valid. Controllers MUST always verify bidirectional
binding between VolumeSnapshot and VolumeSnapshotContent to
avoid possible security issues.'
type: string
creationTime:
description: creationTime is the timestamp when the point-in-time snapshot
is taken by the underlying storage system. In dynamic snapshot creation
case, this field will be filled in with the "creation_time" value
returned from CSI "CreateSnapshotRequest" gRPC call. For a pre-existing
snapshot, this field will be filled with the "creation_time" value
returned from the CSI "ListSnapshots" gRPC call if the driver supports
it. If not specified, it indicates that the creation time of the snapshot
is unknown.
format: date-time
type: string
error:
description: error is the last observed error during snapshot creation,
if any. This field could be helpful to upper level controllers(i.e.,
application controller) to decide whether they should continue on
waiting for the snapshot to be created based on the type of error
reported.
properties:
message:
description: 'message is a string detailing the encountered error
during snapshot creation if specified. NOTE: message may be logged,
and it should not contain sensitive information.'
type: string
time:
description: time is the timestamp when the error was encountered.
format: date-time
type: string
type: object
readyToUse:
description: readyToUse indicates if a snapshot is ready to be used
to restore a volume. In dynamic snapshot creation case, this field
will be filled in with the "ready_to_use" value returned from CSI
"CreateSnapshotRequest" gRPC call. For a pre-existing snapshot, this
field will be filled with the "ready_to_use" value returned from the
CSI "ListSnapshots" gRPC call if the driver supports it, otherwise,
this field will be set to "True". If not specified, it means the readiness
of a snapshot is unknown.
type: boolean
restoreSize:
anyOf:
- type: integer
- type: string
description: restoreSize represents the complete size of the snapshot
in bytes. In dynamic snapshot creation case, this field will be filled
in with the "size_bytes" value returned from CSI "CreateSnapshotRequest"
gRPC call. For a pre-existing snapshot, this field will be filled
with the "size_bytes" value returned from the CSI "ListSnapshots"
gRPC call if the driver supports it. When restoring a volume from
this snapshot, the size of the volume MUST NOT be smaller than the
restoreSize if it is specified, otherwise the restoration will fail.
If not specified, it indicates that the size is unknown.
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
type: object
required:
- spec
type: object
version: v1beta1
versions:
- name: v1beta1
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

View File

@ -0,0 +1,80 @@
# RBAC file for the snapshot controller.
apiVersion: v1
kind: ServiceAccount
metadata:
name: snapshot-controller
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
# rename if there are conflicts
name: snapshot-controller-runner
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotcontents"]
verbs: ["create", "get", "list", "watch", "update", "delete"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshots"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshots/status"]
verbs: ["update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: snapshot-controller-role
subjects:
- kind: ServiceAccount
name: snapshot-controller
# replace with non-default namespace name
namespace: default
roleRef:
kind: ClusterRole
# change the name also here if the ClusterRole gets renamed
name: snapshot-controller-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: default # TODO: replace with the namespace you want for your controller
name: snapshot-controller-leaderelection
rules:
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["get", "watch", "list", "delete", "update", "create"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: snapshot-controller-leaderelection
namespace: default # TODO: replace with the namespace you want for your controller
subjects:
- kind: ServiceAccount
name: snapshot-controller
namespace: default # TODO: replace with the namespace you want for your controller
roleRef:
kind: Role
name: snapshot-controller-leaderelection
apiGroup: rbac.authorization.k8s.io

View File

@ -0,0 +1,26 @@
# This YAML file shows how to deploy the snapshot controller
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
name: snapshot-controller
spec:
serviceName: "snapshot-controller"
replicas: 1
selector:
matchLabels:
app: snapshot-controller
template:
metadata:
labels:
app: snapshot-controller
spec:
serviceAccount: snapshot-controller
containers:
- name: snapshot-controller
image: quay.io/k8scsi/snapshot-controller:v2.1.1
args:
- "--v=5"
- "--leader-election=false"
imagePullPolicy: Always

View File

@ -0,0 +1,4 @@
apiVersion: v1
name: zfssa-existing-block-example
version: 0.0.1
description: Deploys an end to end iSCSI volume example for Oracle ZFS Storage Appliance CSI driver.

110
examples/block-pv/README.md Normal file
View File

@ -0,0 +1,110 @@
# Introduction
This is an end-to-end example of using an existing iSCSI block device on a target
Oracle ZFS Storage Appliance.
Prior to running this example, the iSCSI environment must be set up properly
on both the Kubernetes worker nodes and the Oracle ZFS Storage Appliance.
Refer to the [INSTALLATION](../../INSTALLATION.md) instructions for details.
The flow to use an existing volume is:
* create a persistent volume (PV) object
* allocate it to a persistent volume claim (PVC)
* use the PVC from a pod
The following must be set up:
* the volume handle must be a fully formed volume id
* there must be volume attributes defined as part of the persistent volume
* the initiator group for the block volume *must* be set to ```com.sun.ms.vss.hg.maskAll```
In this example, the volume handle is constructed via values in the helm
chart. The only new attribute necessary is the name of the volume on the
target appliance. The rest is assembled from the information already
in the local-values.yaml file (appliance name, pool, project, and so on).
The resulting VolumeHandle appears similar to the following, with the values
in ```<>``` filled in from the helm variables:
```
volumeHandle: /iscsi/<appliance name>/<volume name>/<pool name>/local/<project name>/<volume name>
```
From the above, note that the volumeHandle is in the form of an ID with the components:
* 'iscsi' - denoting a block volume
* 'appliance name' - this is the management path of the ZFSSA target appliance
* 'volume name' - the name of the share on the appliance
* 'pool name' - the pool on the target appliance
* 'local' - denotes that the pool is owned by the head
* 'project' - the project that the share is in
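For example, with a hypothetical appliance named `myzfssa` and a LUN named `pvlun1` in pool `pool1`
and project `proj1`, the fully formed handle would be:
```text
volumeHandle: /iscsi/myzfssa/pvlun1/pool1/local/proj1/pvlun1
```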
In the volume attributes, the targetGroup and targetPortal must be defined. This should be similar
to that in the storage class.
Once created, a persistent volume claim can be made for this share and used in a pod.
## Configuration
Set up a local values file. It must contain the values that customize the example for the
target appliance, but may contain others. The minimum set of values to
customize are:
* appliance:
* targetGroup: the target group that contains data path interfaces on the target appliance
* pool: the pool to create shares in
* project: the project to create shares in
* targetPortal: the target iSCSI portal on the appliance
* targetGroup: the target iSCSI group to use on the appliance
* nfsServer: the NFS data path IP address
* applianceName: name of the appliance
* pvExistingName: the name of the iSCSI LUN share on the target appliance
* volSize: the size of the iSCSI LUN share specified by pvExistingName
On the target appliance, locate the share via the CLI:
```
appliance> shares
appliance> select <pool name>
appliance> select <project name>
appliance> select <share name>
appliance> set initiatorgroup=com.sun.ms.vss.hg.maskAll
appliance> commit
```
## Deployment
Assuming there is a set of values in the local-values directory, deploy using Helm 3:
```
helm install -f ../local-values/local-values.yaml zfssa-block-existing ./
```
Once deployed, verify each of the created entities using kubectl:
```
kubectl get sc
kubectl get pvc
kubectl get pod
```
## Writing data
Once the pod is deployed, as a demo, start the following analytics in a worksheet on
the Oracle ZFS Storage Appliance that is hosting the target LUNs:
* Protocol -> iSCSI bytes broken down by initiator
* Protocol -> iSCSI bytes broken down by target
* Protocol -> iSCSI bytes broken down by LUN
Exec into the pod and write some data to the block volume:
```text
kubectl exec -it zfssa-block-existing-pod -- /bin/sh
/ # cd /dev
/dev # ls
block fd mqueue ptmx random stderr stdout tty zero
core full null pts shm stdin termination-log urandom
/dev # dd if=/dev/zero of=/dev/block count=1024 bs=1024
1024+0 records in
1024+0 records out
/dev #
```
The analytics on the appliance should have seen the spikes as data was written.

View File

@ -0,0 +1,20 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: {{ .Values.scBlockName }}
provisioner: zfssa-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
volumeType: {{ .Values.appliance.volumeType }}
targetGroup: {{ .Values.appliance.targetGroup }}
blockSize: "8192"
pool: {{ .Values.appliance.pool }}
project: {{ .Values.appliance.project }}
targetPortal: {{ .Values.appliance.targetPortal }}
nfsServer: {{ .Values.appliance.nfsServer }}
rootUser: {{ .Values.appliance.rootUser }}
rootGroup: {{ .Values.appliance.rootGroup }}
rootPermissions: "777"
shareNFS: "on"
restrictChown: "false"

View File

@ -0,0 +1,24 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: {{ .Values.pvExistingName }}
annotations:
pv.kubernetes.io/provisioned-by: zfssa-csi-driver
spec:
accessModes:
- ReadWriteOnce
volumeMode: Block
persistentVolumeReclaimPolicy: Retain
capacity:
storage: {{ .Values.volSize }}
csi:
driver: zfssa-csi-driver
volumeHandle: /iscsi/{{ .Values.applianceName }}/{{ .Values.pvExistingName }}/{{ .Values.appliance.pool }}/local/{{ .Values.appliance.project }}/{{ .Values.pvExistingName }}
readOnly: false
volumeAttributes:
targetGroup: {{ .Values.appliance.targetGroup }}
targetPortal: {{ .Values.appliance.targetPortal }}
claimRef:
namespace: default
name: {{ .Values.pvcExistingName }}

View File

@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Values.pvcExistingName }}
spec:
accessModes:
- ReadWriteOnce
volumeMode: Block
resources:
requests:
storage: {{ .Values.volSize }}
storageClassName: {{ .Values.scBlockName }}

View File

@ -0,0 +1,20 @@
apiVersion: v1
kind: Pod
metadata:
name: {{ .Values.podBlockName }}
labels:
name: ol7slim-test
spec:
restartPolicy: Always
containers:
- image: container-registry.oracle.com/os/oraclelinux:7-slim
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
name: ol7slim
volumeDevices:
- name: vol
devicePath: /dev/block
volumes:
- name: vol
persistentVolumeClaim:
claimName: {{ .Values.pvcExistingName }}

View File

@ -0,0 +1,20 @@
# Various names used through example
scBlockName: zfssa-block-existing-sc
pvExistingName: OVERRIDE
pvcExistingName: zfssa-block-existing-pvc
podBlockName: zfssa-block-existing-pod
applianceName: OVERRIDE
# Settings for target appliance
appliance:
volumeType: thin
targetGroup: OVERRIDE
pool: OVERRIDE
project: OVERRIDE
targetPortal: OVERRIDE
nfsServer: OVERRIDE
rootUser: root
rootGroup: other
# Settings for volume
volSize: OVERRIDE

View File

@ -0,0 +1,21 @@
apiVersion: v1
kind: Pod
metadata:
name: zfssa-block-vs-restore-pod
labels:
name: ol7slim-test
spec:
restartPolicy: Always
containers:
- image: container-registry.oracle.com/os/oraclelinux:7-slim
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
name: ol7slim
volumeDevices:
- name: vol
devicePath: /dev/block
volumes:
- name: vol
persistentVolumeClaim:
claimName: zfssa-block-vs-restore-pvc
readOnly: false

View File

@ -0,0 +1,16 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: zfssa-block-vs-restore-pvc
spec:
storageClassName: zfssa-block-vs-example-sc
dataSource:
name: zfssa-block-vs-snapshot
kind: VolumeSnapshot
apiGroup: snapshot.storage.k8s.io
accessModes:
- ReadWriteOnce
volumeMode: Block
resources:
requests:
storage: 68796

View File

@ -0,0 +1,8 @@
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
name: zfssa-block-vs-snapshot
spec:
volumeSnapshotClassName: zfssa-block-vs-example-vsc
source:
persistentVolumeClaimName: zfssa-block-vs-example-pvc

View File

@ -0,0 +1,4 @@
apiVersion: v1
name: zfssa-csi-block-example
version: 0.0.1
description: Deploys an end to end iSCSI volume example for Oracle ZFS Storage Appliance CSI driver.

View File

@ -0,0 +1,193 @@
# Introduction
This is an end-to-end example of taking a snapshot of a block volume (iSCSI LUN)
on a target Oracle ZFS Storage Appliance and making use of it
on another pod by creating (restoring) a volume from the snapshot.
Prior to running this example, the iSCSI environment must be set up properly
on both the Kubernetes worker nodes and the Oracle ZFS Storage Appliance.
Refer to the [INSTALLATION](../../INSTALLATION.md) instructions for details.
## Configuration
Set up a local values file. It must contain the values that customize the example for the
target appliance, but may contain others. The minimum set of values to
customize are:
* appliance:
* pool: the pool to create shares in
* project: the project to create shares in
* targetPortal: the target iSCSI portal on the appliance
* targetGroup: the target iSCSI group to use on the appliance
* volSize: the size of the iSCSI LUN share to create
## Enabling Volume Snapshot Feature (Only for Kubernetes v1.17 - v1.19)
The Kubernetes Volume Snapshot feature became GA in Kubernetes v1.20. In order to use
this feature in Kubernetes pre-v1.20, it MUST be enabled prior to deploying ZS CSI Driver.
To enable the feature on Kubernetes pre-v1.20, follow the instructions on
[INSTALLATION](../../INSTALLATION.md).
## Deployment
This step includes deploying a pod with a block volume attached using a regular
storage class and a persistent volume claim. It also deploys a volume snapshot class
required to take snapshots of the persistent volume.
Assuming there is a set of values in the local-values directory, deploy using Helm 3:
```text
helm install -f ../local-values/local-values.yaml zfssa-block-vsc ./
```
Once deployed, verify each of the created entities using kubectl:
1. Display the storage class (SC)
The command `kubectl get sc` should now return something similar to this:
```text
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
zfssa-block-vs-example-sc zfssa-csi-driver Delete Immediate false 86s
```
2. Display the volume claim
The command `kubectl get pvc` should now return something similar to this:
```text
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
zfssa-block-vs-example-pvc Bound pvc-477804b4-e592-4039-a77c-a1c99a1e537b 10Gi RWO zfssa-block-vs-example-sc 62s
```
3. Display the volume snapshot class
The command `kubectl get volumesnapshotclass` should now return something similar to this:
```text
NAME DRIVER DELETIONPOLICY AGE
zfssa-block-vs-example-vsc zfssa-csi-driver Delete 100s
```
4. Display the pod mounting the volume
The command `kubectl get pod` should now return something similar to this:
```text
NAME READY STATUS RESTARTS AGE
snapshot-controller-0 1/1 Running 0 14d
zfssa-block-vs-example-pod 1/1 Running 0 2m11s
zfssa-csi-nodeplugin-7kj5m 2/2 Running 0 3m11s
zfssa-csi-nodeplugin-rgfzf 2/2 Running 0 3m11s
zfssa-csi-provisioner-0 4/4 Running 0 3m11s
```
## Writing data
Once the pod is deployed, verify the block volume is mounted and can be written.
```text
kubectl exec -it zfssa-block-vs-example-pod -- /bin/sh
/ # cd /dev
/dev #
/dev # date > block
/dev # dd if=block bs=64 count=1
Wed Jan 27 22:06:36 UTC 2021
1+0 records in
1+0 records out
/dev #
```
Alternatively, `cat /dev/block` followed by `CTRL-C` can be used to view the timestamp written on the /dev/block device file.
## Creating snapshot
Use configuration files in examples/block-snapshot directory with proper modifications
for the rest of the example steps.
Create a snapshot of the volume by running the command below:
```text
kubectl apply -f ../block-snapshot/block-snapshot.yaml
```
Verify the volume snapshot is created and available by running the following command:
```text
kubectl get volumesnapshot
```
Wait until the READYTOUSE of the snapshot becomes true before moving on to the next steps.
It is important to use the RESTORESIZE value of the volume snapshot just created when specifying
the storage capacity of a persistent volume claim that provisions a persistent volume from this
snapshot. For example, set the storage capacity in ../block-snapshot/block-pvc-from-snapshot.yaml to at least that value.
Optionally, verify the volume snapshot exists on ZFS Storage Appliance. The snapshot name
on ZFS Storage Appliance should have the volume snapshot UID as the suffix.
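One way to read both values directly from Kubernetes is with jsonpath queries (a sketch, assuming
the snapshot name used in this example):
```text
kubectl get volumesnapshot zfssa-block-vs-snapshot -o jsonpath='{.status.restoreSize}'
kubectl get volumesnapshot zfssa-block-vs-snapshot -o jsonpath='{.metadata.uid}'
```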
## Creating persistent volume claim
Create a persistent volume claim to provision a volume from the snapshot by running
the command below. Be aware that the persistent volume provisioned by this persistent volume claim
is not expandable. If expansion is needed, create a new storage class with allowVolumeExpansion: true
and reference it in the persistent volume claim.
```text
kubectl apply -f ../block-snapshot/block-pvc-from-snapshot.yaml
```
Verify the persistent volume claim is created and a volume is provisioned by running the following command:
```text
kubectl get pv,pvc
```
The command `kubectl get pv,pvc` should return something similar to this:
```text
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-477804b4-e592-4039-a77c-a1c99a1e537b 10Gi RWO Delete Bound default/zfssa-block-vs-example-pvc zfssa-block-vs-example-sc 13m
persistentvolume/pvc-91f949f6-5d77-4183-bab5-adfdb1452a90 10Gi RWO Delete Bound default/zfssa-block-vs-restore-pvc zfssa-block-vs-example-sc 11s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/zfssa-block-vs-example-pvc Bound pvc-477804b4-e592-4039-a77c-a1c99a1e537b 10Gi RWO zfssa-block-vs-example-sc 13m
persistentvolumeclaim/zfssa-block-vs-restore-pvc Bound pvc-91f949f6-5d77-4183-bab5-adfdb1452a90 10Gi RWO zfssa-block-vs-example-sc 16s
```
Optionally, verify the new volume exists on ZFS Storage Appliance. Notice that the new
volume is a clone of the snapshot taken from the original volume.
## Creating pod using restored volume
Create a pod with the persistent volume claim created from the above step by running the command below:
```text
kubectl apply -f ../block-snapshot/block-pod-restored-volume.yaml
```
The command `kubectl get pod` should now return something similar to this:
```text
NAME READY STATUS RESTARTS AGE
snapshot-controller-0 1/1 Running 0 14d
zfssa-block-vs-example-pod 1/1 Running 0 15m
zfssa-block-vs-restore-pod 1/1 Running 0 21s
zfssa-csi-nodeplugin-7kj5m 2/2 Running 0 16m
zfssa-csi-nodeplugin-rgfzf 2/2 Running 0 16m
zfssa-csi-provisioner-0 4/4 Running 0 16m
```
Verify the new volume has the contents of the original volume at the point in time
when the snapshot was taken.
```text
kubectl exec -it zfssa-block-vs-restore-pod -- /bin/sh
/ # cd /dev
/dev # dd if=block bs=64 count=1
Wed Jan 27 22:06:36 UTC 2021
1+0 records in
1+0 records out
/dev #
```
## Deleting pod, persistent volume claim and volume snapshot
To delete the pod, persistent volume claim and volume snapshot created from the above steps,
run the commands below. Wait until each deleted resource disappears from
the list that the `kubectl get ...` command displays before running the next command.
```text
kubectl delete -f ../block-snapshot/block-pod-restored-volume.yaml
kubectl delete -f ../block-snapshot/block-pvc-from-snapshot.yaml
kubectl delete -f ../block-snapshot/block-snapshot.yaml
```

View File

@ -0,0 +1,20 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: {{ .Values.scBlockName }}
provisioner: zfssa-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
volumeType: {{ .Values.appliance.volumeType }}
targetGroup: {{ .Values.appliance.targetGroup }}
blockSize: "8192"
pool: {{ .Values.appliance.pool }}
project: {{ .Values.appliance.project }}
targetPortal: {{ .Values.appliance.targetPortal }}
nfsServer: {{ .Values.appliance.nfsServer }}
rootUser: {{ .Values.appliance.rootUser }}
rootGroup: {{ .Values.appliance.rootGroup }}
rootPermissions: "777"
shareNFS: "on"
restrictChown: "false"

View File

@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Values.pvcBlockName }}
spec:
accessModes:
- ReadWriteOnce
volumeMode: Block
resources:
requests:
storage: {{ .Values.volSize }}
storageClassName: {{ .Values.scBlockName }}

View File

@ -0,0 +1,6 @@
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
name: {{ .Values.vscBlockName }}
driver: zfssa-csi-driver
deletionPolicy: Delete

View File

@ -0,0 +1,20 @@
apiVersion: v1
kind: Pod
metadata:
name: {{ .Values.podBlockName }}
labels:
name: ol7slim-test
spec:
restartPolicy: Always
containers:
- image: container-registry.oracle.com/os/oraclelinux:7-slim
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
name: ol7slim
volumeDevices:
- name: vol
devicePath: /dev/block
volumes:
- name: vol
persistentVolumeClaim:
claimName: {{ .Values.pvcBlockName }}

View File

@ -0,0 +1,19 @@
# Various names used through example
scBlockName: zfssa-block-example-sc
vscBlockName: zfssa-block-example-vsc
pvcBlockName: zfssa-block-example-pvc
podBlockName: zfssa-block-example-pod
# Settings for target appliance
appliance:
volumeType: thin
targetGroup: OVERRIDE
pool: OVERRIDE
project: OVERRIDE
targetPortal: OVERRIDE
nfsServer: OVERRIDE
rootUser: root
rootGroup: other
# Settings for volume
volSize: OVERRIDE

View File

@ -0,0 +1,4 @@
apiVersion: v1
name: zfssa-csi-block-example
version: 0.0.1
description: Deploys an end to end iSCSI volume example for Oracle ZFS Storage Appliance CSI driver.

63
examples/block/README.md Normal file
View File

@ -0,0 +1,63 @@
# Introduction
This is an end-to-end example of using iSCSI block devices on a target
Oracle ZFS Storage Appliance.
Prior to running this example, the iSCSI environment must be set up properly
on both the Kubernetes worker nodes and the Oracle ZFS Storage Appliance.
Refer to the [INSTALLATION](../../INSTALLATION.md) instructions for details.
## Configuration
Set up a local values file. It must contain the values that customize the example for the
target appliance, but may contain others. The minimum set of values to
customize are:
* appliance:
* targetGroup: the target group that contains data path interfaces on the target appliance
* pool: the pool to create shares in
* project: the project to create shares in
* targetPortal: the target iSCSI portal on the appliance
* targetGroup: the target iSCSI group to use on the appliance
* nfsServer: the NFS data path IP address
* volSize: the size of the block volume (iSCSI LUN) to create
## Deployment
Assuming there is a set of values in the local-values directory, deploy using Helm 3:
```
helm install -f ../local-values/local-values.yaml zfssa-block ./
```
Once deployed, verify each of the created entities using kubectl:
```
kubectl get sc
kubectl get pvc
kubectl get pod
```
## Writing data
Once the pod is deployed, as a demo, start the following analytics in a worksheet on
the Oracle ZFS Storage Appliance that is hosting the target LUNs:
* Protocol -> iSCSI bytes broken down by initiator
* Protocol -> iSCSI bytes broken down by target
* Protocol -> iSCSI bytes broken down by LUN
Exec into the pod and write some data to the block volume:
```text
kubectl exec -it zfssa-block-example-pod -- /bin/sh
/ # cd /dev
/dev # ls
block fd mqueue ptmx random stderr stdout tty zero
core full null pts shm stdin termination-log urandom
/dev # dd if=/dev/zero of=/dev/block count=1024 bs=1024
1024+0 records in
1024+0 records out
/dev #
```
The analytics on the appliance should have seen the spikes as data was written.

View File

@ -0,0 +1,20 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: {{ .Values.scBlockName }}
provisioner: zfssa-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
volumeType: {{ .Values.appliance.volumeType }}
targetGroup: {{ .Values.appliance.targetGroup }}
blockSize: "8192"
pool: {{ .Values.appliance.pool }}
project: {{ .Values.appliance.project }}
targetPortal: {{ .Values.appliance.targetPortal }}
nfsServer: {{ .Values.appliance.nfsServer }}
rootUser: {{ .Values.appliance.rootUser }}
rootGroup: {{ .Values.appliance.rootGroup }}
rootPermissions: "777"
shareNFS: "on"
restrictChown: "false"

View File

@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Values.pvcBlockName }}
spec:
accessModes:
- ReadWriteOnce
volumeMode: Block
resources:
requests:
storage: {{ .Values.volSize }}
storageClassName: {{ .Values.scBlockName }}

View File

@ -0,0 +1,20 @@
apiVersion: v1
kind: Pod
metadata:
name: {{ .Values.podBlockName }}
labels:
name: ol7slim-test
spec:
restartPolicy: Always
containers:
- image: container-registry.oracle.com/os/oraclelinux:7-slim
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
name: ol7slim
volumeDevices:
- name: vol
devicePath: /dev/block
volumes:
- name: vol
persistentVolumeClaim:
claimName: {{ .Values.pvcBlockName }}

View File

@ -0,0 +1,18 @@
# Various names used through example
scBlockName: zfssa-block-example-sc
pvcBlockName: zfssa-block-example-pvc
podBlockName: zfssa-block-example-pod
# Settings for target appliance
appliance:
volumeType: thin
targetGroup: OVERRIDE
pool: OVERRIDE
project: OVERRIDE
targetPortal: OVERRIDE
nfsServer: OVERRIDE
rootUser: root
rootGroup: other
# Settings for volume
volSize: OVERRIDE

View File

@ -0,0 +1,4 @@
apiVersion: v1
description: Creates Storageclass and Persistent Volume Claim used by Sauron.
name: sauron-storage
version: 3.0.1

View File

@ -0,0 +1,20 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: {{ .Values.storageClass.name }}
provisioner: zfssa-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
volumeType: {{ .Values.storageClass.volumeType }}
targetGroup: {{ .Values.storageClass.targetGroup }}
blockSize: {{ .Values.storageClass.blockSize }}
pool: {{ .Values.storageClass.pool }}
project: {{ .Values.storageClass.project }}
targetPortal: {{ .Values.storageClass.targetPortal }}
nfsServer: {{ .Values.storageClass.nfsServer }}
rootUser: {{ .Values.storageClass.rootUser }}
rootGroup: {{ .Values.storageClass.rootGroup }}
rootPermissions: {{ .Values.storageClass.rootPermissions }}
shareNFS: {{ .Values.storageClass.shareNFS }}
restrictChown: {{ .Values.storageClass.restrictChown }}

View File

@ -0,0 +1,75 @@
{{- if .Values.persistentVolumeClaim.enabled -}}
kind: Namespace
apiVersion: v1
metadata:
name: {{ .Values.persistentVolumeClaim.namespace }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ssec0
namespace: {{ .Values.persistentVolumeClaim.namespace }}
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.persistentVolumeClaim.size }}
storageClassName: {{ .Values.storageClass.name }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ssec1
namespace: {{ .Values.persistentVolumeClaim.namespace }}
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.persistentVolumeClaim.size }}
storageClassName: {{ .Values.storageClass.name }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ssec2
namespace: {{ .Values.persistentVolumeClaim.namespace }}
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.persistentVolumeClaim.size }}
storageClassName: {{ .Values.storageClass.name }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ssg
namespace: {{ .Values.persistentVolumeClaim.namespace }}
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.persistentVolumeClaim.size }}
storageClassName: {{ .Values.storageClass.name }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ssp-many
namespace: {{ .Values.persistentVolumeClaim.namespace }}
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: {{ .Values.persistentVolumeClaim.size }}
storageClassName: {{ .Values.storageClass.name }}
{{- end }}

View File

@ -0,0 +1,21 @@
# Define Storage Class Parameters
storageClass:
name: "sauron-sc"
blockSize: '"8192"'
pool: h1-pool1
project: pmonday
targetPortal: '"10.80.44.65:3260"'
nfsServer: '"10.80.44.65"'
rootUser: nobody
rootGroup: other
rootPermissions: '"777"'
shareNFS: '"on"'
restrictChown: '"false"'
volumeType: '"thin"'
targetGroup: '"csi-data-path-target"'
# Define Persistent Volume Claim Parameters.
persistentVolumeClaim:
enabled: true
namespace: sauron
size: 100Gi

View File

@ -0,0 +1,6 @@
# Introduction
This directory can contain local values files to be used with the helm charts.
Files in this directory should not be checked in to a source code control
system as they may contain passwords.
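One way to keep them out of version control is an ignore rule (a minimal sketch, assuming the
examples/local-values directory layout used by these charts):
```text
# .gitignore entry: never commit local value overrides
examples/local-values/*.yaml
```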

View File

@ -0,0 +1,4 @@
apiVersion: v1
name: zfssa-csi-nfs-exp-example
version: 0.0.1
description: Deploys an end to end NFS volume example for Oracle ZFS Storage Appliance CSI driver.

174
examples/nfs-exp/README.md Normal file
View File

@ -0,0 +1,174 @@
# Introduction
This is an end-to-end example of using NFS filesystems and expanding the volume size
on a target Oracle ZFS Storage Appliance.
Prior to running this example, the NFS environment must be set up properly
on both the Kubernetes worker nodes and the Oracle ZFS Storage Appliance.
Refer to the [INSTALLATION](../../INSTALLATION.md) instructions for details.
## Configuration
Set up a local values file. It must contain the values that customize the example for the
target appliance, but may contain others. The minimum set of values to
customize are:
* appliance:
* targetGroup: the target group that contains data path interfaces on the target appliance
* pool: the pool to create shares in
* project: the project to create shares in
* targetPortal: the target iSCSI portal on the appliance
* nfsServer: the NFS data path IP address
* volSize: the size of the filesystem share to create
## Deployment
Assuming there is a set of values in the local-values directory, deploy using Helm 3:
```
helm install -f ../local-values/local-values.yaml zfssa-nfs-exp ./
```
Once deployed, verify each of the created entities using kubectl:
1. Display the storage class (SC)
The command `kubectl get sc` should now return something similar to this:
```text
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
zfssa-nfs-exp-example-sc zfssa-csi-driver Delete Immediate true 15s
```
2. Display the volume claim
The command `kubectl get pvc` should now return something similar to this:
```text
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
zfssa-nfs-exp-example-pvc Bound pvc-8325aaa0-bbe3-495b-abb0-0c43cc309624 10Gi RWX zfssa-nfs-exp-example-sc 108s
```
3. Display the pod mounting the volume
The command `kubectl get pod` should now return something similar to this:
```text
NAME READY STATUS RESTARTS AGE
zfssa-csi-nodeplugin-xmv96 2/2 Running 0 43m
zfssa-csi-nodeplugin-z5tmm 2/2 Running 0 43m
zfssa-csi-provisioner-0 4/4 Running 0 43m
zfssa-nfs-exp-example-pod 1/1 Running 0 3m23s
```
## Writing data
Once the pod is deployed, as a demo, start analytics in a worksheet on
the Oracle ZFS Storage Appliance that is hosting the target filesystems.
Then exec into the pod and write some data to the NFS volume:
```text
kubectl exec -it zfssa-nfs-exp-example-pod -- /bin/sh
/ # cd /mnt
/mnt # df -h
Filesystem Size Used Available Use% Mounted on
overlay 38.4G 15.0G 23.4G 39% /
tmpfs 64.0M 0 64.0M 0% /dev
tmpfs 14.6G 0 14.6G 0% /sys/fs/cgroup
shm 64.0M 0 64.0M 0% /dev/shm
tmpfs 14.6G 1.4G 13.2G 9% /tmp/resolv.conf
tmpfs 14.6G 1.4G 13.2G 9% /etc/hostname
<ZFSSA_IP_ADDR>:/export/pvc-8325aaa0-bbe3-495b-abb0-0c43cc309624
10.0G 0 10.0G 0% /mnt
...
/mnt # dd if=/dev/zero of=/mnt/data count=1024 bs=1024
1024+0 records in
1024+0 records out
/mnt # df -h
Filesystem Size Used Available Use% Mounted on
overlay 38.4G 15.0G 23.4G 39% /
tmpfs 64.0M 0 64.0M 0% /dev
tmpfs 14.6G 0 14.6G 0% /sys/fs/cgroup
shm 64.0M 0 64.0M 0% /dev/shm
tmpfs 14.6G 1.4G 13.2G 9% /tmp/resolv.conf
tmpfs 14.6G 1.4G 13.2G 9% /etc/hostname
<ZFSSA_IP_ADDR>:/export/pvc-8325aaa0-bbe3-495b-abb0-0c43cc309624
10.0G 1.0M 10.0G 0% /mnt
/mnt #
```
The analytics on the appliance should have seen the spikes as data was written.
## Expanding volume capacity
After verifying that the initially requested capacity of the NFS volume is provisioned and usable,
exercise expanding the volume capacity by editing the deployed Persistent Volume Claim.
Copy ./templates/01-pvc.yaml to /tmp/nfs-exp-pvc.yaml and modify it for volume expansion, for example:
```text
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: zfssa-nfs-exp-example-pvc
spec:
accessModes:
- ReadWriteMany
volumeMode: Filesystem
resources:
requests:
storage: "20Gi"
storageClassName: zfssa-nfs-exp-example-sc
```
Then, apply the updated PVC configuration by running the `kubectl apply -f /tmp/nfs-exp-pvc.yaml` command. Note that the command will return a warning message similar to the following:
```text
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
```
Alternatively, you can perform volume expansion on the fly using the `kubectl edit` command.
```text
kubectl edit pvc/zfssa-nfs-exp-example-pvc
...
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
storageClassName: zfssa-nfs-exp-example-sc
volumeMode: Filesystem
volumeName: pvc-27281fde-be45-436d-99a3-b45cddbc74d1
status:
accessModes:
- ReadWriteMany
capacity:
storage: 10Gi
phase: Bound
...
```
Modify the capacity from 10Gi to 20Gi in both the spec and status sections, then save and exit the edit mode.
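Another option is a single `kubectl patch` command that bumps only the requested storage (a sketch,
assuming the PVC name used in this example):
```text
kubectl patch pvc zfssa-nfs-exp-example-pvc -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```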
The command `kubectl get pv,pvc` should now return something similar to this:
```text
kubectl get pv,pvc,sc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-8325aaa0-bbe3-495b-abb0-0c43cc309624 20Gi RWX Delete Bound default/zfssa-nfs-exp-example-pvc zfssa-nfs-exp-example-sc 129s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/zfssa-nfs-exp-example-pvc Bound pvc-8325aaa0-bbe3-495b-abb0-0c43cc309624 20Gi RWX zfssa-nfs-exp-example-sc 132s
```
Exec into the pod and verify that the size of the mounted NFS volume has expanded:
```text
kubectl exec -it zfssa-nfs-exp-example-pod -- /bin/sh
/ # cd /mnt
/mnt # df -h
Filesystem Size Used Available Use% Mounted on
overlay 38.4G 15.0G 23.4G 39% /
tmpfs 64.0M 0 64.0M 0% /dev
tmpfs 14.6G 0 14.6G 0% /sys/fs/cgroup
shm 64.0M 0 64.0M 0% /dev/shm
tmpfs 14.6G 1.4G 13.2G 9% /tmp/resolv.conf
tmpfs 14.6G 1.4G 13.2G 9% /etc/hostname
<ZFSSA_IP_ADDR>:/export/pvc-8325aaa0-bbe3-495b-abb0-0c43cc309624
20.0G 1.0M 20.0G 0% /mnt
...
```

View File

@ -0,0 +1,21 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: {{ .Values.scNfsName }}
provisioner: zfssa-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
volumeType: {{ .Values.appliance.volumeType }}
targetGroup: {{ .Values.appliance.targetGroup }}
blockSize: "8192"
pool: {{ .Values.appliance.pool }}
project: {{ .Values.appliance.project }}
targetPortal: {{ .Values.appliance.targetPortal }}
nfsServer: {{ .Values.appliance.nfsServer }}
rootUser: {{ .Values.appliance.rootUser }}
rootGroup: {{ .Values.appliance.rootGroup }}
rootPermissions: "777"
shareNFS: {{ .Values.appliance.shareNFS }}
restrictChown: "false"

View File

@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Values.pvcNfsName }}
spec:
accessModes:
- ReadWriteMany
volumeMode: Filesystem
resources:
requests:
storage: {{ .Values.volSize }}
storageClassName: {{ .Values.scNfsName }}

View File

@ -0,0 +1,21 @@
apiVersion: v1
kind: Pod
metadata:
name: {{ .Values.podNfsName }}
labels:
name: ol7slim-test
spec:
restartPolicy: Always
containers:
- image: container-registry.oracle.com/os/oraclelinux:7-slim
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
name: ol7slim
volumeMounts:
- name: vol
mountPath: /mnt
volumes:
- name: vol
persistentVolumeClaim:
claimName: {{ .Values.pvcNfsName }}
readOnly: false

View File

@ -0,0 +1,19 @@
# Various names used through example
scNfsName: zfssa-nfs-exp-example-sc
pvcNfsName: zfssa-nfs-exp-example-pvc
podNfsName: zfssa-nfs-exp-example-pod
# Settings for target appliance
appliance:
volumeType: thin
targetGroup: OVERRIDE
pool: OVERRIDE
project: OVERRIDE
targetPortal: OVERRIDE
nfsServer: OVERRIDE
rootUser: root
rootGroup: other
shareNFS: "on"
# Settings for volume
volSize: OVERRIDE

View File

@ -0,0 +1,4 @@
apiVersion: v1
name: zfssa-nfs-multi-example
version: 0.0.1
description: Deploys an end to end NFS volume example for Oracle ZFS Storage Appliance CSI driver.

View File

@ -0,0 +1,46 @@
# Introduction
This is an end-to-end example of using NFS filesystems on a target
Oracle ZFS Storage Appliance. It creates several PVCs and optionally
creates a pod to consume them.
This example also illustrates the use of namespaces with PVCs and pods.
Be aware that PVCs and pods will be created in the user-defined namespace,
not in the default namespace as in other examples.
Prior to running this example, the NFS environment must be set up properly
on both the Kubernetes worker nodes and the Oracle ZFS Storage Appliance.
Refer to the [INSTALLATION](../../INSTALLATION.md) instructions for details.
## Configuration
Set up a local values file. It must contain the values that customize the example for the
target appliance, but may contain others. The minimum set of values to
customize are:
* appliance:
* targetGroup: the target group that contains data path interfaces on the target appliance
* pool: the pool to create shares in
* project: the project to create shares in
* targetPortal: the target iSCSI portal on the appliance
* nfsServer: the NFS data path IP address
* volSize: the size of the filesystem share to create
## Deployment
Assuming there is a set of values in the local-values directory, deploy using Helm 3:
```
helm install -f ../local-values/local-values.yaml zfssa-nfs-multi ./
```
## Check pod mounts
If you enabled the use of the test pod, exec into it and check the NFS volumes:
```
kubectl exec -n zfssa-nfs-multi -it zfssa-nfs-multi-example-pod -- /bin/sh
/ # cd /mnt
/mnt # ls
ssec0 ssec1 ssec2 ssg ssp-many
```

View File

@ -0,0 +1,20 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: {{ .Values.scNfsMultiName }}
provisioner: zfssa-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
volumeType: {{ .Values.appliance.volumeType }}
targetGroup: {{ .Values.appliance.targetGroup }}
blockSize: "8192"
pool: {{ .Values.appliance.pool }}
project: {{ .Values.appliance.project }}
targetPortal: {{ .Values.appliance.targetPortal }}
nfsServer: {{ .Values.appliance.nfsServer }}
rootUser: {{ .Values.appliance.rootUser }}
rootGroup: {{ .Values.appliance.rootGroup }}
rootPermissions: "777"
shareNFS: {{ .Values.appliance.shareNFS }}
restrictChown: "false"

View File

@ -0,0 +1,74 @@
---
apiVersion: v1
kind: Namespace
metadata:
name: {{ .Values.namespace }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Values.pvc0 }}
namespace: {{ .Values.namespace }}
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.volSize }}
storageClassName: {{ .Values.scNfsMultiName }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Values.pvc1 }}
namespace: {{ .Values.namespace }}
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.volSize }}
storageClassName: {{ .Values.scNfsMultiName }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Values.pvc2 }}
namespace: {{ .Values.namespace }}
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.volSize }}
storageClassName: {{ .Values.scNfsMultiName }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Values.pvc3 }}
namespace: {{ .Values.namespace }}
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.volSize }}
storageClassName: {{ .Values.scNfsMultiName }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Values.pvc4 }}
namespace: {{ .Values.namespace }}
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: {{ .Values.volSize }}
storageClassName: {{ .Values.scNfsMultiName }}

View File

@ -0,0 +1,49 @@
{{- if .Values.deployPod -}}
apiVersion: v1
kind: Pod
metadata:
name: {{ .Values.podNfsMultiName }}
namespace: {{ .Values.namespace }}
labels:
name: ol7slim-test
spec:
restartPolicy: Always
containers:
- image: container-registry.oracle.com/os/oraclelinux:7-slim
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
name: ol7slim
volumeMounts:
- name: vol0
mountPath: /mnt/{{ .Values.pvc0 }}
- name: vol1
mountPath: /mnt/{{ .Values.pvc1 }}
- name: vol2
mountPath: /mnt/{{ .Values.pvc2 }}
- name: vol3
mountPath: /mnt/{{ .Values.pvc3 }}
- name: vol4
mountPath: /mnt/{{ .Values.pvc4 }}
volumes:
- name: vol0
persistentVolumeClaim:
claimName: {{ .Values.pvc0 }}
readOnly: false
- name: vol1
persistentVolumeClaim:
claimName: {{ .Values.pvc1 }}
readOnly: false
- name: vol2
persistentVolumeClaim:
claimName: {{ .Values.pvc2 }}
readOnly: false
- name: vol3
persistentVolumeClaim:
claimName: {{ .Values.pvc3 }}
readOnly: false
- name: vol4
persistentVolumeClaim:
claimName: {{ .Values.pvc4 }}
readOnly: false
{{- end }}

View File

@ -0,0 +1,27 @@
# Various names used through example
scNfsMultiName: zfssa-nfs-multi-example-sc
pvc0: ssec0
pvc1: ssec1
pvc2: ssec2
pvc3: ssg
pvc4: ssp-many
podNfsMultiName: zfssa-nfs-multi-example-pod
namespace: zfssa-nfs-multi
# Settings for target appliance
appliance:
volumeType: thin
targetGroup: OVERRIDE
pool: OVERRIDE
project: OVERRIDE
targetPortal: OVERRIDE
nfsServer: OVERRIDE
rootUser: root
rootGroup: other
shareNFS: "on"
# Settings for volume
volSize: OVERRIDE
# Deploy a pod to consume the volumes
deployPod: true

View File

@ -0,0 +1,4 @@
apiVersion: v1
name: zfssa-existing-fs-example
version: 0.0.1
description: Deploys an end to end filesystem example for an existing Oracle ZFS Storage Appliance CSI filesystem.

91
examples/nfs-pv/README.md Normal file
View File

@ -0,0 +1,91 @@
# Introduction
This is an end-to-end example of using an existing filesystem share on a target
Oracle ZFS Storage Appliance.
Prior to running this example, the NFS environment must be set up properly
on both the Kubernetes worker nodes and the Oracle ZFS Storage Appliance.
Refer to the [INSTALLATION](../../INSTALLATION.md) instructions for details.
The flow to use an existing volume is:
* create a persistent volume (PV) object
* allocate it to a persistent volume claim (PVC)
* use the PVC from a pod
The following must be set up:
* the volume handle must be a fully formed volume id
* there must be volume attributes defined as part of the persistent volume
In this example, the volume handle is constructed via values in the helm
chart. The only new attribute necessary is the name of the volume on the
target appliance. The rest is assembled from the information already
in the local-values.yaml file (appliance name, pool, project, and so on).
The resulting VolumeHandle appears similar to the following, with the values
in ```<>``` filled in from the helm variables:
```
volumeHandle: /nfs/<appliance name>/<volume name>/<pool name>/local/<project name>/<volume name>
```
From the above, note that the volumeHandle is in the form of an ID with the components:
* 'nfs' - denoting an exported NFS share
* 'appliance name' - this is the management path of the ZFSSA target appliance
* 'volume name' - the name of the share on the appliance
* 'pool name' - the pool on the target appliance
* 'local' - denotes that the pool is owned by the head
* 'project' - the project that the share is in
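For example, with a hypothetical appliance named `myzfssa` and a filesystem share named `pvfs1` in
pool `pool1` and project `proj1`, the fully formed handle would be:
```text
volumeHandle: /nfs/myzfssa/pvfs1/pool1/local/proj1/pvfs1
```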
In the volume attributes, nfsServer must be defined.
Once created, a persistent volume claim can be made for this share and used in a pod.
## Configuration
Set up a local values file. It must contain the values that customize the example for the
target appliance, but may contain others. The minimum set of values to
customize are:
* appliance:
* targetGroup: the target group that contains data path interfaces on the target appliance
* pool: the pool to create shares in
* project: the project to create shares in
* targetPortal: the target iSCSI portal on the appliance
* nfsServer: the NFS data path IP address
* applianceName: the existing appliance name (this is the management path)
* pvExistingFilesystemName: the name of the filesystem share on the target appliance
* volMountPoint: the mount point on the target appliance of the filesystem share
* volSize: the size of the filesystem share
On the target appliance, ensure that the filesystem share is exported via NFS.
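A quick way to check the export from the appliance CLI, following the same navigation pattern as
the block example (treat this as a sketch; property names and navigation can vary by appliance
software release):
```text
appliance> shares
appliance> select <pool name>
appliance> select <project name>
appliance> select <share name>
appliance> get sharenfs
```
A value of `on` (or an explicit export list) indicates the filesystem is shared over NFS.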
## Deployment
Assuming there is a set of values in the local-values directory, deploy using Helm 3:
```
helm install -f ../local-values/local-values.yaml zfssa-nfs-existing ./
```
Once deployed, verify each of the created entities using kubectl:
```
kubectl get sc
kubectl get pvc
kubectl get pod
```
## Writing data
Once the pod is deployed, as a demo, start analytics in a worksheet on
the Oracle ZFS Storage Appliance that is hosting the target filesystems.
Then exec into the pod and write some data to the NFS volume:
```text
kubectl exec -it zfssa-fs-existing-pod -- /bin/sh
/ # cd /mnt
/mnt # ls
/mnt # echo "hello world" > demo.txt
/mnt #
```
The analytics on the appliance should have seen the spikes as data was written.

View File

@ -0,0 +1,20 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: {{ .Values.scExistingFilesystemName }}
provisioner: zfssa-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
volumeType: {{ .Values.appliance.volumeType }}
targetGroup: {{ .Values.appliance.targetGroup }}
blockSize: "8192"
pool: {{ .Values.appliance.pool }}
project: {{ .Values.appliance.project }}
targetPortal: {{ .Values.appliance.targetPortal }}
nfsServer: {{ .Values.appliance.nfsServer }}
rootUser: {{ .Values.appliance.rootUser }}
rootGroup: {{ .Values.appliance.rootGroup }}
rootPermissions: "777"
shareNFS: "on"
restrictChown: "false"

View File

@ -0,0 +1,28 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: {{ .Values.pvExistingFilesystemName }}
annotations:
pv.kubernetes.io/provisioned-by: zfssa-csi-driver
spec:
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
capacity:
storage: {{ .Values.volSize }}
csi:
driver: zfssa-csi-driver
volumeHandle: /nfs/{{ .Values.applianceName }}/{{ .Values.pvExistingFilesystemName }}/{{ .Values.appliance.pool }}/local/{{ .Values.appliance.project }}/{{ .Values.pvExistingFilesystemName }}
readOnly: false
volumeAttributes:
nfsServer: {{ .Values.appliance.nfsServer }}
share: {{ .Values.volMountPoint }}
rootGroup: {{ .Values.appliance.rootGroup }}
rootPermissions: "777"
rootUser: {{ .Values.appliance.rootUser }}
claimRef:
namespace: default
name: {{ .Values.pvcExistingFilesystemName }}

View File

@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Values.pvcExistingFilesystemName }}
spec:
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: {{ .Values.volSize }}
storageClassName: {{ .Values.scExistingFilesystemName }}

View File

@ -0,0 +1,20 @@
apiVersion: v1
kind: Pod
metadata:
name: {{ .Values.podExistingFilesystemName }}
labels:
name: ol7slim-test
spec:
restartPolicy: Always
containers:
- image: container-registry.oracle.com/os/oraclelinux:7-slim
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
name: ol7slim
volumeMounts:
- name: vol
mountPath: /mnt
volumes:
- name: vol
persistentVolumeClaim:
claimName: {{ .Values.pvcExistingFilesystemName }}

View File

@ -0,0 +1,21 @@
# Various names used through example
scExistingFilesystemName: zfssa-fs-existing-sc
pvExistingFilesystemName: OVERRIDE
pvcExistingFilesystemName: zfssa-fs-existing-pvc
podExistingFilesystemName: zfssa-fs-existing-pod
applianceName: OVERRIDE
# Settings for target appliance
appliance:
volumeType: thin
targetGroup: OVERRIDE
pool: OVERRIDE
project: OVERRIDE
targetPortal: OVERRIDE
nfsServer: OVERRIDE
rootUser: root
rootGroup: other
# Settings for volume
volMountPoint: OVERRIDE
volSize: OVERRIDE

View File

@ -0,0 +1,21 @@
apiVersion: v1
kind: Pod
metadata:
name: zfssa-nfs-vs-restore-pod
labels:
name: ol7slim-test
spec:
restartPolicy: Always
containers:
- image: container-registry.oracle.com/os/oraclelinux:7-slim
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
name: ol7slim
volumeMounts:
- name: vol
mountPath: /mnt
volumes:
- name: vol
persistentVolumeClaim:
claimName: zfssa-nfs-vs-restore-pvc
readOnly: false

View File

@ -0,0 +1,16 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: zfssa-nfs-vs-restore-pvc
spec:
storageClassName: zfssa-nfs-vs-example-sc
dataSource:
name: zfssa-nfs-vs-snapshot
kind: VolumeSnapshot
apiGroup: snapshot.storage.k8s.io
accessModes:
- ReadWriteMany
volumeMode: Filesystem
resources:
requests:
storage: 68796

View File

@ -0,0 +1,8 @@
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
name: zfssa-nfs-vs-snapshot
spec:
volumeSnapshotClassName: zfssa-nfs-vs-example-vsc
source:
persistentVolumeClaimName: zfssa-nfs-vs-example-pvc

View File

@ -0,0 +1,4 @@
apiVersion: v1
name: zfssa-csi-nfs-vs-example
version: 0.0.1
description: Deploys an end to end NFS volume snapshot example for Oracle ZFS Storage Appliance CSI driver.

193
examples/nfs-vsc/README.md Normal file
View File

@ -0,0 +1,193 @@
# Introduction
This is an end-to-end example of taking a snapshot of an NFS filesystem
volume on a target Oracle ZFS Storage Appliance and making use of it
on another pod by creating (restoring) a volume from the snapshot.
Prior to running this example, the NFS environment must be set up properly
on both the Kubernetes worker nodes and the Oracle ZFS Storage Appliance.
Refer to the [INSTALLATION](../../INSTALLATION.md) instructions for details.
## Configuration
Set up a local values file. It must contain the values that customize the example for the
target appliance, but may contain others. The minimum set of values to
customize are:
* appliance:
* pool: the pool to create shares in
* project: the project to create shares in
* nfsServer: the NFS data path IP address
* volSize: the size of the filesystem share to create
## Enabling Volume Snapshot Feature (Only for Kubernetes v1.17 - v1.19)
The Kubernetes Volume Snapshot feature became GA in Kubernetes v1.20. In order to use
this feature in Kubernetes pre-v1.20, it MUST be enabled prior to deploying ZS CSI Driver.
To enable the feature on Kubernetes pre-v1.20, follow the instructions on
[INSTALLATION](../../INSTALLATION.md).
## Deployment
This step includes deploying a pod with an NFS volume attached using a regular
storage class and a persistent volume claim. It also deploys a volume snapshot class
required to take snapshots of the persistent volume.
Assuming there is a set of values in the local-values directory, deploy using Helm 3. If you plan to exercise creating a volume from a snapshot with the given yaml files as they are, define the following names in local-values.yaml (you can modify them to your preference):
```text
scNfsName: zfssa-nfs-vs-example-sc
vscNfsName: zfssa-nfs-vs-example-vsc
pvcNfsName: zfssa-nfs-vs-example-pvc
podNfsName: zfssa-nfs-vs-example-pod
```
```text
helm install -f ../local-values/local-values.yaml zfssa-nfs-vsc ./
```
Once deployed, verify each of the created entities using kubectl:
1. Display the storage class (SC)
The command `kubectl get sc` should now return something similar to this:
```text
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
zfssa-nfs-vs-example-sc zfssa-csi-driver Delete Immediate false 86s
```
2. Display the volume claim
The command `kubectl get pvc` should now return something similar to this:
```text
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
zfssa-nfs-vs-example-pvc Bound pvc-0c1e5351-dc1b-45a4-8f54-b28741d1003e 10Gi RWX zfssa-nfs-vs-example-sc 86s
```
3. Display the volume snapshot class
The command `kubectl get volumesnapshotclass` should now return something similar to this:
```text
NAME DRIVER DELETIONPOLICY AGE
zfssa-nfs-vs-example-vsc zfssa-csi-driver Delete 86s
```
4. Display the pod mounting the volume
The command `kubectl get pod` should now return something similar to this:
```text
NAME READY STATUS RESTARTS AGE
snapshot-controller-0 1/1 Running 0 6d6h
zfssa-csi-nodeplugin-dx2s4 2/2 Running 0 24m
zfssa-csi-nodeplugin-q9h9w 2/2 Running 0 24m
zfssa-csi-provisioner-0 4/4 Running 0 24m
zfssa-nfs-vs-example-pod 1/1 Running 0 86s
```
## Writing data
Once the pod is deployed, verify the volume is mounted and can be written.
```text
kubectl exec -it zfssa-nfs-vs-example-pod -- /bin/sh
/ # cd /mnt
/mnt #
/mnt # date > timestamp.txt
/mnt # cat timestamp.txt
Tue Jan 19 23:13:10 UTC 2021
```
## Creating snapshot
Use configuration files in examples/nfs-snapshot directory with proper modifications
for the rest of the example steps.
Create a snapshot of the volume by running the command below:
```text
kubectl apply -f ../nfs-snapshot/nfs-snapshot.yaml
```
Verify the volume snapshot is created and available by running the following command:
```text
kubectl get volumesnapshot
```
Wait until the READYTOUSE of the snapshot becomes true before moving on to the next steps.
It is important to use the RESTORESIZE value of the volume snapshot just created when specifying
the storage capacity of a persistent volume claim that provisions a persistent volume from this
snapshot. For example, set the storage capacity in ../nfs-snapshot/nfs-pvc-from-snapshot.yaml to at least that value.
Optionally, verify the volume snapshot exists on ZFS Storage Appliance. The snapshot name
on ZFS Storage Appliance should have the volume snapshot UID as the suffix.
## Creating persistent volume claim
Create a persistent volume claim to provision a volume from the snapshot by running
the command below. Be aware that the persistent volume provisioned by this persistent volume claim
is not expandable. To make it expandable, create a new storage class with allowVolumeExpansion: true
and use it when specifying the persistent volume claim.
```text
kubectl apply -f ../nfs-snapshot/nfs-pvc-from-snapshot.yaml
```
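A sketch of what `nfs-pvc-from-snapshot.yaml` might contain, assuming the illustrative snapshot name from the sketch above and the 10Gi RESTORESIZE and storage class used in this example:
```text
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zfssa-nfs-vs-restore-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: zfssa-nfs-vs-example-sc
  resources:
    requests:
      storage: 10Gi
  dataSource:
    name: zfssa-nfs-vs-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
```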
Verify the persistent volume claim is created and a volume is provisioned by running the following command:
```text
kubectl get pv,pvc
```
The command `kubectl get pv,pvc` should return something similar to this:
```text
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-0c1e5351-dc1b-45a4-8f54-b28741d1003e 10Gi RWX Delete Bound default/zfssa-nfs-vs-example-pvc zfssa-nfs-vs-example-sc 34m
persistentvolume/pvc-59d8d447-302d-4438-a751-7271fbbe8238 10Gi RWO Delete Bound default/zfssa-nfs-vs-restore-pvc zfssa-nfs-vs-example-sc 112s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/zfssa-nfs-vs-example-pvc Bound pvc-0c1e5351-dc1b-45a4-8f54-b28741d1003e 10Gi RWX zfssa-nfs-vs-example-sc 34m
persistentvolumeclaim/zfssa-nfs-vs-restore-pvc Bound pvc-59d8d447-302d-4438-a751-7271fbbe8238 10Gi RWO zfssa-nfs-vs-example-sc 116s
```
Optionally, verify the new volume exists on the ZFS Storage Appliance. Notice that the new
volume is a clone of the snapshot taken from the original volume.
## Creating pod using restored volume
Create a pod with the persistent volume claim created from the above step by running the command below:
```text
kubectl apply -f ../nfs-snapshot/nfs-pod-restored-volume.yaml
```
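A sketch of what `nfs-pod-restored-volume.yaml` might contain; it mirrors the example pod template in this chart but mounts the restored claim, and the pod name matches the `kubectl get pod` output shown below:
```text
apiVersion: v1
kind: Pod
metadata:
  name: zfssa-nfs-vs-restore-pod
spec:
  restartPolicy: Always
  containers:
    - name: ol7slim
      image: container-registry.oracle.com/os/oraclelinux:7-slim
      command: ["/bin/sh", "-c"]
      args: ["tail -f /dev/null"]
      volumeMounts:
        - name: vol
          mountPath: /mnt
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: zfssa-nfs-vs-restore-pvc
```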
The command `kubectl get pod` should now return something similar to this:
```text
NAME READY STATUS RESTARTS AGE
snapshot-controller-0 1/1 Running 0 6d7h
zfssa-csi-nodeplugin-dx2s4 2/2 Running 0 68m
zfssa-csi-nodeplugin-q9h9w 2/2 Running 0 68m
zfssa-csi-provisioner-0 4/4 Running 0 68m
zfssa-nfs-vs-example-pod 1/1 Running 0 46m
zfssa-nfs-vs-restore-pod 1/1 Running 0 37s
```
Verify the new volume has the contents of the original volume at the point in time
when the snapshot was taken.
```text
kubectl exec -it zfssa-nfs-vs-restore-pod -- /bin/sh
/ # cd /mnt
/mnt #
/mnt # cat timestamp.txt
Tue Jan 19 23:13:10 UTC 2021
```
## Deleting pod, persistent volume claim and volume snapshot
To delete the pod, persistent volume claim and volume snapshot created in the above steps,
run the commands below. Wait until the resources being deleted disappear from the list
that the `kubectl get ...` command displays before running the next command.
```text
kubectl delete -f ../nfs-snapshot/nfs-pod-restored-volume.yaml
kubectl delete -f ../nfs-snapshot/nfs-pvc-from-snapshot.yaml
kubectl delete -f ../nfs-snapshot/nfs-snapshot.yaml
```
@ -0,0 +1,20 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: {{ .Values.scNfsName }}
provisioner: zfssa-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
volumeType: {{ .Values.appliance.volumeType }}
targetGroup: {{ .Values.appliance.targetGroup }}
blockSize: "8192"
pool: {{ .Values.appliance.pool }}
project: {{ .Values.appliance.project }}
targetPortal: {{ .Values.appliance.targetPortal }}
nfsServer: {{ .Values.appliance.nfsServer }}
rootUser: {{ .Values.appliance.rootUser }}
rootGroup: {{ .Values.appliance.rootGroup }}
rootPermissions: "777"
shareNFS: {{ .Values.appliance.shareNFS }}
restrictChown: "false"
@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Values.pvcNfsName }}
spec:
accessModes:
- ReadWriteMany
volumeMode: Filesystem
resources:
requests:
storage: {{ .Values.volSize }}
storageClassName: {{ .Values.scNfsName }}
@ -0,0 +1,6 @@
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
name: {{ .Values.vscNfsName }}
driver: zfssa-csi-driver
deletionPolicy: Delete
@ -0,0 +1,21 @@
apiVersion: v1
kind: Pod
metadata:
name: {{ .Values.podNfsName }}
labels:
name: ol7slim-test
spec:
restartPolicy: Always
containers:
- image: container-registry.oracle.com/os/oraclelinux:7-slim
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
name: ol7slim
volumeMounts:
- name: vol
mountPath: /mnt
volumes:
- name: vol
persistentVolumeClaim:
claimName: {{ .Values.pvcNfsName }}
readOnly: false
@ -0,0 +1,20 @@
# Various names used throughout the example
scNfsName: zfssa-nfs-vs-example-sc
vscNfsName: zfssa-nfs-vs-example-vsc
pvcNfsName: zfssa-nfs-vs-example-pvc
podNfsName: zfssa-nfs-vs-example-pod
# Settings for target appliance
appliance:
volumeType: thin
targetGroup: OVERRIDE
pool: OVERRIDE
project: OVERRIDE
targetPortal: OVERRIDE
nfsServer: OVERRIDE
rootUser: root
rootGroup: other
shareNFS: "on"
# Settings for volume
volSize: OVERRIDE
4
examples/nfs/Chart.yaml Normal file
@ -0,0 +1,4 @@
apiVersion: v1
name: zfssa-csi-nfs-example
version: 0.0.1
description: Deploys an end to end NFS volume example for Oracle ZFS Storage Appliance CSI driver.
83
examples/nfs/README.md Normal file
@ -0,0 +1,83 @@
# Introduction
This is an end-to-end example of using NFS filesystems on a target
Oracle ZFS Storage Appliance.
Prior to running this example, the NFS environment must be set up properly
on both the Kubernetes worker nodes and the Oracle ZFS Storage Appliance.
Refer to the [INSTALLATION](../../INSTALLATION.md) instructions for details.
## Configuration
Set up a local values file. It must contain the values that customize for the
target appliance, but can contain others. The minimum set of values to
customize are:
* appliance:
* targetGroup: the target group that contains data path interfaces on the target appliance
* pool: the pool to create shares in
* project: the project to create shares in
* targetPortal: the target iSCSI portal on the appliance
* nfsServer: the NFS data path IP address
* volSize: the size of the filesystem share to create
Check out the parameters section of the storage class configuration file (storage-class.yaml)
to see all supported properties. Refer to the NFS Protocol page of the Oracle ZFS Storage Appliance
Administration Guide for how to define the values properly.
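For illustration, a minimal `local-values/local-values.yaml` could look like the sketch below; every value shown is a placeholder that must be replaced with settings valid for your appliance and network:
```text
appliance:
  targetGroup: csi-tg
  pool: p0
  project: default
  targetPortal: 192.0.2.10:3260
  nfsServer: 192.0.2.10
volSize: 10Gi
```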
## Deployment
Assuming there is a set of values in the local-values directory, deploy using Helm 3:
```
helm install -f local-values/local-values.yaml zfssa-nfs ./nfs
```
Once deployed, verify each of the created entities using kubectl:
1. Display the storage class (SC)
The command `kubectl get sc` should now return something similar to this:
```text
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
zfssa-csi-nfs-sc zfssa-csi-driver Delete Immediate false 2m9s
```
2. Display the volume
The command `kubectl get pvc` should now return something similar to this:
```text
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
zfssa-csi-nfs-pvc Bound pvc-808d9bd7-cbb0-47a7-b400-b144248f1818 10Gi RWX zfssa-csi-nfs-sc 8s
```
3. Display the pod mounting the volume
The command `kubectl get all` should now return something similar to this:
```text
NAME READY STATUS RESTARTS AGE
pod/zfssa-csi-nodeplugin-lpts9 2/2 Running 0 25m
pod/zfssa-csi-nodeplugin-vdb44 2/2 Running 0 25m
pod/zfssa-csi-provisioner-0 2/2 Running 0 23m
pod/zfssa-nfs-example-pod 1/1 Running 0 12s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/zfssa-csi-nodeplugin 2 2 2 2 2 <none> 25m
NAME READY AGE
statefulset.apps/zfssa-csi-provisioner 1/1 23m
```
## Writing data
Once the pod is deployed, optionally start analytics in a worksheet on
the Oracle ZFS Storage Appliance that is hosting the target filesystem so you can observe the I/O.
Then exec into the pod and write some data to the mounted volume:
```text
kubectl exec -it zfssa-nfs-example-pod -- /bin/sh
/ # cd /mnt
/mnt # ls
/mnt # echo "hello world" > demo.txt
/mnt #
```
The analytics on the appliance should have seen the spikes as data was written.
@ -0,0 +1,20 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: {{ .Values.scNfsName }}
provisioner: zfssa-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
volumeType: {{ .Values.appliance.volumeType }}
targetGroup: {{ .Values.appliance.targetGroup }}
blockSize: "8192"
pool: {{ .Values.appliance.pool }}
project: {{ .Values.appliance.project }}
targetPortal: {{ .Values.appliance.targetPortal }}
nfsServer: {{ .Values.appliance.nfsServer }}
rootUser: {{ .Values.appliance.rootUser }}
rootGroup: {{ .Values.appliance.rootGroup }}
rootPermissions: "777"
shareNFS: {{ .Values.appliance.shareNFS }}
restrictChown: "false"
@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Values.pvcNfsName }}
spec:
accessModes:
- ReadWriteMany
volumeMode: Filesystem
resources:
requests:
storage: {{ .Values.volSize }}
storageClassName: {{ .Values.scNfsName }}
@ -0,0 +1,21 @@
apiVersion: v1
kind: Pod
metadata:
name: {{ .Values.podNfsName }}
labels:
name: ol7slim-test
spec:
restartPolicy: Always
containers:
- image: container-registry.oracle.com/os/oraclelinux:7-slim
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
name: ol7slim
volumeMounts:
- name: vol
mountPath: /mnt
volumes:
- name: vol
persistentVolumeClaim:
claimName: {{ .Values.pvcNfsName }}
readOnly: false
19
examples/nfs/values.yaml Normal file
@ -0,0 +1,19 @@
# Various names used throughout the example
scNfsName: zfssa-nfs-example-sc
pvcNfsName: zfssa-nfs-example-pvc
podNfsName: zfssa-nfs-example-pod
# Settings for target appliance
appliance:
volumeType: thin
targetGroup: OVERRIDE
pool: OVERRIDE
project: OVERRIDE
targetPortal: OVERRIDE
nfsServer: OVERRIDE
rootUser: root
rootGroup: other
shareNFS: "on"
# Settings for volume
volSize: OVERRIDE
53
go.mod Normal file
@ -0,0 +1,53 @@
module github.com/oracle/zfssa-csi-driver
go 1.13
require (
github.com/container-storage-interface/spec v1.2.0
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef // indirect
github.com/golang/protobuf v1.4.0
github.com/kubernetes-csi/csi-lib-iscsi v0.0.0-20190415173011-c545557492f4
github.com/kubernetes-csi/csi-lib-utils v0.6.1
github.com/onsi/gomega v1.9.0 // indirect
github.com/prometheus/client_golang v1.2.1 // indirect
golang.org/x/net v0.0.0-20191101175033-0deb6923b6d9
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae // indirect
google.golang.org/grpc v1.23.1
gopkg.in/yaml.v2 v2.2.8
k8s.io/apimachinery v0.17.11
k8s.io/client-go v0.18.2
k8s.io/klog v1.0.0
k8s.io/kubernetes v1.17.5
k8s.io/utils v0.0.0-20191114184206-e782cd3c129f
)
replace (
k8s.io/api => k8s.io/api v0.17.5
k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.17.5
k8s.io/apimachinery => k8s.io/apimachinery v0.17.6-beta.0
k8s.io/apiserver => k8s.io/apiserver v0.17.5
k8s.io/cli-runtime => k8s.io/cli-runtime v0.17.5
k8s.io/client-go => k8s.io/client-go v0.17.5
k8s.io/cloud-provider => k8s.io/cloud-provider v0.17.5
k8s.io/cluster-bootstrap => k8s.io/cluster-bootstrap v0.17.5
k8s.io/code-generator => k8s.io/code-generator v0.17.6-beta.0
k8s.io/component-base => k8s.io/component-base v0.17.5
k8s.io/cri-api => k8s.io/cri-api v0.17.13-rc.0
k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.17.5
k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.17.5
k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.17.5
k8s.io/kube-proxy => k8s.io/kube-proxy v0.17.5
k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.17.5
k8s.io/kubelet => k8s.io/kubelet v0.17.5
k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.17.5
k8s.io/metrics => k8s.io/metrics v0.17.5
k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.17.5
)
replace k8s.io/kubectl => k8s.io/kubectl v0.17.5
replace k8s.io/node-api => k8s.io/node-api v0.17.5
replace k8s.io/sample-cli-plugin => k8s.io/sample-cli-plugin v0.17.5
replace k8s.io/sample-controller => k8s.io/sample-controller v0.17.5
79
pkg/service/cluster.go Normal file
@ -0,0 +1,79 @@
/*
* Copyright (c) 2021, Oracle and/or its affiliates.
* Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl/
*/
package service
import (
"errors"
"fmt"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
)
var (
clusterConfig *rest.Config
clientset *kubernetes.Clientset
)
// Initializes the cluster interface.
//
func InitClusterInterface() error {
var err error
clusterConfig, err = rest.InClusterConfig()
if err != nil {
fmt.Print("not in cluster mode")
} else {
clientset, err = kubernetes.NewForConfig(clusterConfig)
if err != nil {
return errors.New("could not get Clientset for Kubernetes work")
}
}
return nil
}
// Returns the node name based on the passed in node ID.
//
func GetNodeName(nodeID string) (string, error) {
nodeInfo, err := clientset.CoreV1().Nodes().Get(nodeID, metav1.GetOptions{
TypeMeta: metav1.TypeMeta{
Kind: "",
APIVersion: "",
},
ResourceVersion: "1",
})
if err != nil {
return "", err
} else {
return nodeInfo.Name, nil
}
}
// Returns the list of nodes in the form of a slice containing their name.
//
func GetNodeList() ([]string, error) {
nodeList, err := clientset.CoreV1().Nodes().List(metav1.ListOptions{
TypeMeta: metav1.TypeMeta{
Kind: "",
APIVersion: "",
},
ResourceVersion: "1",
})
if err != nil {
return nil, err
}
var nodeNameList []string
for _, node := range nodeList.Items {
nodeNameList = append(nodeNameList, node.Name)
}
return nodeNameList, nil
}
688
pkg/service/controller.go Normal file
@ -0,0 +1,688 @@
/*
* Copyright (c) 2021, Oracle and/or its affiliates.
* Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl/
*/
package service
import (
"github.com/oracle/zfssa-csi-driver/pkg/utils"
"github.com/oracle/zfssa-csi-driver/pkg/zfssarest"
"github.com/container-storage-interface/spec/lib/go/csi"
"github.com/kubernetes-csi/csi-lib-utils/protosanitizer"
"golang.org/x/net/context"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
"strconv"
)
var (
// the current controller service accessModes supported
controllerCaps = []csi.ControllerServiceCapability_RPC_Type{
csi.ControllerServiceCapability_RPC_CREATE_DELETE_VOLUME,
csi.ControllerServiceCapability_RPC_PUBLISH_UNPUBLISH_VOLUME,
csi.ControllerServiceCapability_RPC_LIST_VOLUMES,
csi.ControllerServiceCapability_RPC_GET_CAPACITY,
csi.ControllerServiceCapability_RPC_EXPAND_VOLUME,
csi.ControllerServiceCapability_RPC_CREATE_DELETE_SNAPSHOT,
csi.ControllerServiceCapability_RPC_LIST_SNAPSHOTS,
}
)
func newZFSSAControllerServer(zd *ZFSSADriver) *csi.ControllerServer {
var cs csi.ControllerServer = zd
return &cs
}
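// Creates a volume (filesystem share or LUN) on the target appliance or, when a snapshot is
// provided as the volume content source, clones that snapshot into a new volume.
//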
func (zd *ZFSSADriver) CreateVolume(ctx context.Context, req *csi.CreateVolumeRequest) (
*csi.CreateVolumeResponse, error) {
utils.GetLogCTRL(ctx, 5).Println("CreateVolume", "request", protosanitizer.StripSecrets(req))
// Token retrieved
user, password, err := zd.getUserLogin(ctx, req.Secrets)
if err != nil {
return nil, status.Error(codes.Unauthenticated, "Invalid credentials")
}
token := zfssarest.LookUpToken(user, password)
// Validate the parameters
if err := validateCreateVolumeReq(ctx, token, req); err != nil {
return nil, err
}
parameters := req.GetParameters()
pool := parameters["pool"]
project := parameters["project"]
zvol, err := zd.newVolume(ctx, pool, project,
req.GetName(), isBlock(req.GetVolumeCapabilities()))
if err != nil {
return nil, err
}
defer zd.releaseVolume(ctx, zvol)
if volumeContentSource := req.GetVolumeContentSource(); volumeContentSource != nil {
if snapshot := volumeContentSource.GetSnapshot(); snapshot != nil {
zsnap, err := zd.lookupSnapshot(ctx, token, snapshot.GetSnapshotId())
if err != nil {
return nil, err
}
defer zd.releaseSnapshot(ctx, zsnap)
return zvol.cloneSnapshot(ctx, token, req, zsnap)
}
return nil, status.Error(codes.InvalidArgument, "Only snapshots are supported as content source")
} else {
return zvol.create(ctx, token, req)
}
}
// Retrieve the volume size from the request (if not available, use a default)
func getVolumeSize(capRange *csi.CapacityRange) int64 {
volSizeBytes := DefaultVolumeSizeBytes
if capRange != nil {
if capRange.RequiredBytes > 0 {
volSizeBytes = capRange.RequiredBytes
} else if capRange.LimitBytes > 0 && capRange.LimitBytes < volSizeBytes {
volSizeBytes = capRange.LimitBytes
}
}
return volSizeBytes
}
// Check whether the access mode of the volume to create is "block" or "filesystem"
//
// true block access mode
// false filesystem access mode
//
func isBlock(capabilities []*csi.VolumeCapability) bool {
for _, capacity := range capabilities {
if capacity.GetBlock() == nil {
return false
}
}
return true
}
// Validates as much of the "create volume request" as possible
//
func validateCreateVolumeReq(ctx context.Context, token *zfssarest.Token, req *csi.CreateVolumeRequest) error {
log5 := utils.GetLogCTRL(ctx, 5)
log5.Println("validateCreateVolumeReq started")
// check the request object is populated
if req == nil {
return status.Errorf(codes.InvalidArgument, "request must not be nil")
}
reqCaps := req.GetVolumeCapabilities()
if len(reqCaps) == 0 {
return status.Errorf(codes.InvalidArgument, "no accessModes provided")
}
// check that the name is populated
if req.GetName() == "" {
return status.Error(codes.InvalidArgument, "name must be supplied")
}
// check as much of the ZFSSA pieces as we can up front, this will cache target information
// in a volatile cache, but in the long run, with many storage classes, this may save us
// quite a few trips to the appliance. Note that different storage classes may have
// different parameters
parameters := req.GetParameters()
poolName, ok := parameters["pool"]
if !ok || len(poolName) < 1 || !utils.IsResourceNameValid(poolName) {
utils.GetLogCTRL(ctx, 3).Println("pool name is invalid", poolName)
return status.Errorf(codes.InvalidArgument, "pool name is invalid (%s)", poolName)
}
projectName, ok := parameters["project"]
if !ok || len(projectName) < 1 || !utils.IsResourceNameValid(projectName) {
utils.GetLogCTRL(ctx, 3).Println("project name is invalid", projectName)
return status.Errorf(codes.InvalidArgument, "project name is invalid (%s)", projectName)
}
pool, err := zfssarest.GetPool(ctx, token, poolName)
if err != nil {
return err
}
if pool.Status != "online" && pool.Status != "degraded" {
log5.Println("Pool not ready", "State", pool.Status)
return status.Errorf(codes.InvalidArgument, "pool %s in an error state (%s)", poolName, pool.Status)
}
_, err = zfssarest.GetProject(ctx, token, poolName, projectName)
if err != nil {
return err
}
// If this is a block request, the storage class must have the target group set and it must be on the target
if isBlock(reqCaps) {
err = validateCreateBlockVolumeReq(ctx, token, req)
} else {
err = validateCreateFilesystemVolumeReq(ctx, req)
}
return err
}
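// Deletes the volume identified in the request. A volume that still has snapshots cannot be
// deleted (FailedPrecondition); a volume that no longer exists is treated as already deleted.
//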
func (zd *ZFSSADriver) DeleteVolume(ctx context.Context, req *csi.DeleteVolumeRequest) (
*csi.DeleteVolumeResponse, error) {
utils.GetLogCTRL(ctx, 5).Println("DeleteVolume",
"request", protosanitizer.StripSecrets(req), "context", ctx)
log2 := utils.GetLogCTRL(ctx, 2)
// The account to be used for this operation is determined.
user, password, err := zd.getUserLogin(ctx, req.Secrets)
if err != nil {
return nil, status.Error(codes.Unauthenticated, "Invalid credentials")
}
token := zfssarest.LookUpToken(user, password)
volumeID := req.GetVolumeId()
if len(volumeID) == 0 {
log2.Println("VolumeID not provided, will return")
return nil, status.Error(codes.InvalidArgument, "Volume ID not provided")
}
zvol, err := zd.lookupVolume(ctx, token, volumeID)
if err != nil {
if status.Convert(err).Code() == codes.NotFound {
log2.Println("Volume already removed", "volume_id", req.GetVolumeId())
return &csi.DeleteVolumeResponse{}, nil
} else {
log2.Println("Cannot delete volume", "volume_id", req.GetVolumeId(), "error", err.Error())
return nil, err
}
}
defer zd.releaseVolume(ctx, zvol)
entries, err := zvol.getSnapshotsList(ctx, token)
if err != nil {
return nil, err
}
if len(entries) > 0 {
return nil, status.Errorf(codes.FailedPrecondition, "Volume (%s) has snapshots", volumeID)
}
return zvol.delete(ctx, token)
}
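// Makes the volume available on the node identified in the request. For a LUN this replaces the
// masked initiator group with the group named after the node; for a filesystem there is nothing to do.
//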
func (zd *ZFSSADriver) ControllerPublishVolume(ctx context.Context, req *csi.ControllerPublishVolumeRequest) (
*csi.ControllerPublishVolumeResponse, error) {
utils.GetLogCTRL(ctx, 5).Println("ControllerPublishVolume",
"request", protosanitizer.StripSecrets(req), "volume_context",
req.GetVolumeContext(), "volume_capability", req.GetVolumeCapability())
log2 := utils.GetLogCTRL(ctx, 2)
volumeID := req.GetVolumeId()
if len(volumeID) == 0 {
log2.Println("Volume ID not provided, will return")
return nil, status.Error(codes.InvalidArgument, "Volume ID not provided")
}
nodeID := req.GetNodeId()
if len(nodeID) == 0 {
log2.Println("Node ID not provided, will return")
return nil, status.Error(codes.InvalidArgument, "Node ID not provided")
}
capability := req.GetVolumeCapability()
if capability == nil {
log2.Println("Capability not provided, will return")
return nil, status.Error(codes.InvalidArgument, "Capability not provided")
}
nodeName, err := GetNodeName(nodeID)
if err != nil {
return nil, status.Errorf(codes.NotFound, "Node (%s) was not found: %v", req.NodeId, err)
}
// The account to be used for this operation is determined.
user, password, err := zd.getUserLogin(ctx, req.Secrets)
if err != nil {
return nil, status.Error(codes.Unauthenticated, "Invalid credentials")
}
token := zfssarest.LookUpToken(user, password)
zvol, err := zd.lookupVolume(ctx, token, volumeID)
if err != nil {
log2.Println("Volume ID unknown", "volume_id", volumeID, "error", err.Error())
return nil, err
}
defer zd.releaseVolume(ctx, zvol)
return zvol.controllerPublishVolume(ctx, token, req, nodeName)
}
func (zd *ZFSSADriver) ControllerUnpublishVolume(ctx context.Context, req *csi.ControllerUnpublishVolumeRequest) (
*csi.ControllerUnpublishVolumeResponse, error) {
utils.GetLogCTRL(ctx, 5).Println("ControllerUnpublishVolume",
"request", protosanitizer.StripSecrets(req))
log2 := utils.GetLogCTRL(ctx, 2)
volumeID := req.GetVolumeId()
if len(volumeID) == 0 {
log2.Println("Volume ID not provided, will return")
return nil, status.Error(codes.InvalidArgument, "Volume ID not provided")
}
// The account to be used for this operation is determined.
user, password, err := zd.getUserLogin(ctx, req.Secrets)
if err != nil {
return nil, status.Error(codes.Unauthenticated, "Invalid credentials")
}
token := zfssarest.LookUpToken(user, password)
zvol, err := zd.lookupVolume(ctx, token, volumeID)
if err != nil {
if status.Convert(err).Code() == codes.NotFound {
log2.Println("Volume already removed", "volume_id", req.GetVolumeId())
return &csi.ControllerUnpublishVolumeResponse{}, nil
} else {
log2.Println("Cannot unpublish volume", "volume_id", req.GetVolumeId(), "error", err.Error())
return nil, err
}
}
defer zd.releaseVolume(ctx, zvol)
return zvol.controllerUnpublishVolume(ctx, token, req)
}
func (zd *ZFSSADriver) ValidateVolumeCapabilities(ctx context.Context, req *csi.ValidateVolumeCapabilitiesRequest) (
*csi.ValidateVolumeCapabilitiesResponse, error) {
log2 := utils.GetLogCTRL(ctx, 2)
log2.Println("validateVolumeCapabilities", "request", protosanitizer.StripSecrets(req))
volumeID := req.GetVolumeId()
if volumeID == "" {
return nil, status.Error(codes.InvalidArgument, "no volume ID provided")
}
reqCaps := req.GetVolumeCapabilities()
if len(reqCaps) == 0 {
return nil, status.Errorf(codes.InvalidArgument, "no accessModes provided")
}
user, password, err := zd.getUserLogin(ctx, req.Secrets)
if err != nil {
return nil, status.Error(codes.Unauthenticated, "Invalid credentials")
}
token := zfssarest.LookUpToken(user, password)
zvol, err := zd.lookupVolume(ctx, token, volumeID)
if err != nil {
return nil, status.Errorf(codes.NotFound, "Volume (%s) was not found: %v", volumeID, err)
}
defer zd.releaseVolume(ctx, zvol)
return zvol.validateVolumeCapabilities(ctx, token, req)
}
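// Returns the list of volumes known to the driver. Pagination uses an integer index carried in
// starting_token; a next_token of "0" indicates the end of the list.
//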
func (zd *ZFSSADriver) ListVolumes(ctx context.Context, req *csi.ListVolumesRequest) (
*csi.ListVolumesResponse, error) {
utils.GetLogCTRL(ctx, 5).Println("ListVolumes", "request", protosanitizer.StripSecrets(req))
var startIndex int
if len(req.GetStartingToken()) > 0 {
var err error
startIndex, err = strconv.Atoi(req.GetStartingToken())
if err != nil {
return nil, status.Errorf(codes.Aborted, "invalid starting_token value")
}
} else {
startIndex = 0
}
var maxIndex int
maxEntries := int(req.GetMaxEntries())
if maxEntries < 0 {
return nil, status.Errorf(codes.InvalidArgument, "invalid max_entries value")
} else if maxEntries > 0 {
maxIndex = startIndex + maxEntries
} else {
maxIndex = (1 << 31) - 1
}
entries, err := zd.getVolumesList(ctx)
if err != nil {
return nil, err
}
// The starting index and the maxIndex have to be adjusted based on
// the results of the query.
var nextToken string
if startIndex >= len(entries) {
// An empty list is returned.
nextToken = "0"
entries = []*csi.ListVolumesResponse_Entry{}
} else if maxIndex >= len(entries) {
// All entries from startIndex are returned.
nextToken = "0"
entries = entries[startIndex:]
} else {
nextToken = strconv.Itoa(maxIndex)
entries = entries[startIndex:maxIndex]
}
rsp := &csi.ListVolumesResponse{
NextToken: nextToken,
Entries: entries,
}
return rsp, nil
}
func (zd *ZFSSADriver) GetCapacity(ctx context.Context, req *csi.GetCapacityRequest) (
*csi.GetCapacityResponse, error) {
utils.GetLogCTRL(ctx,5).Println("GetCapacity", "request", protosanitizer.StripSecrets(req))
reqCaps := req.GetVolumeCapabilities()
if len(reqCaps) > 0 {
// Providing accessModes is optional, but if provided they must be supported.
var capsValid bool
if isBlock(reqCaps) {
capsValid = areBlockVolumeCapsValid(reqCaps)
} else {
capsValid = areFilesystemVolumeCapsValid(reqCaps)
}
if !capsValid {
return nil, status.Error(codes.InvalidArgument, "invalid volume accessModes")
}
}
var availableCapacity int64
user, password, err := zd.getUserLogin(ctx, nil)
if err != nil {
return nil, status.Error(codes.Unauthenticated, "Invalid credentials")
}
token := zfssarest.LookUpToken(user, password)
parameters := req.GetParameters()
projectName, ok := parameters["project"]
if !ok || len(projectName) == 0 {
// No project name provided the capacity returned will be the capacity
// of the pool if a pool is provided.
poolName, ok := parameters["pool"]
if !ok || len(poolName) == 0 {
// No pool name provided. In this case the sum of the space
// available in each pool is returned.
pools, err := zfssarest.GetPools(ctx, token)
if err != nil {
return nil, err
}
for _, pool := range *pools {
availableCapacity += pool.Usage.Available
}
} else {
// A pool name was provided. The space available in the pool is returned.
pool, err := zfssarest.GetPool(ctx, token, poolName)
if err != nil {
return nil, err
}
availableCapacity = pool.Usage.Available
}
} else {
// A project name was provided. In this case a pool name is required. If
// no pool name was provided, the request is failed.
poolName, ok := parameters["pool"]
if !ok || len(poolName) == 0 {
return nil, status.Error(codes.InvalidArgument, "a pool name is required")
}
project, err := zfssarest.GetProject(ctx, token, poolName, projectName)
if err != nil {
return nil, err
}
availableCapacity = project.SpaceAvailable
}
return &csi.GetCapacityResponse{AvailableCapacity: availableCapacity}, nil
}
func (zd *ZFSSADriver) ControllerGetCapabilities(ctx context.Context, req *csi.ControllerGetCapabilitiesRequest) (
*csi.ControllerGetCapabilitiesResponse, error) {
utils.GetLogCTRL(ctx,5).Println("ControllerGetCapabilities",
"request", protosanitizer.StripSecrets(req))
var caps []*csi.ControllerServiceCapability
for _, capacity := range controllerCaps {
c := &csi.ControllerServiceCapability{
Type: &csi.ControllerServiceCapability_Rpc{
Rpc: &csi.ControllerServiceCapability_RPC{
Type: capacity,
},
},
}
caps = append(caps, c)
}
return &csi.ControllerGetCapabilitiesResponse{Capabilities: caps}, nil
}
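// Creates a snapshot of the source volume identified in the request.
//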
func (zd *ZFSSADriver) CreateSnapshot(ctx context.Context, req *csi.CreateSnapshotRequest) (
*csi.CreateSnapshotResponse, error) {
utils.GetLogCTRL(ctx, 5).Println("CreateSnapshot", "request", protosanitizer.StripSecrets(req))
sourceId := req.GetSourceVolumeId()
snapName := req.GetName()
if len(snapName) == 0 || len(sourceId) == 0 {
return nil, status.Error(codes.InvalidArgument, "Source or snapshot ID missing")
}
user, password, err := zd.getUserLogin(ctx, req.Secrets)
if err != nil {
return nil, status.Error(codes.Unauthenticated, "Invalid credentials")
}
token := zfssarest.LookUpToken(user, password)
zsnap, err := zd.newSnapshot(ctx, token, snapName, sourceId)
if err != nil {
return nil, err
}
defer zd.releaseSnapshot(ctx, zsnap)
return zsnap.create(ctx, token)
}
func (zd *ZFSSADriver) DeleteSnapshot(ctx context.Context, req *csi.DeleteSnapshotRequest) (
*csi.DeleteSnapshotResponse, error) {
utils.GetLogCTRL(ctx, 5).Println("DeleteSnapshot", "request", protosanitizer.StripSecrets(req))
if len(req.GetSnapshotId()) == 0 {
return nil, status.Errorf(codes.InvalidArgument, "no snapshot ID provided")
}
log2 := utils.GetLogCTRL(ctx, 2)
// Retrieve Token
user, password, err := zd.getUserLogin(ctx, req.Secrets)
if err != nil {
return nil, status.Error(codes.Unauthenticated, "Invalid credentials")
}
token := zfssarest.LookUpToken(user, password)
// Get exclusive access to the snapshot.
zsnap, err := zd.lookupSnapshot(ctx, token, req.SnapshotId)
if err != nil {
if status.Convert(err).Code() == codes.NotFound {
log2.Println("Snapshot already removed", "snapshot_id", req.GetSnapshotId())
return &csi.DeleteSnapshotResponse{}, nil
} else {
log2.Println("Cannot delete snapshot", "snapshot_id", req.GetSnapshotId(), "error", err.Error())
return nil, err
}
}
defer zd.releaseSnapshot(ctx, zsnap)
return zsnap.delete(ctx, token)
}
func (zd *ZFSSADriver) ListSnapshots(ctx context.Context, req *csi.ListSnapshotsRequest) (
*csi.ListSnapshotsResponse, error) {
utils.GetLogCTRL(ctx, 5).Println("ListSnapshots", "request", protosanitizer.StripSecrets(req))
var startIndex int
var err error
if len(req.GetStartingToken()) > 0 {
startIndex, err = strconv.Atoi(req.GetStartingToken())
if err != nil {
return nil, status.Errorf(codes.Aborted, "invalid starting_token value")
}
} else {
startIndex = 0
}
var maxIndex int
maxEntries := int(req.GetMaxEntries())
if maxEntries < 0 {
return nil, status.Errorf(codes.InvalidArgument, "invalid max_entries value")
} else if maxEntries > 0 {
maxIndex = startIndex + maxEntries
} else {
maxIndex = (1 << 31) - 1
}
// Retrieve Token
user, password, err := zd.getUserLogin(ctx, req.Secrets)
if err != nil {
return nil, status.Error(codes.Unauthenticated, "Invalid credentials")
}
token := zfssarest.LookUpToken(user, password)
var entries []*csi.ListSnapshotsResponse_Entry
snapshotId := req.GetSnapshotId()
if len(snapshotId) > 0 {
// Only this snapshot is requested.
zsnap, err := zd.lookupSnapshot(ctx, token, snapshotId)
if err == nil {
entry := new(csi.ListSnapshotsResponse_Entry)
entry.Snapshot = &csi.Snapshot{
SnapshotId: zsnap.id.String(),
SizeBytes: zsnap.getSize(),
SourceVolumeId: zsnap.getStringSourceId(),
CreationTime: zsnap.getCreationTime(),
ReadyToUse: true,
}
zd.releaseSnapshot(ctx, zsnap)
utils.GetLogCTRL(ctx, 5).Println("ListSnapshots with snapshot ID", "Snapshot", zsnap.getHref())
entries = append(entries, entry)
}
} else if len(req.GetSourceVolumeId()) > 0 {
// Only snapshots of this volume are requested.
zvol, err := zd.lookupVolume(ctx, token, req.GetSourceVolumeId())
if err == nil {
entries, err = zvol.getSnapshotsList(ctx, token)
if err != nil {
entries = []*csi.ListSnapshotsResponse_Entry{}
utils.GetLogCTRL(ctx, 5).Println("ListSnapshots with source ID", "Count", len(entries))
}
zd.releaseVolume(ctx, zvol)
}
} else {
entries, err = zd.getSnapshotList(ctx)
if err != nil {
entries = []*csi.ListSnapshotsResponse_Entry{}
}
utils.GetLogCTRL(ctx, 5).Println("ListSnapshots All", "Count", len(entries))
}
// The starting index and the maxIndex have to be adjusted based on
// the results of the query.
var nextToken string
if startIndex >= len(entries) {
nextToken = "0"
entries = []*csi.ListSnapshotsResponse_Entry{}
} else if maxIndex >= len(entries) {
nextToken = "0"
entries = entries[startIndex:]
} else {
nextToken = strconv.Itoa(maxIndex)
entries = entries[startIndex:maxIndex]
}
rsp := &csi.ListSnapshotsResponse{
NextToken: nextToken,
Entries: entries,
}
return rsp, nil
}
func (zd *ZFSSADriver) ControllerExpandVolume(ctx context.Context, req *csi.ControllerExpandVolumeRequest) (
*csi.ControllerExpandVolumeResponse, error) {
utils.GetLogCTRL(ctx, 5).Println("ControllerExpandVolume", "request", protosanitizer.StripSecrets(req))
log2 := utils.GetLogCTRL(ctx, 2)
volumeID := req.GetVolumeId()
if len(volumeID) == 0 {
log2.Println("Volume ID not provided, will return")
return nil, status.Error(codes.InvalidArgument, "Volume ID not provided")
}
user, password, err := zd.getUserLogin(ctx, req.Secrets)
if err != nil {
return nil, status.Error(codes.Unauthenticated, "Invalid credentials")
}
token := zfssarest.LookUpToken(user, password)
zvol, err := zd.lookupVolume(ctx, token, volumeID)
if err != nil {
log2.Println("ControllerExpandVolume request failed, bad VolumeId",
"volume_id", volumeID, "error", err.Error())
return nil, err
}
defer zd.releaseVolume(ctx, zvol)
return zvol.controllerExpandVolume(ctx, token, req)
}
// Check the secrets map (typically in a request context) for a change in the username
// and password or retrieve the username/password from the credentials file, the username
// and password should be scrubbed quickly after use and not remain in memory
func (zd *ZFSSADriver) getUserLogin(ctx context.Context, secrets map[string]string) (string, string, error) {
if secrets != nil {
user, ok := secrets["username"]
if ok {
password := secrets["password"]
return user, password, nil
}
}
username, err := zd.GetUsernameFromCred()
if err != nil {
utils.GetLogCTRL(ctx, 2).Println("ZFSSA username error:", err)
username = "INVALID_USERNAME"
return "", "", err
}
password, err := zd.GetPasswordFromCred()
if err != nil {
utils.GetLogCTRL(ctx, 2).Println("ZFSSA password error:", err)
return "", "", err
}
return username, password, nil
}
@ -0,0 +1,381 @@
/*
* Copyright (c) 2021, Oracle and/or its affiliates.
* Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl/
*/
package service
import (
"github.com/oracle/zfssa-csi-driver/pkg/utils"
"github.com/oracle/zfssa-csi-driver/pkg/zfssarest"
"context"
"fmt"
"github.com/container-storage-interface/spec/lib/go/csi"
context2 "golang.org/x/net/context"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
"net/http"
"sync/atomic"
)
// ZFSSA block volume
type zLUN struct {
bolt *utils.Bolt
refcount int32
state volumeState
href string
id *utils.VolumeId
capacity int64
accessModes []csi.VolumeCapability_AccessMode
source *csi.VolumeContentSource
initiatorgroup []string
targetgroup string
}
var (
// access modes supported by block volumes.
blockVolumeCaps = []csi.VolumeCapability_AccessMode {
{ Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER },
}
)
// Creates a new LUN structure. If no information is provided (luninfo is nil), this
// method cannot fail. If information is provided, it will fail if it cannot create
// a volume ID.
func newLUN(vid *utils.VolumeId) *zLUN {
lun := new(zLUN)
lun.id = vid
lun.bolt = utils.NewBolt()
lun.state = stateCreating
return lun
}
func (lun *zLUN) create(ctx context.Context, token *zfssarest.Token,
req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
utils.GetLogCTRL(ctx, 5).Println("lun.create")
capacityRange := req.GetCapacityRange()
capabilities := req.GetVolumeCapabilities()
_, luninfo, httpStatus, err := zfssarest.CreateLUN(ctx, token,
req.GetName(), getVolumeSize(capacityRange), &req.Parameters)
if err != nil {
if httpStatus != http.StatusConflict {
lun.state = stateDeleted
return nil, err
}
utils.GetLogCTRL(ctx, 5).Println("LUN already exits")
// The creation failed because the appliance already has a LUN
// with the same name. We get the information from the appliance,
// update the LUN context and check its compatibility with the request.
if lun.state == stateCreated {
luninfo, _, err = zfssarest.GetLun(ctx, token,
req.Parameters["pool"], req.Parameters["project"], req.GetName())
if err != nil {
return nil, err
}
lun.setInfo(luninfo)
}
// The LUN has already been created. The compatibility of the
// capacity range and accessModes is checked.
if !compareCapacityRange(capacityRange, lun.capacity) {
return nil,
status.Errorf(codes.AlreadyExists,
"Volume (%s) is already on target (%s),"+
" capacity range incompatible (%v), requested (%v/%v)",
lun.id.Name, lun.id.Zfssa, lun.capacity,
capacityRange.RequiredBytes, capacityRange.LimitBytes)
}
if !compareCapabilities(capabilities, lun.accessModes, true) {
return nil,
status.Errorf(codes.AlreadyExists,
"Volume (%s) is already on target (%s), accessModes are incompatible",
lun.id.Name, lun.id.Zfssa)
}
} else {
lun.setInfo(luninfo)
}
utils.GetLogCTRL(ctx, 5).Printf(
"LUN created: name=%s, target=%s, assigned_number=%d",
luninfo.CanonicalName, luninfo.TargetGroup, luninfo.AssignedNumber[0])
return &csi.CreateVolumeResponse{
Volume: &csi.Volume{
VolumeId: lun.id.String(),
CapacityBytes: lun.capacity,
VolumeContext: req.GetParameters()}}, nil
}
func (lun *zLUN) cloneSnapshot(ctx context.Context, token *zfssarest.Token,
req *csi.CreateVolumeRequest, zsnap *zSnapshot) (*csi.CreateVolumeResponse, error) {
utils.GetLogCTRL(ctx, 5).Println("lun.cloneSnapshot")
parameters := make(map[string]interface{})
parameters["project"] = req.Parameters["project"]
parameters["share"] = req.GetName()
parameters["initiatorgroup"] = []string{zfssarest.MaskAll}
luninfo, _, err := zfssarest.CloneLunSnapshot(ctx, token, zsnap.getHref(), parameters)
if err != nil {
return nil, err
}
lun.setInfo(luninfo)
return &csi.CreateVolumeResponse{
Volume: &csi.Volume{
VolumeId: lun.id.String(),
CapacityBytes: lun.capacity,
VolumeContext: req.GetParameters(),
ContentSource: req.GetVolumeContentSource(),
}}, nil
}
func (lun *zLUN) delete(ctx context.Context, token *zfssarest.Token) (*csi.DeleteVolumeResponse, error) {
utils.GetLogCTRL(ctx, 5).Println("lun.delete")
if lun.state == stateCreated {
_, httpStatus, err := zfssarest.DeleteLun(ctx, token, lun.id.Pool, lun.id.Project, lun.id.Name)
if err != nil && httpStatus != http.StatusNotFound {
return nil, err
}
lun.state = stateDeleted
}
return &csi.DeleteVolumeResponse{}, nil
}
func (lun *zLUN) controllerPublishVolume(ctx context.Context, token *zfssarest.Token,
req *csi.ControllerPublishVolumeRequest, nodeName string) (*csi.ControllerPublishVolumeResponse, error) {
utils.GetLogCTRL(ctx, 5).Println("lun.controllerPublishVolume")
pool := lun.id.Pool
project := lun.id.Project
name := lun.id.Name
list, err := zfssarest.GetInitiatorGroupList(ctx, token, pool, project, name)
if err != nil {
// Log something
return nil, err
}
// When the driver creates a LUN or clones a Lun from a snapshot of another Lun,
// it masks the initiator group of the Lun using the zfssarest.MaskAll value.
// When the driver unpublishes the Lun, it also masks the initiator group.
// This block is to test if the Lun to publish was created or unpublished
// by the driver. Publishing a Lun with unmasked initiator group fails
// to avoid mistakenly publishing a Lun that may be in use by other entity.
utils.GetLogCTRL(ctx, 5).Printf("Volume to publish: %s:%s", lun.id, list[0])
if len(list) != 1 || list[0] != zfssarest.MaskAll {
var msg string
if len(list) > 0 {
msg = fmt.Sprintf("Volume (%s:%s) may already be published", lun.id, list[0])
} else {
msg = fmt.Sprintf("Volume (%s) did not return an initiator group list", lun.id)
}
return nil, status.Error(codes.FailedPrecondition, msg)
}
// Reset the masked initiator group with one named by the current node name.
// There must be initiator groups on ZFSSA defined by the node names.
_, err = zfssarest.SetInitiatorGroupList(ctx, token, pool, project, name, nodeName)
if err != nil {
// Log something
return nil, err
}
return &csi.ControllerPublishVolumeResponse{}, nil
}
func (lun *zLUN) controllerUnpublishVolume(ctx context.Context, token *zfssarest.Token,
req *csi.ControllerUnpublishVolumeRequest) (*csi.ControllerUnpublishVolumeResponse, error) {
utils.GetLogCTRL(ctx, 5).Println("lun.controllerUnpublishVolume")
pool := lun.id.Pool
project := lun.id.Project
name := lun.id.Name
code, err := zfssarest.SetInitiatorGroupList(ctx, token, pool, project, name, zfssarest.MaskAll)
if err != nil {
utils.GetLogCTRL(ctx, 5).Println("Could not unpublish volume {}, code {}", lun, code)
if code != 404 {
return nil, err
}
utils.GetLogCTRL(ctx, 5).Println("Unpublish failed because LUN was deleted, return success")
}
return &csi.ControllerUnpublishVolumeResponse{}, nil
}
func (lun *zLUN) validateVolumeCapabilities(ctx context.Context, token *zfssarest.Token,
req *csi.ValidateVolumeCapabilitiesRequest) (*csi.ValidateVolumeCapabilitiesResponse, error) {
utils.GetLogCTRL(ctx, 5).Println("lun.validateVolumeCapabilities")
if areBlockVolumeCapsValid(req.VolumeCapabilities) {
return &csi.ValidateVolumeCapabilitiesResponse{
Confirmed: &csi.ValidateVolumeCapabilitiesResponse_Confirmed{
VolumeCapabilities: req.VolumeCapabilities,
},
Message: "",
}, nil
} else {
return &csi.ValidateVolumeCapabilitiesResponse{
Message: "One or more volume accessModes failed",
}, nil
}
}
func (lun *zLUN) controllerExpandVolume(ctx context.Context, token *zfssarest.Token,
req *csi.ControllerExpandVolumeRequest) (*csi.ControllerExpandVolumeResponse, error) {
return nil, status.Error(codes.OutOfRange, "Not allowed for block devices")
}
func (lun *zLUN) nodeStageVolume(ctx context.Context, token *zfssarest.Token,
req *csi.NodeStageVolumeRequest) (*csi.NodeStageVolumeResponse, error) {
return nil, nil
}
func (lun *zLUN) nodeUnstageVolume(ctx context.Context, token *zfssarest.Token,
req *csi.NodeUnstageVolumeRequest) (*csi.NodeUnstageVolumeResponse, error) {
return nil, nil
}
func (lun *zLUN) nodePublishVolume(ctx context.Context, token *zfssarest.Token,
req *csi.NodePublishVolumeRequest) (*csi.NodePublishVolumeResponse, error) {
return nil, nil
}
func (lun *zLUN) nodeUnpublishVolume(ctx context.Context, token *zfssarest.Token,
req *csi.NodeUnpublishVolumeRequest) (*csi.NodeUnpublishVolumeResponse, error) {
return nil, nil
}
func (lun *zLUN) nodeGetVolumeStats(ctx context.Context, token *zfssarest.Token,
req *csi.NodeGetVolumeStatsRequest) (*csi.NodeGetVolumeStatsResponse, error) {
return nil, nil
}
func (lun *zLUN) getDetails(ctx context2.Context, token *zfssarest.Token) (int, error) {
lunInfo, httpStatus, err := zfssarest.GetLun(ctx, token, lun.id.Pool, lun.id.Project, lun.id.Name)
if err != nil {
return httpStatus, err
}
lun.setInfo(lunInfo)
return httpStatus, nil
}
func (lun *zLUN) getSnapshotsList(ctx context.Context, token *zfssarest.Token) (
[]*csi.ListSnapshotsResponse_Entry, error) {
snapList, err := zfssarest.GetSnapshots(ctx, token, lun.href)
if err != nil {
return nil, err
}
return zfssaSnapshotList2csiSnapshotList(ctx, token.Name, snapList), nil
}
func (lun *zLUN) getState() volumeState { return lun.state }
func (lun *zLUN) getName() string { return lun.id.Name }
func (lun *zLUN) getHref() string { return lun.href }
func (lun *zLUN) getVolumeID() *utils.VolumeId { return lun.id }
func (lun *zLUN) getCapacity() int64 { return lun.capacity }
func (lun *zLUN) isBlock() bool { return true }
func (lun *zLUN) getSnapshots(ctx context.Context, token *zfssarest.Token) ([]zfssarest.Snapshot, error) {
return zfssarest.GetSnapshots(ctx, token, lun.href)
}
func (lun *zLUN) setInfo(volInfo interface{}) {
switch luninfo := volInfo.(type) {
case *zfssarest.Lun:
lun.capacity = int64(luninfo.VolumeSize)
lun.href = luninfo.Href
lun.initiatorgroup = luninfo.InitiatorGroup
lun.targetgroup = luninfo.TargetGroup
lun.state = stateCreated
default:
panic("lun.setInfo called with wrong type")
}
}
// Waits until the file system is available and, when it is, returns with its current state.
func (lun *zLUN) hold(ctx context.Context) volumeState {
utils.GetLogCTRL(ctx, 5).Printf("holding lun (%s)", lun.id.Name)
atomic.AddInt32(&lun.refcount, 1)
return lun.state
}
// Releases the file system and returns its current reference count.
func (lun *zLUN) release(ctx context.Context) (int32, volumeState) {
utils.GetLogCTRL(ctx, 5).Printf("releasing lun (%s)", lun.id.Name)
return atomic.AddInt32(&lun.refcount, -1), lun.state
}
func (lun *zLUN) lock(ctx context.Context) volumeState {
utils.GetLogCTRL(ctx, 5).Printf("locking %s", lun.id.String())
lun.bolt.Lock(ctx)
utils.GetLogCTRL(ctx, 5).Printf("%s is locked", lun.id.String())
return lun.state
}
func (lun *zLUN) unlock(ctx context.Context) (int32, volumeState){
lun.bolt.Unlock(ctx)
utils.GetLogCTRL(ctx, 5).Printf("%s is unlocked", lun.id.String())
return lun.refcount, lun.state
}
// Validates the block specific parameters of the create request.
func validateCreateBlockVolumeReq(ctx context.Context, token *zfssarest.Token, req *csi.CreateVolumeRequest) error {
reqCaps := req.GetVolumeCapabilities()
if !areBlockVolumeCapsValid(reqCaps) {
return status.Error(codes.InvalidArgument, "invalid volume accessModes")
}
parameters := req.GetParameters()
tg, ok := parameters["targetGroup"]
if !ok || len(tg) < 1 {
return status.Error(codes.InvalidArgument, "a valid ZFSSA target group is required ")
}
_, err := zfssarest.GetTargetGroup(ctx, token, "iscsi", tg)
if err != nil {
return err
}
return nil
}
// Checks whether the capability list is all supported.
func areBlockVolumeCapsValid(volCaps []*csi.VolumeCapability) bool {
hasSupport := func(cap *csi.VolumeCapability) bool {
for _, c := range blockVolumeCaps {
if c.GetMode() == cap.AccessMode.GetMode() {
return true
}
}
return false
}
foundAll := true
for _, c := range volCaps {
if !hasSupport(c) {
foundAll = false
break
}
}
return foundAll
}
@ -0,0 +1,355 @@
/*
* Copyright (c) 2021, Oracle and/or its affiliates.
* Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl/
*/
package service
import (
"github.com/oracle/zfssa-csi-driver/pkg/utils"
"github.com/oracle/zfssa-csi-driver/pkg/zfssarest"
"context"
"github.com/container-storage-interface/spec/lib/go/csi"
context2 "golang.org/x/net/context"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
"net/http"
"sync/atomic"
)
var (
filesystemAccessModes = []csi.VolumeCapability_AccessMode{
{ Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER },
{ Mode: csi.VolumeCapability_AccessMode_MULTI_NODE_MULTI_WRITER },
{ Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_READER_ONLY },
{ Mode: csi.VolumeCapability_AccessMode_MULTI_NODE_READER_ONLY },
}
)
// ZFSSA mount volume
type zFilesystem struct {
bolt *utils.Bolt
refcount int32
state volumeState
href string
id *utils.VolumeId
capacity int64
accessModes []csi.VolumeCapability_AccessMode
source *csi.VolumeContentSource
mountpoint string
}
// Creates a new filesystem structure. If no information is provided (fsinfo is nil), this
// method cannot fail. If information is provided, it will fail if it cannot create a volume ID
func newFilesystem(vid *utils.VolumeId) *zFilesystem {
fs := new(zFilesystem)
fs.id = vid
fs.bolt = utils.NewBolt()
fs.state = stateCreating
return fs
}
func (fs *zFilesystem) create(ctx context.Context, token *zfssarest.Token,
req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
utils.GetLogCTRL(ctx, 5).Println("fs.create")
capacityRange := req.GetCapacityRange()
capabilities := req.GetVolumeCapabilities()
fsinfo, httpStatus, err := zfssarest.CreateFilesystem(ctx, token,
req.GetName(), getVolumeSize(capacityRange), &req.Parameters)
if err != nil {
if httpStatus != http.StatusConflict {
fs.state = stateDeleted
return nil, err
}
utils.GetLogCTRL(ctx, 5).Println("Filesystem already exits")
// The creation failed because the appliance already has a file system
// with the same name. We get the information from the appliance, update
// the file system context and check its compatibility with the request.
if fs.state == stateCreated {
fsinfo, _, err = zfssarest.GetFilesystem(ctx, token,
req.Parameters["pool"], req.Parameters["project"], req.GetName())
if err != nil {
return nil, err
}
fs.setInfo(fsinfo)
// pass mountpoint as a volume context value to use for nfs mount to the pod
req.Parameters["mountpoint"] = fs.mountpoint
}
// The volume has already been created. The compatibility of the
// capacity range and accessModes is checked.
if !compareCapacityRange(capacityRange, fs.capacity) {
return nil,
status.Errorf(codes.AlreadyExists,
"Volume (%s) is already on target (%s),"+
" capacity range incompatible (%v), requested (%v/%v)",
fs.id.Name, fs.id.Zfssa, fs.capacity,
capacityRange.RequiredBytes, capacityRange.LimitBytes)
}
if !compareCapabilities(capabilities, fs.accessModes, false) {
return nil,
status.Errorf(codes.AlreadyExists,
"Volume (%s) is already on target (%s), accessModes are incompatible",
fs.id.Name, fs.id.Zfssa)
}
} else {
fs.setInfo(fsinfo)
// pass mountpoint as a volume context value to use for nfs mount to the pod
req.Parameters["mountpoint"] = fs.mountpoint
}
return &csi.CreateVolumeResponse{
Volume: &csi.Volume{
VolumeId: fs.id.String(),
CapacityBytes: fs.capacity,
VolumeContext: req.Parameters}}, nil
}
func (fs *zFilesystem) cloneSnapshot(ctx context.Context, token *zfssarest.Token,
req *csi.CreateVolumeRequest, zsnap *zSnapshot) (*csi.CreateVolumeResponse, error) {
utils.GetLogCTRL(ctx, 5).Println("fs.cloneSnapshot")
parameters := make(map[string]interface{})
parameters["project"] = req.Parameters["project"]
parameters["share"] = req.GetName()
fsinfo, _, err := zfssarest.CloneFileSystemSnapshot(ctx, token, zsnap.getHref(), parameters)
if err != nil {
return nil, err
}
fs.setInfo(fsinfo)
// pass mountpoint as a volume context value to use for nfs mount to the pod
req.Parameters["mountpoint"] = fs.mountpoint
return &csi.CreateVolumeResponse{
Volume: &csi.Volume{
VolumeId: fs.id.String(),
CapacityBytes: fs.capacity,
VolumeContext: req.GetParameters(),
ContentSource: req.GetVolumeContentSource(),
}}, nil
}
func (fs *zFilesystem) delete(ctx context.Context, token *zfssarest.Token) (*csi.DeleteVolumeResponse, error) {
utils.GetLogCTRL(ctx, 5).Println("fs.delete")
// Check first if the filesystem has snapshots.
snaplist, err := zfssarest.GetSnapshots(ctx, token, fs.href)
if err != nil {
return nil, err
}
if len(snaplist) > 0 {
return nil, status.Errorf(codes.FailedPrecondition, "filesysytem (%s) has snapshots", fs.id.String())
}
_, httpStatus, err := zfssarest.DeleteFilesystem(ctx, token, fs.href)
if err != nil && httpStatus != http.StatusNotFound {
return nil, err
}
fs.state = stateDeleted
return &csi.DeleteVolumeResponse{}, nil
}
// Publishes a file system. In this case there's nothing to do.
//
func (fs *zFilesystem) controllerPublishVolume(ctx context.Context, token *zfssarest.Token,
req *csi.ControllerPublishVolumeRequest, nodeName string) (*csi.ControllerPublishVolumeResponse, error) {
// Note: the volume context of the volume provisioned from an existing share does not have the mountpoint.
// Use the share (corresponding to volumeAttributes.share of PV configuration) to define the mountpoint.
return &csi.ControllerPublishVolumeResponse{}, nil
}
// Unpublishes a file system. In this case there's nothing to do.
//
func (fs *zFilesystem) controllerUnpublishVolume(ctx context.Context, token *zfssarest.Token,
req *csi.ControllerUnpublishVolumeRequest) (*csi.ControllerUnpublishVolumeResponse, error) {
utils.GetLogCTRL(ctx, 5).Println("fs.controllerUnpublishVolume")
return &csi.ControllerUnpublishVolumeResponse{}, nil
}
func (fs *zFilesystem) validateVolumeCapabilities(ctx context.Context, token *zfssarest.Token,
req *csi.ValidateVolumeCapabilitiesRequest) (*csi.ValidateVolumeCapabilitiesResponse, error) {
if areFilesystemVolumeCapsValid(req.VolumeCapabilities) {
return &csi.ValidateVolumeCapabilitiesResponse{
Confirmed: &csi.ValidateVolumeCapabilitiesResponse_Confirmed{
VolumeCapabilities: req.VolumeCapabilities,
},
Message: "",
}, nil
} else {
return &csi.ValidateVolumeCapabilitiesResponse{
Message: "One or more volume accessModes failed",
}, nil
}
}
func (fs *zFilesystem) controllerExpandVolume(ctx context.Context, token *zfssarest.Token,
req *csi.ControllerExpandVolumeRequest) (*csi.ControllerExpandVolumeResponse, error) {
utils.GetLogCTRL(ctx, 5).Println("fs.controllerExpandVolume")
reqCapacity := req.GetCapacityRange().RequiredBytes
if fs.capacity >= reqCapacity {
return &csi.ControllerExpandVolumeResponse{
CapacityBytes: fs.capacity,
NodeExpansionRequired: false,
}, nil
}
parameters := make(map[string]interface{})
parameters["quota"] = reqCapacity
parameters["reservation"] = reqCapacity
fsinfo, _, err := zfssarest.ModifyFilesystem(ctx, token, fs.href, &parameters)
if err != nil {
return nil, err
}
fs.capacity = fsinfo.Quota
return &csi.ControllerExpandVolumeResponse{
CapacityBytes: fsinfo.Quota,
NodeExpansionRequired: false,
}, nil
}
func (fs *zFilesystem) nodeStageVolume(ctx context.Context, token *zfssarest.Token,
req *csi.NodeStageVolumeRequest) (*csi.NodeStageVolumeResponse, error) {
return nil, nil
}
func (fs *zFilesystem) nodeUnstageVolume(ctx context.Context, token *zfssarest.Token,
req *csi.NodeUnstageVolumeRequest) (*csi.NodeUnstageVolumeResponse, error) {
return nil, nil
}
func (fs *zFilesystem) nodePublishVolume(ctx context.Context, token *zfssarest.Token,
req *csi.NodePublishVolumeRequest) (*csi.NodePublishVolumeResponse, error) {
return nil, nil
}
func (fs *zFilesystem) nodeUnpublishVolume(ctx context.Context, token *zfssarest.Token,
req *csi.NodeUnpublishVolumeRequest) (*csi.NodeUnpublishVolumeResponse, error) {
return nil, nil
}
func (fs *zFilesystem) nodeGetVolumeStats(ctx context.Context, token *zfssarest.Token,
req *csi.NodeGetVolumeStatsRequest) (*csi.NodeGetVolumeStatsResponse, error) {
return nil, nil
}
func (fs *zFilesystem) getDetails(ctx context2.Context, token *zfssarest.Token) (int, error) {
fsinfo, httpStatus, err := zfssarest.GetFilesystem(ctx, token, fs.id.Pool, fs.id.Project, fs.id.Name)
if err != nil {
return httpStatus, err
}
fs.setInfo(fsinfo)
return httpStatus, nil
}
func (fs *zFilesystem) getSnapshotsList(ctx context.Context, token *zfssarest.Token) (
[]*csi.ListSnapshotsResponse_Entry, error) {
snapList, err := zfssarest.GetSnapshots(ctx, token, fs.href)
if err != nil {
return nil, err
}
return zfssaSnapshotList2csiSnapshotList(ctx, token.Name, snapList), nil
}
func (fs *zFilesystem) getState() volumeState { return fs.state }
func (fs *zFilesystem) getName() string { return fs.id.Name }
func (fs *zFilesystem) getHref() string { return fs.href }
func (fs *zFilesystem) getVolumeID() *utils.VolumeId { return fs.id }
func (fs *zFilesystem) getCapacity() int64 { return fs.capacity }
func (fs *zFilesystem) isBlock() bool { return false }
func (fs *zFilesystem) setInfo(volInfo interface{}) {
switch fsinfo := volInfo.(type) {
case *zfssarest.Filesystem:
fs.capacity = fsinfo.Quota
fs.mountpoint = fsinfo.MountPoint
fs.href = fsinfo.Href
if fsinfo.ReadOnly {
fs.accessModes = filesystemAccessModes[2:len(filesystemAccessModes)]
} else {
fs.accessModes = filesystemAccessModes[0:len(filesystemAccessModes)]
}
fs.state = stateCreated
default:
panic("fs.setInfo called with wrong type")
}
}
// Waits until the file system is available and, when it is, returns with its current state.
func (fs *zFilesystem) hold(ctx context.Context) volumeState {
utils.GetLogCTRL(ctx, 5).Printf("%s held", fs.id.String())
atomic.AddInt32(&fs.refcount, 1)
return fs.state
}
func (fs *zFilesystem) lock(ctx context.Context) volumeState {
utils.GetLogCTRL(ctx, 5).Printf("locking %s", fs.id.String())
fs.bolt.Lock(ctx)
utils.GetLogCTRL(ctx, 5).Printf("%s is locked", fs.id.String())
return fs.state
}
func (fs *zFilesystem) unlock(ctx context.Context) (int32, volumeState){
fs.bolt.Unlock(ctx)
utils.GetLogCTRL(ctx, 5).Printf("%s is unlocked", fs.id.String())
return fs.refcount, fs.state
}
// Releases the file system and returns its current reference count.
func (fs *zFilesystem) release(ctx context.Context) (int32, volumeState) {
utils.GetLogCTRL(ctx, 5).Printf("%s released", fs.id.String())
return atomic.AddInt32(&fs.refcount, -1), fs.state
}
// Validates the filesystem specific parameters of the create request.
func validateCreateFilesystemVolumeReq(ctx context.Context, req *csi.CreateVolumeRequest) error {
reqCaps := req.GetVolumeCapabilities()
if !areFilesystemVolumeCapsValid(reqCaps) {
return status.Error(codes.InvalidArgument, "invalid volume accessModes")
}
return nil
}
// Checks whether the capability list is all supported.
func areFilesystemVolumeCapsValid(volCaps []*csi.VolumeCapability) bool {
hasSupport := func(cap *csi.VolumeCapability) bool {
for _, c := range filesystemAccessModes {
if c.GetMode() == cap.AccessMode.GetMode() {
return true
}
}
return false
}
foundAll := true
for _, c := range volCaps {
if !hasSupport(c) {
foundAll = false
break
}
}
return foundAll
}
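// Example (sketch): a request asking for a multi-node read-only mount volume
// passes the capability check above, assuming filesystemAccessModes (defined
// elsewhere in this file) includes the multi-node access modes. Names here are
// illustrative only.
func exampleFilesystemCapsCheck() bool {
	volCap := &csi.VolumeCapability{
		AccessMode: &csi.VolumeCapability_AccessMode{
			Mode: csi.VolumeCapability_AccessMode_MULTI_NODE_READER_ONLY,
		},
		AccessType: &csi.VolumeCapability_Mount{
			Mount: &csi.VolumeCapability_MountVolume{},
		},
	}
	return areFilesystemVolumeCapsValid([]*csi.VolumeCapability{volCap})
}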

83
pkg/service/identity.go Normal file

@ -0,0 +1,83 @@
/*
* Copyright (c) 2021, Oracle and/or its affiliates.
* Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl/
*/
package service
import (
"github.com/oracle/zfssa-csi-driver/pkg/utils"
"github.com/oracle/zfssa-csi-driver/pkg/zfssarest"
"github.com/container-storage-interface/spec/lib/go/csi"
"github.com/golang/protobuf/ptypes/wrappers"
"golang.org/x/net/context"
"google.golang.org/grpc/codes"
grpcStatus "google.golang.org/grpc/status"
)
func newZFSSAIdentityServer(zd *ZFSSADriver) *csi.IdentityServer {
var id csi.IdentityServer = zd
return &id
}
func (zd *ZFSSADriver) GetPluginInfo(ctx context.Context, req *csi.GetPluginInfoRequest) (
*csi.GetPluginInfoResponse, error) {
utils.GetLogIDTY(ctx, 5).Println("GetPluginInfo")
return &csi.GetPluginInfoResponse{
Name: zd.name,
VendorVersion: zd.version,
}, nil
}
func (zd *ZFSSADriver) GetPluginCapabilities(ctx context.Context, req *csi.GetPluginCapabilitiesRequest) (
*csi.GetPluginCapabilitiesResponse, error) {
utils.GetLogIDTY(ctx, 5).Println("GetPluginCapabilities")
return &csi.GetPluginCapabilitiesResponse{
Capabilities: []*csi.PluginCapability{
{
Type: &csi.PluginCapability_Service_{
Service: &csi.PluginCapability_Service{
Type: csi.PluginCapability_Service_CONTROLLER_SERVICE,
},
},
},
{
Type: &csi.PluginCapability_VolumeExpansion_{
VolumeExpansion: &csi.PluginCapability_VolumeExpansion{
Type: csi.PluginCapability_VolumeExpansion_ONLINE,
},
},
},
},
}, nil
}
// Probe is the readiness probe for the driver. It verifies that the driver can
// reach and authenticate to the target appliance; the typical response to a
// failure is a driver restart.
func (zd *ZFSSADriver) Probe(ctx context.Context, req *csi.ProbeRequest) (
*csi.ProbeResponse, error) {
utils.GetLogIDTY(ctx, 5).Println("Probe")
// Check that the appliance is responsive; if it is not, the driver is not ready.
user, password, err := zd.getUserLogin(ctx, nil)
if err != nil {
return nil, grpcStatus.Error(codes.Unauthenticated, "Invalid credentials")
}
token := zfssarest.LookUpToken(user, password)
_, err = zfssarest.GetServices(ctx, token)
if err != nil {
return &csi.ProbeResponse{
Ready: &wrappers.BoolValue{Value: false},
}, grpcStatus.Error(codes.FailedPrecondition, "Unable to contact the appliance")
}
return &csi.ProbeResponse{
Ready: &wrappers.BoolValue{Value: true},
}, nil
}

468
pkg/service/iscsi.go Normal file

@ -0,0 +1,468 @@
/*
* Copyright (c) 2021, Oracle and/or its affiliates.
* Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl/
*/
package service
import (
"github.com/oracle/zfssa-csi-driver/pkg/utils"
"bytes"
"context"
"encoding/json"
"fmt"
"github.com/container-storage-interface/spec/lib/go/csi"
iscsi_lib "github.com/kubernetes-csi/csi-lib-iscsi/iscsi"
"k8s.io/utils/mount"
"k8s.io/kubernetes/pkg/volume/util"
"os"
"os/exec"
"path"
"strings"
)
// A subset of the iscsiadm return codes.
type IscsiAdmReturnValues int32
const(
ISCSI_SUCCESS IscsiAdmReturnValues = 0
ISCSI_ERR_SESS_NOT_FOUND = 2
ISCSI_ERR_TRANS_TIMEOUT = 8
ISCSI_ERR_ISCSID_NOTCONN = 20
ISCSI_ERR_NO_OBJS_FOUND = 21
)
func GetISCSIInfo(ctx context.Context, vid *utils.VolumeId, req *csi.NodePublishVolumeRequest, targetIqn string,
assignedLunNumber int32) (*iscsiDisk, error) {
volName := vid.Name
tp := req.GetVolumeContext()["targetPortal"]
iqn := targetIqn
if tp == "" || iqn == "" {
return nil, fmt.Errorf("iSCSI target information is missing (portal=%v), (iqn=%v)", tp, iqn)
}
portalList := req.GetVolumeContext()["portals"]
if portalList == "" {
portalList = "[]"
}
utils.GetLogCTRL(ctx, 5).Println("getISCSIInfo", "portal_list", portalList)
secretParams := req.GetVolumeContext()["secret"]
utils.GetLogCTRL(ctx, 5).Println("getISCSIInfo", "secret_params", secretParams)
secret := parseSecret(secretParams)
sessionSecret, err := parseSessionSecret(secret)
if err != nil {
return nil, err
}
discoverySecret, err := parseDiscoverySecret(secret)
if err != nil {
return nil, err
}
utils.GetLogCTRL(ctx, 5).Println("portalMounter", "tp", tp)
portal := portalMounter(tp)
var bkportal []string
bkportal = append(bkportal, portal)
portals := []string{}
if err := json.Unmarshal([]byte(portalList), &portals); err != nil {
return nil, err
}
for _, portal := range portals {
bkportal = append(bkportal, portalMounter(string(portal)))
}
utils.GetLogCTRL(ctx, 5).Println("Built bkportal", "bkportal", bkportal)
iface := req.GetVolumeContext()["iscsiInterface"]
initiatorName := req.GetVolumeContext()["initiatorName"]
chapDiscovery := false
if req.GetVolumeContext()["discoveryCHAPAuth"] == "true" {
chapDiscovery = true
}
chapSession := false
if req.GetVolumeContext()["sessionCHAPAuth"] == "true" {
chapSession = true
}
utils.GetLogCTRL(ctx, 5).Println("Final values", "iface", iface, "initiatorName", initiatorName)
i := iscsiDisk{
VolName: volName,
Portals: bkportal,
Iqn: iqn,
lun: assignedLunNumber,
Iface: iface,
chapDiscovery: chapDiscovery,
chapSession: chapSession,
secret: secret,
sessionSecret: sessionSecret,
discoverySecret: discoverySecret,
InitiatorName: initiatorName,
}
return &i, nil
}
func GetNodeISCSIInfo(vid *utils.VolumeId, req *csi.NodePublishVolumeRequest, targetIqn string, assignedLunNumber int32) (
*iscsiDisk, error) {
volName := vid.Name
tp := req.GetVolumeContext()["targetPortal"]
iqn := targetIqn
if tp == "" || iqn == "" {
return nil, fmt.Errorf("iSCSI target information is missing")
}
portalList := req.GetVolumeContext()["portals"]
if portalList == "" {
portalList = "[]"
}
secretParams := req.GetVolumeContext()["secret"]
secret := parseSecret(secretParams)
sessionSecret, err := parseSessionSecret(secret)
if err != nil {
return nil, err
}
discoverySecret, err := parseDiscoverySecret(secret)
if err != nil {
return nil, err
}
// For ZFSSA, the portal should also contain the assigned number
portal := portalMounter(tp)
var bkportal []string
bkportal = append(bkportal, portal)
portals := []string{}
if err := json.Unmarshal([]byte(portalList), &portals); err != nil {
return nil, err
}
for _, portal := range portals {
bkportal = append(bkportal, portalMounter(string(portal)))
}
iface := req.GetVolumeContext()["iscsiInterface"]
initiatorName := req.GetVolumeContext()["initiatorName"]
chapDiscovery := false
if req.GetVolumeContext()["discoveryCHAPAuth"] == "true" {
chapDiscovery = true
}
chapSession := false
if req.GetVolumeContext()["sessionCHAPAuth"] == "true" {
chapSession = true
}
i := iscsiDisk{
VolName: volName,
Portals: bkportal,
Iqn: iqn,
lun: assignedLunNumber,
Iface: iface,
chapDiscovery: chapDiscovery,
chapSession: chapSession,
secret: secret,
sessionSecret: sessionSecret,
discoverySecret: discoverySecret,
InitiatorName: initiatorName,
}
return &i, nil
}
func buildISCSIConnector(iscsiInfo *iscsiDisk) *iscsi_lib.Connector {
c := iscsi_lib.Connector{
VolumeName: iscsiInfo.VolName,
TargetIqn: iscsiInfo.Iqn,
TargetPortals: iscsiInfo.Portals,
Lun: iscsiInfo.lun,
Multipath: len(iscsiInfo.Portals) > 1,
}
if iscsiInfo.sessionSecret != (iscsi_lib.Secrets{}) {
c.SessionSecrets = iscsiInfo.sessionSecret
if iscsiInfo.discoverySecret != (iscsi_lib.Secrets{}) {
c.DiscoverySecrets = iscsiInfo.discoverySecret
}
}
return &c
}
func GetISCSIDiskMounter(iscsiInfo *iscsiDisk, readOnly bool, fsType string, mountOptions []string,
targetPath string) *iscsiDiskMounter {
return &iscsiDiskMounter{
iscsiDisk: iscsiInfo,
fsType: fsType,
readOnly: readOnly,
mountOptions: mountOptions,
mounter: &mount.SafeFormatAndMount{Interface: mount.New("")},
targetPath: targetPath,
deviceUtil: util.NewDeviceHandler(util.NewIOHandler()),
connector: buildISCSIConnector(iscsiInfo),
}
}
func GetISCSIDiskUnmounter(volumeId *utils.VolumeId) *iscsiDiskUnmounter {
volName := volumeId.Name
return &iscsiDiskUnmounter{
iscsiDisk: &iscsiDisk{
VolName: volName,
},
mounter: mount.New(""),
}
}
func portalMounter(portal string) string {
if !strings.Contains(portal, ":") {
portal = portal + ":3260"
}
return portal
}
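// Example (sketch): default iSCSI port handling by portalMounter. The
// addresses are placeholders.
func examplePortalMounter() {
	fmt.Println(portalMounter("192.168.56.10"))      // "192.168.56.10:3260"
	fmt.Println(portalMounter("192.168.56.10:3261")) // "192.168.56.10:3261"
}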
func parseSecret(secretParams string) map[string]string {
var secret map[string]string
if err := json.Unmarshal([]byte(secretParams), &secret); err != nil {
return nil
}
return secret
}
func parseSessionSecret(secretParams map[string]string) (iscsi_lib.Secrets, error) {
var ok bool
secret := iscsi_lib.Secrets{}
if len(secretParams) == 0 {
return secret, nil
}
if secret.UserName, ok = secretParams["node.session.auth.username"]; !ok {
return iscsi_lib.Secrets{}, fmt.Errorf("node.session.auth.username not found in secret")
}
if secret.Password, ok = secretParams["node.session.auth.password"]; !ok {
return iscsi_lib.Secrets{}, fmt.Errorf("node.session.auth.password not found in secret")
}
if secret.UserNameIn, ok = secretParams["node.session.auth.username_in"]; !ok {
return iscsi_lib.Secrets{}, fmt.Errorf("node.session.auth.username_in not found in secret")
}
if secret.PasswordIn, ok = secretParams["node.session.auth.password_in"]; !ok {
return iscsi_lib.Secrets{}, fmt.Errorf("node.session.auth.password_in not found in secret")
}
secret.SecretsType = "chap"
return secret, nil
}
func parseDiscoverySecret(secretParams map[string]string) (iscsi_lib.Secrets, error) {
var ok bool
secret := iscsi_lib.Secrets{}
if len(secretParams) == 0 {
return secret, nil
}
if secret.UserName, ok = secretParams["node.sendtargets.auth.username"]; !ok {
return iscsi_lib.Secrets{}, fmt.Errorf("node.sendtargets.auth.username not found in secret")
}
if secret.Password, ok = secretParams["node.sendtargets.auth.password"]; !ok {
return iscsi_lib.Secrets{}, fmt.Errorf("node.sendtargets.auth.password not found in secret")
}
if secret.UserNameIn, ok = secretParams["node.sendtargets.auth.username_in"]; !ok {
return iscsi_lib.Secrets{}, fmt.Errorf("node.sendtargets.auth.username_in not found in secret")
}
if secret.PasswordIn, ok = secretParams["node.sendtargets.auth.password_in"]; !ok {
return iscsi_lib.Secrets{}, fmt.Errorf("node.sendtargets.auth.password_in not found in secret")
}
secret.SecretsType = "chap"
return secret, nil
}
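// Example (sketch): the "secret" value in the volume context is a JSON object
// whose keys follow the iscsiadm option names consumed above. The credentials
// below are placeholders.
func exampleParseChapSecrets() {
	secretJSON := `{
		"node.session.auth.username":    "chap-user",
		"node.session.auth.password":    "chap-password",
		"node.session.auth.username_in": "chap-user-in",
		"node.session.auth.password_in": "chap-password-in"
	}`
	secret := parseSecret(secretJSON)
	sessionSecret, err := parseSessionSecret(secret)
	if err != nil {
		fmt.Println("session secret error:", err)
		return
	}
	fmt.Println(sessionSecret.SecretsType) // "chap"
}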
type iscsiDisk struct {
Portals []string
Iqn string
lun int32
Iface string
chapDiscovery bool
chapSession bool
secret map[string]string
sessionSecret iscsi_lib.Secrets
discoverySecret iscsi_lib.Secrets
InitiatorName string
VolName string
}
type iscsiDiskMounter struct {
*iscsiDisk
readOnly bool
fsType string
mountOptions []string
mounter *mount.SafeFormatAndMount
deviceUtil util.DeviceUtil
targetPath string
connector *iscsi_lib.Connector
}
type iscsiDiskUnmounter struct {
*iscsiDisk
mounter mount.Interface
}
type ISCSIUtil struct{}
func (util *ISCSIUtil) Rescan (ctx context.Context) (string, error) {
cmd := exec.Command("iscsiadm", "-m", "session", "--rescan")
var stdout bytes.Buffer
var iscsiadmError error
cmd.Stdout = &stdout
cmd.Stderr = &stdout
defer stdout.Reset()
// we're using Start and Wait because we want to grab exit codes
err := cmd.Start()
if err != nil {
// Start failures (for example, iscsiadm missing from the image) never carry an exit code.
formattedOutput := strings.Replace(stdout.String(), "\n", "", -1)
iscsiadmError = fmt.Errorf("iscsiadm error: %s (%s)", formattedOutput, err.Error())
return stdout.String(), iscsiadmError
}
err = cmd.Wait()
if err != nil {
// Check if this is simply no sessions found (rc 21)
if exitError, ok := err.(*exec.ExitError); ok && exitError.ExitCode() == ISCSI_ERR_NO_OBJS_FOUND {
// No error, just no objects found
utils.GetLogUTIL(ctx, 4).Println("iscsiadm: no sessions, will continue")
} else {
formattedOutput := strings.Replace(stdout.String(), "\n", "", -1)
iscsiadmError = fmt.Errorf("iscsiadm error: %s (%s)", formattedOutput, err.Error())
}
}
return stdout.String(), iscsiadmError
}
func (util *ISCSIUtil) ConnectDisk(ctx context.Context, b iscsiDiskMounter) (string, error) {
utils.GetLogUTIL(ctx, 4).Println("ConnectDisk started")
_, err := util.Rescan(ctx)
if err != nil {
utils.GetLogUTIL(ctx, 4).Println("iSCSI rescan error: %s", err.Error())
return "", err
}
utils.GetLogUTIL(ctx, 4).Println("ConnectDisk will connect and get device path")
devicePath, err := iscsi_lib.Connect(*b.connector)
if err != nil {
utils.GetLogUTIL(ctx, 4).Println("iscsi_lib connect error: %s", err.Error())
return "", err
}
if devicePath == "" {
utils.GetLogUTIL(ctx, 4).Println("iscsi_lib devicePath is empty, cannot continue")
return "", fmt.Errorf("connect reported success, but no path returned")
}
utils.GetLogUTIL(ctx, 4).Println("ConnectDisk devicePath: %s", devicePath)
return devicePath, nil
}
func (util *ISCSIUtil) AttachDisk(ctx context.Context, b iscsiDiskMounter, devicePath string) (string, error) {
// Mount device
if len(devicePath) == 0 {
localDevicePath, err := util.ConnectDisk(ctx, b)
if err != nil {
utils.GetLogUTIL(ctx, 3).Println("ConnectDisk failure: %s", err.Error())
return "", err
}
devicePath = localDevicePath
}
mntPath := b.targetPath
notMnt, err := b.mounter.IsLikelyNotMountPoint(mntPath)
if err != nil && !os.IsNotExist(err) {
return "", fmt.Errorf("heuristic determination of mount point failed: %v", err)
}
if !notMnt {
utils.GetLogUTIL(ctx, 3).Println("iscsi: device already mounted", "mount_path", mntPath)
return "", nil
}
if err := os.MkdirAll(mntPath, 0750); err != nil {
return "", err
}
// Persist iscsi disk config to json file for DetachDisk path
file := path.Join(mntPath, b.VolName+".json")
err = iscsi_lib.PersistConnector(b.connector, file)
if err != nil {
return "", err
}
options := []string{"bind"}
if b.readOnly {
options = append(options, "ro")
} else {
options = append(options, "rw")
}
options = append(options, b.mountOptions...)
utils.GetLogUTIL(ctx, 3).Println("Mounting disk at path: %s", mntPath)
err = b.mounter.Mount(devicePath, mntPath, "", options)
if err != nil {
utils.GetLogUTIL(ctx, 3).Println("iscsi: failed to mount iscsi volume",
"device_path", devicePath, "mount_path", mntPath, "error", err.Error())
return "", err
}
return devicePath, err
}
func (util *ISCSIUtil) DetachDisk(ctx context.Context, c iscsiDiskUnmounter, targetPath string) error {
_, cnt, err := mount.GetDeviceNameFromMount(c.mounter, targetPath)
if err != nil {
return err
}
if pathExists, pathErr := mount.PathExists(targetPath); pathErr != nil {
return fmt.Errorf("Error checking if path exists: %v", pathErr)
} else if !pathExists {
utils.GetLogUTIL(ctx, 2).Println("Unmount skipped because path does not exist",
"target_path", targetPath)
return nil
}
if err = c.mounter.Unmount(targetPath); err != nil {
utils.GetLogUTIL(ctx, 3).Println("iscsi detach disk: failed to unmount",
"target_path", targetPath, "error", err.Error())
return err
}
cnt--
if cnt != 0 {
return nil
}
// load iscsi disk config from json file
file := path.Join(targetPath, c.iscsiDisk.VolName+".json")
connector, err := iscsi_lib.GetConnectorFromFile(file)
if err != nil {
return err
}
if err = iscsi_lib.Disconnect(connector.TargetIqn, connector.TargetPortals); err != nil {
utils.GetLogUTIL(ctx, 3).Println("iscsi detach disk: disconnect reported an error",
"target_iqn", connector.TargetIqn, "error", err.Error())
}
if err := os.RemoveAll(targetPath); err != nil {
return err
}
return nil
}

61
pkg/service/mount.go Normal file

@ -0,0 +1,61 @@
/*
* Copyright (c) 2021, Oracle and/or its affiliates.
* Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl/
*/
package service
import (
"k8s.io/utils/mount"
"os"
)
// Mounter is an interface for mount operations
type Mounter interface {
mount.Interface
GetDeviceName(mountPath string) (string, int, error)
MakeFile(pathname string) error
ExistsPath(pathname string) (bool, error)
}
type NodeMounter struct {
mount.SafeFormatAndMount
}
func newNodeMounter() Mounter {
return &NodeMounter{
mount.SafeFormatAndMount{
Interface: mount.New(""),
},
}
}
// Retrieve a device name from a mount point (this is a compatibility interface)
func (m *NodeMounter) GetDeviceName(mountPath string) (string, int, error) {
return mount.GetDeviceNameFromMount(m, mountPath)
}
// Make a file at the pathname
func (mounter *NodeMounter) MakeFile(pathname string) error {
f, err := os.OpenFile(pathname, os.O_CREATE, os.FileMode(0644))
if err != nil {
if os.IsExist(err) {
return nil
}
return err
}
return f.Close()
}
// Check if a file exists
func (m *NodeMounter) ExistsPath(pathname string) (bool, error) {
_, err := os.Stat(pathname)
if os.IsNotExist(err) {
return false, err
}
return true, err
}
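// Example (sketch): typical use of the Mounter helpers when preparing a raw
// block publish target. The path is a placeholder.
func exampleMounterUsage() error {
	m := newNodeMounter()
	target := "/var/lib/kubelet/plugins/example/target"
	if exists, _ := m.ExistsPath(target); !exists {
		if err := m.MakeFile(target); err != nil {
			return err
		}
	}
	return nil
}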

272
pkg/service/node.go Normal file

@ -0,0 +1,272 @@
/*
* Copyright (c) 2021, Oracle and/or its affiliates.
* Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl/
*/
package service
import (
"github.com/oracle/zfssa-csi-driver/pkg/utils"
"github.com/oracle/zfssa-csi-driver/pkg/zfssarest"
"fmt"
"os"
"github.com/container-storage-interface/spec/lib/go/csi"
"golang.org/x/net/context"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
var (
// nodeCaps represents the capability of node service.
nodeCaps = []csi.NodeServiceCapability_RPC_Type{
csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME,
// csi.NodeServiceCapability_RPC_EXPAND_VOLUME,
csi.NodeServiceCapability_RPC_UNKNOWN,
}
)
func NewZFSSANodeServer(zd *ZFSSADriver) *csi.NodeServer {
zd.NodeMounter = newNodeMounter()
var ns csi.NodeServer = zd
return &ns
}
func (zd *ZFSSADriver) NodeStageVolume(ctx context.Context, req *csi.NodeStageVolumeRequest) (
*csi.NodeStageVolumeResponse, error) {
utils.GetLogNODE(ctx, 5).Println("NodeStageVolume", "request", req)
// The validity of the request is checked
VolumeID := req.GetVolumeId()
if len(VolumeID) == 0 {
utils.GetLogNODE(ctx, 2).Println("VolumeID not provided, will return")
return nil, status.Error(codes.InvalidArgument, "Volume ID not provided")
}
targetPath := req.GetStagingTargetPath()
if len(targetPath) == 0 {
utils.GetLogNODE(ctx, 2).Println("Target path not provided, will return")
return nil, status.Error(codes.InvalidArgument, "Target path not provided")
}
reqCaps := req.GetVolumeCapability()
if reqCaps == nil {
utils.GetLogNODE(ctx, 2).Println("Capability not provided, will return")
return nil, status.Error(codes.InvalidArgument, "Capability not provided")
}
// Not staging for either block or mount for now.
return &csi.NodeStageVolumeResponse{}, nil
}
func (zd *ZFSSADriver) NodeUnstageVolume(ctx context.Context, req *csi.NodeUnstageVolumeRequest) (
*csi.NodeUnstageVolumeResponse, error) {
utils.GetLogNODE(ctx, 5).Println("NodeUnStageVolume", "request", req)
VolumeID := req.GetVolumeId()
if len(VolumeID) == 0 {
utils.GetLogNODE(ctx, 2).Println("VolumeID not provided, will return")
return nil, status.Error(codes.InvalidArgument, "Volume ID not provided")
}
target := req.GetStagingTargetPath()
if len(target) == 0 {
return nil, status.Error(codes.InvalidArgument, "Staging target not provided")
}
// Check if target directory is a mount point. GetDeviceNameFromMount
// given a mnt point, finds the device from /proc/mounts
// returns the device name, reference count, and error code
dev, refCount, err := zd.NodeMounter.GetDeviceName(target)
if err != nil {
msg := fmt.Sprintf("failed to check if volume is mounted: %v", err)
return nil, status.Error(codes.Internal, msg)
}
// From the spec: If the volume corresponding to the volume_id
// is not staged to the staging_target_path, the Plugin MUST
// reply 0 OK.
if refCount == 0 {
utils.GetLogNODE(ctx, 3).Println("NodeUnstageVolume: target not mounted", "target", target)
return &csi.NodeUnstageVolumeResponse{}, nil
}
if refCount > 1 {
utils.GetLogNODE(ctx, 2).Println("NodeUnstageVolume: found references to device mounted at target path",
"references", refCount, "device", dev, "target", target)
}
utils.GetLogNODE(ctx, 5).Println("NodeUnstageVolume: unmounting target", "target", target)
err = zd.NodeMounter.Unmount(target)
if err != nil {
return nil, status.Errorf(codes.Internal, "Cannot unmount staging target %q: %v", target, err)
}
notMnt, mntErr := zd.NodeMounter.IsLikelyNotMountPoint(target)
if mntErr != nil {
utils.GetLogNODE(ctx, 2).Println("Cannot determine staging target path",
"staging_target_path", target, "error", err.Error())
return nil, status.Error(codes.Internal, err.Error())
}
if notMnt {
if err := os.Remove(target); err != nil {
utils.GetLogNODE(ctx, 2).Println("Cannot delete staging target path",
"staging_target_path", target, "error", err.Error())
return nil, status.Error(codes.Internal, err.Error())
}
}
return &csi.NodeUnstageVolumeResponse{}, nil
}
func (zd *ZFSSADriver) NodePublishVolume(ctx context.Context, req *csi.NodePublishVolumeRequest) (
*csi.NodePublishVolumeResponse, error) {
utils.GetLogNODE(ctx, 5).Println("NodePublishVolume", "request", req)
VolumeID := req.GetVolumeId()
if len(VolumeID) == 0 {
utils.GetLogNODE(ctx, 2).Println("VolumeID not provided, will return")
return nil, status.Error(codes.InvalidArgument, "Volume ID not provided")
}
zVolumeId, err := utils.VolumeIdFromString(VolumeID)
if err != nil {
utils.GetLogNODE(ctx, 2).Println("NodePublishVolume Volume ID was invalid",
"volume_id", req.GetVolumeId(), "error", err.Error())
// A malformed volume ID cannot be published; reject the request.
return nil, status.Error(codes.InvalidArgument, "Volume ID invalid")
}
source := req.GetStagingTargetPath()
if len(source) == 0 {
utils.GetLogNODE(ctx, 2).Println("Staging target path not provided, will return")
return nil, status.Error(codes.InvalidArgument, "Staging target not provided")
}
target := req.GetTargetPath()
if len(target) == 0 {
utils.GetLogNODE(ctx, 2).Println("Target path not provided, will return")
return nil, status.Error(codes.InvalidArgument, "Target path not provided")
}
utils.GetLogNODE(ctx, 2).Printf("NodePublishVolume: stagingTarget=%s, target=%s", source, target)
volCap := req.GetVolumeCapability()
if volCap == nil {
utils.GetLogNODE(ctx, 2).Println("Volume Capabilities path not provided, will return")
return nil, status.Error(codes.InvalidArgument, "Volume capability not provided")
}
// The account to be used for this operation is determined.
user, password, err := zd.getUserLogin(ctx, req.Secrets)
if err != nil {
return nil, status.Error(codes.Unauthenticated, "Invalid credentials")
}
token := zfssarest.LookUpToken(user, password)
var mountOptions []string
if req.GetReadonly() {
mountOptions = append(mountOptions, "ro")
}
if req.GetVolumeCapability().GetBlock() != nil {
mountOptions = append(mountOptions, "bind")
return zd.nodePublishBlockVolume(ctx, token, req, zVolumeId, mountOptions)
}
switch mode := volCap.GetAccessType().(type) {
case *csi.VolumeCapability_Block:
mountOptions = append(mountOptions, "bind")
return zd.nodePublishBlockVolume(ctx, token, req, zVolumeId, mountOptions)
case *csi.VolumeCapability_Mount:
return zd.nodePublishFileSystem(ctx, token, req, zVolumeId, mountOptions, mode)
default:
utils.GetLogNODE(ctx, 2).Println("Publish does not support Access Type", "access_type",
volCap.GetAccessType())
return nil, status.Error(codes.InvalidArgument, "Invalid access type")
}
}
func (zd *ZFSSADriver) NodeUnpublishVolume(ctx context.Context, req *csi.NodeUnpublishVolumeRequest) (
*csi.NodeUnpublishVolumeResponse, error) {
utils.GetLogNODE(ctx, 5).Println("NodeUnpublishVolume", "request", req)
targetPath := req.GetTargetPath()
if len(targetPath) == 0 {
return nil, status.Error(codes.InvalidArgument, "Target path not provided")
}
volumeID := req.GetVolumeId()
if len(volumeID) == 0 {
utils.GetLogNODE(ctx, 2).Println("VolumeID not provided, will return")
return nil, status.Error(codes.InvalidArgument, "Volume ID not provided")
}
zVolumeId, err := utils.VolumeIdFromString(volumeID)
if err != nil {
utils.GetLogNODE(ctx, 2).Println("Cannot unpublish volume",
"volume_id", req.GetVolumeId(), "error", err.Error())
return nil, err
}
user, password, err := zd.getUserLogin(ctx, nil)
if err != nil {
return nil, status.Error(codes.Unauthenticated, "Invalid credentials")
}
token := zfssarest.LookUpToken(user, password)
if zVolumeId.IsBlock() {
return zd.nodeUnpublishBlockVolume(ctx, token, req, zVolumeId)
} else {
return zd.nodeUnpublishFilesystemVolume(token, ctx, req, zVolumeId)
}
}
func (zd *ZFSSADriver) NodeGetVolumeStats(ctx context.Context, req *csi.NodeGetVolumeStatsRequest) (
*csi.NodeGetVolumeStatsResponse, error) {
utils.GetLogNODE(ctx, 5).Println("NodeGetVolumeStats", "request", req)
return nil, status.Error(codes.Unimplemented, "")
}
func (zd *ZFSSADriver) NodeExpandVolume(ctx context.Context, req *csi.NodeExpandVolumeRequest) (
*csi.NodeExpandVolumeResponse, error) {
utils.GetLogNODE(ctx, 5).Println("NodeExpandVolume", "request", req)
return nil, status.Error(codes.Unimplemented, "")
}
func (zd *ZFSSADriver) NodeGetCapabilities(ctx context.Context, req *csi.NodeGetCapabilitiesRequest) (
*csi.NodeGetCapabilitiesResponse, error) {
utils.GetLogNODE(ctx, 5).Println("NodeGetCapabilities", "request", req)
var caps []*csi.NodeServiceCapability
for _, capacity := range nodeCaps {
c := &csi.NodeServiceCapability{
Type: &csi.NodeServiceCapability_Rpc{
Rpc: &csi.NodeServiceCapability_RPC{
Type: capacity,
},
},
}
caps = append(caps, c)
}
return &csi.NodeGetCapabilitiesResponse{Capabilities: caps}, nil
}
func (zd *ZFSSADriver) NodeGetInfo(ctx context.Context, req *csi.NodeGetInfoRequest) (
*csi.NodeGetInfoResponse, error) {
utils.GetLogNODE(ctx, 2).Println("NodeGetInfo", "request", req)
return &csi.NodeGetInfoResponse{
NodeId: zd.config.NodeName,
}, nil
}

216
pkg/service/node_block.go Normal file

@ -0,0 +1,216 @@
/*
* Copyright (c) 2021, Oracle and/or its affiliates.
* Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl/
*/
package service
import (
"fmt"
"os"
"path/filepath"
"github.com/oracle/zfssa-csi-driver/pkg/utils"
"github.com/oracle/zfssa-csi-driver/pkg/zfssarest"
"github.com/container-storage-interface/spec/lib/go/csi"
"golang.org/x/net/context"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
// Nothing is done
func (zd *ZFSSADriver) NodeStageBlockVolume(ctx context.Context, req *csi.NodeStageVolumeRequest) (
*csi.NodeStageVolumeResponse, error) {
return &csi.NodeStageVolumeResponse{}, nil
}
func (zd *ZFSSADriver) NodeUnstageBlockVolume(ctx context.Context, req *csi.NodeUnstageVolumeRequest) (
*csi.NodeUnstageVolumeResponse, error) {
utils.GetLogNODE(ctx, 5).Println("NodeUnStageVolume", "request", req, "context", ctx)
target := req.GetStagingTargetPath()
if len(target) == 0 {
return nil, status.Error(codes.InvalidArgument, "Staging target not provided")
}
// Check if target directory is a mount point. GetDeviceNameFromMount
// given a mnt point, finds the device from /proc/mounts
// returns the device name, reference count, and error code
dev, refCount, err := zd.NodeMounter.GetDeviceName(target)
if err != nil {
msg := fmt.Sprintf("failed to check if volume is mounted: %v", err)
return nil, status.Error(codes.Internal, msg)
}
// From the spec: If the volume corresponding to the volume_id
// is not staged to the staging_target_path, the Plugin MUST
// reply 0 OK.
if refCount == 0 {
utils.GetLogNODE(ctx, 3).Println("NodeUnstageVolume: target not mounted", "target", target)
return &csi.NodeUnstageVolumeResponse{}, nil
}
if refCount > 1 {
utils.GetLogNODE(ctx, 2).Println("NodeUnstageVolume: found references to device mounted at target path",
"references", refCount, "device", dev, "target", target)
}
utils.GetLogNODE(ctx, 5).Println("NodeUnstageVolume: unmounting target", "target", target)
err = zd.NodeMounter.Unmount(target)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not unmount target %q: %v", target, err)
}
return &csi.NodeUnstageVolumeResponse{}, nil
}
// nodePublishBlockVolume is the worker for block volumes only, it is going to get the
// block device mounted to the target path so it can be moved to the container requesting it
func (zd *ZFSSADriver) nodePublishBlockVolume(ctx context.Context, token *zfssarest.Token,
req *csi.NodePublishVolumeRequest, vid *utils.VolumeId, mountOptions []string) (
*csi.NodePublishVolumeResponse, error) {
target := req.GetTargetPath()
utils.GetLogNODE(ctx, 5).Println("nodePublishBlockVolume", req)
devicePath, err := attachBlockVolume(ctx, token, req, vid)
if err != nil {
return nil, err
}
utils.GetLogNODE(ctx, 5).Println("nodePublishBlockVolume", "devicePath", devicePath)
_, err = zd.NodeMounter.ExistsPath(devicePath)
if err != nil {
return nil, err
}
globalMountPath := filepath.Dir(target)
// create the global mount path if it is missing
// Path in the form of /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish/{volumeName}
utils.GetLogNODE(ctx, 5).Println("NodePublishVolume [block]", "globalMountPath", globalMountPath)
// Check if the global mount path exists and create it if it does not
if _, err := os.Stat(globalMountPath); os.IsNotExist(err) {
err := os.Mkdir(globalMountPath, 0700)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not create dir %q: %v", globalMountPath, err)
}
} else if err != nil {
return nil, status.Errorf(codes.Internal, "Could not check if path exists %q: %v", globalMountPath, err)
}
utils.GetLogNODE(ctx, 5).Println("NodePublishVolume [block]: making target file", "target_file", target)
// Create a new target file for the mount location
err = zd.NodeMounter.MakeFile(target)
if err != nil {
if removeErr := os.Remove(target); removeErr != nil {
return nil, status.Errorf(codes.Internal, "Could not remove mount target %q: %v", target, removeErr)
}
return nil, status.Errorf(codes.Internal, "Could not create file %q: %v", target, err)
}
utils.GetLogNODE(ctx, 5).Println("NodePublishVolume [block]: mounting block device",
"device_path", devicePath, "target", target, "mount_options", mountOptions)
if err := zd.NodeMounter.Mount(devicePath, target, "", mountOptions); err != nil {
if removeErr := os.Remove(target); removeErr != nil {
return nil, status.Errorf(codes.Internal, "Could not remove mount target %q: %v", target, removeErr)
}
return nil, status.Errorf(codes.Internal, "Could not mount %q at %q: %v", devicePath, target, err)
}
utils.GetLogNODE(ctx, 5).Println("NodePublishVolume [block]: mounted block device",
"device_path", devicePath, "target", target)
return &csi.NodePublishVolumeResponse{}, nil
}
// attachBlockVolume rescans the iSCSI session and attempts to attach the disk.
// This may actually belong in ControllerPublish (and was there for a while), but
// if it goes in the controller, then we have to have a way to remote the request
// to the proper node since the controller may not co-exist with the node where
// the device is actually needed for the work to be done
func attachBlockVolume(ctx context.Context, token *zfssarest.Token, req *csi.NodePublishVolumeRequest,
vid *utils.VolumeId) (string, error) {
lun := vid.Name
pool := vid.Pool
project := vid.Project
lunInfo, _, err := zfssarest.GetLun(ctx, token, pool, project, lun)
if err != nil {
return "", err
}
targetGroup := lunInfo.TargetGroup
targetInfo, err := zfssarest.GetTargetGroup(ctx, token, "iscsi", targetGroup)
if err != nil {
return "", err
}
iscsiInfo, err := GetISCSIInfo(ctx, vid, req, targetInfo.Targets[0], lunInfo.AssignedNumber[0])
if err != nil {
return "", status.Error(codes.Internal, err.Error())
}
utils.GetLogNODE(ctx, 5).Printf("attachBlockVolume: prepare mounting: %v", iscsiInfo)
fsType := req.GetVolumeCapability().GetMount().GetFsType()
mountOptions := req.GetVolumeCapability().GetMount().GetMountFlags()
diskMounter := GetISCSIDiskMounter(iscsiInfo, false, fsType, mountOptions, "")
utils.GetLogNODE(ctx, 5).Println("iSCSI Connector", "TargetPortals", diskMounter.connector.TargetPortals,
"Lun", diskMounter.connector.Lun, "TargetIqn", diskMounter.connector.TargetIqn,
"VolumeName", diskMounter.connector.VolumeName)
util := &ISCSIUtil{}
utils.GetLogNODE(ctx, 5).Println("attachBlockVolume: connecting disk", "diskMounter", diskMounter)
devicePath, err := util.ConnectDisk(ctx, *diskMounter)
if err != nil {
utils.GetLogNODE(ctx, 5).Println("attachBlockVolume: failed connecting the disk: %s", err.Error())
return "", status.Error(codes.Internal, err.Error())
}
utils.GetLogNODE(ctx, 5).Println("attachBlockVolume: attached at: %s", devicePath)
return devicePath, nil
}
func (zd *ZFSSADriver) nodeUnpublishBlockVolume(ctx context.Context, token *zfssarest.Token,
req *csi.NodeUnpublishVolumeRequest, zvid *utils.VolumeId) (*csi.NodeUnpublishVolumeResponse, error) {
targetPath := req.GetTargetPath()
if len(targetPath) == 0 {
return nil, status.Error(codes.InvalidArgument, "Target path not provided")
}
diskUnmounter := GetISCSIDiskUnmounter(zvid)
// TODO: add/decrement a node-local reference count so we can tell when the
// node's attach can be released (with RWO the count only reaches 1; with RWM
// many pods may be using the disk, so it must be tracked).
err := diskUnmounter.mounter.Unmount(targetPath)
if err != nil {
utils.GetLogNODE(ctx, 2).Println("Cannot unmount volume",
"volume_id", req.GetVolumeId(), "error", err.Error())
return nil, status.Error(codes.Internal, err.Error())
}
notMnt, mntErr := zd.NodeMounter.IsLikelyNotMountPoint(targetPath)
if mntErr != nil {
utils.GetLogNODE(ctx, 2).Println("Cannot determine target path",
"target_path", targetPath, "error", err.Error())
return nil, status.Error(codes.Internal, err.Error())
}
if notMnt {
if err := os.Remove(targetPath); err != nil {
utils.GetLogNODE(ctx, 2).Println("Cannot delete target path",
"target_path", targetPath, "error", err.Error())
return nil, status.Error(codes.Internal, err.Error())
}
}
return &csi.NodeUnpublishVolumeResponse{}, nil
}

101
pkg/service/node_fs.go Normal file

@ -0,0 +1,101 @@
/*
* Copyright (c) 2021, Oracle and/or its affiliates.
* Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl/
*/
package service
import (
"fmt"
"os"
"strings"
"github.com/oracle/zfssa-csi-driver/pkg/utils"
"github.com/oracle/zfssa-csi-driver/pkg/zfssarest"
"github.com/container-storage-interface/spec/lib/go/csi"
"golang.org/x/net/context"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
func (zd *ZFSSADriver) nodePublishFileSystem(ctx context.Context, token *zfssarest.Token,
req *csi.NodePublishVolumeRequest, vid *utils.VolumeId, mountOptions []string,
mode *csi.VolumeCapability_Mount) (*csi.NodePublishVolumeResponse, error) {
targetPath := req.GetTargetPath()
notMnt, err := zd.NodeMounter.IsLikelyNotMountPoint(targetPath)
if err != nil {
if os.IsNotExist(err) {
if err := os.MkdirAll(targetPath, 0750); err != nil {
return nil, status.Error(codes.Internal, err.Error())
}
notMnt = true
} else {
return nil, status.Error(codes.Internal, err.Error())
}
}
if !notMnt {
return &csi.NodePublishVolumeResponse{}, nil
}
s := req.GetVolumeContext()["nfsServer"]
ep, found := req.GetVolumeContext()["mountpoint"]
if !found {
// The volume context of the volume provisioned from an existing share does not have the mountpoint.
// Use the share (corresponding to volumeAttributes.share of PV configuration) to get the mountpoint.
ep = req.GetVolumeContext()["share"]
}
source := fmt.Sprintf("%s:%s", s, ep)
utils.GetLogNODE(ctx, 5).Println("nodePublishFileSystem", "mount_point", source)
err = zd.NodeMounter.Mount(source, targetPath, "nfs", mountOptions)
if err != nil {
if os.IsPermission(err) {
return nil, status.Error(codes.PermissionDenied, err.Error())
}
if strings.Contains(err.Error(), "invalid argument") {
return nil, status.Error(codes.InvalidArgument, err.Error())
}
return nil, status.Error(codes.Internal, err.Error())
}
return &csi.NodePublishVolumeResponse{}, nil
}
func (zd *ZFSSADriver) nodeUnpublishFilesystemVolume(token *zfssarest.Token,
ctx context.Context, req *csi.NodeUnpublishVolumeRequest, vid *utils.VolumeId) (
*csi.NodeUnpublishVolumeResponse, error) {
utils.GetLogNODE(ctx, 5).Println("nodeUnpublishFileSystem", "request", req)
targetPath := req.GetTargetPath()
if len(targetPath) == 0 {
return nil, status.Error(codes.InvalidArgument, "Target path not provided")
}
err := zd.NodeMounter.Unmount(targetPath)
if err != nil {
utils.GetLogNODE(ctx, 2).Println("Cannot unmount volume",
"volume_id", req.GetVolumeId(), "error", err.Error())
return nil, status.Error(codes.Internal, err.Error())
}
notMnt, mntErr := zd.NodeMounter.IsLikelyNotMountPoint(targetPath)
if mntErr != nil {
utils.GetLogNODE(ctx, 2).Println("Cannot determine target path",
"target_path", targetPath, "error", err.Error())
return nil, status.Error(codes.Internal, err.Error())
}
if notMnt {
if err := os.Remove(targetPath); err != nil {
utils.GetLogNODE(ctx, 2).Println("Cannot delete target path",
"target_path", targetPath, "error", err.Error())
return nil, status.Error(codes.Internal, err.Error())
}
}
return &csi.NodeUnpublishVolumeResponse{}, nil
}

393
pkg/service/service.go Normal file

@ -0,0 +1,393 @@
/*
* Copyright (c) 2021, Oracle and/or its affiliates.
* Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl/
*/
package service
import (
"github.com/oracle/zfssa-csi-driver/pkg/utils"
"github.com/oracle/zfssa-csi-driver/pkg/zfssarest"
"errors"
"fmt"
"github.com/container-storage-interface/spec/lib/go/csi"
"golang.org/x/net/context"
"google.golang.org/grpc"
"gopkg.in/yaml.v2"
"io/ioutil"
"net"
"os"
"os/signal"
"regexp"
"strconv"
"strings"
"sync"
"syscall"
"time"
)
const (
// Default Log Level
DefaultLogLevel = "3"
DefaultCertPath = "/mnt/certs/zfssa.crt"
DefaultCredPath = "/mnt/zfssa/zfssa.yaml"
)
type ZFSSADriver struct {
name string
nodeID string
version string
endpoint string
config config
NodeMounter Mounter
vCache volumeHashTable
sCache snapshotHashTable
ns *csi.NodeServer
cs *csi.ControllerServer
is *csi.IdentityServer
}
type config struct {
Appliance string
User string
endpoint string
HostIp string
NodeName string
PodIp string
Secure bool
logLevel string
Certificate []byte
CertLocation string
CredLocation string
}
// The structured data in the ZFSSA credentials file
type ZfssaCredentials struct {
Username string `yaml:"username"`
Password string `yaml:"password"`
}
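// Example (sketch): the credentials file mounted at ZFSSA_CRED is a two-key
// YAML document parsed into ZfssaCredentials. The values are placeholders.
func exampleParseCredentials() {
	data := []byte("username: csi-user\npassword: csi-password\n")
	var creds ZfssaCredentials
	if err := yaml.Unmarshal(data, &creds); err != nil {
		fmt.Println("credentials parse error:", err)
		return
	}
	fmt.Println(creds.Username) // "csi-user"
}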
type accessType int
// NonBlocking server
type nonBlockingGRPCServer struct {
wg sync.WaitGroup
server *grpc.Server
}
const (
// Helpful size constants
Kib int64 = 1024
Mib int64 = Kib * 1024
Gib int64 = Mib * 1024
Gib100 int64 = Gib * 100
Tib int64 = Gib * 1024
Tib100 int64 = Tib * 100
DefaultVolumeSizeBytes int64 = 50 * Gib
mountAccess accessType = iota
blockAccess
)
const (
UsernamePattern string = `^[a-zA-Z][a-zA-Z0-9_\-\.]*$`
UsernameLength int = 255
)
type ZfssaBlockVolume struct {
VolName string `json:"volName"`
VolID string `json:"volID"`
VolSize int64 `json:"volSize"`
VolPath string `json:"volPath"`
VolAccessType accessType `json:"volAccessType"`
}
// Creates and returns a new ZFSSA driver structure.
func NewZFSSADriver(driverName, version string) (*ZFSSADriver, error) {
zd := new(ZFSSADriver)
zd.name = driverName
zd.version = version
err := getConfig(zd)
if err != nil {
return nil, err
}
zd.vCache.vHash = make(map[string]zVolumeInterface)
zd.sCache.sHash = make(map[string]*zSnapshot)
utils.InitLogs(zd.config.logLevel, zd.name, version, zd.config.NodeName)
err = zfssarest.InitREST(zd.config.Appliance, zd.config.Certificate, zd.config.Secure)
if err != nil {
return nil, err
}
err = InitClusterInterface()
if err != nil {
return nil, err
}
zd.is = newZFSSAIdentityServer(zd)
zd.cs = newZFSSAControllerServer(zd)
zd.ns = NewZFSSANodeServer(zd)
return zd, nil
}
// Gets the configuration and sanity checks it. Several environment variables values
// are retrieved:
//
// ZFSSA_TARGET The name or IP address of the appliance.
// NODE_NAME The name of the node on which the container is running.
// NODE_ID The ID of the node on which the container is running.
// CSI_ENDPOINT Unix socket the CSI driver will be listening on.
// ZFSSA_INSECURE Boolean specifying whether an appliance certificate is not required.
// ZFSSA_CERT Path to the certificate file (defaults to "/mnt/certs/zfssa.crt")
// ZFSSA_CRED Path to the credential file (defaults to "/mnt/zfssa/zfssa.yaml")
// HOST_IP IP address of the node.
// POD_IP IP address of the pod.
// LOG_LEVEL Log level to apply.
//
// Verifies the credentials are in the ZFSSA_CRED yaml file, does not verify their
// correctness.
func getConfig(zd *ZFSSADriver) error {
// Validate the ZFSSA credentials are available
credfile := strings.TrimSpace(getEnvFallback("ZFSSA_CRED", DefaultCredPath))
if len(credfile) == 0 {
return errors.New(fmt.Sprintf("a ZFSSA credentials file location is required, current value: <%s>",
credfile))
}
zd.config.CredLocation = credfile
_, err := os.Stat(credfile)
if os.IsNotExist(err) {
return errors.New(fmt.Sprintf("the ZFSSA credentials file is not present at location: <%s>",
credfile))
}
// Get the user from the credentials file, this can be stored in the config file without a problem
zd.config.User, err = zd.GetUsernameFromCred()
if err != nil {
return errors.New(fmt.Sprintf("Cannot get ZFSSA username: %s", err))
}
appliance := getEnvFallback("ZFSSA_TARGET", "")
zd.config.Appliance = strings.TrimSpace(appliance)
if zd.config.Appliance == "not-set" {
return errors.New("appliance name required")
}
zd.config.NodeName = getEnvFallback("NODE_NAME", "")
if zd.config.NodeName == "" {
return errors.New("node name required")
}
zd.config.endpoint = getEnvFallback("CSI_ENDPOINT", "")
if zd.config.endpoint == "" {
return errors.New("endpoint is required")
} else {
if !strings.HasPrefix(zd.config.endpoint, "unix://") {
return errors.New("endpoint is invalid")
}
s := strings.SplitN(zd.config.endpoint, "://", 2)
zd.config.endpoint = "/" + s[1]
err := os.RemoveAll(zd.config.endpoint)
if err != nil && !os.IsNotExist(err) {
return errors.New("failed to remove endpoint path")
}
}
switch strings.ToLower(strings.TrimSpace(getEnvFallback("ZFSSA_INSECURE", "False"))) {
case "true": zd.config.Secure = false
case "false": zd.config.Secure = true
default:
return errors.New("ZFSSA_INSECURE value is invalid")
}
if zd.config.Secure {
certfile := strings.TrimSpace(getEnvFallback("ZFSSA_CERT", DefaultCertPath))
if len(certfile) == 0 {
return errors.New("a certificate is required")
}
_, err := os.Stat(certfile)
if os.IsNotExist(err) {
return errors.New("certificate does not exits")
}
zd.config.Certificate, err = ioutil.ReadFile(certfile)
if err != nil {
return errors.New("failed to read certificate")
}
}
zd.config.HostIp = getEnvFallback("HOST_IP", "0.0.0.0")
zd.config.PodIp = getEnvFallback("POD_IP", "0.0.0.0")
zd.config.logLevel = getEnvFallback("LOG_LEVEL", DefaultLogLevel)
_, err = strconv.Atoi(zd.config.logLevel)
if err != nil {
return errors.New("invalid debug level")
}
return nil
}
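// Example (sketch): a minimal environment for getConfig, as it might be set up
// for local testing. The values are placeholders; in a cluster they come from
// the driver's Deployment/DaemonSet spec.
func exampleMinimalEnv() {
	_ = os.Setenv("ZFSSA_TARGET", "zfssa.example.com")
	_ = os.Setenv("NODE_NAME", "worker-node-1")
	_ = os.Setenv("CSI_ENDPOINT", "unix:///var/run/zfssa-csi/csi.sock")
	_ = os.Setenv("ZFSSA_INSECURE", "False")
	_ = os.Setenv("ZFSSA_CRED", "/mnt/zfssa/zfssa.yaml")
	_ = os.Setenv("ZFSSA_CERT", "/mnt/certs/zfssa.crt")
	_ = os.Setenv("LOG_LEVEL", "3")
}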
// Signals that trigger a graceful shutdown of the driver.
var sigList = []os.Signal {
syscall.SIGTERM,
syscall.SIGHUP,
syscall.SIGINT,
syscall.SIGQUIT,
}
// Retrieves just the username from a credential file (zd.config.CredLocation)
func (zd *ZFSSADriver) GetUsernameFromCred() (string, error) {
yamlData, err := ioutil.ReadFile(zd.config.CredLocation)
if err != nil {
return "", errors.New(fmt.Sprintf("the ZFSSA credentials file <%s> could not be read: <%s>",
zd.config.CredLocation, err))
}
var yamlConfig ZfssaCredentials
err = yaml.Unmarshal(yamlData, &yamlConfig)
if err != nil {
return "", errors.New(fmt.Sprintf("the ZFSSA credentials file <%s> could not be parsed: <%s>",
zd.config.CredLocation, err))
}
if !isUsernameValid(yamlConfig.Username) {
return "", errors.New(fmt.Sprintf("ZFSSA username is invalid: <%s>", yamlConfig.Username))
}
return yamlConfig.Username, nil
}
// Retrieves just the password from a credential file (zd.config.CredLocation)
func (zd *ZFSSADriver) GetPasswordFromCred() (string, error) {
yamlData, err := ioutil.ReadFile(zd.config.CredLocation)
if err != nil {
return "", errors.New(fmt.Sprintf("the ZFSSA credentials file <%s> could not be read: <%s>",
zd.config.CredLocation, err))
}
var yamlConfig ZfssaCredentials
err = yaml.Unmarshal(yamlData, &yamlConfig)
if err != nil {
return "", errors.New(fmt.Sprintf("the ZFSSA credentials file <%s> could not be parsed: <%s>",
zd.config.CredLocation, err))
}
return yamlConfig.Password, nil
}
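// Run starts the CSI driver. This includes registering the different servers (Identity, Controller and Node)
// with the CSI framework and starting to listen on the UNIX socket.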
func (zd *ZFSSADriver) Run() {
// Refresh current information
_ = zd.updateVolumeList(nil)
_ = zd.updateSnapshotList(nil)
// Create GRPC servers
s := new(nonBlockingGRPCServer)
sigChannel := make(chan os.Signal, 1)
signal.Notify(sigChannel, sigList...)
s.Start(zd.config.endpoint, *zd.is, *zd.cs, *zd.ns)
s.Wait(sigChannel)
s.Stop()
_ = os.RemoveAll(zd.config.endpoint)
}
func (s *nonBlockingGRPCServer) Start(endpoint string,
ids csi.IdentityServer, cs csi.ControllerServer, ns csi.NodeServer) {
s.wg.Add(1)
go s.serve(endpoint, ids, cs, ns)
}
func (s *nonBlockingGRPCServer) Wait(ch chan os.Signal) {
for sig := range ch {
switch sig {
case syscall.SIGTERM,
syscall.SIGHUP,
syscall.SIGINT,
syscall.SIGQUIT:
utils.GetLogCSID(nil, 5).Println("Termination signal received", "signal", sig)
return
default:
utils.GetLogCSID(nil, 5).Println("Signal received", "signal", sig)
continue
}
}
}
func (s *nonBlockingGRPCServer) Stop() {
s.server.GracefulStop()
s.wg.Done()
}
func (s *nonBlockingGRPCServer) ForceStop() {
s.server.Stop()
s.wg.Done()
}
func (s *nonBlockingGRPCServer) serve(endpoint string,
ids csi.IdentityServer, cs csi.ControllerServer, ns csi.NodeServer) {
listener, err := net.Listen("unix", endpoint)
if err != nil {
utils.GetLogCSID(nil, 2).Println("Failed to listen", "error", err)
return
}
opts := []grpc.ServerOption{ grpc.UnaryInterceptor(interceptorGRPC) }
server := grpc.NewServer(opts...)
s.server = server
csi.RegisterIdentityServer(server, ids)
csi.RegisterControllerServer(server, cs)
csi.RegisterNodeServer(server, ns)
utils.GetLogCSID(nil, 5).Println("Listening for connections", "address", endpoint)
err = server.Serve(listener)
if err != nil {
utils.GetLogCSID(nil, 2).Println("Serve returned with error", "error", err)
}
}
// Interceptor measuring the response time of the requests.
func interceptorGRPC(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
// Get a new context with a list of loggers request specific.
newContext := utils.GetNewContext(ctx)
// Calls the handler
utils.GetLogCSID(newContext, 4).Println("Request submitted", "method:", info.FullMethod)
start := time.Now()
rsp, err := handler(newContext, req)
utils.GetLogCSID(newContext, 4).Println("Request completed", "method:", info.FullMethod,
"duration:", time.Since(start), "error", err)
return rsp, err
}
// A local GetEnv utility function
func getEnvFallback(key, fallback string) string {
if value, ok := os.LookupEnv(key); ok {
return value
}
return fallback
}
// validate username
func isUsernameValid(username string) bool {
if len(username) == 0 || len(username) > UsernameLength {
return false
}
var validUsername = regexp.MustCompile(UsernamePattern).MatchString
return validUsername(username)
}
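// Example (sketch): usernames accepted and rejected by isUsernameValid.
func exampleUsernameValidation() {
	fmt.Println(isUsernameValid("csi_user.1")) // true: starts with a letter
	fmt.Println(isUsernameValid("1baduser"))   // false: must start with a letter
	fmt.Println(isUsernameValid(""))           // false: empty
}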

222
pkg/service/snapshots.go Normal file

@ -0,0 +1,222 @@
/*
* Copyright (c) 2021, Oracle and/or its affiliates.
* Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl/
*/
package service
import (
"github.com/oracle/zfssa-csi-driver/pkg/utils"
"github.com/oracle/zfssa-csi-driver/pkg/zfssarest"
"github.com/container-storage-interface/spec/lib/go/csi"
"github.com/golang/protobuf/ptypes/timestamp"
"golang.org/x/net/context"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
"net/http"
"sync/atomic"
)
// Snapshot of a mount (filesystem) volume or a block volume.
type zSnapshot struct {
bolt *utils.Bolt
refcount int32
state volumeState
href string
zvol zVolumeInterface
id *utils.SnapshotId
numClones int
spaceUnique int64
spaceData int64
timeStamp *timestamp.Timestamp
}
func newSnapshot(sid *utils.SnapshotId, zvol zVolumeInterface) *zSnapshot {
zsnap := new(zSnapshot)
zsnap.bolt = utils.NewBolt()
zsnap.zvol = zvol
zsnap.id = sid
zsnap.state = stateCreating
return zsnap
}
func (zsnap *zSnapshot) create(ctx context.Context, token *zfssarest.Token) (
*csi.CreateSnapshotResponse, error) {
utils.GetLogCTRL(ctx, 5).Println("zsnap.create")
snapinfo, httpStatus, err := zfssarest.CreateSnapshot(ctx, token, zsnap.zvol.getHref(), zsnap.id.Name)
if err != nil {
if httpStatus != http.StatusConflict {
zsnap.state = stateDeleted
return nil, err
}
// The creation failed because the source file system already has a snapshot
// with the same name.
if zsnap.getState() == stateCreated {
snapinfo, _, err := zfssarest.GetSnapshot(ctx, token, zsnap.zvol.getHref(), zsnap.id.Name)
if err != nil {
return nil, err
}
err = zsnap.setInfo(snapinfo)
if err != nil {
zsnap.state = stateDeleted
return nil, err
}
}
} else {
err = zsnap.setInfo(snapinfo)
if err != nil {
zsnap.state = stateDeleted
return nil, err
}
}
return &csi.CreateSnapshotResponse {
Snapshot: &csi.Snapshot{
SizeBytes: zsnap.spaceData,
SnapshotId: zsnap.id.String(),
SourceVolumeId: zsnap.id.VolumeId.String(),
CreationTime: zsnap.timeStamp,
ReadyToUse: true,
}}, nil
}
func (zsnap *zSnapshot) delete(ctx context.Context, token *zfssarest.Token) (
*csi.DeleteSnapshotResponse, error) {
utils.GetLogCTRL(ctx, 5).Println("zsnap.delete")
// Update the snapshot information.
httpStatus, err := zsnap.refreshDetails(ctx, token)
if err != nil {
if httpStatus != http.StatusNotFound {
return nil, err
}
zsnap.state = stateDeleted
return &csi.DeleteSnapshotResponse{}, nil
}
if zsnap.getNumClones() > 0 {
return nil, status.Errorf(codes.FailedPrecondition, "Snapshot has (%d) dependents", zsnap.numClones)
}
_, httpStatus, err = zfssarest.DeleteSnapshot(ctx, token, zsnap.href)
if err != nil && httpStatus != http.StatusNotFound {
return nil, err
}
zsnap.state = stateDeleted
return &csi.DeleteSnapshotResponse{}, nil
}
func (zsnap *zSnapshot) getDetails(ctx context.Context, token *zfssarest.Token) (int, error) {
utils.GetLogCTRL(ctx, 5).Println("zsnap.getDetails")
snapinfo, httpStatus, err := zfssarest.GetSnapshot(ctx, token, zsnap.zvol.getHref(), zsnap.id.Name)
if err != nil {
return httpStatus, err
}
zsnap.timeStamp, err = utils.DateToUnix(snapinfo.Creation)
if err != nil {
return httpStatus, err
}
zsnap.numClones = snapinfo.NumClones
zsnap.spaceData = snapinfo.SpaceData
zsnap.spaceUnique = snapinfo.SpaceUnique
zsnap.href = snapinfo.Href
zsnap.state = stateCreated
return httpStatus, nil
}
func (zsnap *zSnapshot) refreshDetails(ctx context.Context, token *zfssarest.Token) (int, error) {
snapinfo, httpStatus, err := zfssarest.GetSnapshot(ctx, token, zsnap.zvol.getHref(), zsnap.id.Name)
if err == nil {
zsnap.numClones = snapinfo.NumClones
zsnap.spaceData = snapinfo.SpaceData
zsnap.spaceUnique = snapinfo.SpaceUnique
}
return httpStatus, err
}
// Populate the snapshot structure with the information provided
func (zsnap *zSnapshot) setInfo(snapinfo *zfssarest.Snapshot) error {
var err error
zsnap.timeStamp, err = utils.DateToUnix(snapinfo.Creation)
if err != nil {
return err
}
zsnap.numClones = snapinfo.NumClones
zsnap.spaceData = snapinfo.SpaceData
zsnap.spaceUnique = snapinfo.SpaceUnique
zsnap.href = snapinfo.Href
zsnap.state = stateCreated
return nil
}
// Places a hold (reference) on the snapshot and returns its current state.
func (zsnap *zSnapshot) hold(ctx context.Context) volumeState {
atomic.AddInt32(&zsnap.refcount, 1)
return zsnap.state
}
func (zsnap *zSnapshot) lock(ctx context.Context) volumeState {
zsnap.bolt.Lock(ctx)
return zsnap.state
}
func (zsnap *zSnapshot) unlock(ctx context.Context) (int32, volumeState){
zsnap.bolt.Unlock(ctx)
return zsnap.refcount, zsnap.state
}
// Releases the snapshot and returns its updated reference count and state.
func (zsnap *zSnapshot) release(ctx context.Context) (int32, volumeState) {
return atomic.AddInt32(&zsnap.refcount, -1), zsnap.state
}
func (zsnap *zSnapshot) isBlock() bool { return zsnap.zvol.isBlock() }
func (zsnap *zSnapshot) getSourceVolume() zVolumeInterface { return zsnap.zvol }
func (zsnap *zSnapshot) getState() volumeState { return zsnap.state }
func (zsnap *zSnapshot) getName() string { return zsnap.id.Name }
func (zsnap *zSnapshot) getStringId() string { return zsnap.id.String() }
func (zsnap *zSnapshot) getStringSourceId() string { return zsnap.id.VolumeId.String() }
func (zsnap *zSnapshot) getHref() string { return zsnap.href }
func (zsnap *zSnapshot) getSize() int64 { return zsnap.spaceData }
func (zsnap *zSnapshot) getCreationTime() *timestamp.Timestamp { return zsnap.timeStamp }
func (zsnap *zSnapshot) getNumClones() int { return zsnap.numClones }
func zfssaSnapshotList2csiSnapshotList(ctx context.Context, zfssa string,
snapList []zfssarest.Snapshot) []*csi.ListSnapshotsResponse_Entry {
utils.GetLogCTRL(ctx, 5).Println("zfssaSnapshotList2csiSnapshotList")
entries := make([]*csi.ListSnapshotsResponse_Entry, 0, len(snapList))
for _, snapInfo := range snapList {
sid, err := utils.SnapshotIdStringFromHref(zfssa, snapInfo.Href)
if err != nil {
continue
}
vid, err := utils.VolumeIdStringFromHref(zfssa, snapInfo.Href)
if err != nil {
continue
}
creationTime, err := utils.DateToUnix(snapInfo.Creation)
if err != nil {
continue
}
entry := new(csi.ListSnapshotsResponse_Entry)
entry.Snapshot = &csi.Snapshot{
SnapshotId: sid,
SizeBytes: snapInfo.SpaceData,
SourceVolumeId: vid,
CreationTime: creationTime,
ReadyToUse: true,
}
entries = append(entries, entry)
}
return entries
}

Some files were not shown because too many files have changed in this diff.