zfssa-3-canary: create an example for cloning (prior to cloning being implemented), mixed up a copy of existing example but should be OK

This commit is contained in:
Paul Monday 2021-09-12 09:36:27 -06:00
parent 0c8642ed58
commit 1f397a68d1
78 changed files with 1913 additions and 851 deletions


@@ -53,26 +53,22 @@ Ensure the following information and requirements can be met prior to installati
Container Runtime will likely try to pull them. If your Container Runtime cannot access the images, you will have to
pull them manually before deployment. The required images are:
* node-driver-registrar v2.0.0+.
* external-attacher v3.0.2+.
* external-provisioner v2.0.5+.
* external-resizer v1.1.0+.
* external-snapshotter v3.0.3+.
* node-driver-registrar v2.7.0+.
* external-attacher v4.1.0+.
* external-provisioner v3.4.0+.
* external-resizer v1.7.0+.
* external-snapshotter v6.2.1+.
The common container images for those images are:
* k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.0
* k8s.gcr.io/sig-storage/csi-attacher:v3.0.2
* k8s.gcr.io/sig-storage/csi-provisioner:v2.0.5
* k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
* k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3
The current deployment uses the sidecar images built by Oracle and available
from the Oracle Container Registry (container-registry.oracle.com/olcne/).
Refer to the [current deployment for more information](deploy/helm/k8s-1.25/values.yaml).
* Plugin image
You can pull the plugin image from a registry that you know hosts it, or you can generate it and store it in one of
your registries. In either case, as with the sidecar images, the Container Runtime must have access to that registry;
if it does not, you will have to pull the image manually before deployment. If you choose to generate the plugin yourself, use
version 1.13.8 or above of the Go compiler.
version 1.21.0 or above of the Go compiler.
## Setup
@@ -150,18 +146,16 @@ Ensure that:
```
* All worker nodes are running the daemon `rpc.statd`
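A quick way to verify this on a systemd-based worker node (the unit name may differ by distribution) is:
```text
systemctl is-active rpc-statd
```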
### Enabling Kubernetes Volume Snapshot Feature (Only for Kubernetes v1.17 - v1.19)
### Kubernetes Volume Snapshot Feature
The Kubernetes Volume Snapshot feature became GA in Kubernetes v1.20. In order to use
this feature in Kubernetes pre-v1.20, it MUST be enabled prior to deploying the ZS CSI Driver.
To enable the feature on Kubernetes pre-v1.20, deploy the API extensions, associated configurations,
and a snapshot controller by running the following command in the deploy directory:
The Kubernetes Volume Snapshot feature became GA in Kubernetes v1.20.
```text
kubectl apply -R -f k8s-1.17/snapshot-controller
```
When installing from the [example helm charts](./deploy/helm/k8s-1.25), the snapshot
controller, the required RBAC roles, and the CRDs are deployed together with the
driver. If your Kubernetes deployment already contains a snapshot deployment,
modify the helm example deployment as needed.
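As an illustration, a deployment from that chart directory might look like the following (the release name, namespace, and values file are placeholders; adjust them to your environment):
```text
helm install zfssa-csi ./deploy/helm/k8s-1.25 -n kube-system -f my-values.yaml
```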
This command will report the creation of resources and configurations as follows:
After deployment, snapshot-related resources such as the following are applied:
```text
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
@@ -175,9 +169,7 @@ rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
statefulset.apps/snapshot-controller created
```
The details can be viewed using the `kubectl get <resource-type>` command. Note that the command
above deploys a snapshot-controller in the default namespace by default. The command
`kubectl get all` should present something similar to this:
The details can be viewed using the `kubectl get <resource-type>` command:
```text
NAME READY STATUS RESTARTS AGE
@@ -244,6 +236,9 @@ the values to your own values.
```
For development only, other mechanisms can be used to create and share the secret with the container.
The driver uses a YAML parser to parse this file. Because passwords often contain a variety
of special characters, enclose the password in double quotes.
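As a hypothetical illustration, an unquoted value containing ` #` would be truncated by the YAML parser (everything after the space-and-hash is treated as a comment), while the quoted form survives intact:
```yaml
username: "admin"        # hypothetical credentials for illustration only
password: "p@ss #w0rd!"  # without the quotes, ' #w0rd!' would be dropped as a comment
```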
*Warning* Do not store your credentials in source code control (such as this project). For production
environments, use a secure secret store that encrypts at rest and can provide credentials through role-based
access controls (refer to the Kubernetes documentation). Do not use the root user in production environments,


@@ -96,6 +96,9 @@ This set exercises dynamic volume creation (restoring from a volume snapshot) fo
* [NFS Volume Snapshot](./examples/nfs-vsc/README.md) - illustrates a snapshot creation of an NFS volume.
* [Block Volume Snapshot](./examples/block-vsc/README.md) - illustrates a snapshot creation of a block volume.
This set exercises dynamic volume creation using clones:
* [NFS Volume Clone](./examples/nfs-volume-clone/README.md) - illustrates a clone of an NFS volume.
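A clone of this kind is requested with a new PVC whose `dataSource` references an existing PVC; the sketch below uses hypothetical names and a hypothetical storage class:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zfssa-nfs-clone            # hypothetical name for the clone
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: zfssa-csi-nfs  # hypothetical storage class
  resources:
    requests:
      storage: 10Gi
  dataSource:
    kind: PersistentVolumeClaim
    name: zfssa-nfs-source         # existing PVC to clone
```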
## Help
Refer to the documentation links and examples for more information on


@@ -1,21 +1,21 @@
/*
* Copyright (c) 2021, Oracle and/or its affiliates.
* Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl/
*/
*/
package main
import (
"github.com/oracle/zfssa-csi-driver/pkg/service"
"flag"
"fmt"
"github.com/oracle/zfssa-csi-driver/pkg/service"
"os"
)
var (
driverName = flag.String("drivername", "zfssa-csi-driver", "name of the driver")
driverName = flag.String("drivername", "zfssa-csi-driver", "name of the driver")
// Provided by the build process
version = "0.0.0"
version = "1.2.0"
)
func main() {


@@ -1,4 +1,4 @@
apiVersion: v1
name: zfssa-csi
version: 1.0.0
version: 1.2.0
description: Deploys Oracle ZFS Storage Appliance CSI Plugin.


@@ -0,0 +1,136 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
api-approved.kubernetes.io: "https://github.com/kubernetes-csi/external-snapshotter/pull/814"
controller-gen.kubebuilder.io/version: v0.12.0
creationTimestamp: null
name: volumesnapshotclasses.snapshot.storage.k8s.io
spec:
group: snapshot.storage.k8s.io
names:
kind: VolumeSnapshotClass
listKind: VolumeSnapshotClassList
plural: volumesnapshotclasses
shortNames:
- vsclass
- vsclasses
singular: volumesnapshotclass
scope: Cluster
versions:
- additionalPrinterColumns:
- jsonPath: .driver
name: Driver
type: string
- description: Determines whether a VolumeSnapshotContent created through the
VolumeSnapshotClass should be deleted when its bound VolumeSnapshot is deleted.
jsonPath: .deletionPolicy
name: DeletionPolicy
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1
schema:
openAPIV3Schema:
description: VolumeSnapshotClass specifies parameters that a underlying storage
system uses when creating a volume snapshot. A specific VolumeSnapshotClass
is used by specifying its name in a VolumeSnapshot object. VolumeSnapshotClasses
are non-namespaced
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
deletionPolicy:
description: deletionPolicy determines whether a VolumeSnapshotContent
created through the VolumeSnapshotClass should be deleted when its bound
VolumeSnapshot is deleted. Supported values are "Retain" and "Delete".
"Retain" means that the VolumeSnapshotContent and its physical snapshot
on underlying storage system are kept. "Delete" means that the VolumeSnapshotContent
and its physical snapshot on underlying storage system are deleted.
Required.
enum:
- Delete
- Retain
type: string
driver:
description: driver is the name of the storage driver that handles this
VolumeSnapshotClass. Required.
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
parameters:
additionalProperties:
type: string
description: parameters is a key-value map with storage driver specific
parameters for creating snapshots. These values are opaque to Kubernetes.
type: object
required:
- deletionPolicy
- driver
type: object
served: true
storage: true
subresources: {}
- additionalPrinterColumns:
- jsonPath: .driver
name: Driver
type: string
- description: Determines whether a VolumeSnapshotContent created through the VolumeSnapshotClass should be deleted when its bound VolumeSnapshot is deleted.
jsonPath: .deletionPolicy
name: DeletionPolicy
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1beta1
# This indicates the v1beta1 version of the custom resource is deprecated.
# API requests to this version receive a warning in the server response.
deprecated: true
# This overrides the default warning returned to clients making v1beta1 API requests.
deprecationWarning: "snapshot.storage.k8s.io/v1beta1 VolumeSnapshotClass is deprecated; use snapshot.storage.k8s.io/v1 VolumeSnapshotClass"
schema:
openAPIV3Schema:
description: VolumeSnapshotClass specifies parameters that a underlying storage system uses when creating a volume snapshot. A specific VolumeSnapshotClass is used by specifying its name in a VolumeSnapshot object. VolumeSnapshotClasses are non-namespaced
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
deletionPolicy:
description: deletionPolicy determines whether a VolumeSnapshotContent created through the VolumeSnapshotClass should be deleted when its bound VolumeSnapshot is deleted. Supported values are "Retain" and "Delete". "Retain" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are kept. "Delete" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are deleted. Required.
enum:
- Delete
- Retain
type: string
driver:
description: driver is the name of the storage driver that handles this VolumeSnapshotClass. Required.
type: string
kind:
description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
parameters:
additionalProperties:
type: string
description: parameters is a key-value map with storage driver specific parameters for creating snapshots. These values are opaque to Kubernetes.
type: object
required:
- deletionPolicy
- driver
type: object
served: false
storage: false
subresources: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []


@@ -0,0 +1,403 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
api-approved.kubernetes.io: "https://github.com/kubernetes-csi/external-snapshotter/pull/955"
controller-gen.kubebuilder.io/version: v0.12.0
creationTimestamp: null
name: volumesnapshotcontents.snapshot.storage.k8s.io
spec:
group: snapshot.storage.k8s.io
names:
kind: VolumeSnapshotContent
listKind: VolumeSnapshotContentList
plural: volumesnapshotcontents
shortNames:
- vsc
- vscs
singular: volumesnapshotcontent
scope: Cluster
versions:
- additionalPrinterColumns:
- description: Indicates if the snapshot is ready to be used to restore a volume.
jsonPath: .status.readyToUse
name: ReadyToUse
type: boolean
- description: Represents the complete size of the snapshot in bytes
jsonPath: .status.restoreSize
name: RestoreSize
type: integer
- description: Determines whether this VolumeSnapshotContent and its physical
snapshot on the underlying storage system should be deleted when its bound
VolumeSnapshot is deleted.
jsonPath: .spec.deletionPolicy
name: DeletionPolicy
type: string
- description: Name of the CSI driver used to create the physical snapshot on
the underlying storage system.
jsonPath: .spec.driver
name: Driver
type: string
- description: Name of the VolumeSnapshotClass to which this snapshot belongs.
jsonPath: .spec.volumeSnapshotClassName
name: VolumeSnapshotClass
type: string
- description: Name of the VolumeSnapshot object to which this VolumeSnapshotContent
object is bound.
jsonPath: .spec.volumeSnapshotRef.name
name: VolumeSnapshot
type: string
- description: Namespace of the VolumeSnapshot object to which this VolumeSnapshotContent object is bound.
jsonPath: .spec.volumeSnapshotRef.namespace
name: VolumeSnapshotNamespace
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1
schema:
openAPIV3Schema:
description: VolumeSnapshotContent represents the actual "on-disk" snapshot
object in the underlying storage system
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: spec defines properties of a VolumeSnapshotContent created
by the underlying storage system. Required.
properties:
deletionPolicy:
description: deletionPolicy determines whether this VolumeSnapshotContent
and its physical snapshot on the underlying storage system should
be deleted when its bound VolumeSnapshot is deleted. Supported values
are "Retain" and "Delete". "Retain" means that the VolumeSnapshotContent
and its physical snapshot on underlying storage system are kept.
"Delete" means that the VolumeSnapshotContent and its physical snapshot
on underlying storage system are deleted. For dynamically provisioned
snapshots, this field will automatically be filled in by the CSI
snapshotter sidecar with the "DeletionPolicy" field defined in the
corresponding VolumeSnapshotClass. For pre-existing snapshots, users
MUST specify this field when creating the VolumeSnapshotContent
object. Required.
enum:
- Delete
- Retain
type: string
driver:
description: driver is the name of the CSI driver used to create the
physical snapshot on the underlying storage system. This MUST be
the same as the name returned by the CSI GetPluginName() call for
that driver. Required.
type: string
source:
description: source specifies whether the snapshot is (or should be)
dynamically provisioned or already exists, and just requires a Kubernetes
object representation. This field is immutable after creation. Required.
properties:
snapshotHandle:
description: snapshotHandle specifies the CSI "snapshot_id" of
a pre-existing snapshot on the underlying storage system for
which a Kubernetes object representation was (or should be)
created. This field is immutable.
type: string
volumeHandle:
description: volumeHandle specifies the CSI "volume_id" of the
volume from which a snapshot should be dynamically taken from.
This field is immutable.
type: string
type: object
oneOf:
- required: ["snapshotHandle"]
- required: ["volumeHandle"]
sourceVolumeMode:
description: SourceVolumeMode is the mode of the volume whose snapshot
is taken. Can be either “Filesystem” or “Block”. If not specified,
it indicates the source volume's mode is unknown. This field is
immutable. This field is an alpha field.
type: string
volumeSnapshotClassName:
description: name of the VolumeSnapshotClass from which this snapshot
was (or will be) created. Note that after provisioning, the VolumeSnapshotClass
may be deleted or recreated with different set of values, and as
such, should not be referenced post-snapshot creation.
type: string
volumeSnapshotRef:
description: volumeSnapshotRef specifies the VolumeSnapshot object
to which this VolumeSnapshotContent object is bound. VolumeSnapshot.Spec.VolumeSnapshotContentName
field must reference to this VolumeSnapshotContent's name for the
bidirectional binding to be valid. For a pre-existing VolumeSnapshotContent
object, name and namespace of the VolumeSnapshot object MUST be
provided for binding to happen. This field is immutable after creation.
Required.
properties:
apiVersion:
description: API version of the referent.
type: string
fieldPath:
description: 'If referring to a piece of an object instead of
an entire object, this string should contain a valid JSON/Go
field access statement, such as desiredState.manifest.containers[2].
For example, if the object reference is to a container within
a pod, this would take on a value like: "spec.containers{name}"
(where "name" refers to the name of the container that triggered
the event) or if no container name is specified "spec.containers[2]"
(container with index 2 in this pod). This syntax is chosen
only to have some well-defined way of referencing a part of
an object. TODO: this design is not final and this field is
subject to change in the future.'
type: string
kind:
description: 'Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names'
type: string
namespace:
description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/'
type: string
resourceVersion:
description: 'Specific resourceVersion to which this reference
is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency'
type: string
uid:
description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids'
type: string
type: object
x-kubernetes-map-type: atomic
required:
- deletionPolicy
- driver
- source
- volumeSnapshotRef
type: object
status:
description: status represents the current information of a snapshot.
properties:
creationTime:
description: creationTime is the timestamp when the point-in-time
snapshot is taken by the underlying storage system. In dynamic snapshot
creation case, this field will be filled in by the CSI snapshotter
sidecar with the "creation_time" value returned from CSI "CreateSnapshot"
gRPC call. For a pre-existing snapshot, this field will be filled
with the "creation_time" value returned from the CSI "ListSnapshots"
gRPC call if the driver supports it. If not specified, it indicates
the creation time is unknown. The format of this field is a Unix
nanoseconds time encoded as an int64. On Unix, the command `date
+%s%N` returns the current time in nanoseconds since 1970-01-01
00:00:00 UTC.
format: int64
type: integer
error:
description: error is the last observed error during snapshot creation,
if any. Upon success after retry, this error field will be cleared.
properties:
message:
description: 'message is a string detailing the encountered error
during snapshot creation if specified. NOTE: message may be
logged, and it should not contain sensitive information.'
type: string
time:
description: time is the timestamp when the error was encountered.
format: date-time
type: string
type: object
readyToUse:
description: readyToUse indicates if a snapshot is ready to be used
to restore a volume. In dynamic snapshot creation case, this field
will be filled in by the CSI snapshotter sidecar with the "ready_to_use"
value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing
snapshot, this field will be filled with the "ready_to_use" value
returned from the CSI "ListSnapshots" gRPC call if the driver supports
it, otherwise, this field will be set to "True". If not specified,
it means the readiness of a snapshot is unknown.
type: boolean
restoreSize:
description: restoreSize represents the complete size of the snapshot
in bytes. In dynamic snapshot creation case, this field will be
filled in by the CSI snapshotter sidecar with the "size_bytes" value
returned from CSI "CreateSnapshot" gRPC call. For a pre-existing
snapshot, this field will be filled with the "size_bytes" value
returned from the CSI "ListSnapshots" gRPC call if the driver supports
it. When restoring a volume from this snapshot, the size of the
volume MUST NOT be smaller than the restoreSize if it is specified,
otherwise the restoration will fail. If not specified, it indicates
that the size is unknown.
format: int64
minimum: 0
type: integer
snapshotHandle:
description: snapshotHandle is the CSI "snapshot_id" of a snapshot
on the underlying storage system. If not specified, it indicates
that dynamic snapshot creation has either failed or it is still
in progress.
type: string
volumeGroupSnapshotHandle:
description: VolumeGroupSnapshotHandle is the CSI "group_snapshot_id"
of a group snapshot on the underlying storage system.
type: string
type: object
required:
- spec
type: object
served: true
storage: true
subresources:
status: {}
- additionalPrinterColumns:
- description: Indicates if the snapshot is ready to be used to restore a volume.
jsonPath: .status.readyToUse
name: ReadyToUse
type: boolean
- description: Represents the complete size of the snapshot in bytes
jsonPath: .status.restoreSize
name: RestoreSize
type: integer
- description: Determines whether this VolumeSnapshotContent and its physical snapshot on the underlying storage system should be deleted when its bound VolumeSnapshot is deleted.
jsonPath: .spec.deletionPolicy
name: DeletionPolicy
type: string
- description: Name of the CSI driver used to create the physical snapshot on the underlying storage system.
jsonPath: .spec.driver
name: Driver
type: string
- description: Name of the VolumeSnapshotClass to which this snapshot belongs.
jsonPath: .spec.volumeSnapshotClassName
name: VolumeSnapshotClass
type: string
- description: Name of the VolumeSnapshot object to which this VolumeSnapshotContent object is bound.
jsonPath: .spec.volumeSnapshotRef.name
name: VolumeSnapshot
type: string
- description: Namespace of the VolumeSnapshot object to which this VolumeSnapshotContent object is bound.
jsonPath: .spec.volumeSnapshotRef.namespace
name: VolumeSnapshotNamespace
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1beta1
# This indicates the v1beta1 version of the custom resource is deprecated.
# API requests to this version receive a warning in the server response.
deprecated: true
# This overrides the default warning returned to clients making v1beta1 API requests.
deprecationWarning: "snapshot.storage.k8s.io/v1beta1 VolumeSnapshotContent is deprecated; use snapshot.storage.k8s.io/v1 VolumeSnapshotContent"
schema:
openAPIV3Schema:
description: VolumeSnapshotContent represents the actual "on-disk" snapshot object in the underlying storage system
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
spec:
description: spec defines properties of a VolumeSnapshotContent created by the underlying storage system. Required.
properties:
deletionPolicy:
description: deletionPolicy determines whether this VolumeSnapshotContent and its physical snapshot on the underlying storage system should be deleted when its bound VolumeSnapshot is deleted. Supported values are "Retain" and "Delete". "Retain" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are kept. "Delete" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are deleted. For dynamically provisioned snapshots, this field will automatically be filled in by the CSI snapshotter sidecar with the "DeletionPolicy" field defined in the corresponding VolumeSnapshotClass. For pre-existing snapshots, users MUST specify this field when creating the VolumeSnapshotContent object. Required.
enum:
- Delete
- Retain
type: string
driver:
description: driver is the name of the CSI driver used to create the physical snapshot on the underlying storage system. This MUST be the same as the name returned by the CSI GetPluginName() call for that driver. Required.
type: string
source:
description: source specifies whether the snapshot is (or should be) dynamically provisioned or already exists, and just requires a Kubernetes object representation. This field is immutable after creation. Required.
properties:
snapshotHandle:
description: snapshotHandle specifies the CSI "snapshot_id" of a pre-existing snapshot on the underlying storage system for which a Kubernetes object representation was (or should be) created. This field is immutable.
type: string
volumeHandle:
description: volumeHandle specifies the CSI "volume_id" of the volume from which a snapshot should be dynamically taken from. This field is immutable.
type: string
type: object
volumeSnapshotClassName:
description: name of the VolumeSnapshotClass from which this snapshot was (or will be) created. Note that after provisioning, the VolumeSnapshotClass may be deleted or recreated with different set of values, and as such, should not be referenced post-snapshot creation.
type: string
volumeSnapshotRef:
description: volumeSnapshotRef specifies the VolumeSnapshot object to which this VolumeSnapshotContent object is bound. VolumeSnapshot.Spec.VolumeSnapshotContentName field must reference to this VolumeSnapshotContent's name for the bidirectional binding to be valid. For a pre-existing VolumeSnapshotContent object, name and namespace of the VolumeSnapshot object MUST be provided for binding to happen. This field is immutable after creation. Required.
properties:
apiVersion:
description: API version of the referent.
type: string
fieldPath:
description: 'If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.'
type: string
kind:
description: 'Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names'
type: string
namespace:
description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/'
type: string
resourceVersion:
description: 'Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency'
type: string
uid:
description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids'
type: string
type: object
required:
- deletionPolicy
- driver
- source
- volumeSnapshotRef
type: object
status:
description: status represents the current information of a snapshot.
properties:
creationTime:
description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC.
format: int64
type: integer
error:
description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared.
properties:
message:
description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
type: string
time:
description: time is the timestamp when the error was encountered.
format: date-time
type: string
type: object
readyToUse:
description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
type: boolean
restoreSize:
description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
format: int64
minimum: 0
type: integer
snapshotHandle:
description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress.
type: string
type: object
required:
- spec
type: object
served: false
storage: false
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []


@@ -0,0 +1,314 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
api-approved.kubernetes.io: "https://github.com/kubernetes-csi/external-snapshotter/pull/814"
controller-gen.kubebuilder.io/version: v0.12.0
creationTimestamp: null
name: volumesnapshots.snapshot.storage.k8s.io
spec:
group: snapshot.storage.k8s.io
names:
kind: VolumeSnapshot
listKind: VolumeSnapshotList
plural: volumesnapshots
shortNames:
- vs
singular: volumesnapshot
scope: Namespaced
versions:
- additionalPrinterColumns:
- description: Indicates if the snapshot is ready to be used to restore a volume.
jsonPath: .status.readyToUse
name: ReadyToUse
type: boolean
- description: If a new snapshot needs to be created, this contains the name of
the source PVC from which this snapshot was (or will be) created.
jsonPath: .spec.source.persistentVolumeClaimName
name: SourcePVC
type: string
- description: If a snapshot already exists, this contains the name of the existing
VolumeSnapshotContent object representing the existing snapshot.
jsonPath: .spec.source.volumeSnapshotContentName
name: SourceSnapshotContent
type: string
- description: Represents the minimum size of volume required to rehydrate from
this snapshot.
jsonPath: .status.restoreSize
name: RestoreSize
type: string
- description: The name of the VolumeSnapshotClass requested by the VolumeSnapshot.
jsonPath: .spec.volumeSnapshotClassName
name: SnapshotClass
type: string
- description: Name of the VolumeSnapshotContent object to which the VolumeSnapshot
object intends to bind. Please note that verification of binding actually
requires checking both VolumeSnapshot and VolumeSnapshotContent to ensure
both are pointing at each other. Binding MUST be verified prior to usage of
this object.
jsonPath: .status.boundVolumeSnapshotContentName
name: SnapshotContent
type: string
- description: Timestamp when the point-in-time snapshot was taken by the underlying
storage system.
jsonPath: .status.creationTime
name: CreationTime
type: date
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1
schema:
openAPIV3Schema:
description: VolumeSnapshot is a user's request for either creating a point-in-time
snapshot of a persistent volume, or binding to a pre-existing snapshot.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: 'spec defines the desired characteristics of a snapshot requested
by a user. More info: https://kubernetes.io/docs/concepts/storage/volume-snapshots#volumesnapshots
Required.'
properties:
source:
description: source specifies where a snapshot will be created from.
This field is immutable after creation. Required.
properties:
persistentVolumeClaimName:
description: persistentVolumeClaimName specifies the name of the
PersistentVolumeClaim object representing the volume from which
a snapshot should be created. This PVC is assumed to be in the
same namespace as the VolumeSnapshot object. This field should
be set if the snapshot does not exist and needs to be created.
This field is immutable.
type: string
volumeSnapshotContentName:
description: volumeSnapshotContentName specifies the name of a
pre-existing VolumeSnapshotContent object representing an existing
volume snapshot. This field should be set if the snapshot already
exists and only needs a representation in Kubernetes. This field
is immutable.
type: string
type: object
oneOf:
- required: ["persistentVolumeClaimName"]
- required: ["volumeSnapshotContentName"]
volumeSnapshotClassName:
description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass
requested by the VolumeSnapshot. VolumeSnapshotClassName may be
left nil to indicate that the default SnapshotClass should be used.
A given cluster may have multiple default VolumeSnapshotClasses:
one default per CSI Driver. If a VolumeSnapshot does not specify
a SnapshotClass, VolumeSnapshotSource will be checked to figure
out what the associated CSI Driver is, and the default VolumeSnapshotClass
associated with that CSI Driver will be used. If more than one VolumeSnapshotClass
exists for a given CSI Driver and more than one have been marked
as default, CreateSnapshot will fail and generate an event. Empty
string is not allowed for this field.'
type: string
required:
- source
type: object
status:
description: status represents the current information of a snapshot.
Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent
objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent
point at each other) before using this object.
properties:
boundVolumeSnapshotContentName:
description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent
object to which this VolumeSnapshot object intends to bind. If
not specified, it indicates that the VolumeSnapshot object has not
been successfully bound to a VolumeSnapshotContent object yet. NOTE:
To avoid possible security issues, consumers must verify binding
between VolumeSnapshot and VolumeSnapshotContent objects is successful
(by validating that both VolumeSnapshot and VolumeSnapshotContent
point at each other) before using this object.'
type: string
creationTime:
description: creationTime is the timestamp when the point-in-time
snapshot is taken by the underlying storage system. In dynamic snapshot
creation case, this field will be filled in by the snapshot controller
with the "creation_time" value returned from CSI "CreateSnapshot"
gRPC call. For a pre-existing snapshot, this field will be filled
with the "creation_time" value returned from the CSI "ListSnapshots"
gRPC call if the driver supports it. If not specified, it may indicate
that the creation time of the snapshot is unknown.
format: date-time
type: string
error:
description: error is the last observed error during snapshot creation,
if any. This field could be helpful to upper-level controllers (e.g.,
an application controller) to decide whether they should continue
waiting for the snapshot to be created based on the type of error
reported. The snapshot controller will keep retrying when an error
occurs during the snapshot creation. Upon success, this error field
will be cleared.
properties:
message:
description: 'message is a string detailing the encountered error
during snapshot creation if specified. NOTE: message may be
logged, and it should not contain sensitive information.'
type: string
time:
description: time is the timestamp when the error was encountered.
format: date-time
type: string
type: object
readyToUse:
description: readyToUse indicates if the snapshot is ready to be used
to restore a volume. In dynamic snapshot creation case, this field
will be filled in by the snapshot controller with the "ready_to_use"
value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing
snapshot, this field will be filled with the "ready_to_use" value
returned from the CSI "ListSnapshots" gRPC call if the driver supports
it, otherwise, this field will be set to "True". If not specified,
it means the readiness of a snapshot is unknown.
type: boolean
restoreSize:
type: string
description: restoreSize represents the minimum size of volume required
to create a volume from this snapshot. In dynamic snapshot creation
case, this field will be filled in by the snapshot controller with
the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call.
For a pre-existing snapshot, this field will be filled with the
"size_bytes" value returned from the CSI "ListSnapshots" gRPC call
if the driver supports it. When restoring a volume from this snapshot,
the size of the volume MUST NOT be smaller than the restoreSize
if it is specified, otherwise the restoration will fail. If not
specified, it indicates that the size is unknown.
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
volumeGroupSnapshotName:
description: VolumeGroupSnapshotName is the name of the VolumeGroupSnapshot
of which this VolumeSnapshot is a part.
type: string
type: object
required:
- spec
type: object
served: true
storage: true
subresources:
status: {}
- additionalPrinterColumns:
- description: Indicates if the snapshot is ready to be used to restore a volume.
jsonPath: .status.readyToUse
name: ReadyToUse
type: boolean
- description: If a new snapshot needs to be created, this contains the name of the source PVC from which this snapshot was (or will be) created.
jsonPath: .spec.source.persistentVolumeClaimName
name: SourcePVC
type: string
- description: If a snapshot already exists, this contains the name of the existing VolumeSnapshotContent object representing the existing snapshot.
jsonPath: .spec.source.volumeSnapshotContentName
name: SourceSnapshotContent
type: string
- description: Represents the minimum size of volume required to rehydrate from this snapshot.
jsonPath: .status.restoreSize
name: RestoreSize
type: string
- description: The name of the VolumeSnapshotClass requested by the VolumeSnapshot.
jsonPath: .spec.volumeSnapshotClassName
name: SnapshotClass
type: string
- description: Name of the VolumeSnapshotContent object to which the VolumeSnapshot object intends to bind. Please note that verification of binding actually requires checking both VolumeSnapshot and VolumeSnapshotContent to ensure both are pointing at each other. Binding MUST be verified prior to usage of this object.
jsonPath: .status.boundVolumeSnapshotContentName
name: SnapshotContent
type: string
- description: Timestamp when the point-in-time snapshot was taken by the underlying storage system.
jsonPath: .status.creationTime
name: CreationTime
type: date
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1beta1
# This indicates the v1beta1 version of the custom resource is deprecated.
# API requests to this version receive a warning in the server response.
deprecated: true
# This overrides the default warning returned to clients making v1beta1 API requests.
deprecationWarning: "snapshot.storage.k8s.io/v1beta1 VolumeSnapshot is deprecated; use snapshot.storage.k8s.io/v1 VolumeSnapshot"
schema:
openAPIV3Schema:
description: VolumeSnapshot is a user's request for either creating a point-in-time snapshot of a persistent volume, or binding to a pre-existing snapshot.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
spec:
description: 'spec defines the desired characteristics of a snapshot requested by a user. More info: https://kubernetes.io/docs/concepts/storage/volume-snapshots#volumesnapshots Required.'
properties:
source:
description: source specifies where a snapshot will be created from. This field is immutable after creation. Required.
properties:
persistentVolumeClaimName:
description: persistentVolumeClaimName specifies the name of the PersistentVolumeClaim object representing the volume from which a snapshot should be created. This PVC is assumed to be in the same namespace as the VolumeSnapshot object. This field should be set if the snapshot does not exist and needs to be created. This field is immutable.
type: string
volumeSnapshotContentName:
description: volumeSnapshotContentName specifies the name of a pre-existing VolumeSnapshotContent object representing an existing volume snapshot. This field should be set if the snapshot already exists and only needs a representation in Kubernetes. This field is immutable.
type: string
type: object
volumeSnapshotClassName:
description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default VolumeSnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exists for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.'
type: string
required:
- source
type: object
status:
description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.
properties:
boundVolumeSnapshotContentName:
description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.'
type: string
creationTime:
description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown.
format: date-time
type: string
error:
description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper-level controllers (e.g., an application controller) to decide whether they should continue waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurs during the snapshot creation. Upon success, this error field will be cleared.
properties:
message:
description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.'
type: string
time:
description: time is the timestamp when the error was encountered.
format: date-time
type: string
type: object
readyToUse:
description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown.
type: boolean
restoreSize:
type: string
description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown.
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
type: object
required:
- spec
type: object
served: false
storage: false
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
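Putting the v1 schema above to use, a minimal dynamically created VolumeSnapshot could look like the sketch below; the PVC and class names are hypothetical. Note that spec.source must set exactly one of persistentVolumeClaimName or volumeSnapshotContentName (the oneOf constraint in the schema):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: example-snapshot                 # hypothetical name
  namespace: default
spec:
  volumeSnapshotClassName: example-snapclass   # omit to use the default class
  source:
    # exactly one source field may be set; here the dynamic (PVC) case
    persistentVolumeClaimName: example-pvc
```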

View File

@ -2,7 +2,7 @@
# are needed only because of condition explained in
# https://github.com/kubernetes/kubernetes/issues/69608
---
apiVersion: storage.k8s.io/v1beta1
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
name: zfssa-csi-driver
@ -55,6 +55,15 @@ spec:
- name: registration-dir
mountPath: /registration
- name: liveness-probe
imagePullPolicy: Always
image: {{ .Values.image.sidecarBase }}{{ .Values.images.csiLivenessProbe.name }}:{{ .Values.images.csiLivenessProbe.tag }}
args:
- --csi-address=/plugin/csi.sock
volumeMounts:
- mountPath: {{ .Values.paths.pluginDir.mountPath }}
name: socket-dir
- name: zfssabs
image: {{ .Values.image.zfssaBase }}{{ .Values.images.zfssaCsiDriver.name }}:{{ .Values.images.zfssaCsiDriver.tag }}
args:
@ -87,6 +96,18 @@ spec:
imagePullPolicy: {{ .Values.image.pullPolicy }}
securityContext:
privileged: true
ports:
- containerPort: 9808
name: healthz
protocol: TCP
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthz
port: healthz
initialDelaySeconds: 10
timeoutSeconds: 3
periodSeconds: 2
volumeMounts:
- name: socket-dir
mountPath: {{ .Values.paths.pluginDir.mountPath }}

View File

@ -60,6 +60,9 @@ spec:
- --csi-address=/plugin/csi.sock
- --timeout=30s
- --feature-gates=Topology=true
env:
- name: ADDRESS
value: /plugin/csi.sock
imagePullPolicy: {{ .Values.image.pullPolicy }}
securityContext:
# This is necessary only for systems with SELinux, where

View File

@ -56,7 +56,7 @@ roleRef:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: default # TODO: replace with the namespace you want for your controller
namespace: {{ .Values.deployment.namespace }}
name: snapshot-controller-leaderelection
rules:
- apiGroups: ["coordination.k8s.io"]
@ -68,11 +68,12 @@ kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: snapshot-controller-leaderelection
namespace: default # TODO: replace with the namespace you want for your controller
namespace: {{ .Values.deployment.namespace }}
subjects:
- kind: ServiceAccount
name: snapshot-controller
namespace: default # TODO: replace with the namespace you want for your controller
namespace: {{ .Values.deployment.namespace }}
roleRef:
kind: Role
name: snapshot-controller-leaderelection

View File

@ -5,6 +5,7 @@ kind: StatefulSet
apiVersion: apps/v1
metadata:
name: snapshot-controller
namespace: {{ .Values.deployment.namespace }}
spec:
serviceName: "snapshot-controller"
replicas: 1
@ -19,7 +20,7 @@ spec:
serviceAccount: snapshot-controller
containers:
- name: snapshot-controller
image: quay.io/k8scsi/snapshot-controller:v2.1.1
image: {{ .Values.image.sidecarBase }}{{ .Values.images.csiSnapshotController.name }}:{{ .Values.images.csiSnapshotController.tag }}
args:
- "--v=5"
- "--leader-election=false"

View File

@ -1,6 +1,6 @@
# Global docker image setting
image:
sidecarBase: k8s.gcr.io/sig-storage/
sidecarBase: container-registry.oracle.com/olcne/
zfssaBase: iad.ocir.io/zs/store/csi/
pullPolicy: Always
@ -8,22 +8,28 @@ image:
images:
csiNodeDriverRegistrar:
name: csi-node-driver-registrar
tag: "v2.0.0"
tag: "v2.9.0"
zfssaCsiDriver:
name: zfssa-csi-driver
tag: "v1.0.0"
tag: "v1.2.0"
csiProvisioner:
name: csi-provisioner
tag: "v2.0.5"
tag: "v3.6.0"
csiAttacher:
name: csi-attacher
tag: "v3.0.2"
tag: "v4.4.0"
csiResizer:
name: csi-resizer
tag: "v1.1.0"
tag: "v1.9.0"
csiSnapshotter:
name: csi-snapshotter
tag: "v3.0.3"
tag: "v6.3.0"
csiLivenessProbe:
name: livenessprobe
tag: "v2.11.0"
csiSnapshotController:
name: snapshot-controller
tag: "v6.3.0"
paths:
pluginDir:
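With the image coordinates factored into values.yaml as above, a deployment can repoint the sidecar registry or pin different tags with a small override file passed via `helm install -f`; the registry path below is a hypothetical mirror, not a real endpoint:

```yaml
# my-values.yaml -- illustrative override; only the keys you change are needed
image:
  sidecarBase: registry.example.com/mirror/
  pullPolicy: IfNotPresent
images:
  csiProvisioner:
    name: csi-provisioner
    tag: "v3.6.0"
```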

View File

@ -1,85 +0,0 @@
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.2.5
api-approved.kubernetes.io: "https://github.com/kubernetes-csi/external-snapshotter/pull/260"
creationTimestamp: null
name: volumesnapshotclasses.snapshot.storage.k8s.io
spec:
additionalPrinterColumns:
- JSONPath: .driver
name: Driver
type: string
- JSONPath: .deletionPolicy
description: Determines whether a VolumeSnapshotContent created through the VolumeSnapshotClass
should be deleted when its bound VolumeSnapshot is deleted.
name: DeletionPolicy
type: string
- JSONPath: .metadata.creationTimestamp
name: Age
type: date
group: snapshot.storage.k8s.io
names:
kind: VolumeSnapshotClass
listKind: VolumeSnapshotClassList
plural: volumesnapshotclasses
singular: volumesnapshotclass
preserveUnknownFields: false
scope: Cluster
subresources: {}
validation:
openAPIV3Schema:
description: VolumeSnapshotClass specifies parameters that an underlying storage
system uses when creating a volume snapshot. A specific VolumeSnapshotClass
is used by specifying its name in a VolumeSnapshot object. VolumeSnapshotClasses
are non-namespaced
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
deletionPolicy:
description: deletionPolicy determines whether a VolumeSnapshotContent created
through the VolumeSnapshotClass should be deleted when its bound VolumeSnapshot
is deleted. Supported values are "Retain" and "Delete". "Retain" means
that the VolumeSnapshotContent and its physical snapshot on underlying
storage system are kept. "Delete" means that the VolumeSnapshotContent
and its physical snapshot on underlying storage system are deleted. Required.
enum:
- Delete
- Retain
type: string
driver:
description: driver is the name of the storage driver that handles this
VolumeSnapshotClass. Required.
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
parameters:
additionalProperties:
type: string
description: parameters is a key-value map with storage driver specific
parameters for creating snapshots. These values are opaque to Kubernetes.
type: object
required:
- deletionPolicy
- driver
type: object
version: v1beta1
versions:
- name: v1beta1
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
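The deleted v1beta1 class definition above is superseded by snapshot.storage.k8s.io/v1; a class for this driver would be declared along these lines (the class name is illustrative and parameters are driver-specific placeholders):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: zfssa-snapclass        # hypothetical name
driver: zfssa-csi-driver
deletionPolicy: Delete         # or Retain to keep appliance snapshots after deletion
parameters: {}                 # opaque to Kubernetes; consult the driver docs
```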

View File

@ -1,233 +0,0 @@
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.2.5
api-approved.kubernetes.io: "https://github.com/kubernetes-csi/external-snapshotter/pull/260"
creationTimestamp: null
name: volumesnapshotcontents.snapshot.storage.k8s.io
spec:
additionalPrinterColumns:
- JSONPath: .status.readyToUse
description: Indicates if a snapshot is ready to be used to restore a volume.
name: ReadyToUse
type: boolean
- JSONPath: .status.restoreSize
description: Represents the complete size of the snapshot in bytes
name: RestoreSize
type: integer
- JSONPath: .spec.deletionPolicy
description: Determines whether this VolumeSnapshotContent and its physical snapshot
on the underlying storage system should be deleted when its bound VolumeSnapshot
is deleted.
name: DeletionPolicy
type: string
- JSONPath: .spec.driver
description: Name of the CSI driver used to create the physical snapshot on the
underlying storage system.
name: Driver
type: string
- JSONPath: .spec.volumeSnapshotClassName
description: Name of the VolumeSnapshotClass to which this snapshot belongs.
name: VolumeSnapshotClass
type: string
- JSONPath: .spec.volumeSnapshotRef.name
description: Name of the VolumeSnapshot object to which this VolumeSnapshotContent
object is bound.
name: VolumeSnapshot
type: string
- JSONPath: .metadata.creationTimestamp
name: Age
type: date
group: snapshot.storage.k8s.io
names:
kind: VolumeSnapshotContent
listKind: VolumeSnapshotContentList
plural: volumesnapshotcontents
singular: volumesnapshotcontent
preserveUnknownFields: false
scope: Cluster
subresources:
status: {}
validation:
openAPIV3Schema:
description: VolumeSnapshotContent represents the actual "on-disk" snapshot
object in the underlying storage system
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
spec:
description: spec defines properties of a VolumeSnapshotContent created
by the underlying storage system. Required.
properties:
deletionPolicy:
description: deletionPolicy determines whether this VolumeSnapshotContent
and its physical snapshot on the underlying storage system should
be deleted when its bound VolumeSnapshot is deleted. Supported values
are "Retain" and "Delete". "Retain" means that the VolumeSnapshotContent
and its physical snapshot on underlying storage system are kept. "Delete"
means that the VolumeSnapshotContent and its physical snapshot on
underlying storage system are deleted. In dynamic snapshot creation
case, this field will be filled in with the "DeletionPolicy" field
defined in the VolumeSnapshotClass the VolumeSnapshot refers to. For
pre-existing snapshots, users MUST specify this field when creating
the VolumeSnapshotContent object. Required.
enum:
- Delete
- Retain
type: string
driver:
description: driver is the name of the CSI driver used to create the
physical snapshot on the underlying storage system. This MUST be the
same as the name returned by the CSI GetPluginName() call for that
driver. Required.
type: string
source:
description: source specifies from where a snapshot will be created.
This field is immutable after creation. Required.
properties:
snapshotHandle:
description: snapshotHandle specifies the CSI "snapshot_id" of a
pre-existing snapshot on the underlying storage system. This field
is immutable.
type: string
volumeHandle:
description: volumeHandle specifies the CSI "volume_id" of the volume
from which a snapshot should be dynamically taken from. This field
is immutable.
type: string
type: object
volumeSnapshotClassName:
description: name of the VolumeSnapshotClass to which this snapshot
belongs.
type: string
volumeSnapshotRef:
description: volumeSnapshotRef specifies the VolumeSnapshot object to
which this VolumeSnapshotContent object is bound. VolumeSnapshot.Spec.VolumeSnapshotContentName
field must reference to this VolumeSnapshotContent's name for the
bidirectional binding to be valid. For a pre-existing VolumeSnapshotContent
object, name and namespace of the VolumeSnapshot object MUST be provided
for binding to happen. This field is immutable after creation. Required.
properties:
apiVersion:
description: API version of the referent.
type: string
fieldPath:
description: 'If referring to a piece of an object instead of an
entire object, this string should contain a valid JSON/Go field
access statement, such as desiredState.manifest.containers[2].
For example, if the object reference is to a container within
a pod, this would take on a value like: "spec.containers{name}"
(where "name" refers to the name of the container that triggered
the event) or if no container name is specified "spec.containers[2]"
(container with index 2 in this pod). This syntax is chosen only
to have some well-defined way of referencing a part of an object.
TODO: this design is not final and this field is subject to change
in the future.'
type: string
kind:
description: 'Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names'
type: string
namespace:
description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/'
type: string
resourceVersion:
description: 'Specific resourceVersion to which this reference is
made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency'
type: string
uid:
description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids'
type: string
type: object
required:
- deletionPolicy
- driver
- source
- volumeSnapshotRef
type: object
status:
description: status represents the current information of a snapshot.
properties:
creationTime:
description: creationTime is the timestamp when the point-in-time snapshot
is taken by the underlying storage system. In dynamic snapshot creation
case, this field will be filled in with the "creation_time" value
returned from CSI "CreateSnapshotRequest" gRPC call. For a pre-existing
snapshot, this field will be filled with the "creation_time" value
returned from the CSI "ListSnapshots" gRPC call if the driver supports
it. If not specified, it indicates the creation time is unknown. The
format of this field is a Unix nanoseconds time encoded as an int64.
On Unix, the command `date +%s%N` returns the current time in nanoseconds
since 1970-01-01 00:00:00 UTC.
format: int64
type: integer
error:
description: error is the latest observed error during snapshot creation,
if any.
properties:
message:
description: 'message is a string detailing the encountered error
during snapshot creation if specified. NOTE: message may be logged,
and it should not contain sensitive information.'
type: string
time:
description: time is the timestamp when the error was encountered.
format: date-time
type: string
type: object
readyToUse:
description: readyToUse indicates if a snapshot is ready to be used
to restore a volume. In dynamic snapshot creation case, this field
will be filled in with the "ready_to_use" value returned from CSI
"CreateSnapshotRequest" gRPC call. For a pre-existing snapshot, this
field will be filled with the "ready_to_use" value returned from the
CSI "ListSnapshots" gRPC call if the driver supports it, otherwise,
this field will be set to "True". If not specified, it means the readiness
of a snapshot is unknown.
type: boolean
restoreSize:
description: restoreSize represents the complete size of the snapshot
in bytes. In dynamic snapshot creation case, this field will be filled
in with the "size_bytes" value returned from CSI "CreateSnapshotRequest"
gRPC call. For a pre-existing snapshot, this field will be filled
with the "size_bytes" value returned from the CSI "ListSnapshots"
gRPC call if the driver supports it. When restoring a volume from
this snapshot, the size of the volume MUST NOT be smaller than the
restoreSize if it is specified, otherwise the restoration will fail.
If not specified, it indicates that the size is unknown.
format: int64
minimum: 0
type: integer
snapshotHandle:
description: snapshotHandle is the CSI "snapshot_id" of a snapshot on
the underlying storage system. If not specified, it indicates that
dynamic snapshot creation has either failed or it is still in progress.
type: string
type: object
required:
- spec
type: object
version: v1beta1
versions:
- name: v1beta1
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []


@ -1,188 +0,0 @@
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.2.5
api-approved.kubernetes.io: "https://github.com/kubernetes-csi/external-snapshotter/pull/260"
creationTimestamp: null
name: volumesnapshots.snapshot.storage.k8s.io
spec:
additionalPrinterColumns:
- JSONPath: .status.readyToUse
description: Indicates if a snapshot is ready to be used to restore a volume.
name: ReadyToUse
type: boolean
- JSONPath: .spec.source.persistentVolumeClaimName
description: Name of the source PVC from where a dynamically taken snapshot will
be created.
name: SourcePVC
type: string
- JSONPath: .spec.source.volumeSnapshotContentName
description: Name of the VolumeSnapshotContent which represents a pre-provisioned
snapshot.
name: SourceSnapshotContent
type: string
- JSONPath: .status.restoreSize
description: Represents the complete size of the snapshot.
name: RestoreSize
type: string
- JSONPath: .spec.volumeSnapshotClassName
description: The name of the VolumeSnapshotClass requested by the VolumeSnapshot.
name: SnapshotClass
type: string
- JSONPath: .status.boundVolumeSnapshotContentName
description: The name of the VolumeSnapshotContent to which this VolumeSnapshot
is bound.
name: SnapshotContent
type: string
- JSONPath: .status.creationTime
description: Timestamp when the point-in-time snapshot is taken by the underlying
storage system.
name: CreationTime
type: date
- JSONPath: .metadata.creationTimestamp
name: Age
type: date
group: snapshot.storage.k8s.io
names:
kind: VolumeSnapshot
listKind: VolumeSnapshotList
plural: volumesnapshots
singular: volumesnapshot
preserveUnknownFields: false
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
description: VolumeSnapshot is a user's request for either creating a point-in-time
snapshot of a persistent volume, or binding to a pre-existing snapshot.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
spec:
description: 'spec defines the desired characteristics of a snapshot requested
by a user. More info: https://kubernetes.io/docs/concepts/storage/volume-snapshots#volumesnapshots
Required.'
properties:
source:
description: source specifies where a snapshot will be created from.
This field is immutable after creation. Required.
properties:
persistentVolumeClaimName:
description: persistentVolumeClaimName specifies the name of the
PersistentVolumeClaim object in the same namespace as the VolumeSnapshot
object where the snapshot should be dynamically taken from. This
field is immutable.
type: string
volumeSnapshotContentName:
description: volumeSnapshotContentName specifies the name of a pre-existing
VolumeSnapshotContent object. This field is immutable.
type: string
type: object
volumeSnapshotClassName:
description: 'volumeSnapshotClassName is the name of the VolumeSnapshotClass
requested by the VolumeSnapshot. If not specified, the default snapshot
class will be used if one exists. If not specified, and there is no
default snapshot class, dynamic snapshot creation will fail. Empty
string is not allowed for this field. TODO(xiangqian): a webhook validation
on empty string. More info: https://kubernetes.io/docs/concepts/storage/volume-snapshot-classes'
type: string
required:
- source
type: object
status:
description: 'status represents the current information of a snapshot. NOTE:
status can be modified by sources other than system controllers, and must
not be depended upon for accuracy. Controllers should only use information
from the VolumeSnapshotContent object after verifying that the binding
is accurate and complete.'
properties:
boundVolumeSnapshotContentName:
description: 'boundVolumeSnapshotContentName represents the name of
the VolumeSnapshotContent object to which the VolumeSnapshot object
is bound. If not specified, it indicates that the VolumeSnapshot object
has not been successfully bound to a VolumeSnapshotContent object
yet. NOTE: Specified boundVolumeSnapshotContentName alone does not
mean binding is valid. Controllers MUST always verify bidirectional
binding between VolumeSnapshot and VolumeSnapshotContent to
avoid possible security issues.'
type: string
creationTime:
description: creationTime is the timestamp when the point-in-time snapshot
is taken by the underlying storage system. In dynamic snapshot creation
case, this field will be filled in with the "creation_time" value
returned from CSI "CreateSnapshotRequest" gRPC call. For a pre-existing
snapshot, this field will be filled with the "creation_time" value
returned from the CSI "ListSnapshots" gRPC call if the driver supports
it. If not specified, it indicates that the creation time of the snapshot
is unknown.
format: date-time
type: string
error:
description: error is the last observed error during snapshot creation,
if any. This field could be helpful to upper level controllers(i.e.,
application controller) to decide whether they should continue on
waiting for the snapshot to be created based on the type of error
reported.
properties:
message:
description: 'message is a string detailing the encountered error
during snapshot creation if specified. NOTE: message may be logged,
and it should not contain sensitive information.'
type: string
time:
description: time is the timestamp when the error was encountered.
format: date-time
type: string
type: object
readyToUse:
description: readyToUse indicates if a snapshot is ready to be used
to restore a volume. In dynamic snapshot creation case, this field
will be filled in with the "ready_to_use" value returned from CSI
"CreateSnapshotRequest" gRPC call. For a pre-existing snapshot, this
field will be filled with the "ready_to_use" value returned from the
CSI "ListSnapshots" gRPC call if the driver supports it, otherwise,
this field will be set to "True". If not specified, it means the readiness
of a snapshot is unknown.
type: boolean
restoreSize:
anyOf:
- type: integer
- type: string
description: restoreSize represents the complete size of the snapshot
in bytes. In dynamic snapshot creation case, this field will be filled
in with the "size_bytes" value returned from CSI "CreateSnapshotRequest"
gRPC call. For a pre-existing snapshot, this field will be filled
with the "size_bytes" value returned from the CSI "ListSnapshots"
gRPC call if the driver supports it. When restoring a volume from
this snapshot, the size of the volume MUST NOT be smaller than the
restoreSize if it is specified, otherwise the restoration will fail.
If not specified, it indicates that the size is unknown.
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
type: object
required:
- spec
type: object
version: v1beta1
versions:
- name: v1beta1
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []


@ -16,5 +16,5 @@ parameters:
rootUser: {{ .Values.appliance.rootUser }}
rootGroup: {{ .Values.appliance.rootGroup }}
rootPermissions: "777"
shareNFS: "on"
shareNFS: {{ .Values.appliance.shareNFS }}
restrictChown: "false"


@ -7,7 +7,7 @@ metadata:
spec:
restartPolicy: Always
containers:
- image: container-registry.oracle.com/os/oraclelinux:7-slim
- image: {{ .Values.imageBase }}{{ .Values.images.os.name }}:{{ .Values.images.os.tag }}
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
name: ol7slim


@ -5,6 +5,13 @@ pvcExistingName: zfssa-block-existing-pvc
podBlockName: zfssa-block-existing-pod
applianceName: OVERRIDE
# Location for images used
imageBase: container-registry.oracle.com/os/
images:
os:
name: oraclelinux
tag: "7-slim"
# Settings for target appliance
appliance:
volumeType: thin


@ -16,5 +16,5 @@ parameters:
rootUser: {{ .Values.appliance.rootUser }}
rootGroup: {{ .Values.appliance.rootGroup }}
rootPermissions: "777"
shareNFS: "on"
shareNFS: {{ .Values.appliance.shareNFS }}
restrictChown: "false"


@ -7,7 +7,7 @@ metadata:
spec:
restartPolicy: Always
containers:
- image: container-registry.oracle.com/os/oraclelinux:7-slim
- image: {{ .Values.imageBase }}{{ .Values.images.os.name }}:{{ .Values.images.os.tag }}
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
name: ol7slim


@ -4,6 +4,13 @@ vscBlockName: zfssa-block-example-vsc
pvcBlockName: zfssa-block-example-pvc
podBlockName: zfssa-block-example-pod
# Location for images used
imageBase: container-registry.oracle.com/os/
images:
os:
name: oraclelinux
tag: "7-slim"
# Settings for target appliance
appliance:
volumeType: thin


@ -0,0 +1,4 @@
apiVersion: v1
name: zfssa-csi-block-example
version: 0.0.1
description: Deploys an end to end iSCSI volume example for Oracle ZFS Storage Appliance CSI driver.


@ -7,7 +7,7 @@ metadata:
spec:
restartPolicy: Always
containers:
- image: container-registry.oracle.com/os/oraclelinux:7-slim
- image: {{ .Values.imageBase }}{{ .Values.images.os.name }}:{{ .Values.images.os.tag }}
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
name: ol7slim


@ -0,0 +1,26 @@
# Various names used through example
scBlockName: zfssa-block-example-sc
vscBlockName: zfssa-block-example-vsc
pvcBlockName: zfssa-block-example-pvc
podBlockName: zfssa-block-example-pod
# Location for images used
imageBase: container-registry.oracle.com/os/
images:
os:
name: oraclelinux
tag: "7-slim"
# Settings for target appliance
appliance:
volumeType: thin
targetGroup: OVERRIDE
pool: OVERRIDE
project: OVERRIDE
targetPortal: OVERRIDE
nfsServer: OVERRIDE
rootUser: root
rootGroup: other
# Settings for volume
volSize: OVERRIDE


@ -16,5 +16,5 @@ parameters:
rootUser: {{ .Values.appliance.rootUser }}
rootGroup: {{ .Values.appliance.rootGroup }}
rootPermissions: "777"
shareNFS: "on"
shareNFS: {{ .Values.appliance.shareNFS }}
restrictChown: "false"


@ -7,7 +7,7 @@ metadata:
spec:
restartPolicy: Always
containers:
- image: container-registry.oracle.com/os/oraclelinux:7-slim
- image: {{ .Values.imageBase }}{{ .Values.images.os.name }}:{{ .Values.images.os.tag }}
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
name: ol7slim


@ -3,6 +3,13 @@ scBlockName: zfssa-block-example-sc
pvcBlockName: zfssa-block-example-pvc
podBlockName: zfssa-block-example-pod
# Location for images used
imageBase: container-registry.oracle.com/os/
images:
os:
name: oraclelinux
tag: "7-slim"
# Settings for target appliance
appliance:
volumeType: thin


@ -7,7 +7,7 @@ metadata:
spec:
restartPolicy: Always
containers:
- image: container-registry.oracle.com/os/oraclelinux:7-slim
- image: {{ .Values.imageBase }}{{ .Values.images.os.name }}:{{ .Values.images.os.tag }}
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
name: ol7slim


@ -3,6 +3,13 @@ scNfsName: zfssa-nfs-exp-example-sc
pvcNfsName: zfssa-nfs-exp-example-pvc
podNfsName: zfssa-nfs-exp-example-pod
# Location for images used
imageBase: container-registry.oracle.com/os/
images:
os:
name: oraclelinux
tag: "7-slim"
# Settings for target appliance
appliance:
volumeType: thin


@ -10,7 +10,7 @@ metadata:
spec:
restartPolicy: Always
containers:
- image: container-registry.oracle.com/os/oraclelinux:7-slim
- image: {{ .Values.imageBase }}{{ .Values.images.os.name }}:{{ .Values.images.os.tag }}
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
name: ol7slim


@ -8,6 +8,13 @@ pvc4: ssp-many
podNfsMultiName: zfssa-nfs-multi-example-pod
namespace: zfssa-nfs-multi
# Location for images used
imageBase: container-registry.oracle.com/os/
images:
os:
name: oraclelinux
tag: "7-slim"
# Settings for target appliance
appliance:
volumeType: thin


@ -16,5 +16,5 @@ parameters:
rootUser: {{ .Values.appliance.rootUser }}
rootGroup: {{ .Values.appliance.rootGroup }}
rootPermissions: "777"
shareNFS: "on"
shareNFS: {{ .Values.appliance.shareNFS }}
restrictChown: "false"


@ -7,7 +7,7 @@ metadata:
spec:
restartPolicy: Always
containers:
- image: container-registry.oracle.com/os/oraclelinux:7-slim
- image: {{ .Values.imageBase }}{{ .Values.images.os.name }}:{{ .Values.images.os.tag }}
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
name: ol7slim


@ -5,6 +5,13 @@ pvcExistingFilesystemName: zfssa-fs-existing-pvc
podExistingFilesystemName: zfssa-fs-existing-pod
applianceName: OVERRIDE
# Location for images used
imageBase: container-registry.oracle.com/os/
images:
os:
name: oraclelinux
tag: "7-slim"
# Settings for target appliance
appliance:
volumeType: thin


@ -0,0 +1,156 @@
# Introduction
This is an end-to-end example of using filesystem 'volume clone' as the foundation
for PVCs, allowing clients to use the same base filesystem for
different volumes.
Clones on an Oracle ZFS Storage Appliance start from a snapshot of the
share; the snapshot is then cloned and made available as a new share. A
snapshot consumes no space as long as the origin filesystem is unchanged.
More information about snapshots and clones on the Oracle ZFS Storage
Appliance can be found in the
[Oracle® ZFS Storage Appliance Administration Guide](https://docs.oracle.com/cd/F13758_01/html/F13769/gprif.html).
This example comes in two parts:
* The [setup-volume](./setup-volume) helm chart starts a container with
an attached PVC. Exec into the pod and store data on the volume, or
use `kubectl cp` to copy files into the pod. The pod is then deleted
using `kubectl delete`.
* The [clone-volume](./clone-volume) helm chart starts two pods that
use the existing PVC as the foundation for their volumes through the
use of the Oracle ZFS Storage Appliance Snapshot/Clone operation.
Prior to running this example, the NFS environment must be set up properly
on both the Kubernetes worker nodes and the Oracle ZFS Storage Appliance.
Refer to the [INSTALLATION](../../INSTALLATION.md) instructions for details.
## Configuration
Set up a local values file. It must contain the values that customize the
example for the target appliance, but it can contain others. The minimum
set of values to customize are:
* appliance:
* targetGroup: the target group that contains data path interfaces on the target appliance
* pool: the pool to create shares in
* project: the project to create shares in
* targetPortal: the target iSCSI portal on the appliance
* nfsServer: the NFS data path IP address
* volSize: the size of the filesystem share to create. This can be
changed for the clone-volume example as long as the size is greater
than or equal to the original PVC size.
Check the parameters section of the storage class configuration file (storage-class.yaml)
to see all supported properties. Refer to the NFS Protocol page of the Oracle ZFS Storage
Appliance Administration Guide for how to define the values properly.
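As a sketch, a minimal local-values.yaml for this example might look like the
following; every value here is a placeholder to replace for your environment
(the pool, project, target group, and addresses are illustrative, not defaults):

```yaml
# local-values/local-values.yaml -- placeholder values, replace for your environment
appliance:
  volumeType: thin
  targetGroup: csi-target-group   # target group with data path interfaces
  pool: pool-0                    # pool to create shares in
  project: csi-project            # project to create shares in
  targetPortal: 192.0.2.10:3260   # iSCSI target portal
  nfsServer: 192.0.2.10           # NFS data path IP address
  rootUser: root
  rootGroup: other
  shareNFS: "on"
# Settings for volume
volSize: 10Gi
```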
## Deploy a pod with a volume
Assuming there is a set of values in the local-values directory, deploy using Helm 3:
```
helm install -f local-values/local-values.yaml setup-volume ./setup-volume
```
Once deployed, verify each of the created entities using kubectl:
1. Display the storage class (SC)
The command `kubectl get sc` should now return something similar to this:
```text
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
zfssa-nfs-volume-clone-example-sc zfssa-csi-driver Delete Immediate false 63s
```
2. Display the volume
The command `kubectl get pvc` should return something similar to this:
```text
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
zfssa-nfs-volume-example-pvc Bound pvc-c7ac4970-8ae1-4dc8-ba7f-0e37a35fb39d 50Gi RWX zfssa-nfs-volume-clone-example-sc 7s
```
3. Display the pod mounting the volume
The command `kubectl get pods` should now return something similar to this:
```text
NAME READY STATUS RESTARTS AGE
zfssa-csi-nodeplugin-ph8qr 2/2 Running 0 8m32s
zfssa-csi-nodeplugin-wzgpq 2/2 Running 0 8m32s
zfssa-csi-provisioner-0 4/4 Running 0 8m32s
zfssa-nfs-volume-example-pod 1/1 Running 0 2m21s
```
## Write data to the volume
Once the pod is deployed, write data to the volume in the pod:
```text
kubectl exec -it pod/zfssa-nfs-volume-example-pod -- /bin/sh
/ # cd /mnt
/mnt # ls
/mnt # echo "hello world" > demo.txt
/mnt #
```
Write as much data to the volume as you would like.
## Remove the pod
For this step, do *not* use `helm uninstall` yet. Instead, delete the
example pod: `kubectl delete pod/zfssa-nfs-volume-example-pod`.
This leaves the PVC intact and bound to the share on the
Oracle ZFS Storage Appliance.
## Deploy pods using clones of the original volume
```
helm install -f local-values/local-values.yaml clone-volume ./clone-volume
```
Once deployed, there should be two new PVCs (clones of the original) and two
new pods with the cloned volumes mounted. Until they are written to, the cloned
filesystems have the same data as the original filesystem. The filesystems are
independent and will diverge as they are written.
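Each clone PVC references the original PVC through a `dataSource` stanza; a
sketch of one such claim, using the names from this example's values file
(the 10Gi size is an illustrative placeholder), is:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zfssa-nfs-volume-clone-pvc-0
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: zfssa-nfs-volume-clone-example-sc
  resources:
    requests:
      storage: 10Gi   # must be >= the size of the source PVC
  dataSource:
    kind: PersistentVolumeClaim
    name: zfssa-nfs-volume-example-pvc   # the PVC created by setup-volume
```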
1. Display the volumes
The command `kubectl get pvc` should now return something similar to this:
```text
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
zfssa-nfs-volume-clone-pvc-0
zfssa-nfs-volume-clone-pvc-1
```
2. Display the pods mounting the clones
The command `kubectl get pods` should now return something similar to this:
```text
NAME READY STATUS RESTARTS AGE
pod/zfssa-csi-nodeplugin-lpts9 2/2 Running 0 25m
pod/zfssa-csi-nodeplugin-vdb44 2/2 Running 0 25m
pod/zfssa-csi-provisioner-0 2/2 Running 0 23m
zfssa-nfs-volume-clone-pod-0
zfssa-nfs-volume-clone-pod-1
```
## Verify the data on the volumes
Once the clone pods are deployed, exec into one of them and verify that the
data written earlier is present:
```text
kubectl exec -it zfssa-nfs-volume-clone-pod-0 -- /bin/sh
/ # cd /mnt
/mnt # ls
demo.txt
/mnt # cat demo.txt
hello world
/mnt #
```
## Remove the example when complete
Helm will remove all of our pods and data:
```
helm uninstall setup-volume
helm uninstall clone-volume
```


@ -0,0 +1,4 @@
apiVersion: v1
name: zfssa-csi-nfs-clone-example
version: 0.0.1
description: Clones an existing PVC using the Oracle ZFS Storage Appliance CSI driver.


@ -0,0 +1,29 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Values.pvcNfsName }}-0
spec:
accessModes:
- ReadWriteOnce
storageClassName: {{ .Values.scNfsName }}
resources:
requests:
storage: {{ .Values.volSize }}
dataSource:
kind: PersistentVolumeClaim
name: {{ .Values.pvcSourceName }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Values.pvcNfsName }}-1
spec:
accessModes:
- ReadWriteOnce
storageClassName: {{ .Values.scNfsName }}
resources:
requests:
storage: {{ .Values.volSize }}
dataSource:
kind: PersistentVolumeClaim
name: {{ .Values.pvcSourceName }}


@ -0,0 +1,43 @@
apiVersion: v1
kind: Pod
metadata:
name: {{ .Values.podNfsName }}-0
labels:
name: ol7slim-test
spec:
restartPolicy: Always
containers:
- image: {{ .Values.imageBase }}{{ .Values.images.os.name }}:{{ .Values.images.os.tag }}
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
name: ol7slim
volumeMounts:
- name: vol
mountPath: /mnt
volumes:
- name: vol
persistentVolumeClaim:
claimName: {{ .Values.pvcNfsName }}-0
readOnly: false
---
apiVersion: v1
kind: Pod
metadata:
name: {{ .Values.podNfsName }}-1
labels:
name: ol7slim-test
spec:
restartPolicy: Always
containers:
- image: {{ .Values.imageBase }}{{ .Values.images.os.name }}:{{ .Values.images.os.tag }}
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
name: ol7slim
volumeMounts:
- name: vol
mountPath: /mnt
volumes:
- name: vol
persistentVolumeClaim:
claimName: {{ .Values.pvcNfsName }}-1
readOnly: false


@ -0,0 +1,15 @@
# Various names used through example
scNfsName: zfssa-nfs-volume-clone-example-sc
pvcNfsName: zfssa-nfs-volume-clone-pvc
podNfsName: zfssa-nfs-volume-clone-pod
pvcSourceName: zfssa-nfs-volume-example-pvc
# Location for images used
imageBase: container-registry.oracle.com/os/
images:
os:
name: oraclelinux
tag: "7-slim"
# Settings for volume
volSize: OVERRIDE


@ -0,0 +1,4 @@
apiVersion: v1
name: zfssa-csi-nfs-clone-setup
version: 0.0.1
description: Deploys a PVC and pod for use with the Oracle ZFS Storage Appliance CSI driver.


@ -0,0 +1,20 @@
apiVersion: v1
kind: Pod
metadata:
name: {{ .Values.podNfsName }}
labels:
name: ol7slim-test
spec:
containers:
- image: {{ .Values.imageBase }}{{ .Values.images.os.name }}:{{ .Values.images.os.tag }}
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
name: ol7slim
volumeMounts:
- name: vol
mountPath: /mnt
volumes:
- name: vol
persistentVolumeClaim:
claimName: {{ .Values.pvcNfsName }}
readOnly: false


@ -0,0 +1,26 @@
# Various names used through example
scNfsName: zfssa-nfs-volume-clone-example-sc
pvcNfsName: zfssa-nfs-volume-example-pvc
podNfsName: zfssa-nfs-volume-example-pod
# Location for images used
imageBase: container-registry.oracle.com/os/
images:
os:
name: oraclelinux
tag: "7-slim"
# Settings for target appliance
appliance:
volumeType: thin
targetGroup: OVERRIDE
pool: OVERRIDE
project: OVERRIDE
targetPortal: OVERRIDE
nfsServer: OVERRIDE
rootUser: root
rootGroup: other
shareNFS: "on"
# Settings for volume
volSize: OVERRIDE


@ -8,6 +8,13 @@ Prior to running this example, the NFS environment must be set up properly
on both the Kubernetes worker nodes and the Oracle ZFS Storage Appliance.
Refer to the [INSTALLATION](../../INSTALLATION.md) instructions for details.
There are two helm deployments in this example:
* [Create and use initial volume](./nfs-snapshot-creator)
* [Create and use snapshot](./nfs-snapshot-creator)
The values between the deployments have to be coordinated through a local values
file, or the defaults should work.
## Configuration
Set up a local values files. It must contain the values that customize to the
@ -20,20 +27,14 @@ customize are:
* nfsServer: the NFS data path IP address
* volSize: the size of the filesystem share to create
## Enabling Volume Snapshot Feature (Only for Kubernetes v1.17 - v1.19)
The Kubernetes Volume Snapshot feature became GA in Kubernetes v1.20. In order to use
this feature in Kubernetes pre-v1.20, it MUST be enabled prior to deploying ZS CSI Driver.
To enable the feature on Kubernetes pre-v1.20, follow the instructions on
[INSTALLATION](../../INSTALLATION.md).
## Deployment
This step includes deploying a pod with an NFS volume attached using a regular
storage class and a persistent volume claim. It also deploys a volume snapshot class
required to take snapshots of the persistent volume.
Assuming there is a set of values in the local-values directory, deploy using Helm 3. If you plan to exercise creating volume from a snapshot with given yaml files as they are, define the names in the local-values.yaml as follows. You can modify them as per your preference.
If you plan to exercise creating volume from a snapshot with given yaml files as they are,
define the names in the local-values.yaml as follows. You can modify them as per your preference.
```text
scNfsName: zfssa-nfs-vs-example-sc
vscNfsName: zfssa-nfs-vs-example-vsc


@ -0,0 +1,20 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: {{ .Values.scNfsName }}
provisioner: zfssa-csi-driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
volumeType: {{ .Values.appliance.volumeType }}
targetGroup: {{ .Values.appliance.targetGroup }}
blockSize: "8192"
pool: {{ .Values.appliance.pool }}
project: {{ .Values.appliance.project }}
targetPortal: {{ .Values.appliance.targetPortal }}
nfsServer: {{ .Values.appliance.nfsServer }}
rootUser: {{ .Values.appliance.rootUser }}
rootGroup: {{ .Values.appliance.rootGroup }}
rootPermissions: "777"
shareNFS: {{ .Values.appliance.shareNFS }}
restrictChown: "false"


@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Values.pvcNfsName }}
spec:
accessModes:
- ReadWriteMany
volumeMode: Filesystem
resources:
requests:
storage: {{ .Values.volSize }}
storageClassName: {{ .Values.scNfsName }}


@ -7,7 +7,7 @@ metadata:
spec:
restartPolicy: Always
containers:
- image: container-registry.oracle.com/os/oraclelinux:7-slim
- image: {{ .Values.imageBase }}{{ .Values.images.os.name }}:{{ .Values.images.os.tag }}
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
name: ol7slim


@ -4,6 +4,13 @@ vscNfsName: zfssa-nfs-vs-example-vsc
pvcNfsName: zfssa-nfs-vs-example-pvc
podNfsName: zfssa-nfs-vs-example-pod
# Location for images used
imageBase: container-registry.oracle.com/os/
images:
os:
name: oraclelinux
tag: "7-slim"
# Settings for target appliance
appliance:
volumeType: thin


@ -0,0 +1,6 @@
apiVersion: v1
name: zfssa-fs-snapshot-example
version: 0.0.1
description: |
Uses the snapshot feature to create a snapshot of a filesystem and mount it
from a pod.


@ -0,0 +1,91 @@
# Introduction
This is an end-to-end example of using an existing filesystem share on a target
Oracle ZFS Storage Appliance.
Prior to running this example, the NFS environment must be set up properly
on both the Kubernetes worker nodes and the Oracle ZFS Storage Appliance.
Refer to the [INSTALLATION](../../../INSTALLATION.md) instructions for details.
The flow to use an existing volume is:
* create a persistent volume (PV) object
* allocate it to a persistent volume claim (PVC)
* use the PVC from a pod
The following must be set up:
* the volume handle must be a fully formed volume id
* there must be volume attributes defined as part of the persistent volume
In this example, the volume handle is constructed from values in the helm
chart. The only new attribute necessary is the name of the volume on the
target appliance. The remainder is assembled from information already in
the local-values.yaml file (appliance name, pool, project, etc.).
The resulting VolumeHandle appears similar to the following, with the values
in ```<>``` filled in from the helm variables:
```
volumeHandle: /nfs/<appliance name>/<volume name>/<pool name>/local/<project name>/<volume name>
```
From the above, note that the volumeHandle is in the form of an ID with the components:
* 'nfs' - denoting an exported NFS share
* 'appliance name' - this is the management path of the ZFSSA target appliance
* 'volume name' - the name of the share on the appliance
* 'pool name' - the pool on the target appliance
* 'local' - denotes that the pool is owned by the head
* 'project' - the project that the share is in
In the volume attributes, nfsServer must be defined.
Once created, a persistent volume claim can be made for this share and used in a pod.
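A sketch of such a pre-provisioned persistent volume, with the volumeHandle
assembled as described above, looks like the following; the appliance, pool,
project, share names, and size are illustrative placeholders, not values from
this repository:

```yaml
# Pre-provisioned PV sketch -- all names and sizes are placeholders
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zfssa-fs-existing-pv
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  csi:
    driver: zfssa-csi-driver
    # /nfs/<appliance name>/<volume name>/<pool name>/local/<project name>/<volume name>
    volumeHandle: /nfs/myzfssa/myshare/mypool/local/myproject/myshare
    volumeAttributes:
      nfsServer: 192.0.2.10   # NFS data path IP address
```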
## Configuration
Set up a local values file. It must contain the values that customize the
example for the target appliance, but it can contain others. The minimum
set of values to customize are:
* appliance:
* targetGroup: the target group that contains data path interfaces on the target appliance
* pool: the pool to create shares in
* project: the project to create shares in
* targetPortal: the target iSCSI portal on the appliance
* nfsServer: the NFS data path IP address
* applianceName: the existing appliance name (this is the management path)
* pvExistingFilesystemName: the name of the filesystem share on the target appliance
* volMountPoint: the mount point on the target appliance of the filesystem share
* volSize: the size of the filesystem share
On the target appliance, ensure that the filesystem share is exported via NFS.
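Assuming those requirements, a minimal local-values.yaml for this example
might look like the following; every value is a placeholder to replace for
your environment:

```yaml
# local-values/local-values.yaml -- placeholder values, replace for your environment
applianceName: myzfssa                 # management path of the appliance
pvExistingFilesystemName: myshare      # existing share on the appliance
appliance:
  volumeType: thin
  targetGroup: csi-target-group
  pool: mypool
  project: myproject
  targetPortal: 192.0.2.10:3260
  nfsServer: 192.0.2.10
  rootUser: root
  rootGroup: other
# Settings for volume
volMountPoint: /export/myshare         # mount point of the share on the appliance
volSize: 10Gi
```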
## Deployment
Assuming there is a set of values in the local-values directory, deploy using Helm 3:
```
helm install -f ../local-values/local-values.yaml zfssa-nfs-existing ./
```
Once deployed, verify each of the created entities using kubectl:
```
kubectl get sc
kubectl get pvc
kubectl get pod
```
## Writing data
Once the pod is deployed, optionally start analytics in a worksheet on
the Oracle ZFS Storage Appliance that is hosting the target filesystem
to observe the write activity.
Exec into the pod and write some data to the filesystem volume:
```text
kubectl exec -it zfssa-fs-existing-pod -- /bin/sh
/ # cd /mnt
/mnt # ls
/mnt # echo "hello world" > demo.txt
/mnt #
```
The analytics on the appliance should show spikes as the data is written.


@ -7,7 +7,7 @@ metadata:
spec:
restartPolicy: Always
containers:
- image: container-registry.oracle.com/os/oraclelinux:7-slim
- image: {{ .Values.imageBase }}{{ .Values.images.os.name }}:{{ .Values.images.os.tag }}
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
name: ol7slim


@ -13,4 +13,4 @@ spec:
volumeMode: Filesystem
resources:
requests:
storage: 68796
storage: {{ .Values.volSize }}


@ -0,0 +1,28 @@
# Various names used through example
scExistingFilesystemName: zfssa-fs-existing-sc
pvExistingFilesystemName: OVERRIDE
pvcExistingFilesystemName: zfssa-fs-existing-pvc
podExistingFilesystemName: zfssa-fs-existing-pod
applianceName: OVERRIDE
# Location for images used
imageBase: container-registry.oracle.com/os/
images:
os:
name: oraclelinux
tag: "7-slim"
# Settings for target appliance
appliance:
volumeType: thin
targetGroup: OVERRIDE
pool: OVERRIDE
project: OVERRIDE
targetPortal: OVERRIDE
nfsServer: OVERRIDE
rootUser: root
rootGroup: other
# Settings for volume
volMountPoint: OVERRIDE
volSize: OVERRIDE


@ -23,14 +23,14 @@ customize are:
Check out the parameters section of the storage class configuration file (storage-class.yaml)
to see all supporting properties. Refer to NFS Protocol page of Oracle ZFS Storage Appliance
Administration Guide how to defind the values properly.
Administration Guide how to define the values properly.
## Deployment
Assuming there is a set of values in the local-values directory, deploy using Helm 3:
```
helm install -f local-values/local-values.yaml zfssa-nfs ./nfs
helm install -f local-values/local-values.yaml zfssa-nfs ./nfs
```
Once deployed, verify each of the created entities using kubectl:
@ -71,7 +71,7 @@ Once deployed, verify each of the created entities using kubectl:
Once the pod is deployed, for demo, start the following analytics in a worksheet on
the Oracle ZFS Storage Appliance that is hosting the target filesystems:
Exec into the pod and write some data to the block volume:
Exec into the pod and write some data to the volume:
```yaml
kubectl exec -it zfssa-nfs-example-pod -- /bin/sh
/ # cd /mnt


@ -7,7 +7,7 @@ metadata:
spec:
restartPolicy: Always
containers:
- image: container-registry.oracle.com/os/oraclelinux:7-slim
- image: {{ .Values.imageBase }}{{ .Values.images.os.name }}:{{ .Values.images.os.tag }}
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
name: ol7slim


@ -3,6 +3,13 @@ scNfsName: zfssa-nfs-example-sc
pvcNfsName: zfssa-nfs-example-pvc
podNfsName: zfssa-nfs-example-pod
# Location for images used
imageBase: container-registry.oracle.com/os/
images:
os:
name: oraclelinux
tag: "7-slim"
# Settings for target appliance
appliance:
volumeType: thin
@ -13,7 +20,7 @@ appliance:
nfsServer: OVERRIDE
rootUser: root
rootGroup: other
shareNFS: "on"
shareNFS: '"on"'
# Settings for volume
volSize: OVERRIDE

go.mod

@ -1,53 +1,112 @@
module github.com/oracle/zfssa-csi-driver
go 1.13
go 1.21
require (
github.com/container-storage-interface/spec v1.2.0
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef // indirect
github.com/golang/protobuf v1.4.0
github.com/container-storage-interface/spec v1.6.0
github.com/golang/protobuf v1.5.2
github.com/kubernetes-csi/csi-lib-iscsi v0.0.0-20190415173011-c545557492f4
github.com/kubernetes-csi/csi-lib-utils v0.6.1
github.com/onsi/gomega v1.9.0 // indirect
github.com/prometheus/client_golang v1.2.1 // indirect
golang.org/x/net v0.0.0-20191101175033-0deb6923b6d9
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae // indirect
google.golang.org/grpc v1.23.1
gopkg.in/yaml.v2 v2.2.8
k8s.io/apimachinery v0.17.11
k8s.io/client-go v0.18.2
golang.org/x/net v0.7.0
google.golang.org/grpc v1.47.0
gopkg.in/yaml.v2 v2.4.0
k8s.io/apimachinery v0.25.7
k8s.io/client-go v0.25.7
k8s.io/klog v1.0.0
k8s.io/kubernetes v1.17.5
k8s.io/utils v0.0.0-20191114184206-e782cd3c129f
k8s.io/kubernetes v1.25.7
k8s.io/utils v0.0.0-20220728103510-ee6ede2d64ed
)
require (
github.com/PuerkitoBio/purell v1.1.1 // indirect
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/emicklei/go-restful/v3 v3.8.0 // indirect
github.com/go-logr/logr v1.2.3 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.19.5 // indirect
github.com/go-openapi/swag v0.19.14 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/google/gnostic v0.5.7-v3refs // indirect
github.com/google/go-cmp v0.5.8 // indirect
github.com/google/gofuzz v1.1.0 // indirect
github.com/google/uuid v1.1.2 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/mailru/easyjson v0.7.6 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect
github.com/moby/sys/mountinfo v0.6.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/opencontainers/selinux v1.10.0 // indirect
github.com/prometheus/client_golang v1.12.1 // indirect
github.com/prometheus/client_model v0.2.0 // indirect
github.com/prometheus/common v0.32.1 // indirect
github.com/prometheus/procfs v0.7.3 // indirect
github.com/spf13/pflag v1.0.5 // indirect
golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8 // indirect
golang.org/x/sys v0.5.0 // indirect
golang.org/x/term v0.5.0 // indirect
golang.org/x/text v0.7.0 // indirect
golang.org/x/time v0.0.0-20220210224613-90d013bbcef8 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20220502173005-c8bf987b8c21 // indirect
google.golang.org/protobuf v1.28.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/api v0.25.7 // indirect
k8s.io/apiserver v0.25.7 // indirect
k8s.io/cloud-provider v0.0.0 // indirect
k8s.io/component-base v0.25.7 // indirect
k8s.io/component-helpers v0.25.7 // indirect
k8s.io/klog/v2 v2.70.1 // indirect
k8s.io/kube-openapi v0.0.0-20220803162953-67bda5d908f1 // indirect
k8s.io/mount-utils v0.0.0 // indirect
sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect
sigs.k8s.io/yaml v1.2.0 // indirect
)
replace (
k8s.io/api => k8s.io/api v0.17.5
k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.17.5
k8s.io/apimachinery => k8s.io/apimachinery v0.17.6-beta.0
k8s.io/apiserver => k8s.io/apiserver v0.17.5
k8s.io/cli-runtime => k8s.io/cli-runtime v0.17.5
k8s.io/client-go => k8s.io/client-go v0.17.5
k8s.io/cloud-provider => k8s.io/cloud-provider v0.17.5
k8s.io/cluster-bootstrap => k8s.io/cluster-bootstrap v0.17.5
k8s.io/code-generator => k8s.io/code-generator v0.17.6-beta.0
k8s.io/component-base => k8s.io/component-base v0.17.5
k8s.io/cri-api => k8s.io/cri-api v0.17.13-rc.0
k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.17.5
k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.17.5
k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.17.5
k8s.io/kube-proxy => k8s.io/kube-proxy v0.17.5
k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.17.5
k8s.io/kubelet => k8s.io/kubelet v0.17.5
k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.17.5
k8s.io/metrics => k8s.io/metrics v0.17.5
k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.17.5
k8s.io/api => k8s.io/api v0.25.7
k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.25.7
k8s.io/apimachinery => k8s.io/apimachinery v0.25.9
k8s.io/apiserver => k8s.io/apiserver v0.25.7
k8s.io/cli-runtime => k8s.io/cli-runtime v0.25.7
k8s.io/client-go => k8s.io/client-go v0.25.7
k8s.io/cloud-provider => k8s.io/cloud-provider v0.25.7
k8s.io/cluster-bootstrap => k8s.io/cluster-bootstrap v0.25.7
k8s.io/code-generator => k8s.io/code-generator v0.25.9
k8s.io/component-base => k8s.io/component-base v0.25.7
k8s.io/cri-api => k8s.io/cri-api v0.25.9
k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.25.7
k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.25.7
k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.25.7
k8s.io/kube-proxy => k8s.io/kube-proxy v0.25.7
k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.25.7
k8s.io/kubelet => k8s.io/kubelet v0.25.7
k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.25.7
k8s.io/metrics => k8s.io/metrics v0.25.7
k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.25.7
)
replace k8s.io/kubectl => k8s.io/kubectl v0.17.5
replace k8s.io/kubectl => k8s.io/kubectl v0.25.7
replace k8s.io/node-api => k8s.io/node-api v0.17.5
replace k8s.io/sample-cli-plugin => k8s.io/sample-cli-plugin v0.17.5
replace k8s.io/sample-cli-plugin => k8s.io/sample-cli-plugin v0.25.7
replace k8s.io/sample-controller => k8s.io/sample-controller v0.17.5
replace k8s.io/sample-controller => k8s.io/sample-controller v0.25.7
replace k8s.io/component-helpers => k8s.io/component-helpers v0.25.7
replace k8s.io/controller-manager => k8s.io/controller-manager v0.25.7
replace k8s.io/mount-utils => k8s.io/mount-utils v0.25.9
replace k8s.io/pod-security-admission => k8s.io/pod-security-admission v0.25.7

View File

@ -6,6 +6,7 @@
package service
import (
"context"
"errors"
"fmt"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@ -14,12 +15,11 @@ import (
)
var (
clusterConfig *rest.Config
clientset *kubernetes.Clientset
clusterConfig *rest.Config
clientset *kubernetes.Clientset
)
// Initializes the cluster interface.
//
func InitClusterInterface() error {
var err error
@ -37,10 +37,9 @@ func InitClusterInterface() error {
}
// Returns the node name based on the passed in node ID.
//
func GetNodeName(nodeID string) (string, error) {
nodeInfo, err := clientset.CoreV1().Nodes().Get(nodeID, metav1.GetOptions{
TypeMeta: metav1.TypeMeta{
func GetNodeName(ctx context.Context, nodeID string) (string, error) {
nodeInfo, err := clientset.CoreV1().Nodes().Get(ctx, nodeID, metav1.GetOptions{
TypeMeta: metav1.TypeMeta{
Kind: "",
APIVersion: "",
},
@ -55,11 +54,10 @@ func GetNodeName(nodeID string) (string, error) {
}
// Returns the list of nodes in the form of a slice containing their name.
//
func GetNodeList() ([]string, error) {
func GetNodeList(ctx context.Context) ([]string, error) {
nodeList, err := clientset.CoreV1().Nodes().List(metav1.ListOptions{
TypeMeta: metav1.TypeMeta{
nodeList, err := clientset.CoreV1().Nodes().List(ctx, metav1.ListOptions{
TypeMeta: metav1.TypeMeta{
Kind: "",
APIVersion: "",
},
@ -70,8 +68,8 @@ func GetNodeList() ([]string, error) {
return nil, err
}
var nodeNameList []string
for _, node:= range nodeList.Items {
var nodeNameList []string
for _, node := range nodeList.Items {
nodeNameList = append(nodeNameList, node.Name)
}

View File

@ -6,10 +6,10 @@
package service
import (
"github.com/oracle/zfssa-csi-driver/pkg/utils"
"github.com/oracle/zfssa-csi-driver/pkg/zfssarest"
"github.com/container-storage-interface/spec/lib/go/csi"
"github.com/kubernetes-csi/csi-lib-utils/protosanitizer"
"github.com/oracle/zfssa-csi-driver/pkg/utils"
"github.com/oracle/zfssa-csi-driver/pkg/zfssarest"
"golang.org/x/net/context"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
@ -17,7 +17,7 @@ import (
)
var (
// the current controller service accessModes supported
// controller service capabilities supported
controllerCaps = []csi.ControllerServiceCapability_RPC_Type{
csi.ControllerServiceCapability_RPC_CREATE_DELETE_VOLUME,
csi.ControllerServiceCapability_RPC_PUBLISH_UNPUBLISH_VOLUME,
@ -26,6 +26,7 @@ var (
csi.ControllerServiceCapability_RPC_EXPAND_VOLUME,
csi.ControllerServiceCapability_RPC_CREATE_DELETE_SNAPSHOT,
csi.ControllerServiceCapability_RPC_LIST_SNAPSHOTS,
csi.ControllerServiceCapability_RPC_CLONE_VOLUME,
}
)
@ -51,6 +52,7 @@ func (zd *ZFSSADriver) CreateVolume(ctx context.Context, req *csi.CreateVolumeRe
return nil, err
}
// TODO: check if pool/project are populated if the storage class is left out on volume cloneVolume
parameters := req.GetParameters()
pool := parameters["pool"]
project := parameters["project"]
@ -59,21 +61,32 @@ func (zd *ZFSSADriver) CreateVolume(ctx context.Context, req *csi.CreateVolumeRe
if err != nil {
return nil, err
}
defer zd.releaseVolume(ctx, zvol)
defer zd.releaseVolume(ctx, zvol)
// Check if there is a source for the new volume
if volumeContentSource := req.GetVolumeContentSource(); volumeContentSource != nil {
if snapshot := volumeContentSource.GetSnapshot(); snapshot != nil {
switch volumeContentSource.Type.(type) {
case *csi.VolumeContentSource_Snapshot:
snapshot := volumeContentSource.GetSnapshot()
utils.GetLogCTRL(ctx, 5).Println("CreateSnapshot", "request", snapshot)
zsnap, err := zd.lookupSnapshot(ctx, token, snapshot.GetSnapshotId())
if err != nil {
return nil, err
}
defer zd.releaseSnapshot(ctx, zsnap)
defer zd.releaseSnapshot(ctx, zsnap)
return zvol.cloneSnapshot(ctx, token, req, zsnap)
case *csi.VolumeContentSource_Volume:
volume := volumeContentSource.GetVolume()
utils.GetLogCTRL(ctx, 5).Println("CreateVolumeClone", "request", volume)
// cloneVolume creation is complex, delegate out to it
return zvol.cloneVolume(ctx, token, req)
default:
return nil, status.Errorf(codes.InvalidArgument, "%v type not implemented in driver",
volumeContentSource.GetType())
}
return nil, status.Error(codes.InvalidArgument, "Only snapshots are supported as content source")
} else {
return zvol.create(ctx, token, req)
}
return zvol.create(ctx, token, req)
}
// Retrieve the volume size from the request (if not available, use a default)
@ -91,9 +104,8 @@ func getVolumeSize(capRange *csi.CapacityRange) int64 {
// Check whether the access mode of the volume to create is "block" or "filesystem"
//
// true block access mode
// false filesystem access mode
//
// true block access mode
// false filesystem access mode
func isBlock(capabilities []*csi.VolumeCapability) bool {
for _, capacity := range capabilities {
if capacity.GetBlock() == nil {
@ -104,7 +116,6 @@ func isBlock(capabilities []*csi.VolumeCapability) bool {
}
// Validates as much of the "create volume request" as possible
//
func validateCreateVolumeReq(ctx context.Context, token *zfssarest.Token, req *csi.CreateVolumeRequest) error {
log5 := utils.GetLogCTRL(ctx, 5)
@ -149,7 +160,7 @@ func validateCreateVolumeReq(ctx context.Context, token *zfssarest.Token, req *c
}
if pool.Status != "online" && pool.Status != "degraded" {
log5.Println("Pool not ready", "State", pool.Status)
log5.Println("Pool not ready", "State", pool.Status)
return status.Errorf(codes.InvalidArgument, "pool %s in an error state (%s)", poolName, pool.Status)
}
@ -239,7 +250,7 @@ func (zd *ZFSSADriver) ControllerPublishVolume(ctx context.Context, req *csi.Con
return nil, status.Error(codes.InvalidArgument, "Capability not provided")
}
nodeName, err := GetNodeName(nodeID)
nodeName, err := GetNodeName(ctx, nodeID)
if err != nil {
return nil, status.Errorf(codes.NotFound, "Node (%s) was not found: %v", req.NodeId, err)
}
@ -378,7 +389,7 @@ func (zd *ZFSSADriver) ListVolumes(ctx context.Context, req *csi.ListVolumesRequ
rsp := &csi.ListVolumesResponse{
NextToken: nextToken,
Entries: entries,
Entries: entries,
}
return rsp, nil
@ -387,7 +398,7 @@ func (zd *ZFSSADriver) ListVolumes(ctx context.Context, req *csi.ListVolumesRequ
func (zd *ZFSSADriver) GetCapacity(ctx context.Context, req *csi.GetCapacityRequest) (
*csi.GetCapacityResponse, error) {
utils.GetLogCTRL(ctx,5).Println("GetCapacity", "request", protosanitizer.StripSecrets(req))
utils.GetLogCTRL(ctx, 5).Println("GetCapacity", "request", protosanitizer.StripSecrets(req))
reqCaps := req.GetVolumeCapabilities()
if len(reqCaps) > 0 {
@ -448,14 +459,14 @@ func (zd *ZFSSADriver) GetCapacity(ctx context.Context, req *csi.GetCapacityRequ
}
availableCapacity = project.SpaceAvailable
}
return &csi.GetCapacityResponse{AvailableCapacity: availableCapacity}, nil
}
func (zd *ZFSSADriver) ControllerGetCapabilities(ctx context.Context, req *csi.ControllerGetCapabilitiesRequest) (
*csi.ControllerGetCapabilitiesResponse, error) {
utils.GetLogCTRL(ctx,5).Println("ControllerGetCapabilities",
utils.GetLogCTRL(ctx, 5).Println("ControllerGetCapabilities",
"request", protosanitizer.StripSecrets(req))
var caps []*csi.ControllerServiceCapability
@ -501,7 +512,7 @@ func (zd *ZFSSADriver) CreateSnapshot(ctx context.Context, req *csi.CreateSnapsh
func (zd *ZFSSADriver) DeleteSnapshot(ctx context.Context, req *csi.DeleteSnapshotRequest) (
*csi.DeleteSnapshotResponse, error) {
utils.GetLogCTRL(ctx, 5).Println("DeleteSnapshot", "request", protosanitizer.StripSecrets(req))
utils.GetLogCTRL(ctx, 5).Println("DeleteSnapshot", "request", protosanitizer.StripSecrets(req))
if len(req.GetSnapshotId()) == 0 {
return nil, status.Errorf(codes.InvalidArgument, "no snapshot ID provided")
@ -577,11 +588,11 @@ func (zd *ZFSSADriver) ListSnapshots(ctx context.Context, req *csi.ListSnapshots
if err == nil {
entry := new(csi.ListSnapshotsResponse_Entry)
entry.Snapshot = &csi.Snapshot{
SnapshotId: zsnap.id.String(),
SizeBytes: zsnap.getSize(),
SnapshotId: zsnap.id.String(),
SizeBytes: zsnap.getSize(),
SourceVolumeId: zsnap.getStringSourceId(),
CreationTime: zsnap.getCreationTime(),
ReadyToUse: true,
CreationTime: zsnap.getCreationTime(),
ReadyToUse: true,
}
zd.releaseSnapshot(ctx, zsnap)
utils.GetLogCTRL(ctx, 5).Println("ListSnapshots with snapshot ID", "Snapshot", zsnap.getHref())
@ -623,7 +634,7 @@ func (zd *ZFSSADriver) ListSnapshots(ctx context.Context, req *csi.ListSnapshots
rsp := &csi.ListSnapshotsResponse{
NextToken: nextToken,
Entries: entries,
Entries: entries,
}
return rsp, nil
@ -659,6 +670,44 @@ func (zd *ZFSSADriver) ControllerExpandVolume(ctx context.Context, req *csi.Cont
return zvol.controllerExpandVolume(ctx, token, req)
}
func (zd *ZFSSADriver) ControllerGetVolume(ctx context.Context, req *csi.ControllerGetVolumeRequest) (
*csi.ControllerGetVolumeResponse, error) {
utils.GetLogCTRL(ctx, 5).Println("ControllerGetVolume", "request", protosanitizer.StripSecrets(req))
log2 := utils.GetLogCTRL(ctx, 2)
volumeID := req.GetVolumeId()
if len(volumeID) == 0 {
log2.Println("Volume ID not provided, will return")
return nil, status.Error(codes.InvalidArgument, "Volume ID not provided")
}
// req does not contain a secret map
user, password, err := zd.getUserLogin(ctx, nil)
if err != nil {
return nil, status.Error(codes.Unauthenticated, "Invalid credentials")
}
token := zfssarest.LookUpToken(ctx, user, password)
zvol, err := zd.lookupVolume(ctx, token, volumeID)
if err != nil {
log2.Println("ControllerGetVolume request failed, bad VolumeId",
"volume_id", volumeID, "error", err.Error())
return nil, err
}
defer zd.releaseVolume(ctx, zvol)
return &csi.ControllerGetVolumeResponse{
Volume: &csi.Volume{
VolumeId: volumeID,
CapacityBytes: zvol.getCapacity(),
},
// VolumeStatus is not required if LIST_VOLUMES_PUBLISHED_NODES and VOLUME_CONDITION
// are not implemented
}, nil
}
// Check the secrets map (typically in a request context) for a change in the username
// and password or retrieve the username/password from the credentials file, the username
// and password should be scrubbed quickly after use and not remain in memory

View File

@ -6,11 +6,11 @@
package service
import (
"github.com/oracle/zfssa-csi-driver/pkg/utils"
"github.com/oracle/zfssa-csi-driver/pkg/zfssarest"
"context"
"fmt"
"github.com/container-storage-interface/spec/lib/go/csi"
"github.com/oracle/zfssa-csi-driver/pkg/utils"
"github.com/oracle/zfssa-csi-driver/pkg/zfssarest"
context2 "golang.org/x/net/context"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
@ -20,22 +20,22 @@ import (
// ZFSSA block volume
type zLUN struct {
bolt *utils.Bolt
refcount int32
state volumeState
href string
id *utils.VolumeId
capacity int64
accessModes []csi.VolumeCapability_AccessMode
source *csi.VolumeContentSource
initiatorgroup []string
targetgroup string``
bolt *utils.Bolt
refcount int32
state volumeState
href string
id *utils.VolumeId
capacity int64
accessModes []csi.VolumeCapability_AccessMode
source *csi.VolumeContentSource
initiatorgroup []string
targetgroup string ``
}
var (
// access modes supported by block volumes.
blockVolumeCaps = []csi.VolumeCapability_AccessMode {
{ Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER },
blockVolumeCaps = []csi.VolumeCapability_AccessMode{
{Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER},
}
)
@ -58,8 +58,8 @@ func (lun *zLUN) create(ctx context.Context, token *zfssarest.Token,
capacityRange := req.GetCapacityRange()
capabilities := req.GetVolumeCapabilities()
_, luninfo, httpStatus, err := zfssarest.CreateLUN(ctx, token,
req.GetName(), getVolumeSize(capacityRange), &req.Parameters)
_, luninfo, httpStatus, err := zfssarest.CreateLUN(ctx, token,
req.GetName(), getVolumeSize(capacityRange), &req.Parameters)
if err != nil {
if httpStatus != http.StatusConflict {
lun.state = stateDeleted
@ -67,47 +67,47 @@ func (lun *zLUN) create(ctx context.Context, token *zfssarest.Token,
}
utils.GetLogCTRL(ctx, 5).Println("LUN already exists")
// The creation failed because the appliance already has a LUN
// with the same name. We get the information from the appliance,
// The creation failed because the appliance already has a LUN
// with the same name. We get the information from the appliance,
// update the LUN context and check its compatibility with the request.
if lun.state == stateCreated {
luninfo, _, err := zfssarest.GetLun(ctx, token,
req.Parameters["pool"], req.Parameters["project"], req.GetName())
luninfo, _, err := zfssarest.GetLun(ctx, token,
req.Parameters["pool"], req.Parameters["project"], req.GetName())
if err != nil {
return nil, err
}
lun.setInfo(luninfo)
}
// The LUN has already been created. The compatibility of the
// capacity range and accessModes is checked.
if !compareCapacityRange(capacityRange, lun.capacity) {
return nil,
status.Errorf(codes.AlreadyExists,
"Volume (%s) is already on target (%s),"+
return nil,
status.Errorf(codes.AlreadyExists,
"Volume (%s) is already on target (%s),"+
" capacity range incompatible (%v), requested (%v/%v)",
lun.id.Name, lun.id.Zfssa, lun.capacity,
capacityRange.RequiredBytes, capacityRange.LimitBytes)
lun.id.Name, lun.id.Zfssa, lun.capacity,
capacityRange.RequiredBytes, capacityRange.LimitBytes)
}
if !compareCapabilities(capabilities, lun.accessModes, true) {
return nil,
status.Errorf(codes.AlreadyExists,
"Volume (%s) is already on target (%s), accessModes are incompatible",
lun.id.Name, lun.id.Zfssa)
return nil,
status.Errorf(codes.AlreadyExists,
"Volume (%s) is already on target (%s), accessModes are incompatible",
lun.id.Name, lun.id.Zfssa)
}
} else {
lun.setInfo(luninfo)
}
utils.GetLogCTRL(ctx, 5).Printf(
"LUN created: name=%s, target=%s, assigned_number=%d",
"LUN created: name=%s, target=%s, assigned_number=%d",
luninfo.CanonicalName, luninfo.TargetGroup, luninfo.AssignedNumber[0])
return &csi.CreateVolumeResponse{
Volume: &csi.Volume{
VolumeId: lun.id.String(),
CapacityBytes: lun.capacity,
VolumeContext: req.GetParameters()}}, nil
VolumeId: lun.id.String(),
CapacityBytes: lun.capacity,
VolumeContext: req.GetParameters()}}, nil
}
func (lun *zLUN) cloneSnapshot(ctx context.Context, token *zfssarest.Token,
@ -152,6 +152,17 @@ func (lun *zLUN) delete(ctx context.Context, token *zfssarest.Token) (*csi.Delet
return &csi.DeleteVolumeResponse{}, nil
}
func (lun *zLUN) cloneVolume(ctx context.Context, token *zfssarest.Token,
req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
// Create a snapshot to base the clone on
// Clone the snapshot to the volume
utils.GetLogCTRL(ctx, 5).Println("lun.cloneVolume")
return nil, status.Error(codes.Unimplemented, "LUN cloneVolume not implemented yet")
}
func (lun *zLUN) controllerPublishVolume(ctx context.Context, token *zfssarest.Token,
req *csi.ControllerPublishVolumeRequest, nodeName string) (*csi.ControllerPublishVolumeResponse, error) {
@ -167,12 +178,12 @@ func (lun *zLUN) controllerPublishVolume(ctx context.Context, token *zfssarest.T
return nil, err
}
// When the driver creates a LUN or clones a Lun from a snapshot of another Lun,
// it masks the intiator group of the Lun using zfssarest.MaskAll value.
// When the driver creates a LUN or clones a Lun from a snapshot of another Lun,
// it masks the initiator group of the Lun using zfssarest.MaskAll value.
// When the driver unpublishes the Lun, it also masks the initiator group.
// This block is to test if the Lun to publish was created or unpublished
// This block is to test if the Lun to publish was created or unpublished
// by the driver. Publishing a Lun with unmasked initiator group fails
// to avoid mistakenly publishing a Lun that may be in use by other entity.
// to avoid mistakenly publishing a Lun that may be in use by another entity.
utils.GetLogCTRL(ctx, 5).Printf("Volume to publish: %s:%s", lun.id, list[0])
if len(list) != 1 || list[0] != zfssarest.MaskAll {
var msg string
@ -184,7 +195,7 @@ func (lun *zLUN) controllerPublishVolume(ctx context.Context, token *zfssarest.T
return nil, status.Error(codes.FailedPrecondition, msg)
}
// Reset the masked initiator group with one named by the current node name.
// Reset the masked initiator group with one named by the current node name.
// There must be initiator groups on ZFSSA defined by the node names.
_, err = zfssarest.SetInitiatorGroupList(ctx, token, pool, project, name, nodeName)
if err != nil {
@ -285,15 +296,15 @@ func (lun *zLUN) getSnapshotsList(ctx context.Context, token *zfssarest.Token) (
return zfssaSnapshotList2csiSnapshotList(ctx, token.Name, snapList), nil
}
func (lun *zLUN) getState() volumeState { return lun.state }
func (lun *zLUN) getName() string { return lun.id.Name }
func (lun *zLUN) getHref() string { return lun.href }
func (lun *zLUN) getState() volumeState { return lun.state }
func (lun *zLUN) getName() string { return lun.id.Name }
func (lun *zLUN) getHref() string { return lun.href }
func (lun *zLUN) getVolumeID() *utils.VolumeId { return lun.id }
func (lun *zLUN) getCapacity() int64 { return lun.capacity }
func (lun *zLUN) isBlock() bool { return true }
func (lun *zLUN) getCapacity() int64 { return lun.capacity }
func (lun *zLUN) isBlock() bool { return true }
func (lun *zLUN) getSnapshots(ctx context.Context, token *zfssarest.Token) ([]zfssarest.Snapshot, error) {
return zfssarest.GetSnapshots(ctx, token, lun.href)
return zfssarest.GetSnapshots(ctx, token, lun.href)
}
func (lun *zLUN) setInfo(volInfo interface{}) {
@ -329,7 +340,7 @@ func (lun *zLUN) lock(ctx context.Context) volumeState {
return lun.state
}
func (lun *zLUN) unlock(ctx context.Context) (int32, volumeState){
func (lun *zLUN) unlock(ctx context.Context) (int32, volumeState) {
lun.bolt.Unlock(ctx)
utils.GetLogCTRL(ctx, 5).Printf("%s is unlocked", lun.id.String())
return lun.refcount, lun.state

View File

@ -6,10 +6,11 @@
package service
import (
"github.com/oracle/zfssa-csi-driver/pkg/utils"
"github.com/oracle/zfssa-csi-driver/pkg/zfssarest"
"context"
"github.com/container-storage-interface/spec/lib/go/csi"
"github.com/kubernetes-csi/csi-lib-utils/protosanitizer"
"github.com/oracle/zfssa-csi-driver/pkg/utils"
"github.com/oracle/zfssa-csi-driver/pkg/zfssarest"
context2 "golang.org/x/net/context"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
@ -19,25 +20,25 @@ import (
var (
filesystemAccessModes = []csi.VolumeCapability_AccessMode{
{ Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER },
{ Mode: csi.VolumeCapability_AccessMode_MULTI_NODE_MULTI_WRITER },
{ Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_READER_ONLY },
{ Mode: csi.VolumeCapability_AccessMode_MULTI_NODE_READER_ONLY },
{ Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_READER_ONLY },
{Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER},
{Mode: csi.VolumeCapability_AccessMode_MULTI_NODE_MULTI_WRITER},
{Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_READER_ONLY},
{Mode: csi.VolumeCapability_AccessMode_MULTI_NODE_READER_ONLY},
{Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_READER_ONLY},
}
)
// ZFSSA mount volume
type zFilesystem struct {
bolt *utils.Bolt
refcount int32
state volumeState
href string
id *utils.VolumeId
capacity int64
accessModes []csi.VolumeCapability_AccessMode
source *csi.VolumeContentSource
mountpoint string
bolt *utils.Bolt
refcount int32
state volumeState
href string
id *utils.VolumeId
capacity int64
accessModes []csi.VolumeCapability_AccessMode
source *csi.VolumeContentSource
mountpoint string
}
// Creates a new filesystem structure. If no information is provided (fsinfo is nil), this
@ -58,8 +59,8 @@ func (fs *zFilesystem) create(ctx context.Context, token *zfssarest.Token,
capacityRange := req.GetCapacityRange()
capabilities := req.GetVolumeCapabilities()
fsinfo, httpStatus, err := zfssarest.CreateFilesystem(ctx, token,
req.GetName(), getVolumeSize(capacityRange), &req.Parameters)
fsinfo, httpStatus, err := zfssarest.CreateFilesystem(ctx, token,
req.GetName(), getVolumeSize(capacityRange), &req.Parameters)
if err != nil {
if httpStatus != http.StatusConflict {
fs.state = stateDeleted
@ -71,43 +72,43 @@ func (fs *zFilesystem) create(ctx context.Context, token *zfssarest.Token,
// with the same name. We get the information from the appliance, update
// the file system context and check its compatibility with the request.
if fs.state == stateCreated {
fsinfo, _, err = zfssarest.GetFilesystem(ctx, token,
req.Parameters["pool"], req.Parameters["project"], req.GetName())
fsinfo, _, err = zfssarest.GetFilesystem(ctx, token,
req.Parameters["pool"], req.Parameters["project"], req.GetName())
if err != nil {
return nil, err
}
fs.setInfo(fsinfo)
// pass mountpoint as a volume context value to use for nfs mount to the pod
req.Parameters["mountpoint"] = fs.mountpoint
// pass mountpoint as a volume context value to use for nfs mount to the pod
req.Parameters["mountpoint"] = fs.mountpoint
}
// The volume has already been created. The compatibility of the
// capacity range and accessModes is checked.
if !compareCapacityRange(capacityRange, fs.capacity) {
return nil,
status.Errorf(codes.AlreadyExists,
"Volume (%s) is already on target (%s),"+
return nil,
status.Errorf(codes.AlreadyExists,
"Volume (%s) is already on target (%s),"+
" capacity range incompatible (%v), requested (%v/%v)",
fs.id.Name, fs.id.Zfssa, fs.capacity,
capacityRange.RequiredBytes, capacityRange.LimitBytes)
fs.id.Name, fs.id.Zfssa, fs.capacity,
capacityRange.RequiredBytes, capacityRange.LimitBytes)
}
if !compareCapabilities(capabilities, fs.accessModes, false) {
return nil,
status.Errorf(codes.AlreadyExists,
"Volume (%s) is already on target (%s), accessModes are incompatible",
fs.id.Name, fs.id.Zfssa)
return nil,
status.Errorf(codes.AlreadyExists,
"Volume (%s) is already on target (%s), accessModes are incompatible",
fs.id.Name, fs.id.Zfssa)
}
} else {
fs.setInfo(fsinfo)
// pass mountpoint as a volume context value to use for nfs mount to the pod
req.Parameters["mountpoint"] = fs.mountpoint
// pass mountpoint as a volume context value to use for nfs mount to the pod
req.Parameters["mountpoint"] = fs.mountpoint
}
return &csi.CreateVolumeResponse{
Volume: &csi.Volume{
VolumeId: fs.id.String(),
CapacityBytes: fs.capacity,
VolumeContext: req.Parameters}}, nil
VolumeId: fs.id.String(),
CapacityBytes: fs.capacity,
VolumeContext: req.Parameters}}, nil
}
func (fs *zFilesystem) cloneSnapshot(ctx context.Context, token *zfssarest.Token,
@ -125,8 +126,8 @@ func (fs *zFilesystem) cloneSnapshot(ctx context.Context, token *zfssarest.Token
}
fs.setInfo(fsinfo)
// pass mountpoint as a volume context value to use for nfs mount to the pod
req.Parameters["mountpoint"] = fs.mountpoint
// pass mountpoint as a volume context value to use for nfs mount to the pod
req.Parameters["mountpoint"] = fs.mountpoint
return &csi.CreateVolumeResponse{
Volume: &csi.Volume{
@ -160,20 +161,29 @@ func (fs *zFilesystem) delete(ctx context.Context, token *zfssarest.Token) (*csi
return &csi.DeleteVolumeResponse{}, nil
}
func (lun *zFilesystem) cloneVolume(ctx context.Context, token *zfssarest.Token,
req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
utils.GetLogCTRL(ctx, 5).Println("cloneVolume", "request", protosanitizer.StripSecrets(req))
// Create a snapshot to base the clone on
// Clone the snapshot to the volume
utils.GetLogCTRL(ctx, 5).Println("fs.cloneVolume")
return nil, status.Error(codes.Unimplemented, "Filesystem cloneVolume not implemented yet")
}
// Publishes a file system. In this case there's nothing to do.
//
func (fs *zFilesystem) controllerPublishVolume(ctx context.Context, token *zfssarest.Token,
req *csi.ControllerPublishVolumeRequest, nodeName string) (*csi.ControllerPublishVolumeResponse, error) {
// Note: the volume context of the volume provisioned from an existing share does not have the mountpoint.
// Note: the volume context of the volume provisioned from an existing share does not have the mountpoint.
// Use the share (corresponding to volumeAttributes.share of PV configuration) to define the mountpoint.
return &csi.ControllerPublishVolumeResponse{}, nil
}
// Unpublishes a file system. In this case there's nothing to do.
//
func (fs *zFilesystem) controllerUnpublishVolume(ctx context.Context, token *zfssarest.Token,
req *csi.ControllerUnpublishVolumeRequest) (*csi.ControllerUnpublishVolumeResponse, error) {
utils.GetLogCTRL(ctx, 5).Println("fs.controllerUnpublishVolume")
@ -206,7 +216,7 @@ func (fs *zFilesystem) controllerExpandVolume(ctx context.Context, token *zfssar
reqCapacity := req.GetCapacityRange().RequiredBytes
if fs.capacity >= reqCapacity {
return &csi.ControllerExpandVolumeResponse{
CapacityBytes: fs.capacity,
CapacityBytes: fs.capacity,
NodeExpansionRequired: false,
}, nil
}
@ -221,8 +231,8 @@ func (fs *zFilesystem) controllerExpandVolume(ctx context.Context, token *zfssar
fs.capacity = fsinfo.Quota
return &csi.ControllerExpandVolumeResponse{
CapacityBytes: fsinfo.Quota,
NodeExpansionRequired: false,
CapacityBytes: fsinfo.Quota,
NodeExpansionRequired: false,
}, nil
}
@ -270,12 +280,12 @@ func (fs *zFilesystem) getSnapshotsList(ctx context.Context, token *zfssarest.To
return zfssaSnapshotList2csiSnapshotList(ctx, token.Name, snapList), nil
}
func (fs *zFilesystem) getState() volumeState { return fs.state }
func (fs *zFilesystem) getName() string { return fs.id.Name }
func (fs *zFilesystem) getHref() string { return fs.href }
func (fs *zFilesystem) getVolumeID() *utils.VolumeId { return fs.id }
func (fs *zFilesystem) getCapacity() int64 { return fs.capacity }
func (fs *zFilesystem) isBlock() bool { return false }
func (fs *zFilesystem) setInfo(volInfo interface{}) {
@ -309,7 +319,7 @@ func (fs *zFilesystem) lock(ctx context.Context) volumeState {
return fs.state
}
func (fs *zFilesystem) unlock(ctx context.Context) (int32, volumeState) {
fs.bolt.Unlock(ctx)
utils.GetLogCTRL(ctx, 5).Printf("%s is unlocked", fs.id.String())
return fs.refcount, fs.state
View File
@ -6,21 +6,21 @@
package service
import (
"fmt"
"os"

"github.com/container-storage-interface/spec/lib/go/csi"
"github.com/oracle/zfssa-csi-driver/pkg/utils"
"github.com/oracle/zfssa-csi-driver/pkg/zfssarest"
"golang.org/x/net/context"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
var (
// nodeCaps represents the capability of node service.
nodeCaps = []csi.NodeServiceCapability_RPC_Type{
csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME,
// csi.NodeServiceCapability_RPC_EXPAND_VOLUME,
csi.NodeServiceCapability_RPC_UNKNOWN,
}
)
@ -31,7 +31,7 @@ func NewZFSSANodeServer(zd *ZFSSADriver) *csi.NodeServer {
return &ns
}
func (zd *ZFSSADriver) NodeStageVolume(ctx context.Context, req *csi.NodeStageVolumeRequest) (
*csi.NodeStageVolumeResponse, error) {
utils.GetLogNODE(ctx, 5).Println("NodeStageVolume", "request", req)
@ -102,7 +102,7 @@ func (zd *ZFSSADriver) NodeUnstageVolume(ctx context.Context, req *csi.NodeUnsta
if err != nil {
return nil, status.Errorf(codes.Internal, "Cannot unmount staging target %q: %v", target, err)
}
notMnt, mntErr := zd.NodeMounter.IsLikelyNotMountPoint(target)
if mntErr != nil {
utils.GetLogNODE(ctx, 2).Println("Cannot determine staging target path",
View File
@ -10,9 +10,9 @@ import (
"os"
"strings"

"github.com/container-storage-interface/spec/lib/go/csi"
"github.com/oracle/zfssa-csi-driver/pkg/utils"
"github.com/oracle/zfssa-csi-driver/pkg/zfssarest"
"golang.org/x/net/context"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
@ -75,11 +75,18 @@ func (zd *ZFSSADriver) nodeUnpublishFilesystemVolume(token *zfssarest.Token,
return nil, status.Error(codes.InvalidArgument, "Target path not provided")
}
if _, pathErr := os.Stat(targetPath); os.IsNotExist(pathErr) {
//targetPath doesn't exist; nothing to do
utils.GetLogNODE(ctx, 2).Println("nodeUnpublishFilesystemVolume targetPath doesn't exist", targetPath)
return &csi.NodeUnpublishVolumeResponse{}, nil
}
err := zd.NodeMounter.Unmount(targetPath)
if err != nil {
utils.GetLogNODE(ctx, 2).Println("Cannot unmount volume",
"volume_id", req.GetVolumeId(), "error", err.Error())
if !strings.Contains(err.Error(), "not mounted") {
return nil, status.Error(codes.Internal, err.Error())
}
}
notMnt, mntErr := zd.NodeMounter.IsLikelyNotMountPoint(targetPath)
View File
@ -6,9 +6,9 @@
package service
import (
"github.com/container-storage-interface/spec/lib/go/csi"
"github.com/oracle/zfssa-csi-driver/pkg/utils"
"github.com/oracle/zfssa-csi-driver/pkg/zfssarest"
"golang.org/x/net/context"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
@ -58,7 +58,7 @@ import (
// volume source.
//
type volumeState int
const (
stateCreating volumeState = iota
@ -71,7 +71,7 @@ type zVolumeInterface interface {
create(ctx context.Context, token *zfssarest.Token,
req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error)
delete(ctx context.Context, token *zfssarest.Token,
) (*csi.DeleteVolumeResponse, error)
controllerPublishVolume(ctx context.Context, token *zfssarest.Token,
req *csi.ControllerPublishVolumeRequest, nodeName string) (*csi.ControllerPublishVolumeResponse, error)
controllerUnpublishVolume(ctx context.Context, token *zfssarest.Token,
@ -92,6 +92,8 @@ type zVolumeInterface interface {
req *csi.NodeGetVolumeStatsRequest) (*csi.NodeGetVolumeStatsResponse, error)
cloneSnapshot(ctx context.Context, token *zfssarest.Token,
req *csi.CreateVolumeRequest, zsnap *zSnapshot) (*csi.CreateVolumeResponse, error)
cloneVolume(ctx context.Context, token *zfssarest.Token,
req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error)
getDetails(ctx context.Context, token *zfssarest.Token) (int, error)
setInfo(volInfo interface{})
getSnapshotsList(context.Context, *zfssarest.Token) ([]*csi.ListSnapshotsResponse_Entry, error)
@ -107,20 +109,18 @@ type zVolumeInterface interface {
isBlock() bool
}
// This method must be called when the volume may not yet exist. Three scenarios
// are possible:
//
//   - The volume doesn't exist yet, or it exists in the appliance but is not in the
//     volume cache yet. Either way, a structure representing the volume is created
//     in the stateCreating state, stored in the cache and a reference returned to the
//     caller.
//   - A structure representing the volume already exists in the cache and is in the
//     stateCreated state. A reference is returned to the caller.
//   - A structure representing the volume already exists in the cache and is NOT in
//     the stateCreated state. This means the CO probably lost state and submitted
//     multiple simultaneous requests for this volume. In this case an error is returned.
func (zd *ZFSSADriver) newVolume(ctx context.Context, pool, project, name string,
block bool) (zVolumeInterface, error) {
@ -138,7 +138,7 @@ func (zd *ZFSSADriver) newVolume(ctx context.Context, pool, project, name string
zvol := zd.vCache.lookup(ctx, name)
if zvol != nil {
// Volume already known.
utils.GetLogCTRL(ctx, 5).Println("zd.newVolume", "request")
zvol.hold(ctx)
zd.vCache.Unlock(ctx)
if zvol.lock(ctx) != stateCreated {
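The three `newVolume` scenarios described above boil down to a lookup-or-create against a mutex-guarded cache keyed by state. A self-contained sketch of that state machine (types and names simplified; the real driver also holds per-volume locks and reference counts):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

type state int

const (
	stateCreating state = iota
	stateCreated
)

type volume struct {
	name  string
	state state
}

type cache struct {
	mu sync.Mutex
	m  map[string]*volume
}

// getOrCreate sketches the three scenarios: unknown name → new entry in
// stateCreating; known and stateCreated → returned as-is; known but
// still creating → error for the concurrent (lost-state) request.
func (c *cache) getOrCreate(name string) (*volume, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if v, ok := c.m[name]; ok {
		if v.state != stateCreated {
			return nil, errors.New("volume busy: concurrent request")
		}
		return v, nil
	}
	v := &volume{name: name, state: stateCreating}
	c.m[name] = v
	return v, nil
}

func main() {
	c := &cache{m: map[string]*volume{}}
	v, _ := c.getOrCreate("vol1") // scenario 1: created in stateCreating
	fmt.Println(v.state == stateCreating)
	_, err := c.getOrCreate("vol1") // scenario 3: still creating → error
	fmt.Println(err != nil)
	v.state = stateCreated
	v2, _ := c.getOrCreate("vol1") // scenario 2: created → same reference
	fmt.Println(v2 == v)
}
```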
@ -192,7 +192,7 @@ func (zd *ZFSSADriver) lookupVolume(ctx context.Context, token *zfssarest.Token,
}
switch zvol.getState() {
case stateCreating: // We check with the appliance.
httpStatus, err := zvol.getDetails(ctx, token)
if err != nil {
zd.releaseVolume(ctx, zvol)
@ -202,7 +202,7 @@ func (zd *ZFSSADriver) lookupVolume(ctx context.Context, token *zfssarest.Token,
return nil, err
}
return zvol, nil
case stateCreated: // Another Goroutine beat us to it.
return zvol, nil
default:
zd.releaseVolume(ctx, zvol)
@ -232,15 +232,16 @@ func (zd *ZFSSADriver) releaseVolume(ctx context.Context, zvol zVolumeInterface)
// If a snapshot with the passed in name already exists, it is returned. If it doesn't exist,
// a new snapshot structure is created and returned. This method could fail for two reasons:
//
//  1. A snapshot with the passed in name already exists but the volume source
//     is not the volume source passed in.
//  2. A snapshot with the passed in name already exists but is not in the stateCreated
//     state (or stable state). As with volumes, this would mean the CO lost state and
//     issued simultaneous requests for the same snapshot.
//
// If the call is successful, the caller has exclusive access to the snapshot and its volume
// source. When the snapshot returned is not needed anymore, the method releaseSnapshot()
// must be called.
func (zd *ZFSSADriver) newSnapshot(ctx context.Context, token *zfssarest.Token,
name, sourceId string) (*zSnapshot, error) {
@ -287,13 +288,13 @@ func (zd *ZFSSADriver) newSnapshot(ctx context.Context, token *zfssarest.Token,
// exclusive access to the returned snapshot and its volume source. This method could fail
// for the following reasons:
//
//  1. The source volume cannot be found locally or in the appliance.
//  2. The snapshot exists but is in an unstable state. This would mean the
//     CO lost state and issued multiple simultaneous requests for the same
//     snapshot.
//  3. There's an inconsistency between what the appliance thinks the volume
//     source is and what the existing snapshot says it is (a panic is issued).
//  4. The snapshot cannot be found locally or in the appliance.
func (zd *ZFSSADriver) lookupSnapshot(ctx context.Context, token *zfssarest.Token,
snapshotId string) (*zSnapshot, error) {
@ -344,14 +345,14 @@ func (zd *ZFSSADriver) lookupSnapshot(ctx context.Context, token *zfssarest.Toke
}
switch zsnap.getState() {
case stateCreating: // We check with the appliance.
_, err = zsnap.getDetails(ctx, token)
if err != nil {
zd.releaseSnapshot(ctx, zsnap)
return nil, err
}
return zsnap, nil
case stateCreated: // Another Goroutine beat us to it.
return zsnap, nil
default:
zd.releaseSnapshot(ctx, zsnap)
@ -396,8 +397,8 @@ func (zd *ZFSSADriver) getVolumesList(ctx context.Context) ([]*csi.ListVolumesRe
for _, zvol := range zd.vCache.vHash {
entry := new(csi.ListVolumesResponse_Entry)
entry.Volume = &csi.Volume{
VolumeId: zvol.getVolumeID().String(),
CapacityBytes: zvol.getCapacity(),
}
entries = append(entries, entry)
}
@ -414,8 +415,8 @@ func (zd *ZFSSADriver) updateVolumeList(ctx context.Context) error {
lunChan := make(chan error)
go zd.updateFilesystemList(ctx, fsChan)
go zd.updateLunList(ctx, lunChan)
errfs := <-fsChan
errlun := <-lunChan
if errfs != nil {
return errfs
@ -494,12 +495,12 @@ func (zd *ZFSSADriver) getSnapshotList(ctx context.Context) ([]*csi.ListSnapshot
entries := make([]*csi.ListSnapshotsResponse_Entry, 0, len(zd.sCache.sHash))
for _, zsnap := range zd.sCache.sHash {
entry := new(csi.ListSnapshotsResponse_Entry)
entry.Snapshot = &csi.Snapshot{
SizeBytes: zsnap.getSize(),
SnapshotId: zsnap.getStringId(),
SourceVolumeId: zsnap.getStringSourceId(),
CreationTime: zsnap.getCreationTime(),
ReadyToUse: zsnap.getState() == stateCreated,
}
entries = append(entries, entry)
}
@ -581,8 +582,8 @@ func compareCapabilities(req []*csi.VolumeCapability, cur []csi.VolumeCapability
}
type volumeHashTable struct {
vMutex sync.RWMutex
vHash map[string]zVolumeInterface
}
func (h *volumeHashTable) add(ctx context.Context, key string, zvol zVolumeInterface) {
@ -614,8 +615,8 @@ func (h *volumeHashTable) RUnlock(ctx context.Context) {
}
type snapshotHashTable struct {
sMutex sync.RWMutex
sHash map[string]*zSnapshot
}
func (h *snapshotHashTable) add(ctx context.Context, key string, zsnap *zSnapshot) {
View File
@ -17,41 +17,42 @@ import (
)
const (
VolumeMinComponents = 2
VolumeIdLen = 6
VolumeHandleLen = 8
SnapshotIdLen = 7
VolumeHrefLen = 10
SnapshotHrefLen = 12
)
const (
BlockVolume string = "lun"
MountVolume = "mnt"
)
const (
ResourceNamePattern string = `^[a-zA-Z0-9_\-\.\:]+$`
ResourceNameLength int = 64
)
// Volume ID
// ---------
// This structure is what identifies a volume (lun or filesystem).
type VolumeId struct {
Type string
Zfssa string
Pool string
Project string
Name string
}
func NewVolumeId(vType, zfssaName, pool, project, name string) *VolumeId {
return &VolumeId{
Type: vType,
Zfssa: zfssaName,
Pool: pool,
Project: project,
Name: name,
}
}
@ -59,7 +60,7 @@ func VolumeIdStringFromHref(zfssa, hRef string) (string, error) {
result := strings.Split(hRef, "/")
if len(result) < VolumeHrefLen {
return "", status.Errorf(codes.NotFound,
"Volume ID (%s) contains insufficient components (%d)", hRef, VolumeHrefLen)
}
var vType string
@ -69,7 +70,7 @@ func VolumeIdStringFromHref(zfssa, hRef string) (string, error) {
case "luns":
vType = BlockVolume
default:
return "", status.Errorf(codes.NotFound, "Invalid snapshot href (%s)", hRef)
}
return fmt.Sprintf("/%v/%v/%v/%v/%v",
@ -83,6 +84,12 @@ func VolumeIdStringFromHref(zfssa, hRef string) (string, error) {
func VolumeIdFromString(volumeId string) (*VolumeId, error) {
result := strings.Split(volumeId, "/")
if len(result) < VolumeMinComponents {
return nil, status.Errorf(codes.InvalidArgument,
"Volume ID/Handle (%s) contains insufficient components to continue handling (%d)",
volumeId, VolumeMinComponents)
}
var pool, project, share string
switch result[1] {
case "nfs", "iscsi":
@ -116,11 +123,11 @@ func VolumeIdFromString(volumeId string) (*VolumeId, error) {
}
return &VolumeId{
Type: result[1],
Zfssa: result[2],
Pool: pool,
Project: project,
Name: share,
}, nil
}
@ -130,8 +137,10 @@ func (zvi *VolumeId) String() string {
func (zvi *VolumeId) IsBlock() bool {
switch zvi.Type {
case BlockVolume:
return true
case MountVolume:
return false
}
return false
}
@ -140,14 +149,14 @@ func (zvi *VolumeId) IsBlock() bool {
// -----------
// This structure is what identifies a volume (lun or filesystem).
type SnapshotId struct {
VolumeId *VolumeId
Name string
}
func NewSnapshotId(volumeId *VolumeId, snapshotName string) *SnapshotId {
return &SnapshotId{
VolumeId: volumeId,
Name: snapshotName,
}
}
@ -161,23 +170,23 @@ func SnapshotIdFromString(snapshotId string) (*SnapshotId, error) {
return &SnapshotId{
VolumeId: &VolumeId{
Type: result[1],
Zfssa: result[2],
Pool: result[3],
Project: result[4],
Name: result[5]},
Name: result[6]}, nil
}
func SnapshotIdFromHref(zfssa, hRef string) (*SnapshotId, error) {
result := strings.Split(hRef, "/")
if len(result) < SnapshotHrefLen {
return nil, status.Errorf(codes.NotFound,
"Snapshot ID (%s) contains insufficient components (%d)", hRef, SnapshotHrefLen)
}
if result[10] != "snapshots" {
return nil, status.Errorf(codes.NotFound, "Invalid snapshot href (%s)", hRef)
}
var vType string
@ -187,28 +196,28 @@ func SnapshotIdFromHref(zfssa, hRef string) (*SnapshotId, error) {
case "luns":
vType = BlockVolume
default:
return nil, status.Errorf(codes.NotFound, "Invalid snapshot href (%s)", hRef)
}
return &SnapshotId{
VolumeId: &VolumeId{
Type: vType,
Zfssa: zfssa,
Pool: result[5],
Project: result[7],
Name: result[9]},
Name: result[11]}, nil
}
func SnapshotIdStringFromHref(zfssa, hRef string) (string, error) {
result := strings.Split(hRef, "/")
if len(result) < SnapshotHrefLen {
return "", status.Errorf(codes.NotFound,
"Snapshot ID (%s) contains insufficient components (%d)", hRef, SnapshotHrefLen)
}
if result[10] != "snapshots" {
return "", status.Errorf(codes.NotFound, "Invalid snapshot href (%s)", hRef)
}
var vType string
@ -218,7 +227,7 @@ func SnapshotIdStringFromHref(zfssa, hRef string) (string, error) {
case "luns":
vType = BlockVolume
default:
return "", status.Errorf(codes.NotFound, "Invalid snapshot href (%s)", hRef)
}
return fmt.Sprintf("/%v/%v/%v/%v/%v/%v",
@ -247,15 +256,15 @@ func (zsi *SnapshotId) GetVolumeId() *VolumeId {
func DateToUnix(date string) (*timestamp.Timestamp, error) {
year, err := strconv.Atoi(date[0:4])
if err == nil {
month, err := strconv.Atoi(date[5:7])
if err == nil {
day, err := strconv.Atoi(date[8:10])
if err == nil {
hour, err := strconv.Atoi(date[11:13])
if err == nil {
min, err := strconv.Atoi(date[14:16])
if err == nil {
sec, err := strconv.Atoi(date[17:19])
if err == nil {
seconds := time.Date(year, time.Month(month), day, hour, min, sec,
0, time.Local).Unix()
View File
@ -26,7 +26,7 @@ IMAGE_NAME=$(REGISTRY_NAME)/$*
build-%:
mkdir -p bin
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -ldflags '-X main.version=$(REV) -extldflags "-static"' -o ./bin/$* ./cmd/$*
container-%: build-%
$(CONTAINER_BUILD) build -t $*:latest -f $(shell if [ -e ./cmd/zfssa-csi-driver/$*/Dockerfile ]; then echo ./cmd/zfssa-csi-driver/$*/Dockerfile; else echo Dockerfile; fi) --label revision=$(REV) . --build-arg var_proxy=$(CONTAINER_PROXY)