CSI NVMf driver
Overview
This is a repository for the NVMe-oF CSI Driver. Currently it implements the bare minimum of the CSI spec.
Requirements
The CSI NVMf driver requires Linux kernel 5.0 or newer on both the initiator and the target. Before using this CSI driver, you should create an NVMf remote disk on the target side and record its traddr/trport/trtype/nqn/deviceuuid.
Load the NVMf kernel modules on each K8s node
# when using TCP as transport
$ modprobe nvme-tcp
# when using RDMA as transport
$ modprobe nvme-rdma
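To have these modules loaded automatically at boot rather than running modprobe by hand, a modules-load.d fragment can be dropped on the node (a sketch; assumes a systemd-based distribution):

```
# /etc/modules-load.d/nvmf.conf
# Load NVMe-oF transport modules at boot (keep only the ones you use).
nvme-tcp
nvme-rdma
```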
Test NVMf driver using csc
Get the csc tool from https://github.com/rexray/gocsi/tree/master/csc
$ go get github.com/rexray/gocsi/csc
1. Compile NVMf driver
$ make
2.1 Start NVMf driver
$ ./output/nvmfplugin --endpoint tcp://127.0.0.1:10000 --nodeid CSINode
2.2 Prepare NVMf backend target (Kernel or SPDK)
Kernel
Follow the guide to set up a kernel target and deploy a kernel NVMf storage service on localhost.
SPDK
Follow the guide to set up an SPDK target and deploy an SPDK NVMf storage service on localhost.
You can get the information needed for step 3.2 from SPDK's scripts/rpc.py nvmf_get_subsystems.
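The nvmf_get_subsystems RPC returns JSON, so the fields needed for step 3.2 can be pulled out with a short script. A sketch against a sample payload (the JSON shape is an assumption based on typical SPDK output, not taken from this repo):

```shell
# Write a sample payload shaped like `scripts/rpc.py nvmf_get_subsystems`
# output (shape is an assumption), then extract the fields step 3.2 needs.
cat > /tmp/subsystems.json <<'EOF'
[
  {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "listen_addresses": [
      { "trtype": "TCP", "traddr": "192.168.122.18", "trsvcid": "49153" }
    ],
    "namespaces": [
      { "nsid": 1, "uuid": "58668891-c3e4-45d0-b90e-824525c16080" }
    ]
  }
]
EOF
python3 - <<'EOF'
import json

with open("/tmp/subsystems.json") as f:
    subsystems = json.load(f)

# Map the subsystem fields onto the volume-context keys used in step 3.2.
for sub in subsystems:
    addr = sub["listen_addresses"][0]
    print("nqn:", sub["nqn"])
    print("targetTrAddr:", addr["traddr"])
    print("targetTrPort:", addr["trsvcid"])
    print("targetTrType:", addr["trtype"].lower())
    for ns in sub["namespaces"]:
        print("deviceUUID:", ns["uuid"])
EOF
```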
3.1 Get plugin info
$ csc identity plugin-info --endpoint tcp://127.0.0.1:10000
"csi.nvmf.com" "v1.0.0"
3.2 NodePublish a volume
The information here is what you recorded in step 2.2.
export TargetTrAddr="NVMf Target Server IP (Ex: 192.168.122.18)"
export TargetTrPort="NVMf Target Server Ip Port (Ex: 49153)"
export TargetTrType="NVMf Target Type (Ex: tcp | rdma)"
export DeviceUUID="NVMf Target Device UUID (Ex: 58668891-c3e4-45d0-b90e-824525c16080)"
export NQN="NVMf Target NQN"
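Before running the publish call below, the parameters can be sanity-checked. A self-contained sketch (the example values are assumptions for illustration; the variable names match the exports above):

```shell
# Example values (assumptions for illustration only).
TargetTrAddr="192.168.122.18"
TargetTrPort="49153"
TargetTrType="tcp"
NQN="nqn.2016-06.io.spdk:cnode1"

# Transport type must be one this driver's modprobe step covered.
case "$TargetTrType" in
  tcp|rdma) ;;
  *) echo "unsupported trtype: $TargetTrType" >&2; exit 1 ;;
esac
# Port must be numeric.
case "$TargetTrPort" in
  ''|*[!0-9]*) echo "port must be numeric: $TargetTrPort" >&2; exit 1 ;;
esac
# NQNs conventionally start with "nqn." (warn only).
case "$NQN" in
  nqn.*) ;;
  *) echo "warning: NQN does not start with 'nqn.'" >&2 ;;
esac
echo "parameters look sane" > /tmp/nvmf_params_ok
cat /tmp/nvmf_params_ok
```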
csc node publish --endpoint tcp://127.0.0.1:10000 --target-path /mnt/nvmf --vol-context targetTrAddr=$TargetTrAddr \
--vol-context targetTrPort=$TargetTrPort --vol-context targetTrType=$TargetTrType \
--vol-context deviceUUID=$DeviceUUID --vol-context nqn=$NQN nvmftestvol
nvmftestvol
You can find the new disk mounted at /mnt/nvmf.
3.3 NodeUnpublish a volume
$ csc node unpublish --endpoint tcp://127.0.0.1:10000 --target-path /mnt/nvmf nvmftestvol
nvmftestvol
Test NVMf driver in kubernetes cluster
TODO: support dynamic provision.
1. Build Docker image
$ make container
2.1 Load Driver
$ kubectl create -f deploy/kubernetes/
2.2 Unload Driver
$ kubectl delete -f deploy/kubernetes/
3.1 Create Storage Class (Dynamic Provisioning)
Not supported yet
- Create
$ kubectl create -f examples/kubernetes/example/storageclass.yaml
- Check
$ kubectl get sc
3.2 Create PV and PVC (Static Provisioning)
Supported
- Create PV
$ kubectl create -f examples/kubernetes/example/pv.yaml
- Check
$ kubectl get pv
- Create PVC
$ kubectl create -f examples/kubernetes/example/pvc.yaml
- Check
$ kubectl get pvc
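The PV manifest carries the same volume attributes passed to csc in step 3.2. A trimmed sketch of what examples/kubernetes/example/pv.yaml might look like (field values are placeholders; the attribute keys and the driver name come from the csc example and plugin-info output above):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: csi-nvmf-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: csi.nvmf.com
    volumeHandle: nvmftestvol
    volumeAttributes:
      targetTrAddr: "192.168.122.18"
      targetTrPort: "49153"
      targetTrType: "tcp"
      deviceUUID: "58668891-c3e4-45d0-b90e-824525c16080"
      nqn: "nqn.2016-06.io.spdk:cnode1"
```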
4. Create Nginx Container
- Create Deployment
$ kubectl create -f examples/kubernetes/example/nginx.yaml
- Check
$ kubectl exec -it nginx-451df123421 -- /bin/bash
$ lsblk
Community, discussion, contribution, and support
You can reach the maintainers of this project at:
Code of conduct
Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.