Update the comments on how to check disk conflict

Signed-off-by: Dave Chen <dave.chen@arm.com>
Dave Chen 2019-06-17 17:45:37 +08:00
parent 5ae55b992b
commit a9aea022ce

@@ -279,11 +279,11 @@ func isVolumeConflict(volume v1.Volume, pod *v1.Pod) bool {
 // NoDiskConflict evaluates if a pod can fit due to the volumes it requests, and those that
 // are already mounted. If there is already a volume mounted on that node, another pod that uses the same volume
 // can't be scheduled there.
-// This is GCE, Amazon EBS, and Ceph RBD specific for now:
+// This is GCE, Amazon EBS, ISCSI and Ceph RBD specific for now:
 // - GCE PD allows multiple mounts as long as they're all read-only
 // - AWS EBS forbids any two pods mounting the same volume ID
-// - Ceph RBD forbids if any two pods share at least same monitor, and match pool and image.
-// - ISCSI forbids if any two pods share at least same IQN, LUN and Target
+// - Ceph RBD forbids if any two pods share at least same monitor, and match pool and image, and the image is read-only
+// - ISCSI forbids if any two pods share at least same IQN and ISCSI volume is read-only
 // TODO: migrate this into some per-volume specific code?
 func NoDiskConflict(pod *v1.Pod, meta PredicateMetadata, nodeInfo *schedulernodeinfo.NodeInfo) (bool, []PredicateFailureReason, error) {
 	for _, v := range pod.Spec.Volumes {
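
A minimal sketch of the ISCSI and Ceph RBD rules described in the updated comment, not the upstream isVolumeConflict implementation: two ISCSI volumes conflict when they share an IQN unless both are read-only, and two RBD volumes conflict when they point at the same image (overlapping monitors, same pool and image name) unless both are read-only. The helper names iscsiConflict, rbdConflict, and monitorsOverlap are illustrative; the field names follow the k8s.io/api/core/v1 volume source types.

// Sketch only: illustrates the conflict rules in the comment above,
// not the actual scheduler code.
package predicates

import v1 "k8s.io/api/core/v1"

// iscsiConflict reports a conflict when two ISCSI volumes share the same IQN
// and at least one of them is mounted read-write.
func iscsiConflict(a, b *v1.ISCSIVolumeSource) bool {
	if a == nil || b == nil {
		return false
	}
	return a.IQN == b.IQN && !(a.ReadOnly && b.ReadOnly)
}

// rbdConflict reports a conflict when two RBD volumes refer to the same image
// (overlapping monitors, same pool, same image name) and at least one of them
// is mounted read-write.
func rbdConflict(a, b *v1.RBDVolumeSource) bool {
	if a == nil || b == nil {
		return false
	}
	sameImage := monitorsOverlap(a.CephMonitors, b.CephMonitors) &&
		a.RBDPool == b.RBDPool && a.RBDImage == b.RBDImage
	return sameImage && !(a.ReadOnly && b.ReadOnly)
}

// monitorsOverlap returns true if the two Ceph monitor lists share any entry.
func monitorsOverlap(a, b []string) bool {
	seen := make(map[string]struct{}, len(a))
	for _, m := range a {
		seen[m] = struct{}{}
	}
	for _, m := range b {
		if _, ok := seen[m]; ok {
			return true
		}
	}
	return false
}

The GCE PD and AWS EBS cases listed in the comment follow the same shape: GCE PD conflicts on the same PD name unless both mounts are read-only, while AWS EBS conflicts on the same volume ID unconditionally.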