Let volume plugins decide if they want to mount volumes with "-o
context=XYZ" or let the container runtime relabel the volume on container
startup.
This is done in NewMounter, as that is the call where a volume plugin receives the other MountOptions.
The field in fact says that the container runtime should relabel a volume
when running a container with it; it does not say that the volume supports
SELinux. For example, NFS can support SELinux, but we don't want NFS
volumes relabeled, because they can be shared among several Pods.
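A minimal sketch of the per-plugin decision, under the assumption of hypothetical names (`mountOptionsWithSELinux`, `seLinuxSupported`, `volumeContext` are illustrative, not the real kubelet fields):

```go
package main

import "fmt"

// mountOptionsWithSELinux illustrates the idea: a plugin that opts in to
// "-o context=" mounts appends the SELinux context to its mount options;
// otherwise the container runtime is left to relabel the volume on
// container startup.
func mountOptionsWithSELinux(base []string, seLinuxSupported bool, volumeContext string) []string {
	opts := append([]string{}, base...)
	if seLinuxSupported && volumeContext != "" {
		opts = append(opts, fmt.Sprintf("context=%q", volumeContext))
	}
	return opts
}

func main() {
	fmt.Println(mountOptionsWithSELinux([]string{"ro"}, true, "system_u:object_r:container_file_t:s0:c10,c20"))
	// [ro context="system_u:object_r:container_file_t:s0:c10,c20"]
}
```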
Volumes that are provisioned with `VolumeMode: Block` often have a
MetricsProvider interface declared in their type. However, the
MetricsProvider is expected to implement a GetMetrics() function, and in
the cases where a storage driver does not implement GetMetrics(), a panic
can occur.
Usual type-assertions are not sufficient in this case. All assertions
assume the interface is present, and there is no straightforward way to
verify that a valid GetMetrics() function is provided.
By adding SupportsMetrics(), storage driver implementations must be
reviewed carefully for metrics support.
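A hedged sketch of the intended pattern: callers check SupportsMetrics() before calling GetMetrics(), instead of relying on a type assertion that always succeeds. The types below are simplified stand-ins, not the real volume package definitions.

```go
package volume

import "errors"

// Metrics is a simplified stand-in for the real volume metrics type.
type Metrics struct {
	Capacity int64
}

// MetricsProvider is declared on many block volume implementations even
// when GetMetrics() is not actually implemented.
type MetricsProvider interface {
	GetMetrics() (*Metrics, error)
	// SupportsMetrics lets callers detect whether GetMetrics() is usable,
	// so an unimplemented driver no longer has to panic.
	SupportsMetrics() bool
}

// readCapacity only calls GetMetrics() after confirming support.
func readCapacity(p MetricsProvider) (int64, error) {
	if !p.SupportsMetrics() {
		return 0, errors.New("volume plugin does not support metrics")
	}
	m, err := p.GetMetrics()
	if err != nil {
		return 0, err
	}
	return m.Capacity, nil
}
```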
- Move SetUpDevice to BlockVolumeStager
- Move MapPodDevice to BlockVolumePublisher
- Move TearDownDevice to BlockVolumeUnstager
- Move UnmapPodDevice to BlockVolumeUnpublisher
- Implement BlockVolumePublisher only in local and csi plugin
- Implement BlockVolumeUnstager only in fc, iscsi, rbd, and csi plugin
- Implement BlockVolumeStager and BlockVolumeUnpublisher only in csi plugin
- Rename MapDevice to MapPodDevice in BlockVolumeMapper
- Add UnmapPodDevice in BlockVolumeUnmapper (This will be used by csi driver later)
- Add CustomBlockVolumeMapper and CustomBlockVolumeUnmapper interface
- Move SetUpDevice and MapPodDevice to CustomBlockVolumeMapper
- Move TearDownDevice and UnmapPodDevice to CustomBlockVolumeUnmapper
- Implement CustomBlockVolumeMapper only in local and csi plugin
- Implement CustomBlockVolumeUnmapper only in fc, iscsi, rbd, and csi plugin
- Change MapPodDevice to return path and SetUpDevice not to return path
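A rough sketch of the resulting shape of the interfaces; the method signatures are simplified here (the real ones carry volume specs, pod UIDs, and paths), so treat this as an illustration of the split rather than the exact in-tree definitions:

```go
package volume

// Base interfaces that every block volume plugin implements; their
// remaining methods are omitted here for brevity.
type BlockVolumeMapper interface{}
type BlockVolumeUnmapper interface{}

// CustomBlockVolumeMapper carries the staging/mapping steps that only some
// plugins need (per the list above: local and csi).
type CustomBlockVolumeMapper interface {
	BlockVolumeMapper
	// SetUpDevice stages the device; it no longer returns a path.
	SetUpDevice() error
	// MapPodDevice maps the device for the pod and now returns the path.
	MapPodDevice() (path string, err error)
}

// CustomBlockVolumeUnmapper carries the teardown steps that only some
// plugins need (per the list above: fc, iscsi, rbd, and csi).
type CustomBlockVolumeUnmapper interface {
	BlockVolumeUnmapper
	TearDownDevice(mapPath, devicePath string) error
	UnmapPodDevice() error
}
```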
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Volume topology aware dynamic provisioning: work based on new API
**What this PR does / why we need it**:
The PR has been split into 3 parts:
Part1: https://github.com/kubernetes/kubernetes/pull/63232 for basic scheduler and PV controller plumbing
Part2: https://github.com/kubernetes/kubernetes/pull/63233 for API change
and the PR itself includes work based on the API change:
- Dynamic provisioning allowed topologies scheduler work
- Update provisioning interface to be aware of selected node and topology
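
A rough sketch of what "aware of selected node and topology" means for the provisioning call; the type and field names below are simplified stand-ins, not the exact in-tree API:

```go
package volume

// TopologySelectorTerm is a simplified stand-in for a StorageClass
// allowedTopologies entry, e.g.
// "topology.kubernetes.io/zone" -> ["us-central1-a"].
type TopologySelectorTerm struct {
	MatchLabelExpressions map[string][]string
}

// VolumeOptions gains the scheduler's node choice and the allowed
// topologies so the provisioner can create the volume in the right place.
type VolumeOptions struct {
	PVCName           string
	SelectedNode      string                 // node picked by the scheduler (delayed binding)
	AllowedTopologies []TopologySelectorTerm // copied from the StorageClass
}

// Provisioner is a simplified provisioning interface.
type Provisioner interface {
	Provision(opts VolumeOptions) (pvName string, err error)
}
```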
**Which issue(s) this PR fixes**
Feature: https://github.com/kubernetes/features/issues/561
Design: https://github.com/kubernetes/community/issues/2168
**Special notes for your reviewer**:
/sig storage
/sig scheduling
/assign @msau42 @jsafrane @saad-ali @bsalamat
@kubernetes/sig-storage-pr-reviews
@kubernetes/sig-scheduling-pr-reviews
**Release note**:
```release-note
Volume topology aware dynamic provisioning
```
This change is a prerequisite for implementing the iSCSI attacher
and detacher.
In order to use CHAP authentication in the iSCSI plugin after
implementing the attacher and detacher, a secret is needed in
AttachDisk(), which is called from WaitForAttach().
To obtain the secret, pod information is required, but
WaitForAttach() does not receive the pod.
This patch adds 'pod' as an argument of WaitForAttach()
and updates the drivers that implement WaitForAttach().
Fixes #48953
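A hedged sketch of the signature change; the surrounding types are reduced to stubs and the exact parameter types may differ from the in-tree interface:

```go
package volume

import "time"

// Stubs standing in for the real volume.Spec and v1.Pod types.
type Spec struct{ Name string }
type Pod struct{ Namespace, Name string }

type Attacher interface {
	// Before this patch (roughly):
	//   WaitForAttach(spec *Spec, devicePath string, timeout time.Duration) (string, error)
	// After: the pod is passed through so AttachDisk() can look up the
	// CHAP secret from the pod's namespace.
	WaitForAttach(spec *Spec, devicePath string, pod *Pod, timeout time.Duration) (string, error)
}
```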
Automatic merge from submit-queue (batch tested with PRs 42369, 42375, 42397, 42435, 42455)
[Bug Fix]: Avoid evicting more pods than necessary by adding Timestamps for fsstats and ignoring stale stats
Continuation of #33121. Credit for most of this goes to @sjenning. I added volume fs timestamps.
**Why is this a bug?**
This PR attempts to fix part of https://github.com/kubernetes/kubernetes/issues/31362 which results in multiple pods getting evicted unnecessarily whenever the node runs into resource pressure. This PR reduces the chances of such disruptions by avoiding reacting to old/stale metrics.
Without this PR, kubernetes nodes under resource pressure will cause unnecessary disruptions to user workloads.
This PR will also help deflake a node e2e test suite.
The eviction manager currently avoids evicting pods if metrics are old. However, timestamp data is not available for filesystem data, and this causes lots of extra evictions.
See the [inode eviction test flakes](https://k8s-testgrid.appspot.com/google-node#kubelet-flaky-gce-e2e) for examples.
This should probably be treated as a bugfix, as it should help mitigate extra evictions.
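A minimal sketch of the idea (the threshold and field names here are assumptions, not the actual kubelet code): filesystem stats carry a collection timestamp, and the eviction manager ignores them when they are older than some bound instead of acting on stale numbers.

```go
package eviction

import "time"

// FsStats is a simplified stand-in for the kubelet's filesystem stats,
// now carrying the time the stats were collected.
type FsStats struct {
	Time           time.Time
	AvailableBytes uint64
	InodesFree     uint64
}

// statsAreStale reports whether the stats are too old to act on; evicting
// based on stale numbers is what caused the unnecessary evictions.
func statsAreStale(s FsStats, now time.Time, maxAge time.Duration) bool {
	return now.Sub(s.Time) > maxAge
}
```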
cc: @kubernetes/sig-storage-pr-reviews @kubernetes/sig-node-pr-reviews @vishh @derekwaynecarr @sjenning
This implements bulk volume polling using ideas presented by
Justin in https://github.com/kubernetes/kubernetes/pull/39564,
but it changes the implementation to use an interface
so that it doesn't affect other implementations.
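A hedged sketch of the interface-based approach; the names and map shapes below are illustrative, not the exact plugin API:

```go
package volume

// BulkVolumeVerifier is an optional interface: plugins that implement it
// let the attach/detach controller verify all volumes on all nodes in one
// call, instead of polling each volume separately. Plugins that do not
// implement it are unaffected and keep the per-volume path.
type BulkVolumeVerifier interface {
	// BulkVerifyVolumes takes the volumes expected to be attached, grouped
	// by node name, and returns which of them are actually still attached.
	BulkVerifyVolumes(volumesByNode map[string][]string) (map[string]map[string]bool, error)
}
```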