Merge pull request #56864 from juanvallejo/jvallejo/add-selector-kubectl-drain

Automatic merge from submit-queue (batch tested with PRs 55751, 57337, 56406, 56864, 57347). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Add --pod-selector to kubectl drain

**Release note**:
```release-note
Added the ability to select pods on a chosen node to be drained, based on a given pod label selector
```

This patch adds the ability to select pods on a chosen node to be drained, based on a given pod label selector. Related downstream issue: https://github.com/openshift/origin/issues/17554
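
For context, a minimal usage sketch of the new flag (the node name and labels below are placeholders, not taken from this patch's tests):

```sh
# Cordon the node, then evict/delete only those pods on it that match the selector.
# "my-node" and the "app=cache" label are illustrative values.
kubectl drain my-node --pod-selector 'app=cache'

# The flag accepts the usual label-selector syntax, including set-based expressions:
kubectl drain my-node --pod-selector 'app in (cache, worker)'
```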

Further, it removes the explicit, kind-specific pod-controller check. The `drain` command currently fails if a pod has a controller of a `kind` [not explicitly handled in the command itself](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/drain.go#L331). This makes `drain` unusable when a node contains pods managed by third-party or "unknown" controllers.

Based on [this comment](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/drain.go#L353), the expectation was to fail if a pod's controller could not be found, for any reason. I believe the `drain` command should not care about the existence of a pod's controller; it should only care whether a pod has one, and act according to that controller's kind. This solves a downstream bug: https://github.com/openshift/origin/issues/17563
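
As a rough illustration of that distinction (not code from this patch), whether a pod has a controller, and of what kind, can be read from its `ownerReferences` metadata without `kubectl` knowing anything about the controller type; `my-pod` below is a placeholder name:

```sh
# Print the kind of the pod's first owner reference (empty if the pod has no controller).
kubectl get pod my-pod -o jsonpath='{.metadata.ownerReferences[0].kind}'
```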

cc @fabianofranz @deads2k @kubernetes/sig-cli-misc
Committed by Kubernetes Submit Queue (via GitHub) on 2017-12-18 18:50:45 -08:00
2 changed files with 91 additions and 52 deletions


@@ -4300,6 +4300,51 @@ run_cluster_management_tests() {
kube::test::get_object_assert nodes "{{range.items}}{{$id_field}}:{{end}}" '127.0.0.1:'
# create test pods we can work with
kubectl create -f - "${kube_flags[@]}" << __EOF__
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "test-pod-1",
    "labels": {
      "e": "f"
    }
  },
  "spec": {
    "containers": [
      {
        "name": "container-1",
        "resources": {},
        "image": "test-image"
      }
    ]
  }
}
__EOF__
kubectl create -f - "${kube_flags[@]}" << __EOF__
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "test-pod-2",
    "labels": {
      "c": "d"
    }
  },
  "spec": {
    "containers": [
      {
        "name": "container-1",
        "resources": {},
        "image": "test-image"
      }
    ]
  }
}
__EOF__
### kubectl cordon update with --dry-run does not mark node unschedulable
# Pre-condition: node is schedulable
kube::test::get_object_assert "nodes 127.0.0.1" "{{.spec.unschedulable}}" '<no value>'
@@ -4314,6 +4359,20 @@ run_cluster_management_tests() {
kube::test::get_object_assert nodes "{{range.items}}{{$id_field}}:{{end}}" '127.0.0.1:'
kube::test::get_object_assert "nodes 127.0.0.1" "{{.spec.unschedulable}}" '<no value>'
### kubectl drain with --pod-selector only evicts pods that match the given selector
# Pre-condition: node is schedulable
kube::test::get_object_assert "nodes 127.0.0.1" "{{.spec.unschedulable}}" '<no value>'
# Pre-condition: test-pod-1 and test-pod-2 exist
kube::test::get_object_assert "pods" "{{range .items}}{{.metadata.name}},{{end}}" 'test-pod-1,test-pod-2,'
kubectl drain "127.0.0.1" --pod-selector 'e in (f)'
# only "test-pod-1" should have been matched and deleted - test-pod-2 should still exist
kube::test::get_object_assert "pods/test-pod-2" "{{.metadata.name}}" 'test-pod-2'
# delete pod no longer in use
kubectl delete pod/test-pod-2
# Post-condition: node is schedulable
kubectl uncordon "127.0.0.1"
kube::test::get_object_assert "nodes 127.0.0.1" "{{.spec.unschedulable}}" '<no value>'
### kubectl uncordon update with --dry-run is a no-op
# Pre-condition: node is already schedulable
kube::test::get_object_assert "nodes 127.0.0.1" "{{.spec.unschedulable}}" '<no value>'