Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
kubeadm config: add support for more than one APIEndpoint
**What this PR does / why we need it**:
This PR completes the kubeadm changes for managing more than one control plane instance by introducing the possibility to configure more than one APIEndpoint.
**Which issue(s) this PR fixes** :
refs https://github.com/kubernetes/kubeadm/issues/911, refs https://github.com/kubernetes/kubeadm/issues/963
**Special notes for your reviewer**:
Depends on:
- [x] https://github.com/kubernetes/kubernetes/pull/67830
**Release note**:
```release-note
kubeadm: The kubeadm configuration now supports the definition of more than one control plane instance, each with its own APIEndpoint. The APIEndpoint for the "bootstrap" control plane instance should be defined using `InitConfiguration.APIEndpoint`, while the APIEndpoints for additional control plane instances should be added using `JoinConfiguration.APIEndpoint`.
```
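For reference, a rough Go sketch of the shape these configuration types take after this change. The field names are inferred from the release note above and are assumptions; the actual kubeadm API may differ in detail:

```go
package kubeadm

// APIEndpoint describes where a control plane instance serves the API.
// Field names here are assumptions based on the release note above.
type APIEndpoint struct {
	// AdvertiseAddress is the IP address this control plane instance
	// advertises for its API server.
	AdvertiseAddress string
	// BindPort is the port the API server binds to.
	BindPort int32
}

// InitConfiguration carries the APIEndpoint of the "bootstrap"
// control plane instance.
type InitConfiguration struct {
	APIEndpoint APIEndpoint
	// ... other init-time fields elided ...
}

// JoinConfiguration carries the APIEndpoint of an additional control
// plane instance joining the cluster.
type JoinConfiguration struct {
	APIEndpoint APIEndpoint
	// ... other join-time fields elided ...
}
```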
/cc @kubernetes/sig-cluster-lifecycle-pr-reviews
/sig cluster-lifecycle
/area kubeadm
/kind api-change
/kind enhancement
/assign @luxas
/assign @timothysc
/cc @chuckha @rosti @neolit123 @liztio
Automatic merge from submit-queue (batch tested with PRs 67362, 67256, 67809). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
del unused func DefaultEventFilterFunc
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 67362, 67256, 67809). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Dry run integration
Implement an integration test for dry-run. Also, this turns on the knob to allow dry-run requests, so let's be careful.
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 67766, 67642, 67772). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Enable dynamic azure disk volume limits
**What this PR does / why we need it**:
Enable dynamic Azure disk volume limits.
This is an Azure cloud provider implementation related to the feature [Dynamic Maximum volume count](https://github.com/kubernetes/features/issues/554).
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #66269
**Special notes for your reviewer**:
This PR uses `az.VirtualMachineSizesClient.List` to list all VM sizes in the region, matches the VM size of the current node, and then gets `MaxDataDiskCount`. The `GetVolumeLimits` call happens in the kubelet and returns `attachable-volumes-azure-disk` in the node status, as in the following example:
```
agentpool-22082114-0
...
allocatable:
  attachable-volumes-azure-disk: "8"
  cpu: "2"
  ephemeral-storage: "28043041951"
  hugepages-1Gi: "0"
  hugepages-2Mi: "0"
  memory: 7034772Ki
  pods: "30"
```
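To make the matching step concrete, here is a minimal, self-contained Go sketch of the lookup described above. The local `VirtualMachineSize` type and the `maxDataDiskCount` helper are illustrative stand-ins, not the actual cloud provider code:

```go
package main

import (
	"fmt"
	"strings"
)

// VirtualMachineSize mirrors the fields of the Azure SDK type that
// matter here (names assumed for illustration).
type VirtualMachineSize struct {
	Name             string
	MaxDataDiskCount int32
}

// maxDataDiskCount scans the VM sizes listed for the region and
// returns the data disk limit of the size matching the current node.
func maxDataDiskCount(nodeSize string, sizes []VirtualMachineSize) (int32, error) {
	for _, s := range sizes {
		if strings.EqualFold(s.Name, nodeSize) {
			return s.MaxDataDiskCount, nil
		}
	}
	return 0, fmt.Errorf("VM size %q not found in region", nodeSize)
}

func main() {
	sizes := []VirtualMachineSize{{Name: "Standard_D2_v2", MaxDataDiskCount: 8}}
	limit, err := maxDataDiskCount("standard_d2_v2", sizes)
	fmt.Println(limit, err) // 8 <nil>
}
```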
**Release note**:
```release-note
Enable dynamic azure disk volume limits
```
/sig azure
/kind feature
Automatic merge from submit-queue (batch tested with PRs 67766, 67642, 67772). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Add unit test cases for scheduler/algorithm/predicates.
**What this PR does / why we need it**:
Add unit test cases for scheduler/algorithm/predicates for more code coverage.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
NONE
**Special notes for your reviewer**:
NONE
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 67766, 67642, 67772). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
WaitForAllNodesSchedulable should check taints as well
**What this PR does / why we need it**:
In https://github.com/kubernetes/kubernetes/issues/67597 we see a lot of cases where tests start before the not-ready and network-unavailable taints are removed from the nodes, even though the nodes already have the correct conditions.
This change makes sure that we wait for both; a sketch of the combined check follows this section.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
/assign @wojtek-t
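A hedged Go sketch of the combined check this change implies (the helper name is illustrative, not the actual e2e framework code): a node only counts as schedulable once its Ready condition is true and the startup taints are gone.

```go
package e2e

import v1 "k8s.io/api/core/v1"

// isNodeSchedulable requires both a true Ready condition and the
// absence of the startup taints that block scheduling.
func isNodeSchedulable(node *v1.Node) bool {
	ready := false
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeReady && c.Status == v1.ConditionTrue {
			ready = true
			break
		}
	}
	if !ready {
		return false
	}
	for _, t := range node.Spec.Taints {
		switch t.Key {
		case "node.kubernetes.io/not-ready", "node.kubernetes.io/network-unavailable":
			return false
		}
	}
	return true
}
```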
* add snapd_refresh config and handlers to k8s-master and -worker
* lint readmes
* add snapd_refresh doc to the readme; make "max" less specific
* adjust wording to note snapd_refresh only affects store snaps
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Refactor hard-coded values in test/e2e/apimachinery/garbage_collector.go
**What this PR does / why we need it**:
Refactor hard-coded values in test/e2e/apimachinery/garbage_collector.go.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 66257, 67750). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Add unit test cases for scheduler/util.
**What this PR does / why we need it**:
Add unit test cases for scheduler/util for more code coverage.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
NONE
**Special notes for your reviewer**:
NONE
**Release note**:
```release-note
NONE
```
The requested Service protocol is checked against the protocols supported by the GCE Internal LB; the supported protocols are TCP and UDP (a hedged validation sketch follows these notes).
SCTP is not supported by OpenStack LBaaS. If SCTP is requested in a Service with type=LoadBalancer, the request is rejected. Comment style is also corrected.
SCTP is not allowed for LoadBalancer Services or for HostPort. Kube-proxy can be configured not to start listening on the host port for SCTP: see the new SCTPUserSpaceNode parameter.
Changed the vendored github.com/nokia/sctp to github.com/ishidawataru/sctp, i.e. from now on we use the upstream version.
Fixed netexec.go compilation. Fixed various test cases.
SCTP-related conformance tests removed. Netexec's pod definition and Dockerfile are updated to expose the new SCTP port (8082).
SCTP-related e2e test cases are removed, as the e2e test systems do not support SCTP.
SCTP-related firewall config is removed from cluster/gce/util.sh. Variable name sctp_addr is corrected to sctpAddr in pkg/proxy/ipvs/proxier.go.
cluster/gce/util.sh is copied from master.
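A minimal Go sketch of the kind of protocol validation these changes introduce. This is illustrative only; the real checks live in the respective cloud providers and API validation, and the names below are assumptions:

```go
package main

import "fmt"

// Protocol mirrors the Kubernetes Service protocol values.
type Protocol string

const (
	TCP  Protocol = "TCP"
	UDP  Protocol = "UDP"
	SCTP Protocol = "SCTP"
)

// validateLBProtocol rejects protocols the load balancer cannot serve;
// per the notes above, GCE Internal LB supports only TCP and UDP, and
// OpenStack LBaaS likewise rejects SCTP.
func validateLBProtocol(p Protocol) error {
	switch p {
	case TCP, UDP:
		return nil
	default:
		return fmt.Errorf("protocol %s is not supported for type=LoadBalancer Services", p)
	}
}

func main() {
	fmt.Println(validateLBProtocol(TCP))  // <nil>
	fmt.Println(validateLBProtocol(SCTP)) // error
}
```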
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Kubelet creates and manages node leases
This extends the Kubelet to create and periodically update leases in a
new kube-node-lease namespace. Based on [KEP-0009](https://github.com/kubernetes/community/blob/master/keps/sig-node/0009-node-heartbeat.md),
these leases can be used as a node health signal, and will allow us to
reduce the load caused by over-frequent node status reporting.
- add NodeLease feature gate
- add kube-node-lease system namespace for node leases
- add Kubelet option for lease duration
- add Kubelet-internal lease controller to create and update lease
- add e2e test for NodeLease feature
I would like to determine a standard policy for lease renewal frequency, based on the configured lease duration, so that we don't need to expose frequency as an additional knob. The renew interval is currently calculated as 1/3 of the lease duration.
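As a concrete illustration of that policy (a sketch, not the Kubelet's actual controller code):

```go
package main

import (
	"fmt"
	"time"
)

// renewInterval derives the lease renewal interval from the configured
// lease duration using the 1/3 policy described above: with the 40s
// default duration, the Kubelet renews roughly every 13.3s.
func renewInterval(leaseDurationSeconds int32) time.Duration {
	return time.Duration(leaseDurationSeconds) * time.Second / 3
}

func main() {
	fmt.Println(renewInterval(40)) // 13.333333333s
}
```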
```release-note
kubelet: Users can now enable the alpha NodeLease feature gate to have the Kubelet create and periodically renew a Lease in the kube-node-lease namespace. The lease duration defaults to 40s, and can be configured via the kubelet.config.k8s.io/v1beta1.KubeletConfiguration's NodeLeaseDurationSeconds field.
```
/cc @wojtek-t @liggitt