Automatic merge from submit-queue (batch tested with PRs 52376, 52439, 52382, 52358, 52372)
Pass correct clientBuilder to cloud providers
Fixes https://github.com/kubernetes/kubeadm/issues/425 by moving the Initialize call to after the start of the token controller and passing `clientBuilder` instead of `rootClientBuilder` to the cloud providers.
/assign @bowei
**Release note**:
```release-note
NONE
```
This should be fixed in 1.8 and cherry-picked to 1.7.
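A minimal sketch of the reordering described above (the interfaces, the `startControllers` function, and the `startTokenController` parameter are illustrative stand-ins, not the actual controller-manager code):
```go
package app

// ClientBuilder stands in for the controller-manager's client builder
// abstraction, which can mint per-controller API clients.
type ClientBuilder interface {
	ClientOrDie(name string) interface{}
}

// CloudProvider stands in for the cloud provider's Initialize hook.
type CloudProvider interface {
	Initialize(builder ClientBuilder)
}

// startControllers sketches the corrected ordering: the ServiceAccount
// token controller starts first, using rootClientBuilder (which does not
// depend on issued tokens), and only afterwards is the cloud provider
// initialized, now with clientBuilder, whose clients need the token
// controller to be running before they can authenticate.
func startControllers(cloud CloudProvider, rootClientBuilder, clientBuilder ClientBuilder,
	startTokenController func(ClientBuilder) error) error {
	if err := startTokenController(rootClientBuilder); err != nil {
		return err
	}
	// Previously this ran before the token controller had started, and
	// received rootClientBuilder instead of clientBuilder.
	cloud.Initialize(clientBuilder)
	return nil
}
```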
- If a device plugin exits, its exported resource will be removed.
- No capacity change if a new device plugin instance comes up to replace the old instance (sketched below).
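These two points can be read as the following simplified sketch of the kubelet-side bookkeeping (the `manager` type and its methods are hypothetical names, not the actual device manager API):
```go
package devicemanager

import "sync"

// manager is a simplified stand-in for the kubelet's device plugin
// manager; the fields are illustrative only.
type manager struct {
	mu        sync.Mutex
	endpoints map[string]struct{} // live plugin endpoints, keyed by resource name
	capacity  map[string]int64    // device counts reported in node status
}

// endpointStopped models a device plugin exiting: its exported
// resource is removed from the capacity the node reports.
func (m *manager) endpointStopped(resource string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	delete(m.endpoints, resource)
	m.capacity[resource] = 0
}

// endpointRegistered models a new plugin instance replacing an old
// one: the capacity entry is overwritten in place, so a replacement
// reporting the same device count causes no capacity change.
func (m *manager) endpointRegistered(resource string, devices int64) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.endpoints[resource] = struct{}{}
	m.capacity[resource] = devices
}
```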
Move the negative check for the "not patched" output to test-cmd-util.sh,
since exiting with code 1 was causing patch_test.go to fail when the error
was expected as part of the test.
This fix is linked to PR #51153, which introduced
JobSpec.BackoffLimit.
Previously the timeout used in the test was too aggressive and generated
flaky test runs. Now it uses the default framework.JobTimeout used in
other tests.
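Roughly, the change amounts to the following sketch (the helper and polling interval are illustrative; the real constant is framework.JobTimeout in the e2e framework):
```go
package job

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// jobTimeout mirrors the shared framework.JobTimeout default; the exact
// value here is illustrative. The point is that the backoff test now gets
// the same generous bound as the other Job e2e tests.
const jobTimeout = 15 * time.Minute

// waitForJobCondition polls until the condition reports done, bounded by
// the framework-wide timeout rather than a tight ad hoc one, which was
// the source of the flakes.
func waitForJobCondition(condition wait.ConditionFunc) error {
	return wait.Poll(2*time.Second, jobTimeout, condition)
}
```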
This implements stats for Windows nodes in a new package, winstats.
WinStats exports methods that return cadvisor-like data structures,
but populated with Windows-specific metrics. WinStats only collects
node-level metrics and information; container stats will go via the CRI.
This enables use of the Summary API to get metrics for Windows nodes.
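A hypothetical sketch of the shape such a package could expose (the interface and method names are assumptions, not the actual winstats API):
```go
package winstats

import (
	cadvisorapi "github.com/google/cadvisor/info/v1"
)

// Client sketches a node-level stats provider for Windows: it returns
// cadvisor-like structures filled from Windows APIs rather than /proc,
// and deliberately offers no container stats, which flow via the CRI.
type Client interface {
	// WinMachineInfo returns node hardware information (CPU, memory, ...).
	WinMachineInfo() (*cadvisorapi.MachineInfo, error)
	// WinVersionInfo returns kernel and OS version details for the node.
	WinVersionInfo() (*cadvisorapi.VersionInfo, error)
}
```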
Automatic merge from submit-queue
Remove 1.2.* release notes in CHANGELOG.md
**What this PR does / why we need it**:
Remove 1.2.* release notes in CHANGELOG.md to make the file smaller so its content can be shown.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
ref: https://github.com/kubernetes/kubernetes/issues/48985#issuecomment-328076817
**Special notes for your reviewer**:
This is just a quick fix until we have an ideal solution for #48985
/cc @jdumars
/priority important-soon
/sig release
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Make CPU constraint for l7-lb-controller in density test scale with #nodes
Just noticed that we changed the memory constraint last time, but didn't change the CPU one. From the last run:
```
Sep 13 04:25:03.360: INFO: Unexpected error occurred: Container l7-lb-controller-v0.9.6-gce-scale-cluster-master/l7-lb-controller is using 0.642709233/0.15 CPU
```
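A sketch of the fix with illustrative constants (the actual base and per-node slope were tuned in the PR):
```go
package density

// ResourceConstraint mirrors the shape of the e2e framework's resource
// constraint type, simplified for illustration.
type ResourceConstraint struct {
	CPUConstraint    float64 // cores
	MemoryConstraint uint64  // bytes
}

// l7LBControllerConstraint scales both budgets with the node count, so
// a scale cluster no longer trips the fixed 0.15-core CPU limit that the
// quoted run exceeded (0.64 cores). The constants are made up.
func l7LBControllerConstraint(numNodes int) ResourceConstraint {
	return ResourceConstraint{
		CPUConstraint:    0.1 + 0.01*float64(numNodes),
		MemoryConstraint: uint64(100*1024*1024 + 500*1024*numNodes),
	}
}
```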
Automatic merge from submit-queue
Fix swallowed errors in various volume packages
**What this PR does / why we need it**: Fixes swallowed errors in various volume packages.
**Release note**:
```release-note
NONE
```
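The general pattern being fixed looks like this hypothetical helper (before, the inner error was assigned and then dropped):
```go
package volume

import "fmt"

// tearDownVolume illustrates the fix: every error from the inner call
// is propagated with context instead of being silently swallowed.
func tearDownVolume(tearDown func() error) error {
	if err := tearDown(); err != nil {
		// Before the fix this was effectively `_ = tearDown()` followed
		// by `return nil`, hiding the failure from callers.
		return fmt.Errorf("failed to tear down volume: %v", err)
	}
	return nil
}
```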
Automatic merge from submit-queue (batch tested with PRs 51601, 52153, 52364, 52362, 52342)
Make advanced audit policy on GCP configurable
Related to https://github.com/kubernetes/kubernetes/issues/52265
Make GCP audit policy configurable
/cc @tallclair
Automatic merge from submit-queue (batch tested with PRs 51601, 52153, 52364, 52362, 52342)
fix kubeadm token create error
**What this PR does / why we need it**:
fix kubeadm token create error
**Which issue this PR fixes**
[#436](https://github.com/kubernetes/kubeadm/issues/436)
**Special notes for your reviewer**:
CC @luxas
Automatic merge from submit-queue (batch tested with PRs 51601, 52153, 52364, 52362, 52342)
fix kubeadm phase addon error
**What this PR does / why we need it**:
fix kubeadm phase addon error
**Which issue this PR fixes**
[#437](https://github.com/kubernetes/kubeadm/issues/437)
**Special notes for your reviewer**:
CC @luxas @andrewrynhard
Automatic merge from submit-queue (batch tested with PRs 51601, 52153, 52364, 52362, 52342)
Improve kubeadm help text
* Replace the 'misc' at-mention with the more specific bugs and feature-requests at-mentions.
* Replace ReplicaSets with Deployments in the example, because ReplicaSets are dated.
* Generalize the join example.
Before:
```
┌──────────────────────────────────────────────────────────┐
│ KUBEADM IS BETA, DO NOT USE IT FOR PRODUCTION CLUSTERS!  │
│                                                          │
│ But, please try it out! Give us feedback at:             │
│ https://github.com/kubernetes/kubeadm/issues             │
│ and at-mention @kubernetes/sig-cluster-lifecycle-misc    │
└──────────────────────────────────────────────────────────┘
Example usage:
Create a two-machine cluster with one master (which controls the cluster),
and one node (where your workloads, like Pods and ReplicaSets run).
┌──────────────────────────────────────────────────────────┐
│ On the first machine                                     │
├──────────────────────────────────────────────────────────┤
│ master# kubeadm init                                     │
└──────────────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────────────┐
│ On the second machine                                    │
├──────────────────────────────────────────────────────────┤
│ node# kubeadm join --token=<token> <ip-of-master>:<port> │
└──────────────────────────────────────────────────────────┘
You can then repeat the second step on as many other machines as you like.
```
After (changes highlighted with `<--`):
```
┌──────────────────────────────────────────────────────────┐
│ KUBEADM IS BETA, DO NOT USE IT FOR PRODUCTION CLUSTERS!  │
│                                                          │
│ But, please try it out! Give us feedback at:             │
│ https://github.com/kubernetes/kubeadm/issues             │
│ and at-mention @kubernetes/sig-cluster-lifecycle-bugs    │ <--
│ or @kubernetes/sig-cluster-lifecycle-feature-requests    │ <--
└──────────────────────────────────────────────────────────┘
Example usage:
Create a two-machine cluster with one master (which controls the cluster),
and one node (where your workloads, like Pods and Deployments run). <--
┌──────────────────────────────────────────────────────────┐
│ On the first machine                                     │
├──────────────────────────────────────────────────────────┤
│ master# kubeadm init                                     │
└──────────────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────────────┐
│ On the second machine                                    │
├──────────────────────────────────────────────────────────┤
│ node# kubeadm join <arguments-returned-from-init>        │ <--
└──────────────────────────────────────────────────────────┘
You can then repeat the second step on as many other machines as you like.
```
cc @luxas
Automatic merge from submit-queue (batch tested with PRs 51601, 52153, 52364, 52362, 52342)
Minor fixes to validation test
Some test cases confused the new object with the old object. This PR fixes that. It also adds a test to verify that deletionTimestamp cannot be added (via the REST endpoints).
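A sketch of the invariant the new test exercises (the function name is illustrative; the real check lives in the ObjectMeta update validation):
```go
package validation

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// validateDeletionTimestampUpdate rejects updates that introduce a
// deletionTimestamp the old object lacked: only the server's delete
// path may set it, never a client via the REST endpoints.
func validateDeletionTimestampUpdate(newMeta, oldMeta *metav1.ObjectMeta) error {
	if oldMeta.DeletionTimestamp == nil && newMeta.DeletionTimestamp != nil {
		return fmt.Errorf("metadata.deletionTimestamp: field is immutable and may not be set on update")
	}
	return nil
}
```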
Fixes #49256
When `ScalingLimited = true` for the HPA, there is an accompanying
status message describing why scaling is limited. Previously, if the
desired replica count was 0 and `spec.minReplicas` > 0, the status
message indicated "the desired replica count was less than the min
replica count". This was particularly confusing when `spec.minReplicas
= 1`. If there was no `spec.minReplicas`, the status message indicated
"the desired replica count was zero", which is more informative.
Update the calculation of the status message so that if the desired
replica count is 0, we always display the clearer "the desired replica
count was zero" status message, even if `spec.minReplicas` > 0.
Signed-off-by: mattjmcnaughton <mattjmcnaughton@gmail.com>
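The corrected selection logic, as a simplified sketch (the real code lives in the HPA controller's status computation; the function here is an assumption):
```go
package podautoscaler

import "fmt"

// scalingLimitedMessage picks the status message for ScalingLimited.
// The zero case now wins even when minReplicas > 0 would also clamp,
// instead of falling through to the confusing "less than min" text.
func scalingLimitedMessage(desiredReplicas, minReplicas int32) string {
	switch {
	case desiredReplicas == 0:
		return "the desired replica count was zero"
	case desiredReplicas < minReplicas:
		return fmt.Sprintf("the desired replica count was less than the minimum replica count (%d)", minReplicas)
	default:
		return ""
	}
}
```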