Automatic merge from submit-queue
kubeadm: Migrate to client-go
**What this PR does / why we need it**: Finish the migration for kubeadm to use client-go wherever possible
**Which issue this PR fixes**: fixes https://github.com/kubernetes/kubeadm/issues/52
**Special notes for your reviewer**: /cc @luxas @pires
**Release note**:
```release-note
NONE
```
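For context, a minimal sketch of the client-go pattern kubeadm migrates to (package paths and the `List` signature follow current client-go; the kubeconfig path is illustrative):
```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from an admin kubeconfig, as kubeadm does after init.
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// List nodes through the client-go clientset instead of the monorepo client.
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("cluster has %d node(s)\n", len(nodes.Items))
}
```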
Automatic merge from submit-queue
Move private key parsing from serviceaccount/jwt.go to client-go/util/cert
**What this PR does / why we need it**:
Unify private key parsing from serviceaccount/jwt.go into the client-go library.
**Which issue this PR fixes**:
Partial fix to #40807 (private key functions only).
**Special notes for your reviewer**:
**Release note**:
```release-note
Move private key parsing from serviceaccount/jwt.go to client-go/util/cert
```
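A minimal usage sketch, assuming the unified helper is exposed as `ParsePrivateKeyPEM` under `k8s.io/client-go/util/cert` (the key path here is illustrative):
```go
package main

import (
	"crypto/ecdsa"
	"crypto/rsa"
	"fmt"
	"io/ioutil"

	certutil "k8s.io/client-go/util/cert"
)

func main() {
	// Read a PEM-encoded service account signing key from disk.
	pemBytes, err := ioutil.ReadFile("/etc/kubernetes/pki/sa.key")
	if err != nil {
		panic(err)
	}
	// ParsePrivateKeyPEM returns an interface{}; callers switch on the
	// concrete key type, as serviceaccount/jwt.go did before the move.
	key, err := certutil.ParsePrivateKeyPEM(pemBytes)
	if err != nil {
		panic(err)
	}
	switch k := key.(type) {
	case *rsa.PrivateKey:
		fmt.Println("RSA key, bits:", k.N.BitLen())
	case *ecdsa.PrivateKey:
		fmt.Println("ECDSA key, curve:", k.Curve.Params().Name)
	default:
		fmt.Printf("unsupported key type %T\n", key)
	}
}
```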
Some imports don't exist yet (or so it seems) in client-go, examples being:
- "k8s.io/kubernetes/pkg/api/validation"
- "k8s.io/kubernetes/pkg/util/initsystem"
- "k8s.io/kubernetes/pkg/util/node"
One change in kubelet switches an import to client-go.
Automatic merge from submit-queue
Allow multiple DNS servers as comma-separated argument for kubelet --dns
This PR explores how the kubelet's `--dns` flag could be extended to specify multiple DNS servers for in-cluster pods. Testing on the local libvirt-coreos cluster shows that multiple DNS servers are injected without issues.
Specifying multiple DNS servers increases resilience against
- Packet drops
- Single server failure
I am debugging services that do 50+ DNS requests for a single incoming interactive request, which greatly increases the chance of a slowdown (+5s) due to a single packet drop. Switching to two DNS servers reduces the impact of these issues (roughly +1s on glibc, 0s on musl; the error rate drops to error-rate^2).
Note that there is no need to change any runtime-related code as far as I know. In the "default" DNS case, /etc/resolv.conf is parsed and multiple DNS servers are sent to the backend anyway. This only adds the same capability for the clusterFirst case.
I've heard from @thockin that multiple DNS entries have been considered in some form, though I don't know what exactly was considered. This is what I would like to see for our production use. A parsing sketch follows the release note below.
```release-note
NONE
```
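A minimal sketch of how a comma-separated `--dns` value could be split and validated (the `parseClusterDNS` helper is hypothetical, for illustration only, not the actual kubelet code):
```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// parseClusterDNS splits a comma-separated flag value into validated IPs.
func parseClusterDNS(flagValue string) ([]net.IP, error) {
	var servers []net.IP
	for _, s := range strings.Split(flagValue, ",") {
		ip := net.ParseIP(strings.TrimSpace(s))
		if ip == nil {
			return nil, fmt.Errorf("invalid DNS server address: %q", s)
		}
		servers = append(servers, ip)
	}
	return servers, nil
}

func main() {
	servers, err := parseClusterDNS("10.0.0.10,10.0.0.11")
	if err != nil {
		panic(err)
	}
	fmt.Println(servers) // [10.0.0.10 10.0.0.11]
}
```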
Automatic merge from submit-queue
Switch resourcequota controller to shared informers
Originally part of #40097
I have had some issues with this change in the past, when I updated `pkg/quota` to use the new informers while `pkg/controller/resourcequota` remained on the old informers. In this PR, both are switched to the new informers. The issues in the past were lots of flaky test failures in the ResourceQuota e2es, where it would randomly fail to see deletions and handle replenishment. I am hoping that now that everything here consistently uses the new informers, there won't be any more of these flakes, but it's something to keep an eye out for.
I also think `pkg/controller/resourcequota` could be cleaned up. I don't think there's really any need for `replenishment_controller.go` anymore, since it no longer runs individual controllers per kind to replenish; it just uses the shared informer and adds event handlers to it (see the sketch below). But maybe we do that in a follow-up.
cc @derekwaynecarr @smarterclayton @wojtek-t @deads2k @sttts @liggitt @timothysc @kubernetes/sig-scalability-pr-reviews
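For illustration, the event-handler-on-shared-informer pattern described above, as a minimal sketch using today's client-go informer packages (at the time of this PR the informers still lived in the main repo; the kubeconfig path is illustrative):
```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// One shared informer factory; pkg/quota and the quota controller
	// would both consume informers from the same factory.
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()

	// Replenishment becomes just an event handler on the shared informer,
	// instead of a dedicated controller per kind.
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		DeleteFunc: func(obj interface{}) {
			fmt.Println("pod deleted; quota can be replenished")
		},
	})

	stopCh := make(chan struct{})
	defer close(stopCh)
	factory.Start(stopCh)
	cache.WaitForCacheSync(stopCh, podInformer.HasSynced)
	<-stopCh
}
```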
This makes it more obvious that they run together and makes the upcoming
rate-limited syncs easier.
Also make tests use ints for ports, so it's easier to see when a port is
a literal value vs. a name.
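For illustration, the int-vs-name distinction as it appears with `intstr.IntOrString` (a sketch, not the test code itself):
```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// A literal port value reads as a number...
	byNumber := intstr.FromInt(8080)
	// ...while a named port reads as a string; using ints in tests
	// makes it obvious which case a given test exercises.
	byName := intstr.FromString("metrics")
	fmt.Println(byNumber.String(), byName.String()) // 8080 metrics
}
```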
This is a weird function, but I didn't want to change any semantics
until the tests are in place. Testing exposed one bug where stale
connections of renamed ports were not marked stale.
There are other things that seem wrong here, more will follow.
Move the feature test to where we are activating the feature, rather
than where we detect locality. This is in service of better tests,
which is in service of less-frequent resyncing, which is going to
require refactoring.
Automatic merge from submit-queue (batch tested with PRs 41332, 41069, 41470, 41474)
"Avoid unnecessary copies in cacher""
This is a resend of #40735 (which I reverted when I suspected it of causing issues). But the issue turned out to be completely different, so it's safe to resubmit.
Automatic merge from submit-queue (batch tested with PRs 41332, 41069, 41470, 41474)
Update test owners
@nikhiljindal I've noticed you duplicated the `test_owners.csv` contents in c1c2a12. Was that intentional? I'm removing the duplication here, since it's failing `hack/update_owners.py`.
Automatic merge from submit-queue
Added configurable etcd initial-cluster-state to kube-up script.
Added configurable etcd initial-cluster-state to kube-up script. This
allows creation of a multi-master cluster from scratch. This is a
cherry-pick of #41320 from the 1.5 branch.
```release-note
Added configurable etcd initial-cluster-state to kube-up script.
```
Automatic merge from submit-queue
shortcut expander will take the list of short names from the api ser…
**What this PR does / why we need it**: the shortcut expander will take the list of short names for resources from the API server during discovery. For backward compatibility, a hardcoded list of short names will always be appended while evaluating a short name.
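A rough sketch of the lookup order described above (the types and the hardcoded entries here are illustrative, not the actual kubectl code):
```go
package main

import "fmt"

// shortForm maps a short name to its full resource name.
type shortForm struct {
	short, long string
}

// Hardcoded fallback list, always consulted for backward compatibility.
var hardcoded = []shortForm{
	{"svc", "services"},
	{"deploy", "deployments"},
}

// expand resolves a short name against discovery results first, with the
// hardcoded list appended as a fallback. discovered is assumed to come
// from the API server's discovery endpoint.
func expand(resource string, discovered []shortForm) string {
	for _, sf := range append(discovered, hardcoded...) {
		if sf.short == resource {
			return sf.long
		}
	}
	return resource
}

func main() {
	fromDiscovery := []shortForm{{"cm", "configmaps"}}
	fmt.Println(expand("cm", fromDiscovery))  // configmaps
	fmt.Println(expand("svc", fromDiscovery)) // services
	fmt.Println(expand("foo", fromDiscovery)) // foo (unchanged)
}
```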
Automatic merge from submit-queue
make kube-aggregator run as static pod for local-up-cluster
Runs the kube-aggregator as a static pod for local-up-cluster. Looks like someone broke kubectl negotiation again, so I'll fix that up separately.
@kubernetes/sig-api-machinery-misc
@lavalamp you're probably looking to run kube-aggregator as a static pod; here's an example.
@jwforres I'll make a secure variant for wiring up to openshift.
Automatic merge from submit-queue
Fix AWS device allocator to only use valid device names
According to
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/device_naming.html
we can only use /dev/xvd[b-c][a-z] as device names, so we can only
allocate up to 52 EBS volumes on a node.
fixes #41453
cc @justinsb @kubernetes/sig-storage-pr-reviews
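For reference, a minimal sketch enumerating the 52 valid names (illustrative, not the actual allocator code):
```go
package main

import "fmt"

// validDeviceNames enumerates the 52 device names /dev/xvd[b-c][a-z]
// allowed by the AWS device-naming doc linked above.
func validDeviceNames() []string {
	names := make([]string, 0, 52)
	for _, first := range "bc" {
		for second := 'a'; second <= 'z'; second++ {
			names = append(names, fmt.Sprintf("/dev/xvd%c%c", first, second))
		}
	}
	return names
}

func main() {
	names := validDeviceNames()
	fmt.Println(len(names), names[0], names[len(names)-1]) // 52 /dev/xvdba /dev/xvdcz
}
```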