Automatic merge from submit-queue (batch tested with PRs 37137, 41506, 41239, 41511, 37953)
Add field to control service account token automounting
Fixes https://github.com/kubernetes/kubernetes/issues/16779
* adds an `automountServiceAccountToken *bool` field to `ServiceAccount` and `PodSpec`
* if set in both the service account and pod, the pod wins
* if unset in both the service account and pod, we automount for backwards compatibility
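For illustration, a minimal Go sketch of this precedence (a hypothetical helper, not the actual kubelet/admission code):

```go
package main

import "fmt"

// shouldAutomountToken mirrors the precedence described above: an explicit
// pod-level setting wins, then the service account's setting, and if both
// are unset the token is automounted for backwards compatibility.
func shouldAutomountToken(podField, serviceAccountField *bool) bool {
	if podField != nil {
		return *podField
	}
	if serviceAccountField != nil {
		return *serviceAccountField
	}
	return true
}

func main() {
	optOut := false
	fmt.Println(shouldAutomountToken(nil, &optOut)) // false: the service account opts out
	fmt.Println(shouldAutomountToken(nil, nil))     // true: default behaviour is preserved
}
```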
```release-note
An `automountServiceAccountToken *bool` field was added to ServiceAccount and PodSpec objects. If set to `false` on a pod spec, no service account token is automounted in the pod. If set to `false` on a service account, no service account token is automounted for that service account unless explicitly overridden in the pod spec.
```
Automatic merge from submit-queue (batch tested with PRs 37137, 41506, 41239, 41511, 37953)
Fix typos in e2e
**What this PR does / why we need it**: fix typos in e2e test
**Special notes for your reviewer**: none
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 37137, 41506, 41239, 41511, 37953)
e2e test for storage class diskformat verification for vsphere cloud provider
**What this PR does / why we need it**:
This PR adds a new e2e test for vsphere cloud provider.
The test verifies that the diskformat specified in the storage class is honored during volume creation.
Steps:
1. Create a StorageClass with diskformat set to a valid type (supported options are `eagerzeroedthick`, `zeroedthick` and `thin`).
2. Create a PVC which uses the StorageClass created in step 1.
3. Wait for the PV to be provisioned.
4. Wait for the PVC's status to become Bound.
5. Create a pod using the PVC on a specific node.
6. Wait for the disk to be attached to the node.
7. Get the node VM's devices and find the PV's volume disk.
8. Get the backing info of the volume disk and obtain the `EagerlyScrub` and `ThinProvisioned` properties of `VirtualDiskFlatVer2BackingInfo`.
9. Based on the values of `EagerlyScrub` and `ThinProvisioned`, verify that the diskformat is correct (see the sketch after this list).
10. Delete the pod and wait for the volume disk to be detached from the node.
11. Delete the PVC, PV and StorageClass.
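A minimal Go sketch of the check in steps 8 and 9, assuming a local struct that mirrors the `EagerlyScrub`/`ThinProvisioned` properties of vSphere's `VirtualDiskFlatVer2BackingInfo` (the real test reads them through the vSphere API):

```go
package main

import "fmt"

// diskBacking mirrors the two properties of VirtualDiskFlatVer2BackingInfo
// that the test inspects; vSphere exposes them as optional booleans.
type diskBacking struct {
	EagerlyScrub    *bool
	ThinProvisioned *bool
}

func isTrue(b *bool) bool { return b != nil && *b }

// verifyDiskFormat maps the requested storage-class diskformat to the
// expected combination of backing-info flags.
func verifyDiskFormat(diskFormat string, backing diskBacking) bool {
	switch diskFormat {
	case "eagerzeroedthick":
		// Eager-zeroed thick disks are scrubbed up front and are not thin.
		return isTrue(backing.EagerlyScrub) && !isTrue(backing.ThinProvisioned)
	case "zeroedthick":
		// Lazy-zeroed thick disks are neither eagerly scrubbed nor thin.
		return !isTrue(backing.EagerlyScrub) && !isTrue(backing.ThinProvisioned)
	case "thin":
		return !isTrue(backing.EagerlyScrub) && isTrue(backing.ThinProvisioned)
	default:
		return false
	}
}

func main() {
	t := true
	fmt.Println(verifyDiskFormat("thin", diskBacking{ThinProvisioned: &t})) // true
}
```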
**Which issue this PR fixes** *
fixes #
**Special notes for your reviewer**:
Test is executed against v1.6.0-alpha.1
Test is failing on v1.4.8
**Release Note**
```release-note
NONE
```
@kerneltime @BaluDontu @abrarshivani please review this PR
Automatic merge from submit-queue (batch tested with PRs 37137, 41506, 41239, 41511, 37953)
Bump addon-manager version to v6.4-alpha.1 in kubemark
Fixes https://github.com/kubernetes/kubernetes/issues/41493
cc @wojtek-t @liggitt
Automatic merge from submit-queue
Stop controller when the stop channel is closed (when queue is empty and Pop is hanging)
Fixes: #28158
When the ``Pop`` function is invoked on an empty queue, the control loop inside the function blocks indefinitely. In order to break the loop, introduce logic that waits for a signal to exit the loop.
The intention of this PR is not to handle the situation where manipulation operations are invoked on a closed queue; the intention is to break the indefinite loop.
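A minimal Go sketch of the idea, using a simplified queue rather than the actual client-go FIFO: `Close` sets a flag and wakes all waiters, and `Pop` returns an error instead of blocking forever on an empty queue.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var ErrFIFOClosed = errors.New("queue is closed")

type Queue struct {
	lock   sync.Mutex
	cond   *sync.Cond
	items  []interface{}
	closed bool
}

func NewQueue() *Queue {
	q := &Queue{}
	q.cond = sync.NewCond(&q.lock)
	return q
}

func (q *Queue) Add(item interface{}) {
	q.lock.Lock()
	defer q.lock.Unlock()
	q.items = append(q.items, item)
	q.cond.Broadcast()
}

// Close marks the queue as closed and wakes up any goroutine blocked in Pop.
func (q *Queue) Close() {
	q.lock.Lock()
	defer q.lock.Unlock()
	q.closed = true
	q.cond.Broadcast()
}

// Pop blocks until an item is available or the queue is closed.
func (q *Queue) Pop() (interface{}, error) {
	q.lock.Lock()
	defer q.lock.Unlock()
	for len(q.items) == 0 {
		if q.closed {
			return nil, ErrFIFOClosed
		}
		q.cond.Wait()
	}
	item := q.items[0]
	q.items = q.items[1:]
	return item, nil
}

func main() {
	q := NewQueue()
	done := make(chan struct{})
	go func() {
		_, err := q.Pop() // blocks on the empty queue...
		fmt.Println("Pop returned:", err)
		close(done)
	}()
	q.Close() // ...until Close wakes it up
	<-done
}
```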
Automatic merge from submit-queue
Change node e2e cri-validation configs
Copy the configs to a new directory to test the non-CRI implementation. We can remove the original directory after the dependent PRs are merged.
Automatic merge from submit-queue (batch tested with PRs 41104, 41245, 40722, 41439, 41502)
Bump the minimum kubeadm control plane version to v1.6.0-alpha.2
**What this PR does / why we need it**:
Quite a lot of useful features that kubeadm will use went into v1.6.0-alpha.2.
This bumps the minimum required version so we can depend on those features.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
@mikedanese @errordeveloper @pires @dmmcquay @dgoodwin
Automatic merge from submit-queue (batch tested with PRs 41104, 41245, 40722, 41439, 41502)
add sample fuzzing tests
Make fuzzing tests as simple as possible from both the API installer and the scheme, so it's easy to add them for API groups and so that I can build a scheme and then make sure I got it right.
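As a rough illustration of the round-trip idea, here is a simplified fuzz test that uses plain JSON and a hypothetical `Widget` type instead of the Kubernetes scheme and codec machinery:

```go
// A simplified round-trip fuzz test: fuzz an object, encode it, decode it,
// and check that nothing was lost. The real Kubernetes tests drive this
// through the scheme/codec machinery rather than plain JSON.
package example

import (
	"encoding/json"
	"reflect"
	"testing"

	fuzz "github.com/google/gofuzz"
)

// Widget is a stand-in for an API type registered in a scheme.
type Widget struct {
	Name  string `json:"name"`
	Count int    `json:"count"`
}

func TestWidgetRoundTrip(t *testing.T) {
	f := fuzz.New().NilChance(0)
	for i := 0; i < 100; i++ {
		var in Widget
		f.Fuzz(&in)

		data, err := json.Marshal(&in)
		if err != nil {
			t.Fatalf("encode: %v", err)
		}
		var out Widget
		if err := json.Unmarshal(data, &out); err != nil {
			t.Fatalf("decode: %v", err)
		}
		if !reflect.DeepEqual(in, out) {
			t.Errorf("round trip mismatch: %#v != %#v", in, out)
		}
	}
}
```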
@kubernetes/sig-api-machinery-pr-reviews @sttts @mikedanese
Automatic merge from submit-queue (batch tested with PRs 41104, 41245, 40722, 41439, 41502)
openstack-heat: do not daemonize salt-minion
_openstack-heat_ currently does not set up a _salt-master_, so it is not necessary to daemonize the _salt-minion_.
**What this PR does / why we need it**:
as stated in #40721:
> The _openstack-heat_ provider only installs _salt-minions_, no _salt-master_. The configuration does not take this into account which causes the following issues:
>
> - the _salt-minion_ is not able to DNS resolve `salt` (see first part of the error log below)
> - the _salt-minion_ is daemonized and fails to find the master (second part of the error log below). From my understanding this is not required when there is no salt-master, as the setup uses `salt-call`
> anyway (see [gce provider](https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/configure-vm.sh#L328-L339) as reference).
>
> ```
> Jan 31 03:00:04 kube-stack-master salt-minion[9795]: [ERROR ] DNS lookup of 'salt' failed.
> Jan 31 03:00:04 kube-stack-master salt-minion[9795]: [ERROR ] Master hostname: 'salt' not found. Retrying in 30 seconds
> ...
> Jan 31 02:35:30 kube-stack-master salt-minion[9690]: [ERROR ] Error while bringing up minion for multi-master. Is master at salt responding?
> ```
>
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #40721
**Release note**:
```release-note
Do not daemonize `salt-minion` for the openstack-heat provider.
```
Automatic merge from submit-queue (batch tested with PRs 41104, 41245, 40722, 41439, 41502)
Change the etcd rollback tool to roll back to version 2.2.1.
I did some tests of it and for my 3-node cluster with 1 deployment it worked fine.
But before merging this, we should probably do way more testing (we should rerun tests that @mml was doing for the previous script).
@lavalamp @xiang90
Automatic merge from submit-queue
kubeadm: Migrate to client-go
**What this PR does / why we need it**: Finish the migration for kubeadm to use client-go wherever possible
**Which issue this PR fixes**: fixes https://github.com/kubernetes/kubeadm/issues/52
**Special notes for your reviewer**: /cc @luxas @pires
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Move private key parsing from serviceaccount/jwt.go to client-go/util/cert
**What this PR does / why we need it**:
Unify private key parsing from serviceaccount/jwt.go into the client-go library.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
Partial fix to #40807 - only private key functions.
**Special notes for your reviewer**:
**Release note**:
```release-note
Move private key parsing from serviceaccount/jwt.go to client-go/util/cert
```
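For context, a minimal standard-library sketch of the kind of PEM private key parsing being consolidated (illustrative only, not the client-go/util/cert implementation):

```go
package main

import (
	"crypto"
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
)

// parsePrivateKeyPEM decodes a PEM block and tries the common private key
// encodings (PKCS#8, PKCS#1, EC) in turn.
func parsePrivateKeyPEM(keyData []byte) (crypto.PrivateKey, error) {
	block, _ := pem.Decode(keyData)
	if block == nil {
		return nil, fmt.Errorf("data does not contain a valid PEM block")
	}
	if key, err := x509.ParsePKCS8PrivateKey(block.Bytes); err == nil {
		return key, nil
	}
	if key, err := x509.ParsePKCS1PrivateKey(block.Bytes); err == nil {
		return key, nil
	}
	if key, err := x509.ParseECPrivateKey(block.Bytes); err == nil {
		return key, nil
	}
	return nil, fmt.Errorf("unsupported private key type")
}

func main() {
	// Generate a throwaway RSA key and round-trip it through the parser.
	rsaKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	pemBytes := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(rsaKey),
	})
	key, err := parsePrivateKeyPEM(pemBytes)
	if err != nil {
		panic(err)
	}
	fmt.Printf("parsed key of type %T\n", key)
}
```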
Some imports don't exist yet (or so it seems) in client-go, examples being:
- "k8s.io/kubernetes/pkg/api/validation"
- "k8s.io/kubernetes/pkg/util/initsystem"
- "k8s.io/kubernetes/pkg/util/node"
One change in kubelet to import client-go.
Automatic merge from submit-queue
Allow multiple DNS servers as comma-separated argument for kubelet --dns
This PR explores how kubelet's `--dns` flag could be extended to specify multiple DNS servers for in-cluster pods. Testing on the local libvirt-coreos cluster shows that multiple DNS servers are injected without issues.
Specifying multiple DNS servers increases resilience against
- Packet drops
- Single server failure
I am debugging services that do 50+ DNS requests for a single incoming interactive request, which greatly increases the chance of a slowdown (+5s) due to a single packet drop. Switching to two DNS servers will reduce the impact of these issues (roughly +1s on glibc, 0s on musl; the error rate goes down to error-rate^2).
Note that there is no need to change any runtime-related code as far as I know. In the case of "default" DNS, /etc/resolv.conf is parsed and multiple DNS servers are sent to the backend anyway. This only adds the same capability for the clusterFirst case.
I've heard from @thockin that multiple DNS entries have been considered in some form, though I have no idea what exactly was considered. This is what I would like to see for our production use.
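A minimal Go sketch of the flag handling this implies, using a hypothetical helper that turns the comma-separated `--dns` value into `nameserver` lines for the pod's resolv.conf:

```go
package main

import (
	"fmt"
	"strings"
)

// resolvConfLines splits a comma-separated --dns value into one
// "nameserver" line per server, skipping empty entries.
func resolvConfLines(dnsFlag string) []string {
	var lines []string
	for _, s := range strings.Split(dnsFlag, ",") {
		s = strings.TrimSpace(s)
		if s != "" {
			lines = append(lines, "nameserver "+s)
		}
	}
	return lines
}

func main() {
	for _, l := range resolvConfLines("10.0.0.10,10.0.0.11") {
		fmt.Println(l)
	}
}
```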
```release-note
NONE
```
Automatic merge from submit-queue
Switch resourcequota controller to shared informers
Originally part of #40097
I have had some issues with this change in the past, when I updated `pkg/quota` to use the new informers while `pkg/controller/resourcequota` remained on the old informers. In this PR, both are switched to using the new informers. The issues in the past were lots of flaky test failures in the ResourceQuota e2es, where it would randomly fail to see deletions and handle replenishment. I am hoping that now that everything here is consistently using the new informers, there won't be any more of these flakes, but it's something to keep an eye out for.
I also think `pkg/controller/resourcequota` could be cleaned up. I don't think there's really any need for `replenishment_controller.go` any more since it's no longer running individual controllers per kind to replenish. It instead just uses the shared informer and adds event handlers to it. But maybe we do that in a follow up.
cc @derekwaynecarr @smarterclayton @wojtek-t @deads2k @sttts @liggitt @timothysc @kubernetes/sig-scalability-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 41332, 41069, 41470, 41474)
"Avoid unnecessary copies in cacher""
This is resend of #40735 (which I reverted when I suspected it to cause issues). But the issue was a completely different. So it's safe to resubmit.
Automatic merge from submit-queue (batch tested with PRs 41332, 41069, 41470, 41474)
Update test owners
@nikhiljindal I've noticed you've duplicated the `test_owners.csv` contents in c1c2a12; was that intentional? I'm removing it here, since it's failing `hack/update_owners.py`.
Automatic merge from submit-queue
Added configurable etcd initial-cluster-state to kube-up script.
Added configurable etcd initial-cluster-state to the kube-up script. This allows creation of a multi-master cluster from scratch. This is a cherry-pick of #41320 from the 1.5 branch.
```release-note
Added configurable etcd initial-cluster-state to kube-up script.
```
Automatic merge from submit-queue
shortcut expander will take the list of short names from the api ser…
**What this PR does / why we need it**: the shortcut expander will take the list of short names for resources from the API server during discovery. For backward compatibility, a hardcoded list of short names will always be appended while evaluating a short name.
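A minimal Go sketch of that lookup order, using hypothetical types rather than the actual kubectl code: discovered short names are consulted first and the hardcoded list is always appended as a fallback.

```go
package main

import "fmt"

type shortcut struct {
	Short, Resource string
}

// hardcoded is the backward-compatibility fallback list.
var hardcoded = []shortcut{
	{"svc", "services"},
	{"po", "pods"},
}

// expand resolves a short name against the discovered list first,
// then the hardcoded list; unknown names pass through unchanged.
func expand(short string, discovered []shortcut) string {
	for _, s := range append(discovered, hardcoded...) {
		if s.Short == short {
			return s.Resource
		}
	}
	return short
}

func main() {
	discovered := []shortcut{{"deploy", "deployments"}}
	fmt.Println(expand("deploy", discovered), expand("svc", nil))
}
```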
Automatic merge from submit-queue
make kube-aggregator run as static pod for local-up-cluster
Runs the kube-aggregator as a static pod for local-up-cluster. Looks like someone broke kubectl negotiation again, so I'll fix that up separately.
@kubernetes/sig-api-machinery-misc
@lavalamp you're probably looking to run kube-aggregator as a static pod, here's an example.
@jwforres I'll make a secure variant for wiring up to openshift.
Automatic merge from submit-queue
Fix AWS device allocator to only use valid device names
According to
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/device_naming.html
we can only use /dev/xvd[b-c][a-z] as device names, so we can only
allocate up to 52 EBS volumes on a node.
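A minimal Go sketch of enumerating that device name space (a hypothetical helper, not the actual allocator): two first letters times 26 second letters gives the 52 possible names.

```go
package main

import "fmt"

// validDeviceNames enumerates /dev/xvd[b-c][a-z]: 2 * 26 = 52 names.
func validDeviceNames() []string {
	var names []string
	for _, first := range []rune{'b', 'c'} {
		for second := 'a'; second <= 'z'; second++ {
			names = append(names, fmt.Sprintf("/dev/xvd%c%c", first, second))
		}
	}
	return names
}

func main() {
	names := validDeviceNames()
	fmt.Println(len(names), names[0], names[len(names)-1]) // 52 /dev/xvdba /dev/xvdcz
}
```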
fixes #41453
cc @justinsb @kubernetes/sig-storage-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 41134, 41410, 40177, 41049, 41313)
apiserver: further cleanup of apiserver storage plumbing
- move kubeapiserver's `RESTOptionsFactory` back to EtcdOptions by adding an `AddWithStorageFactoryTo`
- factor out storage backend `Config` construction from EtcdOptions
- move all `StorageFactory` related code into server/storage subpackage.
In short: remove my stomach ache about `kubeapiserver.RESTOptionsFactory`.
approved based on #40363
Automatic merge from submit-queue (batch tested with PRs 41134, 41410, 40177, 41049, 41313)
PV E2E: dynamically provisioned volume should not create in unmanaged zone
**What this PR does / why we need it**:
Adds an e2e test that attempts to provision a volume in an unmanaged zone and fails if provisioning succeeds. This is to catch regressions of #31948.
cc @jeffvance
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 41134, 41410, 40177, 41049, 41313)
Isolate recycler behavior in PV E2E
**What this PR does / why we need it**:
Sets the default `reclaimPolicy` for PV E2E to `Retain` and isolates `Recycle` tests to their own context. The purpose of this is to future proof the PV test suite against the possible deprecation of the `Recycle` behavior. This is done by consolidating recycling test code into a single Context block that can be removed en masse without affecting the test suite.
Secondly, adds a liveness check for the NFS server pod prior to each test to avoid maxing out timeouts if the NFS server becomes unavailable.
cc @saad-ali @jeffvance
Automatic merge from submit-queue (batch tested with PRs 41134, 41410, 40177, 41049, 41313)
[Federation][Kubefed] Bug fix relating kubeconfig path in kubefed init
**What this PR does / why we need it**:
Fixes https://github.com/kubernetes/kubernetes/issues/41305
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #41305
The explicit kubeconfig path is not updated correctly when supplied through the --kubeconfig flag in kubefed init. This leads to the details about the initialised federation control plane not getting updated in the correct kubeconfig file.
**Special notes for your reviewer**:
@madhusudancs
**Release note**:
```release-note
Fixed a bug that caused the kubeconfig entry for the initialized federation control plane not to be written to the supplied kubeconfig file when the file was supplied through the --kubeconfig flag.
```
Automatic merge from submit-queue (batch tested with PRs 41134, 41410, 40177, 41049, 41313)
Refactored kubemark code into provider-specific and provider-independent parts [Part-3]
Fixes #38967
Applying final part of the changes in PR #39033 (which refactored kubemark code completely). The changes included in this PR are:
- Removed `test/kubemark/common.sh` and moved relevant parts of its code to the right places in start-kubemark/stop-kubemark scripts.
- Added DOCKER_REGISTRY, PROJECT, KUBEMARK_IMAGE_MAKE_TARGET variables to `/test/kubemark/cloud-provider-config.sh` to make the kubemark image push location configurable per provider.
- Removed get-real-pod-for-hollow-node.sh as it doesn't seem to do anything useful.
@kubernetes/sig-scalability-misc @wojtek-t @gmarek
Automatic merge from submit-queue (batch tested with PRs 41360, 41423, 41430, 40647, 41352)
move kubeadm api group testing to kubeadm package
I think this is sufficient to at least preserve round trip testing.
Automatic merge from submit-queue (batch tested with PRs 41360, 41423, 41430, 40647, 41352)
kubelet: reduce extraneous logging for pods using host network
For pods using the host network, kubelet/shim should not log
error/warning messages when determining the pod IP address.