Commit Graph

49395 Commits

Author SHA1 Message Date
Kubernetes Submit Queue
4c7e1590ee Merge pull request #40760 from mikedanese/gce
Automatic merge from submit-queue (batch tested with PRs 40760, 46706, 46783, 46742, 46751)

enable kubelet csr bootstrap in GCE/GKE

@jcbsmpsn @pipejakob 

Fixes https://github.com/kubernetes/kubernetes/issues/31168

```release-note
Enable kubelet csr bootstrap in GCE/GKE
```
2017-06-03 18:30:38 -07:00
Kubernetes Submit Queue
dbd1503b65 Merge pull request #45924 from janetkuo/daemonset-history
Automatic merge from submit-queue

Implement Daemonset history

~~Depends on #45867 (the 1st commit, ignore it when reviewing)~~ (already merged)

Ref https://github.com/kubernetes/community/pull/527/ and https://github.com/kubernetes/community/pull/594

@kubernetes/sig-apps-api-reviews @kubernetes/sig-apps-pr-reviews @erictune @kow3ns @lukaszo @kargakis 

---

TODOs:
- [x] API changes
  - [x] (maybe) Remove rollback subresource if we decide to do client-side rollback 
- [x] deployment controller 
  - [x] controller revision
    - [x] owner ref (claim & adoption)
    - [x] history reconstruct (put revision number, hash collision avoidance)
    - [x] de-dup history and relabel pods
    - [x] compare ds template with history 
  - [x] hash labels (put it in controller revision, pods, and maybe deployment)
  - [x] clean up old history 
  - [x] Rename status.uniquifier when we reach consensus in #44774 
- [x] e2e tests 
- [x] unit tests 
  - [x] daemoncontroller_test.go 
  - [x] update_test.go 
  - [x] ~(maybe) storage_test.go // if we do server side rollback~

kubectl part is in #46144
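
For illustration, a minimal DaemonSet manifest exercising the new history fields (a sketch, assuming the extensions/v1beta1 API of this release; names and values are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: example-ds
spec:
  revisionHistoryLimit: 10      # new field: keep at most 10 old controllerrevisions
  updateStrategy:
    type: RollingUpdate         # history is recorded for RollingUpdate and OnDelete
  template:
    metadata:
      labels:
        app: example-ds
    spec:
      containers:
      - name: main
        image: busybox
        command: ["sleep", "3600"]
```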

--- 

**Release note**:

```release-note
```
2017-06-03 16:52:38 -07:00
Tim Hockin
be987b015c Merge pull request #46716 from thockin/proxy-comments
Kube-proxy cleanups
2017-06-03 15:57:17 -07:00
Kubernetes Submit Queue
b641aedcac Merge pull request #46371 from sjenning/fix-liveness-probe-reset
Automatic merge from submit-queue

reset resultRun on pod restart

xref https://bugzilla.redhat.com/show_bug.cgi?id=1455056

There is currently an issue where, if the pod is restarted due to liveness probe failures exceeding failureThreshold, the failure count is not reset on the probe worker.  When the pod restarts, if the liveness probe fails even once, the pod is restarted again, not honoring failureThreshold on the restart.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - sleep
    - "3600"
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      timeoutSeconds: 1
      periodSeconds: 3
      successThreshold: 1
      failureThreshold: 5
  terminationGracePeriodSeconds: 0
```

Before this PR:
```
$ kubectl create -f busybox-probe-fail.yaml 
pod "busybox" created
$ kubectl get pod -w
NAME      READY     STATUS    RESTARTS   AGE
busybox   1/1       Running   0          4s
busybox   1/1       Running   1         24s
busybox   1/1       Running   2         33s
busybox   0/1       CrashLoopBackOff   2         39s
```

After this PR:
```
$ kubectl create -f busybox-probe-fail.yaml
$ kubectl get pod -w
NAME      READY     STATUS              RESTARTS   AGE
busybox   0/1       ContainerCreating   0          2s
busybox   1/1       Running   0         4s
busybox   1/1       Running   1         27s
busybox   1/1       Running   2         45s
```

```release-note
Fix kubelet reset liveness probe failure count across pod restart boundaries
```

Restarts now happen at even intervals.

@derekwaynecarr
2017-06-03 15:15:49 -07:00
Kubernetes Submit Queue
ebb4b0f7c6 Merge pull request #46494 from xiangpengzhao/fix-pod-manifest
Automatic merge from submit-queue (batch tested with PRs 46782, 46719, 46339, 46609, 46494)

Do not log the content of pod manifest if parsing fails.

**What this PR does / why we need it**:
- ~~only accepts text/plain config file~~
- ~~not log config file content when it's invalid~~

Do not log the content of pod manifest if parsing fails.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #46493

**Special notes for your reviewer**:
/cc @yujuhong 

@sig-node-reviewers

**Release note**:

```release-note
NONE
```
2017-06-03 12:32:42 -07:00
Kubernetes Submit Queue
747b3b1b0c Merge pull request #46609 from abhinavdahiya/fix_inconsistent_path_order_cni
Automatic merge from submit-queue (batch tested with PRs 46782, 46719, 46339, 46609, 46494)

Fix inconsistency in finding cni binaries

Fixes #46476

Signed-off-by: Abhinav Dahiya <abhinav.dahiya@coreos.com>



**What this PR does / why we need it**:
This fixes the inconsistency in finding the appropriate CNI binaries.

Currently the `lo` cniNetwork searches vendorCniDir before binDir, whereas the default for all others is binDir before vendorCniDir. This PR makes vendorCniDir > binDir the default behavior.

**Why we need it**:
Hyperkube currently ships CNI binaries in /opt/cni/bin.
To use the latest version of Calico you need to override the kubelet's /opt/cni/bin from the host, which means all other CNI plugins (flannel, loopback, etc.) have to be mounted from the host too. Keeping vendorCniDir higher in the search order allows easy installation of newer plugin versions.
2017-06-03 12:32:41 -07:00
Kubernetes Submit Queue
018f8cfd54 Merge pull request #46339 from xilabao/fix-kubectl
Automatic merge from submit-queue (batch tested with PRs 46782, 46719, 46339, 46609, 46494)

update default translation of annotations

**What this PR does / why we need it**:
```
using the local cluster. the help of kubectl is not correct
# ./cluster/kubectl.sh
.......
Settings Commands:
  label          Update the labels on a resource
  annotate       Update the annotations on a resourcewatch is only supported on individual resources and resource
collections - %d resources were found
  completion     Output shell completion code for the specified shell (bash or zsh)

Other Commands:
  api-versions   Print the supported API versions on the server, in the form of "group/version"
  config         Modify kubeconfig files
  help           Help about any command
  plugin         Runs a command-line plugin
  version        Print the client and server version information

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).

```
**Which issue this PR fixes**:

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-06-03 12:32:39 -07:00
Kubernetes Submit Queue
ea5e6bcee7 Merge pull request #46719 from a-robinson/peer
Automatic merge from submit-queue (batch tested with PRs 46782, 46719, 46339, 46609, 46494)

Support custom domains in the cockroachdb example's init container

This switches from using v0.1 of the peer-finder image to a version that
includes https://github.com/kubernetes/contrib/pull/2013

While I'm here, switch the version of cockroachdb from 1.0 to 1.0.1

```release-note
NONE
```

@tschottdorf
2017-06-03 12:32:37 -07:00
Kubernetes Submit Queue
388018fa3d Merge pull request #46782 from dnardo/ip-masq-agent
Automatic merge from submit-queue

Add some initial resource limits to the ip-masq-agent.

These limits are based on observing the agent over roughly a day. RES was typically ~4M for me, but I'd like to make sure we have some headroom. If there were a huge config map this could increase slightly, but not significantly, since we only allow 64 entries.

```
VmPeak:    11164 kB
VmSize:    11164 kB
VmLck:         0 kB
VmPin:         0 kB
VmHWM:      7652 kB
VmRSS:      4260 kB
VmData:     7612 kB
VmStk:       136 kB
VmExe:      1856 kB
VmLib:         0 kB
VmPTE:        40 kB
VmPMD:        20 kB
VmSwap:        0 kB
```
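
As a rough sketch, limits of this shape would land in the ip-masq-agent container spec (the exact values here are my illustration based on the numbers above, not necessarily what was merged):

```yaml
resources:
  requests:
    cpu: 10m
    memory: 16Mi
  limits:
    memory: 16Mi
```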
2017-06-03 12:28:27 -07:00
Kubernetes Submit Queue
903c40b5d3 Merge pull request #46725 from timstclair/apparmor-debug
Automatic merge from submit-queue (batch tested with PRs 46620, 46732, 46773, 46772, 46725)

Fix AppArmor test for docker 1.13

... & better debugging.

The issue is that we run the pod containers in a shared PID namespace with docker 1.13, so PID 1 is no longer the container's root process. Since it's messy to get the container's root process, I switched to using `/proc/self` to read the apparmor profile. While this wouldn't catch a regression that caused only the init process to run with the wrong profile, I think it's a good approximation.

/cc @aulanov @Amey-D
2017-06-03 11:39:46 -07:00
Kubernetes Submit Queue
a2412f114e Merge pull request #46772 from sttts/sttts-resolve-localhost
Automatic merge from submit-queue (batch tested with PRs 46620, 46732, 46773, 46772, 46725)

apiserver: avoid resolving 'localhost'

Fixes https://github.com/kubernetes/kubernetes/issues/46767.
2017-06-03 11:39:44 -07:00
Kubernetes Submit Queue
a281ad8d4b Merge pull request #46773 from wasylkowski/nig-doc-change
Automatic merge from submit-queue (batch tested with PRs 46620, 46732, 46773, 46772, 46725)

Added missing documentation to NodeInstanceGroup.

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-06-03 11:39:42 -07:00
Kubernetes Submit Queue
6b76c40a62 Merge pull request #46732 from timstclair/audit-metrics
Automatic merge from submit-queue (batch tested with PRs 46620, 46732, 46773, 46772, 46725)

Instrument advanced auditing

Add prometheus metrics for audit logging, including:

- A total count of audit events generated and sent to the output backend
- A count of audit events that failed to be audited due to an error (per backend)
- A count of request audit levels (1 per request)

For https://github.com/kubernetes/features/issues/22

- [x] TODO: Call `HandlePluginError` from the webhook backend, once https://github.com/kubernetes/kubernetes/pull/45919 merges (in this or a separate PR, depending on timing of the merge)

/cc @ihmccreery @sttts @soltysh @ericchiang
2017-06-03 11:39:40 -07:00
Kubernetes Submit Queue
0bcd9602b4 Merge pull request #46620 from enxebre/kuberuntime-test-coverage
Automatic merge from submit-queue (batch tested with PRs 46620, 46732, 46773, 46772, 46725)

Improving test coverage for kubelet/kuberuntime.

**What this PR does / why we need it**:
Increases test coverage for kubelet/kuberuntime 
https://github.com/kubernetes/kubernetes/issues/46123

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #46123

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-06-03 11:39:38 -07:00
Kubernetes Submit Queue
3473b8a792 Merge pull request #45565 from Q-Lee/mds
Automatic merge from submit-queue

Adding a metadata proxy addon

**What this PR does / why we need it**: adds a metadata server proxy daemonset to hide kubelet secrets.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: this partially addresses #8867

**Special notes for your reviewer**:

**Release note**: the gce metadata server can be hidden behind a proxy, hiding the kubelet's token.

```release-note
The gce metadata server can be hidden behind a proxy, hiding the kubelet's token.
```
2017-06-03 08:55:32 -07:00
Kubernetes Submit Queue
36e25df059 Merge pull request #46036 from deads2k/server-25-retry
Automatic merge from submit-queue (batch tested with PRs 36721, 46483, 45500, 46724, 46036)

retry clientCA post start hook on transient failures

@smarterclayton retries the poststarthook you saw failing.

Having looked through, it seems that I didn't kill the server on the failure.
2017-06-03 08:08:44 -07:00
Kubernetes Submit Queue
7d599cc190 Merge pull request #46724 from deads2k/agg-32-startup
Automatic merge from submit-queue (batch tested with PRs 36721, 46483, 45500, 46724, 46036)

stop special casing the loopback connection for aggregator

Fixes a TODO for the aggregator loopback connection.
2017-06-03 08:08:42 -07:00
Kubernetes Submit Queue
4220b7303e Merge pull request #45500 from nbutton23/nbutton-aws-elb-security-group
Automatic merge from submit-queue (batch tested with PRs 36721, 46483, 45500, 46724, 46036)

AWS: Allow configuration of a single security group for ELBs

**What this PR does / why we need it**:
AWS has a hard limit on the number of Security Groups (500). Right now, every time an ELB is created, Kubernetes creates a new Security Group. This change allows specifying a single Security Group to use for all ELBs.
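
A sketch of how this could look on a Service, assuming the annotation key this PR introduces is `service.beta.kubernetes.io/aws-load-balancer-security-groups` (check the diff for the exact name):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    # assumed annotation key; the named security group is reused
    # instead of creating a new one per ELB
    service.beta.kubernetes.io/aws-load-balancer-security-groups: sg-0123456789abcdef0
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
```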

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:
For some reason the diff tool makes this look like way more changes than there really are.
**Release note**:

```release-note
```
2017-06-03 08:08:40 -07:00
Kubernetes Submit Queue
445795186d Merge pull request #46483 from shashidharatd/fed-sc-ut-delete
Automatic merge from submit-queue (batch tested with PRs 36721, 46483, 45500, 46724, 46036)

Federation: Minor corrections in service controller and add a unit testcase

**What this PR does / why we need it**:
This PR fixes a few outdated comments in the federation service controller, along with a few other minor fixes.
This also adds a unit test case to test federated service deletion.


/assign @quinton-hoole 
/cc @marun @kubernetes/sig-federation-pr-reviews 

```release-note
NONE
```
2017-06-03 08:08:38 -07:00
Kubernetes Submit Queue
07f85565a2 Merge pull request #36721 from smarterclayton/initializers
Automatic merge from submit-queue

Add initializer support to admission and uninitialized filtering to rest storage

Initializers are the opposite of finalizers - they allow API clients to react to object creation and populate fields prior to other clients seeing them.

High level description:

1. Add `metadata.initializers` field to all objects
2. By default, filter objects with > 0 initializers from LIST and WATCH to preserve legacy client behavior (known as partially-initialized objects)
3. Add an admission controller that populates .initializer values per type, and denies mutation of initializers except by certain privilege levels (you must have the `initialize` verb on a resource)
4. Allow partially-initialized objects to be viewed via LIST and WATCH for initializer types
5. When creating objects, the object is "held" by the server until the initializers list is empty
6. Allow some creators to bypass initialization (set initializers to `[]`), or to have the result returned immediately when the object is created.

The code here should be backwards compatible for all clients because they do not see partially initialized objects unless they GET the resource directly. The watch cache makes checking for partially initialized objects cheap. Some reflectors may need to change to ask for partially-initialized objects.
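
For orientation, a hedged sketch of an uninitialized object under this scheme (field layout assumed from the description above; see the API types for the exact shape):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
  initializers:     # assumed field layout; while the pending list is non-empty,
    pending:        # the object is hidden from LIST/WATCH by default
    - name: podimage.example.com
spec:
  containers:
  - name: app
    image: busybox
```

Per the release note below, clients can opt in to seeing such objects with the `?includeUninitialized=true` query parameter.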

```release-note
Kubernetes resources, when the `Initializers` admission controller is enabled, can be initialized (defaulting or other additive functions) by other agents in the system prior to those resources being visible to other clients.  An initialized resource is not visible to clients unless they request (for get, list, or watch) to see uninitialized resources with the `?includeUninitialized=true` query parameter.  Once the initializers have completed, the resource then becomes visible.  Clients must have the ability to perform the `initialize` action on a resource in order to modify it prior to initialization being completed.
```
2017-06-03 07:16:52 -07:00
Kubernetes Submit Queue
e6c74bbaaf Merge pull request #46221 from FengyunPan/close-file
Automatic merge from submit-queue

Close file after os.Open()

None
2017-06-03 04:42:00 -07:00
Kubernetes Submit Queue
efa8c5eb45 Merge pull request #45008 from xilabao/fix-cert-dir
Automatic merge from submit-queue

fix cert dir in kubeadm

1. Fixes https://github.com/kubernetes/kubeadm/issues/232
2. Use manifests as a constant
2017-06-03 03:48:26 -07:00
Kubernetes Submit Queue
5b5b8e6390 Merge pull request #44844 from dixudx/update_gophercloud
Automatic merge from submit-queue

update gophercloud that fixed code format

**What this PR does / why we need it**:

mainly to include [#265](https://github.com/gophercloud/gophercloud/pull/265), which fixed the code formatting of the two files below:

* vendor/github.com/gophercloud/gophercloud/openstack/blockstorage/v1/apiversions/urls.go
* vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/images/results.go
2017-06-03 01:22:08 -07:00
Janet Kuo
85ec49c9bb Verify histories and pods in DaemonSet e2e test 2017-06-03 00:46:11 -07:00
Janet Kuo
d2cf00fcd6 Test both strategies in all daemonSet controller unit tests 2017-06-03 00:46:11 -07:00
Janet Kuo
d02f40a5e7 Implement DaemonSet history logic in controller
1. Create controllerrevisions (history) and label pods with template
   hash for both RollingUpdate and OnDelete update strategy
2. Clean up old, non-live history based on revisionHistoryLimit
3. Remove duplicate controllerrevisions (the ones with the same template)
   and relabel their pods
4. Update RBAC to allow DaemonSet controller to manage
   controllerrevisions
5. In DaemonSet controller unit tests, create new pods with hash labels
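
A hedged sketch of the controllerrevision objects this produces (shape assumed from the apps/v1beta1 ControllerRevision type; names and hash are illustrative):

```yaml
apiVersion: apps/v1beta1
kind: ControllerRevision
metadata:
  name: example-ds-6d5f8c9b4          # illustrative: name carries the template hash
  labels:
    daemonset-controller-hash: "6d5f8c9b4"
  ownerReferences:
  - apiVersion: extensions/v1beta1    # owner ref used for claim & adoption
    kind: DaemonSet
    name: example-ds
    controller: true
revision: 3                           # monotonically increasing revision number
data: {}                              # serialized pod template (elided in this sketch)
```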
2017-06-03 00:44:23 -07:00
Janet Kuo
4e6f70ff67 Autogen: run hack/update-all.sh 2017-06-03 00:43:53 -07:00
Janet Kuo
8275e8f017 Update DaemonSet API for rollback and history
1. Add revisionHistoryLimit (default 10), collisionCount, and validation code
2. Add daemonset-controller-hash label, and deprecate templateGeneration
2017-06-03 00:43:17 -07:00
Kubernetes Submit Queue
78a9e4feba Merge pull request #46375 from deads2k/auth-05-nameprotection
Automatic merge from submit-queue (batch tested with PRs 46456, 46675, 46676, 46416, 46375)

prevent illegal verb/name combinations in default policy rules

Requests using some kinds of "normal" verbs don't carry resource names, so rules that combine those verbs with names can never match. This check prevents people from making that common mistake.

@timothysc as I noted in your pull.  This will prevent some classes of errors.
2017-06-03 00:28:53 -07:00
Kubernetes Submit Queue
ead40ebdce Merge pull request #46416 from CaoShuFeng/audit_doc
Automatic merge from submit-queue (batch tested with PRs 46456, 46675, 46676, 46416, 46375)

Fix doc about Verb for advanced audit feature

This patch is committed to address the following comment:
https://github.com/kubernetes/kubernetes/pull/45315#discussion_r117107507
**Release note**:

```
NONE
```
2017-06-03 00:28:50 -07:00
Kubernetes Submit Queue
2ff0fb7e26 Merge pull request #46676 from gyliu513/masq
Automatic merge from submit-queue (batch tested with PRs 46456, 46675, 46676, 46416, 46375)

Move tolerations to PodSpec for ip-masq-agent.yaml.

**What this PR does / why we need it**:
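
For context, a sketch of the shape of this change: tolerations move from the alpha annotation to the first-class PodSpec field (snippets illustrative):

```yaml
# before: tolerations carried as an annotation on the pod template
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/tolerations: '[{"key": "CriticalAddonsOnly", "operator": "Exists"}]'
# after: tolerations as a field under spec
spec:
  tolerations:
  - key: CriticalAddonsOnly
    operator: Exists
```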

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-06-03 00:28:48 -07:00
Kubernetes Submit Queue
8325943822 Merge pull request #46675 from gyliu513/calico
Automatic merge from submit-queue (batch tested with PRs 46456, 46675, 46676, 46416, 46375)

Move tolerations to PodSpec for calico-node.yaml.

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
none
```
2017-06-03 00:28:46 -07:00
Kubernetes Submit Queue
b8c9ee8abb Merge pull request #46456 from jingxu97/May/allocatable
Automatic merge from submit-queue

Add local storage (scratch space) allocatable support

This PR adds support for allocatable local storage (scratch space).
This feature covers only the root filesystem, which is shared by Kubernetes
components, users' containers, and/or images. Users can use the
--kube-reserved flag to reserve storage for kube system components.
If the allocatable storage for users' pods is used up, some pods will be
evicted to free up storage.

This feature is part of local storage capacity isolation and described in the proposal https://github.com/kubernetes/community/pull/306
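
A hedged sketch of how this could surface in a node's status (the resource key for primary-partition scratch space is my assumption here; values illustrative):

```yaml
status:
  capacity:
    cpu: "4"
    memory: 8010168Ki
    storage: 98316Mi      # assumed key for primary-partition scratch space
  allocatable:
    cpu: "4"
    memory: 7907768Ki
    storage: 88316Mi      # capacity minus the --kube-reserved storage reservation
```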

**Release note**:

```release-note
This feature exposes local storage capacity for the primary partitions, and supports & enforces storage reservation in Node Allocatable 
```
2017-06-03 00:24:29 -07:00
Kubernetes Submit Queue
822e29dd3c Merge pull request #46524 from ajitak/npd_version
Automatic merge from submit-queue (batch tested with PRs 46239, 46627, 46346, 46388, 46524)

Configure NPD version through env variable

This lets the user specify the NPD version to be installed with Kubernetes.
2017-06-02 23:37:45 -07:00
Kubernetes Submit Queue
e837c3bbc2 Merge pull request #46388 from lavalamp/whitlockjc-generic-webhook-admission
Automatic merge from submit-queue (batch tested with PRs 46239, 46627, 46346, 46388, 46524)

Dynamic webhook admission control plugin

Unit tests pass.

Needs plumbing:
* [ ] service resolver (depends on @wfender PR)
* [x] client cert (depends on ????)
* [ ] hook source (depends on @caesarxuchao PR)

Also at least one thing will need to be renamed after Chao's PR merges.
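
A heavily hedged sketch of registering such a remote admission controller, assuming the v1alpha1 admissionregistration kinds of this era (kind and field names are my best guess; treat as illustrative only):

```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ExternalAdmissionHookConfiguration   # assumed kind name
metadata:
  name: example-hooks
externalAdmissionHooks:
- name: pods.example.com
  rules:
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
  failurePolicy: Ignore
  clientConfig:
    service:
      namespace: default
      name: example-admission
    caBundle: "<base64-encoded CA bundle>"
```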

```release-note
Allow remote admission controllers to be dynamically added and removed by administrators.  External admission controllers make an HTTP POST containing details of the requested action which the service can approve or reject.
```
2017-06-02 23:37:42 -07:00
Kubernetes Submit Queue
d8374eaae4 Merge pull request #46346 from zjj2wry/ds-controller
Automatic merge from submit-queue (batch tested with PRs 46239, 46627, 46346, 46388, 46524)

add test and fix typo in daemoncontroller

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-06-02 23:37:40 -07:00
Kubernetes Submit Queue
348bf1e032 Merge pull request #46627 from deads2k/api-12-labels
Automatic merge from submit-queue (batch tested with PRs 46239, 46627, 46346, 46388, 46524)

move labels to components which own the APIs

During the apimachinery split in 1.6, we accidentally moved several label APIs into apimachinery.  They don't belong there, since the individual APIs are not general machinery concerns, but instead are the concern of particular components: most commonly the kubelet.  This pull moves the labels into their owning components and out of API machinery.

@kubernetes/sig-api-machinery-misc @kubernetes/api-reviewers @kubernetes/api-approvers 
@derekwaynecarr  since most of these are related to the kubelet
2017-06-02 23:37:38 -07:00
Kubernetes Submit Queue
fcf183dcaa Merge pull request #46239 from mtanino/issue/45394
Automatic merge from submit-queue

Log out from multiple target portals when using iscsi storage plugin

**What this PR does / why we need it**:

When using iSCSI storage with multiple target portal (TP) addresses
and multipathing, the volume manager logs in to the IQN for all
portal addresses, but when a pod is destroyed the volume manager
only logs out of the primary TP, and the sessions for the other TPs
always remain.

This patch adds mount points for all TPs, and then logs out of all
TPs when a pod is destroyed. If a TP is referenced by other pods,
its connection remains as usual.



**Which issue this PR fixes** 
fixes #45394

**Special notes for your reviewer**:

**Release note**:

```
NONE
```
2017-06-02 23:27:14 -07:00
Kubernetes Submit Queue
3093936a18 Merge pull request #46551 from caesarxuchao/rule-validation
Automatic merge from submit-queue (batch tested with PRs 46726, 41912, 46695, 46034, 46551)

Fix validation of Rule.Resources
2017-06-02 21:42:43 -07:00
Kubernetes Submit Queue
047c4667fe Merge pull request #46034 from kensimon/canonical-aggregate-events
Automatic merge from submit-queue (batch tested with PRs 46726, 41912, 46695, 46034, 46551)

Event aggregation: include latest event message in aggregate event

**What this PR does / why we need it**:

This changes the event aggregation behavior so that, when multiple events are deduplicated, the aggregated event includes the message of the latest related event.

This fixes an issue where the original event expires due to TTL, and the aggregate event doesn't contain any useful message.

**Which issue this PR fixes**:

fixes #45971

```release-note
Duplicate recurring Events now include the latest event's Message string
```
2017-06-02 21:42:41 -07:00
Kubernetes Submit Queue
9baeab9dd8 Merge pull request #46695 from gyliu513/daemoncontrollertest
Automatic merge from submit-queue (batch tested with PRs 46726, 41912, 46695, 46034, 46551)

Added a new test case for daemoncontroller.

This patch adds a new test case: a DaemonSet with a node selector
matching some nodes launches pods on all the matching nodes.



**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-06-02 21:42:39 -07:00
Kubernetes Submit Queue
24d09977fb Merge pull request #41912 from jcbsmpsn/rotate-client-certificate
Automatic merge from submit-queue (batch tested with PRs 46726, 41912, 46695, 46034, 46551)

Rotate kubelet client certificate.

Changes the kubelet so it bootstraps off the cert/key specified in the
config file and uses those to request new cert/key pairs from the
Certificate Signing Request API, as well as rotating client certificates
when they approach expiration.

Default behavior is for client certificate rotation to be disabled. If enabled
using a command line flag, the kubelet exits each time the certificate is
rotated. I tried to use `GetCertificate` in [tls.Config](https://golang.org/pkg/crypto/tls/#Config) but it is only called
on the server side of connections. Then I tried `GetClientCertificate`,
but it is new in 1.8.
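
For reference, a sketch of the bootstrap kubeconfig shape the kubelet starts from (paths and names illustrative):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://master.example.com:6443
    certificate-authority: /var/lib/kubelet/pki/ca.crt
users:
- name: kubelet-bootstrap
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client.crt   # initial cert/key pair;
    client-key: /var/lib/kubelet/pki/kubelet-client.key           # renewed via the CSR API
contexts:
- name: bootstrap
  context:
    cluster: local
    user: kubelet-bootstrap
current-context: bootstrap
```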

**Release note**
```release-note
With --feature-gates=RotateKubeletClientCertificate=true set, the kubelet will
request a client certificate from the API server during the boot cycle and pause
waiting for the request to be satisfied. It will continually refresh the certificate
as the certificate's expiration approaches.
```
2017-06-02 21:42:37 -07:00
Kubernetes Submit Queue
aab12f217e Merge pull request #46726 from deads2k/crd-09-proto
Automatic merge from submit-queue

add protobuf for CRD

Adds protobuf encoding to CRD and simplifies loopback initialization.

xref: https://github.com/kubernetes/features/issues/95
2017-06-02 21:34:54 -07:00
Kubernetes Submit Queue
ea5183262a Merge pull request #45331 from k82cn/k8s_39559_node_cache
Automatic merge from submit-queue

Added unit test for node operation in schedulercache.

Added unit test for node operation in schedulercache.

The code coverage is 62.4% (no cases were added for get/set, or for util.go, which is used by the algorithms).

[combined-coverage.html.gz](https://github.com/kubernetes/kubernetes/files/975427/combined-coverage.html.gz)
2017-06-02 20:42:19 -07:00
Kubernetes Submit Queue
85e43bada9 Merge pull request #46721 from mikedanese/fooloo
Automatic merge from submit-queue (batch tested with PRs 41563, 45251, 46265, 46462, 46721)

change kubemark image project to match new cos image project

The old project is not available anymore.

https://github.com/kubernetes/kubernetes/pull/45136
2017-06-02 19:53:44 -07:00
Kubernetes Submit Queue
0d4fda7746 Merge pull request #46462 from vmware/vsphere-storage-metrics
Automatic merge from submit-queue (batch tested with PRs 41563, 45251, 46265, 46462, 46721)

Add metric collections for vSphere cloud provider operations

**What this PR does / why we need it**:
This PR adds metric collections for vSphere Cloud Provider Operations.

**Which issue this PR fixes** 
fixes #

**Special notes for your reviewer**:
Verified Prometheus pod is able to scrape vSphere metrics from Kubernetes Controller’s URL.

`providers/vsphere/vsphere.go` is intentionally left unformatted by gofmt, to keep the diff reviewable.

After review is complete, I will apply the formatting. 

**Release note**:

```release-note
None
```

@BaluDontu @tusharnt

@gnufied Verified with executing various operations on the Kubernetes Cluster deployed using this change.

```
$ curl -s 10.160.18.128:10252/metrics | grep "cloudprovider_vsphere"
# HELP cloudprovider_vsphere_api_request_duration_seconds Latency of vsphere api call
# TYPE cloudprovider_vsphere_api_request_duration_seconds histogram
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="0.005"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="0.01"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="0.025"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="0.05"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="0.1"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="0.25"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="0.5"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="1"} 1
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="2.5"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="5"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="10"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="+Inf"} 3
cloudprovider_vsphere_api_request_duration_seconds_sum{request="AttachVolume"} 3.9742241939999996
cloudprovider_vsphere_api_request_duration_seconds_count{request="AttachVolume"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="0.005"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="0.01"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="0.025"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="0.05"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="0.1"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="0.25"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="0.5"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="1"} 1
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="2.5"} 1
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="5"} 1
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="10"} 1
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="+Inf"} 1
cloudprovider_vsphere_api_request_duration_seconds_sum{request="CreateVolume"} 0.920856776
cloudprovider_vsphere_api_request_duration_seconds_count{request="CreateVolume"} 1
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="0.005"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="0.01"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="0.025"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="0.05"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="0.1"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="0.25"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="0.5"} 2
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="1"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="2.5"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="5"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="10"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="+Inf"} 3
cloudprovider_vsphere_api_request_duration_seconds_sum{request="DeleteVolume"} 1.3301585450000002
cloudprovider_vsphere_api_request_duration_seconds_count{request="DeleteVolume"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="0.005"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="0.01"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="0.025"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="0.05"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="0.1"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="0.25"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="0.5"} 1
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="1"} 4
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="2.5"} 6
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="5"} 6
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="10"} 6
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="+Inf"} 6
cloudprovider_vsphere_api_request_duration_seconds_sum{request="DetachVolume"} 5.350829375
cloudprovider_vsphere_api_request_duration_seconds_count{request="DetachVolume"} 6
# HELP cloudprovider_vsphere_api_request_errors vsphere Api errors
# TYPE cloudprovider_vsphere_api_request_errors counter
cloudprovider_vsphere_api_request_errors{request="DeleteVolume"} 4
# HELP cloudprovider_vsphere_operation_duration_seconds Latency of vsphere operation call
# TYPE cloudprovider_vsphere_operation_duration_seconds histogram
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="0.005"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="0.01"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="0.025"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="0.05"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="0.1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="0.25"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="0.5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="1"} 1
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="2.5"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="5"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="10"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="+Inf"} 3
cloudprovider_vsphere_operation_duration_seconds_sum{operation="AttachVolumeOperation"} 4.732579923
cloudprovider_vsphere_operation_duration_seconds_count{operation="AttachVolumeOperation"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="0.005"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="0.01"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="0.025"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="0.05"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="0.1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="0.25"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="0.5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="2.5"} 1
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="5"} 1
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="10"} 1
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="+Inf"} 1
cloudprovider_vsphere_operation_duration_seconds_sum{operation="CreateVolumeOperation"} 1.2753096990000001
cloudprovider_vsphere_operation_duration_seconds_count{operation="CreateVolumeOperation"} 1
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="0.005"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="0.01"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="0.025"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="0.05"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="0.1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="0.25"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="0.5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="2.5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="10"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="+Inf"} 1
cloudprovider_vsphere_operation_duration_seconds_sum{operation="CreateVolumeWithPolicyOperation"} 15.066558008
cloudprovider_vsphere_operation_duration_seconds_count{operation="CreateVolumeWithPolicyOperation"} 1
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="0.005"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="0.01"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="0.025"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="0.05"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="0.1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="0.25"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="0.5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="2.5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="10"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="+Inf"} 2
cloudprovider_vsphere_operation_duration_seconds_sum{operation="CreateVolumeWithRawVSANPolicyOperation"} 21.805354686
cloudprovider_vsphere_operation_duration_seconds_count{operation="CreateVolumeWithRawVSANPolicyOperation"} 2
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="0.005"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="0.01"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="0.025"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="0.05"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="0.1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="0.25"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="0.5"} 1
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="1"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="2.5"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="5"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="10"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="+Inf"} 3
cloudprovider_vsphere_operation_duration_seconds_sum{operation="DeleteVolumeOperation"} 1.4869503179999999
cloudprovider_vsphere_operation_duration_seconds_count{operation="DeleteVolumeOperation"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="0.005"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="0.01"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="0.025"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="0.05"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="0.1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="0.25"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="0.5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="1"} 2
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="2.5"} 6
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="5"} 6
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="10"} 6
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="+Inf"} 6
cloudprovider_vsphere_operation_duration_seconds_sum{operation="DetachVolumeOperation"} 7.15601343
cloudprovider_vsphere_operation_duration_seconds_count{operation="DetachVolumeOperation"} 6
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="0.005"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="0.01"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="0.025"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="0.05"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="0.1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="0.25"} 1
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="0.5"} 2
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="1"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="2.5"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="5"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="10"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="+Inf"} 3
cloudprovider_vsphere_operation_duration_seconds_sum{operation="DiskIsAttachedOperation"} 1.0603705730000001
cloudprovider_vsphere_operation_duration_seconds_count{operation="DiskIsAttachedOperation"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="0.005"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="0.01"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="0.025"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="0.05"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="0.1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="0.25"} 4
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="0.5"} 12
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="1"} 12
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="2.5"} 12
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="5"} 12
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="10"} 12
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="+Inf"} 12
cloudprovider_vsphere_operation_duration_seconds_sum{operation="DisksAreAttachedOperation"} 3.282661207
cloudprovider_vsphere_operation_duration_seconds_count{operation="DisksAreAttachedOperation"} 12
# HELP cloudprovider_vsphere_operation_errors vsphere operation errors
# TYPE cloudprovider_vsphere_operation_errors counter
cloudprovider_vsphere_operation_errors{operation="DeleteVolumeOperation"} 4
```
2017-06-02 19:53:42 -07:00
Kubernetes Submit Queue
2629bf79f2 Merge pull request #46265 from waseem/printers-genericity
Automatic merge from submit-queue (batch tested with PRs 41563, 45251, 46265, 46462, 46721)

Denote if a printer is generic.

This fixes #38779.

This allows us to avoid the case in which printers.GetStandardPrinter
returns nil for both printer and err, removing potential panics that
may arise throughout kubectl commands.

Please see #38779 and #38112 for complete context.
2017-06-02 19:53:40 -07:00
Kubernetes Submit Queue
284132ee88 Merge pull request #45251 from gyliu513/taint-typo
Automatic merge from submit-queue (batch tested with PRs 41563, 45251, 46265, 46462, 46721)

Toleration should be `notReady:NoExecute` in defaulttolerationseconds test.
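
The toleration in question, as the defaulting admission plugin is expected to inject it (a sketch; the taint key name is assumed from the 1.7-era alpha taints):

```yaml
tolerations:
- key: node.alpha.kubernetes.io/notReady   # assumed key name
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300
```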



**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-06-02 19:53:38 -07:00
Kubernetes Submit Queue
b68b4aeb20 Merge pull request #41563 from gyliu513/kubelet-util
Automatic merge from submit-queue

Improved code coverage for pkg/kubelet/util.

The test coverage for pkg/kubelet/util.go increased from 45.1%
to 84.3%.



**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-06-02 19:41:57 -07:00
Clayton Coleman
b993f7d303 generated: client-go staging 2017-06-02 22:09:05 -04:00