Commit Graph

49580 Commits

Author SHA1 Message Date
Kubernetes Submit Queue
388018fa3d Merge pull request #46782 from dnardo/ip-masq-agent
Automatic merge from submit-queue

Add some initial resource limits to the ip-masq-agent.

These limits are based on observing the agent over roughly a day. RES was typically ~4M for me, but I'd like to make sure we have some headroom. A huge config map could increase this slightly, but not significantly, since we only allow 64 entries.

VmPeak:    11164 kB
VmSize:    11164 kB
VmLck:         0 kB
VmPin:         0 kB
VmHWM:      7652 kB
VmRSS:      4260 kB
VmData:     7612 kB
VmStk:       136 kB
VmExe:      1856 kB
VmLib:         0 kB
VmPTE:        40 kB
VmPMD:        20 kB
VmSwap:        0 kB
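
For context, a hedged sketch of what such limits look like when expressed with the k8s.io/api Go types; the quantities below are illustrative picks with headroom over the observed ~4M RES, not necessarily the values the PR ships.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Illustrative requests/limits with headroom over the observed ~4M RES;
	// a large config map could push usage up slightly, so don't set this too tight.
	res := corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("10m"),  // assumed value
			corev1.ResourceMemory: resource.MustParse("16Mi"), // assumed value
		},
		Limits: corev1.ResourceList{
			corev1.ResourceMemory: resource.MustParse("16Mi"), // assumed value
		},
	}
	fmt.Println("memory limit:", res.Limits.Memory().String())
}
```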
2017-06-03 12:28:27 -07:00
Kubernetes Submit Queue
903c40b5d3 Merge pull request #46725 from timstclair/apparmor-debug
Automatic merge from submit-queue (batch tested with PRs 46620, 46732, 46773, 46772, 46725)

Fix AppArmor test for docker 1.13

... & better debugging.

The issue is that we run the pod containers in a shared PID namespace with docker 1.13, so PID 1 is no longer the container's root process. Since it's messy to get the container's root process, I switched to using `/proc/self` to read the apparmor profile. While this wouldn't catch a regression that caused only the init process to run with the wrong profile, I think it's a good approximation.

/cc @aulanov @Amey-D
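
A minimal sketch of the `/proc/self` idea on a Linux host with AppArmor enabled, reading the calling process's own profile; this illustrates the approach, not the test's actual code.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// /proc/self/attr/current holds the AppArmor label of the calling
	// process, e.g. "docker-default (enforce)", so no PID lookup is needed.
	b, err := os.ReadFile("/proc/self/attr/current")
	if err != nil {
		fmt.Fprintln(os.Stderr, "reading apparmor profile:", err)
		os.Exit(1)
	}
	fmt.Println("profile:", strings.TrimSpace(string(b)))
}
```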
2017-06-03 11:39:46 -07:00
Kubernetes Submit Queue
a2412f114e Merge pull request #46772 from sttts/sttts-resolve-localhost
Automatic merge from submit-queue (batch tested with PRs 46620, 46732, 46773, 46772, 46725)

apiserver: avoid resolving 'localhost'

Fixes https://github.com/kubernetes/kubernetes/issues/46767.
2017-06-03 11:39:44 -07:00
Kubernetes Submit Queue
a281ad8d4b Merge pull request #46773 from wasylkowski/nig-doc-change
Automatic merge from submit-queue (batch tested with PRs 46620, 46732, 46773, 46772, 46725)

Added missing documentation to NodeInstanceGroup.

2017-06-03 11:39:42 -07:00
Kubernetes Submit Queue
6b76c40a62 Merge pull request #46732 from timstclair/audit-metrics
Automatic merge from submit-queue (batch tested with PRs 46620, 46732, 46773, 46772, 46725)

Instrument advanced auditing

Add prometheus metrics for audit logging (see the sketch below), including:

- A total count of audit events generated and sent to the output backend
- A count of audit events that failed to be audited due to an error (per backend)
- A count of request audit levels (1 per request)

For https://github.com/kubernetes/features/issues/22

- [x] TODO: Call `HandlePluginError` from the webhook backend, once https://github.com/kubernetes/kubernetes/pull/45919 merges (in this or a separate PR, depending on timing of the merge)

/cc @ihmccreery @sttts @soltysh @ericchiang
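
A hedged sketch of this kind of instrumentation with the prometheus client library; the metric and label names are illustrative, not necessarily the ones the PR registers.

```go
package main

import "github.com/prometheus/client_golang/prometheus"

var (
	// Total events generated and sent to the output backend.
	eventsTotal = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "apiserver_audit_event_total", // assumed name
		Help: "Counter of audit events generated and sent to the backend.",
	})
	// Events that failed to be audited, labeled per backend.
	errorsTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Name: "apiserver_audit_error_total", // assumed name
		Help: "Counter of audit events that failed to be audited.",
	}, []string{"plugin"})
	// Audit level chosen for each request.
	levelTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Name: "apiserver_audit_level_total", // assumed name
		Help: "Counter of audit levels, one per request.",
	}, []string{"level"})
)

func init() {
	prometheus.MustRegister(eventsTotal, errorsTotal, levelTotal)
}

func main() {
	eventsTotal.Inc()
	levelTotal.WithLabelValues("Metadata").Inc()
	errorsTotal.WithLabelValues("webhook").Inc() // e.g. on a backend failure
}
```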
2017-06-03 11:39:40 -07:00
Kubernetes Submit Queue
0bcd9602b4 Merge pull request #46620 from enxebre/kuberuntime-test-coverage
Automatic merge from submit-queue (batch tested with PRs 46620, 46732, 46773, 46772, 46725)

Improving test coverage for kubelet/kuberuntime.

**What this PR does / why we need it**:
Increases test coverage for kubelet/kuberuntime (https://github.com/kubernetes/kubernetes/issues/46123).

**Which issue this PR fixes**: fixes #46123

**Release note**:

```release-note
NONE
```
2017-06-03 11:39:38 -07:00
Kubernetes Submit Queue
3473b8a792 Merge pull request #45565 from Q-Lee/mds
Automatic merge from submit-queue

Adding a metadata proxy addon

**What this PR does / why we need it**: adds a metadata server proxy daemonset to hide kubelet secrets.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: this partially addresses #8867

**Special notes for your reviewer**:

**Release note**:

```release-note
The gce metadata server can be hidden behind a proxy, hiding the kubelet's token.
```
2017-06-03 08:55:32 -07:00
Kubernetes Submit Queue
36e25df059 Merge pull request #46036 from deads2k/server-25-retry
Automatic merge from submit-queue (batch tested with PRs 36721, 46483, 45500, 46724, 46036)

retry clientCA post start hook on transient failures

@smarterclayton retries the poststarthook you saw failing.

Having looked through, it seems that I didn't kill the server on the failure.
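
A minimal sketch of retrying a post-start hook on transient failures, assuming the wait.Poll helper from k8s.io/apimachinery; ensureClientCA is a hypothetical stand-in for the hook's real work.

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// ensureClientCA is a hypothetical placeholder for publishing the client CA.
func ensureClientCA() error { return nil }

func postStartHook() error {
	// Retry transient failures instead of giving up on the first one; only
	// return an error (and kill the server) once the timeout is exhausted.
	return wait.Poll(time.Second, 30*time.Second, func() (bool, error) {
		if err := ensureClientCA(); err != nil {
			fmt.Println("transient failure, retrying:", err)
			return false, nil // not done yet, poll again
		}
		return true, nil // done
	})
}

func main() {
	if err := postStartHook(); err != nil {
		panic(err) // unrecoverable: fail hard rather than run degraded
	}
}
```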
2017-06-03 08:08:44 -07:00
Kubernetes Submit Queue
7d599cc190 Merge pull request #46724 from deads2k/agg-32-startup
Automatic merge from submit-queue (batch tested with PRs 36721, 46483, 45500, 46724, 46036)

stop special casing the loopback connection for aggregator

Fixes a TODO for the aggregator loopback connection.
2017-06-03 08:08:42 -07:00
Kubernetes Submit Queue
4220b7303e Merge pull request #45500 from nbutton23/nbutton-aws-elb-security-group
Automatic merge from submit-queue (batch tested with PRs 36721, 46483, 45500, 46724, 46036)

AWS: Allow configuration of a single security group for ELBs

**What this PR does / why we need it**:
AWS has a hard limit on the number of Security Groups (500). Right now, every time an ELB is created, Kubernetes creates a new Security Group. This allows specifying a single Security Group to use for all ELBs.

**Special notes for your reviewer**:
For some reason the diff tool makes this look like far more changes than there really are.
2017-06-03 08:08:40 -07:00
Kubernetes Submit Queue
445795186d Merge pull request #46483 from shashidharatd/fed-sc-ut-delete
Automatic merge from submit-queue (batch tested with PRs 36721, 46483, 45500, 46724, 46036)

Federation: Minor corrections in service controller and add a unit testcase

**What this PR does / why we need it**:
This PR fixes a few outdated comments in the federation service controller and makes a few other minor fixes.
This also adds a unit test case to test federated service deletion.


/assign @quinton-hoole 
/cc @marun @kubernetes/sig-federation-pr-reviews 

```release-note
NONE
```
2017-06-03 08:08:38 -07:00
David Ashpole
889afa5e2d trigger aggressive container garbage collection when under disk pressure 2017-06-03 07:52:36 -07:00
Kubernetes Submit Queue
07f85565a2 Merge pull request #36721 from smarterclayton/initializers
Automatic merge from submit-queue

Add initializer support to admission and uninitialized filtering to rest storage

Initializers are the opposite of finalizers - they allow API clients to react to object creation and populate fields prior to other clients seeing them.

High level description:

1. Add `metadata.initializers` field to all objects
2. By default, filter objects with > 0 initializers from LIST and WATCH to preserve legacy client behavior (known as partially-initialized objects)
3. Add an admission controller that populates .initializer values per type, and denies mutation of initializers except by certain privilege levels (you must have the `initialize` verb on a resource)
4. Allow partially-initialized objects to be viewed via LIST and WATCH for initializer types
5. When creating objects, the object is "held" by the server until the initializers list is empty
6. Allow some creators to bypass initialization (set initializers to `[]`), or to have the result returned immediately when the object is created.

The code here should be backwards compatible for all clients because they do not see partially initialized objects unless they GET the resource directly. The watch cache makes checking for partially initialized objects cheap. Some reflectors may need to change to ask for partially-initialized objects.
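
A minimal sketch of the opt-in described in the release note below: a client explicitly asking to see uninitialized resources via the `?includeUninitialized=true` query parameter. The host and resource path are hypothetical; only the parameter comes from this change.

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	u := &url.URL{
		Scheme:   "https",
		Host:     "apiserver.example.com",           // hypothetical
		Path:     "/api/v1/namespaces/default/pods", // hypothetical resource
		RawQuery: url.Values{"includeUninitialized": {"true"}}.Encode(),
	}
	// Without the parameter, LIST and WATCH filter out partially-initialized
	// objects to preserve legacy client behavior.
	req, err := http.NewRequest(http.MethodGet, u.String(), nil)
	if err != nil {
		panic(err)
	}
	fmt.Println(req.URL.String())
}
```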

```release-note
Kubernetes resources, when the `Initializers` admission controller is enabled, can be initialized (defaulting or other additive functions) by other agents in the system prior to those resources being visible to other clients. An initialized resource is not visible to clients unless they request (for get, list, or watch) to see uninitialized resources with the `?includeUninitialized=true` query parameter. Once the initializers have completed, the resource is then visible. Clients must have the ability to perform the `initialize` action on a resource in order to modify it prior to initialization being completed.
```
2017-06-03 07:16:52 -07:00
Kubernetes Submit Queue
e6c74bbaaf Merge pull request #46221 from FengyunPan/close-file
Automatic merge from submit-queue

Close file after os.Open()

None
2017-06-03 04:42:00 -07:00
Kubernetes Submit Queue
efa8c5eb45 Merge pull request #45008 from xilabao/fix-cert-dir
Automatic merge from submit-queue

fix cert dir in kubeadm

1. fixes https://github.com/kubernetes/kubeadm/issues/232
2. use manifests as a constant
2017-06-03 03:48:26 -07:00
Kubernetes Submit Queue
5b5b8e6390 Merge pull request #44844 from dixudx/update_gophercloud
Automatic merge from submit-queue

update gophercloud to fix code format

**What this PR does / why we need it**:

mainly to include [#265](https://github.com/gophercloud/gophercloud/pull/265), which fixed the code format of the two files below:

* vendor/github.com/gophercloud/gophercloud/openstack/blockstorage/v1/apiversions/urls.go
* vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/images/results.go
2017-06-03 01:22:08 -07:00
Cao Shufeng
541935b13f fix invalid status code for hijacker
When using a hijacker to take over the connection, the HTTP status code
should be 101, not 200.

PS: use "kubectl exec" as an example to review this change.
2017-06-03 16:03:52 +08:00
Janet Kuo
85ec49c9bb Verify histories and pods in DaemonSet e2e test 2017-06-03 00:46:11 -07:00
Janet Kuo
d2cf00fcd6 Test both strategies in all daemonSet controller unit tests 2017-06-03 00:46:11 -07:00
Janet Kuo
d02f40a5e7 Implement DaemonSet history logic in controller
1. Create controllerrevisions (history) and label pods with template
   hash for both RollingUpdate and OnDelete update strategy (see the
   sketch after this list)
2. Clean up old, non-live history based on revisionHistoryLimit
3. Remove duplicate controllerrevisions (the ones with the same template)
   and relabel their pods
4. Update RBAC to allow DaemonSet controller to manage
   controllerrevisions
5. In DaemonSet controller unit tests, create new pods with hash labels
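
A hedged sketch of the hash labeling in steps 1 and 3: derive a stable hash from the serialized pod template and attach it to pods as a label. The label key and hash function here are illustrative, not the controller's exact implementation.

```go
package main

import (
	"encoding/json"
	"fmt"
	"hash/fnv"
)

const hashLabelKey = "controller-revision-hash" // assumed label key

// templateHash hashes the serialized pod template; two pods carry the same
// label value exactly when they were created from the same template.
func templateHash(template interface{}) (string, error) {
	b, err := json.Marshal(template)
	if err != nil {
		return "", err
	}
	h := fnv.New32a()
	h.Write(b)
	return fmt.Sprintf("%x", h.Sum32()), nil
}

func main() {
	hash, err := templateHash(map[string]string{"image": "busybox:1.26"}) // toy template
	if err != nil {
		panic(err)
	}
	fmt.Println(map[string]string{hashLabelKey: hash})
}
```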
2017-06-03 00:44:23 -07:00
Janet Kuo
4e6f70ff67 Autogen: run hack/update-all.sh 2017-06-03 00:43:53 -07:00
Janet Kuo
8275e8f017 Update DaemonSet API for rollback and history
1. Add revisionHistoryLimit (default 10), collisionCount, and validation code
2. Add daemonset-controller-hash label, and deprecate templateGeneration
2017-06-03 00:43:17 -07:00
Kubernetes Submit Queue
78a9e4feba Merge pull request #46375 from deads2k/auth-05-nameprotection
Automatic merge from submit-queue (batch tested with PRs 46456, 46675, 46676, 46416, 46375)

prevent illegal verb/name combinations in default policy rules

Names aren't presented with some kinds of "normal" verbs.  This prevents people from making common mistakes.

@timothysc as I noted in your pull.  This will prevent some classes of errors.
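
A hedged illustration of the rule class being validated, written with the current rbac/v1 Go types: resourceNames can meaningfully scope verbs like get or update, but verbs such as list, watch, or create never present a name, so combining them with resourceNames is the mistake being prevented.

```go
package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
)

func main() {
	// Legal: "get" presents a resource name, so the restriction applies.
	legal := rbacv1.PolicyRule{
		Verbs:         []string{"get"},
		APIGroups:     []string{""},
		Resources:     []string{"configmaps"},
		ResourceNames: []string{"cluster-info"}, // example name
	}

	// Illegal: "list" and "create" never present a name, so a resourceNames
	// restriction on them silently does nothing.
	illegal := rbacv1.PolicyRule{
		Verbs:         []string{"list", "create"},
		APIGroups:     []string{""},
		Resources:     []string{"configmaps"},
		ResourceNames: []string{"cluster-info"},
	}

	fmt.Printf("legal: %+v\nillegal: %+v\n", legal, illegal)
}
```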
2017-06-03 00:28:53 -07:00
Kubernetes Submit Queue
ead40ebdce Merge pull request #46416 from CaoShuFeng/audit_doc
Automatic merge from submit-queue (batch tested with PRs 46456, 46675, 46676, 46416, 46375)

Fix doc about Verb for advanced audit feature

This patch addresses the following comment:
https://github.com/kubernetes/kubernetes/pull/45315#discussion_r117107507
**Release note**:

```release-note
NONE
```
2017-06-03 00:28:50 -07:00
Kubernetes Submit Queue
2ff0fb7e26 Merge pull request #46676 from gyliu513/masq
Automatic merge from submit-queue (batch tested with PRs 46456, 46675, 46676, 46416, 46375)

Move tolerations to PodSpec for ip-masq-agent.yaml.

2017-06-03 00:28:48 -07:00
Kubernetes Submit Queue
8325943822 Merge pull request #46675 from gyliu513/calico
Automatic merge from submit-queue (batch tested with PRs 46456, 46675, 46676, 46416, 46375)

Move tolerations to PodSpec for calico-node.yaml.

**Release note**:

```release-note
none
```
2017-06-03 00:28:46 -07:00
Kubernetes Submit Queue
b8c9ee8abb Merge pull request #46456 from jingxu97/May/allocatable
Automatic merge from submit-queue

Add local storage (scratch space) allocatable support

This PR adds support for allocatable local storage (scratch space).
This feature only covers the root file system, which is shared by Kubernetes
components, users' containers, and/or images. Users can use the
--kube-reserved flag to reserve storage for kube system components.
If the allocatable storage for users' pods is used up, some pods will be
evicted to free the storage resource.

This feature is part of local storage capacity isolation and is described in the proposal: https://github.com/kubernetes/community/pull/306

**Release note**:

```release-note
This feature exposes local storage capacity for the primary partitions, and supports & enforces storage reservation in Node Allocatable 
```
2017-06-03 00:24:29 -07:00
Dr. Stefan Schimanski
4846c0d167 apiserver: return BadRequest 400 for invalid query params 2017-06-03 08:54:40 +02:00
Kubernetes Submit Queue
822e29dd3c Merge pull request #46524 from ajitak/npd_version
Automatic merge from submit-queue (batch tested with PRs 46239, 46627, 46346, 46388, 46524)

Configure NPD version through env variable

This lets users specify the NPD version to be installed with Kubernetes.
2017-06-02 23:37:45 -07:00
Kubernetes Submit Queue
e837c3bbc2 Merge pull request #46388 from lavalamp/whitlockjc-generic-webhook-admission
Automatic merge from submit-queue (batch tested with PRs 46239, 46627, 46346, 46388, 46524)

Dynamic webhook admission control plugin

Unit tests pass.

Needs plumbing:
* [ ] service resolver (depends on @wfender PR)
* [x] client cert (depends on ????)
* [ ] hook source (depends on @caesarxuchao PR)

Also at least one thing will need to be renamed after Chao's PR merges.

```release-note
Allow remote admission controllers to be dynamically added and removed by administrators.  External admission controllers make an HTTP POST containing details of the requested action which the service can approve or reject.
```
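
A hedged sketch of the shape of such a remote admission controller: an HTTP endpoint that receives the POSTed details of a requested action and answers allow or deny. The JSON schema below is deliberately simplified and hypothetical, not the actual AdmissionReview types.

```go
package main

import (
	"encoding/json"
	"net/http"
)

// review is a simplified, hypothetical stand-in for the posted request details.
type review struct {
	Resource  string `json:"resource"`
	Operation string `json:"operation"`
	Namespace string `json:"namespace"`
}

type decision struct {
	Allowed bool   `json:"allowed"`
	Reason  string `json:"reason,omitempty"`
}

func admit(w http.ResponseWriter, r *http.Request) {
	var rev review
	if err := json.NewDecoder(r.Body).Decode(&rev); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	d := decision{Allowed: true}
	// Example policy: reject deletes in a protected namespace.
	if rev.Operation == "DELETE" && rev.Namespace == "kube-system" {
		d = decision{Allowed: false, Reason: "protected namespace"}
	}
	json.NewEncoder(w).Encode(d)
}

func main() {
	http.HandleFunc("/admit", admit)
	http.ListenAndServe(":8443", nil) // a real deployment would serve TLS
}
```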
2017-06-02 23:37:42 -07:00
Kubernetes Submit Queue
d8374eaae4 Merge pull request #46346 from zjj2wry/ds-controller
Automatic merge from submit-queue (batch tested with PRs 46239, 46627, 46346, 46388, 46524)

add test and fix typo in daemoncontroller

**Release note**:

```release-note
NONE
```
2017-06-02 23:37:40 -07:00
Kubernetes Submit Queue
348bf1e032 Merge pull request #46627 from deads2k/api-12-labels
Automatic merge from submit-queue (batch tested with PRs 46239, 46627, 46346, 46388, 46524)

move labels to components which own the APIs

During the apimachinery split in 1.6, we accidentally moved several label APIs into apimachinery.  They don't belong there, since the individual APIs are not general machinery concerns, but instead are the concern of particular components: most commonly the kubelet.  This pull moves the labels into their owning components and out of API machinery.

@kubernetes/sig-api-machinery-misc @kubernetes/api-reviewers @kubernetes/api-approvers 
@derekwaynecarr  since most of these are related to the kubelet
2017-06-02 23:37:38 -07:00
Kubernetes Submit Queue
fcf183dcaa Merge pull request #46239 from mtanino/issue/45394
Automatic merge from submit-queue

Log out from multiple target portals when using iscsi storage plugin

**What this PR does / why we need it**:

When using iscsi storage with multiple target portal (TP) addresses
and multipathing, the volume manager logs in to the IQN for all
portal addresses, but when a pod gets destroyed the volume manager
only logs out of the primary TP, and sessions for the other TPs
always remain.

This patch adds mount points for all TPs, and then logs out of all
TPs when a pod is destroyed. If a TP is referenced by other pods,
the connection remains as usual.

**Which issue this PR fixes** 
fixes #45394

**Release note**:

```release-note
NONE
```
2017-06-02 23:27:14 -07:00
Kubernetes Submit Queue
3093936a18 Merge pull request #46551 from caesarxuchao/rule-validation
Automatic merge from submit-queue (batch tested with PRs 46726, 41912, 46695, 46034, 46551)

Fix validation of Rule.Resources
2017-06-02 21:42:43 -07:00
Kubernetes Submit Queue
047c4667fe Merge pull request #46034 from kensimon/canonical-aggregate-events
Automatic merge from submit-queue (batch tested with PRs 46726, 41912, 46695, 46034, 46551)

Event aggregation: include latest event message in aggregate event

**What this PR does / why we need it**:

This changes the event aggregation behavior so that, when multiple events are deduplicated, the aggregated event includes the message of the latest related event.

This fixes an issue where the original event expires due to TTL, and the aggregate event doesn't contain any useful message.

**Which issue this PR fixes**:

fixes #45971

```release-note
Duplicate recurring Events now include the latest event's Message string
```
2017-06-02 21:42:41 -07:00
Kubernetes Submit Queue
9baeab9dd8 Merge pull request #46695 from gyliu513/daemoncontrollertest
Automatic merge from submit-queue (batch tested with PRs 46726, 41912, 46695, 46034, 46551)

Added a new test case for daemoncontroller.

This patch adds a new test case for a DaemonSet with a node selector
matching some nodes, launching pods on all the matching nodes.
2017-06-02 21:42:39 -07:00
Kubernetes Submit Queue
24d09977fb Merge pull request #41912 from jcbsmpsn/rotate-client-certificate
Automatic merge from submit-queue (batch tested with PRs 46726, 41912, 46695, 46034, 46551)

Rotate kubelet client certificate.

Changes the kubelet so it bootstraps off the cert/key specified in the
config file and uses those to request new cert/key pairs from the
Certificate Signing Request API, as well as rotating client certificates
when they approach expiration.

Default behavior is for client certificate rotation to be disabled. If enabled
using a command line flag, the kubelet exits each time the certificate is
rotated. I tried to use `GetCertificate` in [tls.Config](https://golang.org/pkg/crypto/tls/#Config) but it is only called
on the server side of connections. Then I tried `GetClientCertificate`,
but it is new in 1.8.
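
A minimal sketch of the rotation trigger: consider the certificate due for renewal once only a given fraction of its lifetime remains. The threshold and structure are illustrative, not the kubelet's actual certificate manager.

```go
package main

import (
	"crypto/x509"
	"fmt"
	"time"
)

// shouldRotate reports whether less than `fraction` of the certificate's
// total lifetime remains at time `now`.
func shouldRotate(cert *x509.Certificate, fraction float64, now time.Time) bool {
	total := cert.NotAfter.Sub(cert.NotBefore)
	remaining := cert.NotAfter.Sub(now)
	return remaining.Seconds() < total.Seconds()*fraction
}

func main() {
	cert := &x509.Certificate{ // stand-in for the current client certificate
		NotBefore: time.Now().Add(-20 * time.Hour),
		NotAfter:  time.Now().Add(4 * time.Hour),
	}
	// 4h of a 24h lifetime remain (< 20%), so rotation should begin.
	if shouldRotate(cert, 0.2, time.Now()) {
		fmt.Println("rotating: submit a new CertificateSigningRequest")
	} else {
		fmt.Println("certificate still fresh")
	}
}
```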

**Release note**
```release-note
With --feature-gates=RotateKubeletClientCertificate=true set, the kubelet will
request a client certificate from the API server during the boot cycle and pause
waiting for the request to be satisfied. It will continually refresh the certificate
as the certificate's expiration approaches.
```
2017-06-02 21:42:37 -07:00
Kubernetes Submit Queue
aab12f217e Merge pull request #46726 from deads2k/crd-09-proto
Automatic merge from submit-queue

add protobuf for CRD

Adds protobuf encoding to CRD and simplifies loopback initialization.

xref: https://github.com/kubernetes/features/issues/95
2017-06-02 21:34:54 -07:00
Kubernetes Submit Queue
ea5183262a Merge pull request #45331 from k82cn/k8s_39559_node_cache
Automatic merge from submit-queue

Added unit test for node operation in schedulercache.

The code coverage is 62.4% (I did not add cases for get/set, or for util.go, which is used by algorithms).

[combined-coverage.html.gz](https://github.com/kubernetes/kubernetes/files/975427/combined-coverage.html.gz)
2017-06-02 20:42:19 -07:00
Kubernetes Submit Queue
85e43bada9 Merge pull request #46721 from mikedanese/fooloo
Automatic merge from submit-queue (batch tested with PRs 41563, 45251, 46265, 46462, 46721)

change kubemark image project to match new cos image project

The old project is not available anymore.

https://github.com/kubernetes/kubernetes/pull/45136
2017-06-02 19:53:44 -07:00
Kubernetes Submit Queue
0d4fda7746 Merge pull request #46462 from vmware/vsphere-storage-metrics
Automatic merge from submit-queue (batch tested with PRs 41563, 45251, 46265, 46462, 46721)

Add metric collections for vSphere cloud provider operations

**What this PR does / why we need it**:
This PR adds metric collections for vSphere Cloud Provider Operations.

**Special notes for your reviewer**:
Verified that the Prometheus pod is able to scrape vSphere metrics from the Kubernetes controller's URL.

The `providers/vsphere/vsphere.go` file is intentionally left unformatted with gofmt, to keep the diff reviewable.

After review is complete, I will apply the formatting. 
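
For reference, a hedged sketch of the metric pattern visible in the scrape output below: a latency histogram labeled by operation name. The metric name mirrors the output; the registration and call sites in the PR may differ.

```go
package main

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

var opDuration = prometheus.NewHistogramVec(prometheus.HistogramOpts{
	Name: "cloudprovider_vsphere_operation_duration_seconds",
	Help: "Latency of vsphere operation call",
}, []string{"operation"})

func init() {
	prometheus.MustRegister(opDuration)
}

// recordOp observes the elapsed time of one operation under its label.
func recordOp(operation string, start time.Time) {
	opDuration.WithLabelValues(operation).Observe(time.Since(start).Seconds())
}

func main() {
	start := time.Now()
	// ... perform the AttachVolume call against vCenter here ...
	recordOp("AttachVolumeOperation", start)
}
```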

**Release note**:

```release-note
None
```

@BaluDontu @tusharnt

@gnufied Verified by executing various operations on a Kubernetes cluster deployed with this change.

```
$ curl -s 10.160.18.128:10252/metrics | grep "cloudprovider_vsphere"
# HELP cloudprovider_vsphere_api_request_duration_seconds Latency of vsphere api call
# TYPE cloudprovider_vsphere_api_request_duration_seconds histogram
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="0.005"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="0.01"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="0.025"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="0.05"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="0.1"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="0.25"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="0.5"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="1"} 1
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="2.5"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="5"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="10"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="AttachVolume",le="+Inf"} 3
cloudprovider_vsphere_api_request_duration_seconds_sum{request="AttachVolume"} 3.9742241939999996
cloudprovider_vsphere_api_request_duration_seconds_count{request="AttachVolume"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="0.005"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="0.01"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="0.025"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="0.05"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="0.1"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="0.25"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="0.5"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="1"} 1
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="2.5"} 1
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="5"} 1
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="10"} 1
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="CreateVolume",le="+Inf"} 1
cloudprovider_vsphere_api_request_duration_seconds_sum{request="CreateVolume"} 0.920856776
cloudprovider_vsphere_api_request_duration_seconds_count{request="CreateVolume"} 1
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="0.005"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="0.01"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="0.025"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="0.05"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="0.1"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="0.25"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="0.5"} 2
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="1"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="2.5"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="5"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="10"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DeleteVolume",le="+Inf"} 3
cloudprovider_vsphere_api_request_duration_seconds_sum{request="DeleteVolume"} 1.3301585450000002
cloudprovider_vsphere_api_request_duration_seconds_count{request="DeleteVolume"} 3
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="0.005"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="0.01"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="0.025"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="0.05"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="0.1"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="0.25"} 0
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="0.5"} 1
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="1"} 4
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="2.5"} 6
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="5"} 6
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="10"} 6
cloudprovider_vsphere_api_request_duration_seconds_bucket{request="DetachVolume",le="+Inf"} 6
cloudprovider_vsphere_api_request_duration_seconds_sum{request="DetachVolume"} 5.350829375
cloudprovider_vsphere_api_request_duration_seconds_count{request="DetachVolume"} 6
# HELP cloudprovider_vsphere_api_request_errors vsphere Api errors
# TYPE cloudprovider_vsphere_api_request_errors counter
cloudprovider_vsphere_api_request_errors{request="DeleteVolume"} 4
# HELP cloudprovider_vsphere_operation_duration_seconds Latency of vsphere operation call
# TYPE cloudprovider_vsphere_operation_duration_seconds histogram
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="0.005"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="0.01"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="0.025"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="0.05"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="0.1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="0.25"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="0.5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="1"} 1
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="2.5"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="5"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="10"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="AttachVolumeOperation",le="+Inf"} 3
cloudprovider_vsphere_operation_duration_seconds_sum{operation="AttachVolumeOperation"} 4.732579923
cloudprovider_vsphere_operation_duration_seconds_count{operation="AttachVolumeOperation"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="0.005"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="0.01"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="0.025"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="0.05"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="0.1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="0.25"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="0.5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="2.5"} 1
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="5"} 1
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="10"} 1
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeOperation",le="+Inf"} 1
cloudprovider_vsphere_operation_duration_seconds_sum{operation="CreateVolumeOperation"} 1.2753096990000001
cloudprovider_vsphere_operation_duration_seconds_count{operation="CreateVolumeOperation"} 1
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="0.005"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="0.01"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="0.025"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="0.05"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="0.1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="0.25"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="0.5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="2.5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="10"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithPolicyOperation",le="+Inf"} 1
cloudprovider_vsphere_operation_duration_seconds_sum{operation="CreateVolumeWithPolicyOperation"} 15.066558008
cloudprovider_vsphere_operation_duration_seconds_count{operation="CreateVolumeWithPolicyOperation"} 1
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="0.005"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="0.01"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="0.025"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="0.05"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="0.1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="0.25"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="0.5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="2.5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="10"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="CreateVolumeWithRawVSANPolicyOperation",le="+Inf"} 2
cloudprovider_vsphere_operation_duration_seconds_sum{operation="CreateVolumeWithRawVSANPolicyOperation"} 21.805354686
cloudprovider_vsphere_operation_duration_seconds_count{operation="CreateVolumeWithRawVSANPolicyOperation"} 2
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="0.005"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="0.01"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="0.025"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="0.05"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="0.1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="0.25"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="0.5"} 1
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="1"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="2.5"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="5"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="10"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DeleteVolumeOperation",le="+Inf"} 3
cloudprovider_vsphere_operation_duration_seconds_sum{operation="DeleteVolumeOperation"} 1.4869503179999999
cloudprovider_vsphere_operation_duration_seconds_count{operation="DeleteVolumeOperation"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="0.005"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="0.01"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="0.025"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="0.05"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="0.1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="0.25"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="0.5"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="1"} 2
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="2.5"} 6
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="5"} 6
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="10"} 6
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DetachVolumeOperation",le="+Inf"} 6
cloudprovider_vsphere_operation_duration_seconds_sum{operation="DetachVolumeOperation"} 7.15601343
cloudprovider_vsphere_operation_duration_seconds_count{operation="DetachVolumeOperation"} 6
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="0.005"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="0.01"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="0.025"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="0.05"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="0.1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="0.25"} 1
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="0.5"} 2
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="1"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="2.5"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="5"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="10"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DiskIsAttachedOperation",le="+Inf"} 3
cloudprovider_vsphere_operation_duration_seconds_sum{operation="DiskIsAttachedOperation"} 1.0603705730000001
cloudprovider_vsphere_operation_duration_seconds_count{operation="DiskIsAttachedOperation"} 3
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="0.005"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="0.01"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="0.025"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="0.05"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="0.1"} 0
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="0.25"} 4
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="0.5"} 12
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="1"} 12
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="2.5"} 12
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="5"} 12
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="10"} 12
cloudprovider_vsphere_operation_duration_seconds_bucket{operation="DisksAreAttachedOperation",le="+Inf"} 12
cloudprovider_vsphere_operation_duration_seconds_sum{operation="DisksAreAttachedOperation"} 3.282661207
cloudprovider_vsphere_operation_duration_seconds_count{operation="DisksAreAttachedOperation"} 12
# HELP cloudprovider_vsphere_operation_errors vsphere operation errors
# TYPE cloudprovider_vsphere_operation_errors counter
cloudprovider_vsphere_operation_errors{operation="DeleteVolumeOperation"} 4
```
2017-06-02 19:53:42 -07:00
Kubernetes Submit Queue
2629bf79f2 Merge pull request #46265 from waseem/printers-genericity
Automatic merge from submit-queue (batch tested with PRs 41563, 45251, 46265, 46462, 46721)

Denote if a printer is generic.

This fixes #38779.

This allows us to avoid the case in which printers.GetStandardPrinter
returns nil for both the printer and err, removing potential panics that
may arise throughout kubectl commands.

Please see #38779 and #38112 for complete context.
2017-06-02 19:53:40 -07:00
Kubernetes Submit Queue
284132ee88 Merge pull request #45251 from gyliu513/taint-typo
Automatic merge from submit-queue (batch tested with PRs 41563, 45251, 46265, 46462, 46721)

Toleration should be `notReady:NoExecute` in defaulttolerationseconds test.
2017-06-02 19:53:38 -07:00
Kubernetes Submit Queue
b68b4aeb20 Merge pull request #41563 from gyliu513/kubelet-util
Automatic merge from submit-queue

Improved code coverage for pkg/kubelet/util.

The test coverage for pkg/kubelet/util.go increased from 45.1%
to 84.3%.
2017-06-02 19:41:57 -07:00
Clayton Coleman
b993f7d303 generated: client-go staging 2017-06-02 22:09:05 -04:00
Clayton Coleman
536a1bcd3b Allow initialization when no authorizer present
Running without an authorizer is a valid configuration.
2017-06-02 22:09:04 -04:00
Clayton Coleman
4ce3907639 Add Initializers to all admission control paths by default 2017-06-02 22:09:04 -04:00
Clayton Coleman
2568a92119 Grow signature for predicate attributes to include init status 2017-06-02 22:09:04 -04:00
Clayton Coleman
331eea67d8 Allow initialization of resources
Add support for creating resources that are not immediately visible to
naive clients, but must first be initialized by one or more privileged
cluster agents. These controllers can mark the object as initialized,
allowing others to see them.

Permission to override initialization defaults or modify an initializing
object is limited per resource to a virtual subresource "RESOURCE/initialize"
via RBAC.

Initialization is currently alpha.
2017-06-02 22:09:03 -04:00
Kubernetes Submit Queue
abe63a1890 Merge pull request #46649 from monopole/addToOwnersForBuildFiles
Automatic merge from submit-queue

Add jregan to OWNERS for kubectl isolation work.

The kubectl decoupling project (#598) requires many BUILD edits.

Even relatively simple PR's involve many OWNER files, e.g. #46317 involves five.

We plan to script-generate some PRs, and those may involve _hundreds_ of BUILD files.

This project will take many PRs, and collecting all approvals for each will be very time-consuming.

**Release note**:
```release-note
NONE
```
2017-06-02 18:54:44 -07:00