Commit Graph

38987 Commits

Author SHA1 Message Date
Bowei Du
9478c4b01f Add dnsmasq-metrics to the standard DNS pod
- Enables Prometheus metrics on kube-dns (see the sketch below)
- Explicitly sets v=0 logging for now
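A rough sketch of how such metrics are usually exposed for scraping; the annotation keys follow the common Prometheus convention, and the port value is an assumption, not taken from this commit:

```
# Metadata fragment only (not a full pod spec); annotation convention and port are assumptions.
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/scrape: "true"   # common convention telling Prometheus to scrape this pod
    prometheus.io/port: "10054"    # assumed port where dnsmasq-metrics serves /metrics
```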
2016-11-10 00:08:14 -08:00
Kubernetes Submit Queue
a330acddee Merge pull request #36358 from Crassirostris/use-new-fluentd-gcp-config
Automatic merge from submit-queue

Use new fluentd-gcp image version

In #35618 we switched to a new version of the fluentd agent, which includes a new version of jemalloc, allowing us to use it.

Additionally, we came up with a hacky way to encourage the Ruby GC to run more often, by using the RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR variable.
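For context, a sketch of how such a variable would be set on the fluentd container; the value and image tag below are placeholders, not the ones used in the PR:

```
# Pod spec fragment only; the value and image tag are illustrative assumptions.
containers:
  - name: fluentd-gcp
    image: gcr.io/google_containers/fluentd-gcp:1.21   # placeholder tag
    env:
      - name: RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR
        value: "0.9"   # a lower factor makes the old-object heap limit trigger major GC sooner
```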

@piosz
2016-11-09 21:50:53 -08:00
Kubernetes Submit Queue
6fcf8e415c Merge pull request #34584 from ymqytw/support_force_apply
Automatic merge from submit-queue

support kubectl apply --force

Support `kubectl apply --force`, which first deletes the resource and then re-applies it when the patch fails.

Fixes: #16569
2016-11-09 21:14:25 -08:00
Kubernetes Submit Queue
526746288a Merge pull request #33080 from pweil-/psp-authorizer
Automatic merge from submit-queue

Add authz to psp admission

Add authz integration to PSP admission to enable granting access to use specific PSPs on a per-user and per-service account basis.  This allows an administrator to use multiple policies in a cluster that grant different levels of access for different types of users.
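For illustration, granting a user or service account access to a specific policy is done with an RBAC rule using the `use` verb on that policy; a minimal sketch, assuming the PSP resource sits in the `extensions` API group as in this release (names are placeholders):

```
# Sketch only: names and the API group are assumptions for illustration.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: psp-restricted-user
rules:
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["restricted"]   # the specific policy being granted
    verbs: ["use"]
```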

Builds on https://github.com/kubernetes/kubernetes/pull/32555.  Second commit adds authz check to matching policy function in psp admission.

@deads2k @sttts @timstclair
2016-11-09 20:39:31 -08:00
Kubernetes Submit Queue
0f082c6663 Merge pull request #36280 from rkouj/better-mount-error
Automatic merge from submit-queue

Better messaging for missing volume binaries on host

**What this PR does / why we need it**:
When mount binaries are not present on a host, the error returned is a generic one.
This change checks for the mount binaries before attempting the mount and returns a user-friendly error message.

This change is specific to GCI, and the flag is experimental for now.

https://github.com/kubernetes/kubernetes/issues/36098

**Release note**:
Introduces a flag `check-node-capabilities-before-mount` which, if set, enables a check (`CanMount()`) prior to mount operations to verify that the required components (binaries, etc.) to mount the volume are available on the underlying node. If the check is enabled and `CanMount()` returns an error, the mount operation fails. Implements the `CanMount()` check for NFS.

Sample output post change:

rkouj@rkouj0:~/go/src/k8s.io/kubernetes$ kubectl describe pods
Name:		sleepyrc-fzhyl
Namespace:	default
Node:		e2e-test-rkouj-minion-group-oxxa/10.240.0.3
Start Time:	Mon, 07 Nov 2016 21:28:36 -0800
Labels:		name=sleepy
Status:		Pending
IP:		
Controllers:	ReplicationController/sleepyrc
Containers:
  sleepycontainer1:
    Container ID:	
    Image:		gcr.io/google_containers/busybox
    Image ID:		
    Port:		
    Command:
      sleep
      6000
    QoS Tier:
      cpu:	Burstable
      memory:	BestEffort
    Requests:
      cpu:		100m
    State:		Waiting
      Reason:		ContainerCreating
    Ready:		False
    Restart Count:	0
    Environment Variables:
Conditions:
  Type		Status
  Initialized 	True 
  Ready 	False 
  PodScheduled 	True 
Volumes:
  data:
    Type:	NFS (an NFS mount that lasts the lifetime of a pod)
    Server:	127.0.0.1
    Path:	/export
    ReadOnly:	false
  default-token-d13tj:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-d13tj
Events:
  FirstSeen	LastSeen	Count	From						SubobjectPath	Type		Reason		Message
  ---------	--------	-----	----						-------------	--------	------		-------
  7s		7s		1	{default-scheduler }						Normal		Scheduled	Successfully assigned sleepyrc-fzhyl to e2e-test-rkouj-minion-group-oxxa
  6s		3s		4	{kubelet e2e-test-rkouj-minion-group-oxxa}			Warning		FailedMount	Unable to mount volume kubernetes.io/nfs/32c7ef16-a574-11e6-813d-42010af00002-data (spec.Name: data) on pod sleepyrc-fzhyl (UID: 32c7ef16-a574-11e6-813d-42010af00002). Verify that your node machine has the required components before attempting to mount this volume type. Required binary /sbin/mount.nfs is missing
2016-11-09 18:51:00 -08:00
Kubernetes Submit Queue
de2bec7691 Merge pull request #36550 from yujuhong/kern_timestamps
Automatic merge from submit-queue

Get kernel logs with timestamps
2016-11-09 18:13:06 -08:00
Kubernetes Submit Queue
6a8edf72e1 Merge pull request #35957 from jsafrane/implement-external-provisioner
Automatic merge from submit-queue

Implement external provisioning proposal

In other words, add a "provisioned-by" annotation to all PVCs that should be provisioned dynamically.
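A rough sketch of the effect on a claim selected for dynamic provisioning; the provisioner annotation key shown here is an assumption, not taken from this PR:

```
# Sketch only: the storage-provisioner annotation key is an assumption.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: slow
    volume.beta.kubernetes.io/storage-provisioner: example.com/external-nfs   # set for the external provisioner
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
```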

Most of the changes are actually in tests.

@kubernetes/sig-storage
2016-11-09 18:12:56 -08:00
Kubernetes Submit Queue
b392910bc7 Merge pull request #36505 from Crassirostris/kibana-image-fix
Automatic merge from submit-queue

Fix startup script bug in kibana image

Big thanks to @lhopki01 for noticing this!

As mentioned in the discussion in https://github.com/kubernetes/kubernetes/pull/36103, the current image crashes when we don't want to work behind a proxy, because of string interpolation in bash.

@piosz
2016-11-09 17:33:58 -08:00
Kubernetes Submit Queue
9922489abc Merge pull request #36384 from Crassirostris/fluentd-es-rescheduler-config
Automatic merge from submit-queue

Add rescheduler logs to the fluentd-elasticsearch configuration

Same as https://github.com/kubernetes/kubernetes/pull/36359, but for the Elasticsearch plugin.

@piosz
2016-11-09 17:33:50 -08:00
Kubernetes Submit Queue
5d894d5164 Merge pull request #36495 from mwielgus/kubectl_pdb
Automatic merge from submit-queue

Support for PodDisruptionBudget in Kubectl

Based on:

https://github.com/kubernetes/kubernetes/pull/35287

cc: @davidopp @soltysh @wojtek-t
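For reference, a minimal PodDisruptionBudget that the new printer/describer would render looks roughly like this (a sketch; the name, selector, and value are placeholders):

```
# Minimal PodDisruptionBudget sketch; name, selector, and value are placeholders.
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: example-pdb
spec:
  minAvailable: 2            # at least 2 matching pods must stay up during voluntary disruptions
  selector:
    matchLabels:
      app: example
```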
2016-11-09 17:33:41 -08:00
Yu-Ju Hong
fac2aeb416 Get kernel logs with timestamps
Without the timestamps, the log is not very useful.
2016-11-09 17:23:33 -08:00
Kubernetes Submit Queue
7bb031da3a Merge pull request #30237 from mikedanese/csr-porcelain
Automatic merge from submit-queue

implement kubectl porcelain csr commands

cc @gtank

ref #30163
2016-11-09 16:57:49 -08:00
Kubernetes Submit Queue
986839e9fb Merge pull request #35886 from MrHohn/addon-manager-token
Automatic merge from submit-queue

Fixes token_found bug in addon manager

From #35832.

The above PR exposed the addon manager's logs on Jenkins; the error below was found in the GCE e2e test artifacts:
```
Error from server: serviceaccounts "default" not found
error executing template "{{with index .secrets 0}}{{.name}}{{end}}": template: output:1:7: executing "output" at <index .secrets 0>: error calling index: index of untyped nil
== default service account in the kube-system namespace has token Error executing template: template: output:1:7: executing "output" at <index .secrets 0>: error calling index: index of untyped nil. Printing more information for debugging the template:
	template was:
		{{with index .secrets 0}}{{.name}}{{end}}
	raw data was:
		{"kind":"ServiceAccount","apiVersion":"v1","metadata":{"name":"default","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/serviceaccounts/default","uid":"de3f2f85-9d6a-11e6-9df3-42010af00002","resourceVersion":"48","creationTimestamp":"2016-10-29T00:01:40Z"}}
	object given to template engine was:
		map[apiVersion:v1 metadata:map[selfLink:/api/v1/namespaces/kube-system/serviceaccounts/default uid:de3f2f85-9d6a-11e6-9df3-42010af00002 resourceVersion:48 creationTimestamp:2016-10-29T00:01:40Z name:default namespace:kube-system] kind:ServiceAccount] ==
```

It seems the script failed to retrieve the service token on the first attempt and mistakenly used the error message as the token content. Fixed by replacing `|| true` with an if condition.
2016-11-09 15:55:02 -08:00
Rajat Ramesh Koujalagi
d81e216fc6 Better messaging for missing volume components on host to perform mount 2016-11-09 15:16:11 -08:00
Kubernetes Submit Queue
6ea9ff68c8 Merge pull request #36155 from deads2k/rbac-20-node-role
Automatic merge from submit-queue

add nodes role to RBAC bootstrap policy

Add a nodes role.  
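For illustration only, binding a node role to the nodes group might look like the sketch below; the role and group names are assumptions, and the field layout follows the stable RBAC API rather than the v1alpha1 version used in this release:

```
# Illustrative sketch, not the actual bootstrap policy added by this PR.
apiVersion: rbac.authorization.k8s.io/v1   # later stable API shown for readability
kind: ClusterRoleBinding
metadata:
  name: system:node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node          # assumed role name
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:nodes       # group that node credentials belong to
```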

@sttts @pweil-
2016-11-09 14:10:20 -08:00
Jess Frazelle
3bd8704489 Merge pull request #36536 from nikhiljindal/disableTest
Disabling flaky federation unit tests
2016-11-09 16:07:49 -05:00
nikhiljindal
6b5375b32c Disabling flaky unit tests 2016-11-09 12:22:36 -08:00
Kubernetes Submit Queue
8b5264e095 Merge pull request #36483 from nikhiljindal/fedE2e
Automatic merge from submit-queue

Fixing script to bring up federation control plane

Fixes https://github.com/kubernetes/kubernetes/issues/36287

Adding a wait to check if load balancer status is set before checking the ingress field.

cc @kubernetes/sig-cluster-federation
2016-11-09 12:14:10 -08:00
Kubernetes Submit Queue
06fa13efd1 Merge pull request #36455 from dims/fix-issue-36454
Automatic merge from submit-queue

Fix build break

Problem introduced in #31996

Fixes #36454
2016-11-09 10:41:54 -08:00
Kubernetes Submit Queue
5d4d596667 Merge pull request #36438 from mwielgus/pdb-generation
Automatic merge from submit-queue

Use generation in pod disruption budget

Fixes #35324

Previously it was possible to use allowedDisruptions calculated for the previous spec with the current spec. With the generation check, the API server always makes sure that allowedDisruptions were calculated for the current spec.

At the same time I set the registry policy to only accept updates if the version based on which the update was made matches the current version in etcd. That ensures that parallel eviction executions don't use the same allowed disruption.
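Conceptually, the API server compares the object's generation with the generation the status was computed for; a hedged sketch of what that looks like on a PDB (the status field names follow the usual observedGeneration pattern and are assumptions here):

```
# Object fragment, illustration only; status field names are assumptions.
metadata:
  generation: 4              # bumped by the API server on every spec change
spec:
  minAvailable: 2
status:
  observedGeneration: 4      # generation the controller last evaluated
  disruptionsAllowed: 1      # only trusted when observedGeneration == metadata.generation
```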

cc: @davidopp @kargakis @wojtek-t
2016-11-09 10:02:29 -08:00
nikhiljindal
a519506c35 Fixing scripts to bring up federation control plane 2016-11-09 09:47:24 -08:00
Kubernetes Submit Queue
916f526811 Merge pull request #36435 from wojtek-t/fix_max_inflight_requests
Automatic merge from submit-queue

Increase max-requests-inflight in large clusters

Fix #35402
2016-11-09 09:27:02 -08:00
Kubernetes Submit Queue
b6461c9536 Merge pull request #36504 from wojtek-t/increase_initialization-timeout
Automatic merge from submit-queue

Increase initialization timeout for podStore

Fix #33839 - see https://github.com/kubernetes/kubernetes/issues/33839#issuecomment-259442714 for more details.
2016-11-09 08:50:08 -08:00
Kubernetes Submit Queue
2a674307f5 Merge pull request #36430 from kargakis/fix-deployment-progress-estimation
Automatic merge from submit-queue

Use the correct time field to estimate progress in deployments

Fixes https://github.com/kubernetes/kubernetes/issues/36427

@kubernetes/deployment
2016-11-09 08:49:53 -08:00
Kubernetes Submit Queue
658d010633 Merge pull request #34539 from jszczepkowski/ha-e2e-zones
Automatic merge from submit-queue

Added e2e test for HA master replicas in different zones.
2016-11-09 08:09:28 -08:00
Mik Vyatskov
94eeca8d2c Fixed startup script bug in kibana image 2016-11-09 16:35:34 +01:00
Wojciech Tyczynski
3ed6ea96c0 Increase initialization timeout for podStore 2016-11-09 16:32:58 +01:00
Kubernetes Submit Queue
b504d4d991 Merge pull request #36445 from soltysh/update_owners
Automatic merge from submit-queue

Updated test_owners.csv

@kargakis you asked for it
2016-11-09 07:32:32 -08:00
Kubernetes Submit Queue
5464d42a36 Merge pull request #36491 from kargakis/panic-deployment-controller
Automatic merge from submit-queue

controller: fix panic in deployments

Fixes https://github.com/kubernetes/kubernetes/issues/36488

@kubernetes/deployment
2016-11-09 06:12:29 -08:00
Marcin
5e83188327 Autogenerated bazel 2016-11-09 13:41:18 +01:00
Marcin
9cee1456a6 Describe and get support for the updated api + tests 2016-11-09 13:39:16 +01:00
Kubernetes Submit Queue
052b31a989 Merge pull request #36241 from kargakis/validate-pds-against-mrs
Automatic merge from submit-queue

extensions: invalidate progress deadline less than minreadyseconds
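In other words, `progressDeadlineSeconds` must be greater than `minReadySeconds`; a sketch of a valid combination (values are placeholders):

```
# Deployment spec fragment, illustration only; values are placeholders.
spec:
  minReadySeconds: 10
  progressDeadlineSeconds: 600   # must exceed minReadySeconds, otherwise validation now rejects the object
```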

@kubernetes/deployment ptal
2016-11-09 04:28:11 -08:00
Kubernetes Submit Queue
fb75f8d3d6 Merge pull request #36329 from derekwaynecarr/replenishment_informers
Automatic merge from submit-queue

Use available informers in quota replenishment

More iteration on the goal of using informers where available in the quota system. This time adding persistent volume claims, so that the same informer is used here and in https://github.com/kubernetes/kubernetes/pull/36316
2016-11-09 03:49:15 -08:00
Michail Kargakis
9fe910dac5 controller: fix panic in deployments 2016-11-09 12:25:23 +01:00
Kubernetes Submit Queue
c41c603baa Merge pull request #36471 from Random-Liu/fix-flag-description
Automatic merge from submit-queue

Kubelet: Fix the description of MaxContainers kubelet flag.

Found this during code review.

The default values have been changed to `-1` and `1`. See 82c488bd6e/pkg/apis/componentconfig/v1alpha1/defaults.go (L279-L285)
@yujuhong 

/cc @saad-ali This PR fixes an incorrect doc.
2016-11-09 03:13:51 -08:00
Kubernetes Submit Queue
9b1d99ca46 Merge pull request #36375 from dims/fix-issue-36350
Automatic merge from submit-queue

Fix default Seccomp profile directory

Looks like some of the refactoring caused us to lose the default
directory. Setting that explicitly here.

Fixes #36350
2016-11-09 03:13:42 -08:00
Kubernetes Submit Queue
6515e3573e Merge pull request #34818 from nebril/eviction-test-cleanup
Automatic merge from submit-queue

Cleanup kubelet eviction manager tests

It cleans up the kubelet eviction manager tests.

Parts of tests that were similar to each other were extracted into helper functions.
2016-11-09 02:36:46 -08:00
Maciej Szulik
5f8e76b479 Add resource printer and describer for PodDisruptionBudget 2016-11-09 11:33:51 +01:00
Kubernetes Submit Queue
73e497fb44 Merge pull request #35437 from markturansky/loosen_pvc_limit_range_validation
Automatic merge from submit-queue

Loosened validation on PVC LimitRanger

This PR loosens validation on the PVC LimitRanger so that either Min or Max is required, but not both.
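For example, a PVC LimitRange that sets only a maximum now passes validation; a minimal sketch (name and value are placeholders):

```
# Minimal sketch; name and value are placeholders.
apiVersion: v1
kind: LimitRange
metadata:
  name: pvc-limit
spec:
  limits:
    - type: PersistentVolumeClaim
      max:
        storage: 10Gi        # no matching min is required any more
```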

Per @derekwaynecarr  https://github.com/openshift/origin/pull/11396#discussion_r84533061
2016-11-09 02:01:52 -08:00
Kubernetes Submit Queue
c52efa570d Merge pull request #36079 from apprenda/windows_kube_proxy
Automatic merge from submit-queue

Add Windows support to kube-proxy

**What this PR does / why we need it**:
This is the first stab at supporting kube-proxy (userspace mode) on Windows

**Which issue this PR fixes**:
fixes #30278

**Special notes for your reviewer**:
The MVP uses `netsh portproxy` to redirect traffic from `ServiceIP:ServicePort` to `LocalIP:LocalPort`.
For the next version we are expecting to have guidance from Microsoft Container Networking team.

**Limitations**:
The current implementation does not support DNS queries over UDP, as `netsh portproxy` currently supports only TCP. We are working with Microsoft to remediate this.

cc: @brendandburns @dcbw 

**Release note**:
```release-note
```
2016-11-09 01:26:27 -08:00
Kubernetes Submit Queue
87584919e5 Merge pull request #35912 from liggitt/test-deep-copy
Automatic merge from submit-queue

Add tests for deepcopy of structs

Until https://github.com/kubernetes/kubernetes/pull/35728 merges, we want to at least fuzz/test that deepcopy isn't shallow-copying problematic fields
2016-11-09 00:48:29 -08:00
Kubernetes Submit Queue
00458a12a8 Merge pull request #36442 from wojtek-t/extend_lb_timeout
Automatic merge from submit-queue

Extend test timeout for LB creation in large clusters

This will most probably be necessary to test 3000-node clusters.
2016-11-09 00:10:35 -08:00
Kubernetes Submit Queue
b3e4083f49 Merge pull request #36133 from luomiao/photon-support-PR-v2
Automatic merge from submit-queue

Support persistent volume usage for kubernetes running on Photon Controller platform

**What this PR does / why we need it:**
Enables persistent volume usage for Kubernetes running on the Photon platform.
Photon Controller: https://vmware.github.io/photon-controller/

_Only the first commit includes the real code change.
The following commits are for third-party vendor dependencies and auto-generated code/docs updates._

Two components are added:
- pkg/cloudprovider/providers/photon: supports Photon Controller as a cloud provider
- pkg/volume/photon_pd: supports Photon persistent disk as a volume source for persistent volumes

Usage introduction:
a. Photon Controller is supported as a cloud provider.
When choosing Photon Controller as the cloud provider, "--cloud-provider=photon --cloud-config=[path_to_config_file]" is required for kubelet/kube-controller-manager/kube-apiserver. The Photon Controller config file should follow this format:

```
[Global]
target = http://[photon_controller_endpoint_IP]
ignoreCertificate = true
tenant = [tenant_name]
project = [project_name]
overrideIP = true
```

b. Photon persistent disk is supported as a volume source/persistent volume source.
YAML usage:

```
volumes:
  - name: photon-storage-1
    photonPersistentDisk:
        pdID: "643ed4e2-3fcc-482b-96d0-12ff6cab2a69"
```
pdID is the persistent disk ID from Photon Controller.

c. Photon Controller can be enabled as a volume provisioner.
YAML usage:

```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: gold_sc
provisioner: kubernetes.io/photon-pd
parameters:
  flavor: persistent-disk-gold
```

The flavor "persistent-disk-gold" needs to be created by Photon platform admin before hand.
2016-11-09 00:10:22 -08:00
Kubernetes Submit Queue
4d3fd065d1 Merge pull request #36478 from yujuhong/rm_mounter_flags
Automatic merge from submit-queue

Remove mounter flags from cri test configs

The flags are no longer needed.
2016-11-08 23:26:57 -08:00
Kubernetes Submit Queue
9b89161a1a Merge pull request #36246 from kad/integration-tests-fix
Automatic merge from submit-queue

build-image fix for running integration-tests locally

**What this PR does / why we need it**: fix for running integration tests locally.

**Which issue this PR fixes**: fixes #

**Special notes for your reviewer**:
Currently, if a user tries to run
```console
$ make release
```
or 
```console
$ build-tools/run.sh make test-integration
```
output:
```console
+++ [1104 12:29:13] Stopping any currently running rsyncd container
+++ [1104 12:29:14] Running build command...
+++ [1104 12:29:26] Checking etcd is on PATH
/usr/local/bin/etcd
+++ [1104 12:29:26] Starting etcd instance
etcd --advertise-client-urls http://127.0.0.1:2379 --data-dir /tmp.k8s/tmp.FVPV9pGEWB --listen-client-urls http://127.0.0.1:2379 --debug > "/dev/null" 2>/dev/null
Waiting for etcd to come up.
+++ [1104 12:29:26] On try 1, etcd: : http://127.0.0.1:2379
{"action":"set","node":{"key":"/_test","value":"","modifiedIndex":4,"createdIndex":4}}
+++ [1104 12:29:26] Running integration test cases
Running tests for APIVersion: v1,apps/v1beta1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1beta1,autoscaling/v1,batch/v1,batch/v2alpha1,certificates.k8s.io/v1alpha1,extensions/v1beta
1,imagepolicy.k8s.io/v1alpha1,policy/v1beta1,rbac.authorization.k8s.io/v1alpha1,storage.k8s.io/v1beta1
+++ [1104 12:29:30] Running tests without code coverage
ok      k8s.io/kubernetes/test/integration/auth 9.670s
ok      k8s.io/kubernetes/test/integration/client       11.181s
ok      k8s.io/kubernetes/test/integration/configmap    0.780s
setting up a handler for /apis
setting up a handler for /api
Server running on port 9090
Error creating cert: mkdir /var/run/kubernetes: permission deniedW1104 12:31:03.340082    7565 handlers.go:50] Authentication is disabled
[restful] 2016/11/04 12:31:03 log.go:30: [restful/swagger] listing is available at https:///swaggerapi/
[restful] 2016/11/04 12:31:03 log.go:30: [restful/swagger] https:///swaggerui/ is mapped to folder /swagger-ui/
F1104 12:31:03.367346    7565 genericapiserver.go:195] unable to load server certificate: open /var/run/kubernetes/apiserver.crt: no such file or directory
goroutine 90 [running]:
k8s.io/kubernetes/vendor/github.com/golang/glog.stacks(0x2200600, 0xc400000000, 0x9c, 0x198)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:766 +0xa5
k8s.io/kubernetes/vendor/github.com/golang/glog.(*loggingT).output(0x21e0340, 0xc400000003, 0xc420160600, 0x2058e22, 0x13, 0xc3, 0x0)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:717 +0x337
k8s.io/kubernetes/vendor/github.com/golang/glog.(*loggingT).printDepth(0x21e0340, 0xc400000003, 0x1, 0xc420825b50, 0x1, 0x1)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:646 +0x126
k8s.io/kubernetes/vendor/github.com/golang/glog.(*loggingT).print(0x21e0340, 0xc400000003, 0xc420825b50, 0x1, 0x1)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:637 +0x5a
k8s.io/kubernetes/vendor/github.com/golang/glog.Fatal(0xc420825b50, 0x1, 0x1)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:1125 +0x53
k8s.io/kubernetes/pkg/genericapiserver.preparedGenericAPIServer.Run(0xc42078eb40, 0xc4200d0a80)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/genericapiserver/genericapiserver.go:195 +0x294
k8s.io/kubernetes/examples/apiserver.Run(0xc4204a5000, 0xc4200d0a80, 0x0, 0x0)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/examples/apiserver/apiserver.go:108 +0xb1f
k8s.io/kubernetes/test/integration/discoverysummarizer.runAPIServer.func1(0xc4204a5000, 0xc4200d0a80, 0xc4201603c0)
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/integration/discoverysummarizer/discoverysummarizer_test.go:74 +0x39
created by k8s.io/kubernetes/test/integration/discoverysummarizer.runAPIServer
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/integration/discoverysummarizer/discoverysummarizer_test.go:77 +0x82
FAIL    k8s.io/kubernetes/test/integration/discoverysummarizer  1.463s
```
and so on. A few other tests fail with the same symptoms, because the directory is missing and not writable by a regular user.

**Release note**:
```release-note
NONE
```

Currently, if a developer tries to run integration tests locally,
they fail due to the missing /var/run/kubernetes directory inside
the cross-build container image. As the tests are run by a non-privileged
user, this directory must be pre-created and made user-writable
when the cross build/test container is created.
2016-11-08 23:26:49 -08:00
Kubernetes Submit Queue
54274807d9 Merge pull request #35832 from MrHohn/addon-manager-logs
Automatic merge from submit-queue

Expose addon manager's log by logging to file

Fixes #35823.

Use the same approach as [`kube-proxy`](https://github.com/kubernetes/kubernetes/blob/master/cluster/saltbase/salt/kube-proxy/kube-proxy.manifest) for logging. After this, we will be able to check the Addon Manager's logs for Jenkins tests.

Would like to see the Jenkins test results to examine this.

@mikedanese
2016-11-08 22:50:57 -08:00
Yu-Ju Hong
cbe2358940 Remove mounter flags from cri test configs 2016-11-08 22:14:28 -08:00
Kubernetes Submit Queue
c640eeb841 Merge pull request #33260 from svanharmelen/b-cloudstack-loadbalancer
Automatic merge from submit-queue

cloudprovider/cloudstack: Fix a bug where we assume IP addresses instead of hostnames

Because of how our test environment was set up, we didn't notice that we were assuming the load balancer hosts list always contained IP addresses, while the entries are actually hostnames.

So without this PR, the load balancer code will not work as expected as it will not be able to find the nodes that need to be load balanced.

Also updated some comments and added a check to prevent trying to release a public IP if we don’t have one.
2016-11-08 21:36:16 -08:00
Kubernetes Submit Queue
ee029d9f3f Merge pull request #33850 from ymqytw/add_e2e_test_for_kubectl_in_pod
Automatic merge from submit-queue

add e2e test for kubectl in a Pod

Add an e2e test to make sure kubectl can talk to the API server when it is mounted in a pod.
Fixes: #33138
2016-11-08 21:00:53 -08:00
Kubernetes Submit Queue
b600533794 Merge pull request #36423 from Random-Liu/support-root-nobody
Automatic merge from submit-queue

CRI: Support string user name.

https://github.com/kubernetes/kubernetes/pull/33239 and https://github.com/kubernetes/kubernetes/pull/34811 combined together broke the cri e2e test. https://k8s-testgrid.appspot.com/google-gce#gci-gce-cri

The reason is that:
1) In dockershim and dockertools, we assume that `Image.Config.User` is an integer. However, when a user builds the image with `USER nobody:nobody` or `USER root:root`, the field becomes `nobody:nobody` or `root:root`. This makes dockershim always return an error.
2) The new kube-dns-autoscaler image is using `USER nobody:nobody`. (See https://github.com/kubernetes-incubator/cluster-proportional-autoscaler/blob/master/Dockerfile.in#L21)

This doesn't break the normal e2e test, because in dockertools [we only inspect image uid if `RunAsNonRoot` is set](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/dockertools/docker_manager.go#L2333-L2338), which is just a coincidence. However, in kuberuntime, [we always inspect image uid first](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/kuberuntime/kuberuntime_container.go#L141).

This PR adds literal `root` and `nobody` support. One problem is that `nobody` is not the same across OS distros. Usually it is `65534`, but some distros don't follow that. For example, Fedora uses `99`. (See https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/thread/Q5GCKZ7Q7PAUQW66EV7IBJGSRJWYXBBH/?sort=date)

Possible solution:
* Option 1: ~~Just use `65534`. This is fine because currently we only need to know whether the user is root or not.~~ Actually, we need to pass the user id to the runtime when creating a container.
* Option 2: Return the uid as a string in CRI, and let kuberuntime handle the string directly.

This PR is using option 1.
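For context, the image uid only matters when a pod asks to run as non-root; a pod-level sketch of that setting (the image name is a placeholder):

```
# Sketch only; the image is a placeholder.
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-example
spec:
  containers:
    - name: app
      image: gcr.io/example/app:latest   # placeholder image
      securityContext:
        runAsNonRoot: true               # triggers the uid check described above
```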

@yujuhong @feiskyer 
/cc @kubernetes/sig-node
/cc @MrHohn
2016-11-08 20:24:31 -08:00