Yu-Ju Hong
a29432163e
node_e2e: disable serialized image pulls and increase test timeout
2016-04-21 15:34:28 -07:00
goltermann
dddc6cb6c8
Fix a few spellings.
2016-04-21 15:16:42 -07:00
Tim St. Clair
c4eacd3b76
Update cadvisor godeps
2016-04-21 15:14:19 -07:00
Tim St. Clair
903067c6c2
Update go-dockerclient godeps
...
Update go-dockerclient license
2016-04-21 15:14:19 -07:00
k8s-merge-robot
e73606b974
Merge pull request #24574 from nikhiljindal/buildFederatedServer
...
Automatic merge from submit-queue
update hack/build-go to build federation/cmd/federated-apiserver as well
federation/cmd/federated-apiserver was added in https://github.com/kubernetes/kubernetes/pull/23509
cc @jianhuiz
2016-04-21 15:00:06 -07:00
k8s-merge-robot
8c24c68315
Merge pull request #24324 from zjmchn/fix-vagrant-halt-up-issue
...
Automatic merge from submit-queue
fix ./cluster/kube-up.sh failing after vagrant halt (issue #18990)
2016-04-21 15:00:04 -07:00
Jeff Grafton
308692a0c5
Use cluster/log-dump.sh to collect base cluster logs in kubemark
2016-04-21 14:39:10 -07:00
Zach Loafman
fa2a516fce
First pass at a GKE large cluster Jenkins job
...
Runs a 1000-node GKE parallel e2e test. On demand only. We'll add more
tests as I see what actually works - this is going to have some
flakiness on its own.
2016-04-21 14:36:56 -07:00
derekwaynecarr
2b9cfd414d
Add utility for determining qos of a pod
2016-04-21 17:15:17 -04:00
nikhiljindal
75b0842388
Removing KUBE_API_VERSIONS from our test scripts
2016-04-21 13:56:04 -07:00
gmarek
3627bb7be9
Add Services to Load test
2016-04-21 22:00:26 +02:00
Russ Cox
58629a28e4
pkg/registry/pod: avoid allocation in common pod search
...
PodToSelectableFields creates a map of field attributes
for a particular pod filter query to use. If the result
of the query does not depend on the fields at all, avoid
creating the map.
This is the source of about half the allocated memory
(by byte volume) during the kubemark benchmark, and it
is in turn the main driver of CPU usage during the benchmark,
because of the many background pod watches going on,
as well as the occasional list pods.
These benchmarks for 1000-node kubemark show the difference
from my previous CL (caching timers) to this CL:
name old ms/op new ms/op delta
LIST_nodes_p50 124 ±13% 121 ± 9% ~ (p=0.136 n=29+27)
LIST_nodes_p90 278 ±15% 266 ±12% -4.26% (p=0.031 n=29+27)
LIST_nodes_p99 405 ±19% 400 ±14% ~ (p=0.864 n=28+28)
LIST_pods_p50 65.3 ±13% 56.3 ± 9% -13.75% (p=0.000 n=29+28)
LIST_pods_p90 115 ±12% 93 ± 8% -18.75% (p=0.000 n=27+28)
LIST_pods_p99 226 ±21% 202 ±14% -10.52% (p=0.000 n=28+28)
LIST_replicationcontrollers_p50 26.6 ±43% 26.2 ±54% ~ (p=0.487 n=29+29)
LIST_replicationcontrollers_p90 68.7 ±63% 68.6 ±59% ~ (p=0.931 n=29+28)
LIST_replicationcontrollers_p99 173 ±41% 177 ±49% ~ (p=0.618 n=28+29)
PUT_replicationcontrollers_p50 5.83 ±36% 5.94 ±32% ~ (p=0.818 n=28+29)
PUT_replicationcontrollers_p90 15.9 ± 6% 15.5 ± 6% -2.23% (p=0.019 n=28+29)
PUT_replicationcontrollers_p99 56.7 ±41% 39.5 ±55% -30.29% (p=0.000 n=28+29)
DELETE_pods_p50 24.3 ±17% 24.3 ±13% ~ (p=0.855 n=28+29)
DELETE_pods_p90 30.6 ± 0% 30.7 ± 1% ~ (p=0.140 n=28+29)
DELETE_pods_p99 56.3 ±27% 54.2 ±23% ~ (p=0.188 n=28+27)
PUT_nodes_p50 14.9 ± 1% 14.8 ± 2% ~ (p=0.781 n=28+27)
PUT_nodes_p90 16.4 ± 2% 16.3 ± 2% ~ (p=0.321 n=28+28)
PUT_nodes_p99 44.6 ±42% 41.3 ±35% ~ (p=0.361 n=29+28)
POST_replicationcontrollers_p50 6.33 ±23% 6.34 ±20% ~ (p=0.993 n=28+28)
POST_replicationcontrollers_p90 15.2 ± 6% 15.0 ± 5% ~ (p=0.106 n=28+29)
POST_replicationcontrollers_p99 53.4 ±52% 32.9 ±46% -38.41% (p=0.000 n=27+27)
POST_pods_p50 9.33 ±13% 8.95 ±16% ~ (p=0.069 n=29+29)
POST_pods_p90 16.3 ± 4% 16.1 ± 4% -1.43% (p=0.044 n=29+29)
POST_pods_p99 28.4 ±23% 26.4 ±12% -7.05% (p=0.004 n=29+28)
DELETE_replicationcontrollers_p50 2.50 ±13% 2.50 ±13% ~ (p=0.649 n=29+28)
DELETE_replicationcontrollers_p90 11.7 ±10% 11.8 ±13% ~ (p=0.863 n=28+28)
DELETE_replicationcontrollers_p99 19.0 ±22% 19.1 ±21% ~ (p=0.818 n=28+29)
PUT_pods_p50 10.3 ± 5% 10.2 ± 5% ~ (p=0.235 n=28+27)
PUT_pods_p90 16.0 ± 1% 16.0 ± 1% ~ (p=0.380 n=29+28)
PUT_pods_p99 21.6 ±14% 20.9 ± 9% -3.15% (p=0.010 n=28+27)
POST_bindings_p50 8.98 ±17% 8.92 ±15% ~ (p=0.666 n=29+28)
POST_bindings_p90 16.5 ± 2% 16.5 ± 3% ~ (p=0.840 n=26+29)
POST_bindings_p99 21.4 ± 5% 21.1 ± 4% -1.21% (p=0.049 n=27+28)
GET_nodes_p90 1.18 ±19% 1.14 ±24% ~ (p=0.137 n=29+29)
GET_nodes_p99 8.29 ±40% 7.50 ±46% ~ (p=0.106 n=28+29)
GET_replicationcontrollers_p90 1.03 ±21% 1.01 ±27% ~ (p=0.489 n=29+29)
GET_replicationcontrollers_p99 10.0 ±123% 10.0 ±145% ~ (p=0.794 n=28+29)
GET_pods_p90 1.08 ±21% 1.02 ±19% ~ (p=0.083 n=29+28)
GET_pods_p99 2.81 ±39% 2.45 ±38% -12.78% (p=0.021 n=28+25)
Overall the two CLs combined have this effect:
name old ms/op new ms/op delta
LIST_nodes_p50 127 ±16% 121 ± 9% -4.58% (p=0.000 n=29+27)
LIST_nodes_p90 326 ±12% 266 ±12% -18.48% (p=0.000 n=29+27)
LIST_nodes_p99 453 ±11% 400 ±14% -11.79% (p=0.000 n=29+28)
LIST_replicationcontrollers_p50 29.4 ±49% 26.2 ±54% ~ (p=0.085 n=30+29)
LIST_replicationcontrollers_p90 83.0 ±78% 68.6 ±59% -17.33% (p=0.013 n=30+28)
LIST_replicationcontrollers_p99 216 ±43% 177 ±49% -17.68% (p=0.000 n=29+29)
DELETE_pods_p50 24.5 ±14% 24.3 ±13% ~ (p=0.562 n=30+29)
DELETE_pods_p90 30.7 ± 1% 30.7 ± 1% -0.30% (p=0.011 n=29+29)
DELETE_pods_p99 77.2 ±34% 54.2 ±23% -29.76% (p=0.000 n=30+27)
PUT_replicationcontrollers_p50 5.86 ±26% 5.94 ±32% ~ (p=0.734 n=29+29)
PUT_replicationcontrollers_p90 15.8 ± 7% 15.5 ± 6% -2.06% (p=0.010 n=29+29)
PUT_replicationcontrollers_p99 57.8 ±35% 39.5 ±55% -31.60% (p=0.000 n=29+29)
PUT_nodes_p50 14.9 ± 2% 14.8 ± 2% -0.68% (p=0.012 n=30+27)
PUT_nodes_p90 16.5 ± 1% 16.3 ± 2% -0.90% (p=0.000 n=27+28)
PUT_nodes_p99 57.9 ±47% 41.3 ±35% -28.61% (p=0.000 n=30+28)
POST_replicationcontrollers_p50 6.35 ±29% 6.34 ±20% ~ (p=0.944 n=30+28)
POST_replicationcontrollers_p90 15.4 ± 5% 15.0 ± 5% -2.18% (p=0.001 n=29+29)
POST_replicationcontrollers_p99 52.2 ±71% 32.9 ±46% -36.99% (p=0.000 n=29+27)
POST_pods_p50 8.99 ±13% 8.95 ±16% ~ (p=0.903 n=30+29)
POST_pods_p90 16.2 ± 4% 16.1 ± 4% ~ (p=0.287 n=29+29)
POST_pods_p99 30.9 ±21% 26.4 ±12% -14.73% (p=0.000 n=28+28)
POST_bindings_p50 9.34 ±12% 8.92 ±15% -4.54% (p=0.013 n=30+28)
POST_bindings_p90 16.6 ± 1% 16.5 ± 3% -0.73% (p=0.017 n=28+29)
POST_bindings_p99 23.5 ± 9% 21.1 ± 4% -10.09% (p=0.000 n=27+28)
PUT_pods_p50 10.8 ±11% 10.2 ± 5% -5.47% (p=0.000 n=30+27)
PUT_pods_p90 16.1 ± 1% 16.0 ± 1% -0.64% (p=0.000 n=29+28)
PUT_pods_p99 23.4 ± 9% 20.9 ± 9% -10.93% (p=0.000 n=28+27)
DELETE_replicationcontrollers_p50 2.42 ±16% 2.50 ±13% ~ (p=0.054 n=29+28)
DELETE_replicationcontrollers_p90 11.5 ±12% 11.8 ±13% ~ (p=0.141 n=30+28)
DELETE_replicationcontrollers_p99 19.5 ±21% 19.1 ±21% ~ (p=0.397 n=29+29)
GET_nodes_p50 0.77 ±10% 0.76 ±10% ~ (p=0.317 n=28+28)
GET_nodes_p90 1.20 ±16% 1.14 ±24% -4.66% (p=0.036 n=28+29)
GET_nodes_p99 11.4 ±48% 7.5 ±46% -34.28% (p=0.000 n=28+29)
GET_replicationcontrollers_p50 0.74 ±17% 0.73 ±17% ~ (p=0.222 n=30+28)
GET_replicationcontrollers_p90 1.04 ±25% 1.01 ±27% ~ (p=0.231 n=30+29)
GET_replicationcontrollers_p99 12.1 ±81% 10.0 ±145% ~ (p=0.063 n=28+29)
GET_pods_p50 0.78 ±12% 0.77 ±10% ~ (p=0.178 n=30+28)
GET_pods_p90 1.06 ±19% 1.02 ±19% ~ (p=0.120 n=29+28)
GET_pods_p99 3.92 ±43% 2.45 ±38% -37.55% (p=0.000 n=27+25)
LIST_services_p50 0.20 ±13% 0.20 ±16% ~ (p=0.854 n=28+29)
LIST_services_p90 0.28 ±15% 0.27 ±14% ~ (p=0.219 n=29+28)
LIST_services_p99 0.49 ±20% 0.47 ±24% ~ (p=0.140 n=29+29)
LIST_endpoints_p50 0.19 ±14% 0.19 ±15% ~ (p=0.709 n=29+29)
LIST_endpoints_p90 0.26 ±16% 0.26 ±13% ~ (p=0.274 n=29+28)
LIST_endpoints_p99 0.46 ±24% 0.44 ±21% ~ (p=0.111 n=29+29)
LIST_horizontalpodautoscalers_p50 0.16 ±15% 0.15 ±13% ~ (p=0.253 n=30+27)
LIST_horizontalpodautoscalers_p90 0.22 ±24% 0.21 ±16% ~ (p=0.152 n=30+28)
LIST_horizontalpodautoscalers_p99 0.31 ±33% 0.31 ±38% ~ (p=0.817 n=28+29)
LIST_daemonsets_p50 0.16 ±20% 0.15 ±11% ~ (p=0.135 n=30+27)
LIST_daemonsets_p90 0.22 ±18% 0.21 ±25% ~ (p=0.135 n=29+28)
LIST_daemonsets_p99 0.29 ±28% 0.29 ±32% ~ (p=0.606 n=28+28)
LIST_jobs_p50 0.16 ±16% 0.15 ±12% ~ (p=0.375 n=29+28)
LIST_jobs_p90 0.22 ±18% 0.21 ±16% ~ (p=0.090 n=29+26)
LIST_jobs_p99 0.31 ±28% 0.28 ±35% -10.29% (p=0.005 n=29+27)
LIST_deployments_p50 0.15 ±16% 0.15 ±13% ~ (p=0.565 n=29+28)
LIST_deployments_p90 0.22 ±22% 0.21 ±19% ~ (p=0.107 n=30+28)
LIST_deployments_p99 0.31 ±27% 0.29 ±34% ~ (p=0.068 n=29+28)
LIST_namespaces_p50 0.21 ±25% 0.21 ±26% ~ (p=0.768 n=29+27)
LIST_namespaces_p90 0.28 ±29% 0.26 ±25% ~ (p=0.101 n=30+28)
LIST_namespaces_p99 0.30 ±48% 0.29 ±42% ~ (p=0.339 n=30+29)
LIST_replicasets_p50 0.15 ±18% 0.15 ±16% ~ (p=0.612 n=30+28)
LIST_replicasets_p90 0.22 ±19% 0.21 ±18% -5.13% (p=0.011 n=28+27)
LIST_replicasets_p99 0.31 ±39% 0.28 ±29% ~ (p=0.066 n=29+28)
LIST_persistentvolumes_p50 0.16 ±23% 0.15 ±21% ~ (p=0.124 n=30+29)
LIST_persistentvolumes_p90 0.21 ±23% 0.20 ±23% ~ (p=0.092 n=30+25)
LIST_persistentvolumes_p99 0.21 ±24% 0.20 ±23% ~ (p=0.053 n=30+25)
LIST_resourcequotas_p50 0.16 ±12% 0.16 ±13% ~ (p=0.175 n=27+28)
LIST_resourcequotas_p90 0.20 ±22% 0.20 ±24% ~ (p=0.388 n=30+28)
LIST_resourcequotas_p99 0.22 ±24% 0.22 ±23% ~ (p=0.575 n=30+28)
LIST_persistentvolumeclaims_p50 0.15 ±21% 0.15 ±29% ~ (p=0.079 n=30+28)
LIST_persistentvolumeclaims_p90 0.19 ±26% 0.18 ±34% ~ (p=0.446 n=29+29)
LIST_persistentvolumeclaims_p99 0.19 ±26% 0.18 ±34% ~ (p=0.446 n=29+29)
LIST_pods_p50 68.0 ±16% 56.3 ± 9% -17.19% (p=0.000 n=29+28)
LIST_pods_p90 119 ±19% 93 ± 8% -21.88% (p=0.000 n=28+28)
LIST_pods_p99 230 ±18% 202 ±14% -12.13% (p=0.000 n=27+28)
2016-04-21 15:53:47 -04:00
Russ Cox
6a19e46ed6
pkg/storage: cache timers
...
A previous change here replaced time.After with an explicit
timer that can be stopped, to avoid filling up the active timer list
with timers that are no longer needed. But an even better fix is to
reuse the timers across calls, to avoid filling the allocated heap
with work for the garbage collector. On top of that, try a quick
non-blocking send to avoid the timer entirely.
For the e2e 1000-node kubemark test, basically everything gets faster,
some things significantly so. The 90th and 99th percentile for LIST nodes
in particular are the worst case that has caused SLO/SLA problems
in the past, and this reduces 99th percentile by 10%.
name old ms/op new ms/op delta
LIST_nodes_p50 127 ±16% 124 ±13% ~ (p=0.136 n=29+29)
LIST_nodes_p90 326 ±12% 278 ±15% -14.85% (p=0.000 n=29+29)
LIST_nodes_p99 453 ±11% 405 ±19% -10.70% (p=0.000 n=29+28)
LIST_replicationcontrollers_p50 29.4 ±49% 26.6 ±43% ~ (p=0.176 n=30+29)
LIST_replicationcontrollers_p90 83.0 ±78% 68.7 ±63% -17.30% (p=0.020 n=30+29)
LIST_replicationcontrollers_p99 216 ±43% 173 ±41% -19.53% (p=0.000 n=29+28)
DELETE_pods_p50 24.5 ±14% 24.3 ±17% ~ (p=0.562 n=30+28)
DELETE_pods_p90 30.7 ± 1% 30.6 ± 0% -0.44% (p=0.000 n=29+28)
DELETE_pods_p99 77.2 ±34% 56.3 ±27% -26.99% (p=0.000 n=30+28)
PUT_replicationcontrollers_p50 5.86 ±26% 5.83 ±36% ~ (p=1.000 n=29+28)
PUT_replicationcontrollers_p90 15.8 ± 7% 15.9 ± 6% ~ (p=0.936 n=29+28)
PUT_replicationcontrollers_p99 57.8 ±35% 56.7 ±41% ~ (p=0.725 n=29+28)
PUT_nodes_p50 14.9 ± 2% 14.9 ± 1% -0.55% (p=0.020 n=30+28)
PUT_nodes_p90 16.5 ± 1% 16.4 ± 2% -0.60% (p=0.040 n=27+28)
PUT_nodes_p99 57.9 ±47% 44.6 ±42% -23.02% (p=0.000 n=30+29)
POST_replicationcontrollers_p50 6.35 ±29% 6.33 ±23% ~ (p=0.957 n=30+28)
POST_replicationcontrollers_p90 15.4 ± 5% 15.2 ± 6% -1.14% (p=0.034 n=29+28)
POST_replicationcontrollers_p99 52.2 ±71% 53.4 ±52% ~ (p=0.720 n=29+27)
POST_pods_p50 8.99 ±13% 9.33 ±13% +3.79% (p=0.023 n=30+29)
POST_pods_p90 16.2 ± 4% 16.3 ± 4% ~ (p=0.113 n=29+29)
POST_pods_p99 30.9 ±21% 28.4 ±23% -8.26% (p=0.001 n=28+29)
POST_bindings_p50 9.34 ±12% 8.98 ±17% ~ (p=0.083 n=30+29)
POST_bindings_p90 16.6 ± 1% 16.5 ± 2% -0.76% (p=0.000 n=28+26)
POST_bindings_p99 23.5 ± 9% 21.4 ± 5% -8.98% (p=0.000 n=27+27)
PUT_pods_p50 10.8 ±11% 10.3 ± 5% -4.67% (p=0.000 n=30+28)
PUT_pods_p90 16.1 ± 1% 16.0 ± 1% -0.55% (p=0.003 n=29+29)
PUT_pods_p99 23.4 ± 9% 21.6 ±14% -8.03% (p=0.000 n=28+28)
DELETE_replicationcontrollers_p50 2.42 ±16% 2.50 ±13% ~ (p=0.072 n=29+29)
DELETE_replicationcontrollers_p90 11.5 ±12% 11.7 ±10% ~ (p=0.190 n=30+28)
DELETE_replicationcontrollers_p99 19.5 ±21% 19.0 ±22% ~ (p=0.298 n=29+28)
GET_nodes_p90 1.20 ±16% 1.18 ±19% ~ (p=0.626 n=28+29)
GET_nodes_p99 11.4 ±48% 8.3 ±40% -27.31% (p=0.000 n=28+28)
GET_replicationcontrollers_p90 1.04 ±25% 1.03 ±21% ~ (p=0.682 n=30+29)
GET_replicationcontrollers_p99 12.1 ±81% 10.0 ±123% ~ (p=0.135 n=28+28)
GET_pods_p90 1.06 ±19% 1.08 ±21% ~ (p=0.597 n=29+29)
GET_pods_p99 3.92 ±43% 2.81 ±39% -28.39% (p=0.000 n=27+28)
LIST_pods_p50 68.0 ±16% 65.3 ±13% ~ (p=0.066 n=29+29)
LIST_pods_p90 119 ±19% 115 ±12% ~ (p=0.091 n=28+27)
LIST_pods_p99 230 ±18% 226 ±21% ~ (p=0.251 n=27+28)
2016-04-21 15:53:47 -04:00
k8s-merge-robot
5555f6e118
Merge pull request #24575 from nikhiljindal/fedeServer
...
Automatic merge from submit-queue
call genericapiserver directly instead of going via master in federated-apiserver
I can cleanup more code, but this is the minimum required to unblock https://github.com/kubernetes/kubernetes/pull/23847
cc @jianhuiz
2016-04-21 12:46:19 -07:00
k8s-merge-robot
34fc7f0401
Merge pull request #24461 from wojtek-t/enable_components_to_use_protobufs
...
Automatic merge from submit-queue
Allow components to use protobufs while talking to apiserver.
2016-04-21 12:46:17 -07:00
nikhiljindal
aa4cdac005
hack/build-go to build federation/cmd/federated-apiserver as well
2016-04-21 12:38:53 -07:00
k8s-merge-robot
72e51dacfe
Merge pull request #24034 from AdoHe/log_spam
...
Automatic merge from submit-queue
remove log spam from nodecontroller
@thockin @quinton-hoole ptal.
2016-04-21 12:11:05 -07:00
k8s-merge-robot
9d4eee63ab
Merge pull request #24589 from derekwaynecarr/fix_shm
...
Automatic merge from submit-queue
docker daemon complains SHM size must be greater than 0
Fixes https://github.com/kubernetes/kubernetes/issues/24588
I am hitting this on Fedora 23 w/ docker 1.9.1 using systemd cgroup-driver.
```
$ docker version
Client:
Version: 1.9.1
API version: 1.21
Package version: docker-1.9.1-9.gitee06d03.fc23.x86_64
Go version: go1.5.3
Git commit: ee06d03/1.9.1
Built:
OS/Arch: linux/amd64
Server:
Version: 1.9.1
API version: 1.21
Package version: docker-1.9.1-9.gitee06d03.fc23.x86_64
Go version: go1.5.3
Git commit: ee06d03/1.9.1
Built:
OS/Arch: linux/amd64
```
Not sure why I am the only one hitting it right now, but putting this out here for comment.
/cc @kubernetes/sig-node @kubernetes/rh-cluster-infra @smarterclayton
2016-04-21 12:11:03 -07:00
Random-Liu
d981fee2ee
Refactor Info and Version.
2016-04-21 12:02:50 -07:00
nikhiljindal
c3b124d550
call genericapiserver directly instead of going via master in federated-apiserver
2016-04-21 12:00:19 -07:00
k8s-merge-robot
de3ce4f465
Merge pull request #24519 from fejta/heapster
...
Automatic merge from submit-queue
Disable heapster job which has been broken for a month
https://github.com/kubernetes/kubernetes/issues/23538
This job is no longer producing a useful signal. http://kubekins.dls.corp.google.com/ shows that the last pass was nearly two months ago. I would like to disable the job until someone has a chance to fix it, so we are not wasting jenkins resources or contributing to system instability.
2016-04-21 11:29:19 -07:00
k8s-merge-robot
145917368a
Merge pull request #24398 from caesarxuchao/fqresource-fake-client
...
Automatic merge from submit-queue
Make fake client actions use fully qualified resource
The output of a versioned clientset is versioned objects. The fake client used to assume only internal objects would be returned. This PR removes this assumption by making fake actions initialized with a fully qualified resource instead of a resource string.
We have to regenerate fake clients in release_1_2 clientset to let it compile. For the test fakes, we are breaking the backwards compatibility promise.
Part of #24155 .
2016-04-21 10:45:32 -07:00
Andy Zheng
b8fd9e1a8d
Trusty: Add debug supports for docker and kubelet
2016-04-21 09:49:52 -07:00
gmarek
b76bed0cc9
All clients under ClientSet share one RateLimiter.
2016-04-21 18:48:22 +02:00
k8s-merge-robot
a0e4a80eb4
Merge pull request #24492 from mikebrow/vagrant-doc-updates
...
Automatic merge from submit-queue
updates to vagrant.md
Addresses issue #24259; merges in edits from the now-deleted version of vagrant.md from the kubernetes.github.io/docs/getting-started-guides directory; see PR
https://github.com/kubernetes/kubernetes.github.io/pull/294
Signed-off-by: mikebrow <brownwm@us.ibm.com>
2016-04-21 09:16:54 -07:00
derekwaynecarr
cbf1cb81a9
SHM size must be greater than 0
2016-04-21 11:45:28 -04:00
Clayton Coleman
9ec92559e5
Merge pull request #24594 from wojtek-t/fix_protobuf_verifier
...
Fix verification script for proto-generation
2016-04-21 11:31:09 -04:00
k8s-merge-robot
85de6acadc
Merge pull request #23208 from deads2k/fix-version-override
...
Automatic merge from submit-queue
make storage enablement, serialization, and location orthogonal
This allows a caller (command-line, config, code) to specify multiple separate pieces of config information regarding storage and have them properly composed at runtime. The information provided is exposed through interfaces to allow alternate implementations, which allows us to change the expression of the config moving forward. I also fixed up the types to be correct as I moved through.
The same options still exist, but they're composed slightly differently:
1. specify target etcd servers per Group or per GroupResource
2. specify storage GroupVersions per Group or per GroupResource
3. specify etcd prefixes per GroupVersion or per GroupResource
4. specify that multiple GroupResources share the same location in etcd
5. enable GroupResources by GroupVersion or by GroupResource whitelist or GroupResource blacklist
The `storage.Interface` is built per GroupResource by:
1. find the set of possible storage GroupResource based on the priority list of cohabitators
1. choose a GroupResource from the set by looking at which Groups have the resource enabled
1. find the target etcd server, etcd prefix, and storage encoding based on the GroupResource
The API server can have its resources separately enabled, but for now I've kept them linked.
@liggitt I think we need this (or something like it) to be able to go from config to these interfaces. Given another round of refactoring, we may be able to reshape these to be more forward driving.
@smarterclayton this is important for rebasing and for a seamless 1.2 to 1.3 migration for us.
2016-04-21 08:24:29 -07:00
gmarek
d344c2e32b
Create multiple RCs in NC - prerequisite for adding services
2016-04-21 17:20:05 +02:00
Wojciech Tyczynski
3f06755566
Regenerate protobuf files
2016-04-21 16:22:52 +02:00
Wojciech Tyczynski
43f5102903
Verify protobufs
2016-04-21 16:22:52 +02:00
k8s-merge-robot
35ea9b87b8
Merge pull request #24185 from jsafrane/devel/stabilize-provisioning-e2e
...
Automatic merge from submit-queue
Increase provisioning test timeouts.
We've encountered flakes in our e2e infrastructure when kubelet took more than one minute to detach a volume used by a deleted pod.
Let's increase the wait period from 1 to 3 minutes. This slows down the test by 2 minutes, but it makes the test more stable.
In addition, when kubelet cannot detach a volume for 3 minutes, let the test wait for additional recycle controller retry interval (10 minutes) and hope the volume is deleted by then. This should not increase usual test time, it makes the test stable when kubelet is _extremely_ slow when releasing the volume.
Fixes: #24161
2016-04-21 06:03:37 -07:00
deads2k
60fe17d338
update resource quota controller for shared informers
2016-04-21 08:20:39 -04:00
deads2k
8c4e3af1a3
switch job controller to shared informer
2016-04-21 08:20:39 -04:00
deads2k
8b707016f9
convert daemonset controller to SharedInformer
2016-04-21 08:20:39 -04:00
k8s-merge-robot
0a5d57a383
Merge pull request #24079 from hongchaodeng/comp
...
Automatic merge from submit-queue
etcd3 store: provide compactor util
What's this PR?
- Provides a util to compact keys in etcd.
Reason:
We want to save the most recent 10 minutes of event history. It should be more than enough for slow watchers. It is not number-based, so it can tolerate event bursts too. We do not want to save longer since the current storage API cannot take advantage of the multi-version key yet. We might keep a longer history in the future.
2016-04-21 05:19:54 -07:00
deads2k
6670b73b18
make storage enablement, serialization, and location orthogonal
2016-04-21 08:18:55 -04:00
Wojciech Tyczynski
31e2f8e485
Regenerate files
2016-04-21 14:12:13 +02:00
Wojciech Tyczynski
d6896fa45a
Allow setting content-type in binaries
2016-04-21 14:12:13 +02:00
deads2k
3be4b690ea
create a negotiating serializer that wraps a single serializer
2016-04-21 07:51:59 -04:00
k8s-merge-robot
767fa6913d
Merge pull request #24118 from smarterclayton/proxy_args
...
Automatic merge from submit-queue
Allow Proxy to be initialized with store
2016-04-21 04:42:43 -07:00
k8s-merge-robot
09adffb318
Merge pull request #23317 from aanm/removing-ipv4-enforcement
...
Automatic merge from submit-queue
Remove requirement that Endpoints IPs be IPv4
Signed-off-by: André Martins <aanm90@gmail.com>
Release Note: The `Endpoints` API object now allows IPv6 addresses to be stored. Other components of the system are not ready for IPv6 yet, and many cloud providers are not IPv6 compatible, but installations that use their own controller logic can now store v6 endpoints.
2016-04-21 03:34:50 -07:00
kulke
ba4d74f3c7
Added Block Storage support to Rackspace provider, improved Node discovery.
2016-04-21 10:31:37 +02:00
Klaus Ma
b6d2c2b295
Clarify kubectl output.
2016-04-21 15:35:32 +08:00
Wojciech Tyczynski
739d0a61d3
Merge pull request #24587 from smarterclayton/provide_generated_proto
...
Generate protobuf marshallers for new apps group
2016-04-21 08:26:33 +02:00
derekwaynecarr
fd04ff1bd1
Generated artifacts for ConfigMapList change
2016-04-21 02:07:45 -04:00
derekwaynecarr
155c1a4465
ConfigMapList.Items should not be omitempty
2016-04-21 02:07:45 -04:00
Clayton Coleman
6ab9cfcc39
Generate protobuf marshallers for new apps group
2016-04-21 01:39:50 -04:00
Xiangpeng Zhao
c381a7b61e
Improve error messages in jwt_test.go
...
Fix typos and add more info to error messages.
2016-04-21 11:37:14 +08:00
Erick Fejta
e5dfba496d
Disable heapster job which has been broken for a month
2016-04-20 20:09:09 -07:00