If the test HTTP servers are not closed, running with `go test -count=n` makes
them pile up and eventually hit the limit of open files:
client_test.go:522: expected an error, got Get http://127.0.0.1:46070/api: dial tcp 127.0.0.1:46070: socket: too many open files
Steps to reproduce (no longer fails):
godep go test -short -run '^$' -o test .
./test -test.run '^TestGetSwaggerSchema' -test.count 100
Note that this might not fail if your `ulimit -n` is not low enough.
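
The fix is simply to close every test server once the test is done. A minimal
sketch of the pattern, assuming a plain `net/http/httptest` server (the test
name and handler here are illustrative, not the actual test code):

```go
package client_test

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

// TestServerIsClosed is an illustrative test showing the fix: the
// httptest server is closed via defer, so repeated runs with -count=n
// do not leave listening sockets behind.
func TestServerIsClosed(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	}))
	// Without this Close, every -count iteration leaks an open listener
	// and the process eventually hits its open-file limit.
	defer server.Close()

	resp, err := http.Get(server.URL + "/api")
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	resp.Body.Close()
}
```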
I broke out the error retry logic into a named function that could be
tested independently of the rest of the event processing framework. This
allows the test to know when the retry logic is done.
The problem with the original test was that there was no reliable way to know
when it was done trying to record an event. A sentinel event was being used,
but there is no ordering guarantee. I could have added synchronization around
the attempt tracking to fix the data race, but the test case was still
fundamentally flawed and would fail occasionally.
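
For illustration, a hedged sketch of the shape of the change (the type names
and `recordToSinkWithRetry` are hypothetical stand-ins, not the actual
identifiers): the retry loop becomes a plain function that returns only when it
has succeeded or given up, so a test can call it directly and know exactly when
the retrying has finished.

```go
package record

// Event and EventSink are simplified stand-ins for the real types; this
// is an illustrative sketch, not the actual Kubernetes code.
type Event struct{ Reason string }

type EventSink interface {
	Create(event *Event) (*Event, error)
}

// recordToSinkWithRetry (hypothetical name) is the retry logic pulled
// out into a named function: it returns only once it has succeeded or
// given up, so a caller -- including a test -- knows exactly when the
// retrying is done. No sentinel event or racy attempt counter is needed.
func recordToSinkWithRetry(sink EventSink, event *Event, maxAttempts int) error {
	var lastErr error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		_, lastErr = sink.Create(event)
		if lastErr == nil {
			return nil
		}
	}
	return lastErr
}
```

A test can then pass a fake sink that always fails and, once the function
returns, assert on how many times Create was called.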
If the test HTTP servers are not closed, running with `go test -count=n` makes
them pile up and eventually hit the limit of open files:
2015/12/05 12:43:56 http: Accept error: accept tcp 127.0.0.1:39768: accept4: too many open files; retrying in 5ms
2015/12/05 12:43:56 http: Accept error: accept tcp 127.0.0.1:46606: accept4: too many open files; retrying in 5ms
2015/12/05 12:43:56 http: Accept error: accept tcp 127.0.0.1:46606: accept4: too many open files; retrying in 10ms
2015/12/05 12:43:56 http: Accept error: accept tcp 127.0.0.1:46606: accept4: too many open files; retrying in 20ms
Steps to reproduce (no longer fails):
godep go test -short -run '^$' -o test .
./test -test.run '^TestDoRequestNewWayFile$' -test.count 100
Note that this might not fail if your `ulimit -n` is not low enough.
For AWS EBS, a volume can only be attached to a node in the same AZ.
The scheduler must therefore detect if a volume is being attached to a
pod, and ensure that the pod is scheduled on a node in the same AZ as
the volume.
So that the scheduler need not query the cloud provider every time, and to
support decoupled operation (e.g. bare metal), we tag the volume with our
placement labels. On AWS this is done automatically by an admission controller
whenever a PersistentVolume backed by an EBS volume is created.
Support for tagging GCE PVs will follow.
Pods that specify a volume directly (i.e. without using a
PersistentVolumeClaim) will not currently be scheduled correctly (i.e.
they will be scheduled without zone-awareness).
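
As a rough sketch of the scheduling side (simplified and hypothetical; the real
check lives in the scheduler's predicates, and the exact label key should be
treated as an assumption here), a node is only a candidate if its zone label
matches the one stamped onto each zone-labelled volume the pod uses:

```go
package predicates

// zoneLabel is the placement label assumed to be present on both nodes
// and EBS-backed PersistentVolumes (assumed key, for illustration only).
const zoneLabel = "failure-domain.beta.kubernetes.io/zone"

// volumeZonesMatch is a simplified sketch of the zone-conflict check:
// the node can host the pod only if, for every zone-labelled volume the
// pod uses, the node sits in the same zone.
func volumeZonesMatch(nodeLabels map[string]string, volumeLabels []map[string]string) bool {
	nodeZone, nodeHasZone := nodeLabels[zoneLabel]
	for _, vl := range volumeLabels {
		volZone, ok := vl[zoneLabel]
		if !ok {
			// The volume carries no placement label (e.g. not EBS-backed),
			// so it does not constrain scheduling.
			continue
		}
		if !nodeHasZone || nodeZone != volZone {
			return false
		}
	}
	return true
}
```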
Skip updating resources that already meet the desired replica count.
This change affects both kubectl scale and kubectl delete: reapable resources
that already have the desired number of replicas (the number provided via
--replicas for scale, or zero for delete) won't be updated again, and in the
case of scale an "already scaled" message will be printed.
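
The core of the change is just a guard before issuing the update. A hedged
sketch of the idea (the function name, signature, and message wording are
illustrative, not the exact kubectl code):

```go
package kubectl

import "fmt"

// scaleIfNeeded illustrates the new behaviour: if the resource already
// has the desired number of replicas (the value from --replicas for
// scale, or zero for delete/reap), no update is sent and, for scale,
// an "already scaled" message is reported instead.
func scaleIfNeeded(name string, current, desired int, update func(int) error) (string, error) {
	if current == desired {
		return fmt.Sprintf("%s already scaled to %d replicas", name, desired), nil
	}
	if err := update(desired); err != nil {
		return "", err
	}
	return fmt.Sprintf("%s scaled to %d replicas", name, desired), nil
}
```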