To synchronize the current state of Kubernetes objects (e.g. pods, containers, etc.),
periodic sync loops are run. When there are many objects to synchronize,
these loops generate a lot of communication traffic. When many loops fire at the
same time, their traffic peaks overlap and CPU utilization hits 100%.
To spread the traffic over time, sync loops can jitter their period on each
iteration and help flatten the curve.
This commit adds a JitterUntil function with a jitterFactor parameter.
JitterUntil generalizes Until, which becomes the special case of jitterFactor being zero.
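
A minimal sketch of the idea, assuming a signature close to Until's (the function
added here may take additional parameters):

package wait

import (
    "math/rand"
    "time"
)

// JitterUntil loops, calling f once per period, until stopCh is closed.
// A positive jitterFactor extends each period by a random amount of up
// to jitterFactor*period, so loops started at the same moment drift
// apart instead of firing in lockstep.
func JitterUntil(f func(), period time.Duration, jitterFactor float64, stopCh <-chan struct{}) {
    for {
        select {
        case <-stopCh:
            return
        default:
        }

        f()

        sleep := period
        if jitterFactor > 0.0 {
            sleep = period + time.Duration(rand.Float64()*jitterFactor*float64(period))
        }

        select {
        case <-stopCh:
            return
        case <-time.After(sleep):
        }
    }
}

// Until stays as the jitter-free special case.
func Until(f func(), period time.Duration, stopCh <-chan struct{}) {
    JitterUntil(f, period, 0.0, stopCh)
}
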
Add a mutex to guard SetUpAt() and TearDownAt() calls - they should not
run in parallel. There is a race in these calls when two pods use the
same volume, one of them dying and the other one starting.

TearDownAt() checks that a volume is not needed by any pod and then detaches
the volume. It does so by counting how many times the volume is mounted
(the GetMountRefs() call below).

When SetUpAt() of the starting pod has already attached the volume but has not
mounted it yet, TearDownAt() of the dying pod will detach it - GetMountRefs()
does not see this volume.
These two threads run in parallel:
 dying pod.TearDownAt("myVolume")        starting pod.SetUpAt("myVolume")
   |                                     |
   |                                     AttachDisk("myVolume")
   refs, err := mount.GetMountRefs()     |
   Unmount("myDir")                      |
   if refs == 1 {                        |
   |  |                                  Mount("myVolume", "myDir")
   |  |                                  |
   |  DetachDisk("myVolume")             |
   |                                     start containers - OOPS! The volume is detached!
   |
   finish the pod cleanup
Also, add some logs to the cinder plugin for easier debugging in the future,
add a test, and update the fake mounter to know about bind mounts.
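
A rough sketch of the kind of guard meant here - the type, field, and method
signatures are simplified placeholders, not the cinder plugin's actual ones:

package cinder

import "sync"

// attachDetachMutex serializes SetUpAt() and TearDownAt() so that the
// GetMountRefs()/DetachDisk() sequence of a dying pod can never
// interleave with the AttachDisk()/Mount() sequence of a starting pod.
var attachDetachMutex sync.Mutex

// cinderVolume is a simplified placeholder for the plugin's volume type.
type cinderVolume struct {
    volName string
}

func (c *cinderVolume) SetUpAt(dir string) error {
    attachDetachMutex.Lock()
    defer attachDetachMutex.Unlock()
    // attach the disk and bind-mount it into dir ...
    return nil
}

func (c *cinderVolume) TearDownAt(dir string) error {
    attachDetachMutex.Lock()
    defer attachDetachMutex.Unlock()
    // count the remaining mount references; only when this pod was the
    // last user, unmount dir and detach the disk ...
    return nil
}
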
Add a recognizer that can sniff the content type of data by asking each
serializer to try to decode it - this is for a "universal
decoder/deserializer" which can be used by client logic.
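
Roughly, the sniffing loop looks like the sketch below; the interface and names
are simplified stand-ins (the real code works with runtime.Object and the
registered serializers), not the committed API:

package recognizer

import "fmt"

// decoder is the one ability the recognizer needs from a serializer: an
// attempt to decode that fails cleanly when the data is not in its format.
// (The real code returns runtime.Object instead of interface{}.)
type decoder interface {
    Decode(data []byte) (interface{}, error)
}

// firstMatchDecoder asks each serializer to try to decode and returns the
// first success, so callers never need to know the wire format.
type firstMatchDecoder struct {
    decoders []decoder
}

func (d *firstMatchDecoder) Decode(data []byte) (interface{}, error) {
    var lastErr error
    for _, dec := range d.decoders {
        obj, err := dec.Decode(data)
        if err == nil {
            return obj, nil
        }
        lastErr = err
    }
    return nil, fmt.Errorf("data does not match any known serialization: %v", lastErr)
}
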
Add a codec factory, which provides the core primitives for content type
negotiation. Codec factories depend only on schemes, serializers, and
group/version pairs.
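
As a rough picture of the negotiation primitive (hypothetical types and method
names, not the committed API):

package serializer

import "fmt"

// codec is a stand-in for whatever can encode and decode one wire format.
type codec interface {
    Encode(obj interface{}) ([]byte, error)
    Decode(data []byte) (interface{}, error)
}

// codecFactory is an illustrative sketch: it is built only from the
// serializers registered per content type and the group/version pairs
// the scheme knows how to encode to.
type codecFactory struct {
    serializers   map[string]codec // keyed by content type, e.g. "application/json"
    groupVersions []string         // e.g. "v1", "extensions/v1beta1"
}

// serializerFor is the negotiation primitive: pick the serializer for the
// content type a client asked for, or report it as unsupported.
func (f codecFactory) serializerFor(contentType string) (codec, error) {
    s, ok := f.serializers[contentType]
    if !ok {
        return nil, fmt.Errorf("no serializer registered for content type %q", contentType)
    }
    return s, nil
}
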
Used like:
var pod *api.Pod
err := client.RetryOnConflict(client.DefaultBackoff, func() (err error) {
    pod, err = c.Pods("mynamespace").UpdateStatus(update)
    return
})
// err may be conflict
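RetryOnConflict reruns the function with the given backoff as long as the
update fails with a conflict; the error returned is the last one seen, so it
may still be a conflict once the backoff steps are exhausted.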