It includes some performance improvements for parsing JSON (which is
very important for us, since all Docker logs are JSON), as well as a
couple of new settings, such as forcing a flush of multiline logs after a
time period rather than having to wait until a new log is seen before
the previous one can safely be flushed.
- Remove CPU limits to enable CPU bursting once 1.2 begins enforcing CPU limits.
- Add a memory limit for fluentd-es to match fluentd-gcp.
- Explicitly set requests to match limits.
Internal types are not supposed to have JSON metadata (though in Kubernetes
they do), as is the case with OpenShift types. That means sort-by must
operate on versioned objects; otherwise it produces an "error: metadata
is not found" error when sorting internal types that lack JSON metadata.
This PR converts internal objects to versioned objects so that sort-by
can order them correctly without the metadata error, and then prints the
corresponding internal objects in sorted order.
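As a rough illustration of the approach, here is a minimal, self-contained Go sketch using hypothetical pod types (not the real Kubernetes API machinery): the versioned objects, which carry JSON tags, are what get sorted, and the internal objects are then printed in the resulting order.

```go
package main

import (
	"encoding/json"
	"fmt"
	"sort"
)

// Hypothetical "internal" type: no JSON tags, so a JSONPath-style field
// lookup against its JSON metadata would fail.
type internalPod struct {
	Name string
}

// Hypothetical "versioned" type: carries the JSON tags that sort-by relies on.
type versionedPod struct {
	Name string `json:"name"`
}

func main() {
	internal := []internalPod{{"zeta"}, {"alpha"}, {"mid"}}

	// Convert internal objects to their versioned form (a stand-in for the
	// real scheme conversion) and track which internal object each came from.
	versioned := make([]versionedPod, len(internal))
	order := make([]int, len(internal))
	for i, p := range internal {
		versioned[i] = versionedPod{Name: p.Name}
		order[i] = i
	}

	// Extract the sort key from the versioned object's JSON representation,
	// roughly what a JSONPath expression like {.name} would do.
	key := func(i int) string {
		raw, _ := json.Marshal(versioned[i])
		var fields map[string]string
		json.Unmarshal(raw, &fields)
		return fields["name"]
	}

	// Sort indices by the versioned objects' key...
	sort.Slice(order, func(a, b int) bool { return key(order[a]) < key(order[b]) })

	// ...and print the corresponding internal objects in that order.
	for _, i := range order {
		fmt.Println(internal[i].Name)
	}
}
```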
This change revises the way kube-system manifests are provided for clusters on Trusty. Originally, we maintained copies of some manifests under cluster/gce/trusty/kube-manifests, which does not scale and is hard to maintain. With this change, clusters on Trusty use the same source of manifests as ContainerVM. This change also fixes some minor problems, such as shell variable usage and comments, to better meet the style guidance.
Also register the replicationcontrollers/scale subresource. Along with
registering the resource, specify the cross-group override for the
subresource, since Scale belongs to autoscaling/v1 but
ReplicationController belongs to api/v1.
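For illustration only, the following self-contained Go sketch models the cross-group override as a simple lookup table (the names here are hypothetical, not the actual apiserver registration code): the parent resource resolves to core v1, while the scale subresource is overridden to autoscaling/v1.

```go
package main

import "fmt"

// groupVersionKind is a simplified stand-in for the apiserver's
// group/version/kind triple.
type groupVersionKind struct {
	Group, Version, Kind string
}

func main() {
	// The parent resource is served from the core ("") v1 group.
	resources := map[string]groupVersionKind{
		"replicationcontrollers": {Group: "", Version: "v1", Kind: "ReplicationController"},
	}

	// Cross-group override: the scale subresource is served as
	// autoscaling/v1 Scale even though its parent lives in core v1.
	overrides := map[string]groupVersionKind{
		"replicationcontrollers/scale": {Group: "autoscaling", Version: "v1", Kind: "Scale"},
	}

	// Resolution prefers an override when one is registered for the path.
	resolve := func(path string) groupVersionKind {
		if gvk, ok := overrides[path]; ok {
			return gvk
		}
		return resources[path]
	}

	fmt.Println(resolve("replicationcontrollers"))
	fmt.Println(resolve("replicationcontrollers/scale"))
}
```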
Starting Docker through Salt has always been problematic. Kubelet or
the babysitter process should start it. We've kept it around primarily
so we have a `service: docker` node for the Salt DAG.
Instead, we enable (but do not start) the Docker service in Salt. This
lets us keep the DAG node, but won't start the service.
There's another bug in Salt, where watches will start the service even
on `service.enabled`. So we remove the watches, and move them to our
existing Salt bug-fix script.
During a rolling update for Deployments, the total count of surge pods
is calculated by adding the desired number of pods (deployment.Spec.Replicas)
to maxSurge. During a kubectl rolling update, the total count of surge
pods is calculated by adding the original number of pods (oldRc.Spec.Replicas
via an annotation) to maxSurge. This commit changes the kubectl rolling
update calculation to use the desired replicas as well.
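A small worked example of the difference, as a Go sketch with made-up numbers:

```go
package main

import "fmt"

func main() {
	// Made-up scenario: the controller originally ran 10 pods, the update
	// targets 4 desired replicas, and maxSurge is 1.
	original, desired, maxSurge := 10, 4, 1

	// Previous kubectl rolling update behaviour: cap based on the original size.
	oldCap := original + maxSurge // up to 11 pods could exist at once

	// Deployment behaviour, which this commit adopts: cap based on the desired size.
	newCap := desired + maxSurge // up to 5 pods may exist at once

	fmt.Printf("old surge cap: %d, new surge cap: %d\n", oldCap, newCap)
}
```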