The kubelet sends status updates to flip the Ready condition of a pod after the
pod is already in the Running state. RunRC should wait until the pod's Ready
condition is true to make sure there is no pending status update that may affect
the follow-up performance test.
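A minimal sketch of such a wait using the current client-go API; the helper names and poll interval below are illustrative, not the actual RunRC code:

```go
package e2e

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodsReady polls until every pod selected by labelSelector reports the
// Ready condition as True, so no status updates are still in flight.
func waitForPodsReady(c kubernetes.Interface, ns, labelSelector string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pods, err := c.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: labelSelector})
		if err != nil {
			return false, err
		}
		for i := range pods.Items {
			if !isPodReady(&pods.Items[i]) {
				return false, nil
			}
		}
		// Only report done once at least one matching pod exists and all are ready.
		return len(pods.Items) > 0, nil
	})
}

// isPodReady reports whether the pod has the Ready condition set to True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}
```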
Without this, the Jobs client used by
kubectl used the v1 codec. You would not notice this
on, say, a GET, but when you tried an
Update, which performed client-side conversion,
you would get an error.
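The underlying idea, sketched with today's client-go REST config (the field names below are from the current API, not the code that was actually fixed): the client's codec must match the group/version of the resource it serializes, otherwise client-side conversion on Update fails. Jobs lived in extensions/v1beta1 at the time.

```go
package client

import (
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
)

// jobsRESTClient builds a REST client whose group/version and serializer match
// the Jobs resource (extensions/v1beta1 at the time of this fix) instead of
// falling back to the legacy core v1 codec.
func jobsRESTClient(base *rest.Config) (*rest.RESTClient, error) {
	cfg := rest.CopyConfig(base)
	cfg.APIPath = "/apis"
	cfg.GroupVersion = &schema.GroupVersion{Group: "extensions", Version: "v1beta1"}
	cfg.NegotiatedSerializer = scheme.Codecs.WithoutConversion()
	return rest.RESTClientFor(cfg)
}
```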
- Don't mess with non-test node labels in daemonset e2e test
Other e2e tests rely on existing labels on the nodes. The daemonset test should only
add and remove its own labels, as sketched below.
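A minimal sketch of that discipline with the current client-go API; the label key below (daemonset-e2e-test) is a hypothetical placeholder, not the key the test actually uses:

```go
package e2e

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// Hypothetical label key owned by the daemonset e2e test; every other label on
// the node is left untouched.
const daemonSetLabelKey = "daemonset-e2e-test"

// setTestLabel adds (or removes, when value is empty) only the test's own
// label on the node, preserving all pre-existing labels.
func setTestLabel(c kubernetes.Interface, nodeName, value string) error {
	node, err := c.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if node.Labels == nil {
		node.Labels = map[string]string{}
	}
	if value == "" {
		delete(node.Labels, daemonSetLabelKey)
	} else {
		node.Labels[daemonSetLabelKey] = value
	}
	_, err = c.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{})
	return err
}
```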
- Refactor node updating in daemonset e2e test
In upstream Kubernetes the kubelet is responsible for all pods that have spec.NodeName
set. In Mesos we have a two-stage scheduling process:
1. Pods with a pre-set spec.NodeName are still scheduled by the scheduler.
2. The kubelet only sees them once a Mesos task has been started and the executor
has passed the pod to the kubelet.
With this PR the Mesos scheduler deletes a pod that has spec.NodeName set and is
gracefully terminated, but not yet scheduled, e.g.
- because the termination happened just after creation and the scheduler was
not fast enough
- because the NodeSelector does not match.
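A rough sketch of that decision, with hypothetical names (shouldDeleteUnscheduled and the launched flag are illustrative, not the actual scheduler code):

```go
package scheduler

import corev1 "k8s.io/api/core/v1"

// shouldDeleteUnscheduled is a hypothetical predicate: a pod that is already
// being gracefully terminated (DeletionTimestamp set) and has spec.NodeName
// set, but was never launched as a Mesos task, has no kubelet responsible for
// it, so the Mesos scheduler has to delete it itself.
func shouldDeleteUnscheduled(pod *corev1.Pod, launched bool) bool {
	return pod.DeletionTimestamp != nil && pod.Spec.NodeName != "" && !launched
}
```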
- pre-create node API objects from the scheduler when offers arrive
- decline offers until nodes are registered
- turn slave attributes into k8s.mesosphere.io/attribute-* labels (see the sketch after this list)
- update labels from executor Register/Reregister
- watch nodes in scheduler to make non-Mesos labels available for NodeSelector matching
- add unit tests for label predicate
- add e2e test to check that slave attributes really end up as node labels
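A minimal sketch of the attribute-to-label mapping from the list above; the helper name and the restriction to string-valued attributes are assumptions:

```go
package node

import "fmt"

// attributeLabelPrefix is the label namespace used for Mesos slave attributes.
const attributeLabelPrefix = "k8s.mesosphere.io/attribute-"

// labelsForAttributes turns Mesos slave attributes (simplified here to a
// string map) into node labels under the k8s.mesosphere.io/attribute- prefix.
func labelsForAttributes(attrs map[string]string) map[string]string {
	labels := map[string]string{}
	for name, value := range attrs {
		labels[fmt.Sprintf("%s%s", attributeLabelPrefix, name)] = value
	}
	return labels
}
```

Together with the node watch in the scheduler, these labels can then be matched by a pod's NodeSelector like any other node label.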
Add an experimental network plugin implementation named "cni" that
uses the Container Networking Interface (CNI) specification for
configuring networking for pods.
https://github.com/appc/cni/blob/master/SPEC.md
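For illustration only, a network configuration following that specification could look like this; the bridge/host-local values are an example, not something this change ships:

```json
{
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```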