Automatic merge from submit-queue
rkt: Update the directory path for saving auth config.
Since #23308 is merged, we now have a more stable way to determine where to store the auth configs.
cc @yujuhong @sjpotter
Automatic merge from submit-queue
don't ship kube-registry-proxy and pause images in tars.
pause is built into containervm; if it's not on the machine, we should just pull it.
Nobody that I'm aware of uses kube-registry-proxy, and shipping it makes build/deployment
more complicated and slower.
Automatic merge from submit-queue
Allow lazy binding in credential providers; don't use it in AWS yet
This is step one for cross-region ECR support and has no visible effects yet.
I'm not crazy about the name LazyProvide. Perhaps the interface method could
keep that name and the package method of the same name could become
LateBind(). I still don't understand why the credential provider has a
DockerConfigEntry that has the same fields as, but is distinct from,
docker.AuthConfiguration; now that we do that conversion in more than one
place, I had to write a converter.
In step two, I'll add another intermediate, lazy provider for each AWS region,
whose empty LazyAuthConfiguration will have a refresh time of months or years.
Behind the scenes, it'll use an actual ecrProvider with the usual ~12 hour
credentials, that will get created (and later refreshed) only when kubelet is
attempting to pull an image. If we simply turned ecrProvider directly into a
lazy provider, we would bypass all the caching and get new credentials for
each image pulled.
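As a rough illustration of that plan, here is a minimal, self-contained sketch of a lazily-binding provider; the names (`lazyProvider`, `ecrLikeProvider`, `lazyProvide`) and the auth type are hypothetical stand-ins, not the actual kubelet credentialprovider API:
```
package main

import (
	"fmt"
	"sync"
	"time"
)

// authConfig is a stand-in for the docker auth entry type.
type authConfig struct {
	Username, Password string
	// expiry models the ~12 hour lifetime of real ECR credentials.
	expiry time.Time
}

// ecrLikeProvider models a provider that fetches short-lived credentials.
type ecrLikeProvider struct {
	region string
}

func (e *ecrLikeProvider) provide() authConfig {
	// The real provider would call the ECR API for this region here.
	return authConfig{
		Username: "AWS",
		Password: "token-for-" + e.region,
		expiry:   time.Now().Add(12 * time.Hour),
	}
}

// lazyProvider only constructs and queries the inner provider when an
// image pull actually needs credentials, so the inner provider's caching
// is not bypassed by eager resolution at kubelet startup.
type lazyProvider struct {
	region string

	mu     sync.Mutex
	inner  *ecrLikeProvider
	cached *authConfig
}

func (l *lazyProvider) lazyProvide() authConfig {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.inner == nil {
		l.inner = &ecrLikeProvider{region: l.region}
	}
	// Refresh only when the cached credentials have expired.
	if l.cached == nil || time.Now().After(l.cached.expiry) {
		c := l.inner.provide()
		l.cached = &c
	}
	return *l.cached
}

func main() {
	p := &lazyProvider{region: "us-west-2"}
	// Nothing is fetched until the first pull asks for credentials.
	cfg := p.lazyProvide()
	fmt.Println(cfg.Username, cfg.Password)
}
```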
Automatic merge from submit-queue
Kubelet: Better-defined Container Waiting state
For issues #20478 and #21125.
This PR corrected the logic and added a unit test for `ShouldContainerBeRestarted()`, cleaned up the `Waiting` state related code, and added a unit test for `generateAPIPodStatus()`.
Fixes #20478. Fixes #17971.
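For context, a minimal sketch of the kind of restart decision `ShouldContainerBeRestarted()` encodes, assuming only the restart policy and the container's last exit code matter; the types and helper names here are simplified stand-ins for the real kubelet types:
```
package main

import "fmt"

type restartPolicy string

const (
	restartAlways    restartPolicy = "Always"
	restartOnFailure restartPolicy = "OnFailure"
	restartNever     restartPolicy = "Never"
)

// containerStatus is a simplified stand-in for the kubelet's container status.
type containerStatus struct {
	running  bool
	exitCode int
}

// shouldRestart mirrors the shape of the decision: a running container is
// never restarted, and a terminated one is restarted according to policy.
func shouldRestart(policy restartPolicy, status containerStatus) bool {
	if status.running {
		return false
	}
	switch policy {
	case restartNever:
		return false
	case restartOnFailure:
		return status.exitCode != 0
	default: // Always
		return true
	}
}

func main() {
	fmt.Println(shouldRestart(restartOnFailure, containerStatus{exitCode: 0})) // false
	fmt.Println(shouldRestart(restartAlways, containerStatus{exitCode: 1}))    // true
}
```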
@yujuhong
Automatic merge from submit-queue
Do not throw creation errors for containers that fail immediately after being started
Fixes (hopefully) #23607
cc @dchen1107
Automatic merge from submit-queue
Updating go-restful to generate "type":"object" instead of "type":"any" in swagger-spec (breaks kubectl 1.1)
Updating go-restful to include https://github.com/emicklei/go-restful/pull/270 (replacing type "any" with type "object"; see https://github.com/swagger-api/swagger-codegen/issues/2347 for why we want to do that).
Ref https://github.com/kubernetes/kubernetes/issues/4700#issuecomment-194719759
First commit generated using:
```
godep restore
go get -u github.com/emicklei/go-restful
godep update github.com/emicklei/go-restful
```
Second commit generated by running:
```
./hack/update-swagger-spec.sh
```
Third commit generated by running:
```
./hack/update-api-reference-docs.sh
```
cc @kubernetes/sig-api-machinery @bgrant0607
Automatic merge from submit-queue
Client-gen: handle dotted group name, e.g., "authentication.k8s.io"
The client-gen used to assume the group name doesn't include a dot, but that's not true; e.g., we have the group `authentication.k8s.io`.
With this PR, client-gen uses the full group name when creating the directory (e.g., the client for the authentication group is generated at pkg/client/clientset_generated/release_1_3/typed/`authentication.k8s.io`/v1/). However, because Go doesn't allow dots in variable/function names, when the group name is used as part of a variable/function name, client-gen extracts the part before the first dot (e.g., authentication).
This PR also changes the group name of the test group from `testgroup` to `testgroup.k8s.io` to verify that client-gen generates sane code.
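A minimal sketch of that naming rule, using hypothetical helpers (`packageDir`, `identifierName`) rather than the actual client-gen code:
```
package main

import (
	"fmt"
	"strings"
)

// packageDir keeps the full group name, dots included, for the generated
// client directory (e.g. .../typed/authentication.k8s.io/v1/).
func packageDir(group, version string) string {
	return "pkg/client/clientset_generated/release_1_3/typed/" + group + "/" + version + "/"
}

// identifierName keeps only the part before the first dot, since Go
// identifiers cannot contain dots (authentication.k8s.io -> authentication).
func identifierName(group string) string {
	if i := strings.Index(group, "."); i >= 0 {
		return group[:i]
	}
	return group
}

func main() {
	fmt.Println(packageDir("authentication.k8s.io", "v1"))
	fmt.Println(identifierName("authentication.k8s.io")) // authentication
}
```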
cc @deads2k for #20573
Automatic merge from submit-queue
Honor starting resourceVersion in watch cache
Compare the requested resourceVersion to each event's resourceVersion to ensure events that occurred
in the past are not sent to the client.
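A minimal sketch of that comparison, assuming events carry a numeric resourceVersion and the client asked to start watching at `requestedRV`; this is not the actual watch cache code:
```
package main

import "fmt"

type event struct {
	resourceVersion uint64
	object          string
}

// filterFromRV drops events whose resourceVersion is at or before the
// requested starting point, so the client never sees events from the past.
func filterFromRV(requestedRV uint64, events []event) []event {
	var out []event
	for _, e := range events {
		if e.resourceVersion > requestedRV {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	evs := []event{{5, "old"}, {7, "newer"}, {9, "newest"}}
	fmt.Println(filterFromRV(6, evs)) // only the events at resourceVersions 7 and 9
}
```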
Fixes #24194
cc @liggitt @wojtek-t
Automatic merge from submit-queue
Fix GKE kube-up to correctly find an IGM from a multi-zone cluster
I've confirmed that this successfully brings up a cluster, fixing the immediate issue with the new e2e test. Sorry about not properly vetting it in the original PR (#24075).
This does cause a warning message to be printed because of the handling of the NUM_NODES variable though, which I could fix if you guys think it's worth it:
```
Detected 6 ready nodes, found 6 nodes out of expected 3. Found more nodes than expected, your cluster may not behave correctly.
```
@quinton-hoole
Automatic merge from submit-queue
rkt: Add pre-stop lifecycle hooks for rkt.
When a pod is being terminated, the pre-stop hooks of all the containers
will be run before the containers are stopped.
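A minimal sketch of that ordering, with hypothetical `runPreStopHook` and `stopContainer` helpers standing in for the actual rkt runtime code:
```
package main

import "fmt"

type container struct {
	name       string
	hasPreStop bool
}

func runPreStopHook(c container) {
	// The real runtime would exec the hook inside the container (or issue
	// an HTTP GET), bounded by the pod's termination grace period.
	fmt.Println("running pre-stop hook for", c.name)
}

func stopContainer(c container) {
	fmt.Println("stopping", c.name)
}

// terminatePod runs every container's pre-stop hook before any container
// is stopped, matching the ordering described above.
func terminatePod(containers []container) {
	for _, c := range containers {
		if c.hasPreStop {
			runPreStopHook(c)
		}
	}
	for _, c := range containers {
		stopContainer(c)
	}
}

func main() {
	terminatePod([]container{{"app", true}, {"sidecar", false}})
}
```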
cc @yujuhong @Random-Liu @sjpotter