Automatic merge from submit-queue
CRI: add docs for sysctls
#34830 adds the `sysctls` feature to CRI, based on sandbox annotations; this PR adds docs for it.
@yujuhong @timstclair @jonboulle
Automatic merge from submit-queue
Node E2E: Avoid printing test result twice.
This has been a problem for a long time.
`RunSshCommand` includes the command output in the error. If the command running the test fails, the test output is also included in the error. [The runner prints both the test output and the error](https://github.com/kubernetes/kubernetes/blob/master/test/e2e_node/runner/remote/run_remote.go#L270), which causes the test result to be printed twice. (See the [test result](https://storage.googleapis.com/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/10968/build-log.txt) on node tmp-node-e2e-af900a4d-e2e-node-ubuntu-trusty-docker9-v1-image.)
This PR changes `RunSshCommand` so that it no longer puts the command output into the error, leaving it to the caller to decide how to handle the output when the command fails, as sketched below.
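Sketched in Go, the shape of the change (simplified; function and parameter names here are illustrative, not the exact code in the node e2e remote runner):
```go
package remote

import (
	"fmt"
	"os/exec"
	"strings"
)

// runSSHCommand runs cmd on host over ssh and returns its combined output.
// On failure the output is returned separately rather than folded into the
// error, so the caller decides whether (and how) to print it.
func runSSHCommand(host string, cmd ...string) (string, error) {
	output, err := exec.Command("ssh", append([]string{host}, cmd...)...).CombinedOutput()
	if err != nil {
		// Previously the output was embedded in the error itself,
		// causing callers that log both to print the result twice.
		return string(output), fmt.Errorf("command [ssh %s %s] failed: %v",
			host, strings.Join(cmd, " "), err)
	}
	return string(output), nil
}
```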
Automatic merge from submit-queue
CRI: Clarify User in CRI.
Addressed https://github.com/kubernetes/kubernetes/pull/36423#issuecomment-259343135.
This PR clarifies the user related fields in CRI.
One question is that:
What is the meaning of the `run_as_user` field in `LinuxSandboxSecurityContext`?
* **Is it a user on the host?** Then it doesn't make sense; users shouldn't care about which users exist on the host.
* **Is it a user inside the infra container image?** This is how the field is currently used. However, the infra container is Docker-specific, so I'm not sure whether we should expose this in CRI.
* **Is it the default user inside the pod?** It tells the runtime that if a container (the infra container, or some other helper container such as a streaming container) does not specify a `user`, the default "sandbox user" should be used. But then how can we guarantee that the infra or helper container images have that `user`?
* **Does it not make sense at all?** If we remove it, we rely on the shim to set the right user (maybe always root) for infra or helper containers (if there are any in the future); I'm not sure whether that is what we expect.
@yujuhong @feiskyer @jonboulle @yifan-gu
/cc @kubernetes/sig-node
Automatic merge from submit-queue
Add e2e test for CockroachDB statefulset
Refactor the statefulset e2e test code for clustered applications, and add a test for CockroachDB.
The yaml file is copied from examples/cockroachdb/
cc @erictune @foxish @kow3ns @kubernetes/sig-apps
Automatic merge from submit-queue
e2e pod cleanup test: restrict pods to be assigned to nodes observed …
The test checks the individual kubelet /runningPods endpoint based on the
initial list of nodes it observes. It is important that all pods are
scheduled only onto those nodes. Node labels are applied to ensure no stray pods land on
other nodes (see the sketch below).
This fixes #35197
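A rough client-go sketch of the labeling step (illustrative only: the helper name is made up, and pre-context-era client-go signatures are assumed; the test pods would additionally carry a matching `nodeSelector`):
```go
package e2e

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// labelObservedNodes stamps every node seen at test start with a label;
// test pods then use a matching nodeSelector so the scheduler cannot
// place them on nodes that joined later.
func labelObservedNodes(c kubernetes.Interface, key, value string) error {
	nodes, err := c.CoreV1().Nodes().List(metav1.ListOptions{})
	if err != nil {
		return err
	}
	for i := range nodes.Items {
		node := &nodes.Items[i]
		if node.Labels == nil {
			node.Labels = map[string]string{}
		}
		node.Labels[key] = value
		if _, err := c.CoreV1().Nodes().Update(node); err != nil {
			return err
		}
	}
	return nil
}
```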
Automatic merge from submit-queue
K8s 1.5 keeps container-vm as default node image on GCE
There is a concern that some GCE users may be running automation that
(a) turns up ephemeral clusters and (b) always uses the latest K8s
release. If any of these workloads fall outside the set supported on
GCI, cutting the release will break the automation. We are therefore
delaying this change until we have provided sufficient warning.
```release-note
K8s 1.5 keeps container-vm as the default node image on GCE for backwards-compatibility reasons. Please be aware that container-vm is officially deprecated and you should replace it with GCI if at all possible. You can review the migration guide for more detail: https://cloud.google.com/container-engine/docs/node-image-migration
```
/cc @aronchick @vishh @roberthbailey
Automatic merge from submit-queue
V2resource fixes
When using `kubectl set resources`, all resource fields that are not being set are reset.
For example,
$ kubectl set resources deployments nginx --limits=cpu=100m
followed by
$ kubectl set resources deployments nginx --limits=memory=256Mi
would result in the nginx deployment only limiting memory at 256Mi, with the previous
limit placed on the cpu wiped out. This behavior is corrected so that each invocation
only modifies the fields set in that command, and the tests were changed so that the
desired behavior is checked.
Also fixed a typo:
you must specify an update to requests or limits or (in the form of --requests/--limits)
corrected to
you must specify an update to requests or limits (in the form of --requests/--limits)
Implemented both the dry run and local flags.
Added test cases to show that both flags are operating as intended.
Removed the print statement "running in local mode" as in PR #35112.
The original PR associated with these fixes was reverted because it caused a flake in hack/make-rules/test-cmd.sh. I gave the 'kubectl set resources' tests their own deployment, set terminationGracePeriodSeconds to 0, and ran test-cmd.sh for hours without hitting the flake.
Automatic merge from submit-queue
[Federation][init-10d] Use the right service names in controller manager.
Please review only the last commit here. It is based on PR #36048, which will be reviewed independently.
Design Doc: PR #34484
cc @kubernetes/sig-cluster-federation @nikhiljindal
Automatic merge from submit-queue
Garbage collection tests the MaxPerPodContainers and MaxContainers constraints
This is the first version of this test. It tests that containers are garbage collected according to the default configuration.
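For reference, the kubelet expresses these constraints roughly as follows (a sketch mirroring the kubelet's container GC policy; field names may differ slightly from the names in the PR title):
```go
package kubelet

import "time"

// ContainerGCPolicy mirrors the garbage collection knobs this test exercises.
type ContainerGCPolicy struct {
	// MinAge is how long a container must be dead before it becomes
	// eligible for garbage collection.
	MinAge time.Duration
	// MaxPerPodContainer caps the number of dead containers retained
	// per container per pod; a negative value means no limit.
	MaxPerPodContainer int
	// MaxContainers caps the number of dead containers retained on
	// the whole node; a negative value means no limit.
	MaxContainers int
}
```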
Automatic merge from submit-queue
Close tunnels after failed healthchecks.
When we fail an ssh-tunnel healthcheck, we currently leak a file descriptor keeping the SSH connection open.
This closes the underlying tunnel before removing our pointer to it. It is possible that the tunnel was functional but the healthcheck failed for some other reason (e.g. kubelet healthz down); in that case we would close an in-use tunnel, but I think that is acceptable.
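A minimal sketch of the fix (the `tunnel` interface and helper here are illustrative, not the master's actual tunneler code):
```go
package master

import "log"

// tunnel abstracts an open SSH tunnel whose connection can be closed.
type tunnel interface {
	Close() error
}

// closeThenRemove closes the suspect tunnel before dropping the last
// pointer to it, so the SSH connection's file descriptor cannot leak
// even if the healthcheck failure was spurious (e.g. kubelet healthz down).
func closeThenRemove(tunnels []tunnel, i int) []tunnel {
	if err := tunnels[i].Close(); err != nil {
		log.Printf("failed to close tunnel: %v", err)
	}
	return append(tunnels[:i], tunnels[i+1:]...)
}
```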
Automatic merge from submit-queue
[kubelet]update some --cgroups-per-qos to --experimental-cgroups-per-qos
Following https://github.com/kubernetes/kubernetes/pull/36767, some fields in the docs and in hack/local-up-cluster.sh still need to be updated.
Automatic merge from submit-queue
Add a flag allowing contention profiling of the API server
Useful for performance debugging.
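For context, this is roughly how such a flag is typically wired up in a Go server (a generic sketch, not the API server's actual code):
```go
package main

import (
	"flag"
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/, including /debug/pprof/block
	"runtime"
)

func main() {
	contentionProfiling := flag.Bool("contention-profiling", false,
		"enable lock contention (block) profiling")
	flag.Parse()

	if *contentionProfiling {
		// A rate of 1 records every blocking event, which is what
		// makes /debug/pprof/block useful for contention debugging.
		runtime.SetBlockProfileRate(1)
	}
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```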
cc @smarterclayton @timothysc @lavalamp
```release-note
Add a flag allowing contention profiling of the API server
```
Automatic merge from submit-queue
kubectl: add less verbose version
The kubectl version output is very complex, which makes it hard for users
and vendors to extract actionable information. For example, during the
recent Kubernetes 1.4.3 TLS security scramble, I had to write a one-liner
for users to pull out the server version number so they could figure out
whether they were vulnerable:
```
$ kubectl version | grep -i Server | sed -n 's%.*GitVersion:"\([^"]*\).*%\1%p'
```
Instead, this patch outputs a simple format by default:
```
./kubectl version
Client Version: v1.4.3
Server Version: v1.4.3
```
Adding the `--verbose` flag will output the old format.
Automatic merge from submit-queue
Cleanup pod in MatchContainerOutput
MatchContainerOutput always creates a pod and does not clean it up. We need
to fix this to make retrying these scenarios work properly.
When there is an error in, say, the first attempt of ExpectNoErrorWithRetries
(for example, in the "Pods should contain environment variables for services" test),
the retry logic calls MatchContainerOutput a second time, and
podClient.create fails, correctly, since the pod was not cleaned up the
first time MatchContainerOutput was called (see the sketch below).
Fixes #35089
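The gist of the fix, sketched with simplified stand-in types (`podClient` and `Pod` here are illustrative, not the e2e framework's real types):
```go
package framework

// Pod is a minimal stand-in for the API pod object.
type Pod struct{ Name string }

// podClient is a minimal stand-in for the framework's pod client.
type podClient interface {
	Create(*Pod) (*Pod, error)
	Delete(name string) error
}

// matchContainerOutput creates the pod and now always deletes it before
// returning, so ExpectNoErrorWithRetries can call it again without the
// second Create failing on a name collision.
func matchContainerOutput(c podClient, pod *Pod, check func(*Pod) error) error {
	created, err := c.Create(pod)
	if err != nil {
		return err
	}
	// Clean up on every exit path, success or failure.
	defer c.Delete(created.Name)
	return check(created)
}
```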
Automatic merge from submit-queue
Change dnsutils image to use alpine
This reduces the size of the dnsutils image and should reduce the number of failed e2e test runs due to image pull timeouts.
Automatic merge from submit-queue
[kubelet] rename --cgroups-per-qos to --experimental-cgroups-per-qos
This reflects the true nature of the "cgroups per qos" feature.
```release-note
* Rename `--cgroups-per-qos` to `--experimental-cgroups-per-qos` in Kubelet
```
Automatic merge from submit-queue
tests: update memory resource limits
```release-note
NONE
```
On Ubuntu, the `RestartNever` test OOMs against its cgroup limit fairly
frequently.
This bumps the limit up to something suitably large, since the test
isn't testing anything memory-related anyway.
Fixes #36159
Fix based on https://github.com/kubernetes/kubernetes/issues/36159#issuecomment-258992255
cc @yujuhong @saad-ali
Automatic merge from submit-queue
Add test for: mount a secret with another secret having same name in different namespace
**What this PR does / why we need it**:
If two secrets exist with the same name but in different namespaces, a pod in one namespace should be able to mount its namespace's secret without issue.
Automatic merge from submit-queue
Implement CanMount() for gfsMounter for linux
**What this PR does / why we need it**:
This implements the CanMount() check for glusterfs. If the mount binary is not present on the underlying node, the mount will not proceed and an error stating so is returned.
Related to issue : https://github.com/kubernetes/kubernetes/issues/36098
Related to a similar change for NFS:
https://github.com/kubernetes/kubernetes/pull/36280
**Release note**:
`Check binaries for GlusterFS on the underlying node before doing mount`
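A minimal sketch of the check, assuming the helper binary path used on most distros (`/sbin/mount.glusterfs`); the real implementation may probe additional paths or platforms:
```go
package glusterfs

import (
	"fmt"
	"os"
)

type glusterfsMounter struct{}

// CanMount verifies the GlusterFS userspace mount helper exists on the
// node before any mount is attempted, so the failure surfaces as the
// clear event shown in the sample output below.
func (g *glusterfsMounter) CanMount() error {
	const binary = "/sbin/mount.glusterfs"
	if _, err := os.Stat(binary); err != nil {
		return fmt.Errorf("required binary %s is missing", binary)
	}
	return nil
}
```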
Sample output from testing in GCE/GCI:
rkouj@rkouj0:~/go/src/k8s.io/kubernetes$ kubectl describe pods
Name:           glusterfs
Namespace:      default
Node:           e2e-test-rkouj-minion-group-kjq3/10.240.0.3
Start Time:     Fri, 11 Nov 2016 17:22:04 -0800
Labels:         <none>
Status:         Pending
IP:
Controllers:    <none>
Containers:
  glusterfs:
    Container ID:
    Image:              gcr.io/google_containers/busybox
    Image ID:
    Port:
    QoS Tier:
      cpu:              Burstable
      memory:           BestEffort
    Requests:
      cpu:              100m
    State:              Waiting
      Reason:           ContainerCreating
    Ready:              False
    Restart Count:      0
    Environment Variables:
Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
Volumes:
  glusterfs:
    Type:               Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
    EndpointsName:      glusterfs-cluster
    Path:               kube_vol
    ReadOnly:           true
  default-token-2zcao:
    Type:               Secret (a volume populated by a Secret)
    SecretName:         default-token-2zcao
Events:
  FirstSeen     LastSeen        Count   From                                            SubobjectPath   Type            Reason          Message
  ---------     --------        -----   ----                                            -------------   --------        ------          -------
  8s            8s              1       {default-scheduler }                                            Normal          Scheduled       Successfully assigned glusterfs to e2e-test-rkouj-minion-group-kjq3
  7s            4s              4       {kubelet e2e-test-rkouj-minion-group-kjq3}                      Warning         FailedMount     Unable to mount volume kubernetes.io/glusterfs/6bb04587-a876-11e6-a712-42010af00002-glusterfs (spec.Name: glusterfs) on pod glusterfs (UID: 6bb04587-a876-11e6-a712-42010af00002). Verify that your node machine has the required components before attempting to mount this volume type. Required binary /sbin/mount.glusterfs is missing