Commit Graph

1732 Commits

Author SHA1 Message Date
Kubernetes Submit Queue
c0c6b082b8 Merge pull request #54230 from jsafrane/ceph-test-sleep
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Allow Ceph server some time to start

The Ceph server needs to create our "foo" volume on startup. This keeps the image small; however, it makes the server container slow to start.

Add a sleep before the server is considered usable. Without this PR, all pods that use Ceph fail to start for a couple of seconds with a cryptic "image foo not found" error, which clutters the test and pod logs and makes it harder to spot real errors.
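A minimal sketch of the idea in Go; the PR itself just adds a sleep, while the polling helper and readiness probe below are hypothetical:

```go
package framework

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForCephImage gives the Ceph server container time to create the "foo"
// image instead of letting pods fail immediately with "image foo not found".
// The probe argument is whatever check the caller uses to decide the server
// is usable (hypothetical helper, not the actual patch).
func waitForCephImage(probe func() error) error {
	return wait.Poll(2*time.Second, 60*time.Second, func() (bool, error) {
		if err := probe(); err != nil {
			return false, nil // not ready yet; keep polling
		}
		return true, nil
	})
}
```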

```release-note
NONE
```

/sig storage
/sig testing
/assign @jeffvance @copejon
2018-01-10 13:19:33 -08:00
Lantao Liu
f64c508e2e Add getCRIClient and set default values for CRI related flags 2018-01-09 22:59:17 +00:00
Kubernetes Submit Queue
bd4d511a40 Merge pull request #57852 from misterikkit/moveScheduler
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Move scheduler out of plugin directory

**What this PR does / why we need it**:

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
This is but one step toward resolving the referenced issue.
/ref #57579

**Special notes for your reviewer**:

**Release note**:

```release-note
Default scheduler code is moved out of the plugin directory.
plugin/pkg/scheduler -> pkg/scheduler
plugin/cmd/kube-scheduler -> cmd/kube-scheduler
```
/sig scheduling
2018-01-05 22:20:13 -08:00
Kubernetes Submit Queue
8ba39a3fcb Merge pull request #57037 from runcom/nfs-overlayfs
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

test: e2e: support NFS test on overlayfs

TL;DR: NFS is not supported on overlayfs. In CRI-O we default to
overlayfs as the storage driver, so running the NFS test always
fails to export the directory. This patch adds a volume mount from the
host so /exports isn't on overlayfs anymore.
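A rough sketch of the described change, assuming access to the NFS server pod spec (the path and volume name below are illustrative, not the exact patch):

```go
package framework

import v1 "k8s.io/api/core/v1"

// mountExportsFromHost backs the NFS server's /exports directory with a
// hostPath volume so the exported directory is not on overlayfs.
func mountExportsFromHost(pod *v1.Pod) {
	pod.Spec.Volumes = append(pod.Spec.Volumes, v1.Volume{
		Name: "nfs-exports",
		VolumeSource: v1.VolumeSource{
			HostPath: &v1.HostPathVolumeSource{Path: "/tmp/nfs-exports"},
		},
	})
	c := &pod.Spec.Containers[0]
	c.VolumeMounts = append(c.VolumeMounts, v1.VolumeMount{
		Name:      "nfs-exports",
		MountPath: "/exports",
	})
}
```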

Signed-off-by: Antonio Murdaca <runcom@redhat.com>

@Random-Liu @mrunalp  PTAL - we got this after enabling all the e2e tests as discussed at kubecon (thanks!)

```release-note
NONE
```
2018-01-05 15:51:58 -08:00
Jonathan Basseri
85c5862552 Fix scheduler refs in BUILD files.
Update references to moved scheduler code.
2018-01-05 15:05:01 -08:00
Jonathan Basseri
30b89d830b Move scheduler code out of plugin directory.
This moves plugin/pkg/scheduler to pkg/scheduler and
plugin/cmd/kube-scheduler to cmd/kube-scheduler.

Bulk of the work was done with gomvpkg, except for kube-scheduler main
package.
2018-01-05 15:05:01 -08:00
Walter Fender
9187b343e1 Split the NodeController into lifecycle and ipam pieces.
Preparatory work for removing the cloud provider dependency from the node
controller running in the Kube Controller Manager. This splits the node
controller into its two major pieces: life-cycle and CIDR/IP
management. Both pieces currently need the cloud system to do their work.
Removing the lifecycle piece's dependency on the cloud will be fixed in a follow-up PR.

Moved node scheduler code to live with node lifecycle controller.
Got the IPAM/Lifecycle split completed. Still need to rename pieces.
Made changes to the utils and tests so they would be in the appropriate
package.
Moved the node based ipam code to nodeipam.
Made the relevant tests pass.
Moved common node controller util code to nodeutil.
Removed unneeded pod informer sync from node ipam controller.
Fixed linter issues.
Factored in feedback from @gmarek.
Factored in feedback from @mtaufen.
Undoing unneeded change.
2018-01-04 12:48:08 -08:00
Kubernetes Submit Queue
a1db9436cd Merge pull request #56382 from mikkeloscar/e2e-psp-and-rbac
Automatic merge from submit-queue (batch tested with PRs 56382, 57549). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

e2e: Only create PSP if RBAC is enabled

**What this PR does / why we need it**: Creating privileged PSPs during e2e tests depends on RBAC being enabled in the target cluster. However, it is possible to use PSPs without RBAC, so only create PSPs during e2e tests if both PSP and RBAC are enabled.
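A condensed sketch of the gating logic, with hypothetical helper names:

```go
// maybeCreatePrivilegedPSP only sets up the privileged PSP binding when both
// PSP and RBAC are enabled; since PSP can be used without RBAC, the binding
// must not be assumed. Helper names are illustrative.
func maybeCreatePrivilegedPSP(pspEnabled, rbacEnabled bool, createBinding func() error) error {
	if !pspEnabled || !rbacEnabled {
		return nil
	}
	return createBinding()
}
```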

Fix #57840

**Special notes for your reviewer**:

**Release note**:

```release-note
Only create Privileged PSP binding during e2e tests if RBAC is enabled.
```
2018-01-04 06:40:31 -08:00
Jeff Grafton
efee0704c6 Autogenerate BUILD files 2017-12-23 13:12:11 -08:00
Bowei Du
301488b235 Expose all GCE cloud provider service versions, not just the GA API 2017-12-22 11:24:10 -08:00
Antonio Murdaca
ceb6dea1e0 test: e2e: support NFS test on overlayfs
TL;DR: NFS is not supported on overlayfs. In CRI-O we default to
overlayfs as the storage driver, so running the NFS test always
fails to export the directory. This patch adds a volume mount from the
host so /exports isn't on overlayfs anymore.

Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2017-12-20 12:30:43 +01:00
liz
b814c5677f Collect all the assorted image URLs from e2e tests in one place
utils/image/manifest has an additional `arch` parameter, which determines
whether an image ends in `-$ARCH` (like `-amd64`).

All locations that previously had gcr.io URLs referenced in constants or inline
have been updated to refer to test/utils/image.
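A hypothetical sketch of what such a centralized image config might look like (not the actual test/utils/image API):

```go
package image

import "fmt"

// Config describes one test image; hasArch controls whether the image name
// gets an "-$ARCH" suffix such as "-amd64".
type Config struct {
	registry string
	name     string
	version  string
	hasArch  bool
}

// GetE2EImage renders the full image URL for the given architecture.
func (c Config) GetE2EImage(arch string) string {
	if c.hasArch {
		return fmt.Sprintf("%s/%s-%s:%s", c.registry, c.name, arch, c.version)
	}
	return fmt.Sprintf("%s/%s:%s", c.registry, c.name, c.version)
}
```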
2017-12-19 12:16:07 -05:00
Tim Hockin
eba5b6092a Use k8s.gcr.io vanity domain for container images 2017-12-18 09:18:34 -08:00
Kubernetes Submit Queue
94327c5f72 Merge pull request #56754 from dims/remove-hacks-for-mesos
Automatic merge from submit-queue (batch tested with PRs 57127, 57011, 56754, 56601, 56483). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Remove hacks added for mesos

**What this PR does / why we need it**:

Since Mesos is no longer in the main repository and since we have
things like dynamic kubelet configuration in progress, we should
drop these undocumented, untested, private hooks.

cmd/kubelet/app/server.go::CreateAPIServerClientConfig
CreateAPIServerClientConfig::getRuntime
pkg/kubelet/kubelet_pods.go::getPhase

Also remove stuff from Dependencies struct that were specific to
the Mesos integration (ContainerRuntimeOptions and Options)

Also remove stale references in test/e2e and the test owners file


**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
Drop hacks used for Mesos integration that was already removed from main kubernetes repository
```
2017-12-17 06:25:56 -08:00
Kubernetes Submit Queue
45b5c0ec42 Merge pull request #57107 from liggitt/fix-e2e-panic
Automatic merge from submit-queue (batch tested with PRs 56386, 57204, 55692, 57107, 57177). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Check ns setup error during e2e

If namespace create fails during e2e setup, don't panic
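Schematically, the fix amounts to checking the creation error instead of using the nil namespace (names below are hypothetical):

```go
package framework

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// setupNamespace returns an error instead of letting the suite panic later
// on a nil namespace object.
func setupNamespace(create func(string) (*v1.Namespace, error), baseName string) (*v1.Namespace, error) {
	ns, err := create(baseName)
	if err != nil {
		return nil, fmt.Errorf("error creating namespace %q: %v", baseName, err)
	}
	return ns, nil
}
```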

/assign @tallclair 

```release-note
NONE
```
2017-12-17 04:19:44 -08:00
Mikkel Oscar Lyderik Larsen
a37d8ec1f9 Don't create PSP binding when RBAC is not enabled 2017-12-16 15:28:42 +01:00
Kubernetes Submit Queue
1433716af4 Merge pull request #56401 from porridge/log-actual-rc
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Log actual return code, not the default value.

**What this PR does / why we need it**:

Previously the code would log 127 no matter what, which is quite
useless.
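Roughly, the fix is to derive the code from the error that was actually returned and fall back to the default only when nothing better is known (a sketch, not the exact diff):

```go
package framework

import (
	"os/exec"
	"syscall"
)

// returnCode extracts the observed exit status from an exec error and only
// falls back to 127 when the real status is unavailable.
func returnCode(err error) int {
	if exitErr, ok := err.(*exec.ExitError); ok {
		if status, ok := exitErr.Sys().(syscall.WaitStatus); ok {
			return status.ExitStatus()
		}
	}
	return 127
}
```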

**Release note**:
```release-note
NONE
```
2017-12-15 20:08:16 -08:00
Kubernetes Submit Queue
baf93c060e Merge pull request #55995 from supereagle/extensions-client-with-version
Automatic merge from submit-queue (batch tested with PRs 55900, 55995, 55913, 55467, 55376). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Use extensions client with explicit version

**What this PR does / why we need it**:
The extensions client without an explicit version has been deprecated; change usages to the client with an explicit version.

**Which issue(s) this PR fixes**:
Fixes partially #55993

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-12-14 00:45:15 -08:00
Kubernetes Submit Queue
876259435a Merge pull request #55991 from supereagle/core-client-with-version
Automatic merge from submit-queue (batch tested with PRs 54604, 55781, 55806, 55935, 55991). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Use core client with explicit version

**What this PR does / why we need it**:
The core client without an explicit version has been deprecated; change usages to the client with an explicit version.
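The pattern, sketched with era-appropriate client-go calls (the same change applies to the extensions client in the PR above):

```go
package framework

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listDefaultPods uses the version-qualified accessor: CoreV1() replaces the
// deprecated unversioned Core() (likewise ExtensionsV1beta1() replaces
// Extensions()).
func listDefaultPods(c kubernetes.Interface) (*v1.PodList, error) {
	return c.CoreV1().Pods(metav1.NamespaceDefault).List(metav1.ListOptions{})
}
```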

**Which issue(s) this PR fixes**:
Fixes partially #55993

**Special notes for your reviewer**:
/cc @kubernetes/sig-api-machinery-pr-reviews
/cc @caesarxuchao @k82cn @sttts @kevin-wangzefeng 

**Release note**:
```release-note
NONE
```
2017-12-13 22:26:01 -08:00
Anthony Yeh
bbcf59bde0 gce/upgrade.sh: Prompt if etcd version is unspecified.
We shouldn't upgrade etcd without first warning the user that some etcd
version transitions can't be undone. We don't know what version the user
currently has, so we require either an explicit version and image, or an
interactive acknowledgement of this caveat.

This is modeled after the STORAGE_MEDIA_TYPE prompt just above.
2017-12-13 14:11:11 -08:00
Jordan Liggitt
3d5288ebf3 Check ns setup error during e2e 2017-12-12 13:20:56 -05:00
Kubernetes Submit Queue
f9f8dc30c5 Merge pull request #52748 from leblancd/v6_e2e_host_port
Automatic merge from submit-queue (batch tested with PRs 52748, 56623). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Add brackets around IPv6 addrs in e2e test IP:port endpoints

There are several locations in the e2e tests where endpoints of the
form IPv6:port use IPv6 addresses directly, without surrounding brackets.
Brackets are required around IPv6 addresses in this case, in order to
distinguish the colons in the IPv6 address from the colon immediately
preceding the port.

Also, wherever the curl command might be used with an IPv6 address
surrounded in brackets, the "-g" argument is added to the curl
command line arguments so that the brackets can be interpreted
correctly.
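For instance, building endpoints with net.JoinHostPort gets the brackets right automatically (a sketch; the PR itself touches many existing call sites):

```go
package framework

import (
	"net"
	"strconv"
)

// endpoint joins an IP and a port; net.JoinHostPort adds the brackets an
// IPv6 address needs, so ("fd00::1", 8080) becomes "[fd00::1]:8080" while
// ("10.0.0.1", 8080) stays "10.0.0.1:8080".
func endpoint(ip string, port int) string {
	return net.JoinHostPort(ip, strconv.Itoa(port))
}
```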

fixes #52746



**What this PR does / why we need it**:
This PR adds brackets around IPv6 addresses when they appear as part of an IPv6-addr:port endpoint
in the e2e tests. This is needed because any connection that attempts to use an IPv6-addr:port
endpoint without brackets surrounding the IPv6-addr will fail.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52746

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-12-04 14:22:10 -08:00
Anirudh Ramanathan
c72fc5006c Revert "Fix for the network partition tests" 2017-12-04 04:26:14 -08:00
Davanum Srinivas
7568462ec3 Remove hacks added for mesos
Since Mesos is no longer in the main repository and since we have
things like dynamic kubelet configuration in progress, we should
drop these undocumented, untested, private hooks.

cmd/kubelet/app/server.go::CreateAPIServerClientConfig
CreateAPIServerClientConfig::getRuntime
pkg/kubelet/kubelet_pods.go::getPhase

Also remove stuff from Dependencies struct that were specific to
the Mesos integration (ContainerRuntimeOptions and Options)

Also remove stale references in test/e2e and the test owners file
2017-12-03 13:52:30 -05:00
Anirudh
32942f1462 Fix for the network partition tests (and cluster autoscaling too
potentially)
2017-12-01 13:24:23 -08:00
Kubernetes Submit Queue
3904cc7803 Merge pull request #54687 from andyzhangx/createvolume-fix
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

fix CreateVolume func: use search mode instead

**What this PR does / why we need it**:
This is a small fallback for the CreateVolume func: use search mode for the Dedicated kind, as @rootfs suggested.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52396

**Special notes for your reviewer**:
I reference the implementation of v1.6 in the same CreateVolume func
https://github.com/kubernetes/kubernetes/blob/release-1.6/pkg/cloudprovider/providers/azure/azure_storage.go#L213-L247

**Release note**:

```
fix azure storage account exhausting issue by using azure disk mount
```
/sig azure

@rootfs @feiskyer @karataliu
2017-11-30 13:55:29 -08:00
Marcin Owsiany
a8cfd22c16 Log actual return code, not the default value. 2017-11-27 10:44:08 +01:00
Mikkel Oscar Lyderik Larsen
3836857229 e2e: Only create PSP if RBAC is enabled
Using PSP in e2e tests depends on RBAC being enabled in the cluster, and
thus PSP should only be used when RBAC is.
2017-11-27 01:11:38 +01:00
supereagle
032416c75d use core client with explicit version
fix more usage of deprecated core client
2017-11-25 08:14:10 +08:00
supereagle
9c02d7e38c use extensions client with explicit version 2017-11-22 21:18:14 +08:00
Kubernetes Submit Queue
61792ef482 Merge pull request #55786 from porridge/debug-e2e-pod-wait
Automatic merge from submit-queue (batch tested with PRs 54316, 53400, 55933, 55786, 55794). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Improve messages around waiting for pods.

**What this PR does / why we need it**:

This is a step towards solving #55785

**Release note**:
```release-note
NONE
```
2017-11-21 15:04:31 -08:00
Kubernetes Submit Queue
d4724d7e43 Merge pull request #55056 from porridge/typo-percentil
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Fix a typo.

**Release note**:
```release-note
NONE
```
2017-11-20 01:40:50 -08:00
andyzhangx
310168c1d2 fix CreateVolume: search mode for Dedicated kind 2017-11-19 11:16:50 +00:00
Kubernetes Submit Queue
87d45a54bd Merge pull request #55940 from shyamjvs/reduce-spam-from-resource-gatherer
Automatic merge from submit-queue (batch tested with PRs 55233, 55927, 55903, 54867, 55940). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Control logs verbosity in resource gatherer

PR https://github.com/kubernetes/kubernetes/pull/53541 added some logging in resource gatherer which is a bit too verbose for normal purposes.
As a result, we're seeing a lot of spam in our large cluster performance tests (e.g - https://storage.googleapis.com/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-scalability/8046/build-log.txt)

This PR makes the verbosity of those logs controllable through an option. It's off by default, but it is turned on for the GPU test to preserve behavior there.
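A plausible shape for such an option (illustrative only, not the real gatherer API):

```go
package framework

import "log"

// GathererOptions controls how chatty the resource gatherer is. Verbose is
// off by default; the GPU test turns it on to keep its old output.
type GathererOptions struct {
	Verbose bool
}

func (o GathererOptions) logf(format string, args ...interface{}) {
	if o.Verbose {
		log.Printf(format, args...)
	}
}
```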

/cc @jiayingz @mindprince
2017-11-18 12:26:18 -08:00
Kubernetes Submit Queue
2d972c19bf Merge pull request #55737 from mindprince/update-nvidia-urls
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Update URLs for nvidia gpu device plugin and nvidia driver installer.

The device plugin is now an addon and its manifest is now in kubernetes/kubernetes. The manifest on
GoogleCloudPlatform/container-engine-accelerators no longer contains the device plugin.

This is needed after https://github.com/kubernetes/kubernetes/pull/54826 and https://github.com/GoogleCloudPlatform/container-engine-accelerators/pull/25

**Release note**:
```release-note
NONE
```

/sig scheduling
2017-11-18 09:36:05 -08:00
Shyam Jeedigunta
fce28995e1 Control logs verbosity in resource gatherer 2017-11-17 13:03:32 +01:00
Kubernetes Submit Queue
ebd3d68039 Merge pull request #55831 from Random-Liu/rename-log-dump-env
Automatic merge from submit-queue (batch tested with PRs 55392, 55491, 51914, 55831, 55836). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Rename log-dump env to `LOG_DUMP_SYSTEMD_SERVICES`.

For https://github.com/kubernetes/features/issues/286.

Rename `SYSTEMD_SERVICES` to `LOG_DUMP_SYSTEMD_SERVICES`. test-infra disables log dump in our e2e framework, and uses a different log dump logic https://github.com/kubernetes/test-infra/blob/master/kubetest/e2e.go#L480-L497. So the flags we added in https://github.com/kubernetes/kubernetes/pull/55288 will not work in test-infra.

Fortunately, test-infra uses the same script `cluster/log-dump/log-dump.sh`, so we can still configure systemd services by setting the environment variable globally.

The original environment variable name was too general to set globally, so change it to a more specific name.

**Release note**:

```release-note
none
```
2017-11-17 00:18:25 -08:00
Shyam Jeedigunta
1ae56bbe2b Set resource-gathering and probe-duration period for kubemark 2017-11-16 12:02:56 +01:00
Lantao Liu
e504e5a316 Wait for server resources. 2017-11-16 01:38:35 +00:00
Lantao Liu
0085e2208d Rename log-dump env to LOG_DUMP_SYSTEMD_SERVICES. 2017-11-16 00:41:27 +00:00
Kubernetes Submit Queue
ded83878c1 Merge pull request #55820 from shyamjvs/restore-resource-gatherer-pollperiod-default
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Restore default polling period of resource-gatherer

Fixes https://github.com/kubernetes/kubernetes/issues/55818

/cc @jiayingz @mindprince
2017-11-15 16:03:06 -08:00
Shyam Jeedigunta
a350825612 Restore default polling period of resource-gatherer 2017-11-15 23:15:28 +01:00
Kubernetes Submit Queue
cbdd18eee9 Merge pull request #55484 from bskiba/multizone-size-e2e
Automatic merge from submit-queue (batch tested with PRs 54436, 53148, 55153, 55614, 55484). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Support multizone clusters in GCE and GKE e2e tests

**What this PR does / why we need it**:
**What this PR does / why we need it**:
For multi-zone clusters we can't rely on the zone parameter for fetching information on Instance Groups. Instead, we first fetch the zone the group is in and use it in subsequent calls.

Note that the current version of the code does not work for multi-zone clusters at all.

**Release note**:
```
NONE
```
2017-11-15 12:58:11 -08:00
Marcin Owsiany
9b6590e7ae Improve messages around waiting for pods. 2017-11-15 11:29:52 +01:00
Rohit Agarwal
3ac94a57eb Update URLs for nvidia gpu device plugin and nvidia driver installer.
The device plugin is now an addon and its manifest is now in
kubernetes/kubernetes. The manifest on
GoogleCloudPlatform/container-engine-accelerators no longer contains
the device plugin.
2017-11-14 15:31:22 -08:00
Dane LeBlanc
2827b7ffb7 Add brackets around IPv6 addrs in e2e test IP:port endpoints
There are several locations in the e2e tests where endpoints of the
form IP:port use IPv6 addresses directly, without surrounding brackets.
Brackets are required around IPv6 addresses in this case, in order to
distinguish the colons in the IPv6 address from the colon immediately
preceding the port.

Also, wherever the curl command might be used with an IPv6 address
surrounded in brackets, the "-g" argument is added to the curl
command line arguments so that the brackets can be interpreted
correctly.

fixes #52746
2017-11-14 10:55:09 -05:00
Kubernetes Submit Queue
ea66c00522 Merge pull request #54509 from vmware/node_poweroff_test
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

E2E test to verify pod failover during node power-off

**What this PR does / why we need it**:

This PR adds a test to verify the volume status after the node where the pod got provisioned is powered off and the pod fails over to a different node.

The test performs the following tasks:

1. Create a StorageClass
2. Create a PVC with the StorageClass
3. Create a Deployment with 1 replica, using the PVC
4. Verify the pod got provisioned on a node
5. Verify the volume is attached to the node
6. Power off the node where pod got provisioned
7. Verify the pod got provisioned on a different node
8. Verify the volume is attached to the new node
9. Verify the volume is detached from the previous node
10. Power on the previous node
11. Delete the Deployment
12. Delete the PVC
13. Delete the StorageClass

**Which issue this PR fixes**:

Fixes https://github.com/vmware/kubernetes/issues/272

**Special notes for your reviewer**:

Test logs:
```
# go run hack/e2e.go --check-version-skew=false --v --test --test_args='--ginkgo.focus=Node\sPoweroff'
flag provided but not defined: -check-version-skew
Usage of /tmp/go-build212295472/command-line-arguments/_obj/exe/e2e:
  -get
                go get -u kubetest if old or not installed (default true)
  -old duration
                Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/24 11:48:28 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/24 11:48:28 e2e.go:56:   Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/24 11:48:28 e2e.go:57:   The separator is required to use --get or --old flags
2017/10/24 11:48:28 e2e.go:58:   The -- flag separator also suppresses this message
2017/10/24 11:48:28 e2e.go:77: Calling kubetest --check-version-skew=false --v --test --test_args=--ginkgo.focus=Node\sPoweroff...
2017/10/24 11:48:28 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/24 11:48:28 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 350.700421ms
2017/10/24 11:48:28 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1627+54fc02df4a3a2a", GitCommit:"54fc02df4a3a2a12e14fb72d84a1aaa658ba6689", GitTreeState:"clean", BuildDate:"2017-10-24T18:33:37Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1437+ba66fcb63de9e9", GitCommit:"ba66fcb63de9e9b72e2ccf8b823df33a22df0522", GitTreeState:"clean", BuildDate:"2017-10-20T07:16:05Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}
2017/10/24 11:48:28 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 315.334518ms
2017/10/24 11:48:28 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=Node\sPoweroff
Conformance test: not doing test setup.
Oct 24 11:48:30.391: INFO: Overriding default scale value of zero to 1
Oct 24 11:48:30.391: INFO: Overriding default milliseconds value of zero to 5000
I1024 11:48:30.637436     409 e2e.go:378] Starting e2e run "ed9fdfc7-b8eb-11e7-a595-0050569c26b8" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1508870909 - Will randomize all specs
Will run 1 of 717 specs
 
Oct 24 11:48:30.678: INFO: >>> kubeConfig: /root/.kube/config
Oct 24 11:48:30.685: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 24 11:48:30.719: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 24 11:48:30.857: INFO: 17 / 17 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 24 11:48:30.857: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Oct 24 11:48:30.863: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 24 11:48:30.863: INFO: Dumping network health container logs from all nodes...
Oct 24 11:48:30.877: INFO: Client version: v1.9.0-alpha.1.1627+54fc02df4a3a2a
Oct 24 11:48:30.879: INFO: Server version: v1.9.0-alpha.1.1437+ba66fcb63de9e9
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
  verify volume status after node power off
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:149
[BeforeEach] [sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 24 11:48:30.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:64
Oct 24 11:48:30.984: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
[It] verify volume status after node power off
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:149
STEP: Creating a Storage Class
STEP: Creating PVC using the Storage Class
STEP: Waiting for PVC to be in bound phase
Oct 24 11:48:31.141: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-zxz56 to have phase Bound
Oct 24 11:48:31.150: INFO: PersistentVolumeClaim pvc-zxz56 found but phase is Pending instead of Bound.
Oct 24 11:48:33.155: INFO: PersistentVolumeClaim pvc-zxz56 found and phase=Bound (2.013403698s)
STEP: Creating a Deployment
I1024 11:48:33.180161     409 deployment_util.go:254] Waiting deployment "deployment-ef6b820e-b8eb-11e7-a595-0050569c26b8" to complete
Oct 24 11:48:33.192: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1beta1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Oct 24 11:48:35.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:37.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:39.196: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:41.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:43.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:45.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:47.198: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:49.198: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:51.196: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:53.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
STEP: Get pod from the deployement
STEP: Verify disk is attached to the node: kubernetes-node5
STEP: Power off the node: kubernetes-node5
Oct 24 11:49:07.337: INFO: Waiting for pod to be failed over from "kubernetes-node5"
Oct 24 11:49:17.336: INFO: Waiting for pod to be failed over from "kubernetes-node5"
Oct 24 11:49:27.340: INFO: Waiting for pod to be failed over from "kubernetes-node5"
Oct 24 11:49:37.340: INFO: The pod has been failed over from "kubernetes-node5" to "kubernetes-node7"
STEP: Waiting for disk to be attached to the new node: kubernetes-node7
Oct 24 11:49:47.534: INFO: Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" has successfully attached to "kubernetes-node7".
STEP: Waiting for disk to be detached from the previous node: kubernetes-node5
Oct 24 11:49:57.707: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:07.702: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:17.710: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:27.733: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:37.713: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:47.723: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:57.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:07.710: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:17.719: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:27.716: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:37.717: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:47.712: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:57.707: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:07.724: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:17.716: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:27.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:37.716: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:47.709: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:57.714: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:07.715: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:17.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:27.714: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:37.713: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:47.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:57.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:07.712: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:17.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:27.712: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:37.707: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:47.698: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:57.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:07.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:17.699: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:27.702: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:37.704: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:47.703: INFO: Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" has successfully detached from "kubernetes-node5".
STEP: Power on the previous node: kubernetes-node5
Oct 24 11:55:49.168: INFO: Deleting PersistentVolumeClaim "pvc-zxz56"
[AfterEach] [sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 24 11:55:49.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-node-poweroff-l245b" for this suite.
Oct 24 11:55:57.630: INFO: namespace: e2e-tests-node-poweroff-l245b, resource: bindings, ignored listing per whitelist
Oct 24 11:55:57.643: INFO: namespace e2e-tests-node-poweroff-l245b deletion completed in 8.379395732s
 
• [SLOW TEST:446.758 seconds]
[sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
  verify volume status after node power off
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:149
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 24 11:55:57.647: INFO: Running AfterSuite actions on all node
Oct 24 11:55:57.647: INFO: Running AfterSuite actions on node 1
 
Ran 1 of 717 Specs in 446.969 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 716 Skipped PASS
 
Ginkgo ran 1 suite in 7m27.797177022s
Test Suite Passed
2017/10/24 11:55:57 util.go:156: Step './hack/ginkgo-e2e.sh --ginkgo.focus=Node\sPoweroff' finished in 7m28.760818768s
2017/10/24 11:55:57 e2e.go:81: Done
```
VMware Reviewers: @divyenpatel @pshahzeb 

**Release note**:

```release-note
NONE
```
2017-11-14 00:56:26 -08:00
Shaomin Chen
3db4f2b843 E2E test to verify pod failover during node power-off 2017-11-13 21:52:54 -08:00
Jiaying Zhang
ae36f8ee95 Extend test/e2e/scheduling/nvidia-gpus.go to track resource usage of
installer and device plugin containers.
To support this, certain functions and fields in
framework/resource_usage_gatherer.go are exported so that it can be used in any
e2e test to track the resource usage of any specified pods with the specified
probe interval and duration.
2017-11-13 16:24:41 -08:00
Kubernetes Submit Queue
74ec8d0fe8 Merge pull request #55288 from Random-Liu/e2e-log-for-alternative-runtime
Automatic merge from submit-queue (batch tested with PRs 55283, 55461, 55288, 53970, 55487). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Support collecting logs for alternative container runtimes in e2e tests.

Fixes https://github.com/kubernetes/kubernetes/issues/55629.

Add support to collect logs for alternative container runtimes in e2e.
Example for `cri-containerd`:
```
$ go run hack/e2e.go -- --test -v --test_args="--report-dir=$PWD --container-runtime-services=cri-containerd,containerd,cri-containerd-installation"
```

```release-note
none
```

/cc @kubernetes/sig-node-pr-reviews @kubernetes/sig-testing-pr-reviews
2017-11-13 12:32:24 -08:00