Merge pull request #21 from MeinhardZhou/chore/update_release

Construct: make release as sub-tree
Meinhard Zhou 2023-11-20 20:53:15 +08:00 committed by GitHub
commit 998801c53d
9 changed files with 311 additions and 64 deletions

View File

@@ -22,16 +22,19 @@ aliases:
   - ggriffiths
   - gnufied
   - humblec
+  - mauriciopoppe
   - j-griffith
-  - Jiawei0227
   - jingxu97
   - jsafrane
   - pohly
+  - RaunakShah
+  - sunnylovestiramisu
   - xing-yang
   # This documents who previously contributed to Kubernetes-CSI
   # as approver.
   emeritus_approvers:
+  - Jiawei0227
   - lpabon
   - sbezverk
   - vladimirvivien

View File

@@ -17,7 +17,7 @@ The release manager must:
 Whenever a new Kubernetes minor version is released, our kubernetes-csi CI jobs
 must be updated.
-[Our CI jobs](https://k8s-testgrid.appspot.com/sig-storage-csi-ci) have the
+[Our CI jobs](https://testgrid.k8s.io/sig-storage-csi-ci) have the
 naming convention `<hostpath-deployment-version>-on-<kubernetes-version>`.
 1. Jobs should be actively monitored to find and fix failures in sidecars and
@@ -90,8 +90,10 @@ naming convention `<hostpath-deployment-version>-on-<kubernetes-version>`.
 1. Submit a PR for README changes, in particular, Compatibility, Feature status,
    and any other sections that may need updating.
 1. Check that all [canary CI
-   jobs](https://k8s-testgrid.appspot.com/sig-storage-csi-ci) are passing,
+   jobs](https://testgrid.k8s.io/sig-storage-csi-ci) are passing,
    and that test coverage is adequate for the changes that are going into the release.
+1. Check that the post-\<sidecar\>-push-images builds are succeeding.
+   [Example](https://testgrid.k8s.io/sig-storage-image-build#post-external-snapshotter-push-images)
 1. Make sure that no new PRs have merged in the meantime, and no PRs are in
    flight and soon to be merged.
 1. Create a new release following a previous release as a template. Be sure to select the correct
@@ -99,10 +101,10 @@ naming convention `<hostpath-deployment-version>-on-<kubernetes-version>`.
    [external-provisioner example](https://github.com/kubernetes-csi/external-provisioner/releases/new)
 1. If release was a new major/minor version, create a new `release-<minor>`
    branch at that commit.
-1. Check [image build status](https://k8s-testgrid.appspot.com/sig-storage-image-build).
+1. Check [image build status](https://testgrid.k8s.io/sig-storage-image-build).
-1. Promote images from k8s-staging-sig-storage to k8s.gcr.io/sig-storage. From
+1. Promote images from k8s-staging-sig-storage to registry.k8s.io/sig-storage. From
    the [k8s image
-   repo](https://github.com/kubernetes/k8s.io/tree/HEAD/k8s.gcr.io/images/k8s-staging-sig-storage),
+   repo](https://github.com/kubernetes/k8s.io/tree/HEAD/registry.k8s.io/images/k8s-staging-sig-storage),
    run `./generate.sh > images.yaml`, and send a PR with the updated images.
    Once merged, the image promoter will copy the images from staging to prod.
 1. Update [kubernetes-csi/docs](https://github.com/kubernetes-csi/docs) sidecar
@@ -118,7 +120,7 @@ naming convention `<hostpath-deployment-version>-on-<kubernetes-version>`.
 The following jobs are triggered after tagging to produce the corresponding
 image(s):
-https://k8s-testgrid.appspot.com/sig-storage-image-build
+https://testgrid.k8s.io/sig-storage-image-build
 Clicking on a failed build job opens that job in https://prow.k8s.io. Next to
 the job title is a rerun icon (circle with arrow). Clicking it opens a popup
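
For reference, the image-promotion step in the hunks above boils down to regenerating the promotion manifest in the kubernetes/k8s.io repository and opening a PR there. A minimal sketch of that flow follows; the fork remote "myfork" and the branch name are placeholders, not part of the documented process.

# Sketch of the promotion step; remote and branch names are assumptions.
git clone https://github.com/kubernetes/k8s.io.git
cd k8s.io/registry.k8s.io/images/k8s-staging-sig-storage
git checkout -b promote-sig-storage-images    # hypothetical branch name
./generate.sh > images.yaml                   # regenerate the promotion manifest
git commit -am "Promote sig-storage images"
git push myfork promote-sig-storage-images    # then open a PR against kubernetes/k8s.io;
                                              # once merged, the promoter copies staging -> prod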

View File

@@ -148,7 +148,7 @@ DOCKER_BUILDX_CREATE_ARGS ?=
 $(CMDS:%=push-multiarch-%): push-multiarch-%: check-pull-base-ref build-%
     set -ex; \
     export DOCKER_CLI_EXPERIMENTAL=enabled; \
-    docker buildx create $(DOCKER_BUILDX_CREATE_ARGS) --use --name multiarchimage-buildertest; \
+    docker buildx create $(DOCKER_BUILDX_CREATE_ARGS) --use --name multiarchimage-buildertest --driver-opt image=moby/buildkit:v0.10.6; \
     trap "docker buildx rm multiarchimage-buildertest" EXIT; \
     dockerfile_linux=$$(if [ -e ./$(CMDS_DIR)/$*/Dockerfile ]; then echo ./$(CMDS_DIR)/$*/Dockerfile; else echo Dockerfile; fi); \
     dockerfile_windows=$$(if [ -e ./$(CMDS_DIR)/$*/Dockerfile.Windows ]; then echo ./$(CMDS_DIR)/$*/Dockerfile.Windows; else echo Dockerfile.Windows; fi); \

View File

@@ -13,7 +13,7 @@
 # See https://github.com/kubernetes/test-infra/blob/HEAD/config/jobs/image-pushing/README.md
 # for more details on image pushing process in Kubernetes.
 #
-# To promote release images, see https://github.com/kubernetes/k8s.io/tree/HEAD/k8s.gcr.io/images/k8s-staging-sig-storage.
+# To promote release images, see https://github.com/kubernetes/k8s.io/tree/HEAD/registry.k8s.io/images/k8s-staging-sig-storage.
 # This must be specified in seconds. If omitted, defaults to 600s (10 mins).
 # Building three images in external-snapshotter takes more than an hour.
@@ -26,7 +26,7 @@ steps:
   # The image must contain bash and curl. Ideally it should also contain
   # the desired version of Go (currently defined in release-tools/prow.sh),
   # but that just speeds up the build and is not required.
-  - name: 'gcr.io/k8s-staging-test-infra/gcb-docker-gcloud:v20210917-12df099d55'
+  - name: 'gcr.io/k8s-testimages/gcb-docker-gcloud:v20230623-56e06d7c18'
     entrypoint: ./.cloudbuild.sh
     env:
     - GIT_TAG=${_GIT_TAG}
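
The builder image above is normally invoked by the Kubernetes image-build Prow jobs, which supply the substitutions. For a local smoke test the same config can be fed to Cloud Build by hand; the substitution value below is a placeholder, and additional substitutions used by the real jobs are defined in kubernetes/test-infra.

# Manual invocation sketch; the _GIT_TAG value is a placeholder.
gcloud builds submit --config cloudbuild.yaml \
    --substitutions=_GIT_TAG=v0.0.0-test .   # runs ./.cloudbuild.sh inside the gcb-docker-gcloud image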

View File

@@ -0,0 +1,170 @@
# Copyright 2023 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import datetime
import re
from collections import defaultdict
import subprocess
import shutil
from dateutil.relativedelta import relativedelta
def check_gh_command():
"""
Pretty much everything is processed from `gh`
Check that the `gh` command is in the path before anything else
"""
if not shutil.which('gh'):
print("Error: The `gh` command is not available in the PATH.")
print("Please install the GitHub CLI (https://cli.github.com/) and try again.")
exit(1)
def duration_ago(dt):
"""
Humanize duration outputs
"""
delta = relativedelta(datetime.datetime.now(), dt)
if delta.years > 0:
return f"{delta.years} year{'s' if delta.years > 1 else ''} ago"
elif delta.months > 0:
return f"{delta.months} month{'s' if delta.months > 1 else ''} ago"
elif delta.days > 0:
return f"{delta.days} day{'s' if delta.days > 1 else ''} ago"
elif delta.hours > 0:
return f"{delta.hours} hour{'s' if delta.hours > 1 else ''} ago"
elif delta.minutes > 0:
return f"{delta.minutes} minute{'s' if delta.minutes > 1 else ''} ago"
else:
return "just now"
def parse_version(version):
"""
Parse version assuming it is in the form of v1.2.3
"""
pattern = r"v(\d+)\.(\d+)\.(\d+)"
match = re.match(pattern, version)
if match:
major, minor, patch = map(int, match.groups())
return (major, minor, patch)
def end_of_life_grouped_versions(versions):
"""
Calculate the end of life date for a minor release version according to : https://kubernetes-csi.github.io/docs/project-policies.html#support
The input is an array of tuples of:
* grouped versions (e.g. 1.0, 1.1)
* array that contains all versions and their release dates (e.g. 1.0.0, 01-01-2013)
versions structure example :
[((3, 5), [('v3.5.0', datetime.datetime(2023, 4, 27, 22, 28, 6))]),
((3, 4),
[('v3.4.1', datetime.datetime(2023, 4, 5, 17, 41, 15)),
('v3.4.0', datetime.datetime(2022, 12, 27, 23, 43, 41))])]
"""
supported_versions = []
# Prepare dates for later calculation
now = datetime.datetime.now()
one_year = datetime.timedelta(days=365)
three_months = datetime.timedelta(days=90)
# get the newer versions on top
sorted_versions_list = sorted(versions.items(), key=lambda x: x[0], reverse=True)
# the latest version is always supported no matter the release date
latest = sorted_versions_list.pop(0)
supported_versions.append(latest[1][-1])
for v in sorted_versions_list:
first_release = v[1][-1]
last_release = v[1][0]
# if the release is less than a year old we support the latest patch version
if now - first_release[1] < one_year:
supported_versions.append(last_release)
# if the main release is older than a year but has a recent patch, it is still supported
elif now - last_release[1] < three_months:
supported_versions.append(last_release)
return supported_versions
def get_release_docker_image(repo, version):
"""
Extract docker image name from the release page documentation
"""
output = subprocess.check_output(['gh', 'release', '-R', repo, 'view', version], text=True)
#Extract matching image name excluding `
match = re.search(r"docker pull ([\.\/\-\:\w\d]*)", output)
docker_image = match.group(1) if match else ''
return((version, docker_image))
def get_versions_from_releases(repo):
"""
Using `gh` cli get the github releases page details then
create a list of grouped version on major.minor
and for each give all major.minor.patch with release dates
"""
# Run the `gh release` command to get the release list
output = subprocess.check_output(['gh', 'release', '-R', repo, 'list'], text=True)
# Parse the output and group by major and minor version numbers
versions = defaultdict(lambda: [])
for line in output.strip().split('\n'):
parts = line.split('\t')
# pprint.pprint(parts)
version = parts[0]
parsed_version = parse_version(version)
if parsed_version is None:
continue
major, minor, patch = parsed_version
published = datetime.datetime.strptime(parts[3], '%Y-%m-%dT%H:%M:%SZ')
versions[(major, minor)].append((version, published))
return(versions)
def main():
manual = """
This script lists the supported versions from GitHub releases according to https://kubernetes-csi.github.io/docs/project-policies.html#support
It has been designed to help update the tables at: https://kubernetes-csi.github.io/docs/sidecar-containers.html\n\n
It can take multiple repos as argument, for all CSI sidecars details you can run:
./get_supported_version_csi-sidecar.py -R kubernetes-csi/external-attacher -R kubernetes-csi/external-provisioner -R kubernetes-csi/external-resizer -R kubernetes-csi/external-snapshotter -R kubernetes-csi/livenessprobe -R kubernetes-csi/node-driver-registrar -R kubernetes-csi/external-health-monitor\n
With the output you can then update the documentation manually.
"""
parser = argparse.ArgumentParser(formatter_class=argparse.RawDescriptionHelpFormatter, description=manual)
parser.add_argument('--repo', '-R', required=True, action='append', dest='repos', help='The name of the repository in the format owner/repo.')
parser.add_argument('--display', '-d', action='store_true', help='(default) Display EOL versions with their dates', default=True)
parser.add_argument('--doc', '-D', action='store_true', help='Helper to https://kubernetes-csi.github.io/docs/ that prints Docker image for each EOL version')
args = parser.parse_args()
# Verify pre-reqs
check_gh_command()
# Process all repos
for repo in args.repos:
versions = get_versions_from_releases(repo)
eol_versions = end_of_life_grouped_versions(versions)
if args.display:
print(f"Supported versions with release date and age of `{repo}`:\n")
for version in eol_versions:
print(f"{version[0]}\t{version[1].strftime('%Y-%m-%d')}\t{duration_ago(version[1])}")
# TODO : generate proper doc output for the tables of: https://kubernetes-csi.github.io/docs/sidecar-containers.html
if args.doc:
print("\nSupported Versions with docker images for each end of life version:\n")
for version in eol_versions:
_, image = get_release_docker_image(repo, version[0])
print(f"{version[0]}\t{image}")
print()
if __name__ == '__main__':
main()
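
For reference, a typical invocation of this new helper looks like the following; it needs an authenticated `gh` CLI and the python-dateutil package, and the versions and dates in the sample output are illustrative, not real release data.

# Assumes `gh auth login` has already been run and python3-dateutil is installed.
./get_supported_version_csi-sidecar.py -R kubernetes-csi/external-resizer --display
# Supported versions with release date and age of `kubernetes-csi/external-resizer`:
#
# v1.9.0    2023-08-15    3 months ago     (illustrative output)
# v1.8.1    2023-06-29    5 months ago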

View File

@@ -24,7 +24,6 @@ package main
 import (
     "encoding/xml"
     "flag"
-    "io/ioutil"
     "os"
     "regexp"
 )
@@ -35,10 +34,18 @@ var (
 )
 /*
- * TestSuite represents a JUnit file. Due to how encoding/xml works, we have
+ * TestResults represents a JUnit file. Due to how encoding/xml works, we have
  * represent all fields that we want to be passed through. It's therefore
  * not a complete solution, but good enough for Ginkgo + Spyglass.
+ *
+ * Before Kubernetes 1.25 and ginkgo v2, we directly had <testsuite> in the
+ * JUnit file. Now we get <testsuites> and inside it the <testsuite>.
  */
+type TestResults struct {
+    XMLName   string    `xml:"testsuites"`
+    TestSuite TestSuite `xml:"testsuite"`
+}
 type TestSuite struct {
     XMLName   string     `xml:"testsuite"`
     TestCases []TestCase `xml:"testcase"`
@@ -48,6 +55,7 @@ type TestCase struct {
     Name      string     `xml:"name,attr"`
     Time      string     `xml:"time,attr"`
     SystemOut string     `xml:"system-out,omitempty"`
+    SystemErr string     `xml:"system-err,omitempty"`
     Failure   string     `xml:"failure,omitempty"`
     Skipped   SkipReason `xml:"skipped,omitempty"`
 }
@@ -87,14 +95,22 @@ func main() {
         }
     } else {
         var err error
-        data, err = ioutil.ReadFile(input)
+        data, err = os.ReadFile(input)
         if err != nil {
             panic(err)
         }
     }
     if err := xml.Unmarshal(data, &junit); err != nil {
+        if err.Error() != "expected element type <testsuite> but have <testsuites>" {
             panic(err)
         }
+        // Fall back to Ginkgo v2 format.
+        var junitv2 TestResults
+        if err := xml.Unmarshal(data, &junitv2); err != nil {
+            panic(err)
+        }
+        junit.TestCases = append(junit.TestCases, junitv2.TestSuite.TestCases...)
+    }
     }
 // Keep only matching testcases. Testcases skipped in all test runs are only stored once.
@@ -126,7 +142,7 @@ func main() {
             panic(err)
         }
     } else {
-        if err := ioutil.WriteFile(*output, data, 0644); err != nil {
+        if err := os.WriteFile(*output, data, 0644); err != nil {
             panic(err)
         }
     }
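
The fallback added above means the tool now accepts both JUnit layouts. A rough way to exercise it, assuming the file has been compiled into a `filter-junit` binary (the -t/-o flags are the ones prow.sh passes via run_filter_junit):

# Ginkgo v2 (Kubernetes >= 1.25) wraps results in <testsuites>; Ginkgo v1 started at <testsuite>.
cat > junit_01.xml <<'EOF'
<testsuites>
  <testsuite>
    <testcase name="External Storage provisioning works" time="1.5"></testcase>
    <testcase name="some unrelated test" time="0.1"></testcase>
  </testsuite>
</testsuites>
EOF
# Keep only the matching storage test cases; either layout is now parsed.
./filter-junit -t="External.Storage|CSI.mock.volume" -o junit_filtered.xml junit_01.xml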

View File

@@ -13,7 +13,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-
+#
 # This script can be used while converting a repo from "dep" to "go mod"
 # by calling it after "go mod init" or to update the Kubernetes packages
 # in a repo that has already been converted. Only packages that are

View File

@@ -78,7 +78,7 @@ version_to_git () {
 # the list of windows versions was matched from:
 # - https://hub.docker.com/_/microsoft-windows-nanoserver
 # - https://hub.docker.com/_/microsoft-windows-servercore
-configvar CSI_PROW_BUILD_PLATFORMS "linux amd64 amd64; linux ppc64le ppc64le -ppc64le; linux s390x s390x -s390x; linux arm arm -arm; linux arm64 arm64 -arm64; linux arm arm/v7 -armv7; windows amd64 amd64 .exe nanoserver:1809 servercore:ltsc2019; windows amd64 amd64 .exe nanoserver:20H2 servercore:20H2; windows amd64 amd64 .exe nanoserver:ltsc2022 servercore:ltsc2022" "Go target platforms (= GOOS + GOARCH) and file suffix of the resulting binaries"
+configvar CSI_PROW_BUILD_PLATFORMS "linux amd64 amd64; linux ppc64le ppc64le -ppc64le; linux s390x s390x -s390x; linux arm arm -arm; linux arm64 arm64 -arm64; linux arm arm/v7 -armv7; windows amd64 amd64 .exe nanoserver:1809 servercore:ltsc2019; windows amd64 amd64 .exe nanoserver:ltsc2022 servercore:ltsc2022" "Go target platforms (= GOOS + GOARCH) and file suffix of the resulting binaries"
 # If we have a vendor directory, then use it. We must be careful to only
 # use this for "make" invocations inside the project's repo itself because
@@ -86,19 +86,25 @@ configvar CSI_PROW_BUILD_PLATFORMS "linux amd64 amd64; linux ppc64le ppc64le -pp
 # which is disabled with GOFLAGS=-mod=vendor).
 configvar GOFLAGS_VENDOR "$( [ -d vendor ] && echo '-mod=vendor' )" "Go flags for using the vendor directory"
-configvar CSI_PROW_GO_VERSION_BUILD "1.17.3" "Go version for building the component" # depends on component's source code
+configvar CSI_PROW_GO_VERSION_BUILD "1.20" "Go version for building the component" # depends on component's source code
 configvar CSI_PROW_GO_VERSION_E2E "" "override Go version for building the Kubernetes E2E test suite" # normally doesn't need to be set, see install_e2e
 configvar CSI_PROW_GO_VERSION_SANITY "${CSI_PROW_GO_VERSION_BUILD}" "Go version for building the csi-sanity test suite" # depends on CSI_PROW_SANITY settings below
 configvar CSI_PROW_GO_VERSION_KIND "${CSI_PROW_GO_VERSION_BUILD}" "Go version for building 'kind'" # depends on CSI_PROW_KIND_VERSION below
 configvar CSI_PROW_GO_VERSION_GINKGO "${CSI_PROW_GO_VERSION_BUILD}" "Go version for building ginkgo" # depends on CSI_PROW_GINKGO_VERSION below
 # ginkgo test runner version to use. If the pre-installed version is
-# different, the desired version is built from source.
+# different, the desired version is built from source. For Kubernetes,
+# the version built via "make WHAT=vendor/github.com/onsi/ginkgo/ginkgo" is
+# used, which is guaranteed to match what the Kubernetes e2e.test binary
+# needs.
 configvar CSI_PROW_GINKGO_VERSION v1.7.0 "Ginkgo"
 # Ginkgo runs the E2E test in parallel. The default is based on the number
 # of CPUs, but typically this can be set to something higher in the job.
-configvar CSI_PROW_GINKO_PARALLEL "-p" "Ginko parallelism parameter(s)"
+configvar CSI_PROW_GINKGO_PARALLEL "-p" "Ginkgo parallelism parameter(s)"
+# Timeout value for the overall ginkgo test suite.
+configvar CSI_PROW_GINKGO_TIMEOUT "1h" "Ginkgo timeout"
 # Enables building the code in the repository. On by default, can be
 # disabled in jobs which only use pre-built components.
@@ -118,7 +124,7 @@ configvar CSI_PROW_BUILD_JOB true "building code in repo enabled"
 # use the same settings as for "latest" Kubernetes. This works
 # as long as there are no breaking changes in Kubernetes, like
 # deprecating or changing the implementation of an alpha feature.
-configvar CSI_PROW_KUBERNETES_VERSION 1.17.0 "Kubernetes"
+configvar CSI_PROW_KUBERNETES_VERSION 1.22.0 "Kubernetes"
 # CSI_PROW_KUBERNETES_VERSION reduced to first two version numbers and
 # with underscore (1_13 instead of 1.13.3) and in uppercase (LATEST
@@ -138,7 +144,7 @@ kind_version_default () {
     latest|master)
         echo main;;
     *)
-        echo v0.11.1;;
+        echo v0.14.0;;
     esac
 }
@@ -149,16 +155,13 @@ configvar CSI_PROW_KIND_VERSION "$(kind_version_default)" "kind"
 # kind images to use. Must match the kind version.
 # The release notes of each kind release list the supported images.
-configvar CSI_PROW_KIND_IMAGES "kindest/node:v1.23.0@sha256:49824ab1727c04e56a21a5d8372a402fcd32ea51ac96a2706a12af38934f81ac
-kindest/node:v1.22.0@sha256:b8bda84bb3a190e6e028b1760d277454a72267a5454b57db34437c34a588d047
-kindest/node:v1.21.1@sha256:69860bda5563ac81e3c0057d654b5253219618a22ec3a346306239bba8cfa1a6
-kindest/node:v1.20.7@sha256:cbeaf907fc78ac97ce7b625e4bf0de16e3ea725daf6b04f930bd14c67c671ff9
-kindest/node:v1.19.11@sha256:07db187ae84b4b7de440a73886f008cf903fcf5764ba8106a9fd5243d6f32729
-kindest/node:v1.18.19@sha256:7af1492e19b3192a79f606e43c35fb741e520d195f96399284515f077b3b622c
-kindest/node:v1.17.17@sha256:66f1d0d91a88b8a001811e2f1054af60eef3b669a9a74f9b6db871f2f1eeed00
-kindest/node:v1.16.15@sha256:83067ed51bf2a3395b24687094e283a7c7c865ccc12a8b1d7aa673ba0c5e8861
-kindest/node:v1.15.12@sha256:b920920e1eda689d9936dfcf7332701e80be12566999152626b2c9d730397a95
-kindest/node:v1.14.10@sha256:f8a66ef82822ab4f7569e91a5bccaf27bceee135c1457c512e54de8c6f7219f8" "kind images"
+configvar CSI_PROW_KIND_IMAGES "kindest/node:v1.24.0@sha256:0866296e693efe1fed79d5e6c7af8df71fc73ae45e3679af05342239cdc5bc8e
+kindest/node:v1.23.6@sha256:b1fa224cc6c7ff32455e0b1fd9cbfd3d3bc87ecaa8fcb06961ed1afb3db0f9ae
+kindest/node:v1.22.9@sha256:8135260b959dfe320206eb36b3aeda9cffcb262f4b44cda6b33f7bb73f453105
+kindest/node:v1.21.12@sha256:f316b33dd88f8196379f38feb80545ef3ed44d9197dca1bfd48bcb1583210207
+kindest/node:v1.20.15@sha256:6f2d011dffe182bad80b85f6c00e8ca9d86b5b8922cdf433d53575c4c5212248
+kindest/node:v1.19.16@sha256:d9c819e8668de8d5030708e484a9fdff44d95ec4675d136ef0a0a584e587f65c
+kindest/node:v1.18.20@sha256:738cdc23ed4be6cc0b7ea277a2ebcc454c8373d7d8fb991a7fcdbd126188e6d7" "kind images"
 # By default, this script tests sidecars with the CSI hostpath driver,
 # using the install_csi_driver function. That function depends on
@@ -196,7 +199,7 @@ kindest/node:v1.14.10@sha256:f8a66ef82822ab4f7569e91a5bccaf27bceee135c1457c512e5
 # If the deployment script is called with CSI_PROW_TEST_DRIVER=<file name> as
 # environment variable, then it must write a suitable test driver configuration
 # into that file in addition to installing the driver.
-configvar CSI_PROW_DRIVER_VERSION "v1.3.0" "CSI driver version"
+configvar CSI_PROW_DRIVER_VERSION "v1.12.0" "CSI driver version"
 configvar CSI_PROW_DRIVER_REPO https://github.com/kubernetes-csi/csi-driver-host-path "CSI driver repo"
 configvar CSI_PROW_DEPLOYMENT "" "deployment"
 configvar CSI_PROW_DEPLOYMENT_SUFFIX "" "additional suffix in kubernetes-x.yy[suffix].yaml files"
@@ -228,13 +231,16 @@ configvar CSI_PROW_E2E_VERSION "$(version_to_git "${CSI_PROW_KUBERNETES_VERSION}
 configvar CSI_PROW_E2E_REPO "https://github.com/kubernetes/kubernetes" "E2E repo"
 configvar CSI_PROW_E2E_IMPORT_PATH "k8s.io/kubernetes" "E2E package"
+# Local path for e2e tests. Set to "none" to disable.
+configvar CSI_PROW_SIDECAR_E2E_IMPORT_PATH "none" "CSI Sidecar E2E package"
 # csi-sanity testing from the csi-test repo can be run against the installed
 # CSI driver. For this to work, deploying the driver must expose the Unix domain
 # csi.sock as a TCP service for use by the csi-sanity command, which runs outside
 # of the cluster. The alternative would have been to (cross-)compile csi-sanity
 # and install it inside the cluster, which is not necessarily easier.
 configvar CSI_PROW_SANITY_REPO https://github.com/kubernetes-csi/csi-test "csi-test repo"
-configvar CSI_PROW_SANITY_VERSION v4.3.0 "csi-test version"
+configvar CSI_PROW_SANITY_VERSION v5.0.0 "csi-test version"
 configvar CSI_PROW_SANITY_PACKAGE_PATH github.com/kubernetes-csi/csi-test "csi-test package"
 configvar CSI_PROW_SANITY_SERVICE "hostpath-service" "Kubernetes TCP service name that exposes csi.sock"
 configvar CSI_PROW_SANITY_POD "csi-hostpathplugin-0" "Kubernetes pod with CSI driver"
@@ -242,7 +248,7 @@ configvar CSI_PROW_SANITY_CONTAINER "hostpath" "Kubernetes container with CSI dr
 # The version of dep to use for 'make test-vendor'. Ignored if the project doesn't
 # use dep. Only binary releases of dep are supported (https://github.com/golang/dep/releases).
-configvar CSI_PROW_DEP_VERSION v0.5.1 "golang dep version to be used for vendor checking"
+configvar CSI_PROW_DEP_VERSION v0.5.4 "golang dep version to be used for vendor checking"
 # Each job can run one or more of the following tests, identified by
 # a single word:
@@ -282,13 +288,18 @@ tests_enabled () {
 sanity_enabled () {
     [ "${CSI_PROW_TESTS_SANITY}" = "sanity" ] && tests_enabled "sanity"
 }
+sidecar_tests_enabled () {
+    [ "${CSI_PROW_SIDECAR_E2E_IMPORT_PATH}" != "none" ]
+}
 tests_need_kind () {
     tests_enabled "parallel" "serial" "serial-alpha" "parallel-alpha" ||
-        sanity_enabled
+        sanity_enabled || sidecar_tests_enabled
 }
 tests_need_non_alpha_cluster () {
     tests_enabled "parallel" "serial" ||
-        sanity_enabled
+        sanity_enabled || sidecar_tests_enabled
 }
 tests_need_alpha_cluster () {
     tests_enabled "parallel-alpha" "serial-alpha"
@@ -346,15 +357,23 @@ configvar CSI_PROW_E2E_ALPHA "$(get_versioned_variable CSI_PROW_E2E_ALPHA "${csi
 # kubernetes-csi components must be updated, either by disabling
 # the failing test for "latest" or by updating the test and not running
 # it anymore for older releases.
-configvar CSI_PROW_E2E_ALPHA_GATES_LATEST 'GenericEphemeralVolume=true,CSIStorageCapacity=true' "alpha feature gates for latest Kubernetes"
+configvar CSI_PROW_E2E_ALPHA_GATES_LATEST '' "alpha feature gates for latest Kubernetes"
 configvar CSI_PROW_E2E_ALPHA_GATES "$(get_versioned_variable CSI_PROW_E2E_ALPHA_GATES "${csi_prow_kubernetes_version_suffix}")" "alpha E2E feature gates"
+configvar CSI_PROW_E2E_GATES_LATEST '' "non alpha feature gates for latest Kubernetes"
+configvar CSI_PROW_E2E_GATES "$(get_versioned_variable CSI_PROW_E2E_GATES "${csi_prow_kubernetes_version_suffix}")" "non alpha E2E feature gates"
+# Focus for local tests run in the sidecar E2E repo. Only used if CSI_PROW_SIDECAR_E2E_IMPORT_PATH
+# is not set to "none". If empty, all tests in the sidecar repo will be run.
+configvar CSI_PROW_SIDECAR_E2E_FOCUS '' "tags for local E2E tests"
+configvar CSI_PROW_SIDECAR_E2E_SKIP '' "local tests that need to be skipped"
 # Which external-snapshotter tag to use for the snapshotter CRD and snapshot-controller deployment
 default_csi_snapshotter_version () {
     if [ "${CSI_PROW_KUBERNETES_VERSION}" = "latest" ] || [ "${CSI_PROW_DRIVER_CANARY}" = "canary" ]; then
         echo "master"
     else
-        echo "v3.0.2"
+        echo "v4.0.0"
     fi
 }
 configvar CSI_SNAPSHOTTER_VERSION "$(default_csi_snapshotter_version)" "external-snapshotter version tag"
@@ -365,7 +384,7 @@ configvar CSI_SNAPSHOTTER_VERSION "$(default_csi_snapshotter_version)" "external
 # whether they can run with the current cluster provider, but until
 # they are, we filter them out by name. Like the other test selection
 # variables, this is again a space separated list of regular expressions.
-configvar CSI_PROW_E2E_SKIP 'Disruptive' "tests that need to be skipped"
+configvar CSI_PROW_E2E_SKIP '\[Disruptive\]|\[Feature:SELinux\]' "tests that need to be skipped"
 # This creates directories that are required for testing.
 ensure_paths () {
@@ -437,14 +456,15 @@ install_kind () {
 # Ensure that we have the desired version of the ginkgo test runner.
 install_ginkgo () {
+    if [ -e "${CSI_PROW_BIN}/ginkgo" ]; then
+        return
+    fi
     # CSI_PROW_GINKGO_VERSION contains the tag with v prefix, the command line output does not.
     if [ "v$(ginkgo version 2>/dev/null | sed -e 's/.* //')" = "${CSI_PROW_GINKGO_VERSION}" ]; then
         return
     fi
-    git_checkout https://github.com/onsi/ginkgo "$GOPATH/src/github.com/onsi/ginkgo" "${CSI_PROW_GINKGO_VERSION}" --depth=1 &&
-    # We have to get dependencies and hence can't call just "go build".
-    run_with_go "${CSI_PROW_GO_VERSION_GINKGO}" go get github.com/onsi/ginkgo/ginkgo || die "building ginkgo failed" &&
-    mv "$GOPATH/bin/ginkgo" "${CSI_PROW_BIN}"
+    run_with_go "${CSI_PROW_GO_VERSION_GINKGO}" env GOBIN="${CSI_PROW_BIN}" go install "github.com/onsi/ginkgo/ginkgo@${CSI_PROW_GINKGO_VERSION}" || die "building ginkgo failed"
 }
 # Ensure that we have the desired version of dep.
@@ -452,7 +472,7 @@ install_dep () {
     if dep version 2>/dev/null | grep -q "version:.*${CSI_PROW_DEP_VERSION}$"; then
         return
     fi
-    run curl --fail --location -o "${CSI_PROW_WORK}/bin/dep" "https://github.com/golang/dep/releases/download/v0.5.4/dep-linux-amd64" &&
+    run curl --fail --location -o "${CSI_PROW_WORK}/bin/dep" "https://github.com/golang/dep/releases/download/${CSI_PROW_DEP_VERSION}/dep-linux-amd64" &&
         chmod u+x "${CSI_PROW_WORK}/bin/dep"
 }
@@ -814,7 +834,7 @@ install_snapshot_controller() {
     modified="$(cat "$i" | while IFS= read -r line; do
         nocomments="$(echo "$line" | sed -e 's/ *#.*$//')"
         if echo "$nocomments" | grep -q '^[[:space:]]*image:[[:space:]]*'; then
-            # Split 'image: k8s.gcr.io/sig-storage/snapshot-controller:v3.0.0'
+            # Split 'image: registry.k8s.io/sig-storage/snapshot-controller:v3.0.0'
             # into image (snapshot-controller:v3.0.0),
             # name (snapshot-controller),
             # tag (v3.0.0).
@@ -855,10 +875,17 @@ install_snapshot_controller() {
     cnt=0
     expected_running_pods=$(kubectl apply --dry-run=client -o "jsonpath={.spec.replicas}" -f "$SNAPSHOT_CONTROLLER_YAML")
     expected_namespace=$(kubectl apply --dry-run=client -o "jsonpath={.metadata.namespace}" -f "$SNAPSHOT_CONTROLLER_YAML")
-    while [ "$(kubectl get pods -n "$expected_namespace" -l app=snapshot-controller | grep 'Running' -c)" -lt "$expected_running_pods" ]; do
+    expect_key='app\.kubernetes\.io/name'
+    expected_label=$(kubectl apply --dry-run=client -o "jsonpath={.spec.template.metadata.labels['$expect_key']}" -f "$SNAPSHOT_CONTROLLER_YAML")
+    if [ -z "${expected_label}" ]; then
+        expect_key='app'
+        expected_label=$(kubectl apply --dry-run=client -o "jsonpath={.spec.template.metadata.labels['$expect_key']}" -f "$SNAPSHOT_CONTROLLER_YAML")
+    fi
+    expect_key=${expect_key//\\/}
+    while [ "$(kubectl get pods -n "$expected_namespace" -l "$expect_key"="$expected_label" | grep 'Running' -c)" -lt "$expected_running_pods" ]; do
         if [ $cnt -gt 30 ]; then
             echo "snapshot-controller pod status:"
-            kubectl describe pods -n "$expected_namespace" -l app=snapshot-controller
+            kubectl describe pods -n "$expected_namespace" -l "$expect_key"="$expected_label"
             echo >&2 "ERROR: snapshot controller not ready after over 5 min"
             exit 1
         fi
@@ -915,11 +942,11 @@ patch_kubernetes () {
     local source="$1" target="$2"
     if [ "${CSI_PROW_DRIVER_CANARY}" = "canary" ]; then
-        # We cannot replace k8s.gcr.io/sig-storage with gcr.io/k8s-staging-sig-storage because
+        # We cannot replace registry.k8s.io/sig-storage with gcr.io/k8s-staging-sig-storage because
         # e2e.test does not support it (see test/utils/image/manifest.go). Instead we
         # invoke the e2e.test binary with KUBE_TEST_REPO_LIST set to a file that
         # overrides that registry.
-        find "$source/test/e2e/testing-manifests/storage-csi/mock" -name '*.yaml' -print0 | xargs -0 sed -i -e 's;k8s.gcr.io/sig-storage/\(.*\):v.*;k8s.gcr.io/sig-storage/\1:canary;'
+        find "$source/test/e2e/testing-manifests/storage-csi/mock" -name '*.yaml' -print0 | xargs -0 sed -i -e 's;registry.k8s.io/sig-storage/\(.*\):v.*;registry.k8s.io/sig-storage/\1:canary;'
         cat >"$target/e2e-repo-list" <<EOF
 sigStorageRegistry: gcr.io/k8s-staging-sig-storage
 EOF
@@ -938,12 +965,17 @@ install_e2e () {
         return
     fi
+    if sidecar_tests_enabled; then
+        run_with_go "${CSI_PROW_GO_VERSION_BUILD}" go test -c -o "${CSI_PROW_WORK}/e2e-local.test" "${CSI_PROW_SIDECAR_E2E_IMPORT_PATH}"
+    fi
     git_checkout "${CSI_PROW_E2E_REPO}" "${GOPATH}/src/${CSI_PROW_E2E_IMPORT_PATH}" "${CSI_PROW_E2E_VERSION}" --depth=1 &&
     if [ "${CSI_PROW_E2E_IMPORT_PATH}" = "k8s.io/kubernetes" ]; then
         patch_kubernetes "${GOPATH}/src/${CSI_PROW_E2E_IMPORT_PATH}" "${CSI_PROW_WORK}" &&
         go_version="${CSI_PROW_GO_VERSION_E2E:-$(go_version_for_kubernetes "${GOPATH}/src/${CSI_PROW_E2E_IMPORT_PATH}" "${CSI_PROW_E2E_VERSION}")}" &&
         run_with_go "$go_version" make WHAT=test/e2e/e2e.test "-C${GOPATH}/src/${CSI_PROW_E2E_IMPORT_PATH}" &&
-        ln -s "${GOPATH}/src/${CSI_PROW_E2E_IMPORT_PATH}/_output/bin/e2e.test" "${CSI_PROW_WORK}"
+        ln -s "${GOPATH}/src/${CSI_PROW_E2E_IMPORT_PATH}/_output/bin/e2e.test" "${CSI_PROW_WORK}" &&
+        run_with_go "$go_version" make WHAT=vendor/github.com/onsi/ginkgo/ginkgo "-C${GOPATH}/src/${CSI_PROW_E2E_IMPORT_PATH}" &&
+        ln -s "${GOPATH}/src/${CSI_PROW_E2E_IMPORT_PATH}/_output/bin/ginkgo" "${CSI_PROW_BIN}"
     else
         run_with_go "${CSI_PROW_GO_VERSION_E2E}" go test -c -o "${CSI_PROW_WORK}/e2e.test" "${CSI_PROW_E2E_IMPORT_PATH}/test/e2e"
     fi
@@ -986,13 +1018,21 @@ run_e2e () (
     # the full Kubernetes E2E testsuite while only running a few tests.
     move_junit () {
         if ls "${ARTIFACTS}"/junit_[0-9]*.xml 2>/dev/null >/dev/null; then
-            run_filter_junit -t="External.Storage|CSI.mock.volume" -o "${ARTIFACTS}/junit_${name}.xml" "${ARTIFACTS}"/junit_[0-9]*.xml && rm -f "${ARTIFACTS}"/junit_[0-9]*.xml
+            mkdir -p "${ARTIFACTS}/junit/${name}" &&
+            mkdir -p "${ARTIFACTS}/junit/steps" &&
+            run_filter_junit -t="External.Storage|CSI.mock.volume" -o "${ARTIFACTS}/junit/steps/junit_${name}.xml" "${ARTIFACTS}"/junit_[0-9]*.xml &&
+            mv "${ARTIFACTS}"/junit_[0-9]*.xml "${ARTIFACTS}/junit/${name}/"
         fi
     }
     trap move_junit EXIT
+    if [ "${name}" == "local" ]; then
+        cd "${GOPATH}/src/${CSI_PROW_SIDECAR_E2E_IMPORT_PATH}" &&
+        run_with_loggers env KUBECONFIG="$KUBECONFIG" KUBE_TEST_REPO_LIST="$(if [ -e "${CSI_PROW_WORK}/e2e-repo-list" ]; then echo "${CSI_PROW_WORK}/e2e-repo-list"; fi)" ginkgo --timeout="${CSI_PROW_GINKGO_TIMEOUT}" -v "$@" "${CSI_PROW_WORK}/e2e-local.test" -- -report-dir "${ARTIFACTS}" -report-prefix local
+    else
     cd "${GOPATH}/src/${CSI_PROW_E2E_IMPORT_PATH}" &&
-    run_with_loggers env KUBECONFIG="$KUBECONFIG" KUBE_TEST_REPO_LIST="$(if [ -e "${CSI_PROW_WORK}/e2e-repo-list" ]; then echo "${CSI_PROW_WORK}/e2e-repo-list"; fi)" ginkgo -v "$@" "${CSI_PROW_WORK}/e2e.test" -- -report-dir "${ARTIFACTS}" -storage.testdriver="${CSI_PROW_WORK}/test-driver.yaml"
+    run_with_loggers env KUBECONFIG="$KUBECONFIG" KUBE_TEST_REPO_LIST="$(if [ -e "${CSI_PROW_WORK}/e2e-repo-list" ]; then echo "${CSI_PROW_WORK}/e2e-repo-list"; fi)" ginkgo --timeout="${CSI_PROW_GINKGO_TIMEOUT}" -v "$@" "${CSI_PROW_WORK}/e2e.test" -- -report-dir "${ARTIFACTS}" -storage.testdriver="${CSI_PROW_WORK}/test-driver.yaml"
+    fi
 )
 # Run csi-sanity against installed CSI driver.
@@ -1058,13 +1098,14 @@ kubectl exec "$pod" -c "${CSI_PROW_SANITY_CONTAINER}" -- /bin/sh -c "\${CHECK_PA
 EOF
     chmod u+x "${CSI_PROW_WORK}"/*dir_in_pod.sh
+    mkdir -p "${ARTIFACTS}/junit/steps"
     # This cannot run in parallel, because -csi.junitfile output
     # from different Ginkgo nodes would go to the same file. Also the
     # staging and target directories are the same.
     run_with_loggers "${CSI_PROW_WORK}/csi-sanity" \
         -ginkgo.v \
-        -csi.junitfile "${ARTIFACTS}/junit_sanity.xml" \
+        -csi.junitfile "${ARTIFACTS}/junit/steps/junit_sanity.xml" \
         -csi.endpoint "dns:///$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' csi-prow-control-plane):$(kubectl get "services/${CSI_PROW_SANITY_SERVICE}" -o "jsonpath={..nodePort}")" \
         -csi.stagingdir "/tmp/staging" \
         -csi.mountdir "/tmp/mount" \
@@ -1094,7 +1135,8 @@ make_test_to_junit () {
     # Plain make-test.xml was not delivered as text/xml by the web
     # server and ignored by spyglass. It seems that the name has to
     # match junit*.xml.
-    out="${ARTIFACTS}/junit_make_test.xml"
+    out="${ARTIFACTS}/junit/steps/junit_make_test.xml"
+    mkdir -p "$(dirname "$out")"
     testname=
     echo "<testsuite>" >>"$out"
@@ -1257,7 +1299,8 @@ main () {
     fi
     if tests_need_non_alpha_cluster; then
-        start_cluster || die "starting the non-alpha cluster failed"
+        # Need to (re)create the cluster.
+        start_cluster "${CSI_PROW_E2E_GATES}" || die "starting the non-alpha cluster failed"
         # Install necessary snapshot CRDs and snapshot controller
         install_snapshot_crds
@@ -1277,7 +1320,7 @@ main () {
         if tests_enabled "parallel"; then
             # Ignore: Double quote to prevent globbing and word splitting.
             # shellcheck disable=SC2086
-            if ! run_e2e parallel ${CSI_PROW_GINKO_PARALLEL} \
+            if ! run_e2e parallel ${CSI_PROW_GINKGO_PARALLEL} \
                -focus="$focus" \
                -skip="$(regex_join "${CSI_PROW_E2E_SERIAL}" "${CSI_PROW_E2E_ALPHA}" "${CSI_PROW_E2E_SKIP}")"; then
                 warn "E2E parallel failed"
@@ -1287,7 +1330,7 @@ main () {
             # Run tests that are feature tagged, but non-alpha
             # Ignore: Double quote to prevent globbing and word splitting.
             # shellcheck disable=SC2086
-            if ! run_e2e parallel-features ${CSI_PROW_GINKO_PARALLEL} \
+            if ! run_e2e parallel-features ${CSI_PROW_GINKGO_PARALLEL} \
                -focus="$focus.*($(regex_join "${CSI_PROW_E2E_FOCUS}"))" \
                -skip="$(regex_join "${CSI_PROW_E2E_SERIAL}")"; then
                 warn "E2E parallel features failed"
@@ -1303,11 +1346,24 @@ main () {
                 ret=1
             fi
         fi
+        if sidecar_tests_enabled; then
+            if ! run_e2e local \
+               -focus="${CSI_PROW_SIDECAR_E2E_FOCUS}" \
+               -skip="$(regex_join "${CSI_PROW_E2E_SERIAL}")"; then
+                warn "E2E sidecar failed"
+                ret=1
+            fi
+        fi
         fi
         delete_cluster_inside_prow_job non-alpha
     fi
-    if tests_need_alpha_cluster && [ "${CSI_PROW_E2E_ALPHA_GATES}" ]; then
+    # If the cluster for alpha tests doesn't need any feature gates, then we
+    # could reuse the same cluster as for the other tests. But that would make
+    # the flow in this script harder and wouldn't help in practice because
+    # we have separate Prow jobs for alpha and non-alpha tests.
+    if tests_need_alpha_cluster; then
         # Need to (re)create the cluster.
         start_cluster "${CSI_PROW_E2E_ALPHA_GATES}" || die "starting alpha cluster failed"
@@ -1322,7 +1378,7 @@ main () {
         if tests_enabled "parallel-alpha"; then
             # Ignore: Double quote to prevent globbing and word splitting.
             # shellcheck disable=SC2086
-            if ! run_e2e parallel-alpha ${CSI_PROW_GINKO_PARALLEL} \
+            if ! run_e2e parallel-alpha ${CSI_PROW_GINKGO_PARALLEL} \
                -focus="$focus.*($(regex_join "${CSI_PROW_E2E_ALPHA}"))" \
                -skip="$(regex_join "${CSI_PROW_E2E_SERIAL}" "${CSI_PROW_E2E_SKIP}")"; then
                 warn "E2E parallel alpha failed"
@@ -1344,8 +1400,8 @@ main () {
     fi
     # Merge all junit files into one. This gets rid of duplicated "skipped" tests.
-    if ls "${ARTIFACTS}"/junit_*.xml 2>/dev/null >&2; then
-        run_filter_junit -o "${CSI_PROW_WORK}/junit_final.xml" "${ARTIFACTS}"/junit_*.xml && rm "${ARTIFACTS}"/junit_*.xml && mv "${CSI_PROW_WORK}/junit_final.xml" "${ARTIFACTS}"
+    if ls "${ARTIFACTS}"/junit/steps/junit_*.xml 2>/dev/null >&2; then
+        run_filter_junit -o "${ARTIFACTS}/junit_final.xml" "${ARTIFACTS}"/junit/steps/junit_*.xml
     fi
     return "$ret"

View File

@@ -41,7 +41,7 @@ if [[ -z "$(command -v misspell)" ]]; then
   # perform go get in a temp dir as we are not tracking this version in a go module
   # if we do the go get in the repo, it will create / update a go.mod and go.sum
   cd "${TMP_DIR}"
-  GO111MODULE=on GOBIN="${TMP_DIR}" go get "github.com/client9/misspell/cmd/misspell@${TOOL_VERSION}"
+  GO111MODULE=on GOBIN="${TMP_DIR}" go install "github.com/client9/misspell/cmd/misspell@${TOOL_VERSION}"
   export PATH="${TMP_DIR}:${PATH}"
 fi