diff --git a/.gitignore b/.gitignore
index 749fc02d70b..f94a30ee861 100644
--- a/.gitignore
+++ b/.gitignore
@@ -62,6 +62,9 @@ network_closure.sh
 .tags*
 
+# Version file for dockerized build
+.dockerized-kube-version-defs
+
 # Web UI
 /www/master/node_modules/
 /www/master/npm-debug.log
diff --git a/build/BUILD_IMAGE_VERSION b/build/BUILD_IMAGE_VERSION
new file mode 100644
index 00000000000..00750edc07d
--- /dev/null
+++ b/build/BUILD_IMAGE_VERSION
@@ -0,0 +1 @@
+3
diff --git a/build/README.md b/build/README.md
index 0a6bdfd9f0e..9d06c0ab2e9 100644
--- a/build/README.md
+++ b/build/README.md
@@ -4,12 +4,12 @@ Building Kubernetes is easy if you take advantage of the containerized build env
 
 ## Requirements
 
-1. Docker, using one of the two following configurations:
-  1. **Mac OS X** You can either use docker-machine or boot2docker. See installation instructions [here](https://docs.docker.com/installation/mac/).
-     **Note**: You will want to set the boot2docker vm to have at least 3GB of initial memory or building will likely fail. (See: [#11852](http://issue.k8s.io/11852)) and do not `make quick-release` from `/tmp/` (See: [#14773](https://github.com/kubernetes/kubernetes/issues/14773))
-  2. **Linux with local Docker** Install Docker according to the [instructions](https://docs.docker.com/installation/#installation) for your OS. The scripts here assume that they are using a local Docker server and that they can "reach around" docker and grab results directly from the file system.
-2. [Python](https://www.python.org)
-3. **Optional** [Google Cloud SDK](https://developers.google.com/cloud/sdk/)
+1. Docker, using one of the following configurations:
+  1. **Mac OS X** You can either use Docker for Mac or docker-machine. See installation instructions [here](https://docs.docker.com/installation/mac/).
+     **Note**: You will want to set the Docker VM to have at least 3GB of initial memory or building will likely fail. (See: [#11852](http://issue.k8s.io/11852))
+  2. **Linux with local Docker** Install Docker according to the [instructions](https://docs.docker.com/installation/#installation) for your OS.
+  3. **Remote Docker engine** Use a big machine in the cloud to build faster. This is a little trickier, so see the "Really Remote Docker Engine" section below.
+2. **Optional** [Google Cloud SDK](https://developers.google.com/cloud/sdk/)
 
 You must install and configure Google Cloud SDK if you want to upload your release to Google Cloud Storage and may safely omit this otherwise.
 
@@ -17,8 +17,6 @@ You must install and configure Google Cloud SDK if you want to upload your relea
 
 While it is possible to build Kubernetes using a local golang installation, we have a build process that runs in a Docker container. This simplifies initial set up and provides for a very consistent build and test environment.
 
-There is also early support for building Docker "run" containers
-
 ## Key scripts
 
@@ -29,54 +27,25 @@ The following scripts are found in the `build/` directory. Note that all scripts must be run from the Kubernetes root directory.
 
 * `build/run.sh`: Run a command in a build docker container. Common invocations:
   * `build/run.sh make`: Build just linux binaries in the container. Pass options and packages as necessary.
   * `build/run.sh make cross`: Build all binaries for all platforms
   * `build/run.sh make test`: Run all unit tests
   * `build/run.sh make test-integration`: Run integration test
   * `build/run.sh make test-cmd`: Run CLI tests
-* `build/copy-output.sh`: This will copy the contents of `_output/dockerized/bin` from any remote Docker container to the local `_output/dockerized/bin`. Right now this is only necessary on Mac OS X with `boot2docker` when your git repo isn't under `/Users`.
-* `build/make-clean.sh`: Clean out the contents of `_output/dockerized` and remove any local built container images.
-* `build/shell.sh`: Drop into a `bash` shell in a build container with a snapshot of the current repo code.
-* `build/release.sh`: Build everything, test it, and (optionally) upload the results to a GCS bucket.
-
-## Releasing
-
-The `build/release.sh` script will build a release. It will build binaries, run tests, (optionally) build runtime Docker images and then (optionally) upload all build artifacts to a GCS bucket.
-
-The main output is a tar file: `kubernetes.tar.gz`. This includes:
-* Cross compiled client utilities.
-* Script (`kubectl`) for picking and running the right client binary based on platform.
-* Examples
-* Cluster deployment scripts for various clouds
-* Tar file containing all server binaries
-* Tar file containing salt deployment tree shared across multiple cloud deployments.
-
-In addition, there are some other tar files that are created:
-* `kubernetes-client-*.tar.gz` Client binaries for a specific platform.
-* `kubernetes-server-*.tar.gz` Server binaries for a specific platform.
-* `kubernetes-salt.tar.gz` The salt script/tree shared across multiple deployment scripts.
-
-The release utilities grab a set of environment variables to modify behavior. Arguably, these should be command line flags:
-
-Env Variable | Default | Description
--------------|---------|------------
-`KUBE_SKIP_CONFIRMATIONS` | `n` | If `y` then no questions are asked and the scripts just continue.
-`KUBE_GCS_UPLOAD_RELEASE` | `n` | Upload release artifacts to GCS
-`KUBE_GCS_RELEASE_BUCKET` | `kubernetes-releases-${project_hash}` | The bucket to upload releases to
-`KUBE_GCS_RELEASE_PREFIX` | `devel` | The path under the release bucket to put releases
-`KUBE_GCS_MAKE_PUBLIC` | `y` | Make GCS links readable from anywhere
-`KUBE_GCS_NO_CACHING` | `y` | Disable HTTP caching of GCS release artifacts. By default GCS will cache public objects for up to an hour. When doing "devel" releases this can cause problems.
-`KUBE_GCS_DOCKER_REG_PREFIX` | `docker-reg` | *Experimental* When uploading docker images, the bucket that backs the registry.
+* `build/copy-output.sh`: This will copy the contents of `_output/dockerized/bin` from the Docker container to the local `_output/dockerized/bin`. It will also copy out specific file patterns that are generated as part of the build process. This is run automatically as part of `build/run.sh`.
+* `build/make-clean.sh`: Clean out the contents of `_output`, remove any locally built container images, and remove the data container.
+* `build/shell.sh`: Drop into a `bash` shell in a build container with a snapshot of the current repo code.
 
 ## Basic Flow
 
-The scripts directly under `build/` are used to build and test. They will ensure that the `kube-build` Docker image is built (based on `build/build-image/Dockerfile`) and then execute the appropriate command in that container. If necessary (for Mac OS X), the scripts will also copy results out.
+The scripts directly under `build/` are used to build and test. They will ensure that the `kube-build` Docker image is built (based on `build/build-image/Dockerfile`) and then execute the appropriate command in that container. These scripts will both ensure that the right data is cached from run to run for incremental builds and will copy the results back out of the container.
 
+The `kube-build` container image is built by first creating a "context" directory in `_output/images/build-image`. It is done there instead of at the root of the Kubernetes repo to minimize the amount of data we need to package up when building the image.
 
-Everything in `build/build-image/` is meant to be run inside of the container. If it doesn't think it is running in the container it'll throw a warning. While you can run some of that stuff outside of the container, it wasn't built to do so.
+There are 3 different container instances that are run from this image. The first is a "data" container that stores all data that needs to persist across runs to support incremental builds. Next there is an "rsync" container that is used to transfer data in and out of the data container. Lastly there is a "build" container that is used for actually doing build actions. The data container persists across runs while the rsync and build containers are deleted after each use.
 
-When building final release tars, they are first staged into `_output/release-stage` before being tar'd up and put into `_output/release-tars`.
+`rsync` is used transparently behind the scenes to efficiently move data in and out of the container. This will use an ephemeral port picked by Docker. You can modify this by setting the `KUBE_RSYNC_PORT` env variable.
+
+All Docker names are suffixed with a hash derived from the file path (to allow concurrent usage on things like CI machines) and a version number. When the version number changes, all state is cleared and a clean build is started. This allows the build infrastructure to be changed and signal to CI systems that old artifacts need to be deleted.
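To make the naming scheme above concrete, here is an illustrative (not prescriptive) look at what ends up on the Docker daemon after a build; the short hash and the `3-v1.6.3-2`-style version suffix below are invented for the example and will differ per checkout:

```
# Illustrative names only; the hash and version suffix vary per checkout.
docker ps -a | grep kube-
# kube-build-data-1a2b3c4d5e-3-v1.6.3-2   <- data container (persists across builds)
# kube-rsync-1a2b3c4d5e-3-v1.6.3-2        <- rsync container (deleted after each use)
# kube-build-1a2b3c4d5e-3-v1.6.3-2        <- build container (deleted after each use)
```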
 ## Proxy Settings
-
-If you are behind a proxy, you need to export proxy settings for kubernetes build, the following environment variables should be defined.
+If you are behind a proxy and you are letting these scripts use `docker-machine` to set up your local VM for you on macOS, you need to export proxy settings for the Kubernetes build; the following environment variables should be defined:
 
 ```
 export KUBERNETES_HTTP_PROXY=http://username:password@proxyaddr:proxyport
@@ -91,13 +60,53 @@ export KUBERNETES_NO_PROXY=127.0.0.1
 
 If you are using sudo to make kubernetes build for example make quick-release, you need run `sudo -E make quick-release` to pass the environment variables.
 
-## TODOs
-
-These are in no particular order
-
-* [X] Harmonize with scripts in `hack/`. How much do we support building outside of Docker and these scripts?
-* [X] Deprecate/replace most of the stuff in the hack/
-* [ ] Finish support for the Dockerized runtime. Issue [#19](http://issue.k8s.io/19). A key issue here is to make this fast/light enough that we can use it for development workflows.
+## Really Remote Docker Engine
+
+It is possible to use a Docker Engine that is running remotely (under your desk or in the cloud). Docker must be configured to connect to that machine, and the local rsync port must be forwarded (via SSH or nc) from localhost to the remote machine.
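Before the GCE walk-through below, a hedged sketch of the general shape, assuming a plain Linux host you can already SSH to that runs Docker (`my-build-box` is a placeholder, and the exact `DOCKER_HOST` setting depends on how the remote daemon is exposed):

```
# Point the local docker client at the remote engine (details vary by setup).
export DOCKER_HOST=tcp://my-build-box:2376

# Pin the host-side rsync port and forward it over a plain SSH tunnel.
export KUBE_RSYNC_PORT=8730
ssh -L 8730:localhost:8730 -N my-build-box &
```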
+To do this easily with GCE and `docker-machine`, do something like this:
+```
+# Create the remote docker machine on GCE. This is a pretty beefy machine with SSD disk.
+KUBE_BUILD_VM=k8s-build
+KUBE_BUILD_GCE_PROJECT=
+docker-machine create \
+  --driver=google \
+  --google-project=${KUBE_BUILD_GCE_PROJECT} \
+  --google-zone=us-west1-a \
+  --google-machine-type=n1-standard-8 \
+  --google-disk-size=50 \
+  --google-disk-type=pd-ssd \
+  ${KUBE_BUILD_VM}
+
+# Set up local docker to talk to that machine
+eval $(docker-machine env ${KUBE_BUILD_VM})
+
+# Pin down the port that rsync will be exposed on on the remote machine
+export KUBE_RSYNC_PORT=8730
+
+# forward local 8730 to that machine so that rsync works
+docker-machine ssh ${KUBE_BUILD_VM} -L ${KUBE_RSYNC_PORT}:localhost:8730 -N &
+```
+
+Look at `docker-machine stop`, `docker-machine start` and `docker-machine rm` to manage this VM.
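Before kicking off a build, it can be worth sanity-checking the forwarded port. This quick probe is an editorial suggestion, not part of the build scripts:

```
# The rsync container only exists while a build is running, so a refused
# connection here can just mean the tunnel is up but nothing is listening yet.
nc -z localhost ${KUBE_RSYNC_PORT} && echo "rsync port reachable"
```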
+
+## Releasing
+
+The `build/release.sh` script will build a release. It will build binaries, run tests, and (optionally) build runtime Docker images.
+
+The main output is a tar file: `kubernetes.tar.gz`. This includes:
+* Cross compiled client utilities.
+* Script (`kubectl`) for picking and running the right client binary based on platform.
+* Examples
+* Cluster deployment scripts for various clouds
+* Tar file containing all server binaries
+* Tar file containing salt deployment tree shared across multiple cloud deployments.
+
+In addition, there are some other tar files that are created:
+* `kubernetes-client-*.tar.gz` Client binaries for a specific platform.
+* `kubernetes-server-*.tar.gz` Server binaries for a specific platform.
+* `kubernetes-salt.tar.gz` The salt script/tree shared across multiple deployment scripts.
+
+When building final release tars, they are first staged into `_output/release-stage` before being tar'd up and put into `_output/release-tars`.
 
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/build/README.md?pixel)]()
diff --git a/build/build-image/Dockerfile b/build/build-image/Dockerfile
index 9a1e233de42..62a03b00a59 100644
--- a/build/build-image/Dockerfile
+++ b/build/build-image/Dockerfile
@@ -15,6 +15,8 @@
 # This file creates a standard build environment for building Kubernetes
 FROM gcr.io/google_containers/kube-cross:KUBE_BUILD_IMAGE_CROSS_TAG
 
+ADD localtime /etc/localtime
+
 # Mark this as a kube-build container
 RUN touch /kube-build-image
 
@@ -25,19 +27,13 @@ RUN chmod -R a+rwx /usr/local/go/pkg ${K8S_PATCHED_GOROOT}/pkg
 # of operations.
 ENV HOME /go/src/k8s.io/kubernetes
 WORKDIR ${HOME}
-# We have to mkdir the dirs we need, or else Docker will create them when we
-# mount volumes, and it will create them with root-only permissions. The
-# explicit chmod of _output is required, but I can't really explain why.
-RUN mkdir -p ${HOME} ${HOME}/_output \
-    && chmod -R a+rwx ${HOME} ${HOME}/_output
-
-# Propagate the git tree version into the build image
-ADD kube-version-defs /kube-version-defs
-RUN chmod a+r /kube-version-defs
-ENV KUBE_GIT_VERSION_FILE /kube-version-defs
 
 # Make output from the dockerized build go someplace else
 ENV KUBE_OUTPUT_SUBPATH _output/dockerized
 
-# Upload Kubernetes source
-ADD kube-source.tar.gz /go/src/k8s.io/kubernetes/
+# Pick up version stuff here as we don't copy our .git over.
+ENV KUBE_GIT_VERSION_FILE ${HOME}/.dockerized-kube-version-defs
+
+ADD rsyncd.password /
+RUN chmod a+r /rsyncd.password
+ADD rsyncd.sh /
diff --git a/build/build-image/rsyncd.sh b/build/build-image/rsyncd.sh
new file mode 100755
index 00000000000..33aba4e3c02
--- /dev/null
+++ b/build/build-image/rsyncd.sh
@@ -0,0 +1,83 @@
+#!/bin/bash
+
+# Copyright 2016 The Kubernetes Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This script will set up and run rsyncd to allow data to move into and out of
+# our dockerized build system. This is used for syncing sources and changes of
+# sources into the docker-build-container. It is also used to transfer built
+# binaries and generated files back out.
+#
+# When run as root (rare) it'll preserve the file ids as sent from the client.
+# Usually it'll be run as the non-dockerized UID/GID and end up translating all
+# file ownership to that.
+
+set -o errexit
+set -o nounset
+set -o pipefail
+
+# The directory that gets sync'd
+VOLUME=${HOME}
+
+# Assume that this is running in Docker on a bridge. Allow connections from
+# anything on the local subnet.
+ALLOW=$(ip route | awk '/^default via/ { reg = "^[0-9./]+ dev "$5 } ; $0 ~ reg { print $1 }')
+
+CONFDIR="/tmp/rsync.k8s"
+PIDFILE="${CONFDIR}/rsyncd.pid"
+CONFFILE="${CONFDIR}/rsyncd.conf"
+SECRETS="${CONFDIR}/rsyncd.secrets"
+
+mkdir -p "${CONFDIR}"
+
+if [[ -f "${PIDFILE}" ]]; then
+  PID=$(cat "${PIDFILE}")
+  echo "Cleaning up old PID file: ${PIDFILE}"
+  kill $PID &> /dev/null || true
+  rm "${PIDFILE}"
+fi
+
+PASSWORD=$(</rsyncd.password)
+
+cat <<EOF >"${SECRETS}"
+k8s:${PASSWORD}
+EOF
+chmod go= "${SECRETS}"
+
+USER_CONFIG=
+if [[ "$(id -u)" == "0" ]]; then
+  USER_CONFIG="  uid = 0"$'\n'"  gid = 0"
+fi
+
+cat <<EOF >"${CONFFILE}"
+pid file = ${PIDFILE}
+use chroot = no
+log file = /dev/stdout
+reverse lookup = no
+munge symlinks = no
+port = 8730
+[k8s]
+  numeric ids = true
+  $USER_CONFIG
+  hosts deny = *
+  hosts allow = ${ALLOW}
+  auth users = k8s
+  secrets file = ${SECRETS}
+  read only = false
+  path = ${VOLUME}
+  filter = - /.make/ - /.git/ - /_tmp/
+EOF
+
+exec /usr/bin/rsync --no-detach --daemon --config="${CONFFILE}" "$@"
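For debugging, the daemon this script starts can be exercised with a plain rsync client. A minimal sketch, assuming the build has already generated a password file in the image context directory (the exact path varies with the image tag, so the glob here is illustrative):

```
# Hypothetical manual probe of the "k8s" module exported above.
export RSYNC_PASSWORD=$(cat _output/images/kube-build:*/rsyncd.password)
rsync rsync://k8s@127.0.0.1:8730/k8s/
```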
diff --git a/build/common.sh b/build/common.sh
index 633cb4c4ba4..92dca5e886c 100755
--- a/build/common.sh
+++ b/build/common.sh
@@ -20,7 +20,6 @@ set -o nounset
 set -o pipefail
 
 DOCKER_OPTS=${DOCKER_OPTS:-""}
-DOCKER_NATIVE=${DOCKER_NATIVE:-""}
 DOCKER=(docker ${DOCKER_OPTS})
 DOCKER_HOST=${DOCKER_HOST:-""}
 DOCKER_MACHINE_NAME=${DOCKER_MACHINE_NAME:-"kube-dev"}
@@ -31,18 +30,6 @@ KUBE_ROOT=$(cd $(dirname "${BASH_SOURCE}")/.. && pwd -P)
 
 source "${KUBE_ROOT}/hack/lib/init.sh"
 
-# Incoming options
-#
-readonly KUBE_SKIP_CONFIRMATIONS="${KUBE_SKIP_CONFIRMATIONS:-n}"
-readonly KUBE_GCS_UPLOAD_RELEASE="${KUBE_GCS_UPLOAD_RELEASE:-n}"
-readonly KUBE_GCS_NO_CACHING="${KUBE_GCS_NO_CACHING:-y}"
-readonly KUBE_GCS_MAKE_PUBLIC="${KUBE_GCS_MAKE_PUBLIC:-y}"
-# KUBE_GCS_RELEASE_BUCKET default: kubernetes-releases-${project_hash}
-readonly KUBE_GCS_RELEASE_PREFIX=${KUBE_GCS_RELEASE_PREFIX-devel}/
-readonly KUBE_GCS_DOCKER_REG_PREFIX=${KUBE_GCS_DOCKER_REG_PREFIX-docker-reg}/
-readonly KUBE_GCS_PUBLISH_VERSION=${KUBE_GCS_PUBLISH_VERSION:-}
-readonly KUBE_GCS_DELETE_EXISTING="${KUBE_GCS_DELETE_EXISTING:-n}"
-
 # Set KUBE_BUILD_PPC64LE to y to build for ppc64le in addition to other
 # platforms.
 # TODO(IBM): remove KUBE_BUILD_PPC64LE and reenable ppc64le compilation by
@@ -55,7 +42,15 @@ readonly KUBE_BUILD_PPC64LE="${KUBE_BUILD_PPC64LE:-n}"
 
 # Constants
 readonly KUBE_BUILD_IMAGE_REPO=kube-build
 readonly KUBE_BUILD_IMAGE_CROSS_TAG="$(cat ${KUBE_ROOT}/build/build-image/cross/VERSION)"
-# KUBE_BUILD_DATA_CONTAINER_NAME=kube-build-data-<hash>"
+
+# This version number is used to cause everyone to rebuild their data containers
+# and build image. This is especially useful for automated build systems like
+# Jenkins.
+#
+# Increment/change this number if you change the build image (anything under
+# build/build-image) or change the set of volumes in the data container.
+readonly KUBE_BUILD_IMAGE_VERSION_BASE="$(cat ${KUBE_ROOT}/build/BUILD_IMAGE_VERSION)"
+readonly KUBE_BUILD_IMAGE_VERSION="${KUBE_BUILD_IMAGE_VERSION_BASE}-${KUBE_BUILD_IMAGE_CROSS_TAG}"
 
 # Here we map the output directories across both the local and remote _output
 # directories:
@@ -76,22 +71,19 @@ readonly LOCAL_OUTPUT_IMAGE_STAGING="${LOCAL_OUTPUT_ROOT}/images"
 
 # This is a symlink to binaries for "this platform" (e.g. build tools).
 readonly THIS_PLATFORM_BIN="${LOCAL_OUTPUT_ROOT}/bin"
 
-readonly REMOTE_OUTPUT_ROOT="/go/src/${KUBE_GO_PACKAGE}/_output"
+readonly REMOTE_ROOT="/go/src/${KUBE_GO_PACKAGE}"
+readonly REMOTE_OUTPUT_ROOT="${REMOTE_ROOT}/_output"
 readonly REMOTE_OUTPUT_SUBPATH="${REMOTE_OUTPUT_ROOT}/dockerized"
 readonly REMOTE_OUTPUT_BINPATH="${REMOTE_OUTPUT_SUBPATH}/bin"
 readonly REMOTE_OUTPUT_GOPATH="${REMOTE_OUTPUT_SUBPATH}/go"
 
-readonly DOCKER_MOUNT_ARGS_BASE=(
-  # where the container build will drop output
-  --volume "${LOCAL_OUTPUT_BINPATH}:${REMOTE_OUTPUT_BINPATH}"
-  # timezone
-  --volume /etc/localtime:/etc/localtime:ro
-)
+# This is the port on the workstation host to expose RSYNC on. Set this if you
+# are doing something fancy with ssh tunneling.
+readonly KUBE_RSYNC_PORT="${KUBE_RSYNC_PORT:-}"
 
-# This is where the final release artifacts are created locally
-readonly RELEASE_STAGE="${LOCAL_OUTPUT_ROOT}/release-stage"
-readonly RELEASE_DIR="${LOCAL_OUTPUT_ROOT}/release-tars"
-readonly GCS_STAGE="${LOCAL_OUTPUT_ROOT}/gcs-stage"
+# This is the port that rsync is running on *inside* the container. This may be
+# mapped to KUBE_RSYNC_PORT via docker networking.
+readonly KUBE_CONTAINER_RSYNC_PORT=8730
 
 # Get the set of master binaries that run in Docker (on Linux)
 # Entry format is "<name-of-binary>,<base-image>".
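An editorial note on the two rsync port constants above, since they are easy to conflate: `KUBE_RSYNC_PORT` is the optional host-side binding that Docker publishes, while `KUBE_CONTAINER_RSYNC_PORT` (8730) is fixed inside the container. A hedged usage sketch:

```
# Pin the host-side port, e.g. so an SSH tunnel has a stable target;
# inside the container rsyncd still listens on 8730 regardless.
KUBE_RSYNC_PORT=8730 build/run.sh make
```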
@@ -141,10 +133,15 @@ kube::build::get_docker_wrapped_binaries() { # # Vars set: # KUBE_ROOT_HASH +# KUBE_BUILD_IMAGE_TAG_BASE # KUBE_BUILD_IMAGE_TAG # KUBE_BUILD_IMAGE +# KUBE_BUILD_CONTAINER_NAME_BASE # KUBE_BUILD_CONTAINER_NAME -# KUBE_BUILD_DATA_CONTAINER_NAME +# KUBE_DATA_CONTAINER_NAME_BASE +# KUBE_DATA_CONTAINER_NAME +# KUBE_RSYNC_CONTAINER_NAME_BASE +# KUBE_RSYNC_CONTAINER_NAME # DOCKER_MOUNT_ARGS # LOCAL_OUTPUT_BUILD_CONTEXT function kube::build::verify_prereqs() { @@ -156,13 +153,26 @@ function kube::build::verify_prereqs() { fi kube::build::ensure_docker_daemon_connectivity || return 1 + if (( ${KUBE_VERBOSE} > 6 )); then + kube::log::status "Docker Version:" + "${DOCKER[@]}" version | kube::log::info_from_stdin + fi + KUBE_ROOT_HASH=$(kube::build::short_hash "${HOSTNAME:-}:${KUBE_ROOT}") - KUBE_BUILD_IMAGE_TAG="build-${KUBE_ROOT_HASH}" + KUBE_BUILD_IMAGE_TAG_BASE="build-${KUBE_ROOT_HASH}" + KUBE_BUILD_IMAGE_TAG="${KUBE_BUILD_IMAGE_TAG_BASE}-${KUBE_BUILD_IMAGE_VERSION}" KUBE_BUILD_IMAGE="${KUBE_BUILD_IMAGE_REPO}:${KUBE_BUILD_IMAGE_TAG}" - KUBE_BUILD_CONTAINER_NAME="kube-build-${KUBE_ROOT_HASH}" - KUBE_BUILD_DATA_CONTAINER_NAME="kube-build-data-${KUBE_ROOT_HASH}" - DOCKER_MOUNT_ARGS=("${DOCKER_MOUNT_ARGS_BASE[@]}" --volumes-from "${KUBE_BUILD_DATA_CONTAINER_NAME}") + KUBE_BUILD_CONTAINER_NAME_BASE="kube-build-${KUBE_ROOT_HASH}" + KUBE_BUILD_CONTAINER_NAME="${KUBE_BUILD_CONTAINER_NAME_BASE}-${KUBE_BUILD_IMAGE_VERSION}" + KUBE_RSYNC_CONTAINER_NAME_BASE="kube-rsync-${KUBE_ROOT_HASH}" + KUBE_RSYNC_CONTAINER_NAME="${KUBE_RSYNC_CONTAINER_NAME_BASE}-${KUBE_BUILD_IMAGE_VERSION}" + KUBE_DATA_CONTAINER_NAME_BASE="kube-build-data-${KUBE_ROOT_HASH}" + KUBE_DATA_CONTAINER_NAME="${KUBE_DATA_CONTAINER_NAME_BASE}-${KUBE_BUILD_IMAGE_VERSION}" + DOCKER_MOUNT_ARGS=(--volumes-from "${KUBE_DATA_CONTAINER_NAME}") LOCAL_OUTPUT_BUILD_CONTEXT="${LOCAL_OUTPUT_IMAGE_STAGING}/${KUBE_BUILD_IMAGE}" + + kube::version::get_version_vars + kube::version::save_version_vars "${KUBE_ROOT}/.dockerized-kube-version-defs" } # --------------------------------------------------------------------------- @@ -176,14 +186,12 @@ function kube::build::docker_available_on_osx() { fi kube::log::status "No docker host is set. Checking options for setting one..." - if [[ -z "$(which docker-machine)" && -z "$(which boot2docker)" ]]; then - kube::log::status "It looks like you're running Mac OS X, yet none of Docker for Mac, docker-machine or boot2docker are on the path." - kube::log::status "See: https://docs.docker.com/machine/ for installation instructions." + if [[ -z "$(which docker-machine)" ]]; then + kube::log::status "It looks like you're running Mac OS X, yet neither Docker for Mac nor docker-machine can be found." + kube::log::status "See: https://docs.docker.com/engine/installation/mac/ for installation instructions." return 1 elif [[ -n "$(which docker-machine)" ]]; then kube::build::prepare_docker_machine - elif [[ -n "$(which boot2docker)" ]]; then - kube::build::prepare_boot2docker fi fi } @@ -219,29 +227,6 @@ function kube::build::prepare_docker_machine() { return 0 } -function kube::build::prepare_boot2docker() { - kube::log::status "boot2docker cli has been deprecated in favor of docker-machine." - kube::log::status "See: https://github.com/boot2docker/boot2docker-cli for more details." - if [[ $(boot2docker status) != "running" ]]; then - kube::log::status "boot2docker isn't running. We'll try to start it." - boot2docker up || { - kube::log::error "Can't start boot2docker." 
- kube::log::error "You may need to 'boot2docker init' to create your VM." - return 1 - } - fi - - # Reach over and set the clock. After sleep/resume the clock will skew. - kube::log::status "Setting boot2docker clock" - boot2docker ssh sudo date -u -D "%Y%m%d%H%M.%S" --set "$(date -u +%Y%m%d%H%M.%S)" >/dev/null - - kube::log::status "Setting boot2docker env variables" - $(boot2docker shellinit) - kube::log::status "boot2docker-vm has been successfully started." - - return 0 -} - function kube::build::is_osx() { [[ "$(uname)" == "Darwin" ]] } @@ -269,25 +254,23 @@ function kube::build::ensure_docker_in_path() { function kube::build::ensure_docker_daemon_connectivity { if ! "${DOCKER[@]}" info > /dev/null 2>&1 ; then - { - echo "Can't connect to 'docker' daemon. please fix and retry." - echo - echo "Possible causes:" - echo " - On Mac OS X, DOCKER_HOST hasn't been set. You may need to: " - echo " - Create and start your VM using docker-machine or boot2docker: " - echo " - docker-machine create -d ${DOCKER_MACHINE_DRIVER} ${DOCKER_MACHINE_NAME}" - echo " - boot2docker init && boot2docker start" - echo " - Set your environment variables using: " - echo " - eval \$(docker-machine env ${DOCKER_MACHINE_NAME})" - echo " - \$(boot2docker shellinit)" - echo " - Update your Docker VM" - echo " - Error Message: 'Error response from daemon: client is newer than server (...)' " - echo " - docker-machine upgrade ${DOCKER_MACHINE_NAME}" - echo " - On Linux, user isn't in 'docker' group. Add and relogin." - echo " - Something like 'sudo usermod -a -G docker ${USER-user}'" - echo " - RHEL7 bug and workaround: https://bugzilla.redhat.com/show_bug.cgi?id=1119282#c8" - echo " - On Linux, Docker daemon hasn't been started or has crashed." - } >&2 + cat <<'EOF' >&2 +Can't connect to 'docker' daemon. please fix and retry. + +Possible causes: + - Docker Daemon not started + - Linux: confirm via your init system + - macOS w/ docker-machine: run `docker-machine ls` and `docker-machine start ` + - macOS w/ Docker for Mac: Check the menu bar and start the Docker application + - DOCKER_HOST hasn't been set of is set incorrectly + - Linux: domain socket is used, DOCKER_* should be unset. In Bash run `unset ${!DOCKER_*}` + - macOS w/ docker-machine: run `eval "$(docker-machine env )"` + - macOS w/ Docker for Mac: domain socket is used, DOCKER_* should be unset. In Bash run `unset ${!DOCKER_*}` + - Other things to check: + - Linux: User isn't in 'docker' group. Add and relogin. + - Something like 'sudo usermod -a -G docker ${USER}' + - RHEL7 bug and workaround: https://bugzilla.redhat.com/show_bug.cgi?id=1119282#c8 +EOF return 1 fi } @@ -313,57 +296,6 @@ function kube::build::ensure_tar() { fi } -function kube::build::clean_output() { - # Clean out the output directory if it exists. - if kube::build::has_docker ; then - if kube::build::build_image_built ; then - kube::log::status "Cleaning out _output/dockerized/bin/ via docker build image" - kube::build::run_build_command bash -c "rm -rf '${REMOTE_OUTPUT_BINPATH}'/*" - else - kube::log::error "Build image not built. Cannot clean via docker build image." 
- fi - - kube::log::status "Removing data container ${KUBE_BUILD_DATA_CONTAINER_NAME}" - "${DOCKER[@]}" rm -v "${KUBE_BUILD_DATA_CONTAINER_NAME}" >/dev/null 2>&1 || true - fi - - kube::log::status "Removing _output directory" - rm -rf "${LOCAL_OUTPUT_ROOT}" -} - -# Make sure the _output directory is created and mountable by docker -function kube::build::prepare_output() { - # See auto-creation of host mounts: https://github.com/docker/docker/pull/21666 - # if selinux is enabled, docker run -v /foo:/foo:Z will not autocreate the host dir - mkdir -p "${LOCAL_OUTPUT_SUBPATH}" - mkdir -p "${LOCAL_OUTPUT_BINPATH}" - # On RHEL/Fedora SELinux is enabled by default and currently breaks docker - # volume mounts. We can work around this by explicitly adding a security - # context to the _output directory. - # Details: http://www.projectatomic.io/blog/2015/06/using-volumes-with-docker-can-cause-problems-with-selinux/ - if which selinuxenabled &>/dev/null && \ - selinuxenabled && \ - which chcon >/dev/null ; then - if [[ ! $(ls -Zd "${LOCAL_OUTPUT_ROOT}") =~ svirt_sandbox_file_t ]] ; then - kube::log::status "Applying SELinux policy to '_output' directory." - if ! chcon -Rt svirt_sandbox_file_t "${LOCAL_OUTPUT_ROOT}"; then - echo " ***Failed***. This may be because you have root owned files under _output." - echo " Continuing, but this build may fail later if SELinux prevents access." - fi - fi - number=${#DOCKER_MOUNT_ARGS[@]} - for (( i=0; i /dev/null } @@ -378,9 +310,51 @@ function kube::build::docker_image_exists() { exit 2 } - # We cannot just specify the IMAGE here as `docker images` doesn't behave as - # expected. See: https://github.com/docker/docker/issues/8048 - "${DOCKER[@]}" images | grep -Eq "^(\S+/)?${1}\s+${2}\s+" + [[ $("${DOCKER[@]}" images -q "${1}:${2}") ]] +} + +# Delete all images that match a tag prefix except for the "current" version +# +# $1: The image repo/name +# $2: The tag base. We consider any image that matches $2* +# $3: The current image not to delete if provided +function kube::build::docker_delete_old_images() { + # In Docker 1.12, we can replace this with + # docker images "$1" --format "{{.Tag}}" + for tag in $("${DOCKER[@]}" images ${1} | tail -n +2 | awk '{print $2}') ; do + if [[ "${tag}" != "${2}"* ]] ; then + V=6 kube::log::status "Keeping image ${1}:${tag}" + continue + fi + + if [[ -z "${3:-}" || "${tag}" != "${3}" ]] ; then + V=2 kube::log::status "Deleting image ${1}:${tag}" + "${DOCKER[@]}" rmi "${1}:${tag}" >/dev/null + else + V=6 kube::log::status "Keeping image ${1}:${tag}" + fi + done +} + +# Stop and delete all containers that match a pattern +# +# $1: The base container prefix +# $2: The current container to keep, if provided +function kube::build::docker_delete_old_containers() { + # In Docker 1.12 we can replace this line with + # docker ps -a --format="{{.Names}}" + for container in $("${DOCKER[@]}" ps -a | tail -n +2 | awk '{print $NF}') ; do + if [[ "${container}" != "${1}"* ]] ; then + V=6 kube::log::status "Keeping container ${container}" + continue + fi + if [[ -z "${2:-}" || "${container}" != "${2}" ]] ; then + V=2 kube::log::status "Deleting container ${container}" + kube::build::destroy_container "${container}" + else + V=6 kube::log::status "Keeping container ${container}" + fi + done } # Takes $1 and computes a short has for it. 
Useful for unique tag generation @@ -409,70 +383,54 @@ function kube::build::destroy_container() { "${DOCKER[@]}" rm -f -v "$1" >/dev/null 2>&1 || true } -# Validate a ci version -# -# Globals: -# None -# Arguments: -# version -# Returns: -# If version is a valid ci version -# Sets: (e.g. for '1.2.3-alpha.4.56+abcdef12345678') -# VERSION_MAJOR (e.g. '1') -# VERSION_MINOR (e.g. '2') -# VERSION_PATCH (e.g. '3') -# VERSION_PRERELEASE (e.g. 'alpha') -# VERSION_PRERELEASE_REV (e.g. '4') -# VERSION_BUILD_INFO (e.g. '.56+abcdef12345678') -# VERSION_COMMITS (e.g. '56') -function kube::release::parse_and_validate_ci_version() { - # Accept things like "v1.2.3-alpha.4.56+abcdef12345678" or "v1.2.3-beta.4" - local -r version_regex="^v(0|[1-9][0-9]*)\\.(0|[1-9][0-9]*)\\.(0|[1-9][0-9]*)-(beta|alpha)\\.(0|[1-9][0-9]*)(\\.(0|[1-9][0-9]*)\\+[0-9a-f]{7,40})?$" - local -r version="${1-}" - [[ "${version}" =~ ${version_regex} ]] || { - kube::log::error "Invalid ci version: '${version}', must match regex ${version_regex}" - return 1 - } - VERSION_MAJOR="${BASH_REMATCH[1]}" - VERSION_MINOR="${BASH_REMATCH[2]}" - VERSION_PATCH="${BASH_REMATCH[3]}" - VERSION_PRERELEASE="${BASH_REMATCH[4]}" - VERSION_PRERELEASE_REV="${BASH_REMATCH[5]}" - VERSION_BUILD_INFO="${BASH_REMATCH[6]}" - VERSION_COMMITS="${BASH_REMATCH[7]}" -} - # --------------------------------------------------------------------------- # Building + +function kube::build::clean() { + if kube::build::has_docker ; then + kube::build::docker_delete_old_containers "${KUBE_BUILD_CONTAINER_NAME_BASE}" + kube::build::docker_delete_old_containers "${KUBE_RSYNC_CONTAINER_NAME_BASE}" + kube::build::docker_delete_old_containers "${KUBE_DATA_CONTAINER_NAME_BASE}" + kube::build::docker_delete_old_images "${KUBE_BUILD_IMAGE_REPO}" "${KUBE_BUILD_IMAGE_TAG_BASE}" + + V=2 kube::log::status "Cleaning all untagged docker images" + "${DOCKER[@]}" rmi $("${DOCKER[@]}" images -q --filter 'dangling=true') 2> /dev/null || true + fi + + kube::log::status "Removing _output directory" + rm -rf "${LOCAL_OUTPUT_ROOT}" +} + function kube::build::build_image_built() { kube::build::docker_image_exists "${KUBE_BUILD_IMAGE_REPO}" "${KUBE_BUILD_IMAGE_TAG}" } -# The set of source targets to include in the kube-build image -function kube::build::source_targets() { - local targets=( - $(find . -mindepth 1 -maxdepth 1 -not \( \ - \( -path ./_\* -o -path ./.git\* \) -prune \ - \)) - ) - echo "${targets[@]}" -} - # Set up the context directory for the kube-build image and build it. function kube::build::build_image() { - kube::build::ensure_tar + if ! 
kube::build::build_image_built; then + mkdir -p "${LOCAL_OUTPUT_BUILD_CONTEXT}" - mkdir -p "${LOCAL_OUTPUT_BUILD_CONTEXT}" - "${TAR}" czf "${LOCAL_OUTPUT_BUILD_CONTEXT}/kube-source.tar.gz" $(kube::build::source_targets) + cp /etc/localtime "${LOCAL_OUTPUT_BUILD_CONTEXT}/" - kube::version::get_version_vars - kube::version::save_version_vars "${LOCAL_OUTPUT_BUILD_CONTEXT}/kube-version-defs" + cp build/build-image/Dockerfile "${LOCAL_OUTPUT_BUILD_CONTEXT}/Dockerfile" + cp build/build-image/rsyncd.sh "${LOCAL_OUTPUT_BUILD_CONTEXT}/" + dd if=/dev/urandom bs=512 count=1 2>/dev/null | LC_ALL=C tr -dc 'A-Za-z0-9' | dd bs=32 count=1 2>/dev/null > "${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password" + chmod go= "${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password" - cp build/build-image/Dockerfile "${LOCAL_OUTPUT_BUILD_CONTEXT}/Dockerfile" - kube::build::update_dockerfile + kube::build::update_dockerfile - kube::build::docker_build "${KUBE_BUILD_IMAGE}" "${LOCAL_OUTPUT_BUILD_CONTEXT}" 'false' + kube::build::docker_build "${KUBE_BUILD_IMAGE}" "${LOCAL_OUTPUT_BUILD_CONTEXT}" 'false' + fi + + # Clean up old versions of everything + kube::build::docker_delete_old_containers "${KUBE_BUILD_CONTAINER_NAME_BASE}" "${KUBE_BUILD_CONTAINER_NAME}" + kube::build::docker_delete_old_containers "${KUBE_RSYNC_CONTAINER_NAME_BASE}" "${KUBE_RSYNC_CONTAINER_NAME}" + kube::build::docker_delete_old_containers "${KUBE_DATA_CONTAINER_NAME_BASE}" "${KUBE_DATA_CONTAINER_NAME}" + kube::build::docker_delete_old_images "${KUBE_BUILD_IMAGE_REPO}" "${KUBE_BUILD_IMAGE_TAG_BASE}" "${KUBE_BUILD_IMAGE_TAG}" + + kube::build::ensure_data_container + kube::build::sync_to_container } # Build a docker image from a Dockerfile. @@ -502,70 +460,101 @@ EOF } } -function kube::build::clean_image() { - local -r image=$1 - - kube::log::status "Deleting docker image ${image}" - "${DOCKER[@]}" rmi ${image} 2> /dev/null || true -} - -function kube::build::clean_images() { - kube::build::has_docker || return 0 - - kube::build::clean_image "${KUBE_BUILD_IMAGE}" - - kube::log::status "Cleaning all other untagged docker images" - "${DOCKER[@]}" rmi $("${DOCKER[@]}" images -q --filter 'dangling=true') 2> /dev/null || true -} - function kube::build::ensure_data_container() { # If the data container exists AND exited successfully, we can use it. # Otherwise nuke it and start over. local ret=0 local code=$(docker inspect \ -f '{{.State.ExitCode}}' \ - "${KUBE_BUILD_DATA_CONTAINER_NAME}" 2>/dev/null || ret=$?) + "${KUBE_DATA_CONTAINER_NAME}" 2>/dev/null || ret=$?) if [[ "${ret}" == 0 && "${code}" != 0 ]]; then - kube::build::destroy_container "${KUBE_BUILD_DATA_CONTAINER_NAME}" + kube::build::destroy_container "${KUBE_DATA_CONTAINER_NAME}" ret=1 fi if [[ "${ret}" != 0 ]]; then - kube::log::status "Creating data container ${KUBE_BUILD_DATA_CONTAINER_NAME}" + kube::log::status "Creating data container ${KUBE_DATA_CONTAINER_NAME}" # We have to ensure the directory exists, or else the docker run will # create it as root. mkdir -p "${LOCAL_OUTPUT_GOPATH}" # We want this to run as root to be able to chown, so non-root users can # later use the result as a data container. This run both creates the data # container and chowns the GOPATH. + # + # The data container creates volumes for all of the directories that store + # intermediates for the Go build. This enables incremental builds across + # Docker sessions. The *_cgo paths are re-compiled versions of the go std + # libraries for true static building. 
local -ra docker_cmd=( "${DOCKER[@]}" run - --name "${KUBE_BUILD_DATA_CONTAINER_NAME}" + --volume "${REMOTE_ROOT}" # white-out the whole output dir + --volume /usr/local/go/pkg/linux_386_cgo + --volume /usr/local/go/pkg/linux_amd64_cgo + --volume /usr/local/go/pkg/linux_arm_cgo + --volume /usr/local/go/pkg/linux_arm64_cgo + --volume /usr/local/go/pkg/linux_ppc64le_cgo + --volume /usr/local/go/pkg/darwin_amd64_cgo + --volume /usr/local/go/pkg/darwin_386_cgo + --volume /usr/local/go/pkg/windows_amd64_cgo + --volume /usr/local/go/pkg/windows_386_cgo + --name "${KUBE_DATA_CONTAINER_NAME}" --hostname "${HOSTNAME}" - --volume "${REMOTE_OUTPUT_ROOT}" # white-out the whole output dir - --volume "${REMOTE_OUTPUT_GOPATH}" # make a non-root owned mountpoint "${KUBE_BUILD_IMAGE}" - chown -R $(id -u).$(id -g) "${REMOTE_OUTPUT_ROOT}" + chown -R $(id -u).$(id -g) + "${REMOTE_ROOT}" + /usr/local/go/pkg/ ) "${docker_cmd[@]}" fi } # Run a command in the kube-build image. This assumes that the image has -# already been built. This will sync out all output data from the build. +# already been built. function kube::build::run_build_command() { - kube::log::status "Running build command...." - [[ $# != 0 ]] || { echo "Invalid input - please specify a command to run." >&2; return 4; } + kube::log::status "Running build command..." + kube::build::run_build_command_ex "${KUBE_BUILD_CONTAINER_NAME}" -- "$@" +} - kube::build::ensure_data_container - kube::build::prepare_output +# Run a command in the kube-build image. This assumes that the image has +# already been built. +# +# Arguments are in the form of +# -- +function kube::build::run_build_command_ex() { + [[ $# != 0 ]] || { echo "Invalid input - please specify a container name." >&2; return 4; } + local container_name="${1}" + shift local -a docker_run_opts=( - "--name=${KUBE_BUILD_CONTAINER_NAME}" + "--name=${container_name}" "--user=$(id -u):$(id -g)" "--hostname=${HOSTNAME}" "${DOCKER_MOUNT_ARGS[@]}" ) + local detach=false + + [[ $# != 0 ]] || { echo "Invalid input - please specify docker arguments followed by --." >&2; return 4; } + # Everything before "--" is an arg to docker + until [ -z "${1-}" ] ; do + if [[ "$1" == "--" ]]; then + shift + break + fi + docker_run_opts+=("$1") + if [[ "$1" == "-d" || "$1" == "--detach" ]] ; then + detach=true + fi + shift + done + + # Everything after "--" is the command to run + [[ $# != 0 ]] || { echo "Invalid input - please specify a command to run." >&2; return 4; } + local -a cmd=() + until [ -z "${1-}" ] ; do + cmd+=("$1") + shift + done + docker_run_opts+=( --env "KUBE_FASTBUILD=${KUBE_FASTBUILD:-false}" --env "KUBE_BUILDER_OS=${OSTYPE:-notdetected}" @@ -578,7 +567,7 @@ function kube::build::run_build_command() { # attach stderr/stdout but don't bother asking for a tty. 
   if [[ -t 0 ]]; then
     docker_run_opts+=(--interactive --tty)
-  else
+  elif [[ "${detach}" == false ]]; then
     docker_run_opts+=(--attach=stdout --attach=stderr)
   fi
 
@@ -586,454 +575,152 @@
     "${DOCKER[@]}" run "${docker_run_opts[@]}" "${KUBE_BUILD_IMAGE}")
 
   # Clean up container from any previous run
-  kube::build::destroy_container "${KUBE_BUILD_CONTAINER_NAME}"
-  "${docker_cmd[@]}" "$@"
-  kube::build::destroy_container "${KUBE_BUILD_CONTAINER_NAME}"
+  kube::build::destroy_container "${container_name}"
+  "${docker_cmd[@]}" "${cmd[@]}"
+  if [[ "${detach}" == false ]]; then
+    kube::build::destroy_container "${container_name}"
+  fi
 }
 
-# Test if the output directory is remote (and can only be accessed through
-# docker) or if it is "local" and we can access the output without going through
-# docker.
-function kube::build::is_output_remote() {
-  rm -f "${LOCAL_OUTPUT_SUBPATH}/test_for_remote"
-  kube::build::run_build_command touch "${REMOTE_OUTPUT_BINPATH}/test_for_remote"
+function kube::build::probe_address {
+  # Apple has an ancient version of netcat with custom timeout flags. This is
+  # the best way I (jbeda) could find to test for that.
+  local netcat
+  if nc 2>&1 | grep -e 'apple' >/dev/null ; then
+    netcat="nc -G 1"
+  else
+    netcat="nc -w 1"
+  fi
 
-  [[ ! -e "${LOCAL_OUTPUT_BINPATH}/test_for_remote" ]]
+  # Wait until rsync is up and running.
+  if ! which nc >/dev/null ; then
+    V=6 kube::log::info "netcat not installed, waiting for 1s"
+    sleep 1
+    return 0
+  fi
+
+  local tries=10
+  while (( ${tries} > 0 )) ; do
+    if ${netcat} -z "$1" "$2" 2> /dev/null ; then
+      return 0
+    fi
+    tries=$(( ${tries} - 1))
+    sleep 0.1
+  done
+
+  return 1
 }
 
-# If the Docker server is remote, copy the results back out.
+# Start up the rsync container in the background. This should be explicitly
+# stopped with kube::build::stop_rsyncd_container.
+#
+# This will set the global var KUBE_RSYNC_ADDR to the effective port that the
+# rsync daemon can be reached on.
+function kube::build::start_rsyncd_container() {
+  kube::build::stop_rsyncd_container
+  V=6 kube::log::status "Starting rsyncd container"
+  kube::build::run_build_command_ex \
+    "${KUBE_RSYNC_CONTAINER_NAME}" -p 127.0.0.1:${KUBE_RSYNC_PORT}:${KUBE_CONTAINER_RSYNC_PORT} -d \
+    -- /rsyncd.sh >/dev/null
+
+  local mapped_port
+  if ! mapped_port=$("${DOCKER[@]}" port "${KUBE_RSYNC_CONTAINER_NAME}" ${KUBE_CONTAINER_RSYNC_PORT} 2> /dev/null | cut -d: -f 2) ; then
+    kube::log::error "Could not get effective rsync port"
+    return 1
+  fi
+
+  local container_ip
+  container_ip=$("${DOCKER[@]}" inspect --format '{{ .NetworkSettings.IPAddress }}' "${KUBE_RSYNC_CONTAINER_NAME}")
+
+  # Sometimes we can reach rsync through localhost and a NAT'd port. Other
+  # times (when we are running in another docker container on the Jenkins
+  # machines) we have to talk directly to the container IP. There is no one
+  # strategy that works in all cases so we test to figure out which situation
+  # we are in.
+  if kube::build::probe_address 127.0.0.1 ${mapped_port}; then
+    KUBE_RSYNC_ADDR="127.0.0.1:${mapped_port}"
+    sleep 0.5
+    return 0
+  elif kube::build::probe_address "${container_ip}" ${KUBE_CONTAINER_RSYNC_PORT}; then
+    KUBE_RSYNC_ADDR="${container_ip}:${KUBE_CONTAINER_RSYNC_PORT}"
+    sleep 0.5
+    return 0
+  fi
+
+  kube::log::error "Could not connect to rsync container. See build/README.md for setting up a remote Docker engine."
+  return 1
+}
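For illustration, a caller is expected to bracket a transfer with these helpers and use the `KUBE_RSYNC_ADDR` they set. A minimal hypothetical sketch (this is not the filter set the real sync functions below use):

```
# Hypothetical caller: bring the daemon up, pull one tree, tear it down.
kube::build::start_rsyncd_container
rsync --archive \
  --password-file="${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password" \
  "rsync://k8s@${KUBE_RSYNC_ADDR}/k8s/_output/dockerized/bin/" \
  "${LOCAL_OUTPUT_BINPATH}/"
kube::build::stop_rsyncd_container
```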
+
+function kube::build::stop_rsyncd_container() {
+  V=6 kube::log::status "Stopping any currently running rsyncd container"
+  unset KUBE_RSYNC_ADDR
+  kube::build::destroy_container "${KUBE_RSYNC_CONTAINER_NAME}"
+}
+
+# This will launch rsyncd in a container and then sync the source tree to the
+# container over the local network.
+function kube::build::sync_to_container() {
+  kube::log::status "Syncing sources to container"
+
+  kube::build::start_rsyncd_container
+
+  local rsync_extra=""
+  if (( ${KUBE_VERBOSE} >= 6 )); then
+    rsync_extra="-iv"
+  fi
+
+  # rsync filters are a bit confusing. Here we are syncing everything except
+  # output-only directories and things that are not necessary, like the git
+  # directory. The '- /' filter prevents rsync from trying to set the
+  # uid/gid/perms on the root of the sync tree.
+  V=6 kube::log::status "Running rsync"
+  rsync ${rsync_extra} \
+    --archive \
+    --prune-empty-dirs \
+    --password-file="${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password" \
+    --filter='- /.git/' \
+    --filter='- /.make/' \
+    --filter='- /_tmp/' \
+    --filter='- /_output/' \
+    --filter='- /' \
+    "${KUBE_ROOT}/" "rsync://k8s@${KUBE_RSYNC_ADDR}/k8s/"
+
+  kube::build::stop_rsyncd_container
+}
+
+# Copy all build results back out.
 function kube::build::copy_output() {
-  if kube::build::is_output_remote; then
-    # At time of this code, docker cp does not work when copying from a volume.
-    # As a workaround, the binaries are first copied to a local filesystem,
-    # /tmp, then docker cp'd to the local binaries output directory.
-    # The fix for the volume bug has been accepted and once it's widely
-    # deployed the code below should be simplified to a simple docker cp
-    # Bug: https://github.com/docker/docker/pull/8509
-    local -a docker_run_opts=(
-      "--name=${KUBE_BUILD_CONTAINER_NAME}"
-      "--user=$(id -u):$(id -g)"
-      "${DOCKER_MOUNT_ARGS[@]}"
-      -d
-      )
+  kube::log::status "Syncing out of container"
 
-    local -ra docker_cmd=(
-      "${DOCKER[@]}" run "${docker_run_opts[@]}" "${KUBE_BUILD_IMAGE}"
-    )
+  kube::build::start_rsyncd_container
 
-    kube::log::status "Syncing back _output/dockerized/bin directory from remote Docker"
-    rm -rf "${LOCAL_OUTPUT_BINPATH}"
-    mkdir -p "${LOCAL_OUTPUT_BINPATH}"
-    rm -f "${THIS_PLATFORM_BIN}"
-    ln -s "${LOCAL_OUTPUT_BINPATH}" "${THIS_PLATFORM_BIN}"
-
-    kube::build::destroy_container "${KUBE_BUILD_CONTAINER_NAME}"
-    "${docker_cmd[@]}" bash -c "cp -r ${REMOTE_OUTPUT_BINPATH} /tmp/bin;touch /tmp/finished;rm /tmp/bin/test_for_remote;/bin/sleep 600" > /dev/null 2>&1
-
-    # Wait until binaries have finished coppying
-    count=0
-    while true;do
-      if "${DOCKER[@]}" cp "${KUBE_BUILD_CONTAINER_NAME}:/tmp/finished" "${LOCAL_OUTPUT_BINPATH}" > /dev/null 2>&1;then
-        "${DOCKER[@]}" cp "${KUBE_BUILD_CONTAINER_NAME}:/tmp/bin" "${LOCAL_OUTPUT_SUBPATH}"
-        break;
-      fi
-
-      let count=count+1
-      if [[ $count -eq 60 ]]; then
-        # break after 5m
-        kube::log::error "Timed out waiting for binaries..."
-        break
-      fi
-      sleep 5
-    done
-
-    "${DOCKER[@]}" rm -f -v "${KUBE_BUILD_CONTAINER_NAME}" >/dev/null 2>&1 || true
-  else
-    kube::log::status "Output directory is local. No need to copy results out."
+  local rsync_extra=""
+  if (( ${KUBE_VERBOSE} >= 6 )); then
+    rsync_extra="-iv"
   fi
+
+  # The filter syntax for rsync is a little obscure. It filters on files and
+  # directories. If you don't go into a directory you won't find any files
+  # there. Rules are evaluated in order. The last two rules are a little
+  # magic.
'+ */' says to go in to every directory and '- /**' says to ignore + # any file or directory that isn't already specifically allowed. + # + # We are looking to copy out all of the built binaries along with various + # generated files. + V=6 kube::log::status "Running rsync" + rsync ${rsync_extra} \ + --archive \ + --prune-empty-dirs \ + --password-file="${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password" \ + --filter='- /vendor/' \ + --filter='- /_temp/' \ + --filter='+ /_output/dockerized/bin/**' \ + --filter='+ zz_generated.*' \ + --filter='+ generated.proto' \ + --filter='+ *.pb.go' \ + --filter='+ */' \ + --filter='- /**' \ + "rsync://k8s@${KUBE_RSYNC_ADDR}/k8s/" "${KUBE_ROOT}" + + kube::build::stop_rsyncd_container } - -# --------------------------------------------------------------------------- -# Build final release artifacts -function kube::release::clean_cruft() { - # Clean out cruft - find ${RELEASE_STAGE} -name '*~' -exec rm {} \; - find ${RELEASE_STAGE} -name '#*#' -exec rm {} \; - find ${RELEASE_STAGE} -name '.DS*' -exec rm {} \; -} - -function kube::release::package_hyperkube() { - # If we have these variables set then we want to build all docker images. - if [[ -n "${KUBE_DOCKER_IMAGE_TAG-}" && -n "${KUBE_DOCKER_REGISTRY-}" ]]; then - for arch in "${KUBE_SERVER_PLATFORMS[@]##*/}"; do - kube::log::status "Building hyperkube image for arch: ${arch}" - REGISTRY="${KUBE_DOCKER_REGISTRY}" VERSION="${KUBE_DOCKER_IMAGE_TAG}" ARCH="${arch}" make -C cluster/images/hyperkube/ build - done - fi -} - -function kube::release::package_tarballs() { - # Clean out any old releases - rm -rf "${RELEASE_DIR}" - mkdir -p "${RELEASE_DIR}" - kube::release::package_build_image_tarball & - kube::release::package_client_tarballs & - kube::release::package_server_tarballs & - kube::release::package_salt_tarball & - kube::release::package_kube_manifests_tarball & - kube::util::wait-for-jobs || { kube::log::error "previous tarball phase failed"; return 1; } - - kube::release::package_full_tarball & # _full depends on all the previous phases - kube::release::package_test_tarball & # _test doesn't depend on anything - kube::util::wait-for-jobs || { kube::log::error "previous tarball phase failed"; return 1; } -} - -# Package the build image we used from the previous stage, for compliance/licensing/audit/yadda. -function kube::release::package_build_image_tarball() { - kube::log::status "Building tarball: src" - "${TAR}" czf "${RELEASE_DIR}/kubernetes-src.tar.gz" -C "${LOCAL_OUTPUT_BUILD_CONTEXT}" . -} - -# Package up all of the cross compiled clients. Over time this should grow into -# a full SDK -function kube::release::package_client_tarballs() { - # Find all of the built client binaries - local platform platforms - platforms=($(cd "${LOCAL_OUTPUT_BINPATH}" ; echo */*)) - for platform in "${platforms[@]}"; do - local platform_tag=${platform/\//-} # Replace a "/" for a "-" - kube::log::status "Starting tarball: client $platform_tag" - - ( - local release_stage="${RELEASE_STAGE}/client/${platform_tag}/kubernetes" - rm -rf "${release_stage}" - mkdir -p "${release_stage}/client/bin" - - local client_bins=("${KUBE_CLIENT_BINARIES[@]}") - if [[ "${platform%/*}" == "windows" ]]; then - client_bins=("${KUBE_CLIENT_BINARIES_WIN[@]}") - fi - - # This fancy expression will expand to prepend a path - # (${LOCAL_OUTPUT_BINPATH}/${platform}/) to every item in the - # KUBE_CLIENT_BINARIES array. 
- cp "${client_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \ - "${release_stage}/client/bin/" - - kube::release::clean_cruft - - local package_name="${RELEASE_DIR}/kubernetes-client-${platform_tag}.tar.gz" - kube::release::create_tarball "${package_name}" "${release_stage}/.." - ) & - done - - kube::log::status "Waiting on tarballs" - kube::util::wait-for-jobs || { kube::log::error "client tarball creation failed"; exit 1; } -} - -# Package up all of the server binaries -function kube::release::package_server_tarballs() { - local platform - for platform in "${KUBE_SERVER_PLATFORMS[@]}"; do - local platform_tag=${platform/\//-} # Replace a "/" for a "-" - local arch=$(basename ${platform}) - kube::log::status "Building tarball: server $platform_tag" - - local release_stage="${RELEASE_STAGE}/server/${platform_tag}/kubernetes" - rm -rf "${release_stage}" - mkdir -p "${release_stage}/server/bin" - mkdir -p "${release_stage}/addons" - - # This fancy expression will expand to prepend a path - # (${LOCAL_OUTPUT_BINPATH}/${platform}/) to every item in the - # KUBE_SERVER_BINARIES array. - cp "${KUBE_SERVER_BINARIES[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \ - "${release_stage}/server/bin/" - - kube::release::create_docker_images_for_server "${release_stage}/server/bin" "${arch}" - - # Include the client binaries here too as they are useful debugging tools. - local client_bins=("${KUBE_CLIENT_BINARIES[@]}") - if [[ "${platform%/*}" == "windows" ]]; then - client_bins=("${KUBE_CLIENT_BINARIES_WIN[@]}") - fi - cp "${client_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \ - "${release_stage}/server/bin/" - - cp "${KUBE_ROOT}/Godeps/LICENSES" "${release_stage}/" - - cp "${RELEASE_DIR}/kubernetes-src.tar.gz" "${release_stage}/" - - kube::release::clean_cruft - - local package_name="${RELEASE_DIR}/kubernetes-server-${platform_tag}.tar.gz" - kube::release::create_tarball "${package_name}" "${release_stage}/.." - done -} - -function kube::release::md5() { - if which md5 >/dev/null 2>&1; then - md5 -q "$1" - else - md5sum "$1" | awk '{ print $1 }' - fi -} - -function kube::release::sha1() { - if which shasum >/dev/null 2>&1; then - shasum -a1 "$1" | awk '{ print $1 }' - else - sha1sum "$1" | awk '{ print $1 }' - fi -} - -# This will take binaries that run on master and creates Docker images -# that wrap the binary in them. (One docker image per binary) -# Args: -# $1 - binary_dir, the directory to save the tared images to. -# $2 - arch, architecture for which we are building docker images. 
-function kube::release::create_docker_images_for_server() { - # Create a sub-shell so that we don't pollute the outer environment - ( - local binary_dir="$1" - local arch="$2" - local binary_name - local binaries=($(kube::build::get_docker_wrapped_binaries ${arch})) - - for wrappable in "${binaries[@]}"; do - - local oldifs=$IFS - IFS="," - set $wrappable - IFS=$oldifs - - local binary_name="$1" - local base_image="$2" - - kube::log::status "Starting Docker build for image: ${binary_name}" - - ( - local md5_sum - md5_sum=$(kube::release::md5 "${binary_dir}/${binary_name}") - - local docker_build_path="${binary_dir}/${binary_name}.dockerbuild" - local docker_file_path="${docker_build_path}/Dockerfile" - local binary_file_path="${binary_dir}/${binary_name}" - - rm -rf ${docker_build_path} - mkdir -p ${docker_build_path} - ln ${binary_dir}/${binary_name} ${docker_build_path}/${binary_name} - printf " FROM ${base_image} \n ADD ${binary_name} /usr/local/bin/${binary_name}\n" > ${docker_file_path} - - if [[ ${arch} == "amd64" ]]; then - # If we are building a amd64 docker image, preserve the original image name - local docker_image_tag=gcr.io/google_containers/${binary_name}:${md5_sum} - else - # If we are building a docker image for another architecture, append the arch in the image tag - local docker_image_tag=gcr.io/google_containers/${binary_name}-${arch}:${md5_sum} - fi - - "${DOCKER[@]}" build -q -t "${docker_image_tag}" ${docker_build_path} >/dev/null - "${DOCKER[@]}" save ${docker_image_tag} > ${binary_dir}/${binary_name}.tar - echo $md5_sum > ${binary_dir}/${binary_name}.docker_tag - - rm -rf ${docker_build_path} - - # If we are building an official/alpha/beta release we want to keep docker images - # and tag them appropriately. - if [[ -n "${KUBE_DOCKER_IMAGE_TAG-}" && -n "${KUBE_DOCKER_REGISTRY-}" ]]; then - local release_docker_image_tag="${KUBE_DOCKER_REGISTRY}/${binary_name}-${arch}:${KUBE_DOCKER_IMAGE_TAG}" - kube::log::status "Tagging docker image ${docker_image_tag} as ${release_docker_image_tag}" - "${DOCKER[@]}" tag -f "${docker_image_tag}" "${release_docker_image_tag}" 2>/dev/null - fi - - kube::log::status "Deleting docker image ${docker_image_tag}" - "${DOCKER[@]}" rmi ${docker_image_tag} 2>/dev/null || true - ) & - done - - kube::util::wait-for-jobs || { kube::log::error "previous Docker build failed"; return 1; } - kube::log::status "Docker builds done" - ) - -} - -# Package up the salt configuration tree. This is an optional helper to getting -# a cluster up and running. -function kube::release::package_salt_tarball() { - kube::log::status "Building tarball: salt" - - local release_stage="${RELEASE_STAGE}/salt/kubernetes" - rm -rf "${release_stage}" - mkdir -p "${release_stage}" - - cp -R "${KUBE_ROOT}/cluster/saltbase" "${release_stage}/" - - # TODO(#3579): This is a temporary hack. It gathers up the yaml, - # yaml.in, json files in cluster/addons (minus any demos) and overlays - # them into kube-addons, where we expect them. (This pipeline is a - # fancy copy, stripping anything but the files we don't want.) - local objects - objects=$(cd "${KUBE_ROOT}/cluster/addons" && find . \( -name \*.yaml -or -name \*.yaml.in -or -name \*.json \) | grep -v demo) - tar c -C "${KUBE_ROOT}/cluster/addons" ${objects} | tar x -C "${release_stage}/saltbase/salt/kube-addons" - - kube::release::clean_cruft - - local package_name="${RELEASE_DIR}/kubernetes-salt.tar.gz" - kube::release::create_tarball "${package_name}" "${release_stage}/.." 
-} - -# This will pack kube-system manifests files for distros without using salt -# such as GCI and Ubuntu Trusty. We directly copy manifests from -# cluster/addons and cluster/saltbase/salt. The script of cluster initialization -# will remove the salt configuration and evaluate the variables in the manifests. -function kube::release::package_kube_manifests_tarball() { - kube::log::status "Building tarball: manifests" - - local release_stage="${RELEASE_STAGE}/manifests/kubernetes" - rm -rf "${release_stage}" - local dst_dir="${release_stage}/gci-trusty" - mkdir -p "${dst_dir}" - - local salt_dir="${KUBE_ROOT}/cluster/saltbase/salt" - cp "${salt_dir}/cluster-autoscaler/cluster-autoscaler.manifest" "${dst_dir}/" - cp "${salt_dir}/fluentd-es/fluentd-es.yaml" "${release_stage}/" - cp "${salt_dir}/fluentd-gcp/fluentd-gcp.yaml" "${release_stage}/" - cp "${salt_dir}/kube-registry-proxy/kube-registry-proxy.yaml" "${release_stage}/" - cp "${salt_dir}/kube-proxy/kube-proxy.manifest" "${release_stage}/" - cp "${salt_dir}/etcd/etcd.manifest" "${dst_dir}" - cp "${salt_dir}/kube-scheduler/kube-scheduler.manifest" "${dst_dir}" - cp "${salt_dir}/kube-apiserver/kube-apiserver.manifest" "${dst_dir}" - cp "${salt_dir}/kube-apiserver/abac-authz-policy.jsonl" "${dst_dir}" - cp "${salt_dir}/kube-controller-manager/kube-controller-manager.manifest" "${dst_dir}" - cp "${salt_dir}/kube-addons/kube-addon-manager.yaml" "${dst_dir}" - cp "${salt_dir}/l7-gcp/glbc.manifest" "${dst_dir}" - cp "${salt_dir}/rescheduler/rescheduler.manifest" "${dst_dir}/" - cp "${salt_dir}/e2e-image-puller/e2e-image-puller.manifest" "${dst_dir}/" - cp "${KUBE_ROOT}/cluster/gce/trusty/configure-helper.sh" "${dst_dir}/trusty-configure-helper.sh" - cp "${KUBE_ROOT}/cluster/gce/gci/configure-helper.sh" "${dst_dir}/gci-configure-helper.sh" - cp "${KUBE_ROOT}/cluster/gce/gci/health-monitor.sh" "${dst_dir}/health-monitor.sh" - cp -r "${salt_dir}/kube-admission-controls/limit-range" "${dst_dir}" - local objects - objects=$(cd "${KUBE_ROOT}/cluster/addons" && find . \( -name \*.yaml -or -name \*.yaml.in -or -name \*.json \) | grep -v demo) - tar c -C "${KUBE_ROOT}/cluster/addons" ${objects} | tar x -C "${dst_dir}" - - # This is for coreos only. ContainerVM, GCI, or Trusty does not use it. - cp -r "${KUBE_ROOT}/cluster/gce/coreos/kube-manifests"/* "${release_stage}/" - - kube::release::clean_cruft - - local package_name="${RELEASE_DIR}/kubernetes-manifests.tar.gz" - kube::release::create_tarball "${package_name}" "${release_stage}/.." -} - -# This is the stuff you need to run tests from the binary distribution. 
-function kube::release::package_test_tarball() { - kube::log::status "Building tarball: test" - - local release_stage="${RELEASE_STAGE}/test/kubernetes" - rm -rf "${release_stage}" - mkdir -p "${release_stage}" - - local platform - for platform in "${KUBE_TEST_PLATFORMS[@]}"; do - local test_bins=("${KUBE_TEST_BINARIES[@]}") - if [[ "${platform%/*}" == "windows" ]]; then - test_bins=("${KUBE_TEST_BINARIES_WIN[@]}") - fi - mkdir -p "${release_stage}/platforms/${platform}" - cp "${test_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \ - "${release_stage}/platforms/${platform}" - done - for platform in "${KUBE_TEST_SERVER_PLATFORMS[@]}"; do - mkdir -p "${release_stage}/platforms/${platform}" - cp "${KUBE_TEST_SERVER_BINARIES[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \ - "${release_stage}/platforms/${platform}" - done - - # Add the test image files - mkdir -p "${release_stage}/test/images" - cp -fR "${KUBE_ROOT}/test/images" "${release_stage}/test/" - tar c ${KUBE_TEST_PORTABLE[@]} | tar x -C ${release_stage} - - kube::release::clean_cruft - - local package_name="${RELEASE_DIR}/kubernetes-test.tar.gz" - kube::release::create_tarball "${package_name}" "${release_stage}/.." -} - -# This is all the stuff you need to run/install kubernetes. This includes: -# - precompiled binaries for client -# - Cluster spin up/down scripts and configs for various cloud providers -# - tarballs for server binary and salt configs that are ready to be uploaded -# to master by whatever means appropriate. -function kube::release::package_full_tarball() { - kube::log::status "Building tarball: full" - - local release_stage="${RELEASE_STAGE}/full/kubernetes" - rm -rf "${release_stage}" - mkdir -p "${release_stage}" - - # Copy all of the client binaries in here, but not test or server binaries. - # The server binaries are included with the server binary tarball. - local platform - for platform in "${KUBE_CLIENT_PLATFORMS[@]}"; do - local client_bins=("${KUBE_CLIENT_BINARIES[@]}") - if [[ "${platform%/*}" == "windows" ]]; then - client_bins=("${KUBE_CLIENT_BINARIES_WIN[@]}") - fi - mkdir -p "${release_stage}/platforms/${platform}" - cp "${client_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \ - "${release_stage}/platforms/${platform}" - done - - # We want everything in /cluster except saltbase. That is only needed on the - # server. 
- cp -R "${KUBE_ROOT}/cluster" "${release_stage}/" - rm -rf "${release_stage}/cluster/saltbase" - - mkdir -p "${release_stage}/server" - cp "${RELEASE_DIR}/kubernetes-salt.tar.gz" "${release_stage}/server/" - cp "${RELEASE_DIR}"/kubernetes-server-*.tar.gz "${release_stage}/server/" - cp "${RELEASE_DIR}/kubernetes-manifests.tar.gz" "${release_stage}/server/" - - mkdir -p "${release_stage}/third_party" - cp -R "${KUBE_ROOT}/third_party/htpasswd" "${release_stage}/third_party/htpasswd" - - # Include only federation/cluster, federation/manifests and federation/deploy - mkdir "${release_stage}/federation" - cp -R "${KUBE_ROOT}/federation/cluster" "${release_stage}/federation/" - cp -R "${KUBE_ROOT}/federation/manifests" "${release_stage}/federation/" - cp -R "${KUBE_ROOT}/federation/deploy" "${release_stage}/federation/" - - cp -R "${KUBE_ROOT}/examples" "${release_stage}/" - cp -R "${KUBE_ROOT}/docs" "${release_stage}/" - cp "${KUBE_ROOT}/README.md" "${release_stage}/" - cp "${KUBE_ROOT}/Godeps/LICENSES" "${release_stage}/" - cp "${KUBE_ROOT}/Vagrantfile" "${release_stage}/" - - echo "${KUBE_GIT_VERSION}" > "${release_stage}/version" - - kube::release::clean_cruft - - local package_name="${RELEASE_DIR}/kubernetes.tar.gz" - kube::release::create_tarball "${package_name}" "${release_stage}/.." -} - -# Build a release tarball. $1 is the output tar name. $2 is the base directory -# of the files to be packaged. This assumes that ${2}/kubernetes is what is -# being packaged. -function kube::release::create_tarball() { - kube::build::ensure_tar - - local tarfile=$1 - local stagingdir=$2 - - "${TAR}" czf "${tarfile}" -C "${stagingdir}" kubernetes --owner=0 --group=0 -} - -############################################################################### -# Most of the ::release:: namespace functions have been moved to -# github.com/kubernetes/release. Have a look in that repo and specifically in -# lib/releaselib.sh for ::release::-related functionality. -############################################################################### diff --git a/build/copy-output.sh b/build/copy-output.sh index 7b455f8c216..c4429cfbf04 100755 --- a/build/copy-output.sh +++ b/build/copy-output.sh @@ -14,10 +14,7 @@ # See the License for the specific language governing permissions and # limitations under the License. -# Copies any built binaries off the Docker machine. -# -# This is a no-op on Linux when the Docker daemon is local. This is only -# necessary on Mac OS X with boot2docker. +# Copies any built binaries (and other generated files) out of the Docker build contianer. set -o errexit set -o nounset set -o pipefail diff --git a/build/json-extractor.py b/build/json-extractor.py deleted file mode 100755 index dfc0422859f..00000000000 --- a/build/json-extractor.py +++ /dev/null @@ -1,70 +0,0 @@ -#!/usr/bin/env python - -# Copyright 2014 The Kubernetes Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-
-# This is a very simple utility that reads a JSON document from stdin, parses it
-# and returns the specified value. The value is described using a simple dot
-# notation. If any errors are encountered along the way, an error is output and
-# a failure value is returned.
-
-from __future__ import print_function
-
-import json
-import sys
-
-def PrintError(*err):
-  print(*err, file=sys.stderr)
-
-def main():
-  try:
-    obj = json.load(sys.stdin)
-  except Exception, e:
-    PrintError("Error loading JSON: {0}".format(str(e)))
-
-  if len(sys.argv) == 1:
-    # if we don't have a query string, return success
-    return 0
-  elif len(sys.argv) > 2:
-    PrintError("Usage: {0} <query>".format(sys.args[0]))
-    return 1
-
-  query_list = sys.argv[1].split('.')
-  for q in query_list:
-    if isinstance(obj, dict):
-      if q not in obj:
-        PrintError("Couldn't find '{0}' in dict".format(q))
-        return 1
-      obj = obj[q]
-    elif isinstance(obj, list):
-      try:
-        index = int(q)
-      except:
-        PrintError("Can't use '{0}' to index into array".format(q))
-        return 1
-      if index >= len(obj):
-        PrintError("Index ({0}) is greater than length of list ({1})".format(q, len(obj)))
-        return 1
-      obj = obj[index]
-    else:
-      PrintError("Trying to query non-queryable object: {0}".format(q))
-      return 1
-
-  if isinstance(obj, str):
-    print(obj)
-  else:
-    print(json.dumps(obj, indent=2))
-
-if __name__ == "__main__":
-  sys.exit(main())
diff --git a/build/lib/release.sh b/build/lib/release.sh
new file mode 100644
index 00000000000..fa3975562ec
--- /dev/null
+++ b/build/lib/release.sh
@@ -0,0 +1,442 @@
+#!/bin/bash
+
+# Copyright 2016 The Kubernetes Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This file creates release artifacts (tar files, container images) that are
+# ready to distribute to end users or install on a cluster.
+
+###############################################################################
+# Most of the ::release:: namespace functions have been moved to
+# github.com/kubernetes/release. Have a look in that repo and specifically in
+# lib/releaselib.sh for ::release::-related functionality.
+###############################################################################
+
+# This is where the final release artifacts are created locally
+readonly RELEASE_STAGE="${LOCAL_OUTPUT_ROOT}/release-stage"
+readonly RELEASE_DIR="${LOCAL_OUTPUT_ROOT}/release-tars"
+readonly GCS_STAGE="${LOCAL_OUTPUT_ROOT}/gcs-stage"
+
+
+# Validate a ci version
+#
+# Globals:
+#   None
+# Arguments:
+#   version
+# Returns:
+#   If version is a valid ci version
+# Sets:                    (e.g. for '1.2.3-alpha.4.56+abcdef12345678')
+#   VERSION_MAJOR          (e.g. '1')
+#   VERSION_MINOR          (e.g. '2')
+#   VERSION_PATCH          (e.g. '3')
+#   VERSION_PRERELEASE     (e.g. 'alpha')
+#   VERSION_PRERELEASE_REV (e.g. '4')
+#   VERSION_BUILD_INFO     (e.g. '.56+abcdef12345678')
+#   VERSION_COMMITS        (e.g. '56')
+function kube::release::parse_and_validate_ci_version() {
+  # Accept things like "v1.2.3-alpha.4.56+abcdef12345678" or "v1.2.3-beta.4"
+  local -r version_regex="^v(0|[1-9][0-9]*)\\.(0|[1-9][0-9]*)\\.(0|[1-9][0-9]*)-(beta|alpha)\\.(0|[1-9][0-9]*)(\\.(0|[1-9][0-9]*)\\+[0-9a-f]{7,40})?$"
+  local -r version="${1-}"
+  [[ "${version}" =~ ${version_regex} ]] || {
+    kube::log::error "Invalid ci version: '${version}', must match regex ${version_regex}"
+    return 1
+  }
+  VERSION_MAJOR="${BASH_REMATCH[1]}"
+  VERSION_MINOR="${BASH_REMATCH[2]}"
+  VERSION_PATCH="${BASH_REMATCH[3]}"
+  VERSION_PRERELEASE="${BASH_REMATCH[4]}"
+  VERSION_PRERELEASE_REV="${BASH_REMATCH[5]}"
+  VERSION_BUILD_INFO="${BASH_REMATCH[6]}"
+  VERSION_COMMITS="${BASH_REMATCH[7]}"
+}
+
+# ---------------------------------------------------------------------------
+# Build final release artifacts
+function kube::release::clean_cruft() {
+  # Clean out cruft
+  find ${RELEASE_STAGE} -name '*~' -exec rm {} \;
+  find ${RELEASE_STAGE} -name '#*#' -exec rm {} \;
+  find ${RELEASE_STAGE} -name '.DS*' -exec rm {} \;
+}
+
+function kube::release::package_hyperkube() {
+  # If we have these variables set then we want to build all docker images.
+  if [[ -n "${KUBE_DOCKER_IMAGE_TAG-}" && -n "${KUBE_DOCKER_REGISTRY-}" ]]; then
+    for arch in "${KUBE_SERVER_PLATFORMS[@]##*/}"; do
+      kube::log::status "Building hyperkube image for arch: ${arch}"
+      REGISTRY="${KUBE_DOCKER_REGISTRY}" VERSION="${KUBE_DOCKER_IMAGE_TAG}" ARCH="${arch}" make -C cluster/images/hyperkube/ build
+    done
+  fi
+}
+
+function kube::release::package_tarballs() {
+  # Clean out any old releases
+  rm -rf "${RELEASE_DIR}"
+  mkdir -p "${RELEASE_DIR}"
+  kube::release::package_build_image_tarball &
+  kube::release::package_client_tarballs &
+  kube::release::package_server_tarballs &
+  kube::release::package_salt_tarball &
+  kube::release::package_kube_manifests_tarball &
+  kube::util::wait-for-jobs || { kube::log::error "previous tarball phase failed"; return 1; }
+
+  kube::release::package_full_tarball & # _full depends on all the previous phases
+  kube::release::package_test_tarball & # _test doesn't depend on anything
+  kube::util::wait-for-jobs || { kube::log::error "previous tarball phase failed"; return 1; }
+}
+
+# Package the build image we used from the previous stage, for
+# compliance/licensing/audit purposes.
+function kube::release::package_build_image_tarball() {
+  kube::log::status "Building tarball: src"
+  "${TAR}" czf "${RELEASE_DIR}/kubernetes-src.tar.gz" -C "${LOCAL_OUTPUT_BUILD_CONTEXT}" .
+}
+
+# Package up all of the cross compiled clients. Over time this should grow into
+# a full SDK.
+function kube::release::package_client_tarballs() {
+  # Find all of the built client binaries
+  local platform platforms
+  platforms=($(cd "${LOCAL_OUTPUT_BINPATH}" ; echo */*))
+  for platform in "${platforms[@]}"; do
+    local platform_tag=${platform/\//-} # Replace "/" with "-"
+    kube::log::status "Starting tarball: client $platform_tag"
+
+    (
+      local release_stage="${RELEASE_STAGE}/client/${platform_tag}/kubernetes"
+      rm -rf "${release_stage}"
+      mkdir -p "${release_stage}/client/bin"
+
+      local client_bins=("${KUBE_CLIENT_BINARIES[@]}")
+      if [[ "${platform%/*}" == "windows" ]]; then
+        client_bins=("${KUBE_CLIENT_BINARIES_WIN[@]}")
+      fi
+
+      # This fancy expression will expand to prepend a path
+      # (${LOCAL_OUTPUT_BINPATH}/${platform}/) to every item in the
+      # KUBE_CLIENT_BINARIES array.
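+      # (Illustrative example, not executed: with client_bins=(kubectl) and
+      # platform=linux/amd64, the "${arr[@]/#/prefix}" expansion below yields
+      # something like "_output/dockerized/bin/linux/amd64/kubectl" -- the
+      # prefix is glued onto the front of each array element.)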
+ cp "${client_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \ + "${release_stage}/client/bin/" + + kube::release::clean_cruft + + local package_name="${RELEASE_DIR}/kubernetes-client-${platform_tag}.tar.gz" + kube::release::create_tarball "${package_name}" "${release_stage}/.." + ) & + done + + kube::log::status "Waiting on tarballs" + kube::util::wait-for-jobs || { kube::log::error "client tarball creation failed"; exit 1; } +} + +# Package up all of the server binaries +function kube::release::package_server_tarballs() { + local platform + for platform in "${KUBE_SERVER_PLATFORMS[@]}"; do + local platform_tag=${platform/\//-} # Replace a "/" for a "-" + local arch=$(basename ${platform}) + kube::log::status "Building tarball: server $platform_tag" + + local release_stage="${RELEASE_STAGE}/server/${platform_tag}/kubernetes" + rm -rf "${release_stage}" + mkdir -p "${release_stage}/server/bin" + mkdir -p "${release_stage}/addons" + + # This fancy expression will expand to prepend a path + # (${LOCAL_OUTPUT_BINPATH}/${platform}/) to every item in the + # KUBE_SERVER_BINARIES array. + cp "${KUBE_SERVER_BINARIES[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \ + "${release_stage}/server/bin/" + + kube::release::create_docker_images_for_server "${release_stage}/server/bin" "${arch}" + + # Include the client binaries here too as they are useful debugging tools. + local client_bins=("${KUBE_CLIENT_BINARIES[@]}") + if [[ "${platform%/*}" == "windows" ]]; then + client_bins=("${KUBE_CLIENT_BINARIES_WIN[@]}") + fi + cp "${client_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \ + "${release_stage}/server/bin/" + + cp "${KUBE_ROOT}/Godeps/LICENSES" "${release_stage}/" + + cp "${RELEASE_DIR}/kubernetes-src.tar.gz" "${release_stage}/" + + kube::release::clean_cruft + + local package_name="${RELEASE_DIR}/kubernetes-server-${platform_tag}.tar.gz" + kube::release::create_tarball "${package_name}" "${release_stage}/.." + done +} + +function kube::release::md5() { + if which md5 >/dev/null 2>&1; then + md5 -q "$1" + else + md5sum "$1" | awk '{ print $1 }' + fi +} + +function kube::release::sha1() { + if which shasum >/dev/null 2>&1; then + shasum -a1 "$1" | awk '{ print $1 }' + else + sha1sum "$1" | awk '{ print $1 }' + fi +} + +# This will take binaries that run on master and creates Docker images +# that wrap the binary in them. (One docker image per binary) +# Args: +# $1 - binary_dir, the directory to save the tared images to. +# $2 - arch, architecture for which we are building docker images. 
+function kube::release::create_docker_images_for_server() {
+  # Create a sub-shell so that we don't pollute the outer environment
+  (
+    local binary_dir="$1"
+    local arch="$2"
+    local binary_name
+    local binaries=($(kube::build::get_docker_wrapped_binaries ${arch}))
+
+    for wrappable in "${binaries[@]}"; do
+
+      local oldifs=$IFS
+      IFS=","
+      set $wrappable
+      IFS=$oldifs
+
+      local binary_name="$1"
+      local base_image="$2"
+
+      kube::log::status "Starting Docker build for image: ${binary_name}"
+
+      (
+        local md5_sum
+        md5_sum=$(kube::release::md5 "${binary_dir}/${binary_name}")
+
+        local docker_build_path="${binary_dir}/${binary_name}.dockerbuild"
+        local docker_file_path="${docker_build_path}/Dockerfile"
+        local binary_file_path="${binary_dir}/${binary_name}"
+
+        rm -rf ${docker_build_path}
+        mkdir -p ${docker_build_path}
+        ln ${binary_dir}/${binary_name} ${docker_build_path}/${binary_name}
+        printf " FROM ${base_image} \n ADD ${binary_name} /usr/local/bin/${binary_name}\n" > ${docker_file_path}
+
+        if [[ ${arch} == "amd64" ]]; then
+          # If we are building an amd64 docker image, preserve the original image name
+          local docker_image_tag=gcr.io/google_containers/${binary_name}:${md5_sum}
+        else
+          # If we are building a docker image for another architecture, append the arch to the image tag
+          local docker_image_tag=gcr.io/google_containers/${binary_name}-${arch}:${md5_sum}
+        fi
+
+        "${DOCKER[@]}" build -q -t "${docker_image_tag}" ${docker_build_path} >/dev/null
+        "${DOCKER[@]}" save ${docker_image_tag} > ${binary_dir}/${binary_name}.tar
+        echo $md5_sum > ${binary_dir}/${binary_name}.docker_tag
+
+        rm -rf ${docker_build_path}
+
+        # If we are building an official/alpha/beta release we want to keep docker images
+        # and tag them appropriately.
+        if [[ -n "${KUBE_DOCKER_IMAGE_TAG-}" && -n "${KUBE_DOCKER_REGISTRY-}" ]]; then
+          local release_docker_image_tag="${KUBE_DOCKER_REGISTRY}/${binary_name}-${arch}:${KUBE_DOCKER_IMAGE_TAG}"
+          kube::log::status "Tagging docker image ${docker_image_tag} as ${release_docker_image_tag}"
+          "${DOCKER[@]}" tag -f "${docker_image_tag}" "${release_docker_image_tag}" 2>/dev/null
+        fi
+
+        kube::log::status "Deleting docker image ${docker_image_tag}"
+        "${DOCKER[@]}" rmi ${docker_image_tag} 2>/dev/null || true
+      ) &
+    done
+
+    kube::util::wait-for-jobs || { kube::log::error "previous Docker build failed"; return 1; }
+    kube::log::status "Docker builds done"
+  )
+
+}
+
+# Package up the salt configuration tree. This is an optional helper for
+# getting a cluster up and running.
+function kube::release::package_salt_tarball() {
+  kube::log::status "Building tarball: salt"
+
+  local release_stage="${RELEASE_STAGE}/salt/kubernetes"
+  rm -rf "${release_stage}"
+  mkdir -p "${release_stage}"
+
+  cp -R "${KUBE_ROOT}/cluster/saltbase" "${release_stage}/"
+
+  # TODO(#3579): This is a temporary hack. It gathers up the yaml,
+  # yaml.in, json files in cluster/addons (minus any demos) and overlays
+  # them into kube-addons, where we expect them. (This pipeline is a
+  # fancy copy, keeping only the files we want.)
+  local objects
+  objects=$(cd "${KUBE_ROOT}/cluster/addons" && find . \( -name \*.yaml -or -name \*.yaml.in -or -name \*.json \) | grep -v demo)
+  tar c -C "${KUBE_ROOT}/cluster/addons" ${objects} | tar x -C "${release_stage}/saltbase/salt/kube-addons"
+
+  kube::release::clean_cruft
+
+  local package_name="${RELEASE_DIR}/kubernetes-salt.tar.gz"
+  kube::release::create_tarball "${package_name}" "${release_stage}/.."
+}
+
+# This will pack the kube-system manifest files for distros that do not use
+# salt, such as GCI and Ubuntu Trusty. We directly copy manifests from
+# cluster/addons and cluster/saltbase/salt. The cluster initialization script
+# will remove the salt configuration and evaluate the variables in the manifests.
+function kube::release::package_kube_manifests_tarball() {
+  kube::log::status "Building tarball: manifests"
+
+  local release_stage="${RELEASE_STAGE}/manifests/kubernetes"
+  rm -rf "${release_stage}"
+  local dst_dir="${release_stage}/gci-trusty"
+  mkdir -p "${dst_dir}"
+
+  local salt_dir="${KUBE_ROOT}/cluster/saltbase/salt"
+  cp "${salt_dir}/cluster-autoscaler/cluster-autoscaler.manifest" "${dst_dir}/"
+  cp "${salt_dir}/fluentd-es/fluentd-es.yaml" "${release_stage}/"
+  cp "${salt_dir}/fluentd-gcp/fluentd-gcp.yaml" "${release_stage}/"
+  cp "${salt_dir}/kube-registry-proxy/kube-registry-proxy.yaml" "${release_stage}/"
+  cp "${salt_dir}/kube-proxy/kube-proxy.manifest" "${release_stage}/"
+  cp "${salt_dir}/etcd/etcd.manifest" "${dst_dir}"
+  cp "${salt_dir}/kube-scheduler/kube-scheduler.manifest" "${dst_dir}"
+  cp "${salt_dir}/kube-apiserver/kube-apiserver.manifest" "${dst_dir}"
+  cp "${salt_dir}/kube-apiserver/abac-authz-policy.jsonl" "${dst_dir}"
+  cp "${salt_dir}/kube-controller-manager/kube-controller-manager.manifest" "${dst_dir}"
+  cp "${salt_dir}/kube-addons/kube-addon-manager.yaml" "${dst_dir}"
+  cp "${salt_dir}/l7-gcp/glbc.manifest" "${dst_dir}"
+  cp "${salt_dir}/rescheduler/rescheduler.manifest" "${dst_dir}/"
+  cp "${salt_dir}/e2e-image-puller/e2e-image-puller.manifest" "${dst_dir}/"
+  cp "${KUBE_ROOT}/cluster/gce/trusty/configure-helper.sh" "${dst_dir}/trusty-configure-helper.sh"
+  cp "${KUBE_ROOT}/cluster/gce/gci/configure-helper.sh" "${dst_dir}/gci-configure-helper.sh"
+  cp "${KUBE_ROOT}/cluster/gce/gci/health-monitor.sh" "${dst_dir}/health-monitor.sh"
+  cp -r "${salt_dir}/kube-admission-controls/limit-range" "${dst_dir}"
+  local objects
+  objects=$(cd "${KUBE_ROOT}/cluster/addons" && find . \( -name \*.yaml -or -name \*.yaml.in -or -name \*.json \) | grep -v demo)
+  tar c -C "${KUBE_ROOT}/cluster/addons" ${objects} | tar x -C "${dst_dir}"
+
+  # This is for CoreOS only. ContainerVM, GCI, and Trusty do not use it.
+  cp -r "${KUBE_ROOT}/cluster/gce/coreos/kube-manifests"/* "${release_stage}/"
+
+  kube::release::clean_cruft
+
+  local package_name="${RELEASE_DIR}/kubernetes-manifests.tar.gz"
+  kube::release::create_tarball "${package_name}" "${release_stage}/.."
+}
+
+# This is the stuff you need to run tests from the binary distribution.
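+# It stages the per-platform test binaries and the portable test assets listed
+# in KUBE_TEST_PORTABLE, and packages them as kubernetes-test.tar.gz (inspect
+# with e.g. `tar -tzf _output/release-tars/kubernetes-test.tar.gz`, assuming
+# the default output location).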
+function kube::release::package_test_tarball() {
+  kube::log::status "Building tarball: test"
+
+  local release_stage="${RELEASE_STAGE}/test/kubernetes"
+  rm -rf "${release_stage}"
+  mkdir -p "${release_stage}"
+
+  local platform
+  for platform in "${KUBE_TEST_PLATFORMS[@]}"; do
+    local test_bins=("${KUBE_TEST_BINARIES[@]}")
+    if [[ "${platform%/*}" == "windows" ]]; then
+      test_bins=("${KUBE_TEST_BINARIES_WIN[@]}")
+    fi
+    mkdir -p "${release_stage}/platforms/${platform}"
+    cp "${test_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
+      "${release_stage}/platforms/${platform}"
+  done
+  for platform in "${KUBE_TEST_SERVER_PLATFORMS[@]}"; do
+    mkdir -p "${release_stage}/platforms/${platform}"
+    cp "${KUBE_TEST_SERVER_BINARIES[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
+      "${release_stage}/platforms/${platform}"
+  done
+
+  # Add the test image files
+  mkdir -p "${release_stage}/test/images"
+  cp -fR "${KUBE_ROOT}/test/images" "${release_stage}/test/"
+  tar c ${KUBE_TEST_PORTABLE[@]} | tar x -C ${release_stage}
+
+  kube::release::clean_cruft
+
+  local package_name="${RELEASE_DIR}/kubernetes-test.tar.gz"
+  kube::release::create_tarball "${package_name}" "${release_stage}/.."
+}
+
+# This is all the stuff you need to run/install Kubernetes. This includes:
+# - precompiled binaries for the client
+# - cluster spin-up/down scripts and configs for various cloud providers
+# - tarballs for the server binaries and salt configs that are ready to be
+#   uploaded to the master by whatever means appropriate
+function kube::release::package_full_tarball() {
+  kube::log::status "Building tarball: full"
+
+  local release_stage="${RELEASE_STAGE}/full/kubernetes"
+  rm -rf "${release_stage}"
+  mkdir -p "${release_stage}"
+
+  # Copy all of the client binaries in here, but not the test or server
+  # binaries. The server binaries are included with the server binary tarball.
+  local platform
+  for platform in "${KUBE_CLIENT_PLATFORMS[@]}"; do
+    local client_bins=("${KUBE_CLIENT_BINARIES[@]}")
+    if [[ "${platform%/*}" == "windows" ]]; then
+      client_bins=("${KUBE_CLIENT_BINARIES_WIN[@]}")
+    fi
+    mkdir -p "${release_stage}/platforms/${platform}"
+    cp "${client_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
+      "${release_stage}/platforms/${platform}"
+  done
+
+  # We want everything in /cluster except saltbase. That is only needed on the
+  # server.
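+  # (saltbase still ships with the release: it is packaged separately into
+  # kubernetes-salt.tar.gz, which is copied into server/ just below.)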
+ cp -R "${KUBE_ROOT}/cluster" "${release_stage}/" + rm -rf "${release_stage}/cluster/saltbase" + + mkdir -p "${release_stage}/server" + cp "${RELEASE_DIR}/kubernetes-salt.tar.gz" "${release_stage}/server/" + cp "${RELEASE_DIR}"/kubernetes-server-*.tar.gz "${release_stage}/server/" + cp "${RELEASE_DIR}/kubernetes-manifests.tar.gz" "${release_stage}/server/" + + mkdir -p "${release_stage}/third_party" + cp -R "${KUBE_ROOT}/third_party/htpasswd" "${release_stage}/third_party/htpasswd" + + # Include only federation/cluster, federation/manifests and federation/deploy + mkdir "${release_stage}/federation" + cp -R "${KUBE_ROOT}/federation/cluster" "${release_stage}/federation/" + cp -R "${KUBE_ROOT}/federation/manifests" "${release_stage}/federation/" + cp -R "${KUBE_ROOT}/federation/deploy" "${release_stage}/federation/" + + cp -R "${KUBE_ROOT}/examples" "${release_stage}/" + cp -R "${KUBE_ROOT}/docs" "${release_stage}/" + cp "${KUBE_ROOT}/README.md" "${release_stage}/" + cp "${KUBE_ROOT}/Godeps/LICENSES" "${release_stage}/" + cp "${KUBE_ROOT}/Vagrantfile" "${release_stage}/" + + echo "${KUBE_GIT_VERSION}" > "${release_stage}/version" + + kube::release::clean_cruft + + local package_name="${RELEASE_DIR}/kubernetes.tar.gz" + kube::release::create_tarball "${package_name}" "${release_stage}/.." +} + +# Build a release tarball. $1 is the output tar name. $2 is the base directory +# of the files to be packaged. This assumes that ${2}/kubernetes is what is +# being packaged. +function kube::release::create_tarball() { + kube::build::ensure_tar + + local tarfile=$1 + local stagingdir=$2 + + "${TAR}" czf "${tarfile}" -C "${stagingdir}" kubernetes --owner=0 --group=0 +} diff --git a/build/make-clean.sh b/build/make-clean.sh index 9a4ad7df13d..91aaac1c1bf 100755 --- a/build/make-clean.sh +++ b/build/make-clean.sh @@ -23,5 +23,4 @@ KUBE_ROOT=$(dirname "${BASH_SOURCE}")/.. source "${KUBE_ROOT}/build/common.sh" kube::build::verify_prereqs -kube::build::clean_output -kube::build::clean_images +kube::build::clean diff --git a/build/push-federation-images.sh b/build/push-federation-images.sh index 3ebf7d4dce2..7085f6cb47a 100755 --- a/build/push-federation-images.sh +++ b/build/push-federation-images.sh @@ -23,6 +23,7 @@ set -o pipefail KUBE_ROOT=$(dirname "${BASH_SOURCE}")/.. source "${KUBE_ROOT}/build/util.sh" +source "${KUBE_ROOT}/build/lib/release.sh" source "${KUBE_ROOT}/federation/cluster/common.sh" diff --git a/build/release.sh b/build/release.sh index ac9d2c3d1d4..3d09bc8f15a 100755 --- a/build/release.sh +++ b/build/release.sh @@ -16,8 +16,10 @@ # Build a Kubernetes release. This will build the binaries, create the Docker # images and other build artifacts. -# For pushing these artifacts publicly on Google Cloud Storage, see the -# associated build/push-* scripts. +# +# For pushing these artifacts publicly to Google Cloud Storage or to a registry +# please refer to the kubernetes/release repo at +# https://github.com/kubernetes/release. set -o errexit set -o nounset @@ -25,6 +27,7 @@ set -o pipefail KUBE_ROOT=$(dirname "${BASH_SOURCE}")/.. source "${KUBE_ROOT}/build/common.sh" +source "${KUBE_ROOT}/build/lib/release.sh" KUBE_RELEASE_RUN_TESTS=${KUBE_RELEASE_RUN_TESTS-y} diff --git a/build/shell.sh b/build/shell.sh index d4c2a7452c7..6f546b662de 100755 --- a/build/shell.sh +++ b/build/shell.sh @@ -24,6 +24,7 @@ set -o pipefail KUBE_ROOT=$(dirname "${BASH_SOURCE}")/.. 
source "${KUBE_ROOT}/build/common.sh" +source "${KUBE_ROOT}/build/lib/release.sh" kube::build::verify_prereqs kube::build::build_image diff --git a/cluster/mesos/docker/config-default.sh b/cluster/mesos/docker/config-default.sh index 5dcfa2feb7c..54a3cf19675 100755 --- a/cluster/mesos/docker/config-default.sh +++ b/cluster/mesos/docker/config-default.sh @@ -56,7 +56,7 @@ MESOS_DOCKER_ADDON_TIMEOUT="${MESOS_DOCKER_ADDON_TIMEOUT:-180}" # ${MESOS_DOCKER_WORK_DIR}/log - storage of component logs (written on deploy failure) # ${MESOS_DOCKER_WORK_DIR}/auth - storage of SSL certs/keys/tokens # ${MESOS_DOCKER_WORK_DIR}//mesos - storage of mesos slave work (e.g. task logs) -# If using docker-machine or boot2docker, should be under /Users (which is mounted from the host into the docker vm). +# If using docker-machine or Docker for Mac, should be under /Users (which is mounted from the host into the docker vm). # If running in a container, $HOME should be resolved outside of the container. MESOS_DOCKER_WORK_DIR="${MESOS_DOCKER_WORK_DIR:-${HOME}/tmp/kubernetes}" diff --git a/cluster/mesos/docker/util.sh b/cluster/mesos/docker/util.sh index 709b581a005..22b819642be 100644 --- a/cluster/mesos/docker/util.sh +++ b/cluster/mesos/docker/util.sh @@ -72,7 +72,7 @@ function cluster::mesos::docker::docker_compose_lazy_pull { } # Run kubernetes scripts inside docker. -# This bypasses the need to set up network routing when running docker in a VM (e.g. boot2docker). +# This bypasses the need to set up network routing when running docker in a VM (e.g. docker-machine). # Trap signals and kills the docker container for better signal handing function cluster::mesos::docker::run_in_docker_test { local entrypoint="$1" diff --git a/cmd/libs/go2idl/go-to-protobuf/build-image/Dockerfile b/cmd/libs/go2idl/go-to-protobuf/build-image/Dockerfile deleted file mode 100644 index ae7c6fc43da..00000000000 --- a/cmd/libs/go2idl/go-to-protobuf/build-image/Dockerfile +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright 2016 The Kubernetes Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -# This file creates a standard build environment for building Kubernetes -FROM gcr.io/google_containers/kube-cross:KUBE_BUILD_IMAGE_CROSS_TAG - -# Mark this as a kube-build container -RUN touch /kube-build-image - -WORKDIR /go/src/k8s.io/kubernetes - -# Install goimports tool -RUN go get golang.org/x/tools/cmd/goimports diff --git a/docs/design/clustering/Makefile b/docs/design/clustering/Makefile index d56401645dc..e72d441e286 100644 --- a/docs/design/clustering/Makefile +++ b/docs/design/clustering/Makefile @@ -39,7 +39,3 @@ docker: docker-clean: docker rmi clustering-seqdiag || true docker images -q --filter "dangling=true" | xargs docker rmi - -.PHONY: fix-clock-skew -fix-clock-skew: - boot2docker ssh sudo date -u -D "%Y%m%d%H%M.%S" --set "$(shell date -u +%Y%m%d%H%M.%S)" diff --git a/docs/design/clustering/README.md b/docs/design/clustering/README.md index 014b96c2ab4..d662b952205 100644 --- a/docs/design/clustering/README.md +++ b/docs/design/clustering/README.md @@ -56,10 +56,6 @@ The first run will be slow but things should be fast after that. To clean up the docker containers that are created (and other cruft that is left around) you can run `make docker-clean`. -If you are using boot2docker and get warnings about clock skew (or if things -aren't building for some reason) then you can fix that up with -`make fix-clock-skew`. - ## Automatically rebuild on file changes If you have the fswatch utility installed, you can have it monitor the file diff --git a/hack/jenkins/build.sh b/hack/jenkins/build.sh index 506ce668e4a..103dd0e8238 100755 --- a/hack/jenkins/build.sh +++ b/hack/jenkins/build.sh @@ -31,7 +31,6 @@ set -o xtrace # space. export HOME=${WORKSPACE} # Nothing should want Jenkins $HOME export PATH=$PATH:/usr/local/go/bin -export KUBE_SKIP_CONFIRMATIONS=y # Skip gcloud update checking export CLOUDSDK_COMPONENT_MANAGER_DISABLE_UPDATE_CHECK=true diff --git a/hack/lib/golang.sh b/hack/lib/golang.sh index c0af315a8c4..b15a170dcf7 100755 --- a/hack/lib/golang.sh +++ b/hack/lib/golang.sh @@ -144,12 +144,6 @@ readonly KUBE_TEST_SERVER_PLATFORMS=("${KUBE_SERVER_PLATFORMS[@]}") # Gigabytes desired for parallel platform builds. 11 is fairly # arbitrary, but is a reasonable splitting point for 2015 # laptops-versus-not. -# -# If you are using boot2docker, the following seems to work (note -# that 12000 rounds to 11G): -# boot2docker down -# VBoxManage modifyvm boot2docker-vm --memory 12000 -# boot2docker up readonly KUBE_PARALLEL_BUILD_MEMORY=11 readonly KUBE_ALL_TARGETS=( @@ -553,7 +547,7 @@ kube::golang::build_binaries_for_platform() { "${testpkg}" mkdir -p "$(dirname ${outfile})" - go test -c \ + go test -i -c \ "${goflags[@]:+${goflags[@]}}" \ -gcflags "${gogcflags}" \ -ldflags "${goldflags}" \ diff --git a/hack/local-up-cluster.sh b/hack/local-up-cluster.sh index caddd4f50c0..398811a4f56 100755 --- a/hack/local-up-cluster.sh +++ b/hack/local-up-cluster.sh @@ -18,7 +18,6 @@ # local-up.sh, but this one launches the three separate binaries. # You may need to run this as root to allow kubelet to open docker's socket. 
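+# DOCKER_OPTS is appended to every docker invocation below via the DOCKER
+# array, e.g. to point the scripts at a non-default Docker daemon.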
 DOCKER_OPTS=${DOCKER_OPTS:-""}
-DOCKER_NATIVE=${DOCKER_NATIVE:-""}
 DOCKER=(docker ${DOCKER_OPTS})
 DOCKERIZE_KUBELET=${DOCKERIZE_KUBELET:-""}
 ALLOW_PRIVILEGED=${ALLOW_PRIVILEGED:-""}
diff --git a/hack/update-bindata.sh b/hack/update-bindata.sh
index 0adae8867c3..d5c87a22add 100755
--- a/hack/update-bindata.sh
+++ b/hack/update-bindata.sh
@@ -19,8 +19,7 @@ set -o pipefail
 set -o nounset
 
 if [[ -z "${KUBE_ROOT:-}" ]]; then
-  # Relative to test/e2e/generated/
-  KUBE_ROOT="../../../"
+  KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
 fi
 
 source "${KUBE_ROOT}/cluster/lib/logging.sh"
@@ -37,7 +36,7 @@ if ! which go-bindata &>/dev/null ; then
 fi
 
 BINDATA_OUTPUT="${KUBE_ROOT}/test/e2e/generated/bindata.go"
-go-bindata -nometadata -prefix "${KUBE_ROOT}" -o ${BINDATA_OUTPUT} -pkg generated \
+go-bindata -nometadata -prefix "${KUBE_ROOT}" -o "${BINDATA_OUTPUT}.tmp" -pkg generated \
   -ignore .jpg -ignore .png -ignore .md \
   "${KUBE_ROOT}/examples/..." \
   "${KUBE_ROOT}/docs/user-guide/..." \
@@ -45,6 +44,16 @@ go-bindata -nometadata -prefix "${KUBE_ROOT}" -o ${BINDATA_OUTPUT} -pkg generate
   "${KUBE_ROOT}/test/images/..." \
   "${KUBE_ROOT}/test/fixtures/..."
 
-gofmt -s -w ${BINDATA_OUTPUT}
+gofmt -s -w "${BINDATA_OUTPUT}.tmp"
 
-V=2 kube::log::info "Generated bindata file : $(wc -l ${BINDATA_OUTPUT}) lines of lovely automated artifacts"
+# Here we compare and overwrite only if different to avoid updating the
+# timestamp and triggering a rebuild. The 'cat' redirect trick preserves the
+# file permissions of the target file.
+if ! cmp -s "${BINDATA_OUTPUT}.tmp" "${BINDATA_OUTPUT}" ; then
+  cat "${BINDATA_OUTPUT}.tmp" > "${BINDATA_OUTPUT}"
+  V=2 kube::log::info "Generated bindata file : ${BINDATA_OUTPUT} has $(wc -l ${BINDATA_OUTPUT}) lines of lovely automated artifacts"
+else
+  V=2 kube::log::info "No changes in generated bindata file: ${BINDATA_OUTPUT}"
+fi
+
+rm -f "${BINDATA_OUTPUT}.tmp"
diff --git a/hack/update-generated-protobuf.sh b/hack/update-generated-protobuf.sh
index 01080bb376a..f2cf01c0f0d 100755
--- a/hack/update-generated-protobuf.sh
+++ b/hack/update-generated-protobuf.sh
@@ -19,36 +19,11 @@ set -o nounset
 set -o pipefail
 
 KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
-source "${KUBE_ROOT}/build/common.sh"
-kube::golang::setup_env
+# NOTE: All output from this script needs to be copied back to the calling
+# source tree. This is managed in kube::build::copy_output in build/common.sh.
+# If the output set is changed, update that function.
 
-function prereqs() {
-  kube::log::status "Verifying Prerequisites...."
-  kube::build::ensure_docker_in_path || return 1
-  if kube::build::is_osx; then
-    kube::build::docker_available_on_osx || return 1
-  fi
-  kube::build::ensure_docker_daemon_connectivity || return 1
-
-  KUBE_ROOT_HASH=$(kube::build::short_hash "${HOSTNAME:-}:${REPO_DIR:-${KUBE_ROOT}}")
-  KUBE_BUILD_IMAGE_TAG="build-${KUBE_ROOT_HASH}"
-  KUBE_BUILD_IMAGE="${KUBE_BUILD_IMAGE_REPO}:${KUBE_BUILD_IMAGE_TAG}"
-  KUBE_BUILD_CONTAINER_NAME="kube-build-${KUBE_ROOT_HASH}"
-  KUBE_BUILD_DATA_CONTAINER_NAME="kube-build-data-${KUBE_ROOT_HASH}"
-  DOCKER_MOUNT_ARGS=(
-    --volume "${REPO_DIR:-${KUBE_ROOT}}:/go/src/${KUBE_GO_PACKAGE}"
-    --volume /etc/localtime:/etc/localtime:ro
-    --volumes-from "${KUBE_BUILD_DATA_CONTAINER_NAME}"
-  )
-  LOCAL_OUTPUT_BUILD_CONTEXT="${LOCAL_OUTPUT_IMAGE_STAGING}/${KUBE_BUILD_IMAGE}"
-}
-
-prereqs
-mkdir -p "${LOCAL_OUTPUT_BUILD_CONTEXT}"
-cp "${KUBE_ROOT}/cmd/libs/go2idl/go-to-protobuf/build-image/Dockerfile" "${LOCAL_OUTPUT_BUILD_CONTEXT}/Dockerfile"
-kube::build::update_dockerfile
-kube::build::docker_build "${KUBE_BUILD_IMAGE}" "${LOCAL_OUTPUT_BUILD_CONTEXT}" 'false'
-kube::build::run_build_command hack/update-generated-protobuf-dockerized.sh "$@"
+"${KUBE_ROOT}/build/run.sh" hack/update-generated-protobuf-dockerized.sh "$@"
 
 # ex: ts=2 sw=2 et filetype=sh
diff --git a/hack/update-generated-runtime.sh b/hack/update-generated-runtime.sh
index 4360cb81643..c347c01a69f 100755
--- a/hack/update-generated-runtime.sh
+++ b/hack/update-generated-runtime.sh
@@ -19,35 +19,11 @@ set -o nounset
 set -o pipefail
 
 KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..
-source "${KUBE_ROOT}/build/common.sh"
-kube::golang::setup_env
+# NOTE: All output from this script needs to be copied back to the calling
+# source tree. This is managed in kube::build::copy_output in build/common.sh.
+# If the output set is changed, update that function.
 
-function prereqs() {
-  kube::log::status "Verifying Prerequisites...."
-  kube::build::ensure_docker_in_path || return 1
-  if kube::build::is_osx; then
-    kube::build::docker_available_on_osx || return 1
-  fi
-  kube::build::ensure_docker_daemon_connectivity || return 1
-
-  KUBE_ROOT_HASH=$(kube::build::short_hash "${HOSTNAME:-}:${REPO_DIR:-${KUBE_ROOT}}/go-to-protobuf")
-  KUBE_BUILD_IMAGE_TAG="build-${KUBE_ROOT_HASH}"
-  KUBE_BUILD_IMAGE="${KUBE_BUILD_IMAGE_REPO}:${KUBE_BUILD_IMAGE_TAG}"
-  KUBE_BUILD_CONTAINER_NAME="kube-build-${KUBE_ROOT_HASH}"
-  KUBE_BUILD_DATA_CONTAINER_NAME="kube-build-data-${KUBE_ROOT_HASH}"
-  DOCKER_MOUNT_ARGS=(
-    --volume "${REPO_DIR:-${KUBE_ROOT}}:/go/src/${KUBE_GO_PACKAGE}"
-    --volume /etc/localtime:/etc/localtime:ro
-    --volumes-from "${KUBE_BUILD_DATA_CONTAINER_NAME}"
-  )
-  LOCAL_OUTPUT_BUILD_CONTEXT="${LOCAL_OUTPUT_IMAGE_STAGING}/${KUBE_BUILD_IMAGE}"
-}
-
-prereqs
-mkdir -p "${LOCAL_OUTPUT_BUILD_CONTEXT}"
-cp "${KUBE_ROOT}/cmd/libs/go2idl/go-to-protobuf/build-image/Dockerfile" "${LOCAL_OUTPUT_BUILD_CONTEXT}/Dockerfile"
-kube::build::update_dockerfile
-kube::build::docker_build "${KUBE_BUILD_IMAGE}" "${LOCAL_OUTPUT_BUILD_CONTEXT}" 'false'
-kube::build::run_build_command hack/update-generated-runtime-dockerized.sh "$@"
+"${KUBE_ROOT}/build/run.sh" hack/update-generated-runtime-dockerized.sh "$@"
+
+# ex: ts=2 sw=2 et filetype=sh
diff --git a/hack/verify-generated-protobuf.sh b/hack/verify-generated-protobuf.sh
index 07291520a33..e1192d77e8c 100755
--- a/hack/verify-generated-protobuf.sh
+++ b/hack/verify-generated-protobuf.sh
@@ -34,8 +34,8 @@ trap "cleanup" EXIT SIGINT
 cleanup
 
 for APIROOT in ${APIROOTS}; do
-  mkdir -p "${_tmp}/${APIROOT%/*}"
-  cp -a "${KUBE_ROOT}/${APIROOT}" "${_tmp}/${APIROOT}"
+  mkdir -p "${_tmp}/${APIROOT}"
+  cp -a -T "${KUBE_ROOT}/${APIROOT}" "${_tmp}/${APIROOT}"
 done
 
 "${KUBE_ROOT}/hack/update-generated-protobuf.sh"
@@ -44,7 +44,7 @@ for APIROOT in ${APIROOTS}; do
   echo "diffing ${APIROOT} against freshly generated protobuf"
   ret=0
   diff -Naupr -I 'Auto generated by' -x 'zz_generated.*' "${KUBE_ROOT}/${APIROOT}" "${TMP_APIROOT}" || ret=$?
-  cp -a "${TMP_APIROOT}" "${KUBE_ROOT}/${APIROOT%/*}"
+  cp -a -T "${TMP_APIROOT}" "${KUBE_ROOT}/${APIROOT}"
   if [[ $ret -eq 0 ]]; then
     echo "${APIROOT} up to date."
   else
diff --git a/hack/verify-generated-swagger-docs.sh b/hack/verify-generated-swagger-docs.sh
index 0212689e64d..9cd60c1808c 100755
--- a/hack/verify-generated-swagger-docs.sh
+++ b/hack/verify-generated-swagger-docs.sh
@@ -42,15 +42,21 @@ DIFFROOT="${KUBE_ROOT}/pkg"
 TMP_DIFFROOT="${KUBE_ROOT}/_tmp/pkg"
 _tmp="${KUBE_ROOT}/_tmp"
 
+cleanup() {
+  rm -rf "${_tmp}"
+}
+trap "cleanup" EXIT SIGINT
+
+cleanup
+
 mkdir -p "${_tmp}"
-trap "rm -rf ${_tmp}" EXIT SIGINT
-cp -a "${DIFFROOT}" "${TMP_DIFFROOT}"
+cp -a -T "${DIFFROOT}" "${TMP_DIFFROOT}"
 
 "${KUBE_ROOT}/hack/update-generated-swagger-docs.sh"
 echo "diffing ${DIFFROOT} against freshly generated swagger type documentation"
 ret=0
 diff -Naupr -I 'Auto generated by' "${DIFFROOT}" "${TMP_DIFFROOT}" || ret=$?
-cp -a "${TMP_DIFFROOT}" "${KUBE_ROOT}/"
+cp -a -T "${TMP_DIFFROOT}" "${DIFFROOT}"
 if [[ $ret -eq 0 ]]
 then
   echo "${DIFFROOT} up to date."